How Does AI Affect A New Developer's Ability to Learn?
Hot takes versus real research
There are many hot takes on how using AI and LLMs affects your ability to learn programming. Some, like Kyle Gawley, imply that using AI to write code is cheating and that you won't learn anything. Others say it is a great way to learn and that it can help you become a better programmer.
Is Hands-On Coding Critical for Learning?
I mostly disagree with the "cheating" take, but it's probably true that a complete beginner needs to type out lines of code, or they won't learn the fundamentals. I was taught to literally type out code from a textbook in order to learn. That approach certainly has benefits, especially before smart code completion was available: being able to type basic programmatic structures and functions from muscle memory, with good syntax, was incredibly useful. In my career, I got a lot of value just from being able to type repetitive things like <!DOCTYPE html> without thinking about it. I do think it's helpful to have the basics down to that level.
How ChatGPT Helped Me Evolve as a Programmer
On the flip side, as an experienced developer, I've improved my skills and knowledge significantly and at a faster pace than ever by working with ChatGPT and learning new techniques and design patterns through it. Previously, I might have written barely functional code and shrugged it off. Now, I can iterate with ChatGPT to come up with good designs that incorporate best practices, and I've produced the best work of my career in no small part thanks to ChatGPT.
Make ChatGPT Your Teammate, Not Your Oracle
The thing is, the iteration is key. Utley and Gohar of Stanford's d.school researched how teams integrate AI into existing workflows and found something fascinating: the most successful teams were those that iterated with AI, treating it like a teammate, not those that simply used it as an "oracle" to generate a solution. In other words, the best way to get better results with AI is to treat it as a tool that helps you learn: a way to bounce ideas off something that can rephrase your thoughts in new ways, and a way to dig deeper and further your understanding, not a crutch that does the work for you.
There is a huge difference between someone who lazily does the minimum possible to have AI write some code without a thought to how it works, and someone who has an idea and iterates multiple times, asking themselves (and the AI) questions like: "How does this work? How could this be better? What are the trade-offs? What are the implications of this design? What are the edge cases? How can this be optimized?"
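To make that concrete, here's a made-up sketch of what that iteration can look like (the scenario and function names are my own invention, not from any study): a first-pass fetch helper, and the version that emerges after asking the AI "What are the edge cases? What are the trade-offs?"

```typescript
// First pass: "write me a function that retries a failed fetch"
async function fetchWithRetry(url: string): Promise<Response> {
  try {
    return await fetch(url);
  } catch {
    return await fetch(url); // retry once and hope for the best
  }
}

// After iterating ("What are the edge cases? What are the trade-offs?"):
// bounded attempts, exponential backoff, and non-2xx responses treated as failures.
async function fetchWithRetryV2(
  url: string,
  maxAttempts = 3,
  baseDelayMs = 200,
): Promise<Response> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const response = await fetch(url);
      if (response.ok) return response;
      // Edge case the first draft missed: HTTP errors don't throw.
      if (attempt === maxAttempts) return response;
    } catch (err) {
      // Edge case: a network failure on the final attempt should surface.
      if (attempt === maxAttempts) throw err;
    }
    // Trade-off: backoff adds latency but avoids hammering a struggling server.
    await new Promise((resolve) =>
      setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)),
    );
  }
  throw new Error("unreachable");
}
```

Neither version is "the right answer"; the point is that the second one only exists because someone kept asking questions instead of accepting the first output.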
If you are fortunate enough to work with a team of experienced developers, you can ask others such questions; therein lies much of the power of pair programming or mob programming. If you are working alone or with less experienced developers, you might not have that luxury. In that case, AI can be a great tool for learning and improving your skills. It's an excellent second set of eyes, checking your work and catching mistakes that might otherwise cost you hours when they surface as some weird edge-case bug, like the one sketched below.
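As one illustrative example of that kind of bug (my own hypothetical scenario, though the language behavior itself is real): JavaScript's default sort compares elements as strings, which can look fine in a quick manual test and then quietly corrupt results in production.

```typescript
const scores = [5, 9, 12, 3];

// The default comparator converts elements to strings, so "12" < "3":
scores.sort();
console.log(scores); // [12, 3, 5, 9] (silently wrong)

// The fix is a numeric comparator, which an AI reviewer will
// typically flag immediately if asked to check the code:
scores.sort((a, b) => a - b);
console.log(scores); // [3, 5, 9, 12]
```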
Opening Up Coding to the Masses
The barrier to entry for beginners is now lower. This could be the difference between someone building something interesting and useful and someone giving up on programming at the outset because they can't get basic things to work. Maybe people will be more likely to persist until they learn and figure things out. Maybe that's the difference between an okay programmer who relies heavily on AI and someone who tried for ten minutes and swore off coding forever.
When I started doing web development in 1999, I met people who said things like, "I took a programming class in college and hated it." Maybe people like that would stick with it a lot longer if using AI meant it wasn't such an epic struggle just to get started. Perhaps many such people have more to offer the world than merely the tenacity to struggle with complex technical problems until they understand the innermost workings of the Von Neumann architecture, have read Cormen, Leiserson, Rivest, and Stein's 'Introduction to Algorithms' cover to cover, and have memorized the method signature of every broadly useful Java class.
Some folks might be brilliant at understanding how people think and making great user experiences. Maybe they have an artistic flair that allows them to create some gorgeous graphics and designs, or maybe they are expert writers who can engage and inspire readers. The products of their coding may provide a canvas for these other creative endeavors. Not having an expert-level ability to understand programming without any tooling doesn't inherently make what they do less valuable, does it?
As in so many other communities, I think a lot of the "people who use AI to code are amateurs" sentiment amounts to hostile gatekeeping. In many cases, what it really means is, "You're not good enough to be one of us because you're not like us."
Future Programmers
Large language models are like many other tools: they can be a crutch, or they can help you learn and increase your productivity. Until we can plug knowledge directly into our brains, Matrix-style, learning anything to a high level will always require reading books, documentation, and tutorials. And, of course, one must write code and struggle with solving problems. That is unlikely to change, no matter how advanced tooling like AI becomes. It takes time and effort, even as new tools make things easier.
In ten years, the industry will be filled with young folks who have never coded without AI tools, and more experienced developers will have long used them to maximize their productivity and the quality of their output. It will be hard to stay competitive without them. It's like someone today insisting on coding by hand in machine code, refusing to use high-level languages or IDEs with syntax highlighting and code completion. Such a person might be brilliant at understanding how a computer works, and they might create highly efficient small programs, but how fast could they build complex, modern software?
It will definitely be interesting to see studies of how LLM-based tooling affects learning outcomes and skill development, but these are still early days.
What do you think? Should young developers immediately reach for these tools, or should they learn things the hard way until they develop basic competence? Is it risky to rely too heavily on LLMs?