Coding Is Dead, Long Live Coding: What Builders Actually Need to Learn Now
Do product managers and designers still need to know how to code in the AI era?
The head of Claude Code at Anthropic has not edited a single line of code by hand since November 2025. One hundred percent of his code is written by AI. Meanwhile, the CEO of Cognition -- the company behind Devin, the AI software engineer -- says learning computer science is more important than ever. The CEO of Vercel says learn to code AND learn to prompt. OpenAI's CPO says the real skill is writing evals, not writing code. And a PM leader at Meta says learning to code is a direct competitive advantage for anyone working in AI products.
Everyone agrees the world is changing. Nobody agrees on what that means for you. The answer depends on which of these voices you trust -- and as you will see, the disagreement is not superficial. It reflects fundamentally different beliefs about how quickly AI will replace human technical skill, and what "technical skill" even means anymore.
If AI can write code, should product builders, aspiring PMs, and career-changers still invest hundreds of hours learning to program? Or is that time now better spent elsewhere?
The 5 Positions
Evidence from the Archive
- Boris, head of Claude Code, personally ships 10-30 pull requests daily with zero hand-edited code since November 2025
- Five AI agents running simultaneously during the podcast recording, producing code in parallel
- Vercel going from 150 engineers to 600 builders using v0 across marketing, sales, and product management
- The product management team creating 'live PRDs' so detailed the engineering team says 'just ship it'
- The Google Maps rewrite -- once cutting-edge R&D, now achievable by many React developers -- as evidence that technical moats erode but product moats persist
- Python as an existing example of 'explaining in English what you want and the computer does it' -- from the perspective of a 1970s programmer
- The Lenny & Friends Summit panel where the eval-writing comment resonated most with the audience
- Building OpenAI's deep research product: designing evals and product simultaneously, hill-climbing on eval performance
- Deciding whether to launch a model at 70% or 80% accuracy -- a product decision that requires technical understanding
- The advice to 'fake the AI with a Figma prototype' for MVPs instead of investing in model training prematurely
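To make the eval-writing idea concrete, here is a minimal sketch of what an eval harness looks like in Python. Everything in it is illustrative: the `answer()` function is a hypothetical stand-in for a model call, the cases are invented, and this is not OpenAI's actual tooling -- but the shape (score on labeled cases, compare against a launch threshold) is the core of the practice.

```python
# Minimal eval-harness sketch. `answer()` is a hypothetical stand-in
# for a model call; a real harness would query an LLM instead.

def answer(question: str) -> str:
    """Stand-in model: returns canned answers, or 'unknown'."""
    canned = {
        "2+2": "4",
        "capital of France": "Paris",
        "boiling point of water (C)": "100",
    }
    return canned.get(question, "unknown")

def run_eval(cases: list[tuple[str, str]]) -> float:
    """Score the model on (question, expected) pairs; return accuracy."""
    correct = sum(1 for q, expected in cases if answer(q) == expected)
    return correct / len(cases)

LAUNCH_THRESHOLD = 0.8  # the product decision: ship at 70% or hold for 80%?

cases = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("boiling point of water (C)", "100"),
    ("airspeed of an unladen swallow", "11 m/s"),
]
accuracy = run_eval(cases)
print(f"accuracy={accuracy:.2f} ship={accuracy >= LAUNCH_THRESHOLD}")
# prints: accuracy=0.75 ship=False
```

Hill-climbing, in this framing, just means changing the prompt or model and re-running the harness until the number clears the threshold -- which is exactly why writing good cases is a product skill as much as an engineering one.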
The Synthesis
Everyone in this debate is right -- they are answering different questions for different people at different stages. The question "should I learn to code?" has been replaced by a more useful question: "what layer of the stack should I understand?"
The hierarchy of value:
- Syntax: least valuable, rapidly commoditizing
- Frameworks and tools: still useful, also commoditizing
- Systems architecture: highly valuable, slow to commoditize
- Product judgment about what to build: most valuable, not commoditizing
- Eval and quality judgment: newly critical, not yet widely distributed
The people closest to the AI frontier disagree most sharply. One says coding is solved in a year, another says learning CS is more important than ever, another says the real skill is evals. This genuine uncertainty tells you to hedge: learn enough to understand systems, but do not make any single technical skill your only investment.
Effective prompting involves role-playing, chain-of-thought reasoning, few-shot learning, RAG pipelines, and LLM-as-judge evaluation. These are programming skills involving logic, debugging, and systematic iteration -- but they look nothing like writing functions in Python. The definition of 'coding' itself is changing.
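The claim that prompting is a form of programming is easy to see in code. Below is a small illustrative sketch -- the role message and few-shot examples are invented, and the model call itself is deliberately omitted -- showing that a prompt with a role and few-shot examples is assembled with ordinary logic, the kind you would debug and iterate on like any other function.

```python
# Illustrative sketch: building a role-playing, few-shot prompt with
# ordinary code. The examples are invented; the model call is omitted.

FEW_SHOT = [
    ("Summarize: The cat sat on the mat.", "A cat rested on a mat."),
    ("Summarize: Rain fell all day in Seattle.", "It rained in Seattle all day."),
]

def build_prompt(task: str) -> str:
    lines = ["You are a concise technical editor."]  # role-playing
    for inp, out in FEW_SHOT:                        # few-shot examples
        lines += [f"Input: {inp}", f"Output: {out}"]
    lines += [f"Input: {task}", "Output:"]           # the actual task
    return "\n".join(lines)

print(build_prompt("Summarize: The meeting ran long."))
```

Swap the examples, add a chain-of-thought instruction, or feed in retrieved documents, and you are doing RAG and prompt iteration -- programming by another name.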
Which Approach Fits You?
The right approach depends on three questions:
- What is your current role?
- What layer of the tech stack matters most for your work?
- How do you think about AI's impact on coding?
Notable Absences
The Bottom Line
There is one more dimension that none of these voices explicitly names but all of them imply: the definition of "coding" itself is changing. Lenny's newsletter on prompt engineering techniques reveals that effective prompting involves role-playing, chain-of-thought reasoning, few-shot learning, RAG pipelines, and LLM-as-judge evaluation. These are programming skills -- they involve logic, debugging, and systematic iteration -- but they look nothing like writing functions in Python. The person who learns to orchestrate AI agents effectively is doing a form of programming that is arguably more valuable than traditional coding, because the output scales differently.
Lenny's own newsletter reinforces this convergence. His guide to AI prototyping for product managers shows that PMs can now build functional prototypes with zero coding ability using v0, Bolt, Replit, and Cursor. His survey of vibe coding produced over 1,000 examples of non-engineers building real, useful software -- from nicotine pouch trackers to custom flight radar apps. And Perplexity's co-founder predicts that "technical PMs or engineers with product taste will become the most valuable people at a company over time." The thread connecting all of this: the definition of "technical" is broadening, not disappearing.
Sources
- Bret Taylor — "He saved OpenAI, invented the “Like” button, and built Google Maps: Bret Taylor on the future of careers, coding, agents, and more" — Lenny's Podcast, July 31, 2025
- Scott Wu — "How Devin replaces your junior engineers with infinite AI interns that never sleep | Scott Wu (Cognition CEO)" — Lenny's Podcast, September 8, 2025
- Guillermo Rauch — "Everyone’s an engineer now: Inside v0’s mission to create a hundred million builders | Guillermo Rauch (founder and CEO of Vercel, creators of v0 and Next.js)" — Lenny's Podcast, April 13, 2025
- Marily Nika — "AI and product management | Marily Nika (Meta, Google)" — Lenny's Podcast, February 5, 2023