At MIT’s Media Lab, researchers looked at what happens in the brain when people write with and without AI. What they found was thought-provoking: participants using ChatGPT wrote cleaner essays, but their brain activity patterns showed noticeably lower engagement than those writing unaided. The easier the tool made the task, the less people seemed to think.
AI didn’t just change the output. It changed how people thought.
Software engineering teams are starting to feel that same effect. Coding’s now faster with AI, but it’s shaving away the slow, sometimes messy problem-solving that once helped junior engineers learn how things really work. And if we don’t rethink how growth and mentorship happen, we risk building teams that can write code that runs but can’t explain why it works.
How Engineers Used to Learn
Before AI, engineers built their skills through trial and error. You’d join a team and take on tasks like low-stakes UI tweaks and copy changes. That hands-on work taught you how systems behaved. Every debugging session or code review was part of your informal apprenticeship.
Now, with tools like GitHub Copilot or Cursor that can generate large blocks of code in minutes, that learning process is gone. The growth that used to come from wrestling with uncertainty now stops at a single prompt. It’s created a convincing illusion of fluency: junior engineers seem capable because their code runs, but ask why it works, or what to do when it doesn’t, and the confidence quickly fades.
A 2025 literature review covering 80 studies of early-career developers shows a clear pattern: AI tools accelerate delivery, but often at the cost of reflection and skill growth. Without structure around how juniors learn, teams risk producing coders who can build things that work but can’t yet explain why they do.
The Limits of Speed Without Mastery
When engineers stop deepening their skills, it doesn’t just affect the individual. It weakens the entire organization. Teams lose resilience when there’s no one left who can rebuild a brittle system or make good calls under pressure.
As AI takes over more of the work once given to junior developers, the knowledge gap is widening. Teams are shipping faster, but fewer people understand how things work beneath the surface. When problems appear, it’s often the same small group of experienced engineers who are left to fix them, a model that simply can’t scale.
The data echoes these concerns. A recent LeadDev survey of nearly 900 software engineers on the impact of AI found that more than a third (38%) believe juniors will struggle to learn new skills because they lack real-world experience, while 43% named critical thinking as the most in-demand skill over the next three years as AI adoption grows. Even more telling, 38% of engineering leaders agreed that AI tools have reduced the amount of direct mentoring juniors receive from senior engineers.
This isn’t a case against AI. But if we want AI to make us better engineers, not just faster ones, leaders need to rethink how they grow and mentor their juniors.
Designing a New Growth Path for Juniors
Since the old growth ladder is broken, it’s on engineering leaders to design a new one. And that doesn’t mean more training sessions or certifications. It means structuring daily work so that growth happens naturally as teams build and solve problems together.
Here are a few ways to start rebuilding that ladder:
Start with design, not syntax
The surest way to kill growth is to have engineers focus only on the code in front of them. Growth comes from understanding why something should exist and how it fits into the larger system. Leaders can help juniors build that muscle by holding design discussions, not just code reviews. Get them to think in flows, sketch the system before diving into code, and clarify requirements before they even name a variable.
Teach debugging before building
The best engineers sharpen their skills through debugging. You learn how systems behave by seeing where they fail and develop instincts you just can’t get from clean, finished code. Instead of deliberately giving engineers flawed AI output, encourage them to analyze and improve code that was generated to build or enhance a real feature. The goal is to help them spot inefficiencies or design gaps. Perhaps the AI solution isn’t optimized, doesn’t account for scalability, or misses the broader architectural context. Having engineers identify and refine these areas builds stronger problem-solving instincts and technical judgment.
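As a concrete illustration (a hypothetical sketch, not output from any particular tool), an assistant might generate an enrichment helper that works fine on test data but rescans a list for every record. Asking a junior to explain why it degrades at scale, and to refactor it, turns a routine review into a learning moment:

```python
from collections import defaultdict

# Hypothetical AI-generated helper: correct, but it rescans the full orders
# list for every user, so runtime grows quadratically with input size.
def attach_orders_naive(users, orders):
    enriched = []
    for user in users:
        user_orders = [o for o in orders if o["user_id"] == user["id"]]  # O(n) scan per user
        enriched.append({**user, "orders": user_orders})
    return enriched

# A refinement a junior might propose: build an index once, then do
# constant-time lookups, which holds up as the datasets grow.
def attach_orders_indexed(users, orders):
    orders_by_user = defaultdict(list)
    for order in orders:
        orders_by_user[order["user_id"]].append(order)
    return [{**user, "orders": orders_by_user[user["id"]]} for user in users]
```

The specific refactor matters less than the conversation it forces: what does the original cost, and why is the alternative better?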
Give them ownership over decisions
Development isn’t just about execution. It’s about judgment. And that means giving junior engineers decisions to make, not just tasks to complete. One way to do that is to have them compare two AI-generated outputs and explain which they’d choose and why. After all, the only way to build judgment is by making judgment calls. It’s also important to treat prompts and reasoning as part of the work itself, reviewing them as you would code. This turns engineers from code executors into decision-makers, and it’s how they start thinking like seniors long before they have the title.
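For example (again hypothetical, not tied to any specific model or codebase), two generated drafts of the same retry helper set up a real judgment call: one is simpler, the other behaves better when a downstream service is struggling:

```python
import random
import time

# Draft A: short and readable, but retries immediately, which can hammer
# a service that is already failing.
def fetch_with_retry_a(fetch, attempts=3):
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise

# Draft B: exponential backoff with jitter; more code, but it spreads
# retries out when many clients hit the same failure at once.
def fetch_with_retry_b(fetch, attempts=3, base_delay=0.5):
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Asking which draft they’d ship, and why, pushes juniors to weigh readability against failure behavior instead of accepting whichever output appeared first.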
Mentorship by Design, Not by Chance
The growth that used to happen naturally during long debugging sessions and deep code reviews is being lost as AI automates more of the process. To keep developing junior engineers in this AI-first reality, leaders must find ways to build those learning moments back into daily work.
That starts with how we approach mentorship. Go beyond reviewing what the model produced and focus on how people got there. Hold short “AI retros” to discuss what worked, what didn’t, and why. Ask senior engineers to talk through their reasoning out loud so juniors can see how good judgment is formed. Reward curiosity as much as completion.
Progress also needs to be measured differently. Output alone isn’t proof of growth. Teams that deploy quickly but learn shallowly will eventually stall. What matters is how clearly engineers can explain their choices, challenge AI’s output, and apply those lessons to the next problem.
AI will keep improving, handling more complexity, making fewer mistakes, and offering clearer logic. But progress will always come down to people. Judgment and curiosity are what will turn speed into innovation. And the leaders who invest in building those qualities now won’t just ship faster, they’ll set a new bar for what engineering excellence looks like.