What a Two-Minute Speed Gain Costs Developers in Learning

The findings highlight a growing tension in AI-powered workplaces.

    The IT industry has long known the trade-off: AI coding assistants speed up work, but they can also dull learning along the way. New research now puts data behind that everyday experience, giving shape to a concern long discussed in the community.

    Early observational analysis of Claude.ai usage data showed that AI can accelerate parts of people’s work by up to 80%, suggesting large productivity gains when the technology is applied to familiar tasks. But a follow-up experimental study reveals a more complex story about how those gains relate to human skill development.

    In a randomized controlled trial with 52 mostly junior software engineers, participants were asked to learn and use Trio, a previously unfamiliar Python library, to complete coding tasks. One group had access to an AI coding assistant, while the other coded without AI. Although the AI-assisted group finished slightly faster, by about two minutes on average, the speed boost was not statistically significant. What was significant was the learning outcome.
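
    Trio is a real, openly available Python library for async programming built around “structured concurrency.” The short sketch below is purely illustrative and is not code from the study; the function names are invented, and it only shows the kind of unfamiliar API surface, such as nurseries, trio.run, and await trio.sleep, that participants would have had to absorb.

        import trio

        async def greet(name: str, delay: float) -> None:
            await trio.sleep(delay)          # non-blocking pause
            print(f"hello, {name}")

        async def main() -> None:
            # A "nursery" is Trio's construct for spawning child tasks; the
            # async-with block exits only after every child task has finished.
            async with trio.open_nursery() as nursery:
                nursery.start_soon(greet, "alice", 0.1)
                nursery.start_soon(greet, "bob", 0.2)

        trio.run(main)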

    On a post-task quiz testing comprehension of the concepts they had just worked with, participants who used AI scored 17% lower than those who coded by hand, a gap of nearly two letter grades.

    The largest gaps appeared in debugging, a critical skill for identifying and fixing errors in code. Researchers suggest this reflects “cognitive offloading,” where developers lean on AI to do the thinking, reducing engagement with the material.
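
    As a hypothetical illustration, not drawn from the study’s task set, the kind of bug a newcomer to an async library like Trio must reason through can be as small as a missing await. Working out why the pause below never happens is exactly the engagement that offloading the fix to an AI can short-circuit.

        import trio

        async def poll_status() -> None:
            # Bug: without "await", trio.sleep(5) merely creates a coroutine
            # object and discards it, so nothing pauses and Python typically
            # emits "RuntimeWarning: coroutine 'sleep' was never awaited".
            trio.sleep(5)
            print("status checked")

        async def poll_status_fixed() -> None:
            await trio.sleep(5)              # correct: actually suspends here
            print("status checked")

        trio.run(poll_status_fixed)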

    But the story isn’t entirely bleak. How developers interacted with AI proved crucial: those who asked conceptual questions, requested explanations, or used AI to deepen their understanding retained far more knowledge than peers who handed code generation and debugging over to AI entirely.

    The results underscore a growing tension in AI-powered workplaces. While earlier Claude.ai data suggested dramatic speed improvements on familiar tasks, this controlled study shows that AI may undermine learning when users are still acquiring new skills.

    That has implications for how companies integrate AI tools, especially for junior staff who must build the expertise to oversee AI-generated systems in high-stakes environments.

    The findings have triggered polarized reactions online. One X user summed up the results bluntly: “We improved coding speed by two minutes and melted 17% of the skills,” calling it an honest snapshot of AI developer culture in 2025–26. 

    Another user, however, was more skeptical, arguing the study itself may be overly generous: “I doubt it was honest. They’ve been hyping it like crazy; this is likely the best spin they could manage from a real degradation in efficiency and skills. They say they kill their own product, but framing it this way makes it look like a skill issue instead of a product issue.”

    Together, the reactions highlight a broader concern for companies rolling out AI at scale. While AI can clearly accelerate work, especially where skills are already mature, aggressive use during learning may erode the very expertise needed to supervise, validate, and correct AI systems in high-stakes environments.
