Contrary to popular belief, a new study from METR found that using AI coding tools on familiar open-source projects actually reduced the productivity of experienced coders by 19%.
In an unexpected twist, most participants still reported feeling faster, estimating roughly a 20% improvement despite evidence to the contrary.
The experiment involved 16 experienced developers tackling 246 real-world coding tasks. Those using assistants like Cursor with Claude 3.5 and 3.7 spent significantly more time prompting, waiting for, reviewing, and debugging AI-generated code.
Around 9% of their time was spent cleaning up AI suggestions, and only 44% of those suggestions were accepted.
Before the study, developers anticipated a 24% speedup from AI. After finishing the tasks, they reported feeling 20% faster. According to METR's researchers, this illustrates how users' subjective experience can mislead them even when objective data says otherwise.
Researchers attribute the slowdown to the participants' deep familiarity with their own codebases, which left little room for AI to add value. The study does not claim that AI never enhances productivity; rather, it suggests AI tools may be more useful for less experienced developers or for unfamiliar tasks.
Despite the slowdown, many developers kept using their AI tools because the experience felt smoother, more like editing a draft than writing from scratch. Experts see AI as a comfort-enhancing tool, not a universal turbocharger for coding speed.
Key takeaways:

Know Your Context – AI excels in new or low-context tasks.
Measure, Don’t Assume – Track real output, not impressions.
Target AI to Need – Use it for boilerplate code, tests, or unfamiliar tasks.
Monitor Overhead – Prompting and reviewing AI-generated code adds time.