“Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%”
https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf
Study finds AI actually slows down experienced devs by 19% instead of speeding them up! The kicker? They expected a 24% speedup, felt they got 20%, but reality said otherwise. Yet they keep using it because it makes coding feel less like staring at a blank page.
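To make the gap between perception and reality concrete, here is the arithmetic behind those three numbers as a minimal Python sketch. The percentages are the study's as quoted; the helper name is just for illustration:

```python
# Express the quoted percentages as time multipliers
# (time_with_ai / time_without_ai).

def multiplier_from_speedup(speedup_pct: float) -> float:
    """A claimed X% reduction in completion time -> baseline time * (1 - X/100)."""
    return 1 - speedup_pct / 100

forecast = multiplier_from_speedup(24)   # devs' forecast: 0.76x baseline time
felt     = multiplier_from_speedup(20)   # devs' post-hoc estimate: 0.80x
actual   = 1.19                          # measured: 19% *longer*, i.e. 1.19x

# Devs believed tasks took 0.80x the baseline time; they actually took 1.19x,
# so real completion time was roughly 1.49x what they perceived.
print(f"actual/perceived = {actual / felt:.2f}")   # ~1.49
```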
"Anthropic is very likely losing money on every single Claude Code customer, and based on my analysis, appears to be losing hundreds or even thousands of dollars per customer.
There is a gaping wound in the side of Anthropic, and it threatens financial doom for the company.
Some caveats before we continue:
- CCusage is not direct information from Anthropic, and thus there may be things we don’t know about how Anthropic charges customers, or about any efficiencies it may have.
- Despite the amount of evidence I’ve found, we do not have a representative sample of exact pricing. This evidence comes from people who use Claude Code, are measuring their usage, and elected to post their CCusage dashboards online — which likely represents a small sample of the total user base.
- Nevertheless, the number of cases I’ve found online of egregious, unrelentingly unprofitable burn is deeply concerning, and it’s hard to imagine that these examples are outliers.
- We do not know if the current, unrestricted version of Claude Code will last.
The reason I’m leading with these caveats is that the numbers I’ve found about the sheer amount of money Claude Code’s users are burning are absolutely shocking.
In the event that they are representative of the greater picture of Anthropic’s customer base, this company is wilfully burning 200% to 3000% of each Pro or Max customer’s monthly payment when they interact with Claude Code, and at each price point I have found repeated evidence that customers are allowed to burn their entire monthly payment in compute within, at best, eight days, with some cases involving customers on a $200-a-month subscription burning as much as $10,000 worth of compute."
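A quick arithmetic sketch to make those percentages concrete. The figures are illustrative, taken from the quote above (self-reported CCusage-style numbers, nothing from Anthropic itself):

```python
# Hypothetical burn-rate math for a $200/month Max subscription, using the
# numbers cited in the quote; not actual Anthropic data.

def burn_pct(compute_cost: float, subscription: float) -> float:
    """Compute cost consumed, as a percentage of the monthly subscription price."""
    return compute_cost / subscription * 100

# The 200%-3000% range corresponds to compute burns like these:
print(burn_pct(400, 200))     # 200.0  -> 2x the monthly payment
print(burn_pct(6_000, 200))   # 3000.0 -> 30x

# The extreme case cited: $10,000 of compute on a $200/month plan.
print(burn_pct(10_000, 200))  # 5000.0 -> 50x the monthly payment
```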
Burned through the free Cursor autocomplete limit in an hour and a half xD
Well, who would've thought: "Their findings were that using #LLM-based tools like #Cursor Pro with #Claude 3.5/3.7 Sonnet reduced #productivity by about 19% ..."
https://hackaday.com/2025/07/11/measuring-the-impact-of-llms-on-experienced-developer-productivity/
(Original paper: https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf)
I've been using Claude Code, and I like it. It's produced decent code, configuration files, and so on, but so far I've only used it for "greenfield", fully vibe-coded projects, i.e. having Claude start from scratch.
Meanwhile, I *have* used Cursor on existing projects to add features, fix bugs, and add tests. And I found that to work pretty well too.
The problem I have is that with Cursor, I can see the diffs of the code in my editor, step by step, and approve or deny individual changes.
With Claude Code, it seems to just print a diff in the console, and I have to accept or reject the whole thing there, with no context from the rest of my project and no ability to tweak it.
Am I just doing something wrong? Is this the reason to stick to Cursor?
Looking for insights.
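One workaround that keeps hunk-level review regardless of the tool, sketched below under assumptions (the branch name and function are hypothetical; the git commands are standard): let the agent work on a scratch branch, then review and selectively accept its changes with git itself.

```python
# Sketch of an agent-on-a-branch review loop using only standard git commands.
# "agent-scratch" and review_agent_changes are hypothetical names.
import subprocess

def review_agent_changes(base: str = "main", branch: str = "agent-scratch") -> None:
    """Print the diff of what the agent changed, file by file."""
    changed = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...{branch}"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    for path in changed:
        # Show each file's diff; in a terminal, `git checkout -p <branch> -- <path>`
        # then lets you pull in hunks one at a time, answering y/n per hunk.
        subprocess.run(["git", "diff", f"{base}...{branch}", "--", path], check=True)

review_agent_changes()
```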
We do not provide evidence that:
- #AI systems do not currently speed up many or most #softwareDevelopers
- AI systems do not speed up individuals or groups in domains other than #softwareDevelopment
- AI systems in the near future will not speed up #developers
- There are not ways of using existing AI systems more effectively
> Measuring the Impact of Early-2025 AI on Experienced #OpenSource #Developer #Productivity
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
METR ran a rare randomized trial on AI's impact in real-world dev work.
Result?
Tasks took 19% longer with AI tools, even though devs felt ~20% faster and experts expected a ~40% speedup.
No hype - big open-source repos, seasoned devs, Claude 3.5–3.7, Cursor Pro, proper metrics & stat sig.
Turns out - AI slows down experienced devs on real projects.
Full study:
https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf
"We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers. The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't."
"we find that when developers use AI tools, they take 19% longer than without - AI makes them slower." [1]
- says a study based on a randomized controlled trial, which includes a chart of hilariously overoptimistic forecasts set against the woeful observed results
@Patchbot_de If you want Microsoft to move, give Microsoft competition.¹ #VSCode #cursor
___
¹ with AI
“The users who choose Cursor are hardcore vibe addicts. They are tech incompetents who somehow BSed their way into a developer job. They cannot code without a vibe coding bot.”
I see no lie.
https://pivot-to-ai.com/2025/07/09/cursor-tries-setting-less-money-on-fire-ai-vibe-coders-outraged/
Cursor (AI coding tool) price increases