
"Anthropic is very likely losing money on every single Claude Code customer, and based on my analysis, appears to be losing hundreds or even thousands of dollars per customer.

There is a gaping wound in the side of Anthropic, and it threatens financial doom for the company.

Some caveats before we continue:

- CCusage is not direct information from Anthropic, and thus there may be things we don’t know about how it charges customers, or any means of efficiency it may have.
- Despite the amount of evidence I’ve found, we do not have a representative sample of exact pricing. This evidence comes from people who use Claude Code, are measuring their usage, and elected to post their CCusage dashboards online — which likely represents a small sample of the total user base.
- Nevertheless, the amount of cases I’ve found online of egregious, unrelentingly unprofitable burn are deeply concerning, and it’s hard to imagine that these examples are outliers.
- We do not know if the current, unrestricted version of Claude Code will last.

The reason I’m leading with these caveats is because the numbers I’ve found about the sheer amount of money Claude Code’s users are burning are absolutely shocking.

In the event that they are representative of the greater picture of Anthropic’s customer base, this company is wilfully burning 200% to 3000% of each Pro or Max customer that interacts with Claude Code, and in each price point’s case I have found repeated evidence that customers are allowed to burn their entire monthly payment in compute within, at best, eight days, with some cases involving customers on a $200-a-month subscription burning as much as $10,000 worth of compute."
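The percentages above are simple ratios of metered compute against a flat subscription fee. A minimal sketch of that arithmetic, in the style of CCusage-type tools that estimate API-equivalent cost from local token logs (the per-million-token prices here are illustrative assumptions, not Anthropic's actual rates):

```python
# Hedged sketch: estimating the API-equivalent cost of Claude Code usage
# from token counts, and expressing it as a "burn ratio" against the
# subscription fee. Prices below are assumptions for illustration only.

PRICES_PER_MTOK = {   # USD per million tokens (assumed, not official)
    "input": 3.00,
    "output": 15.00,
}

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate pay-as-you-go cost of one session's token usage."""
    return (input_tokens / 1_000_000 * PRICES_PER_MTOK["input"]
            + output_tokens / 1_000_000 * PRICES_PER_MTOK["output"])

def burn_ratio(monthly_compute_usd: float, subscription_usd: float) -> float:
    """Compute consumed, as a percentage of the flat monthly fee."""
    return monthly_compute_usd / subscription_usd * 100

# The worst case cited above: $10,000 of compute on a $200/month plan.
print(burn_ratio(10_000, 200))  # 5000.0
```

So the $10,000-on-$200 example works out to a 5000% burn ratio, well beyond even the 3000% upper bound quoted for typical cases.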

wheresyoured.at/anthropic-is-b

Ed Zitron's Where's Your Ed At · Anthropic Is Bleeding Out

I've been using Claude Code, and I like it. It's produced decent code and configuration files, but so far I've only used it for "greenfield", fully vibe-coded projects, i.e. having Claude start from scratch.

Meanwhile, I *have* used Cursor on existing projects to add features, fix bugs, and add tests. And I found that to work pretty well too.

The problem I have is that with Cursor, I can see the diffs of the code in my editor, step by step, and approve or deny individual changes.

With Claude, it seems like it just prints a diff in the console and I have to accept or reject the whole thing there, with no context of the rest of my project, and no ability to tweak it.

Am I just doing something wrong? Is this the reason to stick to Cursor?

Looking for insights.

We do not provide evidence that:

- #AI systems do not currently speed up many or most #softwareDevelopers

- AI systems do not speed up individuals or groups in domains other than #softwareDevelopment

- AI systems in the near future will not speed up #developers

- There are not ways of using existing AI systems more effectively

> Measuring the Impact of Early-2025 AI on Experienced #OpenSource #Developer #Productivity

metr.org/blog/2025-07-10-early

metr.org · Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
#llm #copilot #cursor

"We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers. The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't."

bsky.app/profile/metr.org/post

Bluesky Social · METR (@metr.org)

"we find that when developers use AI tools, they take 19% longer than without - AI makes them slower." [1]

- so finds a randomized controlled trial, whose write-up includes a chart of hilariously over-optimistic forecasts plotted against the dismal observed results

[1] metr.org/blog/2025-07-10-early.
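Note that "19% longer" and "19% slower" are not quite the same number: taking 19% more time corresponds to working at about 0.84x baseline speed. A quick arithmetic sketch of the conversion (my own illustration, not from the study):

```python
# Converting the METR headline numbers between "extra time taken"
# and a speed multiplier relative to the no-AI baseline.

def speed_multiplier(extra_time_fraction: float) -> float:
    """Speed relative to baseline when tasks take extra_time_fraction longer.

    time_with_ai = (1 + extra_time_fraction) * time_without_ai,
    so speed = 1 / (1 + extra_time_fraction).
    """
    return 1 / (1 + extra_time_fraction)

measured = speed_multiplier(0.19)  # ~0.84x: developers were actually slower
perceived = 1 + 0.20               # developers believed they were ~1.20x faster
print(round(measured, 2), perceived)
```

The gap between the perceived ~1.20x and the measured ~0.84x is what makes the forecast-versus-observation chart in the post so striking.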

#AI #LLM #Bard