mastodon.world is one of the many independent Mastodon servers you can use to participate in the fediverse.
#LLMs


"Apple Inc. is teaming up with startup Anthropic PBC on a new “vibe-coding” software platform that will use artificial intelligence to write, edit and test code on behalf of programmers.

The system is a new version of Xcode, Apple’s programming software, that will integrate Anthropic’s Claude Sonnet model, according to people with knowledge of the matter. Apple will roll out the software internally and hasn’t yet decided whether to launch it publicly, said the people, who asked not to be identified because the initiative hasn’t been announced.

The work shows how Apple is using AI to improve its internal workflow, aiming to speed up and modernize product development. The approach is similar to one used by companies such as Windsurf and Cursor maker Anysphere, which offer advanced AI coding assistants popular with software developers."

bloomberg.com/news/articles/20

"Rabelais shows us that, when the production of discourse is automated, it becomes strictly monologic and loses its illocutionary social power. This sort of autonomous language is just like an ambassador: it speaks for us, but it cannot speak as us." - Hannah Katznelson

#illocutionary #LLMs #language #Technology #Renaissance #Rabelais #Erasmus #Automation

Source:
aeon.co/essays/who-needs-ai-te

"After poring through a century of varied conceptualizations, I’ll write out my current stance, half-baked as it is:

I think “AGI” is better understood through the lenses of faith, field-building, and ingroup signaling than as a concrete technical milestone. AGI represents an ambition and an aspiration; a Schelling point, a shibboleth.

The AGI-pilled share the belief that we will soon build machines more cognitively capable than ourselves—that humans won’t retain our species hegemony on intelligence for long. Many AGI researchers view their project as something like raising a genius alien child: We have an obligation to be the best parents we can, instilling the model with knowledge and moral guidance, yet understanding the limits of our understanding and control. The specific milestones aren’t important: it’s a feeling of existential weight.

However, the definition debates suggest that we won’t know AGI when we see it. Instead, it’ll play out more like this: Some company will declare that it reached AGI first, maybe an upstart trying to make a splash or raise a round, maybe after acing a slate of benchmarks. We’ll all argue on Twitter over whether it counts, and the argument will be fiercer if the model is internal-only and/or not open-weights. Regulators will take a second look. Enterprise software will be sold. All the while, the outside world will look basically the same as the day before.

I’d like to accept this anti-climactic outcome sooner than later. Decades of contention will not be resolved next year. AGI is not like nuclear weapons, where you either have it or you don’t; even electricity took decades to diffuse. Current LLMs have already surpassed the first two levels on OpenAI and DeepMind’s progress ladders. A(G)I does matter, but it will arrive—no, is already arriving—in fits and starts."

jasmi.news/p/agi

@jasmine · AGI (disambiguation) · By Jasmine Sun

"Asking scientists to identify a paradigm shift, especially in real time, can be tricky. After all, truly ground-shifting updates in knowledge may take decades to unfold. But you don’t necessarily have to invoke the P-word to acknowledge that one field in particular — natural language processing, or NLP — has changed. A lot.

The goal of natural language processing is right there on the tin: making the unruliness of human language (the “natural” part) tractable by computers (the “processing” part). A blend of engineering and science that dates back to the 1940s, NLP gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. It was also ground zero for the emergence of large language models — a technology that NLP helped to invent but whose explosive growth and transformative power still managed to take many people in the field entirely by surprise.

To put it another way: In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?

Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours."

quantamagazine.org/when-chatgp

Quanta Magazine · When ChatGPT Broke an Entire Field: An Oral History
Researchers in “natural language processing” tried to tame human language. Then came the transformer.

"Usually, AdSense ads appear in search results and are scattered around websites. Google ran a small test of chatbot ads late last year, partnering with select AI startups, including AI search apps iAsk and Liner.

The testing must have gone well because Google is now allowing more chatbot makers to sign up for AdSense. "AdSense for Search is available for websites that want to show relevant ads in their conversational AI experiences," said a Google spokesperson.

If people continue shifting to using AI chatbots to find information, this expansion of AdSense could help prop up profits. There's no hint of advertising in Google's own Gemini chatbot or AI Mode search, but the day may be coming when you won't get the clean, ad-free experience at no cost."

arstechnica.com/ai/2025/05/goo

Ars Technica · Google is quietly testing ads in AI chatbots · By Ryan Whitwam

"I think there is a real need for a book on actual vibe coding: helping people who are not software developers—and who don’t want to become developers—learn how to use vibe coding techniques safely, effectively and responsibly to solve their problems.

This is a rich, deep topic! Most of the population of the world are never going to learn to code, but thanks to vibe coding tools those people now have a path to building custom software.

Everyone deserves the right to automate tedious things in their lives with a computer. They shouldn’t have to learn programming in order to do that. That is who vibe coding is for. It’s not for people who are software engineers already!

There are so many questions to be answered here. What kind of projects can be built in this way? How can you avoid the traps around security, privacy, reliability and a risk of over-spending? How can you navigate the jagged frontier of things that can be achieved in this way versus things that are completely impossible?

A book for people like that could be a genuine bestseller! But because three authors and the staff of two publishers didn’t read to the end of the tweet we now need to find a new buzzy term for that, despite having the perfect term for it already."

simonwillison.net/2025/May/1/n

Simon Willison’s Weblog · Two publishers and three authors fail to understand what “vibe coding” means
Vibe coding does not mean “using AI tools to help write code”. It means “generating code with AI without caring about the code that is produced”. See Not all AI-assisted …

The thesis that each unauthorized use of a copyrighted work amounts to a lost sale is going down the drain...

"At times, it sounded like the case was the authors’ to lose, with Chhabria noting that Meta was “destined to fail” if the plaintiffs could prove that Meta’s tools created similar works that cratered how much money they could make from their work. But Chhabria also stressed that he was unconvinced the authors would be able to show the necessary evidence. When he turned to the authors’ legal team, led by high-profile attorney David Boies, Chhabria repeatedly asked whether the plaintiffs could actually substantiate accusations that Meta’s AI tools were likely to hurt their commercial prospects. “It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected,” he told Boies. “It’s not obvious to me that is the case.”

When defendants invoke the fair use doctrine, the burden of proof shifts to them to demonstrate that their use of copyrighted works is legal. Boies stressed this point during the hearing, but Chhabria remained skeptical that the authors’ legal team would be able to successfully argue that Meta could plausibly crater their sales. He also appeared lukewarm about whether Meta’s decision to download books from places like LibGen was as central to the fair use issue as the plaintiffs argued it was. “It seems kind of messed up,” he said. “The question, as the courts tell us over and over again, is not whether something is messed up but whether it’s copyright infringement.”

A ruling in the Kadrey case could play a pivotal role in the outcomes of the ongoing legal battles over generative AI and copyright."

wired.com/story/meta-lawsuit-c

WIRED · A Judge Says Meta’s AI Copyright Case Is About ‘the Next Taylor Swift’ · By Kate Knibbs