#MetaAI

"At so crucial a political moment we need deep, critical thinkers and powerful writers - we must not let the entire future of human understanding be stolen from us, frozen in time because no one who comes next knows how to write."--Tansy Hoskins.@tansy This blog post is essential reading tansyhoskins.org/my-books-have #Meta #MetaAI #GenAI

Tansy E Hoskins · My books have been stolen: I know exactly who did it and why.

Facebook / Insta business pages are public. Facebook uses data from public sources for AI training. Objecting from a private profile has NO effect on the business profile. Business profiles will be used for AI training starting 28 May 2025. (Source: email from Facebook, 22 May 2025)
dtnschtz.de/facebook-ki-widersprechen/
#meta #metaai #ki #kitraining #metakitraining #datenschutz #aiact #FacebookMeta #facebook #instagram #aitraining

A crazy development at the crossroads of government efficiency and AI: The Department of Government Efficiency (DOGE), an initiative led by Elon Musk, utilized Meta's AI model (Llama 2) to review and classify email responses from federal workers. The core aim? To assess job necessity based on responses to the famous "Fork in the Road" email asking employees to justify their work.
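
The WIRED report doesn't describe the actual pipeline DOGE used, so the snippet below is only a hedged sketch of what classifying those replies with an open-weights Llama 2 chat model could look like. The checkpoint name, prompt wording, and label set here are illustrative assumptions, not details from the article.

```python
# Hedged sketch only: the article does not describe DOGE's real pipeline.
# The checkpoint, prompt, and labels are assumptions for illustration.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed checkpoint (gated; needs Hugging Face access)
    device_map="auto",                      # requires the accelerate package
)

LABELS = ("NECESSARY", "NOT_NECESSARY", "UNCLEAR")  # hypothetical label set

def classify_reply(email_body: str) -> str:
    """Ask the model to bucket one employee reply into a single label."""
    prompt = (
        "Classify the following federal employee's reply as one of "
        f"{', '.join(LABELS)}. Answer with the label only.\n\n"
        f"{email_body}\n\nLabel:"
    )
    output = generator(prompt, max_new_tokens=8, do_sample=False)[0]["generated_text"]
    answer = output[len(prompt):].strip().upper()
    for label in LABELS:
        if answer.startswith(label):
            return label
    return "UNCLEAR"
```

How reliable such zero-shot labeling is depends heavily on the prompt and the model, which is exactly why the oversight questions below matter.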

💡 This scenario prompts some vital points:
👉 AI in Governance: How are private AI models being deployed to evaluate public sector roles and productivity?
🔐 Data Privacy Concerns: The use of federal worker emails raises significant questions regarding sensitive government data and privacy safeguards.
🔍 Ethical Oversight: What frameworks are in place to ensure fair and unbiased AI analysis in such critical contexts?
🤔 Transparency Imperative: Understanding the full scope and implications of such AI applications is crucial for public and professional trust.

This highlights the complex and evolving relationship between tech, governance, and workforce management. What are the ethical and practical considerations when AI is used to evaluate government functions?

#AI #GovernmentEfficiency #DataPrivacy #ElonMusk #MetaAI #security #privacy #cloud #infosec #cybersecurity
wired.com/story/doge-used-meta

WIRED · DOGE Used Meta AI Model to Review Emails From Federal Workers, by Makena Kelly

So, #Facebook #Meta sent me an email to notify me they'll be processing my public FB info to train their AIs. At the end of the email is a link saying I can object to the processing.

Clicked the link; the end of the form says that despite my objection, my info will still be processed if it's tagged or shared publicly by someone else.

Like all of my FB friends who haven't read that email to the end and so haven't objected.

facebook.com/help/contact/6359

The number of questions being asked on StackOverflow is dropping rapidly

Like cutting down a forest without growing new trees, the AI corporations seem to be consuming the natural raw material of their money-making machines faster than it can be replenished.

Natural, human-generated information, be it works of art or conversations about factual things like how to write software, is the source of training data for Large Language Models (LLMs), which is what people are calling “artificial intelligence” nowadays. LLM shops spend untold millions on curating the information they harvest to ensure this data is strictly human-generated and free of other LLM-generated content. If they do not, the non-factual “hallucinations” (fictional content) that these LLMs produce may come to dominate the factual, human-made training data, making the answers the LLMs generate increasingly prone to hallucination.

The Internet is already so full of LLM-generated content that it has become a major problem for these companies. New LLMs are more and more often trained on fictional LLM-generated content that passes as factual and human-made, which is making LLMs less and less accurate as time goes on: a vicious downward spiral.
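
A toy simulation makes the feedback loop concrete. The sketch below is an analogy under stated assumptions, fitting a simple Gaussian rather than training any real LLM: each "generation" refits only on samples drawn from the previous generation's fit, with no fresh human data, and the fitted parameters drift further from the original distribution each round.

```python
# Toy illustration of the downward spiral: every "generation" trains only on
# data sampled from the previous generation's model, never on fresh human data.
# A Gaussian stands in for an LLM; this is an analogy, not a real pipeline.
import random
import statistics

random.seed(0)
SAMPLES = 200  # small training sets make the drift easier to see

# Generation 0 trains on "human-made" data from the true distribution.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES)]

for generation in range(8):
    mu = statistics.fmean(data)      # the model "learns" a mean...
    sigma = statistics.stdev(data)   # ...and a spread from its training set
    print(f"generation {generation}: mean={mu:+.3f} stdev={sigma:.3f}")
    # The next generation's training set is this model's own output.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES)]
```

Run it with a few different seeds: the estimates random-walk away from the true mean 0 and standard deviation 1, because each generation inherits the previous generation's sampling errors and then adds its own.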

But it gets worse. Thanks to all of the AI hype, everyone is asking questions of LLMs nowadays instead of other humans. So the sources of these LLMs' training data, websites like StackOverflow and Reddit, are no longer recording as many questions from humans to other humans. If that human-made information disappears, so does the raw material that makes it possible to build these LLMs.

Even worse still, if there are any new innovations in science or technology, then unless humans are asking questions of the human innovators, the LLMs can't learn about those innovations either. Everyone will be stuck in this churning maelstrom of AI “slop,” asking only questions that have been asked by millions of others before, and never receiving any true or accurate answers about new technology. And nobody, neither the humans nor the machines, will be learning anything new at all, while the LLMs become more and more prone to hallucinations with each new generation of AI released to the public.

I think we are finally starting to see the real limitations of this LLM technology come into clear view: the rate at which it is improving is simply not sustainable. Clearly, pouring more and more money and energy into scaling up these LLM projects will not lead to increased return on investment, and it will definitely not lead to the “singularity” in which machine intelligence surpasses human intelligence. So how long before the masses finally realize they have been sold a bill of goods by these AI corporations?

The Pragmatic Engineer · Stack Overflow is almost dead: Today, Stack Overflow has almost as few questions asked per month as when it launched back in 2009. A recap of its slow, then rapid, downfall.
#tech #AI #Slop