#LargeLanguageModels

ResearchBuzz: Firehose
TechCrunch: Google quietly released an app that lets you download and run AI models locally. “Called Google AI Edge Gallery, the app is available for Android and will soon come to iOS. It allows users to find, download, and run compatible models that generate images, answer questions, write and edit code, and more. The models run offline, without needing an internet connection, tapping into […]”
https://rbfirehose.com/2025/06/01/techcrunch-google-quietly-released-an-app-that-lets-you-download-and-run-ai-models-locally/

ResearchBuzz: Firehose
AFP: UAE unveils new Arabic-language AI model. “The United Arab Emirates on Wednesday announced a new Arabic-language artificial intelligence model, describing it as the best-performing in the region.”
https://rbfirehose.com/2025/05/31/afp-uae-unveils-new-arabic-language-ai-model/

ResearchBuzz: Firehose
The Register: Research reimagines LLMs as tireless tools of torture. “Large language models (LLMs) are not just about assistance and hallucinations. The technology has a darker side. In research titled ‘LLM-Enabled Coercive Interrogation,’ developer Morgan Lee explored how the technology could be put to use for non-physical coercion.”
https://rbfirehose.com/2025/05/27/the-register-research-reimagines-llms-as-tireless-tools-of-torture/

ResearchBuzz: Firehose
The Verge: Anthropic’s Claude 4 AI models are better at coding and reasoning. “Anthropic has introduced Claude Opus 4 and Claude Sonnet 4, its latest generation of hybrid-reasoning AI models optimized for coding tasks and solving complex problems.”
https://rbfirehose.com/2025/05/23/the-verge-anthropics-claude-4-ai-models-are-better-at-coding-and-reasoning/

ResearchBuzz: Firehose
PsyPost: AI chatbots often misrepresent scientific studies — and newer models may be worse. “Published in Royal Society Open Science, the study found that the most widely used language models frequently overgeneralize the results of scientific studies—sometimes making broader or more confident claims than the original research supports. This tendency was more common in newer models and, […]”
https://rbfirehose.com/2025/05/22/psypost-ai-chatbots-often-misrepresent-scientific-studies-and-newer-models-may-be-worse/

ResearchBuzz: Firehose
TechXplore: Third-party data annotators often fail to accurately read the emotions of others, study finds. “Machine learning algorithms and large language models (LLMs), such as the model underpinning the functioning of the platform ChatGPT, have proved to be effective in tackling a wide range of tasks. These models are trained on various types of data (e.g., texts, images, videos, and/or […]”
https://rbfirehose.com/2025/05/22/techxplore-third-party-data-annotators-often-fail-to-accurately-read-the-emotions-of-others-study-finds/

ResearchBuzz: Firehose
The New Stack: Data Commons Can Save Open AI. “Two paradigm shifts are needed. First, AI developers can no longer afford to build datasets alone, treating vast bodies of knowledge, culture and information as a raw resource they can turn into tokens. Datasets must be viewed as tools for solving AI development challenges and addressing other stakeholders’ needs. This entails collaboration, […]”
https://rbfirehose.com/2025/05/21/the-new-stack-data-commons-can-save-open-ai/

ResearchBuzz: Firehose
MakeUseOf: Anyone Can Enjoy the Benefits of a Local LLM With These 5 Apps. “Cloud-based AI chatbots like ChatGPT and Gemini are convenient, but they come with trade-offs. Running a local LLM—the tech behind the AI chatbot—puts you in control, offering offline access and stronger data privacy. And while it might sound technical, the right apps make it easy for anyone to get started.”
https://rbfirehose.com/2025/05/19/makeuseof-anyone-can-enjoy-the-benefits-of-a-local-llm-with-these-5-apps/

ResearchBuzz: Firehose
CNBC: OpenAI will show how models do on hallucination tests and ‘illicit advice’. “OpenAI on Wednesday announced a new ‘safety evaluations hub,’ a webpage where it will publicly display artificial intelligence models’ safety results and how they perform on tests for hallucinations, jailbreaks and harmful content, such as ‘hateful content or illicit advice.'”
https://rbfirehose.com/2025/05/17/cnbc-openai-will-show-how-models-do-on-hallucination-tests-and-illicit-advice/

ResearchBuzz: Firehose
Cornell Chronicle: Developers, educators view AI harms differently, research finds. “Teachers are increasingly using educational tools that leverage large language models (LLMs) like ChatGPT for lesson planning, personalized tutoring and more in K-12 classrooms around the world. Cornell researchers have found the developers of such tools and the educators who use them have different ideas […]”
https://rbfirehose.com/2025/05/17/cornell-chronicle-developers-educators-view-ai-harms-differently-research-finds/

ResearchBuzz: Firehose
TechCrunch: Google’s Gemma AI models surpass 150M downloads. “Google’s openly available Gemma collection of AI models has reached a milestone: over 150 million downloads. Omar Sanseviero, a developer relations engineer at Google DeepMind, announced the figure on X over the weekend, also revealing that developers have created more than 70,000 variants of Gemma on the AI dev platform […]”
https://rbfirehose.com/2025/05/16/techcrunch-googles-gemma-ai-models-surpass-150m-downloads/

ResearchBuzz: Firehose
Ars Technica: New Lego-building AI creates models that actually stand up in real life. “On Thursday, researchers at Carnegie Mellon University unveiled LegoGPT, an AI model that creates physically stable Lego structures from text prompts. The new system not only designs Lego models that match text descriptions (prompts) but also ensures they can be built brick by brick in the real world, […]”
https://rbfirehose.com/2025/05/15/ars-technica-new-lego-building-ai-creates-models-that-actually-stand-up-in-real-life/

ResearchBuzz: Firehose
Hackaday: An LLM For The Raspberry Pi. “Microsoft’s latest Phi4 LLM has 14 billion parameters that require about 11 GB of storage. Can you run it on a Raspberry Pi? Get serious. However, the Phi4-mini-reasoning model is a cut-down version with ‘only’ 3.8 billion parameters that requires 3.2 GB. That’s more realistic and, in a recent video, [Gary Explains] tells you how to add this LLM […]”
https://rbfirehose.com/2025/05/14/hackaday-an-llm-for-the-raspberry-pi/

ResearchBuzz: Firehose
The Register: Update turns Google Gemini into a prude, breaking apps for trauma survivors. “Google’s latest update to its Gemini family of large language models appears to have broken the controls for configuring safety settings, breaking applications that require lowered guardrails, such as apps providing solace for sexual assault victims.”
https://rbfirehose.com/2025/05/11/the-register-update-turns-google-gemini-into-a-prude-breaking-apps-for-trauma-survivors/

ResearchBuzz: Firehose
Hackaday: LLM Ported To The C64, Kinda. “[ytm] did the hard work of porting the Llama 2 model to the most popular computer ever made. Of course, as you might expect, the ancient 8-bit machine doesn’t really have the stones to run an LLM on its own. You will need one rather significant upgrade, in the form of 2 MB additional RAM via a C64 REU.”
https://rbfirehose.com/2025/05/04/hackaday-llm-ported-to-the-c64-kinda/

ResearchBuzz: Firehose
Carnegie Mellon University: Copilot Arena Helps Rank Real-World LLM Coding Abilities. “With so many AI coding assistants out there, it can be hard to keep track of ones that perform well on real-world tasks. To help analyze which leading or emerging code-writing large language models (LLMs) the developer community prefers, researchers at Carnegie Mellon University developed Copilot Arena, a […]”
https://rbfirehose.com/2025/05/04/carnegie-mellon-university-copilot-arena-helps-rank-real-world-llm-coding-abilities/

ResearchBuzz: Firehose
The Register: AI models routinely lie when honesty conflicts with their goals. “Some smart cookies have found that when AI models face a conflict between telling the truth or accomplishing a specific goal, they lie more than 50 percent of the time. The underlying issue is that there’s no right or wrong way to configure an AI model. AI model output varies depending on the settings applied and […]”
https://rbfirehose.com/2025/05/03/the-register-ai-models-routinely-lie-when-honesty-conflicts-with-their-goals/

ResearchBuzz: Firehose
Engadget: OpenAI rolls back update that made ChatGPT an ass-kissing weirdo. “OpenAI is rolling back a recent update to GPT-4o, the default model that powers ChatGPT, following complaints from users that it made the chat bot act like a weirdo.”
https://rbfirehose.com/2025/04/30/engadget-openai-rolls-back-update-that-made-chatgpt-an-ass-kissing-weirdo/

ResearchBuzz: Firehose
The Conversation: Popular AIs head-to-head: OpenAI beats DeepSeek on sentence-level reasoning. “I’m a computer scientist. My colleagues − researchers from the AI Institute at the University of South Carolina, Ohio State University and University of Maryland Baltimore County − and I have developed the Reasons benchmark to test how well large language models can automatically generate […]”
https://rbfirehose.com/2025/04/28/the-conversation-popular-ais-head-to-head-openai-beats-deepseek-on-sentence-level-reasoning/

ResearchBuzz: Firehose
Florida International University: “Poisoned” AI models can unleash real-world chaos. Can these attacks be prevented? “The majority of AI systems we encounter today — from ChatGPT to Netflix’s personalized recommendations — are only ‘intelligent’ enough to pull off such impressive feats because of the extensive amounts of text, imagery, speech and other data they are trained on. If […]”
https://rbfirehose.com/2025/04/27/florida-international-university-poisoned-ai-models-can-unleash-real-world-chaos-can-these-attacks-be-prevented/