LMArena Gets $100M at $600M Valuation for AI Model Testing
#AI #LMArena #AIFunding #ChatbotArena #AIBenchmarks #UCBerkeley
https://winbuzzer.com/2025/05/21/lmarena-gets-100m-at-600m-valuation-for-ai-model-testing-xcxwbn/
"Measuring progress is fundamental to the advancement of any scientific field. As benchmarks play an increasingly central role, they also grow more susceptible to distortion. Chatbot Arena has emerged as the go-to leaderboard for ranking the most capable AI systems. Yet, in this work we identify systematic issues that have resulted in a distorted playing field. We find that undisclosed private testing practices benefit a handful of providers who are able to test multiple variants before public release and retract scores if desired. We establish that the ability of these providers to choose the best score leads to biased Arena scores due to selective disclosure of performance results. At an extreme, we identify 27 private LLM variants tested by Meta in the lead-up to the Llama-4 release. We also establish that proprietary closed models are sampled at higher rates (number of battles) and have fewer models removed from the arena than open-weight and open-source alternatives. Both these policies lead to large data access asymmetries over time. Providers like Google and OpenAI have received an estimated 19.2% and 20.4% of all data on the arena, respectively. In contrast, a combined 83 open-weight models have only received an estimated 29.7% of the total data. We show that access to Chatbot Arena data yields substantial benefits; even limited additional data can result in relative performance gains of up to 112% on the arena distribution, based on our conservative estimates. Together, these dynamics result in overfitting to Arena-specific dynamics rather than general model quality. The Arena builds on the substantial efforts of both the organizers and an open community that maintains this valuable evaluation platform. We offer actionable recommendations to reform the Chatbot Arena’s evaluation framework and promote fairer, more transparent benchmarking for the field."
Experts Challenge Validity and Ethics of Crowdsourced AI Benchmarks Like LMArena (Chatbot Arena)
#AI #AIBenchmarks #AIModels #LMArena #ChatbotArena #AIethics #LLMs #AIEvaluation #Crowdsourcing #GenAI
AI Benchmarking Platform Chatbot Arena Forms New Company, Launches LMArena
#AI #GenAI #LLMs #AIChatbots #LMArena #ChatbotArena #AIBenchmarks #AIModels #AIevaluation
Wow! I didn't really like Gemma 2, but Gemma 3, released today, is awesome. It comes in four sizes: 1B, 4B, 12B, and 27B. It's super fast, and except for the 1B version it can even handle images.
The 27B version apparently outperforms both DeepSeek-V3 and Llama 3 405B on the Chatbot Arena benchmark.
It's also the first small model I've tested that's good at German.
#ChatbotArena Italia is a platform that aims to compare and evaluate Large Language Models on the Italian language.
If you want to take part, just submit a prompt to two #AI models chosen at random by the system and vote for the better one. There's also a leaderboard!
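For anyone curious how the blind A/B protocol works mechanically, here is a minimal Python sketch under stated assumptions: the model names and the ask() helper are hypothetical stand-ins, not the platform's actual code.

# Minimal sketch of a blind pairwise "battle"; everything here is hypothetical.
import random

MODELS = ["model-a", "model-b", "model-c", "model-d"]  # placeholder names

def ask(model: str, prompt: str) -> str:
    """Stand-in for a real model call; returns a placeholder answer."""
    return f"[{model}'s answer to: {prompt}]"

def run_battle(prompt: str) -> dict:
    """Draw two models at random, collect anonymous answers, return the ballot."""
    left, right = random.sample(MODELS, 2)   # identities hidden from the voter
    return {
        "prompt": prompt,
        "answers": {"A": ask(left, prompt), "B": ask(right, prompt)},
        "pairing": (left, right),            # revealed only after the vote
    }

ballot = run_battle("Riassumi questo testo in due frasi.")
print(ballot["answers"]["A"])
print(ballot["answers"]["B"])
# The voter picks "A" or "B"; the vote plus the hidden pairing feeds the leaderboard.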
This week's #Top5 on the blog: #TheStoryGraph as an alternative to #Goodreads: https://blog.clickomania.ch/2025/02/18/the-storygraph-book-recommendations-review/
A comparison of LLMs in the #ChatbotArena: https://blog.clickomania.ch/2025/02/20/lmarena-ai-llm-comparison-platform-review/
A cult website! https://blog.clickomania.ch/2025/02/21/wikifeet-com-review-and-critical-acclaim/
How #BillGates was long not taken seriously in Switzerland: https://blog.clickomania.ch/2025/02/19/erste-erwaehnung-bill-gates-in-den-schweizer-medien/
#LeChat from Mistral shines in the Karin Keller-Sutter test: https://blog.clickomania.ch/2025/02/17/mistral-le-chat-review/
#clickomaniach
A global ranking of AI language models, plus the option to compare several LLMs blind: #ChatbotArena offers both. Tip: Deepseek could be discovered here before the media hype broke out.
https://blog.clickomania.ch/2025/02/20/lmarena-ai-llm-comparison-platform-review/
#clickomaniach
#OpenSource #LLMs are getting much more competitive. On the LMSYS #ChatbotArena Leaderboard, #Llama31 already ranks fourth: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard.
Before launching, GPT-4o broke records on chatbot leaderboard under a secret name
On Monday, OpenAI employee Will... - https://arstechnica.com/?p=2024084 #largelanguagemodels #multimodalmodels #machinelearning #simonwillison #chatbotarena #gpt2-chatbot #gpt-4-turbo #aivibes #chatgpt #chatgtp #biz #gpt-4o #openai #gpt-4 #lmsys #ai
Mysterious “gpt2-chatbot” AI model appears suddenly, confuses experts
On Sunday, word began to spread... - https://arstechnica.com/?p=2020588 #machinelearning #simonwillison #aibenchmarks #chatbotarena #ethanmollick #gpt2-chatbot #samaltman #aivibes #gpt-3.5 #gpt-4.5 #biz #openai #gpt-3 #gpt-4 #gpt-5 #lmsys #ai
Words are flowing out like endless rain: Recapping a busy week of LLM news - https://arstechnica.com/?p=2016005 #largelanguagemodels #machinelearning #claude3sonnet #simonwillison #chatbotarena #googlegemini #mixtral8x22b #claude3opus #gpt-4-turbo #anthropic #gemini1.5 #whirlwind #chatgpt #chatgtp #claude3 #mistral #mixtral #biz #gemini #openai #gpt-4 #recap #meta #ai
AI model Claude 3 surpasses GPT-4 on Chatbot Arena for the first time https://itc.ua/ua/novini/model-shtuchnogo-intelektu-claude-3-vpershe-perevershyla-gpt-4-na-chatbot-arena/ #ArtificialIntelligence #ChatbotArena #News #Claude #GPT-4 #Software
“The king is dead”—Claude 3 surpasses GPT-4 on Chatbot Arena for the first time
On Tuesday, Anth... - https://arstechnica.com/?p=2012778 #machinelearning #aileaderboard #aibenchmarks #chatbotarena #claude3haiku #claude3opus #gpt-4-turbo #claudeopus #anthropic #claude3 #gpt-3.5 #biz #openai #gpt-4 #ai
LMSYS Chatbot Arena for LLMs. They've collected over 350,000 human preference votes, ranking 73 language models with the Elo rating system (a sketch of the update rule follows the link below).
Total #models: 73. Total #votes: 374,418. Last updated: March 7, 2024.
https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
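For context on how those votes become a ranking, here is a minimal sketch of a standard Elo update applied to a single battle. The K-factor and starting rating are assumptions for illustration, not LMSYS's actual parameters (their published leaderboard has also used a Bradley-Terry fit over all votes rather than sequential Elo).

# Standard Elo update for one pairwise preference vote; K and START are illustrative.
K = 32           # hypothetical update step size
START = 1000.0   # hypothetical initial rating

def expected(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, a_won: bool) -> tuple[float, float]:
    """Return both models' new ratings after one human preference vote."""
    e_a = expected(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + K * (s_a - e_a), r_b + K * ((1.0 - s_a) - (1.0 - e_a))

# An upset: the lower-rated model wins and gains more than it would have lost.
print(update(START, START + 100.0, a_won=True))  # -> (~1020.5, ~1079.5)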
A fascinating benchmark for LLMs: https://lmsys.org/blog/2023-12-07-leaderboard/
and here you can contribute yourself:
https://arena.lmsys.org/
#LLM #AI #GenerativeAI
#chatbotarena