#multilingualai


🇯🇵 JAPAN | 🇪🇺 EU
🔴 Japan Asks EU to Build Non-English, Non-Chinese AI

🔸 Tokyo seeks EU cooperation to reduce language bias in LLMs.
🔸 Japan sees risk in closed-source models favoring dominant languages.
🔸 METI & MIC preparing AI guidelines for businesses.
🔸 Emphasis on cross-border regulatory interoperability.

#AI #Japan #EU

🌍 #CohereForAI introduces #Aya, a groundbreaking #MultilingualAI research initiative

🤝 3,000+ independent researchers from 119 countries collaborating on advancing language accessibility in #AI

📚 Key achievements:
• 513M-instance dataset covering 101 languages
• 204K original human annotations
• Three #opensource models: Aya-101, Aya Expanse 8B & 32B
• Supporting previously underserved languages

🔬 Research highlights:
• State-of-the-art performance in multilingual benchmarks
• Complete dataset and model weights openly available on #HuggingFace (see the loading sketch after this list)
• Focus on natural language understanding, summarization, and translation
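
Since the weights are public, one way to try the 101-language model is through the transformers library. The sketch below is an illustration, not part of the original post: the Hub ID "CohereForAI/aya-101", the seq2seq (mT5-style encoder-decoder) loading class, and the toy German prompt are all assumptions to verify against the model card.

```python
# Hedged sketch: loading Aya-101 with Hugging Face transformers.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "CohereForAI/aya-101"  # assumed Hub ID; check the model card
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Assumes Aya-101 is an mT5-style encoder-decoder, hence the seq2seq class.
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto")

# Toy instruction-style prompt; any of the covered languages should work.
prompt = "Translate to English: Dein Hund ist sehr süß."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The Aya Expanse 8B and 32B checkpoints mentioned above are decoder-only chat models, so they would presumably load through AutoModelForCausalLM with a chat template rather than the seq2seq path shown here.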

⚡️ Project scope:
• 250+ language ambassadors
• 81K Discord messages
• Comprehensive research papers and technical documentation
• Named after the word for "fern" in the Twi language, symbolizing endurance

cohere.com/research/aya

Cohere - Aya: Cohere's non-profit research lab, C4AI, released the Aya model, a state-of-the-art, open-source, massively multilingual research LLM covering 101 languages, including more than 50 previously underserved languages.

#TechNews: #Qwen Releases New #VisionLanguage #LLM Qwen2-VL 🖥️👁️

After a year of development, #Qwen has released Qwen2-VL, its latest #AI system for interpreting visual and textual information. 🚀

Key Features of Qwen2-VL:

1. 🖼️ Image Understanding:

Qwen2-VL shows strong performance on #VisualUnderstanding benchmarks including #MathVista, #DocVQA, #RealWorldQA, and #MTVQA (a usage sketch follows at the end of this post).

2. 🎬 Video Analysis:

Qwen2-VL can analyze videos over 20 minutes in length. This is achieved through online streaming capabilities, allowing for video-based #QuestionAnswering, #Dialog, and #ContentCreation. #VideoAnalysis

3. 🤖 Device Integration:

The #AI can be integrated with #mobile phones, #robots, and other devices. It uses reasoning and decision-making abilities to interpret visual environments and text instructions for device control. #AIAssistants 📱

4. 🌍 Multilingual Capabilities:

Qwen2-VL understands text in images across multiple languages. It supports most European languages, Japanese, Korean, Arabic, Vietnamese, among others, in addition to English and Chinese. #MultilingualAI

This release represents an advancement in #ArtificialIntelligence, combining visual perception and language understanding. 🧠 Potential applications include #education, #healthcare, #robotics, and #contentmoderation.

github.com/QwenLM/Qwen2-VL
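
For the image-understanding and multilingual text-in-image use cases above, here is a minimal sketch of how such a model can be queried through Hugging Face transformers. It is not the repo's official snippet: the checkpoint name "Qwen/Qwen2-VL-7B-Instruct", the availability of Qwen2VLForConditionalGeneration (transformers 4.45+), the chat-message format, and the placeholder image URL are all assumptions; the GitHub README above has the authoritative usage, including video input.

```python
# Hedged sketch: asking a Qwen2-VL Instruct checkpoint about an image.
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"  # assumed checkpoint name
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Any image with visible text works; this URL is a placeholder.
image = Image.open(requests.get("https://example.com/sign.jpg", stream=True).raw)

# Chat-style message interleaving an image slot with a question, e.g. reading
# non-English text in the image (the MTVQA-style use case).
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "What does the sign say, and in which language?"},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# Drop the echoed prompt tokens before decoding the answer.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

The video and multi-image cases described in the post follow the same chat-message pattern, with video entries handled by the helper utilities documented in the repository.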

GitHub - QwenLM/Qwen2-VL: the multimodal large language model series developed by the Qwen team, Alibaba Cloud.