mastodon.world is one of the many independent Mastodon servers you can use to participate in the fediverse.
Generic Mastodon server for anyone to use.

Server stats: 8.1K active users

#finetuning

trndgtr.com
Cosmology's Multiverse Discovery - Alex O'Connor

#finetuningargument #multiverse #finetuning

Habr
Retrieval-Augmented Generation (RAG): a deep technical overview

Retrieval-Augmented Generation (RAG) is an architectural approach to generative models that combines information retrieval with the generative abilities of large language models (LLMs). The idea of RAG was proposed in 2020 to overcome a limitation of LLMs: being confined to the knowledge in their training data. Instead of trying to bake all knowledge into the model's parameters, the RAG approach lets the model query up-to-date information from external sources (knowledge bases) while generating an answer. This yields more accurate, current answers grounded in facts rather than only in the model's memory. In this article we cover in detail: the RAG architecture, its components and stages of operation, modern tools and practices for implementing RAG, Python code examples, business and research use cases, technical challenges and best practices, a comparison of RAG with classic fine-tuning, and the technology's outlook.

https://habr.com/ru/articles/931396/

#rag #retrieval_augmented_generation #llm #ai #rag_pipeline #rag_ai #finetuning #ragas

LLMs
Retrieval-Augmented Generation (RAG): a deep technical overview. Retrieval-Augmented Generation (RAG) is an architectural approach to generat...

#rag #retrieval_augmented_generation #llm #ai #rag_pipeline #rag_ai #fine-tuning

Origin: https://habr.com/ru/articles/931396/?utm_source=habrahabr&utm_medium=rss&utm_campaign=931396 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=T9y77beN5vWBVNNW00aQ96XwyHo&interestId=LLMs

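The retrieve-then-generate loop the Habr article describes can be shown in a few lines of Python. This is a minimal sketch, not the article's code: retrieve is a toy keyword scorer standing in for a real vector database, and generate is a hypothetical placeholder for whatever LLM call you use.

```python
# Minimal RAG loop: fetch relevant context, then condition generation on it.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return ranked[:k]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call (API client, local model, etc.)."""
    return f"[answer conditioned on {len(prompt)} chars of prompt]"

def rag_answer(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

docs = [
    "RAG was proposed in 2020 to ground LLM answers in external knowledge.",
    "LoRA adds trainable low-rank adapters to frozen model weights.",
]
print(rag_answer("When was RAG proposed?", docs))
```
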
Blosc Development Team
Python-Blosc2 ships with an extensively tested partition-size algorithm that automatically sets the best chunk and block sizes for you.

However, you can always bypass this mechanism for finer-grained tuning in different scenarios. See for example our revamped reduction tutorial: 🚀

https://www.blosc.org/python-blosc2/getting_started/tutorials/04.reductions.html

Compress Better, Compute Bigger!

#Compression #FineTuning #Performance

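The override the post mentions might look like this. A minimal sketch, assuming python-blosc2's NDArray creation API (blosc2.asarray accepting chunks= and blocks=); the shapes are illustrative, not tuning advice.

```python
import numpy as np
import blosc2  # python-blosc2

data = np.linspace(0, 1, 4_000_000).reshape(2000, 2000)

# Automatic partitioning: Blosc2 picks chunk and block sizes itself.
auto = blosc2.asarray(data)
print(auto.chunks, auto.blocks)

# Manual override, e.g. row-aligned partitions for a row-wise reduction.
tuned = blosc2.asarray(data, chunks=(250, 2000), blocks=(25, 2000))
print(tuned.sum(axis=1)[:3])  # reductions run on the compressed NDArray
```
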
LLMs
Unlocking Efficient Fine-Tuning with LoRA and QLoRA. When working with large language models (LLMs), full fine-tuning is computationally expensive and memory-intensive. Enter LoRA (Low-Rank… Conti...

#fine-tuning #lora #artificial-intelligence #llm #machine-learning

Origin: https://medium.com/@kaushiktd/unlocking-efficient-fine-tuning-with-lora-and-qlora-f13660101eca?source=rss------machine_learning-5 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=NEQqMHIUvUK71kNEZtLO4bfqJQO&interestId=LLMs

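The trick the post names is easy to show concretely: freeze the pretrained weight and train only a low-rank update B @ A, so roughly r*(d_in + d_out) parameters train instead of d_in*d_out. A minimal PyTorch sketch of a LoRA-wrapped linear layer (sizes are hypothetical, and this is not the article's code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # B starts at zero, so training begins exactly at the base model.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12_288 trainable params vs 590_592 for full fine-tuning
```

QLoRA applies the same low-rank adapters on top of a 4-bit-quantized base model, cutting memory further at a small cost in fidelity.
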
Habr
LiberalMind 1.5: an LLM on the level of Gemini 2.5, built in Russia

The idea arose a year ago: the goal was to create an LLM as close as possible to AGI. In October 2024 several model pre-training systems were designed and thought through, along with systems for their further fine-tuning and reinforcement learning; a new decoder scheme based on the ROPE decoder was also developed. Unfortunately, the available resources only allowed applying these techniques to models of up to 20M parameters, which also meant a small training dataset, so there was little point in it. In April an open-source agent based on Gemini was developed which, thanks to a technique of generating several candidate answers and analyzing them, was far better in quality than Gemini 2.5 Pro, even though the agent was built on Gemini 2.0. The agent was named LiberalMind 1.0.

https://habr.com/ru/articles/930352/

#ai #ml #llmмодели #llm #машинное_обучение #искусственный_интеллект #loraадаптеры #finetuning #reinforcement_learning #языковые_модели

LLMs
LiberalMind 1.5: an LLM on the level of Gemini 2.5, built in Russia. The idea arose a year ago: the goal was to create an LLM that...

#ai #ml #llmмодели #llm #машинное_обучение #искусственный_интеллект #loraадаптеры #fine-tuning #reinforcement_learning

Origin: https://habr.com/ru/articles/930352/?utm_source=habrahabr&utm_medium=rss&utm_campaign=930352 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=J9b5r4Q0YO7ak2PZ2uK8LxdWDdA&interestId=LLMs

LLMs
Fine-tuning LLMs for Tool Use. How to get models to search the web, run code, and do your taxes. Continue reading on Medium »

#data-science #ai #llm #fine-tuning #machine-learning

Origin: https://shawhin.medium.com/fine-tuning-llms-for-tool-use-5f1db03d7c55?source=rss------machine_learning-5 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=ZPqNSOL5IuLSEB8Xs7qMymgKJ2u&interestId=LLMs

LLMs
Fine-Tuning Isn't Always Fine. "In our quest to teach machines more, we often forget how much they already know." Continue reading on Medium »

#fine-tuning #rags #machine-learning #llm #ai-engineering

Origin: https://medium.com/@smquasim016/fine-tuning-isnt-always-fine-1ece2013cf62?source=rss------machine_learning-5 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=RdxgILB4ZjX36x8cqoMhvvPilhA&interestId=LLMs

LLMs
Fine-Tuning, Prompt Fine-Tuning, and Prompt Engineering. Discover the Key Differences and When to Use Each for Superior Results. Continue reading on Level Up Coding »

#llm #fine-tuning #machine-learning #artificial-intelligence #generative-ai-tools

Origin: https://levelup.gitconnected.com/fine-tuning-prompt-fine-tuning-and-prompt-engineering-c8ccfe51deeb?source=rss------machine_learning-5 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=5kZ3QG8lo61s5V5As0Cxtu3W2ts&interestId=LLMs

Alvin Ashcraft 🐿️
AI Toolkit for VS Code July Update | by Junjie Li.

https://techcommunity.microsoft.com/blog/azuredevcommunityblog/ai-toolkit-for-vs-code-july-update/4431548

#aitoolkit #vscode #ai #github #aimodels #finetuning

LLMs
Fine-Tuning vs. RAG: How to Decide for Your LLM Project. Large Language Models (LLMs) have transformed the way we build applications, from chatbots to document summarization to question… Co...

#fine-tuning #llm #generative-ai-use-cases #retrieval-augmented-gen #machine-learning

Origin: https://medium.com/@saravananpalanisamy_54774/fine-tuning-vs-rag-how-to-decide-for-your-llm-project-15201cf5c2ac?source=rss------machine_learning-5 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=MQx3uk4MGlZ8jYKRemDLaCaOU8O&interestId=LLMs

LLMs
Text-to-LoRA: instant transformer adaptation 😎 Follow the white rabbit 💊 📌 Telegram @TheWeeklyBrief: brief overv...

#AI #finetuning #Hypernetwork #llm #lora #ml #sakana #TextToLoRA

Origin: https://www.pvsm.ru/ai/424457 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=EGhYTR95NpgnEwRADN91y9NS8jA&interestId=LLMs

LLMs
Text-to-LoRA: instant transformer adaptation. Researchers at Sakana AI have developed Text-to-LoRA (T2L), a hypernetwork that...

#ai #ml #llm #lora #sakana #TextToLoRA #Hypernetwork #finetuning

Origin: https://habr.com/ru/articles/925404/?utm_source=habrahabr&utm_medium=rss&utm_campaign=925404 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=HUrGJlWIOz6AnzA1thdikCRyXmC&interestId=LLMs

Habr
Text-to-LoRA: instant transformer adaptation

Researchers at Sakana AI have developed Text-to-LoRA (T2L), a hypernetwork that dynamically generates Low-Rank Adaptation (LoRA) weights for large language models from natural-language descriptions of the target task. The method enables efficient zero-shot adaptation with no task-specific tuning, beating established baselines and matching the performance of fine-tuned adapters on previously unseen tasks.

https://habr.com/ru/articles/925404/

#ai #ml #llm #lora #sakana #TextToLoRA #Hypernetwork #finetuning

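As a rough picture of what such a hypernetwork does, the sketch below maps a task-description embedding to the A and B matrices of a single LoRA adapter. Every size and the stand-in "embedding" are assumptions for illustration; this is not Sakana AI's implementation.

```python
import torch
import torch.nn as nn

D_TASK, D_MODEL, RANK = 256, 768, 8  # hypothetical sizes

class TextToLoRAHypernet(nn.Module):
    """Map a task-description embedding to one LoRA adapter's weights."""
    def __init__(self):
        super().__init__()
        n_out = 2 * RANK * D_MODEL  # entries of A (r x d) and B (d x r)
        self.hyper = nn.Sequential(
            nn.Linear(D_TASK, 512), nn.ReLU(), nn.Linear(512, n_out)
        )

    def forward(self, task_emb: torch.Tensor):
        flat = self.hyper(task_emb)
        A = flat[: RANK * D_MODEL].view(RANK, D_MODEL)
        B = flat[RANK * D_MODEL :].view(D_MODEL, RANK)
        return A, B  # plug into a frozen layer as W + B @ A, LoRA-style

task_emb = torch.randn(D_TASK)  # stand-in for an embedded task description
A, B = TextToLoRAHypernet()(task_emb)
print(A.shape, B.shape)  # torch.Size([8, 768]) torch.Size([768, 8])
```
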
LLMs
ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models. The key idea: Reinforcement learning (RL) has had a resurgence in LLMs with application to ...

#LLMs #training-dynamics #fine-tuning #reasoning #reinforcement-learning

Origin: https://graphcore-research.github.io/prorl/ | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=9N5KR0r2wXZVbgWCE3Sv9k3kf6e&interestId=LLMs

Irène Langlet
Starting to use #calibre, and very pleased with the batch metadata editor. Does this "test the result" feature exist in #Zotero? It would be awfully handy, and so far I haven't seen it.

(Granted, I'm just back from a conference where people talked to me about #finetuning, but I'm in no position to use that to clean up my lists. Calibre, on the other hand, was recommended to me ages ago.)

Habr
[Translation] Who is adopting Gen AI in 2025, how, and why: insights from 100 CIOs

A little over a year ago we identified 16 key shifts in how companies approached building and buying generative AI. Since then the landscape has kept evolving rapidly, so we again interviewed more than two dozen enterprise customers and surveyed 100 CIOs across 15 industries to help founders understand how enterprises are using, buying, and budgeting for generative AI in 2025. Even in such a dynamic field, where the only constant is change, the structure of the genAI market has shifted far more than we expected after our previous study.

https://habr.com/ru/articles/923112/

#genai #ai #ии #генеративный_ии #llm #finetuning #openai #anthropic

LLMs
[Translation] Who is adopting Gen AI in 2025, how, and why: insights from 100 CIOs. A little over a year ago we identified 16 key shif...

#genai #ai #ии #генеративный_ии #llm #fine-tuning #openai #anthropic

Origin: https://habr.com/ru/articles/923112/?utm_source=habrahabr&utm_medium=rss&utm_campaign=923112 | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=Pp9KLCWHR7rsk8Gv5qIKIcjAl8K&interestId=LLMs

LLMs
#AI #AI-research #ML #deep-learning #fine-tuning #large-language-models

Origin: https://venturebeat.com/ai/beyond-static-ai-mits-new-framework-lets-models-teach-themselves/ | Interest: https://awakari.com/sub-details.html?id=LLMs | Match: https://awakari.com/pub-msg.html?id=3laUm5TIM1tpnnKbKnkJ913Ukl6&interestId=LLMs