mastodon.world is one of the many independent Mastodon servers you can use to participate in the fediverse.
Generic Mastodon server for anyone to use.

#benchmarking

3 posts · 3 participants · 0 posts today
Anisse<p>You'll find this benchmarking adventure in its own blog post "Performance lessons of implementing lbzcat in Rust" <a href="https://anisse.astier.eu/lbzip2-rs.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">anisse.astier.eu/lbzip2-rs.htm</span><span class="invisible">l</span></a></p><p><a href="https://social.treehouse.systems/tags/RustLang" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RustLang</span></a> <a href="https://social.treehouse.systems/tags/lbzip2" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>lbzip2</span></a> <a href="https://social.treehouse.systems/tags/bzip2" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>bzip2</span></a> <a href="https://social.treehouse.systems/tags/benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmarking</span></a> <a href="https://social.treehouse.systems/tags/performance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>performance</span></a></p>
Patrick Poitras :nixos:<p>This is an interesting slide from Daniel Lemire on <a href="https://fosstodon.org/tags/benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmarking</span></a>. I don't think I've seen log-normal distributions in the wild, but I've also not been looking for them. Definitely something to consider moving forward.</p>
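As a quick, hedged illustration of why the log-normal point matters (this sketch is not from the slide itself): benchmark timings are typically right-skewed, so the mean sits above the median, and summarizing runs with a plain mean can mislead. A minimal Python check:

```python
import statistics
import time


def bench(fn, reps=2000):
    """Collect per-call wall-clock timings for fn."""
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return samples


def skew_summary(samples):
    """Right-skewed (log-normal-ish) timing data shows mean > median."""
    mean = statistics.fmean(samples)
    median = statistics.median(samples)
    return {"mean": mean, "median": median, "mean_over_median": mean / median}


if __name__ == "__main__":
    # A mean/median ratio well above 1 hints at a long right tail.
    print(skew_summary(bench(lambda: sum(range(1000)))))
```

If the ratio is close to 1 your distribution is roughly symmetric; a large ratio suggests reporting the median (or minimum) instead of the mean.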
Institute for AI<p>Is complex query answering really complex? A paper at the International Conference on Machine Learning (<a href="https://xn--baw-joa.social/tags/ICML2025" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ICML2025</span></a>) presented by Cosimo Gregucci, PhD student at <span class="h-card" translate="no"><a href="https://xn--baw-joa.social/@UniStuttgartAI" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>UniStuttgartAI</span></a></span> <span class="h-card" translate="no"><a href="https://xn--baw-joa.social/@Uni_Stuttgart" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>Uni_Stuttgart</span></a></span>, discussed this question.</p><p>In this paper, Cosimo Gregucci, Bo Xiong, Daniel Hernández (<span class="h-card" translate="no"><a href="https://mstdn.degu.cl/@daniel" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>daniel</span></a></span>), Lorenzo Loconte, Pasquale Minervini (<span class="h-card" translate="no"><a href="https://sigmoid.social/@pminervini" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>pminervini</span></a></span>), Steffen Staab, and Antonio Vergari (<span class="h-card" translate="no"><a href="https://ellis.social/@nolovedeeplearning" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>nolovedeeplearning</span></a></span>) reveal that the “good” performance of SoTA approaches predominantly comes from answers that can be boiled down to single link prediction. Current neural and hybrid solvers can exploit (different) forms of triple memorization to make complex queries much easier. 
The authors confirm this by reporting the performance of these methods in a stratified analysis and by proposing a hybrid solver, CQD-Hybrid, which, while being a simple extension of an old method like CQD, can be very competitive against other SoTA models.</p><p>The paper also proposes a way to make query answering benchmarks more challenging in order to advance the field.</p><p><a href="https://arxiv.org/abs/2410.12537" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2410.12537</span><span class="invisible"></span></a></p><p><a href="https://xn--baw-joa.social/tags/KnowledgeGraphs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>KnowledgeGraphs</span></a> <a href="https://xn--baw-joa.social/tags/QueryAnswering" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>QueryAnswering</span></a> <a href="https://xn--baw-joa.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://xn--baw-joa.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a> <a href="https://xn--baw-joa.social/tags/Benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Benchmarking</span></a> <a href="https://xn--baw-joa.social/tags/CQA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CQA</span></a></p>
LLMsDeep Dive into LLM Performance: Benchmarking Token Generation in LLaMA 3.1 Large Language Models (LLMs) have transformed AI, but understanding their performance characteristics remains challenging ...<br><br><a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/large-language-models" target="_blank">#large-language-models</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/benchmarking" target="_blank">#benchmarking</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/llama-3" target="_blank">#llama-3</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/machine-learning" target="_blank">#machine-learning</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/performance-optimization" target="_blank">#performance-optimization</a><br><br><a href="https://medium.com/@losstar77/deep-dive-into-llm-performance-benchmarking-token-generation-in-llama-3-1-3e8721ae9846?source=rss------machine_learning-5" rel="nofollow noopener" target="_blank">Origin</a> | <a href="https://awakari.com/sub-details.html?id=LLMs" rel="nofollow noopener" target="_blank">Interest</a> | <a href="https://awakari.com/pub-msg.html?id=BqG8ZvxrWqeQiA2iW2g3pZNosxU&amp;interestId=LLMs" rel="nofollow noopener" target="_blank">Match</a>
HAPPY HAGGEN<p>"Partitioning boosts system performance in benchmarks but adds overhead. Proper design is key to balance scalability &amp; consistency. <a href="https://techhub.social/tags/Tech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Tech</span></a> <a href="https://techhub.social/tags/Benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Benchmarking</span></a>" <a href="https://milvus.io/ai-quick-reference/what-is-the-impact-of-partitioning-on-benchmarks" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">milvus.io/ai-quick-reference/w</span><span class="invisible">hat-is-the-impact-of-partitioning-on-benchmarks</span></a></p>
LLMsLLM Benchmarking Shows Capabilities Doubling Every 7 Months The main purpose of many large language models (LLMs) is providing compelling text that’s as close as possible to being indistinguishab...<br><br><a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Exponential" target="_blank">#Exponential</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/improvement" target="_blank">#improvement</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Large" target="_blank">#Large</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/language" target="_blank">#language</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/models" target="_blank">#models</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/The" target="_blank">#The</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/singularity" target="_blank">#singularity</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Llm" target="_blank">#Llm</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/benchmarking" target="_blank">#benchmarking</a><br><br><a href="https://spectrum.ieee.org/llm-benchmarking-metr" rel="nofollow noopener" target="_blank">Origin</a> | <a href="https://awakari.com/sub-details.html?id=LLMs" rel="nofollow noopener" target="_blank">Interest</a> | <a href="https://awakari.com/pub-msg.html?id=VLN1lkeZheC8ebVdXYRjJ5RH24u&amp;interestId=LLMs" rel="nofollow noopener" target="_blank">Match</a>
FunctionalProgrammingParEval-Repo: A Benchmark Suite for Evaluating LLMs with Repository-level HPC Translation Tasks GPGPU architectures have become significantly diverse in recent years, which has led to an emergence ...<br><br><a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Computer" target="_blank">#Computer</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/science" target="_blank">#science</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/CUDA" target="_blank">#CUDA</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/paper" target="_blank">#paper</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Benchmarking" target="_blank">#Benchmarking</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Code" target="_blank">#Code</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/generation" target="_blank">#generation</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/LLM" target="_blank">#LLM</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/nVidia" target="_blank">#nVidia</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/nVidia" target="_blank">#nVidia</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/A100" target="_blank">#A100</a><br><br><a href="https://hgpu.org/?p=30005" rel="nofollow noopener" target="_blank">Origin</a> | <a href="https://awakari.com/sub-details.html?id=FunctionalProgramming" rel="nofollow noopener" target="_blank">Interest</a> | <a href="https://awakari.com/pub-msg.html?id=0ZSpOkB1OCZFvx1wqUggVo8kC12&amp;interestId=FunctionalProgramming" rel="nofollow noopener" target="_blank">Match</a>
HGPU group<p>ParEval-Repo: A Benchmark Suite for Evaluating LLMs with Repository-level HPC Translation Tasks</p><p><a href="https://mast.hpc.social/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> <a href="https://mast.hpc.social/tags/OpenMP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenMP</span></a> <a href="https://mast.hpc.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mast.hpc.social/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a> <a href="https://mast.hpc.social/tags/Benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Benchmarking</span></a> <a href="https://mast.hpc.social/tags/Package" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Package</span></a></p><p><a href="https://hgpu.org/?p=30005" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=30005</span><span class="invisible"></span></a></p>
N-gated Hacker News<p>🥳 Oh, goody! Another groundbreaking revelation: "benchmarking" is evidently the new buzzword for charging you more 💸 while you drown in buzzword soup 🍲. <a href="https://mastodon.social/tags/PlanetScale" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PlanetScale</span></a> for <a href="https://mastodon.social/tags/Postgres" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Postgres</span></a> promises to bring you the ultimate Postgres experience, assuming you survive the labyrinth of documentation and sales pitches first. 🙄<br><a href="https://planetscale.com/blog/benchmarking-postgres" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">planetscale.com/blog/benchmark</span><span class="invisible">ing-postgres</span></a> <a href="https://mastodon.social/tags/buzzwordbingo" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>buzzwordbingo</span></a> <a href="https://mastodon.social/tags/benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmarking</span></a> <a href="https://mastodon.social/tags/salespitches" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>salespitches</span></a> <a href="https://mastodon.social/tags/documentationdilemma" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>documentationdilemma</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ngated</span></a></p>
Hacker News<p>Benchmarking Postgres</p><p><a href="https://planetscale.com/blog/benchmarking-postgres" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">planetscale.com/blog/benchmark</span><span class="invisible">ing-postgres</span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/Benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Benchmarking</span></a> <a href="https://mastodon.social/tags/Postgres" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Postgres</span></a> <a href="https://mastodon.social/tags/Database" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Database</span></a> <a href="https://mastodon.social/tags/Performance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Performance</span></a> <a href="https://mastodon.social/tags/PostgreSQL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PostgreSQL</span></a> <a href="https://mastodon.social/tags/TechInsights" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TechInsights</span></a></p>
Raspberry-PiBenchmarking the Orange Pi 5 Ultra, Orange Pi 5 Max and Orange Pi RV2 I benchmark the Orange Pi 5 Ultra, Orange Pi 5 Max, Orange Pi RV2 against a Raspberry Pi 5 and a DreamQuest N100 Mini PC. The t...<br><br><a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Blog" target="_blank">#Blog</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/benchmarking" target="_blank">#benchmarking</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Orange" target="_blank">#Orange</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Pi" target="_blank">#Pi</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Orange" target="_blank">#Orange</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Pi" target="_blank">#Pi</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/5" target="_blank">#5</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Ultra" target="_blank">#Ultra</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Orange" target="_blank">#Orange</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/Pi" target="_blank">#Pi</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/RV2" target="_blank">#RV2</a><br><br><a href="https://www.linuxtoday.com/blog/benchmarking-the-orange-pi-5-ultra-orange-pi-5-max-and-orange-pi-rv2/" rel="nofollow noopener" target="_blank">Origin</a> | <a href="https://awakari.com/sub-details.html?id=Raspberry-Pi" rel="nofollow noopener" target="_blank">Interest</a> | <a href="https://awakari.com/pub-msg.html?id=E5psqNGzXs3HYdJEqoebD5OEUgi&amp;interestId=Raspberry-Pi" rel="nofollow noopener" target="_blank">Match</a>
Linux ✅<p>📐 OCCT (a popular stress testing / hardware info app) finally available on Steam ✅ </p><p>◉Official support<br>◉Free version is very extensive<br>◉Also works on Steam Deck<br>◉Native Linux<br>◉Simply install in Steam &amp; enjoy<br>◉Happy days for Linux continue</p><p>👉 <a href="https://www.pcguide.com/news/stress-testing-app-occt-is-finally-available-on-steam-and-it-works-just-fine-on-the-steam-deck/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">pcguide.com/news/stress-testin</span><span class="invisible">g-app-occt-is-finally-available-on-steam-and-it-works-just-fine-on-the-steam-deck/</span></a></p><p><a href="https://linuxrocks.online/tags/OCCT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OCCT</span></a> <a href="https://linuxrocks.online/tags/Linux" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Linux</span></a> <a href="https://linuxrocks.online/tags/Steam" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Steam</span></a> <a href="https://linuxrocks.online/tags/testing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>testing</span></a> <a href="https://linuxrocks.online/tags/benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmarking</span></a> <a href="https://linuxrocks.online/tags/native" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>native</span></a></p>
Habr<p>[Translation] The Anatomy of a Failed Microbenchmark</p><p>A new translation from the Spring AIO team takes a detailed look at the conceptual, methodological, and technical mistakes that are easy to run into when trying to benchmark mechanisms such as synchronized and ReentrantLock. The author explains why microbenchmarks often measure something other than what you think they do, and why, to get meaningful results, it is better to use macro-level tests or rely on experts.</p><p><a href="https://habr.com/ru/companies/spring_aio/articles/922848/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">habr.com/ru/companies/spring_a</span><span class="invisible">io/articles/922848/</span></a></p><p><a href="https://zhub.link/tags/java" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>java</span></a> <a href="https://zhub.link/tags/kotlin" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>kotlin</span></a> <a href="https://zhub.link/tags/benchmark" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmark</span></a> <a href="https://zhub.link/tags/benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmarking</span></a> <a href="https://zhub.link/tags/benchmarks" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmarks</span></a> <a href="https://zhub.link/tags/performance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>performance</span></a> <a href="https://zhub.link/tags/performance_optimization" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>performance_optimization</span></a> <a href="https://zhub.link/tags/spring" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>spring</span></a> <a href="https://zhub.link/tags/spring_boot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>spring_boot</span></a> <a href="https://zhub.link/tags/spring_framework" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>spring_framework</span></a></p>
Habr<p>[Translation] How to Write a Microbenchmark</p><p>The Spring AIO team has translated an article that lays out several rules to keep in mind when writing microbenchmarks for the HotSpot JVM.</p><p><a href="https://habr.com/ru/companies/spring_aio/articles/920146/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">habr.com/ru/companies/spring_a</span><span class="invisible">io/articles/920146/</span></a></p><p><a href="https://zhub.link/tags/java" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>java</span></a> <a href="https://zhub.link/tags/kotlin" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>kotlin</span></a> <a href="https://zhub.link/tags/performance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>performance</span></a> <a href="https://zhub.link/tags/microbenchmarks" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>microbenchmarks</span></a> <a href="https://zhub.link/tags/benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmarking</span></a> <a href="https://zhub.link/tags/benchmarks" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmarks</span></a> <a href="https://zhub.link/tags/benchmark" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benchmark</span></a> <a href="https://zhub.link/tags/spring" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>spring</span></a> <a href="https://zhub.link/tags/spring_boot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>spring_boot</span></a> <a href="https://zhub.link/tags/spring_framework" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>spring_framework</span></a></p>
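The article above targets the HotSpot JVM (where JMH is the standard harness), but the core rules it describes are language-agnostic: warm up before measuring, repeat the measurement several times, and report a noise-resistant statistic. As a minimal sketch of those rules, assuming nothing beyond the Python standard library's timeit module:

```python
import timeit


def microbench(stmt, setup="pass", warmup=5, repeats=7, number=1000):
    """Warm up, then take several timed repeats and report the minimum,
    which is the least noise-contaminated per-execution estimate."""
    timer = timeit.Timer(stmt, setup=setup)
    for _ in range(warmup):
        timer.timeit(number)  # warm caches/allocators; discard the result
    runs = timer.repeat(repeat=repeats, number=number)
    return min(runs) / number  # best-case seconds per execution


if __name__ == "__main__":
    print(f"{microbench('sorted(range(100))'):.3e} s/op")
```

Reporting the minimum of several repeats (rather than a single run or the mean) follows the usual microbenchmarking advice: external noise only ever adds time, so the fastest repeat is closest to the code's true cost.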
LLMsThe Measure of Intelligence (The ARC Benchmark) Assume you made a Machine Learning model. Now how do you benchmark its performance? Continue reading on Medium » <br><br><a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/agi" target="_blank">#agi</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/artificial-intelligence" target="_blank">#artificial-intelligence</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/machine-learning" target="_blank">#machine-learning</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/llm" target="_blank">#llm</a> <a rel="nofollow noopener" class="mention hashtag" href="https://mastodon.social/tags/benchmarking" target="_blank">#benchmarking</a><br><br><a href="https://medium.com/@thevishesh16/the-measure-of-intelligence-the-arc-benchmark-3d85304a920a?source=rss------machine_learning-5" rel="nofollow noopener" target="_blank">Origin</a> | <a href="https://awakari.com/sub-details.html?id=LLMs" rel="nofollow noopener" target="_blank">Interest</a> | <a href="https://awakari.com/pub-msg.html?id=88dco4qKVRDG9Ur4PaIDCMTMNma&amp;interestId=LLMs" rel="nofollow noopener" target="_blank">Match</a>