#oneapi

Christos Argyropoulos<p>Update on my <a href="https://mast.hpc.social/tags/retroHPCcomputing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>retroHPCcomputing</span></a> project: it seems the first PCIe slot is dead, but the X99 mobo has 7 slots, so this is not a big deal. Also shown: the <a href="https://mast.hpc.social/tags/XeonPhi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>XeonPhi</span></a>, the old Tesla <a href="https://mast.hpc.social/tags/Nvidia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Nvidia</span></a> C2075, the gt960 with its riser cable (prior to vertical install in the Enthoo 719 case) &amp; the RAID array.<br>I opted for CentOS 7.3 as an initial choice to compile the Phi stack (I will likely move to Alma 8, as I found instructions to compile the stack on CentOS 8 relatives). Hopefully, I still have <a href="https://mast.hpc.social/tags/OneApi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OneApi</span></a> 2021.2 somewhere</p>
HGPU group<p>Thesis: Hardware-Assisted Software Testing and Debugging for Heterogeneous Computing</p><p><a href="https://mast.hpc.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> <a href="https://mast.hpc.social/tags/FPGA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FPGA</span></a> <a href="https://mast.hpc.social/tags/Python" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Python</span></a></p><p><a href="https://hgpu.org/?p=29840" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29840</span><span class="invisible"></span></a></p>
HGPU group<p>ML-Triton, A Multi-Level Compilation and Language Extension to Triton GPU Programming</p><p><a href="https://mast.hpc.social/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> <a href="https://mast.hpc.social/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> <a href="https://mast.hpc.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> <a href="https://mast.hpc.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mast.hpc.social/tags/Triton" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Triton</span></a> <a href="https://mast.hpc.social/tags/Compilers" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Compilers</span></a> <a href="https://mast.hpc.social/tags/Intel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Intel</span></a></p><p><a href="https://hgpu.org/?p=29825" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29825</span><span class="invisible"></span></a></p>
Giuseppe Bilotta<p>Even now, Thrust as a dependency is one of the main reasons why we have a <a href="https://fediscience.org/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> backend, a <a href="https://fediscience.org/tags/HIP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HIP</span></a> / <a href="https://fediscience.org/tags/ROCm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ROCm</span></a> backend and a pure <a href="https://fediscience.org/tags/CPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CPU</span></a> backend in <a href="https://fediscience.org/tags/GPUSPH" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPUSPH</span></a>, but not a <a href="https://fediscience.org/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> or <a href="https://fediscience.org/tags/OneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OneAPI</span></a> backend (which would allow us to extend hardware support to <a href="https://fediscience.org/tags/Intel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Intel</span></a> GPUs). &lt;<a href="https://doi.org/10.1002/cpe.8313" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">doi.org/10.1002/cpe.8313</span><span class="invisible"></span></a>&gt;</p><p>This is also one of the reasons why we implemented our own <a href="https://fediscience.org/tags/BLAS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>BLAS</span></a> routines when we introduced the semi-implicit integrator. 
A side-effect of this choice is that it allowed us to develop the improved <a href="https://fediscience.org/tags/BiCGSTAB" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>BiCGSTAB</span></a> that I've had the opportunity to mention before &lt;<a href="https://doi.org/10.1016/j.jcp.2022.111413" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.1016/j.jcp.2022.111</span><span class="invisible">413</span></a>&gt;. Sometimes I do wonder if it would be appropriate to “excorporate” it into its own library for general use, since it's something that would benefit others. OTOH, this one was developed specifically for GPUSPH and it's tightly integrated with the rest of it (including its support for multi-GPU), and refactoring to turn it into a library like cuBLAS is</p><p>a. too much effort<br>b. probably not worth it.</p><p>Again, following <span class="h-card" translate="no"><a href="https://peoplemaking.games/@eniko" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>eniko</span></a></span>'s original thread, it's really not that hard to roll your own, and probably less time consuming than trying to wrangle your way through an API that may or may not fit your needs.</p><p>6/</p>
Giuseppe Bilotta<p>I'm getting the material ready for my upcoming <a href="https://fediscience.org/tags/GPGPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPGPU</span></a> course that starts in March. Even though I most probably won't get to it, I also checked my trivial <a href="https://fediscience.org/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> programs. Apparently the 2025.0 version of the <a href="https://fediscience.org/tags/Intel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Intel</span></a> <a href="https://fediscience.org/tags/OneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OneAPI</span></a> <a href="https://fediscience.org/tags/DPCPP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DPCPP</span></a> runtime doesn't like any <a href="https://fediscience.org/tags/OpenCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenCL</span></a> platform except Intel's own (I have two other platforms that support <a href="https://fediscience.org/tags/SPIRV" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SPIRV</span></a>, so why aren't they showing up? From the documentation I can find online this should be sufficient, but apparently it's not&nbsp;…)</p>
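[Editor's note] The platform-visibility question above can usually be narrowed down from the command line: the oneAPI distribution ships a `sycl-ls` utility that lists every backend and device the DPC++ runtime can see, and the `ONEAPI_DEVICE_SELECTOR` environment variable filters which ones are exposed to a program. A minimal configuration sketch, assuming a standard oneAPI install (`./my_sycl_program` is a placeholder for any SYCL binary; actual output depends on the drivers present):

```shell
# List every backend:device pair the DPC++ runtime can enumerate.
# If a third-party OpenCL platform is missing here, its ICD file is
# often simply not registered under /etc/OpenCL/vendors/.
sycl-ls

# Expose all OpenCL platforms to the program, not just Level Zero:
ONEAPI_DEVICE_SELECTOR='opencl:*' ./my_sycl_program

# Or pin a single backend:device pair, using the indices sycl-ls printed:
ONEAPI_DEVICE_SELECTOR='opencl:0' ./my_sycl_program
```

Whether the 2025.0 runtime additionally requires the platform to advertise SPIR-V ingestion is exactly the kind of detail `sycl-ls --verbose` can help confirm.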
HGPU group<p>Exploring data flow design and vectorization with oneAPI for streaming applications on CPU+GPU</p><p><a href="https://mast.hpc.social/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> <a href="https://mast.hpc.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> <a href="https://mast.hpc.social/tags/Package" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Package</span></a></p><p><a href="https://hgpu.org/?p=29705" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29705</span><span class="invisible"></span></a></p>
Benjamin Carr, Ph.D. 👨🏻‍💻🧬<p>Just how deep is <a href="https://hachyderm.io/tags/Nvidia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Nvidia</span></a>'s <a href="https://hachyderm.io/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> moat really?<br>Not as impenetrable as you might think, but still more than Intel or AMD would like.<br>It's not enough just to build a competitive part: you also have to have <a href="https://hachyderm.io/tags/software" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>software</span></a> that can harness all those <a href="https://hachyderm.io/tags/FLOPS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FLOPS</span></a>, something Nvidia has spent the better part of two decades building with its CUDA runtime, while competing frameworks for low-level <a href="https://hachyderm.io/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> <a href="https://hachyderm.io/tags/programming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>programming</span></a>, such as AMD's <a href="https://hachyderm.io/tags/ROCm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ROCm</span></a> or Intel's <a href="https://hachyderm.io/tags/OneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OneAPI</span></a>, remain far less mature.<br><a href="https://www.theregister.com/2024/12/17/nvidia_cuda_moat/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">theregister.com/2024/12/17/nvi</span><span class="invisible">dia_cuda_moat/</span></a> <a href="https://hachyderm.io/tags/developers" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>developers</span></a></p>
Sriram Ramkrishna<p>Howdy all - registrations are still open for the first oneAPI DevSummit hosted by the UXL Foundation! Learn about GPGPU programming, oneAPI and how companies are coalescing around <a href="https://mast.hpc.social/tags/oneapi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneapi</span></a> / <a href="https://mast.hpc.social/tags/sycl" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>sycl</span></a> <br><a href="https://linuxfoundation.regfox.com/oneapiuxldevsummit2024" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">linuxfoundation.regfox.com/one</span><span class="invisible">apiuxldevsummit2024</span></a></p><p>Registration will close at 5pm today. The DevSummit will start at 8pm PT / 8:30am IST. See you there!</p>
HGPU group<p>Intel(R) SHMEM: GPU-initiated OpenSHMEM using SYCL</p><p><a href="https://mast.hpc.social/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> <a href="https://mast.hpc.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> <a href="https://mast.hpc.social/tags/Intel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Intel</span></a> <a href="https://mast.hpc.social/tags/Package" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Package</span></a></p><p><a href="https://hgpu.org/?p=29438" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29438</span><span class="invisible"></span></a></p>
OpenMP ARB<p>📢 Introduction to <a href="https://mast.hpc.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a>, <a href="https://mast.hpc.social/tags/SYCL2020" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL2020</span></a> &amp; <a href="https://mast.hpc.social/tags/OpenMP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenMP</span></a> offloading<br>📆September 23-25, 2024</p><p>In this 3-day online course, HLRS - High-Performance Computing Center Stuttgart provides an introduction to Intel Corporation's oneAPI implementation 🖥</p><p>Read more &amp; Register👉 <a href="https://www.hlrs.de/training/2024/intel-oneapi" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">hlrs.de/training/2024/intel-on</span><span class="invisible">eapi</span></a></p>
Khronos Group<p>Just one more day to submit your session for the UXL oneAPI DevSummit being held October 9th &amp; 10th!</p><p>Learn more: <a href="https://sessionize.com/uxldevsummit" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">sessionize.com/uxldevsummit</span><span class="invisible"></span></a><br><a href="https://fosstodon.org/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> <a href="https://fosstodon.org/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> <a href="https://fosstodon.org/tags/UXL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>UXL</span></a></p>
HGPU group<p>Evaluating Operators in Deep Neural Networks for Improving Performance Portability of SYCL</p><p><a href="https://mast.hpc.social/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> <a href="https://mast.hpc.social/tags/HIP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HIP</span></a> <a href="https://mast.hpc.social/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> <a href="https://mast.hpc.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> <a href="https://mast.hpc.social/tags/PerformancePortability" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PerformancePortability</span></a></p><p><a href="https://hgpu.org/?p=29339" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29339</span><span class="invisible"></span></a></p>
jcinPDX<p><span class="h-card" translate="no"><a href="https://sigmoid.social/@pytorch" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>pytorch</span></a></span> 2.4 upstream now includes a prototype feature supporting Intel GPUs through source build using <a href="https://mastodon.social/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> and <a href="https://mastodon.social/tags/oneDNN" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneDNN</span></a> as well as a backend integrated to inductor on top of Triton - enabling a path for millions and millions of GPUs through <a href="https://mastodon.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> for <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a>. </p><p>Lots of important milestones to make this happen - including support for <a href="https://mastodon.social/tags/UXL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>UXL</span></a> Foundation open AI technologies. Just a prototype, but a big step forward... thanks to all in the PyTorch community. Feedback welcome!</p><p><a href="https://pytorch.org/blog/pytorch2-4/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">pytorch.org/blog/pytorch2-4/</span><span class="invisible"></span></a></p>
Maxi 11x 💉<p>Interessanter Forenbeitrag zum Geschehen rund um <a href="https://chaos.social/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> <a href="https://chaos.social/tags/RoCm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RoCm</span></a> <a href="https://chaos.social/tags/HIP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HIP</span></a> <a href="https://chaos.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> … und der Wettbewerbssituation mit <a href="https://chaos.social/tags/Nvidia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Nvidia</span></a>:</p><p><a href="https://www.heise.de/forum/heise-online/Kommentare/Der-erste-Dominostein-gegen-Nvidias-Dominanz-Frankreich-prescht-gegen-CUDA/Low-Level-Programming/thread-7603684/#posting_44165749" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">heise.de/forum/heise-online/Ko</span><span class="invisible">mmentare/Der-erste-Dominostein-gegen-Nvidias-Dominanz-Frankreich-prescht-gegen-CUDA/Low-Level-Programming/thread-7603684/#posting_44165749</span></a></p><p>Eine Replik:</p><p><a href="https://www.heise.de/forum/heise-online/Kommentare/Der-erste-Dominostein-gegen-Nvidias-Dominanz-Frankreich-prescht-gegen-CUDA/Low-Level-Programming/thread-7603684/page-2/#posting_44166110" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">heise.de/forum/heise-online/Ko</span><span class="invisible">mmentare/Der-erste-Dominostein-gegen-Nvidias-Dominanz-Frankreich-prescht-gegen-CUDA/Low-Level-Programming/thread-7603684/page-2/#posting_44166110</span></a></p>
HGPU group<p>Assessing Intel OneAPI capabilities and cloud-performance for heterogeneous computing</p><p><a href="https://mast.hpc.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> <a href="https://mast.hpc.social/tags/HeterogeneousComputing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HeterogeneousComputing</span></a> <a href="https://mast.hpc.social/tags/FPGA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FPGA</span></a></p><p><a href="https://hgpu.org/?p=29214" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29214</span><span class="invisible"></span></a></p>
Juan Fumero<p>Can we run TornadoVM applications on CPUs and take advantage of all CPU cores? The answer is YES. All you need is an OpenCL implementation that can run on your CPU. In this video, I will show you how you can configure TornadoVM to run on such systems using the Intel oneAPI base toolkit for Intel CPUs, and even FPGAs.</p><p>🔗<a href="https://www.youtube.com/watch?v=lJHSpw97yDE" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">youtube.com/watch?v=lJHSpw97yD</span><span class="invisible">E</span></a></p><p><a href="https://mastodon.online/tags/java" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>java</span></a> <a href="https://mastodon.online/tags/multicore" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>multicore</span></a> <a href="https://mastodon.online/tags/cpus" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cpus</span></a> <a href="https://mastodon.online/tags/fpga" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>fpga</span></a> <a href="https://mastodon.online/tags/oneapi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneapi</span></a></p>
Dr. Dek 👨‍🚀🐧🚀 )<p>Aaand <a href="https://social.linux.pizza/tags/blender" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>blender</span></a> <a href="https://social.linux.pizza/tags/cycles" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cycles</span></a> on <a href="https://social.linux.pizza/tags/arc" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>arc</span></a> <a href="https://social.linux.pizza/tags/oneapi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneapi</span></a> is broken again :/</p>
HGPU group<p>Using Intel oneAPI for Multi-hybrid Acceleration Programming with GPU and FPGA Coupling</p><p><a href="https://mast.hpc.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> <a href="https://mast.hpc.social/tags/OpenCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenCL</span></a> <a href="https://mast.hpc.social/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> <a href="https://mast.hpc.social/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> <a href="https://mast.hpc.social/tags/FPGA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FPGA</span></a></p><p><a href="https://hgpu.org/?p=29176" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29176</span><span class="invisible"></span></a></p>
HGPU group<p>SYCL-Bench 2020: Benchmarking SYCL 2020 on AMD, Intel, and NVIDIA GPUs</p><p><a href="https://mast.hpc.social/tags/SYCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SYCL</span></a> <a href="https://mast.hpc.social/tags/oneAPI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oneAPI</span></a> <a href="https://mast.hpc.social/tags/Benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Benchmarking</span></a> <a href="https://mast.hpc.social/tags/HPC" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HPC</span></a> <a href="https://mast.hpc.social/tags/Package" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Package</span></a></p><p><a href="https://hgpu.org/?p=29145" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29145</span><span class="invisible"></span></a></p>