#IntelArc


So I figured out why #systemd is overwriting the PATH for all applications started from the desktop.
It does this because @kde Plasma's KRunner tells it to.
And why does KRunner do that? Because it itself has the wrong environment variables: I recently replaced #sddm with #gdm when I swapped my GPU for an #IntelArc A380, because sddm simply failed to start with it (and I was too lazy to actually debug and troubleshoot it).

But why does this issue only hit now, after the last update, and not before? No idea...
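For anyone hitting the same thing, here's a minimal sketch (assuming a systemd-based desktop where `systemctl --user` works) that compares the PATH of your login shell with the PATH held by the systemd user manager, which is what desktop-launched apps actually inherit once Plasma imports its environment there:

```python
#!/usr/bin/env python3
# Minimal sketch: compare the PATH of the current shell with the PATH stored
# in the systemd user manager (the environment desktop-launched apps inherit).
# Assumes a systemd-based session where `systemctl --user` is available.
import os
import subprocess

def systemd_user_path() -> str:
    out = subprocess.run(
        ["systemctl", "--user", "show-environment"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("PATH="):
            return line[len("PATH="):]
    return ""

shell_path = os.environ.get("PATH", "")
manager_path = systemd_user_path()

print("shell PATH:  ", shell_path)
print("manager PATH:", manager_path)
if shell_path != manager_path:
    print("Mismatch: apps started from KRunner/the desktop will see the manager PATH.")
```

If they differ, `systemctl --user import-environment PATH` or an entry in ~/.config/environment.d/ is one way to get them back in sync.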


The #IntelArc #A310 looks like the card of choice, but there appears to be no card with a proper fan design available right now. There is only a super noisy #Sparkle variant with a blower-style cooler, and several reports say this specific card has high fan noise and a terrible fan curve. Urgh.
So I looked at the alternative, the #A380, at pretty much the same price, which can idle at ~5W. But there is a problem: Intel Arc cards draw 30-40W at idle unless #ASPM is enabled to allow the card to enter low power states: intel.com/content/www/us/en/su
Well, ASPM is only supported on newer hardware, so I had to check my mainboard (AMD B520 chipset) for support. (2/?)

Intel: Configuration required to enable an idle low power consumption profile for Intel Arc Graphics Desktop cards.
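If you want to sanity-check the ASPM side from a running system before digging through BIOS menus, a small sketch like this prints the kernel's current ASPM policy (standard Linux sysfs path, nothing Arc-specific; the file only exists when the pcie_aspm parameters are exposed):

```python
#!/usr/bin/env python3
# Sketch: print the kernel's PCIe ASPM policy; the active one is shown in
# brackets, e.g. "default performance [powersave] powersupersave".
# ASPM also has to be enabled in the BIOS for the Arc card to idle low.
from pathlib import Path

policy = Path("/sys/module/pcie_aspm/parameters/policy")
if policy.exists():
    print("PCIe ASPM policy:", policy.read_text().strip())
else:
    print("No pcie_aspm policy file found; ASPM may be disabled by firmware.")
```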

Is it just me or is there mouse lag on Wayland?

Never had a problem with my laptop (Ryzen CPU and integrated graphics), but with my desktop (Intel Arc GPU) I'm experiencing this weird mouse lag.

Does anyone know why it happens and how to solve it?

Now that my #wake_word_detection #research has borne fruit, I plan to continue working in the voice domain. I would love to train a #TTS model with a #British accent so I could use it to practice.

I was wondering if I could do the inference on the #A311D #NPU. However, as I skim papers on different models, inference on the A311D with reasonable performance seems unlikely. Even training these models on my entry-level #IntelArc #GPU would be painful.

Maybe I could just fine-tune an already existing model. I am also thinking about using #GeneticProgramming for some components of these TTS models to see if it yields better inference performance.

There are #FastSpeech2 and #SpeedySpeech, which look promising. I wonder how natural their accents will be, but they would be good starting points.
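Before committing to training or fine-tuning on the Arc card, a quick sketch like this shows whether the GPU is even visible to PyTorch. It assumes a torch build with Intel XPU support (a recent torch release, or torch plus intel_extension_for_pytorch); the exact package/device names are an assumption on my side:

```python
# Sketch: check whether PyTorch can see the Intel Arc GPU before starting a
# TTS training/fine-tuning run. Assumes a torch build with XPU support.
import torch

if hasattr(torch, "xpu") and torch.xpu.is_available():
    device = torch.device("xpu")
    print("Intel GPU found:", torch.xpu.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No XPU device found, falling back to CPU.")

# Tiny smoke test: run a matmul on the chosen device.
x = torch.randn(1024, 1024, device=device)
print((x @ x).sum().item())
```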

BTW, if anyone needs open-source models, I would love to work as a freelancer and have an #opensource job. Even if someone can just provide access to compute resources, that would be good.

#forhire #opensourcejob #job #hiring

I was originally gonna get an Nvidia A2000 12 GB or A4000 Ada 20 GB, but it doesn't seem like their prices have dropped in ages.

Is anyone using the Intel B580 under Linux for AI work, specifically Stable Diffusion? I'm hearing it's decent, but I haven't seen anything on Linux workloads or performance.

I know it'll be nice for DaVinci Resolve. I'm not really a gamer. It's also half the price!

#nvidia
#IntelArc
#ai
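For reference, here is roughly what a Stable Diffusion smoke test on an Arc card under Linux could look like, as a hedged sketch: it assumes a PyTorch build with XPU support plus Hugging Face diffusers, and the checkpoint id is just an example; I have not verified this on a B580.

```python
# Sketch: minimal Stable Diffusion run on an Intel Arc GPU under Linux.
# Assumes torch with XPU support and the `diffusers` package are installed;
# the model id is an example checkpoint, not a recommendation.
import torch
from diffusers import StableDiffusionPipeline

device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "xpu" else torch.float32,
)
pipe = pipe.to(device)

image = pipe("a graphics card on a workbench, product photo").images[0]
image.save("arc_test.png")
```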