the service "AI" provides is to automate authoritarianism under a veneer of fallibility. To operate simultaneously with "unimpeachable objectivity" and with already-assumed failure. Good enough for deportations, but without generating liability for lawsuits. "The language model can do everything" and "it's just a language model, what can you expect"
Naturally, the only use cases are spam, fraud, and repression.
https://www.theguardian.com/us-news/2025/mar/06/foreign-student-visas-pro-palestinian-ai
@jonny @inthehands the flaw in this plan is that there is human liability, because the decision was actually not made by AI. that decision, of course, was whether to trust AI's output. only a human can make that decision
@exchgr
note rest of thread
@jonny whoops, sorry, i replied to a quote post without noticing there was a thread. should've looked first!