Church of Jeff


Edit: went back and found my friend that originated the quote from Facebook.

facebook.com/share/p/sQGLoN9wu

@jeffowski AI is just another tool to make users lazier

@dogpile @jeffowski it has enormous potential to democratize education through #onelaptoperchild and similar initiatives.

@ariaflame @dogpile @jeffowski I said potential, Aria. Don't let the perfect be the enemy of the good.

@nicholas_saunders @dogpile @jeffowski LLMs at the moment don't give accurate information. But sure, tell me that I can give up my job and give it over to machines which do not give accurate information.

@ariaflame @dogpile @jeffowski bit of a strawman there, isn't it? It's just a tool like any other. As you yourself said, it's not capable of thought. Writing some boilerplate? Great.

Not understanding the hate.

Because it's an advancement. Terribly inefficient and overblown, but with great potential and some present utility.

@nicholas_saunders @dogpile @jeffowski So it's good at doing bad writing? How is that useful?

@ariaflame @dogpile @jeffowski

It's rather remarkable, but it's also being used to write code. It collates data well, and acts as a frontend for Wikipedia.

It has utility. Of course it's hyped up. And?

@nicholas_saunders @dogpile @jeffowski From what I've heard not great code. What data does it collate?

@ariaflame @dogpile @jeffowski

I'm mainly using copilot as a frontend to Wikipedia, so can only speak to that.

But the mere fact that it's able to pass exams cannot be overlooked.

@nicholas_saunders @dogpile @jeffowski Are those the exams that they found out were fudged?
Tell me, do you check that the summary it gives you is correct?

@ariaflame @dogpile @jeffowski I'll be glad to test it with you. What subject do you want? You have access to it, anyhow.

But even when you find hallucinations, so what? It's not reference material.

@nicholas_saunders @dogpile @jeffowski Tell that to my students that use it to do their assignments for them. Badly.

@ariaflame @dogpile @jeffowski then what is your complaint? That it doesn't cheat well enough?

I don't understand your objection.

@nicholas_saunders @dogpile @jeffowski That people think it does things that it doesn't. That some students think it's a search engine. That it has very very limited uses and yet everyone is pushing it on me when I don't need it or want it.

@ariaflame @dogpile @jeffowski oh, well I'm not pushing it on you. To start with, it's environmentally unsound.

I would only say that the chatbot aspect is groundbreaking. It's arguably going to change graphic art, and, maybe more significantly, music.

Its ability to churn out code is impressive, if of limited utility.

@nicholas_saunders @dogpile @jeffowski Those are three different machine learning models though. And sure, it's going to make being an artist even more of a poverty-ridden job.

@ariaflame @dogpile @jeffowski and, taken to a logical extreme, maybe the death of musical composition (for example).

@ariaflame @dogpile @jeffowski

I don't use #stackoverflow the way that I used to, but they're employing #ai on a massive scale. Why? Because everything's already been fed into the monster anyhow. And more and more responses, etc, will at least get filtered through it.

@nicholas_saunders @dogpile @jeffowski Yes, they're basically stealing other people's work and charging for it. Or if they aren't yet, they will be soon.

@ariaflame @nicholas_saunders @dogpile @jeffowski
The code is good enough - and it's very good at summarizing bad documentation.

It doesn't need to be perfect to be useful, or else humans would also be useless.

@atzanteol @ariaflame @nicholas_saunders @dogpile - I have a very good friend who is an incredibly smart guy but was deprived of a quality education. His speech is crude and he has trouble making comprehensible sentences for his social media posts. AI as an assistant to an actual user is different than simply outsourcing entire jobs away from actual artists. I don’t believe AI is dangerous in the sense that it can destroy humans CONT ->

@atzanteol @ariaflame @nicholas_saunders @dogpile — the “Terminator/Skynet” future is not what I’m afraid of. AI assistants paired with a human can do superhuman things, and I think it may even be the next step in our evolution. With that said, the machine learning models used to create papers/articles and to create images and video are not paired with a human. As such, the results can only be within the data set it has been fed-> CONT

@atzanteol @ariaflame @nicholas_saunders @dogpile - this sort of work does not expand the dataset at all. It is all derivative. Outside of the ethics of using the art of other artists to train your machine learning model, it cannot deviate and create something completely new. We have the ethical use of AI as a tool to enhance a human, and the unethical use of creating derivative works in styles stolen from real human artists.

@atzanteol @ariaflame @nicholas_saunders @dogpile - an artist using AI to create more art, trained on the artist’s past work for the benefit of the artist, is something I can get behind. Using AI to circumvent the artist, via a service trained on their art and the art of thousands of other artists for the benefit of a tech company and the end customer (again, not the artist(s)), is unethical and does not expand the dataset.

@jeffowski @atzanteol @ariaflame @dogpile

Fuck, Jeff, tell us what you really think.

But, yeh, hunna percent, as they say now. Music, I think, is the most immediately exploitable. The long term effect being that composing becomes a lost art.

But that risk is across the board. Why learn to draw when ai will do that for you?

@nicholas_saunders @jeffowski @ariaflame @dogpile

Remember when everyone stopped learning math because of calculators?

@atzanteol @jeffowski @ariaflame @dogpile

This is different, tho. The AI can create a completed artistic work. It's plagiarism and unoriginal, but, as Stalin might observe: quantity has its own quality.

@nicholas_saunders @atzanteol @jeffowski @ariaflame @dogpile Another example is the demise of classical painting after Cubism and photography. There had always been movements in art, but since the early 20th century very few people have learned to paint photorealistic images, as artists could in previous decades and centuries.

There was an app for that.

@mike805 @nicholas_saunders @atzanteol @jeffowski @ariaflame @dogpile and yet plenty of photorealistic painters abound, in fact they have reached a skill level that’s dramatically higher than in the 19th century. Nor did musicians stop creating music once the record or tape was invented. A lot of handwringing here…

@atzanteol @nicholas_saunders @jeffowski @ariaflame @dogpile > Remember when everyone stopped learning math because of calculators?

Yes, I do. Try this experiment: go into a store, buy some stuff, throw down even dollars, wait til the cashier has entered the amount, and then put down enough coins to get even dollar change back.

Observe the panic in the cashier's face. People did not become handy with numbers after calculators were universally used.

@mike805 @nicholas_saunders @jeffowski @ariaflame @dogpile

That was me in the '80s. I was always bad at math. Most people are. Even back then you'd have mistakes where a smug customer would say "Don't they teach math these days" (kids are always stupid "these days").

@atzanteol @nicholas_saunders @jeffowski @ariaflame @dogpile They might not have stopped “learning math” but they definitely declined in their ability to do basic arithmetic calculations

@peterbutler @atzanteol @jeffowski @ariaflame @dogpile

Yeah, I think this is different in degree and quality. Because it plagiarizes so well it can only and inexorably lead to much of our culture becoming a lost art.

@nicholas_saunders @peterbutler @atzanteol @ariaflame @dogpile - It cannot get out of the box the data set makes. Anything the AI can generate will be within that box (data outline). Humans still need to create new things (data) that are outside of the current data set, to expand what AI can do. The real issue, and the root of the concise Original Post above, is that as soon as an artist makes something new, it can instantly be stolen, even before the artist can make a penny off of it.-> CONT

@nicholas_saunders @peterbutler @atzanteol @ariaflame @dogpile -- If that is the paradigm we are working with, then there is absolutely NO incentive to create new art. This will lead to artistic and cultural stagnation, IMO.
Unless we make special art that is specifically forbidden from being photographed or digitized in any way, anything that is profitable can be instantly stolen, and then it is instantly not special.
This has special implications.
->CONT

@atzanteol @nicholas_saunders @peterbutler @ariaflame @dogpile — any economic incentive for the given art will be squashed.
People will always create but it will be sacrificed to the corporate gods’ coffers if it gets to the internet.
I have many pieces of art that I’ve created, and I have specifically never photographed or digitized them in any way; they are not on any public display.
Is this art monetarily more valuable? -> CONT

@peterbutler @nicholas_saunders @jeffowski @ariaflame @dogpile

[citation needed]

I disagree. I think people have always been bad at math and they're more accurate at it now because they have a good tool to do it for them.

@atzanteol @nicholas_saunders @ariaflame @dogpile -- I was part of the generation that grew up with calculators for the first time. I have an old National-brand calculator from Japan, from 1980, and it still works perfectly.
This was a tool that still needed you to understand math theory. You have to know how the numbers all work to make the calculator useful.
You simply had a super accurate "calculator" and that was all it did. Still a completely useless tool without understanding math.

@ariaflame @atzanteol @dogpile @jeffowski

I've only used it in the most cursory way, but it generates boilerplate. Apparently you can feed in documentation and get at least something back.

Which is on the way to something interesting. Not sure that we're talking matrix level singularity, but something.

The more general the input is the more remarkable. I was impressed. But, perhaps I'm easily impressed. The proof's in the pudding.

as to #gigo ideally it would just be io, sans garbage.

@ariaflame @nicholas_saunders @dogpile @jeffowski

No... "Bad documentation" isn't wrong, it's difficult to understand - or incomplete. It's then talked about on forums all over the internet.

LLMs can combine all that together and tell you how to use that function you're looking at in the docs without needing to post in a forum, being given wrong advice by 4 different people, being told to re-do everything by that one guy who hates everything, etc. You just get an answer that's "probably helpful" really quickly. And if it's not then you fall back on the standard forums and all the shit you deal with there.

@atzanteol @ariaflame @nicholas_saunders @dogpile — there’s no validation on whether it is kicking out bad information until you try it…
Good luck with that.

@ariaflame @nicholas_saunders @dogpile @jeffowski

I just want to interject to say that information has value only when it is vetted. LLMs do not vet the information they generate and thus they reduce the value of all information in the spaces they occupy because now nothing is vetted.

LLMs destroy meaning and hide truth because they don't have intent or understanding of language. They pollute good information with bad, and it's not easy to tell.

@nicholas_saunders @dogpile -- You know this is not actually "Artificial Intelligence," right?
It can't think. That's why this is so fucking sad. It is not able to think, but to pick statistically probable words in a language model.
All of the data creates a box and the machine learning can only recreate and predict based on what it was trained on (the stuff in the box).
What that means is that everything created by "AI" (as you call it), is a derivative of the training material.
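(The "pick statistically probable words" mechanism described above can be sketched with a toy bigram model. This is purely illustrative, not how production LLMs work: real models are neural networks over tokens with much longer context, but the core loop of predicting the next word from what came before, using only statistics of the training data, is the same idea. All names and the tiny training text here are made up.)

```python
# Toy "statistically probable next word" generator (bigram model).
# Illustrates the point above: output can only recombine what was in
# the training data -- it never escapes the "box" of its data set.
import random
from collections import defaultdict, Counter

training_text = "the cat sat on the mat the cat ate the fish"

# Count which word follows which in the training data (the "box").
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5, seed=0):
    """Repeatedly sample a statistically probable next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counter = follows.get(out[-1])
        if not counter:  # dead end: no word ever followed this one
            break
        choices, weights = zip(*counter.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Every word the generator can emit already appeared in the training text; nothing genuinely new can come out, which is the "derivative of the training material" point in a nutshell.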

@nicholas_saunders @dogpile - So what "AI" has actually done is to exploit the labor of thousands of people (you included) and to remove those same people from any profit or benefit of their work, but this "new" work can only be within the confines of the data it harvested.
Everything is a derivative of the original data sample, and it can only recreate MORE of the same, backfilling the data even more from this derivative reconstruction.
Think of these implications.
There are no new ideas

@jeffowski @dogpile it's going to kill music. It can rip off musicians in ways that were unforeseen.

@jeffowski @nicholas_saunders @dogpile

> There are no new ideas

This is the thing. There is nothing new. I've seen the expression "stochastic parrot" used to describe what LLMs are, and I think it is one of the best descriptions I've heard.

@jeffowski @dogpile I was using the tag because that's how it's generally considered. Fair point.

@jeffowski @dogpile
@ariaflame

where I'll agree with you guys is when the #ccp in particular is giving every indication of taking the guiderails off and using #ai for #cognitivewarfare then things get particularly dark.

However, that doesn't mean that there aren't positive applications.

@jeffowski Well, yeah. That's been the whole point of software since "Expert Systems": to crystallize expertise, alienating it from experts. Then the experts can retire, and the next generation can develop novel expertise.

Why, has something gone wrong?

@gavmor @jeffowski Yes. We have yet to design a single, working expert system.

@enoch_exe_inc @jeffowski But we have, arguably, designed working software by other means.

@gavmor @jeffowski For 1.), the experts helping crystallize the "Expert Systems" get paid to help do that work.

For 2.), they don't actually get to retire, as they're there to ensure that when it goes off the rails, they can point out it went off the rails, and figure out why.

@jeffowski I think that's what corporations do and not special to AI. We use unions, civil society, and government to compensate. See also though onlinelibrary.wiley.com/doi/10 which in the end agrees with you. #AIEthics

@j2bryson @jeffowski

@cstross describes Corporations as Slow AI running on a biological substrate. :D