mastodon.world is one of the many independent Mastodon servers you can use to participate in the fediverse.

#aiprogramming


I heard a developer say, "I only use AI for autocompletion." That's two generations behind!

The field has moved fast, and the real action is happening elsewhere. I took some time to map out how #AIProgramming has evolved: where we are, and where we're headed next.

Read more - kau.sh/blog/ai-programming/


I see people making jokes and memes about this thing called '.env', which can be a file or a dir, for some kind of genAI assisted^Whampered coding, but I've never used that stuff so I don't get the jokes 😖

When I hear about AI-based programming, I think back several decades to a time when I was dealing with a hairy set of data, and I wrote a pretty complex bit of code generating an even more complex bit of SQL. I don't remember now if it ended up proving useful or not, though I think it did. But that's not the point.

The point was when I came back to it after a few months ... I couldn't figure it out at all. Neither the generator, nor the generated code.

And I HAD WRITTEN IT. Myself, from scratch, sorting out what I wanted and how to get there.

There's a principle in programming that debugging and maintenance are far harder than coding. Which means you should never write code that you are too stupid to debug and maintain. Which is precisely the mistake I'd made in my anecdote.

And of course, Management, in its infinite wisdom, typically puts far greater emphasis on new development than on testing, or Heavens Forefend!!! maintenance. So all the brightest talent (or so perceived, at any rate) goes to New Development.

(There's a great essay from about a decade ago, "In Praise of Maintenance", which you, and by "you" I mean "I", should really (re)read: freakonomics.com/podcast/in-pr).

With AI-based code generation, presuming it works at all, we get code that's like computer-chess or computer-Go (the game, not the lang). It might work, but there's no explanation or clarity to it. Grandmasters are not only stumped but utterly dispirited because they can't grok the strategy.

I can't count the number of times I've heard AI referred to as search or solution without explanation, an idea I'd first twigged to in the late 2010s. That is, if scientific knowledge tells us about causes of things, AI ML GD LLM simply tells us the answer without being able to show its work. Or worse: even if it could show work, that wouldn't tell us anything meaningful.

(This ... may not be entirely accurate, I'm not working in the field. But the point's been iterated enough times from enough different people at least some of whom should know that I tend to believe it.)

A major cause of technical debt is loss of institutional knowledge over how code works and what parts do what. I've worked enough maintenance jobs that I've seen this in all size and manner of organisations.

At another gig, I'd cut the amount of code roughly in half just so I could run it in the interactive environment, which made debugging more viable. I never really fully understood what all of that program did (though I could fix bugs, make changes, and even anticipate some problems which later emerged). Funny thing was when one of the prior Hired Guns who'd worked on the same project before my time there turned up on my front door some years later ... big laughs from both of us...

But this AI-generated code? It's going to be hairballs on hairballs on hairballs. And at some point it's gonna break.

Which leaves us with two possible situations:

  • We won't have an AI smart enough to deal with the mess.
  • Or, maybe, we will. Which, now that I think about the possibility whilst typing this, seems potentially even more frightening.

Though my bet's on the first case.


@gnat

So I code with ChatGPT/Claude.

First, it's not like ordinary coding.
If you expect to vibe code, you are going to have a very bad time.

Second, the more definitions you give the #AI, the better.
Give it parameters for what you expect.

Third, spec it. Give as many specifications as you can. You want that text window to scroll?
Propose an array or a list structure.
Leave as little to the imagination as possible; the thing has very little of it, and in trying hard to please you it will make shit up.

Fourth, give overall instructions. I usually say something along the lines of "Do not code unless clear instructions are given". Otherwise the thing will launch into code at the first prompt.

Fifth, I used to get it to write pseudocode. Now I just usually say "Restate the problem", just to make sure the machine understands what it's doing.

Sixth, checkpoint. When you have code that works, designate it as "Version X.1", because inevitably the machine will fuck it up, especially if you're introducing a notable change.

Seventh, learn #promptengineering; most people have NFI how to use the #LLM, especially if they are naturally hostile towards the tech.
E.g. if I really want the model to pay attention, I will say something like: DIRECTIVE: Blah blah.

Lastly, this should go without saying, the free models suck, pay the broligarch tax for the smarter engine.

It helps if you understand a little about how LLMs work. Today, for example, when the model tied itself into knots, I gave it a prompt to keep only the latest checkpoint and specs and flush everything else from the session context.

There are other tips.

#aiprogramming

P.S. If this is not your sport, just mute and move on, don't be rude