Boring AI


DALL-E 2 created this image when I gave it the prompt “A Bauhaus-style image of a human creating something.”

ChatGPT is truly amazing. I’ve spent hours using it to help me learn NetBSD, bare-bones web servers, software packaging, and non-software topics, too. On the plus side, it behaved like a confident mentor, pointing me in a good direction with helpful sample code and explanations of why that code addressed my questions. On the downside, that confidence was sometimes misplaced and the answers were simply wrong. I reported feedback to the team when I spotted wrong answers. Often, when you point out a mistake, ChatGPT will recognize it and apologize. Sadly, it’s easy to get caught in a loop: ChatGPT heads down the wrong path, you correct it, it apologizes, and within an answer or two it heads down the same wrong path again. I have a feeling ChatGPT will improve dramatically in the coming weeks and months, and I hope it doesn’t succumb to commercial interests and become privatized.

So what can we say about the above experience with ChatGPT? Do developers need to be worried? Who will rein this in? Will AI replace my job?

To me, chat-based AI tools like ChatGPT and AI art generators like DALL-E are simply examples of new technology. They are tools. They are abstractions. As Grady Booch so succinctly tweeted:

“The entire history of software engineering is one of rising levels of abstraction.”

As developers, we don’t feed stacks of punched cards to machines anymore. We write code in languages that are increasingly abstract. Some languages, like assembly or C, have a relatively low level of abstraction by today’s standards but are still widely used. Different levels of abstraction for different needs. No problem. The “language” used to interact with ChatGPT looks a lot like a conversation. It’s amazing that you can converse with ChatGPT in English (or many other languages, including Chinese), ask it to write some code for you, and it’ll get you close to where you want to go. It’s a neat abstraction.

So, back to the questions: Do developers need to be worried? No. Well, maybe. If you rely unquestioningly on results from an AI, you could be in trouble: how can you be sure the code really does what you think it does? And even if the generated code is 100% correct, is it correct all the time? What does “correct” even mean? Different people can legitimately have wildly different definitions of “correct.”
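One practical way to pin down “correct” for your own use case is to wrap any AI-generated snippet in a few explicit tests before trusting it. Here’s a minimal sketch: the function and its name are hypothetical examples of something an assistant might produce, not code from my sessions.

```python
# A hypothetical snippet an AI assistant might produce for the request
# "reverse the words in a sentence."
def reverse_words(sentence: str) -> str:
    return " ".join(sentence.split()[::-1])

# Whatever the source of the code, a few explicit assertions make your
# definition of "correct" concrete before you rely on it.
assert reverse_words("hello world") == "world hello"
assert reverse_words("one") == "one"
# Edge case: extra whitespace collapses to single spaces.
# Is that "correct" for your needs? Only your tests can say.
assert reverse_words("  a   b ") == "b a"
```

Even a handful of checks like these would have caught several of the confidently wrong answers I ran into.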

Who will rein this in? Nobody, because there is nothing to rein in. “Artificial Intelligence” is just another term for “new stuff that seems mysterious.” Lots of new things were scary and mysterious when they first came out: printed words, cars, rock music. And people did try, unsuccessfully, to rein those things in. In the end, if people find new things useful, they’ll stick around.

Will AI replace my job? Possibly. What jobs did cars displace? Horse-and-buggy drivers, and all the jobs looking after the horses people used to travel with. So what are the equivalent jobs today? I think AI will be embedded in many areas of society, because AI is just “new stuff.” So if your job involves doing a basic or simple thing, you might want to consider doing a more complex or advanced version of what you currently do, because I can see increasing automation of basic things throughout society. Just as it’s always been.

What makes AI special? Well, the boring answer is: nothing. It’s not special. Some amazing results can come from human progress and ChatGPT is a good example of what people can achieve. But that doesn’t mean there’s anything special or unique about AI.
