The other day, my boss asked me if I wanted a seat for Claude Code. At first, I respectfully declined, but after a while, I got curious and asked him to buy me a seat so I could check out what all the fuss is about. What I didn't know back then is how finicky this all is. The first prompt I gave it totally overwhelmed the thing. In the end, I had to revert all the changes it had made: I had asked it to work on one file, but it started messing with almost the entire project, completely breaking the app. So my first impression was: this is just a phase, a bubble that is about to burst, and then we can all move on.

That was just at the end of last year, and in the last couple of weeks, this bubble has outgrown even my wildest dreams. People are actively looking for ways to automate their entire jobs. Others are looking for ways to "ship more code", whatever that means. In the meantime, I'm trying to avoid all the noise: still writing my own code, figuring out how to refine code I wrote months ago, and all the while releasing "only" two new versions of our app at work. I say "only" because I don't feel bad about not releasing four or five new versions in the same time. I'm proud to ship only code that I know works.

Because, let's be honest for a moment: as soon as you let AI write your code, there will come a time when you don't read through it anymore. It's normal; it's how people are built. We become comfortable, and sooner or later, we start releasing code to the public without knowing what it does or whether it even works as it's supposed to. We stop being programmers and become AI managers. We don't write code anymore; we assign the job to the next available worker. We don't discuss design patterns anymore, but how to improve our prompts and "md" files.

And the only thing I can come up with while contemplating all of that is: what's the point? What's the point of even learning computer science or app development? Why should anyone bother going to college anymore? If it's just writing prompts, why not let high-school graduates become "software developers"? Or scrap the developer entirely and let the project manager be both manager and prompt engineer. That saves the company some serious money, and everyone is happy, right?

Wrong! The future of programming can't be AI, because it puts the jobs of thousands of developers worldwide at risk, even yours, dear reader. Once upper management gets wind that it's possible to replace their entire workforce with machines that don't get tired, don't need breaks, and don't need vacations, all while saving a big buck, what do you think happens?

And even if it doesn't end up that gloomy: say you get handed a task so complex it will take you days to figure out, even with an LLM at your side. How will you justify that? The other guy online just posted that he can write an entire app in a day. Just hand it over to the AI; it'll do the work for you. Is that really the future we're striving for? Constant fear of losing the job you worked so hard to get? I definitely don't want to live in that world, and the sooner we realize we have to stop this madness, the better.
Look, I'm not saying that LLMs are bad in general, although it might sound like that. I'm just trying to remind everyone of what happened in the automotive industry. People are still being replaced by machines that do their work. The machines don't need breaks, and they're not in some union fighting for more money. All they need is a splash of fresh oil now and then, and they're good.

I guess what I'm trying to say is that AI, or LLMs, or whatever you want to call them, don't have to be the one thing we all try to master. Programming is still an art form; well-crafted code and the joy of coming up with creative solutions to our daily problems should still be the main things we focus on when writing a program.

Should we ban AI from our workflow? I don't think we can anymore, but we can use it for other things, like ideation. It can be a very powerful tool for getting inspiration quickly. But implementing features and understanding why something doesn't work has to come from us: the ones responsible for the correct implementation, the ones who have to stand up for each and every error we make. Because, let's face it, when something reaches the user, it's still us who get our butts kicked when things break. You can't just talk your way out of it because the AI broke something; you are the one responsible. And the sooner we get our shit together and take responsibility for the stuff we post online, the better.

I'm not trying to censor anyone, please don't get me wrong. But when you post about AI, please also talk about its risks. Use AI in a sensible manner, and never trust that it gets everything right every time you give it a task. AI still isn't intelligent; it's just guessing. It's doing so very well, but it's still just guessing. Not everything coming out of AI is a godsend, and we have to make sure everyone involved has all the information they need to use LLMs responsibly.
I sure as hell won't be using it for my daily code; there's just too much joy in getting a program running all by myself.