If there’s one thing that has come to define the modern era, it’s AI. From the early days of ChatGPT, to image generation, to the massive sweeping statements about how AI is going to “change the world” and “replace (software engineers|writers|doctors)”, it seems like the train of LLM applications just keeps on chugging with no end in sight. One can’t help but feel that no matter what, you need to jump on that train and hold on for dear life, because the future is now, old man, and if you don’t, you’re going to be left behind in the dirt wondering where it all went wrong.
So, having been a user of AI for a few years now, and having worked on an AI product, I wanted to jot down my thoughts. To spoil the takeaway: I’m anti-AI. Now, it’s easy given that small snippet to jump to conclusions, scoff, and call me a luddite who needs to get with the future, but it’s a bit more complicated than you might think. Let me explain.
The Good
Let’s start with what I actually like about AI, because unlike a lot of detractors, I do see a lot of value here. As someone who works in software, it’s impossible not to acknowledge the force multiplier that a tool like Claude Code represents. Over the last year or so we’ve seen coding tools go from something that sort of gets it right sometimes to something that actually gets it right relatively often, assuming you’re OK with a few rounds of code review to get it to the right place. For example, one of the things I work on at DuckDuckGo is profiling, and so I was looking at Perl’s Devel::NYTProf - a rough equivalent to Go’s pprof - whose file format is entirely undocumented, save for the tools in the NYTProf repository that read and write it. I was able to point Claude at that repository, and it broke down the format for me and helped write a converter that reads NYTProf files and writes them back out as pprof files, ready to be imported into tools like Parca. A couple of days’ work reading and understanding some relatively complex C++ code, completed in an afternoon of bonking the AI until it worked.
But beyond the more practical applications, the really neat part of AI to me is that it represents the first real instance of “stochastic” compute. Historically, computers have always been deterministic - same input, same output, to a fault. I remember growing up as a baby programmer being told repeatedly by my programming teacher that “computers do what you tell them to, not what you think you told them”, which really gets to the heart of it: computers don’t deviate. But that requires you to know exactly what you want your computer to do, and sometimes you just don’t - or at least, the amount of effort required to translate a vague idea into deterministic language is exorbitantly high. For example, at DuckDuckGo, we have traffic graphs that we compare week over week to look for strange deviations. The basic idea is easy to quantify - “has this number changed by more than x% week over week?” is trivial math. But then we get changes in traffic due to predictable reasons - searches go down over the holidays, and searches in our sports module go up during the Super Bowl - so suddenly there’s a bunch of perfectly explainable reasons why traffic can change. We end up in a situation where there are so many potential innocuous inputs to the system that writing a deterministic algorithm to answer “has this changed, and if so, is the change unexplainable?” becomes really hard, when you can instead just whack the numbers into an AI model and ask it that exact sentence. Which is to say, AI enables us for the first time to start expressing computation in real, everyday language rather than cold, hard, unforgiving code, and what that unlocks is magical. It brings creation to so many more people, and more people being able to express themselves through different mediums is a net positive for society.
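To make the contrast concrete, here’s a minimal sketch of the deterministic version of that week-over-week check. The function name and threshold are illustrative assumptions, not DuckDuckGo’s actual implementation:

```python
def is_anomalous(this_week: float, last_week: float, threshold_pct: float = 10.0) -> bool:
    """Flag a metric that moved by more than threshold_pct week over week."""
    if last_week == 0:
        # Any traffic where there previously was none counts as anomalous.
        return this_week != 0
    change_pct = abs(this_week - last_week) / last_week * 100
    return change_pct > threshold_pct

# The hard part isn't this function - it's that a Super Bowl spike and a
# holiday dip both trip it, so the deterministic version accretes special
# case after special case, where an LLM prompt can express the intent in
# one sentence.
print(is_anomalous(120, 100))  # 20% jump -> True
print(is_anomalous(103, 100))  # 3% drift -> False
```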
The Bad
But beyond the neat parts about AI, beyond the Claude agents writing screeds of code, there are some real downsides to this stuff.
What about power? When evaluating AI investment, a lot of attention is paid to the power usage of the datacenters a company is building. For example, as of the end of last year, OpenAI is aiming to spin up 10GW of datacenters in their “Stargate” project (upped recently from an earlier target). It seems that very few people actually have a sense of scale for numbers like that, because that’s one company (albeit the biggest one, outside the hyperscalers), and that’s a massive amount of power. For context, New Zealand generated 43,872 GWh of electricity in 2024; if we take an average load (don’t crucify me, electrical engineers, I know that isn’t how it works), that works out to about 5GW continuous - OpenAI is proposing to install enough capacity to power New Zealand twice over, just to run their AI workloads. Anthropic goes even further and claims that the AI industry will need 50GW by 2028. Remember when we used to roast Bitcoin for using the energy of a small nation? Now, power usage in and of itself is not necessarily a problem. A lot of pushback on statistics like these comes in the form of appeals to the future - we can build datacenters on top of geothermal plants, or use them as a baseline load for nuclear power plants, and thus all of the discussion about power usage becomes moot, as it’ll all be clean energy, so who cares? And sure, there is some merit to that argument - plants like Three Mile Island are being brought back online to power the AI boom - but considering that that’s an 800MW existing facility (8% of what they want to draw) that won’t be ready until 2028, you’ve really got to conclude that for the current and near-to-medium-term future, the AI boom is running on the backs of the existing power infrastructure. If you care at all about climate change, AI should horrify you.
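The back-of-the-envelope arithmetic behind those comparisons, using the same (admittedly crude) flat-load assumption:

```python
# Crude flat-load arithmetic: annual generation spread evenly over the year.
HOURS_PER_YEAR = 8760  # 365 * 24

nz_generation_gwh = 43_872  # New Zealand's 2024 electricity generation
nz_average_load_gw = nz_generation_gwh / HOURS_PER_YEAR

stargate_gw = 10      # OpenAI's stated Stargate target
tmi_gw = 0.8          # Three Mile Island, 800MW

print(f"NZ average load: {nz_average_load_gw:.1f} GW")             # ~5.0 GW
print(f"Stargate vs NZ: {stargate_gw / nz_average_load_gw:.1f}x")  # ~2.0x
print(f"TMI share of Stargate: {tmi_gw / stargate_gw:.0%}")        # 8%
```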
But the environmental cost isn’t the only externality we’re ignoring in our rush to shove AI into everything. An interesting paper was published at the tail end of last year: Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. While it’s still a preliminary result, the authors note that prolonged use of LLMs can actually impact the brain:
“Writing with AI assistance, in contrast, reduces overall neural connectivity, and shifts the dynamics of information flow. In practical terms, a LLM might free up mental resources and make the task feel easier, yet the brain of the user of the LLM might not go as deeply into the rich associative processes that unassisted creative writing entails.”
Which is really worrying, right? The idea that these things might actually be doing real, unchecked damage to the brains of the people using them should terrify anyone who cares about the future of society. If this were a drug trial, any hint of danger would cause an immediate pause, and informed-consent prompts at the very least, but because this is the tech industry, we’re just allowing these companies to move fast and break things with our collective and individual psyches.
But more generally, these things really do seem to be doing something to us. We keep seeing cases of AI psychosis, where people are actually going a bit nuts over these things. What started out pretty niche in communities like Replika and Character.ai seems to be becoming more mainstream, especially with OpenAI pushing use cases like talking to ChatGPT as a therapist (something you should never do, for privilege reasons if nothing else). It’s tremendously irresponsible of AI companies to push their chatbots to be more personable and to pretend to be real humans, because it really is breaking people’s brains. It wouldn’t surprise me if we start to see this popping up as a recognised diagnosis soon, and I hope that some sort of regulation comes from it.
One of the common responses I get when I loosen up over drinks and become insufferable in my ranting is that a lot of these issues can be solved by Open Source models. If you use Llama, you’re not connected to the system, maaan. And while that is true to some extent, they’re still LLMs, so they have the same negative societal impacts - and Open Source models present entirely different ways of being… not great. For example, at least through their services, companies like OpenAI and Anthropic can try to put some guardrails on their models (even if they are pretty trivially bypassed; see Grok’s recent attempts to prevent abuse). In the Open Source world those guardrails basically don’t exist, because for the most part they are implemented in the harness around the model rather than in the model itself. Without that harness, what we have is the equivalent of a 3D-printable gun. Great from a libertarian freedom point of view, terrible from an “I don’t want this technology unleashed on the world with abandon” point of view.
The Gaslighting
Outside the good and the bad, there’s a lot about AI that’s just strange. It feels like we’re being gaslit a bit - the big stories not lining up with what we can actually see in the real world.
One interesting thing to note is that while AI is seeing a lot of adoption, the benefits seem quite limited, despite companies like Google pushing AI features into everything and the announcements from AI companies to the contrary. The National Bureau of Economic Research put out a paper that puts the impact of adopting AI at only a 2-3% performance improvement, for example - about an hour a week, or 13 minutes a day. This new technology that is going to revolutionise the world saves people less than a coffee break. METR found similar results in the tech space when they looked at Open Source software developers - those developers drastically overestimated how much more productive using AI would make them, to the point that some were actually slowed down by using it. So we have this weird situation where AI makes people think they are being more productive, and maybe some are, but on average it’s just a feeling rather than something that’s actually helping.
But those poor reported productivity gains are kind of hard to square with the monumental job losses we’ve seen across the job market. 5,000 jobs here, another 3,000 jobs there. Every major player in the space seems to be haemorrhaging employees like it’s going out of style. Maybe the Solow Paradox really is happening again, and there’s some missing metric we aren’t tracking that represents all this hidden productivity gain? These companies are posting record profits, after all. But a lot of these job losses seem to be coming from companies that have an AI tool to sell. I suppose that does make sense - they would be the first ones to realise actual productivity gains, being so close to the technology - but despite the record profits, none of these companies split out their AI revenue so we can see how much they’re actually making from AI. So you have to wonder: are these job cuts actually because of AI productivity gains? Or are they because of AI money losses? Those profits have to come from somewhere, and if they’re not coming from an increase in revenue, they have to be coming from a decrease in costs.
And so we get to the most frustrating part of this whole AI mess: the double messaging. It’s so hard to know what to believe in today’s media landscape, and the fact that AI companies keep misleading people about the capabilities of their models leads to huge whiplashes between “oh god, I’m about to lose my job” and “oh, actually everything’s fine” almost every week, and that is exhausting. Like Cursor’s CEO saying “we built a browser with GPT-5.2 in Cursor”, which for about a day took the developer world by storm, until people actually tried to run the code and found that it didn’t compile - and in fact had never compiled, in any commit. So, while it’s impressive that the model could produce and manage that volume of code, it falls short of the claims made. The same goes for the claimed “PhD level intelligence” that current frontier models apparently possess. Such a claim sounds impressive, but it’s notably lacking any basis in reality. But with so many detractors, how do you square that with the fact that it does seem to work well for some people? Rather than any fundamental “skill issue”, I think it comes down to the fundamental randomness of these systems, and a bit of selection bias. For the most part, we as end users of a piece of LLM-produced stuff don’t see the amount of iteration it took to get to the end product - how many times someone had to pull the lever and hope that the LLM spat out something usable. Coupled with the mental impacts above, I’m not even certain most AI users realise how much this happens - they just notice when it works and block out how many times they bonked it to get to that point.
Conclusions
Really, it’s a weird feeling. While I know that AI is a useful tool in some areas, I do not believe that its benefits outweigh the downsides environmentally, societally, or economically, and to continue to support AI you have to either ignore all of them, or consciously decide that you’re fine with them, with whatever justification you choose to use. This is why it’s so disconcerting to see folks I look up to pushing AI workflows - it makes me question which side of the line they fall on. There’s a great quote in Player Piano by Kurt Vonnegut: “A step backward, after making a wrong turn, is a step in the right direction”, and I can’t help but wonder if we’ve made that turn.
I'm on BlueSky: @colindou.ch. Come yell at me!