RemainNA's blog

Why I don't use AI

As I'm sure you're already well aware, generative AI/LLMs have become utterly inescapable across nearly every facet of digital spaces. I'm not going to try to nail down a precise definition of AI in this blog post; there are others better suited to do that. But I will give a rough one: tools that take natural language as input and create "novel" outputs from it (regardless of output medium), rather than classifying or otherwise processing the input. Tools like ChatGPT, DALL-E, Sora, ElevenLabs[1], etc. all fall under this definition.

Several arguments against the use of generative AI have been made. The first is the high energy cost of training and using AI. There is some good news in that the per-use cost of AI does seem to be going down significantly, especially since many models are now able to run locally and efficiently (in part thanks to new hardware such as the Framework desktop). I'm not so sure that the same can be said about the training cost. Yes, DeepSeek claimed a $6m training cost compared to GPT-4's >$100m, but I personally doubt that those numbers were calculated the same way. As I see it, OpenAI has an incentive to overestimate costs to encourage investment, while DeepSeek has an incentive to underestimate costs (by excluding hardware or other non-energy costs) to seem more competitive and disruptive. I'm also concerned that the sheer number of models being trained offsets this per-model improvement in energy usage, and that's even before considering the potential impact of e-waste from discarded hardware and of land usage (both ecological and societal; data centers aren't exactly great community spaces). Even DeepSeek's "bargain" price of $6m is significantly more expensive than any of the estimated prices in Timnit Gebru's 2020 paper (the one that resulted in her being forced out of Google).

Beyond energy usage, there is concern over whether the training data used by these models is legal under copyright. OpenAI and Google sure want it to be considered fair use, while Meta has claimed that torrenting dozens of terabytes of data to train its models is totally fair use, and also totally not piracy, since they were super careful not to seed the data (from Meta IP addresses). It's astounding what major companies (think they) can get away with compared to what happens to individuals. These are companies that have the resources to license content at scale, with Google already paying Reddit $60m to use content posted there as training data, yet they choose instead to do this and show utter disdain at the idea of actually paying people for the work they do. As if the stealing weren't enough, the crawlers used by these companies are disregarding established online norms like robots.txt and putting an immense amount of strain on large swaths of internet infrastructure.
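
To make the robots.txt point concrete: opting out is supposed to be as simple as a few plain-text lines at the root of a site. The user-agent tokens below are the documented ones for OpenAI's crawler, Google's AI-training systems, and Common Crawl; the complaint, of course, is that a request like this only matters if crawlers choose to honor it.

```
# robots.txt: asking AI crawlers not to ingest this site
User-agent: GPTBot           # OpenAI's training crawler
User-agent: Google-Extended  # Google's AI-training opt-out token
User-agent: CCBot            # Common Crawl, a common training source
Disallow: /
```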

There is the possibility that synthetic training data offers a path towards not only equivalent but better models without (or with less of) the issue of copyrighted training data and mass crawling. But synthetic data doesn't do anything to fix the biases in AI models, especially since the biggest players in the AI space are among the worst examples of Silicon Valley/big tech/venture-capital-backed, growth-over-all-else corporations.

Setting aside the above (and I don't think you should), why else don't I use AI? For this blog it really comes down to one idea: this is a hobby. Outlining, researching, and ultimately writing these posts is the point. Any tool that reduces the effort I put into this is taking the hobby out of the hobby. There's nothing left if you do that! Sure, I could use it for editing, cleaning up grammar, bouncing ideas off of, etc., but that's what I have friends for! I appreciate everyone who has looked over a draft for me before posting, and that process has led to many great conversations and ideas to add to a post that I hadn't considered before (thank you!!). AI chatbots, meanwhile, just produce completely bland and sterile "writing" (even Grok; its brand of bland and sterile is just edgy), and they don't have any lived experience that leads to interesting conversations. And as Kevin Gannon put it on Bluesky, why should anyone bother reading what I haven't bothered to write?

There are of course more applications of AI than just writing. I could use AI to create images and move away from the default Bear icon almost all of my posts show when embedded on Bluesky, Discord, etc., or add a header image like The Verge does for its articles. But I don't want to do that: first because I want to eventually make those images myself (setting aside the simple solution of using my profile picture as the default embed image), and second because AI images frankly look like shit more often than not. If I never get around to learning the tools and making images myself, I can always commission them, ask friends, or just keep not using images!

Then there's the question of audio generation, like ElevenLabs' Audio Native. If they are to be believed, adding an audio player would make my posts that much more accessible. And there's absolutely something to this; screen readers can be an important accessibility tool for the blind community. But that's just it: a screen reader is a tool that works in more places than just my blog, and one that each individual user will have configured in the way that works best for them. I try to make my blog accessible not by adding and embedding more in it, but by keeping it simple (and using alt text where applicable) so that screen readers are able to parse it properly, and otherwise aiming to follow established accessibility best practices. Much like how AI-generated images don't look right, AI-generated narrations just don't sound right. Sure, the AI voices have intonation and follow the grammar and structure of the writing to some extent, but they don't understand it the way a human narrator can. Once again, I would rather record myself reading the post (don't expect that to happen), work with friends, commission voice overs, or simply leave things as is. While this blog largely started as a means for me to write down and share my thoughts, it has become a great way for me to connect with others during and after the writing process, and AI is an absolutely terrible substitute for that.

So that's why I don't use it when writing for my blog, but what about at work? I am a software engineer after all, and writing code is supposed to be one of AI's killer apps! I did use GitHub Copilot during its relatively early release for some of my personal projects, but haven't used it (or any other coding "assistants") for quite some time now. Almost all of the code it generated has since been replaced by code I wrote myself, as I found that while the code worked (with some tweaking), it wasn't architected very well and didn't lend itself to being expanded upon down the line. Beyond that, AI has been making programmers worse at their jobs and reducing critical thinking. I think it is incredibly important to keep up your skills when doing this professionally, and to not let yourself become overly reliant on any one tool, especially not one that can "do the thinking for you" like AI. When I was first learning programming I wrote everything in Notepad++, without plugins or anything. That gave me basic syntax highlighting, but no autocomplete, error detection, visual debugger, etc. Compiling, debugging, git commands, and more were all done in the terminal, as sketched below. I don't work this way anymore, but I feel confident that I could go back to it. I am not arguing against using tools that make tasks more convenient, but if they are making you worse at, or unable to independently do, the basics of your work, then I think you need to seriously reconsider if and how you should use them.
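
For anyone who has only ever worked inside an IDE, that terminal-only loop looks roughly like this (a hypothetical C file and commit message, purely for illustration):

```
gcc -g -Wall main.c -o app      # compile with warnings and debug symbols
gdb ./app                       # step through failures in the debugger
git add main.c
git commit -m "fix: handle empty input"
git push
```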

The ultimate purpose of the work I do professionally is to protect life and property. While I am (fortunately) not in a position where a single mistake is likely to result in immediate danger to anyone, I still believe that I need to be responsible for my code and the decisions made when writing it. I think this work calls for being a better programmer, not a more efficient programmer. While it's possible to let an AI generate code and then thoroughly review it before committing, writing the code myself means that I understand it and can justify the decisions behind it. That understanding is what puts me in a good position to go back and modify things if needed at a later date. On a related note, this is why you shouldn't directly copy and paste code from Stack Overflow either. Take the time to read through it, understand what is happening and why it was written the way it was, and then incorporate it into your code. Even reviewing an AI's code won't actually accomplish this, as it wasn't written the way it was for any reason other than the probability of certain tokens given a prompt and temperature; no real decisions were made.
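
To be concrete about what "the probability of certain tokens given a prompt and temperature" means, here is a minimal sketch of temperature-scaled sampling, the basic mechanism behind LLM text generation. The logits are made-up numbers for a hypothetical prompt, not any particular model's output:

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Pick a next token by softmax over temperature-scaled logits."""
    # Lower temperature sharpens the distribution toward the top token;
    # higher temperature flattens it toward uniform randomness.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax, subtracting the max for numerical stability.
    peak = max(scaled.values())
    weights = {tok: math.exp(v - peak) for tok, v in scaled.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    # Draw one token according to those probabilities.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Made-up next-token scores after the prompt "The sky is"
logits = {"blue": 4.2, "clear": 2.9, "falling": 0.5}
print(sample_token(logits, temperature=0.7))
```

That's the whole "decision": a weighted dice roll over whatever the model scored highest.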

On top of everything, I see no reason to be excited about a tool that is being marketed as something that will replace me. We've already seen so many layoffs in the tech sector, some justified using AI, and the industry is only getting more difficult (especially for more marginalized groups). The CEOs of these massive companies do not care about us, their long-term goals are not in the best interest of most people, and their products, AI and otherwise, cannot be meaningfully detached from the ideology of their creators. I think people should leave platforms based on their owners' actions, and I think people shouldn't use their AI platforms for the same reasons. One day we might have AI available to us that fixes everything mentioned so far, or even ushers us into a bright new future for humanity, but right now we don't. That possibility does not justify using AI today, especially since LLMs are not going to be what brings those improvements about.

I started this blog post with the goal of explaining why I don't use AI personally, both within this blog and generally as a programmer. Over the course of researching and writing it, I increasingly saw generative AI being used as the aesthetic of fascism, and AI companies and users not only disregarding copyright but seemingly taking joy in doing so. The more I learned and saw, the more my perspective and the goal of this post solidified: I do not think anyone should use generative AI, not as it exists right now. I do not care how perfect AI is for this use case or that use case, or how technically impressive it may be; the generative AI products and services that exist right now are fundamentally bad as a result of the motivations of the people behind them, the space and culture within which they are made, and the decisions made in their creation. I don't use AI, and I don't think anyone else should either.

  1. ElevenLabs may be a bit of a stretch given my definition, but given the way it is advertised as an "AI audio platform", and how similar products have been central to the ongoing SAG-AFTRA video game voice actor strike, I am choosing to include it in this discussion.