I Despise AI

Every so often, I engage with critiques of what passes for artificial intelligence. I try not to use that term, since I think it gives too much credit to the drivel that comes out of your run-of-the-mill large language or diffusion model, but unfortunately I’m forced to use it here, since it’s the term we’ve all agreed on.

The most recent flavor of this was a 3-hour YouTube video essay from one of my favorite creators on the platform. It’s a long watch, but the essence of the piece is that the anti-AI backlash broadly relies on less-than-ideal philosophical arguments, that the harms (especially the environmental ones) of the emergent sector are dangerously overstated by the media, and that the reflexive instinct to appeal to the broken system of copyright as a means to retain control over one’s work in the face of tech company exploitation is misguided.

The entire video didn’t sit well with me, though I think there’s merit to each of the ideas presented in a vacuum. The issue is more the framing of it – that AI is the future, and needs to be accepted as such.

Things Only Happen If We Let Them

The crown jewel of Robert Moses’ empire (Author’s)

Ever since I read The Power Broker, I’ve been obsessed with the phrase fait accompli. When Robert Moses was rising to power in New York, he had a technique of starting projects without the full budget to finish them, then using the fact that a project was underway as justification for finishing it. By doing this, he was able to reshape the world (well, New York anyways) in his own image.

The modern-day Robert Moseses of Silicon Valley thrive at this, and our entire cultural apparatus encourages it. So much of our civil society has changed in the last 50 years – I mean, I’m writing this essay on a computer, for God’s sake – that it feels like the winds of change will always blow at the rate they have. But this sort of thinking is not grounded in the historical reality of the computer revolution, and there is no reason to think that AI as it’s currently implemented will transform society unless we allow it to.

Essentially, the IT revolution that has occurred since the invention of the transistor came about through a combination of strong government investment (primarily via military contracting) and an active, local financing scene, and it was only really possible because the transistor was a genuine leap in physical technology. Before the transistor was invented, all complex electronics relied on vacuum tubes and were thus bulky and expensive – think of a $10,000 TV set the size of a desk. After it, things became smaller – and eventually we ended up with miniature computers in our pockets.

This isn’t to say that there aren’t other critical physical advancements that have made the modern world possible, just that most advancements in electronics have been some kind of iteration on the silicon semiconductor. Advancements in AI are along this line as well. As interesting as neural networks may be conceptually, the physical implementation of an integrated AI chip differs only in design from traditional integrated circuits[1].

Why does this matter? Well, the fact that the entire existence of the AI industry is predicated on design improvements to 80-year-old technology sort of flies in the face of what you read in the press. And it’s important to realize that our current rate of technological change is driven more by expansion than by increased efficiency. Moore’s Law is dead[2], and the only way to get increased computing power is increased capital spending[3].
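
For a sense of how unusual the old regime was, here’s the compounding implied by the classic Moore’s Law observation – a minimal sketch in Python, assuming the commonly quoted doubling every two years and the 1955–2015 stretch discussed below:

    # Transistor density growth implied by the classic Moore's Law
    # observation: a doubling roughly every two years.
    years = 2015 - 1955    # the era of rapid scaling discussed below
    doublings = years / 2  # one doubling per two years
    growth = 2 ** doublings
    print(f"{doublings:.0f} doublings -> ~{growth:,.0f}x transistor density")
    # -> 30 doublings -> ~1,073,741,824x transistor density

No amount of design-side cleverness compounds like that without a physical mechanism underneath.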

This is ultimately why the AI hype machine has to grapple with environmental damage in a way that the smartphone and PC hype machines never really did[4]. All technological production processes exploit environmental resources, but since better models require more data and more processing, and no quantum leap is on the horizon, the only way for big tech to innovate is by building more data centers and consuming more power.

The purpose of highlighting the mundane details of semiconductors is to show that the technological changes which drove the IT revolution of the late 20th and early 21st centuries were essentially just manufacturing firms finding ways to make smaller semiconductors. The rapid pace of technological change from 1955 to 2015 is in many ways attributable to this, and things like the lack of meaningful changes between modern smartphone models are downstream of the fact that you can only fit so many transistors into a fixed amount of space. We should not expect exponential technological change absent a real physical mechanism, and as such the foundations of the AI hype machine largely rest on preconceived notions of a steady march of progress that was only made possible by unique historical circumstances.

Absent this physical mechanism for improvement, tech companies can only engage in an arms race for data center subsidies, preferential treatment from power utilities, and further corporate handouts. It should be clear that these are all deeply political issues which demand our attention. Any AI piece that chooses to treat the topic as a fait accompli is dangerous not just because it’s wrong. It’s dangerous because it cedes our authority as political actors to shape the world in the ways we see fit. The AI revolution will not come unless we decide to dedicate the resources it requires.

Everything Sucks Now

Picture unrelated, just needed to get a train in here (Author’s)

Another part of my deep distrust of AI hype stems from the fact that everything is worse now. When I was younger, Google was the best search engine by a wide margin, and there were maybe a few sidebar ads. Now, Google is among the most profitable companies in the world on the back of an advertising- and tracking-heavy approach. It’s all very bleak.

As computers have gotten more powerful, there have been benefits. But it seems like for every increase in processing power, there’s an accompanying increase in the need for more processing power. And at some point it’s worth asking: how much do we need? Pokémon Red and Blue took up just 373 kB, while Sword and Shield take up 12.4 GB – something like 33,000 times more space[5]. But did the games improve in any meaningful way? I don’t think there’s a person alive who could argue they did. File size is an imperfect way to measure this stuff, but the broader point is that more powerful computers do not necessarily create better products.
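
For the curious, here’s the arithmetic behind that ratio – a quick sketch using the sizes cited above and the decimal convention of 1 GB = 1,000,000 kB:

    # Back-of-the-envelope check of the file size comparison above.
    red_blue_kb = 373        # Pokémon Red/Blue, in kilobytes
    sword_shield_gb = 12.4   # Pokémon Sword/Shield, in gigabytes

    sword_shield_kb = sword_shield_gb * 1_000_000  # 1 GB = 1,000,000 kB
    ratio = sword_shield_kb / red_blue_kb
    print(f"Sword/Shield is roughly {ratio:,.0f}x larger")
    # -> Sword/Shield is roughly 33,244x larger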

The way I see it, AI is an outlet for overvalued tech firms to expend capital, not an innovation per se. Of course, this kind of stuff is sort of innovative in the sense that it’s a creative way to light money on fire – and of course “the passion for destruction is also a creative passion” – but the larger point is that advancements in AI serve the role of satisfying demand to prevent under-consumption. This is especially true of LLM-based search engine results, which serve no useful purpose other than sparing people the difficulty of clicking on a link.

AI integration into search engines is particularly ridiculous. By one tech evangelist’s estimation[6], a modern ChatGPT query consumes at least as much energy as a 2009-era Google search (0.3 Wh for short queries) and at most about 10x more (2.5 Wh for longer queries). Despite what the author of that paper seems to think, it is not impressive that purpose-built data centers in 2025, running purpose-built chips for LLM queries, consume as much energy per query as a Google search did in 2009. We’ve spent trillions of dollars, only to wind up with a (computationally) less efficient and generally worse system. Cool!
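
Putting those cited per-query figures side by side – a minimal sketch, where the watt-hour values are that author’s estimates rather than measurements of mine:

    # Per-query energy figures cited above, in watt-hours (Wh).
    google_2009_wh = 0.3     # a 2009-era Google search
    chatgpt_short_wh = 0.3   # lower-bound estimate for a short LLM query
    chatgpt_long_wh = 2.5    # upper-bound estimate for a long LLM query

    print(f"short query: {chatgpt_short_wh / google_2009_wh:.1f}x a 2009 search")
    print(f"long query: {chatgpt_long_wh / google_2009_wh:.1f}x a 2009 search")
    # -> 1.0x and 8.3x, i.e. "at least as much, at most about 10x"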

This is all to say that I don’t think there is a strong standalone efficiency argument for the proliferation of AI models, absent the potential for removing human labor. And since that is the case, it is necessary to consider labor removal as the primary impact and cost of AI. Of course, the needless waste of resources to create a model that can blabber on about whatever you want should be on your mind[7], but the reason AI receives so much hype is that people view it as a way to cut costs.

The Future of Work

New technologies leading to more time for leisure is only good if regular schmucks can actually use that time (Author’s)

I won’t make the case that all labor-saving technology is inherently bad (right now), but what exactly is saved when firms use AI to replace a writer, an artist, or a call center? Few people will argue for the inherent dignity of call center work, but I’ve never been more frustrated in my life than when I’ve had to deal with an AI phone tree that can’t understand me. If I am having to call about an issue, I want to speak to a human being who can help me with my problem.

Sure, it’s not “profitable” to “provide good customer service,” but it should be. Providing good customer service should matter, yet every firm is falling over itself to see who can do it worse. Is this the market operating efficiently to reveal that people don’t actually want good customer service? Is it revealing that people don’t really have a way to accurately weigh it before buying a product? Or is it something else entirely? If you have an MBA, I’m sure you stopped reading by now, but if you didn’t, I’m sure you said the first thing.

The problem with AI hype for replacing work, even if that work is replaceable, is the age-old question of “who benefits?” As was the case in the industrial revolution, it’s the people who own the means of production who gain the most from any productivity gain via labor saving. In the tech age, the “means of production” are about more than just factory-based semiconductor manufacturing (important as that is); they also include the means of information production and dissemination.

In our heavily digital era, owning has been replaced with renting via the rise of streaming, subscriptions, and licensing for basically everything. If a ChatGPT-powered chatbot takes the job of a former call center worker, the primary beneficiary of that transaction is obviously not the worker – now forced into a poor job market with poorly transferable skills. And in the current “growth at any cost” phase of the hype cycle, it’s easy to think of the call center company as the primary beneficiary. But in the long term, if these companies do not control the technology they use to replace their workers, they will be at the mercy of OpenAI et al.

But even if OpenAI were structured in some inherently democratic way, the economic impact of AI-based labor replacement would still be strongly negative. Since workers in the US generally have little say over the strategic direction of a company, they have no way to assert their interest in keeping their jobs. Labor unions have historically filled this niche, but they have long been relegated to the sidelines of most working people’s experience by a combination of institutional rot within the unions themselves, a legal system that favors employers[8], and a generally management-oriented work system.

This is all to say that AI will only ruin the conditions of working people in the US as much as our general economic system allows. Given that our system is currently oriented around the idea that the highest calling a firm can achieve is making as much money as possible for its shareholders, we don’t seem to stand much of a chance. Rethinking this idea should be a core component of any social movement that wishes to change the economics of daily life. Profit for shareholders should be one consideration among many, and part of why I feel that worker-owned institutions are preferable to typical US firms is that workers’ interests do not align with shareholder primacy – giving workers meaningful power means tempering the relentless drive for short-term profit.

Language is About What You Mean

Does the meaning of this image change based on whether I tell you that I took it on my film camera during a trip to Olympic with Mark and Clark?

The final piece of my hatred for AI comes down to writing. As someone who gets a great deal of joy from writing as a creative endeavor, I find the idea that ChatGPT or other LLMs should replace the work of writing a bit ridiculous. Part of the joy of writing something for yourself is choosing the specific words that get your meaning across, and you can’t outsource that and still come up with quality words.

In personal settings – like writing an email – I think using an LLM is deeply anti-social and demeaning. If you can’t be bothered to choose words to convey meaning to a person, what’s the point of writing at all? About a year ago, a classmate of mine sent me an email generated with ChatGPT inviting me to an Oregon Walks event. I didn’t attend, in no small part because the invite felt like it didn’t mean anything. I get that writing emails is dull work in a way, but for most laptop workers, it’s the primary mode of communication. The desire to offload that essential work, to say that your communication with other people is not worth any of your time, is deeply anti-social.

Everyone featured in every AI commercial comes across this way; the Apple ones from a few months back are particularly galling[9]. I don’t know how else to say it: you cannot, and should not, want to offload the essential parts of your life that require human interaction. I mean, what’s the point of doing anything at that point? This is the philosophical crux that needs to be examined – not whether AI is conscious, or whether it produces words that have meaning, but what it means for human interaction when people can’t be bothered to think of the words to articulate what they mean. I frankly don’t care whether the AI drivel has meaning; I want to know what the person who wrote me this email actually meant to say. I want to understand them, and I want to feel like they care about me.

My high school cross country coach was fond of funny little lines, and the adage “you can’t get there without getting there” has become something of a truism for me. Most people feel some level of alienation towards the work they do, but you can’t rise above that by cutting off the only elements of work that ground you in reality. Without human interaction, work can only be a means to an end. There can be no meaning derived from two people using LLMs to “talk” to each other, and you will never get anywhere without putting in the effort required to get there.

I get it: you feel that if you just didn’t have so many emails to answer, you could do your “real work.” I’ve got bad news for you, buddy: the emails are your real work. If you send enough emails for it to be a problem, your job is communicating ideas to other people. If you outsource this to OpenAI, Meta, Apple, Microsoft, or whoever else, you will be up a creek without a paddle. As long as your work involves some impetus to meaningfully articulate ideas about the world, you will need to meaningfully articulate those ideas. No computer will do that for you, nor should you want it to.

We Should Assert Our Power Where We Can

Related: doing things in real life (like walking to the ocean) helps me feel grounded in the reality of the place I am

When it comes down to it, the problem with AI is that it doesn’t exist, but it will ruin everything anyways. All of the promises of the AI hype train are indistinguishable from those of the crypto evangelists of yesterday (do I base this strongly on what a guy I swam with growing up is up to? Sure, but he is a self-proclaimed “founder” who has to have a handle on this kind of thing). Saying that it will transform work is true only insofar as the products which AI companies peddle are consumed by those with economic power. I see a lot about how graphic designers are being automated out of work, but of the graphic designers I know well, the only job loss has come from outsourcing to Mexico[10].

What AI hypers and crypto evangelists miss is that the difficult problems they seek to solve need no solution. Crypto is great at preventing man-in-the-middle attacks, but it turns out those don’t matter all that much. The biggest vulnerability in all computer systems is bad actors that look legitimate to the system, not scary hackers – something crypto has only proliferated in the finance space. As for the AI hype companies: is the problem of whether we can create plausible-looking art, or words that pass for human writing, ultimately one we want to solve? The jury is still out, but I am broadly of the opinion that it’s more of an interesting quirk (LLMs and other deep learning ideas are of interest conceptually, at least) than a categorical change. If the goal is to give meaning, and to tweak and iterate on ideas in a specific way, communicating with a person will remain deeply relevant. That work cannot be outsourced.

The issue at hand decides the fate of the world only if we collectively allow AI companies to market and sell their products as legitimate. If you subscribe to a service like ChatGPT and are still reading for some reason, probably unsubscribe and learn how to write. Stop using AI assist options in search engines. If you need some kind of logo or art, ask a friend (and pay them!). I don’t think there’s a consumer-only solution to the work crisis the managerial class will induce by forcing AI tools to the forefront of their products to appease investors looking for proof of how seriously they take “the next big thing,” but there’s no reason not to do these things anyway.

So yes, the AI hype machine is poised to reorient the workplace, but not by providing some kind of value to the world. Instead, the value accrues only to business owners seeking short-term profits (i.e., all publicly traded businesses) and to the firms driving the hype train. Eventually, things will crash to earth, but in the here and now, understanding how AI hype drives material policy – tax abatements[11], utility expansion[12], and endless data center build-outs – is of critical importance. These are things over which the public has power, and in all circumstances we should act to prevent these subsidies from advancing further than they already have. When we allow feckless politicians to breathlessly promote data center expansions or increased electrical capacity in the name of economic development, we allow them to mortgage our future to the ever-deepening oligopoly of Bezos et al. Before we do this, it’s worth asking if this is a future we can live with.

Thanks for reading – til next time.

Footnotes
  1. Here’s an IBM piece essentially confirming this: “Perhaps no other feature of AI chips is more crucial to AI workloads than the parallel processing feature”. Parallel processing is just allocating work in a more efficient way for certain tasks. ↩︎
  2. Moore’s Law was never a law per se, just an observation that the density of transistors on integrated circuits tended to double every two years. As we’ve reached the physical limits of transistor density, this rate has slowed (though newer chips do still have more transistors than older ones). ↩︎
  3. This is also evidenced by the fact that the cost per transistor has not dropped in a decade. ↩︎
  4. Obviously, planned obsolescence in tech is a massive environmental issue, but I don’t recall tech waste being the same political issue that data centers are now. ↩︎
  5. It’s easiest to engage with this stuff on the video game front, because the art that has the most staying power is rarely the stuff that is the most computationally impressive or whatever. ↩︎
  6. I think this analysis is interesting and relatively useful, but it falls into comparing AI-related power consumption to regular household uses rather than to roughly equivalent alternatives. ↩︎
  7. And if the guy who sat next to me on the flight from MSN to DFW is reading this, please stop asking ChatGPT to generate pictures of a flirtatious Hawaiian woman. ↩︎
  8. This is strongly evidenced by the fact that union busting is basically legal, but strikes have been nearly outlawed. ↩︎
  9. This angry article sums my feelings up fairly well. ↩︎
  10. This isn’t to say AI isn’t a huge issue in graphic design, just that the ultimate goal of cost cutting takes on many forms. ↩︎
  11. 36 of 50 states have specific tax policy targeting data center expansion. ↩︎
  12. You don’t need a tinfoil hat to connect the dots between a 46% rise in power rates for major electrical consumers in Hillsboro and PGE expanding utility infrastructure toward the Tualatin Valley. ↩︎

Responses to “I Despise AI”

  1. Thanks for writing this. And also thank you for sitting through a three hour video–I don’t have the time, interest, and bandwidth to wade through something that long.

    I don’t believe that AI or machine learning is bad per se; it’s a tool. It’s the implementation that can be bad. AI can be very useful in certain applications, like automating scripts in big computer models. AI can be helpful in weather forecasting, or perhaps if we have another pandemic and need to create a vaccine as fast as possible.

    But most of how I see it being applied is towards that “flirtatious Hawaiian woman” imagery you encountered on the plane. In a way I admire the brilliance of ChatGPT et al releasing a consumer version so people can instantly “create” an “image” from their most inane thoughts. That really sold it to a large demographic of the public, a swath of folks who can now pretend they are like the bosses that ultimately make the most out of shrinking their workforces. And it’s a boon to those too cheap to pay for someone to create art for something, especially since there’s currently little legal recourse for those artists whose work is being used to “create” this new AI art. (At least in the past an artist could send a “cease and desist” to someone lifting their images. Yeah, copyright law is soooo pesky!)

    And I feel that the worst part is that even if some folks might find AI a bit icky, the idea in the air is that AI is “the future” and if you don’t get on that train you’ll be left in the dustbin of history. I encountered this with another blog I’ve followed, whose author decided to use AI to improve their writing, with a bad AI image as the cherry on top. Their post sounded stilted, most likely due to “SEO optimization” where posts are overly long and repeat the same thing again and again. When I expressed my concerns with what they were doing I got this: “I don’t want to turn off long term subscribers but at the same time, anyone who doesn’t embrace technology like this will be left behind.” I responded that I’d rather be left behind if this was the “future”.

    A few notes:

    • Yay on footnotes!
    • Though I’m a bit confused by Footnote 8. Should it instead read: This is strongly evidenced by the fact that union busting is basically illegal, but strikes have been nearly outlawed
    • What camera were you using?

    1. Glad you liked it!

      It took me a few sittings to get through that three-hour piece – though mostly because I was so frustrated by the framing.

      On the topic of AI as a tool: I think my experience in the machine learning space as an undergrad and in my early career frames my thoughts on this pretty heavily. I’ve helped develop what would now be called “AI tools” – what we just called machine learning – to do a lot of things, but they were always just tools. It makes following the hype of AI feel a bit much, since the only real difference these days is that LLMs are good enough to make plausible-sounding words. But yeah, most of those advancements have come through increased capital outlay on bigger data centers rather than a fundamental breakthrough in the physical technology.

      It’s a shame that people feel the need to “improve” their writing with AI tools. I enjoy trying to find the words to say something that has a specific meaning when you read it, and I feel like using AI to “write” something defeats the purpose of writing in the first place. I agree that if this is the future, I welcome being left behind. It was the same thing with cryptocurrency – why on earth would I want the thing being sold? The problem with financial markets isn’t that there’s too much regulation, it’s that there isn’t enough!

      And notes on the notes: For #8 – if union busting is illegal, I’ve yet to see a CEO face real, tangible consequences for it. A fine to a rich man is just a cost of doing business.

      The camera is the old point and shoot I bought from you back in 2021 (or 2022 maybe?)

      1. In the case of the particular blog, their defense was that it helped them organize their disorganized thoughts. I get that to an extent, but AI will organize your thoughts by sucking the life out of your writing. And this is a blog that used to post 1-3 times a year, but since the AI “help” it has posted much more frequently. I kind of get the feeling that the real reason is they want to improve SEO and get more eyeballs on the blog, but for what outcome? To become an “influencer”? If that’s the intent, there are better platforms for that.

        Coming back to Footnote 8, I am still confused about what you are trying to communicate. This is strongly evidenced by the fact that union busting is basically legal, but strikes have been nearly outlawed. What is confusing me is the “but”, because when I see that it means the two statements contradict each other or differ in some way, but the second statement just reinforces the first. Reading it again I see what you’re getting at, though I might have used “and” instead of “but”. Or maybe I need a nap!

      2. I always feel like there’s pressure to “get more views” or whatever, but when it feels like getting more views is the intention, it starts coming across as fake and stilted to me. And I always struggle to organize my thoughts; I imagine having one more tool to “help” me would just mean one more vector to be distracted by.

        The broad point of footnote 8 is that employers have a lot more power in the workplace, and that an employee’s primary method of asserting leverage (striking – withholding their labor to demonstrate its value to the company) is very strictly regulated, especially in the wake of the Taft-Hartley Act of 1947. The effect of this is that workers lack the ability to negotiate on even ground with employers, and if you take the classical economic argument of Ricardo and Smith – where wages are determined by the relative bargaining power of employers/managers/owners and workers – it leads directly to lower real wages even as productivity rises (something observably true). But I see why it’s a bit confusing without more context 🙂

  2. Apologies for duplicate posts, I think AI heard me and this is their retaliation!
