What's So Good/Bad About AI?
September 14, 2025
The Self-Driving Car Principle
In 2013, Tesla CEO and notable buffoon Elon Musk publicly predicted that the cars his company produced would be “90% autonomous”—i.e., self-driving—by 2016. In 2016, Musk publicly predicted that “you'll be able to summon your car from across the country” by 2018. In 2018, the deadline shifted again to “next year”. In 2019, he was saying “this year”. In 2020, he was also saying “this year”. In 2021, he was still saying “this year”.
Yes, okay, I know it’s easy (and fun!) to pick on Elon. But the interesting thing to me isn’t that he kept saying things like this, it’s that—for a while, at least—people believed him. Why did it seem so plausible that this once-impossible-seeming technological fantasy was finally within reach?
Self-driving cars have been a hallmark of science fiction for decades. As such, they’ve been a consistent dream of techbros who aim to be the ones to “bring the world the future”. Research on self-driving cars has spanned decades, but it was only in the 2010s that the dream suddenly came into clear focus. It shifted from the realm of academic papers into Silicon Valley hype. It was seemingly just on the cusp of turning into a real product.
But it’s 2025, and I still need to get a driver’s license. What gives?
Well, it’s not that the technology stalled, per se. Driver assistance systems in cars today have progressed enormously and have undoubtedly stopped many accidents and saved many lives. Waymo runs 250,000 rides a week. But driver assistance tech still requires a driver to assist. Waymo only operates in a small handful of very specific, very controlled geographic locations.
The problem is the nigh-infinite, ever-increasing, ever more specific list of edge cases which a self-driving system would need to handle safely in order to be “road-ready” in any arbitrary condition. Real life is infinitely more complex than any rehearsal or pre-training can properly account for, so there will always be dramatic, heavily publicized situations where the system fails to act correctly.
I also think there’s something else going on here. We focus on (and occasionally point and laugh at) the times a self-driving Tesla fucks up a lane change or swerves into a ditch. But these are all things human drivers do all the time, and it’s never headline news because it’s not shocking.
We expect human beings to be flawed, but we expect our technology to be perfect.
No system will make the correct decision 100% of the time. The goal of engineers and programmers is to get the correctness rate of any system asymptotically close to 100%. But you always need to build in safeguards for if and when things don’t work. We still need humans in the car.
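To make the “asymptotically close” problem concrete, here’s a rough back-of-the-envelope sketch in Python. The numbers (per-decision accuracy, decisions per trip) are invented purely for illustration, and treating every decision as independent is a big simplification, but it shows how even a tiny per-decision error rate compounds across enough decisions.

```python
# A hypothetical illustration: all of these numbers are made up.
per_decision_accuracy = 0.9999  # assume the system gets 99.99% of decisions right
decisions_per_trip = 1_000      # assume a trip involves roughly 1,000 distinct decisions

# Probability that an entire trip goes by without a single mistake,
# assuming (unrealistically) that every decision is independent.
flawless_trip = per_decision_accuracy ** decisions_per_trip
print(f"Chance of a flawless trip: {flawless_trip:.1%}")  # ~90.5%

# Spread over a million trips, that last sliver of imperfection adds up.
trips_with_a_mistake = (1 - flawless_trip) * 1_000_000
print(f"Trips with at least one mistake, per million: {trips_with_a_mistake:,.0f}")  # ~95,000
```

Pushing per-decision accuracy from 99.99% to 99.999% shrinks that number, but it never reaches zero, which is exactly why the safeguards still matter.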
Perhaps more to the point, we don’t trust a system which has no humans in the car. There’s something inherent in us that refuses to trust a thing that can make mistakes for reasons we don’t understand, or in ways we can’t intervene in.
This is why I’m skeptical of the claims—from AI boosters and doomers alike—that AI is on the cusp of taking all our jobs. I’m skeptical because I don’t see many people saying that AI today is ready to take all our jobs. They just point to the current system and say “Look how much it improves, it only needs to be a little bit better and then it’ll be better than us at everything!”
Self-driving cars have taught me to be skeptical of this idea. Even if we get to a point where an AI can do 90% of someone’s job, what about that last 10%? 1%? 0.1%? How much do you think people will trust their “AI coworker” if they know in the back of their heads that it could be wrong and wouldn’t even be able to tell, since it doesn’t understand the difference between wrong and right, only the difference between looks wrong and looks right?
And which manager or producer would want the risk of giving AI some core function and then it fucking something up because it hallucinated a file or read malicious instructions online? Who’d be responsible for that if not the person who deployed the system? How many high-profile cases of “AI chatbot leaks SSNs of all employees” will make the news before executives get anxious about bad press?
Well, what if a human employee fucks something up? That happens all the time. True, but like with a human car accident, it’s not something that sticks in our heads. We remember every time the system messes up because we blame the system as a whole, while the blame for human error stays isolated to that particular human. It’s going to be incredibly hard for an AI system to build trust under those conditions.
Of course, the people at the head of this wave don’t see it that way. The money they’ve put into it—billions and billions of dollars—will only ever be remotely worth it if they can credibly replace all intellectual and physical labour in the near future. You simply cannot pay off this outrageous amount of investment through amateur programmers vibe-coding to-do apps or high school students cheating on their English assignments.
Could it happen? Maybe. But maybe not. The Self-Driving Car Principle shows us that a capability which seems close but requires total perfection—not just asymptotic perfection—can be much further away than rapid progress makes it appear.
Oh, by the way, Elon is still out there in 2025 saying that full self-driving capability is “coming this year”. We’ll see how that one ages, I guess.
~
Gulf Of Usefulness
I think one of the big reasons people have such radically different opinions on AI right now is that the gulf of usefulness between software developers and literally anyone else is enormous.
I think this is part of what creates the “techbro bubble”, where people within tech get more and more sure that AI is going to be a big deal, while people outside of tech get more and more convinced it’s just a useless gimmick that’s being shoved down their throat.
The thing that convinced me that AI tools weren’t just some kind of NFT-style fad was seeing just how many people were actively using them at my co-ops, at the co-ops of people I know, on the personal side projects of people I know…
Action is the strongest signal, and the message is clear: people in tech find AI legitimately useful and use it without being forced to.
(This isn’t a value judgement: people use cars because they find them useful, and I think people know my opinions about car-centric infrastructure. But I’d never call cars a gimmick.)
But for the life of me, I can’t possibly picture how I would be using AI right now in a non-destructive way if I were a doctor, or a teacher, or a police officer, etc. AI is great for tasks which require low precision and produce immediately testable results that you can verify and iterate on yourself (and which aren’t more fun to do yourself: i.e., writing).
(Perhaps I just lack imagination, but I also think there’s evidence for this in how few successful AI-based products there are outside of the startups/products which promise to make you better at programming somehow.)
~
Being Intentional
I don’t think AI is an ontologically evil technology. Maybe I’m an optimist, or naive, but I don’t think it has to be inherently harmful. I think it can be used in bad ways and have bad effects on society at large, but I think there is a real place for it. The fact that so many people use it today proves that there’s something useful there, and it isn’t going away. Again, this is not NFTs.
Maybe it’s that optimism talking again, but I think technology in general can make people’s lives better in tangible ways. No matter what bad things the internet or the smartphone has done to the world, I think we’re net better off for being able to have easy access to information, or for my grandmother to be able to text her grandchildren and get an instantaneous response from hundreds of kilometres away. I wouldn’t have gotten a Computer Science degree if I didn’t believe that, at least in the aggregate, my work would be good for the world.
There’s still something that makes me nervous, though.
The point of a new technology is to make something easier—to remove friction. The shovel made it easier to dig holes, the printing press made it easier to create a book. The internet made it easier to find and share information with people around the world at an extremely low cost.
Existing technologies will evolve over time to have less and less friction, and almost always, the most frictionless technology will win. The problem is that your mind only really weighs friction at the exact moment of interaction; it has no good way of accounting for pain added over the long term—and we’re even worse at considering the potential opportunities lost by taking the easy way out.
Social media is a good example of this. We went from disconnected internet blogs, to aggregated-feed social media platforms like Facebook or Twitter, to algorithm-driven platforms like TikTok. Each step reduces friction by reducing the agency of the human beings involved. You go from intentionally visiting individual blogs, to users linking to each other on a shared feed, to a system that recommends things purely algorithmically, trying to maximize your time spent on it.
From the perspective of removing friction, TikTok and its ilk are about as good as it gets: you can easily slip into spending hours scrolling through videos, each just long enough to hold your attention, so you never realize what else you’ve lost: time you could have spent enjoying culture which wasn’t total throwaway junk garbage.
By letting the machines make decisions for you, you’re missing a chance to be intentional about your own life. You’re letting it be determined by something other than yourself. I don’t understand why this doesn’t drive everyone in the world crazy.
This is when tech is at its worst: when it encourages you, subtly, to give up agency in your life for the sake of convenience, rather than giving you more agency.
If this—tech leading to the narrowing of the spirit—seems inevitable to you, I disagree. It’s only this way because society, as it’s constructed today, permits it to happen. It’s a human-engineered problem, and one that we can engineer our way out of.
Where will AI take us, then? Will it increase our agency or reduce it? Initial results seem very discouraging.
AI allows people to frictionlessly abstract away the process of learning skills. It’s most obvious in the flood of AI slop images we see on the internet—bad art made by people who might otherwise have gone out and learned how to make art.
I’m under no illusion that everyone who ever generated an image on ChatGPT would have become an artist given time. But I imagine the young person who wants to make a comic and who, instead of being “forced” to learn to draw and developing a love of the craft, just plugs their request into a chatbot and calls it a day, never knowing enough to know what they could have had with just a little more work.
Now imagine this same effect—reducing the need for anyone to ever learn skills—and multiply it across all disciplines, and across the whole population. Imagine the people who will never appreciate The Great Gatsby because they just plugged their high school essay questions into ChatGPT. Imagine the people who won’t learn an instrument because AI can generate a song for them. Imagine the people who won’t ever learn to code because AI can vibe-code them anything they ever want.
(This last one is the one which, personally, bothers me the most. I have a grudge against sloppy, thoughtless design, which is exactly what AI encourages: slapping bits and pieces together that look right and barely work. On a fundamental level in my soul, this drives me nuts.)
I should be careful not to use passive voice here. “AI” didn’t just pop into existence from space; it was built by companies creating products and chasing particular goals. It was entirely within their power to make tools that merely reduce the friction of doing good work, giving context or support rather than totally supplanting the work being done.
But in a capitalist marketplace, the option with the least friction wins, no matter the long term harm. Every company knows this, and the last thing any of them will willingly do is lose.
The thing that gives me the most hope is that we now, as a society, should know better than this. I know I’m not the only one to be thinking this. Everyone, on some level, whether they act on it or not, understands that TikTok isn’t an empowering, fulfilling use of their time.
There are things that, even in a free market, we don’t let companies do because they’d be socially destructive. Otherwise they’d start putting nicotine in my coffee at Tim’s.
Now may be the last, best chance to, as a society, set the rules for what AI should be doing, and how the companies developing it need to act to avoid the social harms we know it could cause if left unchecked. We saw what unrestrained, engagement-maximizing social media did to our brains, and it’s now in our power to make sure it isn’t allowed to happen again.
I don’t know what this future should look like, but I do know what it absolutely cannot look like.
~
Why’d You Kill My Friend, Sam?
GPT-5 is the latest and, allegedly, greatest AI model produced by OpenAI, creators of ChatGPT. It was rolled out worldwide to much fanfare on August 7th, 2025.
To put it mildly, this launch did not go well.
The biggest uproar came from people who had become emotionally attached to the specific quirks of the previous model, GPT-4o. They’d grown accustomed to its specific phrasing, its tone, and especially the way it flattered them with endless praise, companionship, and words of affection on-demand.
The launch of GPT-5 involved the abrupt disabling of GPT-4o, which shocked users when they discovered that their “close friend” had suddenly vanished and been replaced by a new system—colder, more clinical, and far less willing to shower them with praise.
Much has been written about what this means about our scary new age of ever-more intimate human-machine relationships, and what it means for society at large that so many people have fallen into parasocial relationships with next-word predictors on their computer.
But there’s something else I don’t see people talking about: just how caught off guard OpenAI was when all of this happened.
The optimistic outlook (which OpenAI would like you to believe) is that the scientists there have thoughtfully considered all aspects of the new world we’re stepping into and are working to maximize benefits for everyone. The cynical outlook is that OpenAI are leading us deliberately into a dark future of dependency on their systems for everything.
The truth seems to be something much more frightening: they’re just winging it.
They’re making changes based on whatever seems best at the moment. That’s it. They don’t know what impact their technology will have on their users or society at large. The people at the center of the movement, who are theoretically the most able to predict these kinds of things, seem utterly clueless.
No one knows where all this is going to lead. Anyone who claims to know is either lying to themselves or lying to you.