May 16, 2025
The Philosophical Baggage of AI [Part 3]
This is my last post in this AI series, thank god. I, too, am sick of AI as a primary topic of conversation.
While writing the last post, I wanted to end with a quote I couldn’t quite remember from John Maynard Keynes.
So I asked ChatGPT, which said: “He imagined a world, about a hundred years in the future (so around 2030), where the workweek might be reduced to 15 hours because machines and productivity gains would meet most material needs. People would then be free to explore the “art of life”—culture, relationships, creativity, etc.”
The actual quote:
“For the first time since his creation man will be faced with his real, his permanent problem—how to use his freedom from pressing economic cares, how to occupy the leisure which science and compound interest will have won for him, to live wisely and agreeably and well.” – John Maynard Keynes
It’s ironic that the agent of destruction itself summarized the concept so well – and striking just how off base Keynes’s prediction was.
Why was he so off base?
Our systems create our problems, but the system is a series of choices controlled by a handful of people.
That’s when I realized I needed a part 3.
I’m ending this series with a high-level overview of how insane tech bros are.
“The future of AI felt like you could almost touch it,” the senior game designer1 remembers.
“The other question we often philosophized about was, ‘Why should humans be the only ones that create things?’ Why can’t we have the burden of creativity handled by a piece of AI?”
They imagined AI eventually writing music and poetry and even designing games.” (Parmy Olson, Supremacy)
…
BURDEN?!
Excuse me, the BURDEN of creativity?! What in the actual f…
I’d be hard-pressed to find a single person in real life who would call their hobby or art a “burden.” We are now 50 years removed from Mihaly Csikszentmihalyi’s research on the flow state. Most people I know are asking why AI has to take over creative writing instead of the laundry.
There’s a massive disconnect between what tech bros see as the coming utopia (because they’re rich untouchable MOFOs) and the daily experience of normal human beings. I know because I’ve read and listened to a lot of stuff, for a long time, about the tech brotopia, and as depressing as it is, it’s important that you know it too.
Here’s how I would sum up the value system of the handful of people making the decisions about the technology that impacts every area of your life.
Tech Bro Core Beliefs:
- We are special little boys, more deserving of our wealth, status, etc., than other people.
- Because we’re so special, it is our birthright to “save the world” (translation: make ungodly sums of money by any means possible).
- We are terrified of death, which puts us on the same mortal plane as the peasants.
- Also we’re terrified of peasants (it’s why we all have bunkers).
- AI is our god, and generative AI is the second coming.
I’m only half joking. I point out the first four items so you can watch for them – I don’t have space for all the wild examples I’m thinking of, but you’ll start to notice.
I want to talk about #5.
“…atheists and tech nerds creating Calvinism.”2
Artificial Intelligence is the tech bro religion. They are a cult of fanatics who seem to genuinely believe in a savior made of silicon.
For this to make sense, I need to define what seems to be their presented3 operating principle: effective altruism.
Effective Altruism
Sam Bankman-Fried (SBF) was the highest-profile believer of this nonsense, and he openly admitted it was a hustle. It’s the opium of the tech people.
Effective altruism is this idea of doing the most good – mathematically – extrapolated to a level that is mind-numbingly stupid.
Observe: You could volunteer to give food to poor kids. But you could have more impact if you took a tech job and made a shit ton of money so you could pay lots more people to feed poor kids. But those are only today’s poor kids! The future, by virtue of being the future, contains way more poor kids, so those are the ones we should save. And the way we do that is to make even more money but also work towards AGI, which will magically know how to solve the problem of poor kids entirely!
Wait, what is AGI?
AGI stands for Artificial General Intelligence, and this is the holy grail: truly human-level intelligence, but artificial. ChatGPT is just what they’ve created while working toward that end goal – a tipping point they call the Singularity.4
No, really. That’s what they call it.
First of all, they actually believe this can be achieved. A real, “live” HAL 9000, or that weird David robot from Prometheus.
But they also believe that AGI has the potential to end humanity, therefore, we must create it as quickly as possible before someone else does, so we can… prevent… the end of humanity?
“But perhaps the most disturbing ideologies that were to percolate around AGI were those focused on creating a near perfect human species in digital form. This idea was popularized in part by (Nick) Bostrom’s Superintelligence. The book had a paradoxical impact on the AI field. It managed to stoke greater fear about the destruction that AI could bring by “paper-clipping us,” but it also predicted a glorious utopia that powerful AI could usher in if created properly. One of the most captivating features of that utopia, according to Bostrom, was “posthumans” who would have “vastly greater capacities than present human beings” and exist in digital substrates. In this digital utopia, humans could experience environments that defied the laws of physics, like flying unaided or exploring fantastical worlds. They could choose to relive cherished memories, create new adventures, or even experience different forms of consciousness. Interactions with other humans would become more profound, because these new humans would be able to share thoughts and emotions with one another directly, leading to deeper connections.
These ideas were irresistible to some people in Silicon Valley, who believed such fantastical ways of life were achievable with the right algorithms. By painting a future that could look like either heaven or hell, Bostrom sparked a prevailing wisdom that would eventually drive the Silicon Valley AI builders like Sam Altman to race to build AGI before Demis Hassabis did in London: they had to build AGI first because only they could do so safely. If not, someone else might build AGI that was misaligned with human values and annihilate not just the few billion people living on Earth but potentially trillions of perfect new digital human beings in the future. We would all lose the opportunity to live in nirvana. Along the way, Bostrom’s ideas would also have dangerous repercussions as they drew attention away from studying how artificial intelligence could harm people living in the present.” (Parmy Olson, Supremacy)
Now you know why Elon Musk is working on Neuralink. And also why he’s suing OpenAI for not benefitting humanity or something while simultaneously building a competing AI company. Right.
OK, I know this is all weird, but it’s really only scratching the surface. There’s so much more I could go into, but I want to leave you with this:
The future dangers of AI are being used to distract from the present dangers of AI.
The paper clip story mentioned above is the idea that a superintelligence programmed with the objective to create paperclips might use every resource in the world to create paperclips (“just doin’ my job!”) and accidentally cause the extinction of humanity.
But that’s not the current present danger.
Have you seen the Internet? Do you know how much toxic, racist, misogynist, violent, hideous, horrible content is floating around? AI was trained on the entire Internet. The worst of humanity. And primarily the white, Western, English-speaking version of humanity.
This goes way beyond irritation that “show me a CEO or doctor” ChatGPT prompts only generate images of men.
“(Timnit Gebru) came across an investigation into software being used in the US criminal justice system called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which judges and parole officers used to help make decisions about bail, sentencing, and parole.
COMPAS used machine learning to give risk scores to defendants. The higher the score, the more likely they were to reoffend. The tool gave high scores to Black defendants far more than white defendants, but its predictions were often erroneous.
COMPAS turned out to be twice as likely to be wrong about future criminal behavior by Black defendants as it was for Caucasian ones, according to a 2016 investigation by ProPublica, which looked at seven thousand risk scores given to people arrested in Florida and checked if they’d been charged with new offenses in the next two years. The tool was also more likely to misjudge white defendants who went on to commit other crimes as low-risk. America’s criminal justice system was already skewed against Black people, and that bias looked set to continue with the use of inscrutable AI tools.” (Parmy Olson, Supremacy)
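To make that “twice as likely to be wrong” finding concrete: the core of ProPublica’s check was comparing false positive rates across groups – the share of people who were labeled high-risk but did *not* reoffend within the two-year window. A minimal sketch of that calculation in Python (the numbers below are invented purely for illustration; they are not the real COMPAS data):

```python
# Toy sketch of a per-group false-positive-rate check, in the spirit of
# ProPublica's COMPAS analysis. All numbers here are made up for illustration.
# A "false positive" = predicted high-risk, but did NOT reoffend in two years.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    false_positives = sum(1 for pred, actual in records if pred and not actual)
    actual_negatives = sum(1 for _, actual in records if not actual)
    return false_positives / actual_negatives

# Hypothetical defendants: (high-risk label, actually reoffended)
group_a = [(True, False)] * 45 + [(False, False)] * 55 + [(True, True)] * 30
group_b = [(True, False)] * 23 + [(False, False)] * 77 + [(True, True)] * 30

print(round(false_positive_rate(group_a), 2))  # 0.45
print(round(false_positive_rate(group_b), 2))  # 0.23
```

In this invented example, group A is roughly twice as likely as group B to be wrongly flagged as high-risk – the same *shape* of disparity ProPublica reported, which no amount of overall accuracy tuning makes visible unless you split the errors out by group.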
“In 2018, Amazon realized that an internal AI tool that it used to sift through job applications kept recommending more male candidates than female candidates. The reason: the tool’s creators had trained it on résumés submitted to the company over the previous ten years, most of which came from men. The model had learned that résumés with male attributes were more desirable as a result. But Amazon didn’t – or wasn’t able to – fix the tool. It just shut it down completely.” (Parmy Olson, Supremacy)
There is no transparency around how these LLMs (Large Language Models) have been trained, why they return the results they do, or even how to fix them. And yet OpenAI continues to churn out new models with no guardrails, while tech companies scramble to incorporate them into every piece of software available. This is the real problem we should be talking about.
How can we discern “good” and “bad” uses of AI?
Imagine something as standard as a house with bad water pressure because the original plumbing used a thinner gauge of pipe from ye olden days.
Everything in society is like this; issues layered upon issues. Maybe things changed, but the system didn’t, so it will continue to reinforce the old, outdated norms if we don’t step in. We’re on the precipice of an even more insidious invisible software layer, and people have no idea it’s happening.
On top of that, the people who are making all the decisions around these software layers are INSANE. I truly mean it when I say I’m scratching the surface on how wild Silicon Valley tech bros really are. It’s simply beyond the understanding of most normal, reasonable people.
Yes, I use AI most days, and yes, it does help my business to be more efficient. But we need to be judicious in our use of tech – when it’s appropriate, when it’s not, how to understand the risks, and when to speak up. We have the numbers; we just need the knowledge. Keep an eye on what’s happening, read between the lines, and always ask who it benefits.
“When technologists imagined what a superintelligence could do if it went rogue, they were seeing echoes of themselves in a world where businesses were allowed to become unstoppable global monopolies. The most transformative technology in recent history was being developed by handfuls of people who were turning a deaf ear to its real-world side effects, who struggled to resist the desire to win big. The real dangers weren’t so much from AI itself but from the capricious whims of the humans running it.” (Parmy Olson, Supremacy)
Part 1: OK Fine, I’ll Talk About AI >
Part 2: How I Use AI – My Top 5 Use Cases >
Reading/Listening Recommendations:
- Supremacy by Parmy Olson (Finished reading this, and it’s thorough and layman-friendly. It doesn’t get particularly deep into the philosophy side of things.)
- The Zizians: How Harry Potter Fanfic Inspired a Death Cult – Behind the Bastards Podcast (I have listened to this podcast for years, and this is, IMO, the WILDEST story Robert has ever covered. It shows how badly the effective altruism brain rot can go. Note: this is a four-part series, and it’s not suitable for work or kids.)
- Transhumanism
- The Dark Enlightenment and Curtis Yarvin
Footnotes:
1 That ‘senior game designer’ was Demis Hassabis, the man running Google DeepMind (and the Joker to the Batman that is Sam Altman of OpenAI/ChatGPT).
2 Robert Evans in the podcast linked above. Perhaps his finest (and funniest) analogy to date.
3 I think their organizing principle is just selfishness, but we’re still a few weeks out from them saying that directly.
4 See what I mean? Second Coming, the Rapture, the Singularity. Religious level language here.
Where does the “atheists and tech nerds creating calvinism” quote come from? That’s an interesting thought.
Neoliberalism and meritocracy seem like the logical end of the Protestant/Calvinist work ethic that Weber called “the spirit of capitalism.” The uniquely Calvinist idea is that very, very few are chosen/”saved,” and it has nothing to do, actually, with freedom or your choices or merit. Calvinism also denies people are free and can achieve anything meritorious on their own. What this results in is a social shame engine that drives people to act as they imagine the chosen “elect” would act, and then what ensues is an insane competition over appearances. Or, very sensitive people or those deemed failures become depressed and end their lives. There was a crop of suicides directly related to Calvinist introspection in 17th – 18th century England. At the same time in northern Europe, there was a wave of suicidal mothers who killed their children or other family members so they would be executed themselves but allowed to be spiritually forgiven first, leaving them a chance of not going to hell. (This led to suicide being decriminalized.) This shows how deeply affected the early modern capitalist countries were by the theological logic in their cultures. Most people just coped by playing the game of striving for status.
Fake it and hope you make it. This uniquely weird element in a radicalized Calvinist logic or psychology is actually destructive of value (and markets and everything else) and leads to all kinds of vices — narcissism and pathological lying, for sure. For various reasons the Calvinist/Puritan element in America iterated through much less toxic and sometimes reasonably decent cultural expressions in the past. I think when it caught on in forms of Fundamentalism and Prosperity Gospel in the south and the west, it may have morphed into something worse than the old Yankee strains, which did have a concern about limits and “what will and will not wash.” A fundamentalist, apocalyptic, prosperity gospel in secular form is a way of describing the late stages of the “California Ideology”/dotcom neoliberalism.
It’s a quote from Robert Evans, host of Behind the Bastards podcast, in I believe the first episode of The Zizians that I linked above (in the recommended section). It’s one of the funniest, most on-point things I’ve ever heard him say.
And yeah, what you’ve described is exactly it, and it’s apparently a thing that happens to these effective altruist types. It’s also very much tied to unconscious work beliefs in the US. As written by Oliver Burkeman:
“Social psychologists call this inability to rest “idleness aversion,” which makes it sound like just another minor behavioral foible; but in his famous theory of the “Protestant work ethic,” the German sociologist Max Weber argued that it was one of the core ingredients of the modern soul. It first emerged, according to Weber’s account, among Calvinist Christians in northern Europe, who believed in the doctrine of predestination: that every human, since before they were born, had been preselected to be a member of the elect, and therefore entitled to spend eternity in heaven with God after death, or else as one of the damned, and thus guaranteed to spend it in hell. Early capitalism got much of its energy, Weber argued, from Calvinist merchants and tradesmen who felt that relentless hard work was one of the best ways to prove – to others, but also to themselves – that they belonged to the former category rather than the latter.
Their commitment to frugal living supplied the other half of Weber’s theory of capitalism: when people spend their days generating vast amounts of wealth through hard work but also feel obliged not to fritter it away on luxuries, the inevitable result is large accumulations of capital.” (Oliver Burkeman, Four Thousand Weeks)
So that, but for AI guys. With emphasis on the ‘fake it til you make it’ you mentioned, or in tech bro cases, break the law until you accumulate enough power to be above the law and never face consequences for stealing copyrighted material, etc., while also making sure to burn the ladder for any potential competing tech companies (see also Jack Dorsey saying we should do away with copyright entirely).