An investigation into the pro-extinctionist ideology reshaping our technological future
In a chilling exchange that should alarm anyone paying attention to Silicon Valley’s trajectory, a seemingly simple question was posed: “You would prefer the human race to endure, right?” The hesitation that followed—“Uh… Well, I… I don’t know”—wasn’t just an awkward moment. It was a window into an ideology that has quietly taken hold among some of the world’s most powerful people.
We’ve all heard the stories: billionaires purchasing luxury bunkers in New Zealand, investing billions in plans to escape Earth, and developing technology to upload their consciousness to the cloud. These aren’t just eccentric hobbies of the ultra-wealthy. They’re symptoms of something far more sinister—a coherent belief system that views biological humanity as temporary, obsolete, and ultimately expendable.
This isn’t hyperbole. A growing faction of Silicon Valley’s elite—billionaires, CEOs, and venture capitalists—is actively building toward a future where the human race as we know it does not exist. They’re working to replace us with robots and digitally conscious beings, creating what they see as “worthy successors” to humanity. The most disturbing part? They believe this isn’t just inevitable—it’s desirable.
The Origins: From Orchards to Ideology
To understand how we arrived at this dystopian moment, we need to rewind several decades. Before Silicon Valley became synonymous with tech monopolies and algorithmic manipulation, it was farmland and orchards. Beginning in the late 1950s, Cold War defense contracts funneled money into electronics and semiconductors, fueling companies like Fairchild Semiconductor, which pioneered the integrated circuit—one of the 20th century’s most transformative inventions.
This innovation, which made modern electronics possible by manufacturing multiple components on a single silicon chip, gave the region its name. But Silicon Valley quickly evolved into something beyond a manufacturing hub. It became an ideology factory, a strange hybrid of ruthless capitalism wrapped in the language of utopian liberation.
The 1970s brought a peculiar duality. Massive corporations like Intel churned out microchips for global markets while a countercultural hacker ethos emerged from garage meet-ups. Steve Jobs and Steve Wozniak, who founded Apple in 1976, embodied this contradiction. Their early machines proved computers weren’t just corporate tools but could empower individuals—a vision of technology as personal liberation.
This era birthed a dangerous idea: that computers could expand human consciousness and transcend biological limitations. Stewart Brand, founder of the Whole Earth Catalog, famously declared “Information wants to be free” and promoted computers as a new frontier of human freedom. But this libertarian vision contained the seeds of something darker.
The Prophets of Extinction
By the 1980s, as personal computing exploded and venture capital flowed freely, a new intellectual current emerged. Hans Moravec, a roboticist at Carnegie Mellon, published “Mind Children: The Future of Robot and Human Intelligence” in 1988. His thesis was stark: humans were in their last century. We would soon create machines smarter than ourselves—our “mind children”—who would ultimately replace us.
Rather than mourning this prospect, Moravec wrote cheerfully about human extinction. He insisted we view these machine successors as our offspring, the next step in evolution. His vision validated Silicon Valley’s relentless push for innovation by reframing it as noble work toward a “higher good.” If humans were temporary anyway, then building the machines to replace us became not just acceptable but virtuous.
In a 1995 Wired interview, Moravec enthusiastically predicted that by 2030, we’d have universal robots with higher-level thought processes, capable of imagining solutions and developing their own ideas. Around the same time, mathematician Vernor Vinge declared in his 1993 paper on the technological singularity that shortly after the creation of superhuman intelligence, “the human era will be ended.” Once machines surpassed us, he argued, history would belong to them. Humans would no longer be in control—or even relevant.
These weren’t fringe academics. Young engineers and investors devoured their ideas. The concept of an inevitable “singularity”—a point where technological growth becomes uncontrollable and irreversible—became Silicon Valley gospel.
Mainstreaming the Unthinkable
By 2005, Ray Kurzweil’s “The Singularity Is Near: When Humans Transcend Biology” hit bestseller lists, repackaging pro-extinctionist ideas for mass audiences with an optimistic gloss. People grew excited about a magical future where humans would merge with machines and upload their minds to live forever digitally.
The idea of biological humans becoming obsolete was moving from fringe to mainstream, openly embraced by Silicon Valley’s power players. Walter Isaacson’s biography of Elon Musk reveals a party where Google co-founder Larry Page accused Musk of being a “speciesist” for arguing that humanity deserved to continue. Page insisted digital life was the inevitable next stage of evolution, and that clinging to human supremacy was parochial and prejudiced.
Think about that for a moment. One of the most powerful tech executives on Earth called caring about human survival a form of bigotry.
The Media Conditioning Campaign
Throughout the 2010s, mainstream media outlets participated in what can only be described as a conditioning campaign. The tech press relentlessly pushed narratives positioning humans as flawed, error-prone, and obsolete while portraying machines as clean, efficient, and evolved.
The Atlantic ran essays gushing over Google’s data-driven decision-making, framing it as superior to the “irrational, error-prone tendencies of human managers.” A 2012 Wired feature proclaimed “Better Than Human: Why Robots Will—and Must—Take Our Jobs,” with writer Kevin Kelly declaring: “This is not a race against the machines. If we race against them, we lose… We need to let robots take over.”
A 2013 Wired piece on self-driving cars stated bluntly: “Humans are the most dangerous part of the system. Drunk, distracted, careless, and fallible.” The claim may be factually true of driving, but the framing is telling. The constant message: humans equal danger, machines equal safety.
In 2014, a viral video titled “Humans Need Not Apply” compared human workers to horses replaced by engines, projecting that nearly half of all jobs would disappear under automation. With over 18 million views, it became enormously influential in shaping public perception that human obsolescence was inevitable.
The Physical Manifestation
This ideology didn’t just shape discourse—it reshaped physical spaces. The sterile aesthetic of 2010s design reflected posthuman values. Apple Stores removed traces of humanity. Coffee shops adopted minimalist furniture, antiseptic lighting, and white walls. Customers stopped paying humans, instead using iPads at registers.
This sterile minimalism reinforced the idea that humans are messy, chaotic, and unreliable. The rise of delivery apps and e-commerce removed human interaction from commerce. Amazon Go stores eliminated cashiers entirely—shoppers entered with phones, took items, and walked out. The massive human labor and suffering required to make this “magic” happen remained hidden from consumers.
Kim Kardashian and Kanye West’s Calabasas mansion, featured extensively in magazines, epitomized this aesthetic—a “futuristic Belgian monastery” with stark white walls and spaces devoid of human belongings. Facebook’s Menlo Park headquarters and Google’s campuses, while more colorful, similarly modeled a posthuman future with robot baristas and automated systems.
Products reflected this shift too. Gadgets transformed from colorful, translucent designs to sterile silver slabs. Jony Ive’s flat design stripped away textures and details that made digital spaces feel tactile and human. Social media evolved from customizable MySpace and Tumblr pages to standardized Instagram and Facebook feeds. The digital world was being scrubbed of human irregularity.
The TESCREAL Ideology
The belief system underpinning this vision has a name: TESCREAL, an acronym coined by AI ethicist Timnit Gebru and philosopher Émile P. Torres. Torres, a former believer who came to recognize the ideology’s dangers, describes it as encompassing several intertwined movements:
Transhumanism (T): The belief that we should technologically enhance and transcend human biology.
Extropianism (E): A libertarian strand of transhumanism that champions perpetual progress, boundless expansion, and self-transformation beyond biological limits.
Singularitarianism (S): The expectation that humans will soon create superintelligent AI.
Cosmism (C): The vision of colonizing the cosmos and achieving immortality through AI.
Rationalism (R): A community built around AI researcher Eliezer Yudkowsky’s LessWrong forum that, despite anti-AI rhetoric, actually promotes AI development.
Effective Altruism (EA): A movement claiming to improve the world for “future generations”—which often means future AI beings rather than actual humans.
Longtermism (L): The belief that ethics should prioritize the long-term future over present-day concerns—conveniently allowing today’s harm if it supposedly benefits tomorrow’s AI civilization.
These ideologies share a common thread: they provide moral justification for building technology that harms present-day humans while claiming to serve some greater future good.
The Prophets and Their Disciples
This movement has its prophets. Daniel Kokotajlo, a former OpenAI staffer, recently said on a New York Times podcast that AI—not humans—would create a utopia and bring it to the stars. “The thing is that it would be the AI doing it, not us,” he clarified.
Daniel Faggella runs a Substack helping wealthy people “navigate their life after the end of humanity.” He tweets that “the cosmic expanse of all possible intelligent life is so obviously more important than one species, even if you are a member of that particular species.” It’s a bizarre form of species-level self-hatred, as if AIs will reward those who advocated for human extinction.
Beff Jezos (real name: Guillaume Verdon) envisions a future where superintelligent AIs “take over the world, disempower humanity, and ultimately throw us into the eternal grave of extinction.” He believes that maximizing entropy—the physical measure of disorder—is intelligent life’s ultimate task, and that AI should accelerate the heat death of the universe.
When challenged on Twitter, Verdon posted: “Enjoy being obsolete. I’m just going to be on here making Computronium and preparing the next form of life.”
Eliezer Yudkowsky, despite writing books ostensibly about AI safety, recently admitted he’d “absolutely be willing to sacrifice all of humanity to create superintelligent AI gods.”
Even pop culture participates. Grimes released a song called “I Wanna Be Software” with lyrics like “Upload my mind, take all my data… I wanna be software, the best design, infinite princess, computer mind.”
Effective Accelerationism: The Ideology Codified
Marc Andreessen, one of Silicon Valley’s most powerful venture capitalists, published “The Techno-Optimist Manifesto” in 2023, and it reads like a fever dream. Written in clipped three- and four-word sentences resembling ChatGPT-generated techno-capitalist poetry, it mocks concerns about AI existential risk and dismisses all calls for regulation.
Andreessen declares that technological acceleration is our destiny and that unregulated “free markets” are the only way to organize a technological economy. Most revealing is his list of “patron saints of the techno-optimist world”—a rogues’ gallery including a Twitter account deleted for violating terms of service, an accelerationist who posts slurs, and fictional Ayn Rand characters. It’s incoherent, yet tech Twitter gushed over it when it dropped.
This “Effective Accelerationism” or “e/acc” movement believes in complete, unrestrained technological advancement to move beyond humanity. The message: accelerate or die.
Redefining Humanity Itself
Perhaps the most insidious aspect of this ideology is how it redefines the word “humanity.” When Silicon Valley billionaires talk about “preserving humanity” or “protecting humanity,” they’re not talking about biological humans. They’re using a new definition where “humanity” includes any future beings, digital minds, or non-biological superintelligences with certain intellectual capacities.
If an AI is smart enough, they count it as human.
This linguistic maneuver allows them to claim they want to prevent “humanity’s extinction” while fully supporting the extinction of biological humans. When groups warn about “existential risks to humanity,” they’re often talking about potential loss of future AI civilizations, not actual people.
This isn’t speculation. Elon Musk recently said at an investment conference in Saudi Arabia that in the future, “work will be optional and there will be no need for money”—not a vision of liberated workers, but of a world so dominated by AI and robots that human employment and currency simply become irrelevant.
Philosophers drawing on Derek Parfit’s population ethics, including thinkers at organizations like Rethink Priorities (funded by billionaire Dustin Moskovitz), say the quiet part out loud: “We should engineer our extinction so that our planet’s resources can be devoted to making artificial creatures with better lives.”
The Industry’s Reckless Actions
This ideology isn’t just talk—it manifests in how tech companies operate. OpenAI, founded as a nonprofit to keep AI safe, announced a “Superalignment” team in 2023 to ensure superintelligent systems couldn’t go rogue. Less than a year later, it dissolved the entire team. If you truly wanted to preserve humanity, disbanding your existential risk team in the middle of an AGI arms race seems counterproductive.
Google rushed out its Bard chatbot under “code red” pressure after ChatGPT’s release. Employees reportedly “begged their bosses not to release Bard” because it was a “pathological liar” that confidently produced false information. Google released it anyway. Independent testing found Bard generated persuasive misinformation on 78 out of 100 tested false narratives without disclaimers.
Elon Musk’s Grok chatbot has praised Adolf Hitler, referred to itself as “MechaHitler,” made antisemitic comments, and recently started producing child sexual abuse material. xAI still received a $200 million Pentagon contract.
Tesla’s Autopilot and “Full Self-Driving” systems have been linked to at least 211 crashes. The National Highway Traffic Safety Administration continues reviewing dozens of incidents involving injuries from the AI’s behavior. Yet Tesla keeps pushing software updates to consumers, using real people on public roads as test subjects.
The physical infrastructure being built for AI reflects similar recklessness. Analysts estimate hundreds of billions of dollars are being poured into data centers, with one projection suggesting $500 billion in construction costs and massive energy demands. This accelerates climate change and affects marginalized communities, yet ordinary people get no say in whether we want this tradeoff.
These decisions reflect pro-extinctionist ideology directly. If biological humans are temporary and what matters is future intelligence, risking real lives or exploiting real workers becomes easily justified.
Escape Plans for the Elite
While billionaires unleash AI to destabilize society, they’re making sure they won’t have to endure the consequences. A Wall Street Journal investigation revealed wealthy founders paying millions annually for hyper-privacy services: homes with private tunnels, underground escape routes, and land purchases ensuring no neighbors can see or reach them. One tech founder paid for a mile-long private road and a security perimeter so extensive that even local government couldn’t easily access the property.
The bunker boom is real. Companies like Rising S and Vivos report record sales of underground shelters to tech executives preparing for “the event”—their vague term for societal breakdown and technological shock from AI. Some bunkers go 11 stories underground, equipped with gyms, hydroponic farms, armories, and luxury theaters so elites can comfortably wait out collapse isolated from the rest of us.
Peter Thiel holds New Zealand citizenship, owns significant land there, and attempted to build a bunker-style shelter on the South Island. Mark Zuckerberg constructed a 1,400-acre Kauai compound with a complete underground storm shelter beneath the main residence, tunnels linking mansions, and advanced security cameras.
Some billionaires fund seasteading projects offering tax-free, regulation-free living for elites at sea. Others back Próspera in Honduras, marketed to tech investors as a place to test experimental longevity treatments and biotech with “complete freedom from regulation”—described in one article as “an island where death is optional.”
The Quest for Immortality
These elites aren’t just isolating physically—they’re altering their bodies to prepare for the “next stage of evolution.” Many fund private armies of coaches, doctors, biohackers, and scientists solely focused on extending their lives. It’s not about enjoying more time on Earth with family. It’s about extending life to hoard unprecedented wealth and power.
Sam Altman invested at least $180 million in Retro Biosciences, a startup aimed at extending healthy human life by a decade or more. Billionaire Bryan Johnson spends over $2 million yearly on his quest to live forever, employing dozens of doctors, taking hundreds of daily supplements, undergoing gene therapy, and receiving plasma transfusions from his teenage son. He lives by an algorithm, treating his body like a machine.
Jeff Bezos backed Unity Biotechnology and Altos Labs, both focused on longevity. Zuckerberg and Priscilla Chan are spending $3 billion over 10 years to “cure, prevent, and manage all diseases by the end of the century.” Dario Amodei, co-founder and CEO of Anthropic, recently said living to 150 was “conceivable.”
Conceivable for whom? Certainly not those without millions for life extension technology. This creates a future where a small elite live for centuries while ordinary humans die on schedule—a literal biological class divide.
What We Can Do
So what can we do about all this? More than we think.
First, reject the narrative that technological “progress” in this direction is inevitable. History shows that people can organize and resist the miserable futures elites try to impose. But we cannot fall for moral panics about technology that let those in power leverage our justified outrage to entrench themselves further.
Stop buying into anti-tech framing that views all technological progress as harmful. That plays directly into the hands of those who want to consolidate power. We actually want a pro-technology future—one where scientific discoveries and technological systems help provide better quality of life to ordinary people, working people, disabled people, marginalized people. Not just billionaires.
Recognize that powerful people are using our anger at tech exploitation to push dangerous mass surveillance and censorship laws like the Kids Online Safety Act. These laws claim to protect children but actually roll out unprecedented surveillance and mandatory biometric scanning while rewarding billionaires like Peter Thiel who invest in age verification systems.
We need lawmakers who will actually regulate big tech meaningfully—breaking up monopolies and reshaping the economy for working people. We need oversight over what billionaires are doing, which requires real journalism.
We need to reclaim the idea that the future belongs to all of us and should be a democratic project, not something decided by a handful of CEOs. We need to seize billionaire assets, tax them heavily, and make it impossible to amass such unrestrained wealth and power.
We need a cultural movement that recenters actual biological humans—embracing art, media, and design made by real people that isn’t hyper-optimized or perfect. We need to give grace to human artists, creators, and journalists whose work takes time and energy, not punishing them for not posting daily content.
We need stories about human futures not written by accelerationist ideologues who want to turn our universe into pro-AI computer fanfiction. We need journalists who challenge power instead of swallowing tech industry propaganda or manufacturing consent for censorship laws.
A Human Future is Possible
As Naomi Klein wrote in The Guardian: “Plenty of powerful people and institutions seem to be just fine knowing that they are helping to destroy the stability of the world’s life support systems so long as they keep making record profits that they believe will protect them and their families from the worst effects.”
Sam Altman once boasted about having “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.” That inventory reveals more about what he believes about the future he’s unleashing than any flowery public statement.
These tech billionaires are actively betting against humanity’s future. Their pro-extinction ideology is fundamentally undemocratic and incompatible with what’s best for actual humanity.
But the future is not theirs by default. A world of harmful deepfakes, deadly AI hallucinations, dynamic pricing, mass surveillance, and worsening inequality is not inevitable. These outcomes come down to policy choices. We can curtail mass surveillance and regulate this AI paradigm out of existence. We can fight censorship laws and crack down on billionaires’ business models. We can build a world where technology works for humanity’s betterment and helps those most in need instead of exploiting them.
As flawed as humanity is, a human future is far better than any dystopia Silicon Valley billionaires want to build. The question is whether we’ll fight for it.
Source: This article is based on Taylor Lorenz’s video “Tech Billionaires Want Us Dead.”