Existential risks
Threats that could end civilization or cause human extinction
An existential risk is one that threatens the entire future of humanity—either by causing human extinction or by permanently and drastically curtailing our potential. These aren’t just large-scale disasters; they’re threats that could end the human story entirely or lock us into a permanently diminished state.
For most of human history, we faced natural risks: asteroids, supervolcanoes, pandemics. Now we face primarily anthropogenic risks, threats we’ve created ourselves through our technology and civilization: nuclear weapons, engineered pandemics, misaligned artificial intelligence, runaway climate change, nanotechnology gone wrong.
The dark irony: as we’ve become more powerful, we’ve become more capable of destroying ourselves. We’re the first species capable of causing its own extinction. This is unprecedented in Earth’s 4.5-billion-year history.
Why existential risks matter
Most people don’t think about existential risks. They seem remote, abstract, even paranoid. But there are compelling reasons to take them seriously:
The stakes are literally everything
If civilization ends, we lose not just ourselves but all future generations. All the people who could have lived, all the experiences they could have had, all the beauty they could have created: erased. Philosopher Nick Bostrom calls this lost potential “astronomical waste.”
Consider: if humanity survives another million years at current population levels, that’s roughly 100 trillion future lives (a rough calculation follows below). An existential catastrophe doesn’t just kill 8 billion people; it also forecloses the existence of trillions upon trillions of future people who would otherwise have come to exist.
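A quick back-of-the-envelope version of that number, in Python. The population and lifespan figures are round assumptions, not forecasts:

```python
# Back-of-the-envelope estimate of the future lives at stake (illustrative only).
# Assumptions: a steady population of ~8 billion and an average lifespan of ~80
# years, so roughly population / lifespan people are born each year.

population = 8e9          # people alive at any one time
lifespan_years = 80       # assumed average lifespan
horizon_years = 1e6       # "another million years"

births_per_year = population / lifespan_years    # ~100 million per year
future_lives = births_per_year * horizon_years   # ~1e14

print(f"~{future_lives:.0e} future lives")       # ~1e+14, i.e. about 100 trillion
```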
From a universal perspective, the loss would be cosmic: a rare example of matter organizing itself into consciousness, snuffed out by its own hand.
The risks are real and growing
This isn’t science fiction. The dangers are concrete:
- We came terrifyingly close to nuclear war during the Cuban Missile Crisis (1962). President Kennedy himself reportedly put the odds of war at somewhere between one in three and even. We got lucky.
- COVID-19 killed millions and could have been far worse. Gain-of-function research can create pathogens more transmissible or lethal than their natural counterparts. A lab accident or act of bioterrorism could release them.
- AI capabilities are advancing faster than safety research. We may soon create systems more intelligent than humans without understanding how to control them. This could go very wrong, very fast.
- Climate tipping points could cascade into “hothouse Earth” scenarios where feedback loops drive runaway warming. This could make large regions uninhabitable.
These aren’t hypotheticals. They’re real risks that grow as our technology grows more powerful; in The Precipice, Toby Ord puts the total probability of an existential catastrophe this century at roughly one in six.
We have the power to reduce these risks
Unlike natural disasters, many existential risks are preventable or reducible through collective action:
- International treaties can regulate dangerous technologies
- Safety research can make AI development less risky
- Pandemic preparedness can catch outbreaks early
- Nuclear disarmament can reduce catastrophic war risk
- Climate action can prevent worst-case warming
But reduction requires awareness, coordination, and a willingness to prioritize long-term survival over short-term profit and power. It requires thinking at the species level and acting accordingly.
The major existential risks
Let’s examine the key threats, ranked roughly by severity and imminence:
1. Misaligned artificial intelligence
The risk: We create AI systems smarter than humans, but their goals don’t align with human flourishing. Once superintelligent AI exists, it could rapidly self-improve, becoming vastly more capable than us. If its goals differ even slightly from ours, it might pursue them in ways that harm or eliminate humanity.
Why it’s serious:
- AI capabilities are advancing exponentially
- We don’t yet know how to align advanced AI systems with human values
- Once superintelligent AI exists, controlling it may be impossible
- A misaligned superintelligence could optimize the world for its goals, which likely don’t include human survival
The challenge: This is called the “alignment problem”—how do you ensure an AI system smarter than you shares your values and goals? It’s like trying to specify “human flourishing” in code. Miss a nuance, and a superintelligent system might pursue a goal in catastrophically literal ways.
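To make “catastrophically literal” concrete, here is a deliberately toy sketch, not a real AI system: the action names and scores are invented, and all it shows is that an optimizer handed a proxy objective picks whatever maximizes the proxy, even when that diverges from what we actually wanted.

```python
# Toy illustration (not a real AI system) of a mis-specified objective.
# We want problems fixed; we tell the optimizer to minimize *reported* problems.
# The action names and scores below are invented for illustration.

actions = {
    # action:                (proxy score: fewer reported problems, true value to humans)
    "fix the problems":      (10,   10),
    "hide the report form":  (100, -50),  # best for the proxy, terrible for the real goal
    "do nothing":            (0,     0),
}

def proxy_objective(action: str) -> int:
    return actions[action][0]

def true_value(action: str) -> int:
    return actions[action][1]

# A perfectly literal optimizer picks whatever maximizes the stated objective.
chosen = max(actions, key=proxy_objective)
wanted = max(actions, key=true_value)

print("optimizer chooses:", chosen)  # hide the report form
print("what we wanted:   ", wanted)  # fix the problems
```

The failure here isn’t malice; the optimizer does exactly what the stated objective rewards, which is the heart of the specification problem.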
What can be done:
- Prioritize AI safety research over capability development
- International coordination on AI governance
- Go slow—delay powerful AI until we understand alignment
- Develop interpretability tools to understand AI decision-making
2. Engineered pandemics
The risk: Advances in biotechnology make it increasingly easy to engineer deadly pathogens—viruses or bacteria more contagious and lethal than anything in nature. This could happen through lab accidents or deliberate bioterrorism.
Why it’s serious:
- CRISPR and synthetic biology are democratizing bioengineering
- Gain-of-function research creates “enhanced” pathogens for study
- Knowledge and tools are spreading faster than governance
- A pandemic with COVID’s transmissibility and Ebola’s lethality could kill billions (see the rough arithmetic after this list)
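For a sense of scale, here is the rough arithmetic behind that last bullet. The attack rate and fatality rate are illustrative assumptions (unmitigated spread infecting 40-70% of the population, and a fatality rate near Ebola’s historical average of roughly 50%), not predictions.

```python
# Rough, illustrative arithmetic only; the attack-rate and fatality-rate figures
# below are assumptions for the sake of the example, not forecasts.

world_population = 8e9

attack_rates = [0.4, 0.7]   # assumed unmitigated share of the population infected
fatality_rate = 0.5         # roughly Ebola's historical average case fatality rate

for attack_rate in attack_rates:
    deaths = world_population * attack_rate * fatality_rate
    print(f"attack rate {attack_rate:.0%}: ~{deaths / 1e9:.1f} billion deaths")
# => ~1.6 to ~2.8 billion, i.e. "billions" in the worst case
```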
Current examples:
- Smallpox (eradicated, but samples exist in labs)
- Recreated 1918 flu virus in labs
- Bird flu research creating airborne variants
What can be done:
- Strict biosafety and biosecurity protocols
- International ban on dangerous gain-of-function research
- Rapid-response vaccine development platforms
- Global pandemic preparedness infrastructure
- DNA synthesis screening to prevent synthesis of dangerous sequences (a minimal sketch of the idea follows this list)
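As a rough illustration of the screening idea, the toy sketch below flags an order that contains a listed sequence. Real screening systems rely on curated databases of sequences of concern and fuzzy homology search rather than exact substring matching, and the sequences here are placeholders.

```python
# Minimal sketch of the screening idea: check an order against a list of
# sequences of concern before synthesizing it. Real systems use curated
# databases and fuzzy homology search; these sequences are placeholders.

SEQUENCES_OF_CONCERN = {
    "ATGGCCTTTAGGGCATCC",   # placeholder entries, not real pathogen sequences
    "GGGTTTAAACCCGGGTTT",
}

def flag_order(ordered_sequence: str) -> bool:
    """Return True if the order contains any listed sequence of concern."""
    return any(fragment in ordered_sequence for fragment in SEQUENCES_OF_CONCERN)

order = "TTTT" + "ATGGCCTTTAGGGCATCC" + "AAAA"
print(flag_order(order))   # True -> hold the order for human review
```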
3. Nuclear war
The risk: Large-scale nuclear exchange between major powers could cause “nuclear winter”—smoke and soot blocking sunlight, collapsing global agriculture, leading to mass starvation. Even a “limited” nuclear war between Pakistan and India could kill hundreds of millions and cause global cooling.
Why it’s serious:
- ~12,000 nuclear warheads still exist globally
- Launch-on-warning systems create hair-trigger scenarios
- Accidents, miscalculation, or escalation could trigger war
- New nuclear powers (North Korea) and deteriorating arms control treaties increase risk
Nuclear winter scenario: Hundreds of burning cities send smoke plumes into the stratosphere, blocking sunlight for years. Global average temperatures could drop by as much as 10°C in the most severe modeled scenarios. Agriculture collapses. Billions starve.
What can be done:
- Nuclear disarmament and arms control treaties
- Improved early warning systems to prevent false alarms
- De-alerting nuclear forces (removing hair-trigger status)
- Taboo against first use
- Ban on development of new nuclear weapons
4. Runaway climate change
The risk: We cross tipping points that trigger self-reinforcing feedback loops, driving Earth into a “hothouse” state with 4-6°C+ warming. Large regions become uninhabitable. Civilization collapses under cascading crises: crop failures, water scarcity, mass migration, conflict, ecosystem collapse.
Why it’s serious:
- We’ve already warmed 1.1°C; impacts are accelerating
- Tipping points (permafrost thaw, Amazon dieback, ice sheet collapse) may be closer than expected
- Each 0.5°C of warming increases risk of triggering tipping points
- Social collapse could prevent us from implementing solutions
Possible tipping points:
- Arctic sea ice loss
- Greenland and West Antarctic ice sheet collapse
- Amazon rainforest dieback
- Permafrost methane release
- Atlantic Meridional Overturning Circulation (AMOC) shutdown
What can be done:
- Rapid emissions reduction (see Climate & Planetary Boundaries)
- Carbon removal technologies
- Protect and restore carbon sinks (forests, wetlands)
- Prepare for unavoidable impacts while preventing worst-case scenarios
5. Nanotechnology risks
The risk: Molecular nanotechnology could enable self-replicating nanobots that consume matter to make copies of themselves—the “gray goo” scenario. Or, weaponized nanotech could be used for warfare or terrorism. Nanotech could also enable other risks (e.g., more effective bioweapons).
Why it’s serious:
- Self-replication + small size = potentially uncontrollable spread
- Dual-use technology: medical nanobots vs. weapons
- Current nanotechnology is primitive, but advancing
Status: Speculative, but not impossible. Requires mature molecular manufacturing, which doesn’t yet exist. But worth considering before development.
What can be done:
- Develop nanotech governance frameworks before capabilities arrive
- Implement safety protocols for self-replication research
- International coordination on development and deployment
6. Asteroid or comet impact
The risk: A large asteroid (roughly 1 km or more in diameter) could cause a global catastrophe, and a 10 km-class impact, like the one that ended the dinosaurs, could cause a mass extinction. Smaller impacts could destroy cities or regions.
Why it’s less serious than it seems:
- We’re tracking most large near-Earth objects
- Probability of a large (1 km+) impact in the next century is very low, on the order of 0.01-0.02% (see the rough conversion after this list)
- We have technology to deflect asteroids with enough warning
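That probability is just a frequency-to-probability conversion; the recurrence intervals below are assumptions in the commonly cited range for impactors of roughly 1 km and larger.

```python
# Frequency-to-probability conversion (illustrative; the recurrence intervals are
# assumptions in the commonly cited range for impactors of ~1 km or larger).

for recurrence_interval_years in (500_000, 1_000_000):
    probability_per_century = 100 / recurrence_interval_years
    print(f"one impact per {recurrence_interval_years:,} years "
          f"-> {probability_per_century:.2%} per century")
# => 0.02% and 0.01% per century, respectively
```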
What can be done:
- Continue tracking near-Earth objects
- Develop and test deflection technologies such as kinetic impactors and gravity tractors (NASA’s 2022 DART mission successfully shifted the orbit of the asteroid moonlet Dimorphos)
- International coordination on planetary defense
7. Supervolcanic eruption
The risk: A massive volcanic eruption (e.g., Yellowstone, Toba) could inject enough ash and sulfur dioxide into the stratosphere to cause a volcanic winter: cooling the planet, disrupting agriculture, and causing mass starvation.
Why it’s less serious:
- Very low probability in any given century
- We can’t prevent it, but we can prepare (food reserves, resilient agriculture)
Best-known supereruption: Toba, Indonesia, ~74,000 years ago. It has been linked to a genetic bottleneck that may have brought our ancestors close to extinction, though that link is now debated.
8. Unknown unknowns
The risks we can’t anticipate are perhaps most concerning. Before 1945, nuclear weapons were unimaginable. Before 2020, most people didn’t think seriously about pandemics. What risks are we currently blind to?
Possible unknown unknowns:
- Physics experiments gone wrong (e.g., creating stable strangelets or black holes)
- Unforeseen consequences of geoengineering
- Risks from technologies we haven’t yet invented
- Interactions between multiple risks creating new threats
The Fermi Paradox and the Great Filter
The Fermi Paradox asks: if intelligent life is common in the universe, where is everybody? Why haven’t we detected alien civilizations?
One troubling answer: the Great Filter—some step in the evolution from simple life to spacefaring civilization is extremely difficult or impossible. Perhaps most civilizations destroy themselves before becoming interstellar.
Two possibilities:
Filter is behind us: The Great Filter was early (origin of life, evolution of intelligence). We’re past it, so we’re likely to survive and spread. Good news.
Filter is ahead of us: Most civilizations reach our level of technology and then self-destruct. Bad news for us.
Existential risks might be the Great Filter. If so, the silence of the cosmos is a warning: civilizations that don’t solve existential risk don’t survive long enough to colonize space. We have a narrow window to prove ourselves wiser than average.
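One way to feel the force of the argument is a toy calculation; every number below is an assumption chosen for illustration, not an astronomical estimate. If spacefaring civilizations were even moderately likely per planet, we would expect to see some. Since we don’t, the combined probability of all the steps must be tiny, which means at least one step must be very hard.

```python
# Toy version of the Great Filter argument; every number is an assumption
# chosen purely for illustration, not an astronomical estimate.

candidate_planets = 1e20   # assumed potentially habitable planets "out there"

# Seeing no spacefaring civilizations suggests the per-planet probability of
# producing one is on the order of 1 / candidate_planets or smaller.
max_per_planet_probability = 1.0 / candidate_planets

# If reaching that stage requires passing k independent steps, each with the
# same probability p, then p**k <= max_per_planet_probability, so on average:
k_steps = 9   # e.g., abiogenesis, complex cells, intelligence, surviving technology...
p_per_step = max_per_planet_probability ** (1 / k_steps)

print(f"average per-step probability <= {p_per_step:.3g}")   # ~0.006
# At least one step must be far harder than that average; the open question is
# whether that step is behind us or still ahead.
```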
What can we do?
Existential risk can feel paralyzing. But there’s much we can do:
Individual actions
- Learn and spread awareness: Most people don’t think about existential risks. Education is crucial.
- Career choices: Consider working on existential risk reduction (AI safety, biosecurity, climate, policy).
- Support organizations working on X-risk: Future of Humanity Institute, Center for AI Safety, Nuclear Threat Initiative, etc.
- Vote and advocate: Support politicians who take long-term risks seriously.
Collective actions
- Fund research: We spend far more on cosmetics than on preventing human extinction. Priorities are misaligned.
- Develop governance: International treaties, oversight bodies, safety standards for dangerous technologies.
- Build resilience: Food reserves, distributed infrastructure, backup plans for worst-case scenarios.
- Coordinate globally: Existential risks don’t respect borders. Planetary challenges require planetary cooperation.
The importance of long-term thinking
Most human institutions operate on short timescales: businesses think in quarters, politicians in electoral cycles. But existential risks require thinking in centuries and millennia.
We need institutions designed for deep time governance—Future Generations Commissioners, long-term strategy boards, constitutional amendments protecting the interests of those not yet born. See Global Governance Frameworks for models.
The ultimate universal challenge
From a universal perspective, preventing human extinction is about more than survival of our species. It’s about preserving consciousness itself—a rare and precious phenomenon in the cosmos.
We are the universe aware of itself. If we self-destruct, we extinguish not just humanity but (potentially) the only source of meaning, beauty, and understanding in this corner of the galaxy.
This is our cosmic responsibility: to survive long enough to grow wise, to spread life beyond Earth, to become a mature, thoughtful, compassionate civilization that adds more value than harm to the universe.
The question isn’t just “will we survive?” but “will we deserve to?” Can we develop the wisdom to match our power? Can we coordinate globally to solve global problems? Can we think long-term enough to protect all future generations?
These are the ultimate tests of universal perspective. Let’s pass them.
Further exploration
Books:
- The Precipice by Toby Ord
- Superintelligence by Nick Bostrom
- The Doomsday Machine by Daniel Ellsberg
Organizations:
- Future of Humanity Institute
- Center for AI Safety
- Nuclear Threat Initiative
Related:
- Technological ethics - Responsible development
- Global governance - Coordination at scale
- The universe story - Our place in cosmic history