🌍⚠️
Existential Risks
Disclaimer:
The topic of existential risks (or x-risks) is not a light-hearted one, and we recognize that delving into its implications can be emotionally challenging. Our primary objective is to foster planetary awareness and expand the emerging worldview of planetary stewardship, as we believe it can contribute to a brighter, safer, wiser and fairer future. To accomplish this, we must confront the dangers that could inhibit such a world from coming to fruition. Therefore, it is crucial that we diligently research and, collectively, address existential risks, employing every resource and tool at our disposal. This page represents our ongoing and modest visual attempt to investigate this crucial subject. Please note, Globaïa does not adhere to the 'TESCREAL bundle' of ideologies, as we align ourselves with a broader, more inclusive approach to planetary stewardship and existential risk management. Our commitment prioritizes ecological and social sustainability, integrating principles of indigeneity, planetary justice, and posthuman thought that emphasize respect for diverse cultural values, ecological wisdom, and the interconnectedness of human and non-human entities, challenging the techno-utopian visions often associated with TESCREALism.
Existential risks refer to threats that have the potential either to cause the extinction of humanity or to irreversibly destroy our future potential for flourishing. These risks include events, processes, or phenomena that can severely undermine the long-term survival, well-being, or development of humanity on a global scale.
The profound and consequential nature of an existential catastrophe can be illustrated by a compelling example conceived by Derek Parfit in Reasons and Persons (1984), as cited by Toby Ord in The Precipice (2020):
Picture a catastrophic nuclear conflict that annihilates 99 percent of the global population. This devastating event would plunge the world into a centuries-long dark age before the survivors could eventually reconstruct civilization, returning it to its prior glory—battered, marked, but unbroken.
Now contrast this scenario with a war that wipes out the entire human population. Although the second conflict would undoubtedly be more disastrous, just how much more severe would it be? Either calamity would be unparalleled in history, resulting in billions of casualties. The second war would claim tens of millions of additional lives, making it worse on this basis. However, there is another, far more crucial distinction between the two conflicts.
While both wars claim billions of lives, the second one eradicates humanity entirely.
Both conflicts devastate our present, but the second one demolishes our future as well. This qualitative distinction in the losses sustained during that final percentage is what sets existential catastrophes apart, making the reduction of existential risk an exceptionally critical priority.
Existential Moods
Émile P. Torres, a thinker in the field of existential risk and philosophy, has conceptualised a framework delineating the evolution of Western thought regarding human extinction. This framework is structured around five distinct "existential moods", each representing a unique period in history characterised by specific beliefs and attitudes towards the possibility and nature of human extinction. Torres' analysis delves into the shifting perspectives from ancient times to the present day, exploring how scientific, philosophical, and cultural developments have dramatically altered our understanding of humanity's place in the universe and the threats to our continued existence.
1. Eternalist Era (Ancient times to 1850s)
During this era, the dominant belief was that humanity could not be extinguished. This conviction was largely rooted in religious doctrines and the idea of a cosmic order, which assured people that a divine entity or the natural laws of the universe would perpetually safeguard humanity.
The emergence of secular thinking and scientific rationalism began to challenge the religious and mystical views of the world. This shift was marked by the Enlightenment and the gradual rise of scientific methods, which questioned the previously unchallenged religious doctrines about human permanence.
2. Cosmic Fragility (1850s to 1950s)
This period saw a dramatic shift to the acknowledgment of humanity's vulnerability and the eventual certainty of extinction. Influenced by scientific advancements, this mood was characterized by a sense of existential despair and the recognition that, like all species, humans are subject to the laws of nature and could face eventual oblivion.
Key scientific developments triggered this shift. The Second Law of Thermodynamics introduced the concept of entropy, suggesting a universe tending towards disorder and eventual 'heat death'. Darwin's theory of evolution further eroded the notion of human exceptionalism by placing humans within the continuum of natural species, all susceptible to extinction.
3. Brink of Oblivion (1950s to 1980s)
The focus shifted to the frightening realisation that humanity now possessed the technological means for its own rapid extinction, primarily through nuclear weapons. This mood was marked by a palpable fear of nuclear war and the potential for a self-inflicted end to human civilisation.
The development and deployment of nuclear weapons, particularly the atomic bombings of Hiroshima and Nagasaki, vividly demonstrated humanity's newfound destructive capabilities. The subsequent development and testing of even more powerful thermonuclear weapons (such as Castle Bravo, in 1954) solidified this mood, highlighting the immediate and tangible possibility of human-induced annihilation.
4. Unforeseen Hazard (1980s to 2000s)
A new realisation emerged that nature, previously considered a benign or even protective force for humanity, could actually pose severe existential risks. This mood was characterised by the understanding that catastrophic natural events could threaten human survival on a global scale.
A series of groundbreaking discoveries in earth sciences played a crucial role. The hypothesis and subsequent confirmation of an asteroid impact causing the dinosaur extinction shattered the prevailing uniformitarian view in geology, which had held that catastrophic events did not occur on a global scale. This revelation opened eyes to the fact that Earth is not exempt from such cataclysmic events.
5. Age of Polycrisis (2000s to present)
This current mood is dominated by a heightened awareness and concern over a diverse array of existential threats facing humanity in the 21st century. These include climate change, technological advancements with unpredictable consequences, biodiversity loss, and the potential for a mass extinction event.
Two main developments triggered this shift. Firstly, the ethical reevaluation of human extinction, which framed such an event as a moral catastrophe of cosmic proportions. Secondly, the growing body of research highlighting the immediacy and severity of threats like climate change and biodiversity loss, coupled with the rapid development of potentially dangerous technologies such as artificial intelligence and genetic engineering, painted a stark picture of a future fraught with unprecedented risks.
X-Risks
These risks can arise from various sources, including natural events, human activities, and technological advancements. Prominent examples of natural existential risks include:
- Solar flares
X-class flares pose an existential threat to humanity, as their massive solar emissions have the potential to wreak havoc on our planet's technological infrastructure and natural environment. These powerful solar storms could disrupt satellite communication, damage power grids, and trigger widespread blackouts, thereby crippling our modern way of life. Moreover, increased exposure to solar radiation may have dire consequences for the Earth's climate and ecosystems, exacerbating global warming and other environmental challenges. Historically, the Carrington Event of 1859 serves as a prime example of an extreme solar storm that caused widespread telegraph disruptions. More recently, the 1989 geomagnetic storm led to the collapse of Canada's Hydro-Québec power grid, leaving millions without electricity. These past events underscore the potential for X-class flares to cause unprecedented devastation on a global scale, potentially threatening the very existence of human civilisation.
- Asteroid impacts
Large asteroid collisions, involving objects 5 km or larger, occur approximately every twenty million years and can release energy a hundred thousand times greater than the largest bomb ever detonated. Land impacts could destroy areas comparable to a small country, while larger asteroids may lead to extinction-level events. Asteroid impacts are among the most well-understood risks in this context.
Methods to deflect asteroids on a collision course with Earth have been proposed and, with NASA's DART mission in 2022, demonstrated in principle. The primary damage from an impact wouldn't be the initial collision, but rather the dust clouds projected into the upper atmosphere. An "impact winter" could alter the climate, harm the biosphere, disrupt food supplies, and cause political instability.
- Supervolcanoes
A supervolcano can produce eruptions with ejecta volumes exceeding 1,000 km³, thousands of times larger than typical eruptions. The primary threat is the aerosols and dust projected into the upper atmosphere, which would absorb sunlight and trigger a global volcanic winter. The 1991 Mt. Pinatubo eruption caused a 0.5°C global cooling for three years, while the Toba eruption around 70,000 years ago may have cooled the planet for over two centuries. The effects of these eruptions are comparable to a nuclear war, with more violent eruptions but fewer secondary effects like firestorms.
- Cosmic threats
Cosmic risks, while relatively rare, pose significant threats to the existence of humanity, with the potential to cause catastrophic events that could lead to our extinction. Among these cosmic phenomena, gamma-ray bursts, nearby supernovae, and black hole mergers stand as prominent harbingers of potential doom. Gamma-ray bursts, the most energetic explosions in the universe, can release a torrent of lethal radiation that could annihilate Earth's protective atmosphere and trigger mass extinctions. Similarly, a nearby supernova, which occurs when a massive star exhausts its nuclear fuel and explodes, could shower our planet with harmful cosmic radiation, causing long-term climatic changes and threatening life as we know it. Lastly, black hole mergers, the cataclysmic collision of two black holes, generate powerful gravitational waves and release colossal amounts of energy, which, if close enough, could have disastrous consequences for our solar system and ultimately, life on Earth.
- Pandemics
An epidemic of infectious disease can spread across large regions or even globally, with the likelihood of a high-impact epidemic potentially higher than assumed. Nature already contains devastating disease features such as incurability (Ebola), high fatality (rabies), extreme infectiousness (common cold), and long incubation periods (HIV). A pathogen combining these traits could cause extreme death tolls. Although modern sanitation, medical research, and disease-fighting institutions exist, rapid transportation, dense populations, and slums can still facilitate the rapid spread of diseases. The COVID-19 pandemic, a stark reminder of our vulnerability, has led to millions of deaths, exposed weaknesses in healthcare systems, and disrupted economies worldwide, emphasizing the urgent need for global preparedness against future pandemics.
Human activities and land conversion can exacerbate global pandemics, which emerge at the intersection of nature and culture. Conversely, the prominent existential risks that are anthropogenic, stemming from human actions, include:
- Catastrophic climate change
Uncertainties in risk estimates suggest that extreme warming and feedback loops could raise global temperatures by 4°C to 6°C above pre-industrial levels. Factors like methane release from permafrost or Amazon rainforest dieback may contribute. The poorest countries, often the most affected, could become uninhabitable, causing mass deaths, famines, social collapse, and mass migration. Agricultural and biosphere shocks in developed countries could lead to global conflict and potential civilisational collapse. Past civilisational collapses driven in part by climate change underscore this risk.
- Ecological collapse
An ecosystem experiencing a significant, potentially irreversible decrease in carrying capacity for all organisms may lead to mass extinction. As part of the global ecosystem, humans fundamentally rely on it. Current species extinction rates far exceed historical norms, and humanity operates well beyond a safe ecological range. The compounding effects of ecological degradation put the entire world at risk. Although some human lifestyles could potentially be sustained independently from ecosystems, doing so on a large scale poses technological and ethical challenges.
- Synthetic biology
Synthetic biology involves designing and creating biological devices and systems for practical applications, which introduces human intentionality to traditional pandemic risks. Current regulation efforts are nascent and may lag behind research advancements. One of the most harmful consequences of synthetic biology could stem from an engineered pathogen targeting humans or essential ecosystem components, originating from military or commercial bio-warfare, bio-terrorism, or lab leaks. The integration of synthetic biology products into the global economy or biosphere may create additional vulnerabilities, such as widespread benign products being targeted to cause damage.
- Nanotechnology
Atomically precise manufacturing enables high-throughput processes at the atomic or molecular level, creating innovative products like smart or highly resilient materials. This could empower various groups or individuals to produce a range of items, potentially including large weapon arsenals or even nuclear bombs. However, nanotechnology could also address global issues like resource depletion, pollution, climate change, clean water, and poverty. Risks include self-replicating nanomachines consuming the environment or the misuse of medical nanotechnology.
- Artificial general intelligence
AGI, or Artificial General Intelligence, refers to machine or software intelligence that achieves human-level cognitive abilities across various domains. It involves the development of intelligent agents that perceive their environment and act to maximize success. Highly advanced AGI may be challenging to control and could endeavor to enhance its own intelligence and acquire resources, potentially disregarding human survival and values. This makes AGI a unique risk, with extinction being more probable than lesser impacts. However, such intelligence could also combat other risks, offering great potential. AGI-driven warfare and related technologies pose additional risks. "Whole brain emulations," where human brains are scanned and replicated in machines, could potentially alleviate some concerns.
- Nuclear war
The probability of a full-scale nuclear war between the USA and Russia has increased dramatically in 2022, with rhetoric reminiscent of the past century resurfacing. The potential for intentional or accidental conflict is real, with some estimates placing the risk at 10% over the next century. A larger impact would hinge on whether the war induced a nuclear winter, causing global freezing temperatures and potential ozone layer destruction. Firestorms in targeted cities would need to lift soot into the stratosphere. Recent models confirm severe risks, with global food supply disruption leading to mass starvation and state collapse.
- Systemic collapse
Global economic or societal collapse encompasses a wide range of conditions, often involving economic downturns, social chaos, civil unrest, and even the disintegration of human societies and life support systems. The complex, interconnected nature of the global economic and political system renders it vulnerable to unexpected system-wide failures, even if individual components are reliable. This interconnectedness creates systemic risk, as self-reinforcing joint risks can spread throughout the system and potentially impact related external systems. Such effects have been observed in ecology, finance, and critical infrastructure like power grids. The risk of collapse increases when multiple independent networks rely on each other.
- Governance disasters
Governance disasters fall into two categories: failing to address solvable problems and actively causing worse outcomes. Examples include not alleviating absolute poverty and establishing a global totalitarian state. Technological, political, and social changes could create new governance forms, with varying outcomes. The challenges with governance disasters include estimating their likelihood and determining their impact, as it's unclear how to objectively compare continued poverty, global totalitarianism, and catastrophic events resulting in billions of casualties or civilization collapse.
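The ~10%-per-century estimate quoted above for nuclear war can be translated into an annual probability. As a back-of-envelope sketch, and under the simplifying assumption that the risk is constant and independent from year to year:

```python
# Convert a per-century risk estimate into an equivalent annual probability,
# assuming a constant, independent yearly risk (a deliberate simplification).
def annual_from_century(p_century: float, years: int = 100) -> float:
    """Solve 1 - (1 - p_annual)**years = p_century for p_annual."""
    return 1.0 - (1.0 - p_century) ** (1.0 / years)

# The ~10%-per-century figure cited above works out to roughly 0.1% per year.
p_annual = annual_from_century(0.10)
```

A seemingly small annual risk therefore compounds into a substantial cumulative one, which is part of why such estimates are alarming over civilisational timescales.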
Global catastrophic challenges may also involve unknown unknowns: risks we cannot yet foresee, as well as seemingly unrelated risks that interact and compound into a far greater danger, a situation frequently referred to as a polycrisis. The Fermi Paradox, which highlights the absence of detectable extraterrestrial life, could suggest that intelligent life commonly self-destructs before spreading across the galaxy, an idea known as the Great Filter and captured by the final variable (L) in the famous Drake Equation.
Since the 🌍⏳ Anthropocene started, our awareness of existential risks has grown significantly as scientific advancements, global communication, and public discourse have facilitated a deeper understanding of the potential threats facing humanity. The advent of the atomic bomb in the 1940s marked a pivotal moment in recognizing our capacity to cause our own extinction. This realization sparked a global movement for nuclear disarmament and arms control, leading to international treaties and ongoing diplomatic efforts.
As we advanced into the latter half of the 20th century, growing concerns about environmental degradation and pollution culminated in the recognition of anthropogenic climate change as a major existential threat. The establishment of the Intergovernmental Panel on Climate Change (IPCC) in 1988 reflected the rising urgency to address this issue. Furthermore, advances in technology and artificial intelligence have raised ethical and safety concerns, leading to organisations such as the Future of Life Institute, the Centre for the Study of Existential Risk and the Global Challenges Foundation promoting responsible development and research.
The interconnectedness of our global society has heightened awareness of the risks associated with pandemics, as evidenced by the widespread impact of HIV/AIDS, SARS, and most recently, COVID-19. Finally, Russia's invasion of Ukraine has revived the nuclear discourse and risk that was once believed to be a relic of the Cold War era.
🌍 Planetary Fever 🌡️
High temperatures have been linked to increased mortality, decreased labour productivity, decreased cognitive performance, impaired learning, adverse pregnancy outcomes, decreased crop yield potential, increased conflict, hate speech, migration and infectious disease spread.
Climate change has been a topic of discussion and concern for many years, with increasing evidence pointing towards its potentially catastrophic impact on the future of our planet. But beyond the direct effects on the natural world, climate change also poses a significant existential threat to human civilisations. By examining the history of past civilisations and the interconnectedness of our modern world, we can understand why climate change has the potential to bring about widespread destruction and destabilisation.
Past societies have faced and been impacted by environmental changes, such as droughts, floods, and temperature shifts. In some cases, these changes were so severe that they contributed to the collapse of entire civilisations. This serves as a cautionary tale for our modern world, as the interconnectedness of our global systems makes us even more vulnerable to the effects of climate change.
Read more on our dedicated pages on 🌏⚖️ Planetary Tipping Systems and the 🌎🚨 Planetary Emergency.
To learn more about these visualisations and watch the short film,
visit our page on 🌍🌡️ Human Climate Niches.
⚛️ Nuclear Perils ☢️
The worldwide significance of nuclear risks, both civil and military, cannot be overstated, as the potential consequences of nuclear accidents or warfare could be catastrophic for humanity and the environment.
On the civil side, incidents such as the Chernobyl disaster in 1986 and the Fukushima Daiichi nuclear disaster in 2011 have demonstrated the far-reaching implications of nuclear accidents, including widespread radioactive contamination, long-term health effects, and significant economic burdens. These incidents highlight the importance of strict safety regulations and risk management for nuclear facilities.
In the military domain, the destructive power of nuclear weapons poses an existential threat to global security. The bombings of Hiroshima and Nagasaki in 1945 exemplify the devastating consequences of nuclear warfare, leading to immense loss of life and long-lasting environmental damage.
The Doomsday Clock, a symbolic representation of the likelihood of global catastrophe, was created in 1947 by the Bulletin of the Atomic Scientists. This ominous timepiece reflects the potential for humanity to self-destruct through a combination of nuclear war, climate change, and emerging technologies. By measuring the "minutes to midnight," the clock seeks to raise awareness and inspire action to prevent global disaster. Throughout its history, the clock's hands have been adjusted in response to geopolitical events and scientific developments, making it a stark reminder of the ever-present threats that imperil our planet and the urgent need for global cooperation to address these challenges.
Cesium-137 deposition following the nuclear disasters at Fukushima and Chernobyl has had a significant impact on the environment and public health. The Chernobyl accident in 1986 led to widespread deposition of radioactive materials across Europe, with Cs-137 being one of the most persistent radionuclides. Similarly, the Fukushima Daiichi nuclear power plant disaster in 2011 resulted in the release of vast amounts of radioactive isotopes, including Cs-137, which contaminated land, air, and water. In both cases, Cs-137 deposition has led to long-term contamination of ecosystems, as the isotope's half-life of 30 years allows it to persist in the environment. This has resulted in the exclusion of vast areas surrounding the disaster sites, with human resettlement deemed unsafe due to elevated radiation levels.
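The persistence described above follows directly from the half-life. A minimal sketch of the standard radioactive-decay formula, using the ~30-year half-life of Cs-137 noted above:

```python
# Fraction of a radionuclide remaining after t years, given its half-life.
def fraction_remaining(t_years: float, half_life_years: float = 30.0) -> float:
    """Exponential decay: N(t)/N0 = 0.5 ** (t / half_life)."""
    return 0.5 ** (t_years / half_life_years)

# After one half-life (30 years), half of the Cs-137 remains;
# a full century after deposition, roughly 10% still persists.
after_30y = fraction_remaining(30.0)
after_100y = fraction_remaining(100.0)
```

This slow decay is why exclusion zones around Chernobyl and Fukushima remain in force decades after the accidents.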
Nuclear tests, conducted primarily during the mid-20th century, were a means for nations to develop and demonstrate their nuclear capabilities. The first recorded nuclear test took place on July 16, 1945, when the United States tested the "Trinity" device in New Mexico. The period of intensive nuclear testing spanned from the 1940s to the early 1990s, with the peak occurring during the Cold War era between the United States and the Soviet Union. Over 2,000 nuclear tests were conducted worldwide during this time, with the United States, the Soviet Union, the United Kingdom, France, and China being the primary participants. The Comprehensive Nuclear-Test-Ban Treaty (CTBT), adopted in 1996, has largely put an end to nuclear testing, although it has yet to enter into force due to the non-ratification by a few countries.
Today, there are approximately 440 operational nuclear reactors worldwide, with the majority being utilised for civilian purposes, such as electricity generation. These civil nuclear reactors are regulated and monitored to ensure their safe operation and prevent the proliferation of nuclear weapons. On the other hand, military nuclear threats primarily stem from countries possessing nuclear weapons or seeking to develop them, as well as the risk of nuclear materials falling into the hands of non-state actors, such as terrorist organizations.
👾 Digital Minds 🤖
The advancement of Artificial Intelligence (AI) could eventually pave the way for the development of Artificial General Intelligence (AGI) — a term sometimes used interchangeably with God-like AI (GAI). This progression could then lead to the emergence of Artificial Superintelligence (ASI), presenting a spectrum of existential threats to humanity.
A useful definition for AGI or Powerful AI is proposed by Dario Amodei, CEO of Anthropic:
A scalable, multi-instance AI surpassing human experts across disciplines, autonomously executing complex tasks via virtual interfaces and remote control, functioning as a “country of geniuses” within a datacenter.
For a deeper exploration, see his essay Machines of Loving Grace.
As AI systems evolve to become more sophisticated and autonomous, the likelihood of unintended outcomes grows while the scope for human oversight diminishes. This is especially true with AGI, which would possess human-like cognitive abilities and could surpass human performance in nearly any task.
“A superintelligent AI system that is autonomous and goal-directed would be a potentially rogue AI if its goals do not strictly include the well-being of humanity and the biosphere, i.e., if it is not sufficiently aligned with human rights and values to guarantee acting in ways that avoid harm to humanity.”
Further explore this argument by reading “How Rogue AIs may Arise” authored by Prof. Yoshua Bengio.
The pathway from Artificial Intelligence (AI) to Artificial General Intelligence (AGI), further to Artificial Superintelligence (ASI), and finally to potentially Rogue Artificial Intelligence (RAI) isn't necessarily a clear-cut or inevitable one. However, there does appear to be potential for this progression if the objectives of the AI system do not consistently align with human values across various cultures and periods.
This is why, considering the abilities of current large language models (LLMs) as of 2023, the potential extinction-level threats posed by AI have become increasingly prominent in public dialogues and policy-making forums. This has led to near-consensus statements like:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
— Statement on AI risks by leading technology experts (May 2023)
Over recent years, the realm of machine learning has seen remarkable progress, epitomised by state-of-the-art models like GPT-4. These models exhibit superior proficiency in natural language comprehension and generation, signaling a new era in the field. This upswing, a chapter in the global narrative of the 🌍🎢 Great Acceleration, is underpinned by three key factors: (1) burgeoning model sizes, (2) the advent of more sophisticated architectures, and (3) advanced training algorithms. These enhancements have ushered in superior generalisation abilities. A prevailing sentiment among many specialists is that relentless refinement and advancement of these models may eventually yield AGI capable of performing any cognitive task at a human level. Yet, the unanswered, vital question that looms for researchers and policymakers alike is whether such formidable AGI systems will align with human values and aims, ensuring the birth of Friendly AI (FAI).
Navigating the philosophical labyrinth of AI alignment is as demanding as its technical and practical counterparts. It resembles an 'Axiological Singularity': a concept signifying the immense intricacy and seemingly insurmountable hurdle of embedding human value systems into an artificial sentient entity. The task's complexity is compounded by the flux and diversity innate to human values. Ironically, humans themselves, the source of these values, struggle to consistently comprehend and adhere to their own fluctuating value systems, which are a product of time and cultural divergence.
AI technologies are quickly evolving, advancing in capabilities to produce text, images, and videos almost indistinguishable from those created by humans. While these advancements offer numerous advantages, they also bring forth potential risks. AI could be utilised to spread bias, power self-operating weapons, foster the spread of false information, or facilitate cyberattacks. Moreover, while human supervision often accompanies the use of these AI systems, they are progressively gaining the ability to act independently, potentially leading to harmful outcomes.
As AI continues to progress, it may eventually present catastrophic or even existential threats. There exist a multitude of manners in which AI systems could instigate or augment large-scale risks, a few of which are outlined below.
You can examine this chart featuring 24 scenarios illustrating the various ways humanity and AGI/ASI may coexist, with outcomes ranging from positive to absolutely negative. This exploratory exercise is inspired by Max Tegmark's Life 3.0 and Nick Bostrom's Superintelligence: Paths, Dangers, Strategies.
Here, we present in a visual form 9 scenarios of an AGI takeover, each warranting cautious consideration. An abundant body of literature exists online detailing the nature, paths, kinetics, and problems to be addressed in order to avert these potential outcomes. However, it is crucial to apply the precautionary principle to this invention, possibly more so than any other development in human history.
Corporations such as Nvidia, Intel, Samsung and Taiwan Semiconductor Manufacturing Company (TSMC) are leading the advanced microprocessor production space. They have been relentlessly pushing the boundaries of innovation by developing increasingly sophisticated microprocessors. These cutting-edge chips are manufactured using extreme ultraviolet (EUV) lithography machines exclusively designed and built by ASML, a Dutch multinational corporation and global leader in photolithography equipment. As the physical foundation for AGI continues to evolve, the powerful synergy between these industry giants is poised to play a pivotal role in realising its immense potential and risks.
If we, as a global community, decide to push the research agenda and help create a new form of intelligence on Earth, it must be done with extreme caution and vigilance. To ensure the safe and responsible advancement of AGI, it is crucial to involve international supervision from a recognised body, such as the United Nations, and adopt a collaborative model similar to CERN or ITER. Participating nations can pool resources, expertise, and knowledge while implementing strict security measures and conducting comprehensive research into AI alignment. Establishing the AGI research facility in a remote location, like the Kerguelen Islands, further mitigates potential threats and encourages a focused, secure environment for researchers.
The following is a collection of AI-generated images depicting fictional research facilities on the Kerguelen Islands. This playful experiment serves to demonstrate the remarkable capabilities of artificial intelligence in the realm of image creation, alongside the numerous other domains being disrupted by AI technology.
😷 Epidemics & Pandemics 🦠
The COVID-19 pandemic has had a profound worldwide significance, laying bare the vulnerabilities of modern civilization and exposing the interconnected nature of our global society. As the virus spread rapidly across the globe, overwhelming healthcare systems, disrupting economies, and claiming millions of lives, it became evident that even the most advanced nations were ill-prepared for a crisis of this magnitude. The pandemic has highlighted our dependence on global supply chains, the importance of timely and accurate information sharing, and the need for international cooperation to tackle common challenges.
The Ebola outbreak of 2014-2016 in West Africa was one of the most devastating epidemics in recent history, causing widespread fear and panic across the globe. The deadly Ebola virus, transmitted through direct contact with infected bodily fluids, ravaged communities and overburdened healthcare systems in the hardest-hit countries of Guinea, Sierra Leone, and Liberia. According to the World Health Organization, the outbreak resulted in over 28,600 cases and claimed more than 11,300 lives, making it the largest and deadliest Ebola outbreak on record. The gruesome symptoms of this terrible virus have left a lasting mark on the public imagination, serving as a stark reminder of the devastation that pandemics can bring.
While the origins and nature of future pandemics remain uncertain, certain locations may be more susceptible than others and warrant vigilant observation.
Bushmeat markets, where wild animals are sold for human consumption, can serve as a breeding ground for zoonotic diseases, as close contact between humans, live animals, and their carcasses can enable the transmission of pathogens across species. Wet markets, which feature a variety of live animals in close quarters, pose similar risks, as they create an environment conducive to the exchange of pathogens between species, increasing the likelihood of disease spillover into human populations. Additionally, biosafety level (BSL) 3 and 4 laboratories, which handle highly infectious and dangerous pathogens for research purposes, carry inherent risks of accidental pathogen release or containment breaches, which could trigger outbreaks or pandemics. As our world becomes increasingly interconnected, addressing the risks associated with these sensitive locations will be crucial to preventing future pandemics and safeguarding global public health.
☄️ From Above 🦕
Meteors & Meteorites
Between 2000 and 2013, a comprehensive network of sensors, designed to monitor Earth continuously for the infrasound signature of nuclear detonations, detected 26 explosions with energy levels ranging from 1 to 600 kilotons. Surprisingly, these explosions were not caused by nuclear activities, but by asteroid impacts on Earth, emphasising the often-underestimated threat posed by such events. The Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) disclosed this data in 2014, highlighting the crucial role of their global network of sensors in detecting not only nuclear tests but also other significant events, such as asteroid impacts.
🌋 And Below 🌊
Supervolcanoes
Supervolcanic eruptions, although rare, pose a significant existential risk to humanity due to their potential to cause catastrophic global consequences. The scale of these eruptions is measured using the Volcanic Explosivity Index (VEI), with a VEI of 8 representing a supereruption that ejects more than 1,000 cubic kilometers of volcanic material.
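Because the VEI is an order-of-magnitude scale (each step corresponds to roughly a tenfold increase in ejecta volume, with VEI 5 at about 1 cubic kilometer), the index for very large eruptions can be approximated directly from the volume. This is a simplified illustration of the scale's logic only; real VEI assignments also weigh eruption column height and other factors, and volume conventions (bulk versus dense-rock equivalent) vary between studies.

```python
import math

def vei_from_ejecta(volume_km3):
    """Approximate VEI from bulk ejecta volume, using the
    tenfold-per-step structure of the index (capped at 8)."""
    if volume_km3 <= 0:
        raise ValueError("volume must be positive")
    vei = int(math.floor(math.log10(volume_km3))) + 5
    return max(0, min(vei, 8))

print(vei_from_ejecta(2800))  # Toba's estimated ejecta -> VEI 8
print(vei_from_ejecta(1000))  # the VEI 8 threshold itself
```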
The most recent supereruption on record took place approximately 74,000 years ago at Indonesia's Toba Caldera. Ranking among the largest volcanic events of the past 25 million years, the Toba eruption had a VEI of 8 and expelled an estimated 2,800 cubic kilometers of volcanic material. An event of this magnitude likely triggered a volcanic winter lasting several years, and some researchers argue it substantially reduced the global human population and created genetic bottlenecks in multiple species, though the severity of these effects remains debated.
The Taupo Volcanic Zone in New Zealand experienced the Oruanui eruption around 26,500 years ago, which is considered one of the most powerful volcanic events in the last 100,000 years. This eruption had a VEI of 8 and released an estimated 530 cubic kilometers of volcanic material, significantly altering the landscape of the North Island and contributing to a short-term global cooling effect.
On average, supereruptions are estimated to occur once every 50,000 to 100,000 years, although the intervals between such events are highly variable. The potential impacts of a supereruption include widespread destruction of ecosystems, disruption of global agriculture by volcanic ash and aerosols, and long-term climate change from the release of large amounts of greenhouse gases.
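A recurrence interval of that order can be translated into a probability for any given window of time. The sketch below models supereruptions as a Poisson process, which is a deliberate simplification (real eruptions are not statistically independent), but it conveys the scale: roughly a 0.1 to 0.2 percent chance per century.

```python
import math

def probability_within(years, mean_interval):
    """Chance of at least one supereruption within `years`,
    assuming a Poisson process with the given mean recurrence
    interval (an illustrative simplification)."""
    return 1 - math.exp(-years / mean_interval)

for interval in (50_000, 100_000):
    p = probability_within(100, interval)
    print(f"mean interval {interval:>7,} yr -> ~{p:.2%} per century")
```

Small per-century odds compound over millennia, which is why such low-probability, high-consequence events still belong in any long-term risk assessment.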
The Yellowstone Caldera in Wyoming, United States, has experienced three major eruptions in the past 2.1 million years, with the most recent one occurring around 640,000 years ago. This eruption ejected approximately 1,000 cubic kilometers of volcanic material and created the Lava Creek Tuff, a rock formation covering large areas in the western United States.
Large igneous provinces (LIPs) are vast regions of the Earth's crust characterized by extensive volcanic activity that produces immense volumes of basaltic lava. These geological formations result from massive outpourings of molten rock, often associated with mantle plumes or other deep-seated sources. Two well-known LIPs are the Siberian Traps and the Deccan Traps. The Siberian Traps, located in present-day Russia, were formed around 252 million years ago during the end-Permian extinction event. The Deccan Traps, found in modern-day India, were created about 66 million years ago, coinciding with the end-Cretaceous extinction event. Both LIPs have been implicated in contributing to mass extinctions due to their vast emissions of volcanic gases, such as carbon dioxide and sulfur dioxide, which led to rapid climate change, ocean acidification, and widespread environmental disruptions.
In December 2021, the Hunga Tonga-Hunga Ha'apai submarine volcano in the Tongan archipelago began erupting, culminating in a massive explosion on January 15, 2022. Part of the highly active Tonga-Kermadec Islands volcanic arc, this VEI-5 rated eruption, dubbed a "magma hammer," displaced 10 cubic kilometers of material and generated the largest atmospheric explosion recorded by modern instruments. The eruption triggered tsunamis throughout the Pacific, causing casualties and destruction in several countries. It was the largest volcanic eruption since Mount Pinatubo in 1991, and the most powerful since Krakatoa in 1883, with NASA comparing its power to hundreds of times that of the Hiroshima atomic bomb. By comparison, supervolcanoes are classified as VEI 8, a level of explosivity orders of magnitude beyond the Hunga Tonga-Hunga Ha'apai eruption.
References:
Bostrom, N., Ćirković, M. M., & Rees, M. J. (2008). Global Catastrophic Risks. Oxford University Press.
Bostrom, N. (2002). Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9, 1-30.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Verendel, V., & Häggström, O. (2017). Fermi's paradox, extraterrestrial life and the future of humanity: A Bayesian analysis. International Journal of Astrobiology.
Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing.
Rees, M. (2003). Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in This Century - on Earth and Beyond. Basic Books.
Sagan, C. (1983). Nuclear War and Climatic Catastrophe: Some Policy Implications. Foreign Affairs, 62, 257-292.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Torres, Émile P. (2024). Human Extinction: A History of the Science and Ethics of Annihilation. Routledge.
Gebru, T., & Torres, Émile P. (2024). The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday, 29(4).
Organisations Working on X-Risks:
Oxford’s Global Priorities Institute
Cambridge’s Centre for the Study of Existential Risk