The Fermi paradox asks a simple question: if the universe is so vast and so old, where is everybody? Billions of stars, billions of years, and yet no signal, no visit, no trace of another civilization. The most common answers involve physics or biology: maybe interstellar travel is too hard, maybe complex life is too rare, maybe we are genuinely alone. But there is another possibility, less dramatic and far more disturbing. Maybe civilizations routinely destroy themselves. Not through some spectacular cosmic accident, but through an ordinary, structural failure to coordinate.
This is the Great Filter hypothesis, and the version of it that keeps me awake at night places the filter not behind us but ahead, somewhere between “intelligent enough to build dangerous technology” and “wise enough to survive it.”
The coordination problem at civilizational scale
Consider the current state of artificial intelligence development. Every major government, every major technology company, and a growing number of academic institutions recognize that building powerful AI systems without adequate safety measures carries enormous risk. Papers are published. Conferences are held. Open letters are signed. In September 2024, the International Institute for Management Development launched an AI Safety Clock, modeled on the Doomsday Clock, to measure how close we are to AI-caused catastrophe. It started at 29 minutes to midnight. By March 2026, it had moved to 18 minutes.
And yet the race continues to accelerate.
This is not because the people involved are stupid or evil. It is because the incentive structure makes slowing down individually irrational. If OpenAI pauses to work on alignment, Anthropic or Google or DeepSeek gains ground. If the United States imposes strict regulation, China gains an advantage. If one company invests heavily in safety research, its investors see slower returns than competitors who do not. This is the prisoner’s dilemma at civilizational scale, and the stakes are not market share but the future of the species.
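To see that the structure, not the participants, drives the outcome, it helps to write the dilemma down. Here is a minimal sketch in Python with purely illustrative payoff numbers (my assumption, nothing measured): whatever the rival does, racing yields the higher individual payoff, even though mutual racing is worse for both sides than mutual restraint.

```python
# A minimal two-player sketch of the race dynamic. The payoff
# numbers are illustrative assumptions; only their ordering
# (temptation > reward > punishment > sucker) matters, and that
# ordering is what defines a prisoner's dilemma.

PAYOFFS = {
    # (row move, column move): (row payoff, column payoff)
    ("pause", "pause"): (3, 3),   # both slow down: safest collective outcome
    ("pause", "race"):  (0, 5),   # you pause, rival races: you lose ground
    ("race",  "pause"): (5, 0),   # you race, rival pauses: you pull ahead
    ("race",  "race"):  (1, 1),   # both race: cheap safety, high shared risk
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes the row player's payoff
    against a fixed opponent move."""
    return max(("pause", "race"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Racing is a dominant strategy: it is the best response no matter
# what the opponent does, even though (race, race) leaves both
# players worse off than (pause, pause).
assert best_response("pause") == "race"
assert best_response("race") == "race"
```

Nothing in this toy model depends on the specific numbers, only on their ordering, and that ordering is exactly what the incentives described above produce.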
OpenAI as microcosm
The trajectory of OpenAI tells the story in miniature. Founded in 2015 as a nonprofit with the stated mission of ensuring that artificial general intelligence benefits all of humanity, it restructured in 2019 as a "capped-profit" company to take outside capital, accepted roughly $13 billion in investment from Microsoft, and then announced plans to convert fully to a for-profit entity. Each step was individually defensible: you need capital to compete, you need to compete to have influence over how the technology develops, and you need influence to ensure safety.
But the cumulative effect is that an organization created specifically to resist the commercialization of transformative AI became an exemplar of it. Sam Altman’s framing of AI as a “utility like water” is instructive in ways he may not intend. Water and electricity are regulated utilities precisely because they are essential, with public oversight, price controls, and democratic accountability. Altman invokes the analogy to normalize dependency while quietly omitting the regulatory part. He wants the cultural legitimacy of the comparison without the consequences.
This is not unique to OpenAI. The same dynamics operate at Google, Meta, Anthropic, and every other player in the field. The problem is not one villain; it is a system that rewards speed over reflection, extraction over distribution, and competition over coordination.
Capitalism’s structural blind spot
Capitalism is extraordinarily efficient at optimizing for local maxima. It can produce smartphones and streaming services and next-day delivery with remarkable precision. What it cannot do is solve problems that require global coordination, long time horizons, and the internalization of externalities. Climate change is the proof of concept: even with scientific consensus, even with consequences already visible, even with existing technology to address the problem, nation-states defect. The logic is always the same: “If we slow down, someone else gains an advantage.”
AI governance will be harder than climate. The competitive stakes are higher because whoever controls the most powerful AI systems controls an enormous amount of economic and military leverage. The timeline is shorter because AI capability is advancing faster than any energy transition. And the feedback loops are less visible because the damage from misaligned AI systems may not announce itself the way rising sea levels do.
A simulation exercise called “Intelligence Rising,” run 43 times with participants from government, industry, and academia, found the same pattern repeatedly: positive outcomes almost always required coordination between actors who had strong default incentives to compete. Left to their own devices, players raced. The rare good endings came when someone imposed structure that made cooperation individually rational, not just collectively desirable.
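The finding generalizes: coordination holds only when it changes individual payoffs, not merely collective ones. Continuing the toy model above, a hypothetical enforcement mechanism (a treaty with verification, say, with an assumed penalty and detection probability, both invented here for illustration) flips the dominant strategy once the expected penalty exceeds the gain from defecting.

```python
# Continuing the toy model: an external enforcement regime subtracts
# an expected penalty from any player who races. Both the penalty
# size and the detection probability are assumptions for illustration.

PAYOFFS = {
    ("pause", "pause"): (3, 3),
    ("pause", "race"):  (0, 5),
    ("race",  "pause"): (5, 0),
    ("race",  "race"):  (1, 1),
}

def payoff(my_move, opp_move, penalty=0.0, detection_prob=0.0):
    """Row player's payoff, minus the expected penalty for racing."""
    base = PAYOFFS[(my_move, opp_move)][0]
    if my_move == "race":
        base -= penalty * detection_prob
    return base

def best_response(opp_move, penalty=0.0, detection_prob=0.0):
    return max(("pause", "race"),
               key=lambda m: payoff(m, opp_move, penalty, detection_prob))

# Without enforcement, racing dominates.
assert best_response("pause") == "race"

# With a credible expected penalty (4.0 * 0.6 = 2.4) larger than the
# gain from defecting on a pausing rival (5 - 3 = 2), pausing becomes
# the best response to every opponent move: cooperation is now
# individually rational, not just collectively desirable.
assert best_response("pause", penalty=4.0, detection_prob=0.6) == "pause"
assert best_response("race",  penalty=4.0, detection_prob=0.6) == "pause"
```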
The filter is not dramatic
The Great Filter, if it lies ahead of us, probably does not look like a Hollywood apocalypse. It looks like a series of reasonable, individually defensible decisions that collectively produce catastrophe. It looks like every country saying “we cannot afford to fall behind” while the thing they are racing toward grows more dangerous. It looks like corporate boards optimizing quarterly returns while the technology they deploy reshapes the information environment in ways nobody fully understands. It looks like intelligent beings doing exactly what competitive pressures incentivize them to do, and being unable to stop.
The most common counterargument is that previous technologies produced similar fears and we survived. We survived nuclear weapons, industrial pollution, genetic engineering. This is true, but it is also survivorship bias of the most literal kind. We do not hear from the civilizations that did not make it. And the track record is not as clean as it looks: we came dangerously close to nuclear war on at least three occasions (the Cuban Missile Crisis, the 1983 Soviet early-warning false alarm that Stanislav Petrov correctly judged a glitch, and the Able Archer 83 war scare), and we survived those by luck as much as wisdom.
What is there to do?
I lost my job as a translator because of AI. Not dramatically, but gradually: declining rates, fewer projects, clients switching to machine translation with light human editing. The market did not collapse overnight; it eroded over years. I adapted. I am now studying electronics engineering and cybersecurity because those fields have a physical component that resists automation, at least for now.
I do not say this to solicit sympathy. I say it because the experience gives me a very concrete relationship with the abstract problem I have been describing. The AI coordination failure is not something I read about in a think-tank report. It is the reason my career ended.
And the conclusion I land on is not hopeful in the grand sense, but it is honest: the macro system is probably not going to fix itself. Nation-states will not coordinate in time. Corporations will not voluntarily slow down. The incentive structures are too powerful and the coordination mechanisms too weak.
What remains is the micro. Voltaire ends Candide with "il faut cultiver notre jardin": we must cultivate our garden. After dragging his characters through every conceivable catastrophe, from earthquakes to wars to the Inquisition, Voltaire concludes not that the world can be fixed, but that the garden can be tended. That is not resignation. It is a serious ethical position: when the systems you inhabit are beyond your ability to repair, the scale at which you operate, the people you treat fairly, and the work you do honestly still matter.
Try not to be a dick at your own scale. Adapt. Build things that work. Study hard. Take care of the people close to you. It is not enough to save the species, but it is not nothing.
The Great Filter may be ahead of us. But the garden is right here.
Written collaboratively with AI. See the blog page for more on my process.
Sources: International Institute for Management Development AI Safety Clock (2024-2026); 80,000 Hours, “Risks from power-seeking AI systems” (2025); Wikipedia, “Existential risk from artificial intelligence”; ScienceDirect, “Strategic Insights from Simulation Gaming of AI Race Dynamics” (2025); Bulletin of the Atomic Scientists, “Stopping the Clock on catastrophic AI risk” (2025); AEI, “AI Acceleration: The Solution to AI Risk” (2025)