Homo sapiens dominates Earth not through strength or intelligence, but through the unique capacity to believe in things that don't exist. Money, nations, corporations, human rights — these are shared fictions. Powerful enough to build civilizations. Fragile enough to collapse when enough people stop believing. Harari's second claim is darker: the fictions that built us may not survive what we are building next.
How do you get a million strangers to cooperate?
Not through force alone. Not through instinct. Through story.
Harari's central argument, developed in Sapiens (published in Hebrew in 2011, internationally in 2014), is that Homo sapiens dominates Earth because of one capacity no other animal shares: intersubjective belief. The ability to coordinate around things that exist only because enough people act as if they do.
A chimpanzee cannot be convinced it owes you money. It cannot believe in national borders. It has no concept of a corporation, a deity, or human rights. These things require a mind willing to treat a shared agreement as a fact of nature.
Harari calls these structures imagined orders. Legal systems. Currencies. Armies. Churches. None of them exist in the physical world the way a rock or a tree does. They exist in the space between minds — held in place by collective performance.
Pull one person's belief away, and nothing changes. Pull enough, and the whole structure goes.
This is not nihilism. It is something more unsettling. It means civilization is not built on truth. It is built on coordination. And coordination can be hacked.
The Agricultural Revolution, Harari argues, is the first proof. It looked like progress. It was, for most people who lived through it, a trap. The average Neolithic farmer worked longer hours, ate a narrower diet, and died younger than the foragers who came before. Wheat spread across continents. Human welfare declined. The story of progress was underway long before the facts supported it.
This is Harari's recurring move: separate civilizational scale from lived experience. They point in opposite directions more often than we admit.
The Man Before the Myth
What kind of historian writes a history of everything?
Harari was born in 1976 in Israel. He completed his DPhil at Oxford in 2002, under the supervision of Steven Gunn. His thesis was a specialist study of Renaissance military history — archival, precise, the opposite of everything that followed.
That discipline mattered. The narrow, archival scale of that scholarship taught him how to read sources. How to sit with a claim before endorsing it. How to notice when a story is doing more work than the evidence can support.
The pivot came at Hebrew University of Jerusalem, where Harari was asked to teach a world history course. His specialty didn't prepare him for the question the course demanded: what actually happened, across all of human time, and why?
His lecture notes became Sapiens. The Israeli edition was released in 2011. The international English edition arrived in 2014 and did not behave like an academic book. Barack Obama recommended it. So did Bill Gates. So did Mark Zuckerberg. Academic historians sharpened their knives.
More than 35 million copies have been sold, in over 45 languages.
The criticism was pointed and often fair. Harari generalizes across millennia. He smooths over scholarly debates that span entire careers. He sometimes presents a compelling frame as if it were a proven mechanism. He is, as more than one reviewer noted, a brilliant popularizer operating at the edge of what popularization can honestly do.
None of that stopped the books from working on readers. The question is why.
The answer may be the questions themselves. Harari doesn't offer comfort. He offers clarity — and the two are rarely the same thing. Millions of people encountered his work and felt, for the first time, that someone had named the machinery underneath the world they were living in.
The Cognitive Revolution
Something changed in human cognition roughly 70,000 years ago. What was it?
Harari calls this event the Cognitive Revolution. The precise cause remains debated among anthropologists and evolutionary biologists. The consequence, he argues, is not.
Before this period, human language was functional — warning calls, social signals, coordination between small groups. After it, language became capable of something new: describing things that do not exist. Gods. Spirits. Nations. Futures. Obligations between strangers who will never meet.
This is not just a communication upgrade. It is an entirely different kind of mind.
A wolf pack coordinates through instinct and hierarchy. Groups held together that way top out at roughly 150 individuals, the ceiling Harari takes from Robin Dunbar's research. Humans blew past that limit because they could coordinate around shared belief. A crusade could mobilize hundreds of thousands. A corporation can span a hundred countries. Both run on story.
The Cognitive Revolution, in Harari's account, is not the moment humans became rational. It is the moment humans became mythological. The capacity for abstract fiction is the engine of every institution that followed — and every atrocity.
This is where the argument gets uncomfortable. The same cognitive machinery that built hospitals built concentration camps. The same capacity for shared belief that enables human rights enables genocide. Harari does not look away from this. He insists that the mechanism is neutral. The story is what matters. And stories are chosen.
Homo Deus and the Useless Class
Homo Deus (2015) turned the lens from past to future. The question it asks is brutal: if the Agricultural Revolution expanded civilization at the expense of human welfare, what is the Digital Revolution doing to us right now?
Harari's answer is organized around an ideology he calls Dataism. The core proposition: information processing is the highest value in the universe. Organisms are algorithms. Markets are information processors. The brain is a biological machine running pattern recognition on sensory input.
If this frame wins — and Harari argues it is already winning — then the question of what makes human consciousness special becomes very difficult to answer.
Humanism: The individual human subject is the source of meaning and value. Consciousness matters because it feels. Rights, dignity, and democracy are grounded in the irreducible experience of being a person.

Dataism: Information processing is the universe's highest value. The relevant question is not whether a system feels, but how efficiently it processes data. A better algorithm outranks a slower consciousness.

What Dataism erodes: The story that individual experience matters. That your inner life has weight. That a vote counts because a person cast it. And the assumption that humans are necessary. If an algorithm knows you better than you know yourself, what does your self-knowledge contribute?
The concept of the useless class follows from this logic. Harari does not mean useless in a moral sense. He means economically and cognitively superfluous — people whose labor and judgment are no longer required by the systems making decisions. Not through malice. Through optimization.
This is his most alarming forecast, and the one most frequently misread. He is not predicting a robot uprising. He is predicting something quieter and harder to resist: a gradual transfer of authority from human minds to information systems, driven not by ideology but by performance. The algorithm recommends better. The algorithm diagnoses better. The algorithm allocates better. At what point does the human in the loop become a formality?
21 Lessons for the 21st Century (2018) addressed this more directly, turning from historical argument to present-tense urgency. It asked what we should teach children when no one can predict what skills will matter in twenty years. It grappled with algorithmic manipulation, the fragility of liberal democracy's founding story, and the risk of nuclear war in an era of renewed nationalism. Critics noted, fairly, that the problem statements were sharper than the solutions. Harari's diagnostics are extraordinary. His prescriptions are thinner.
Nexus and the AI Warning
By 2023, Harari had moved from theorist to activist. He signed open letters calling for AI regulation. He addressed governments and global forums. Nexus (2024) extended his argument into explicit warning territory, and he became one of the most prominent public voices arguing that the AI moment is categorically different from previous technological shifts.
The argument is precise. And it is easy to misread.
Harari is not claiming AI will become conscious. He is not predicting Skynet. His claim is both simpler and stranger: AI doesn't need to be conscious to seize control of the stories that run the world.
Every human institution — every imagined order — depends on shared narrative. Laws require interpretation. Markets require trust. Democracies require information that citizens can evaluate. AI systems, operating at scale and speed no human institution can match, are already intervening in all three.
They recommend what we read. They shape what we believe. They optimize for engagement metrics that have no stake in whether the underlying content is true or whether the society consuming it remains functional.
The danger is not that AI will want to destroy democracy. The danger is that democracy's fictions — informed consent, deliberation, the meaningful vote — become technically inoperable before anyone formally decides to abandon them.
Harari's concern is structural, not conspiratorial. The imagined orders that built liberal civilization were designed for human-speed information flow. They were not designed for systems that can generate personalized political content at industrial scale, simulate trusted voices, and adapt in real time to resistance.
The stress test has begun. The fictions are showing cracks.
The Critics Are Right and It Doesn't Matter
What do academic historians actually object to?
Most of the serious criticism falls into three categories. First, Harari overgeneralizes. He covers 70,000 years in 400 pages and loses precision at every turn. Second, he presents interpretive frameworks as established facts. The Cognitive Revolution at 70,000 years ago is one plausible account, not consensus. Third, he sometimes mistakes rhetorical momentum for causal explanation. Things follow in Sapiens because the prose is so assured — not always because the evidence demands it.
These criticisms are legitimate. They are also, in a specific sense, beside the point.
Harari is not writing for specialists. He is writing for people who have never been given a framework for thinking about what civilization is, how contingent it is, or what threatens it. For that audience, the level of generalization is not a flaw. It is the price of access.
The more interesting critique is philosophical. If all civilizations run on shared fictions, what makes one fiction worth defending over another? Harari describes the mechanism with precision. He rarely adjudicates value. He can tell you that human rights are an imagined order. He is less clear on why you should fight for them when they are threatened.
This is not a small gap. It is the gap between analysis and ethics. Harari tends to occupy the first and approach the second obliquely. The result is a body of work that is extraordinarily good at naming what is happening and genuinely uncertain about what to do.
He has said in interviews that meditation — specifically Vipassana practice, which he undertakes for two hours daily and on extended retreats — is what allows him to observe his own mind without being captured by its stories. This is not incidental. It is, arguably, the hidden premise of his entire project: that clarity requires distance from the fictions you are analyzing.
Whether that distance is achievable, or whether it is itself a kind of fiction, is a question he leaves open.
Fictions Under Pressure
The same stories that built civilization are now being stress-tested by systems that have no stake in them.
AI has no investment in democracy. Algorithms don't subscribe to human dignity. Market logic doesn't care whether a society remains coherent. These systems were built by humans who do hold those values — but the systems themselves do not inherit them.
This is the deepest implication in Harari's work, and the one he states least directly: we have never been in control of the stories that control us. The Cognitive Revolution gave us mythological thinking. The Agricultural Revolution gave us hierarchy and surplus. The Industrial Revolution gave us mass coordination and mass destruction. At every stage, the story came first and the consequences followed — and no one chose the consequences.
The Digital Revolution is doing this again. Faster. At a scale that makes previous disruptions look local.
What would it mean to choose, this time? To build the story before the consequences arrive? Harari does not pretend this is easy. He does insist it is the only serious question on the table.
Harari's answer is self-governance, and he insists it must be built now.
Not self-governance in the thin political sense — ballot boxes and term limits. Self-governance as a civilization-level practice: deciding which fictions to keep, which to let die, and which systems to constrain before they constrain you. The imagined orders of the next century are being written now. Not in legislatures. In training data, in platform design, in the choices that determine what information billions of people encounter and in what order.
Harari's life work is an argument that this has always been happening. The difference is that it used to happen slowly enough that we could pretend otherwise.
That pretense is no longer available.
Open Questions

If every moral framework is a shared fiction, how do you choose which fiction to defend without appealing to another fiction beneath it?
Harari argues AI is dangerous not because it will become conscious, but because it doesn't need to be — but who decides when a non-conscious system has accumulated too much authority over human meaning-making?
The Cognitive Revolution gave humans the capacity for myth at scale; if that capacity is now being outsourced to information systems, what remains distinctly human about human civilization?
Harari observes from a distance he attributes to meditation practice — but is the view from outside the story a genuine position, or just another story about standing outside stories?
If the useless class is defined by economic and cognitive obsolescence, and that definition is set by systems optimizing for efficiency, what political fiction could possibly survive the pressure to accept that framing?