BOOK REVIEW: More Everything Forever and the Architecture of Silicon Valley’s Future Myths
Adam Becker’s More Everything Forever examines a set of ideas that shape how parts of the technology sector imagine the future. Becker is not interested in the everyday discourse that surrounds artificial intelligence or automation. His focus is the intellectual ecosystem around Silicon Valley’s most influential founders, funders, and thinkers—the communities that frame AI development through long time horizons, exponential curves, and civilisational stakes. The book works as an excavation of the conceptual scaffolding that underpins these narratives.
Becker identifies a worldview organised around abstraction. Future-oriented calculations displace attention from present-day harm. Long-range expected-value equations stand in for ethics. Engineering metaphors substitute for political institutions. Across different domains—moral philosophy, AI speculation, space colonisation—Becker finds the same pattern: complex realities are stripped down to simplified systems that scale cleanly on paper and fail when confronted with the world as it is.
Longtermism and the arithmetic of the distant future
One of Becker’s central targets is longtermism, a philosophy that prioritises the welfare of hypothetical future beings over people alive today. Becker’s critique is not simply that the focus is distant; it is that the reasoning depends on multiplying immense hypothetical populations by extremely small probabilities. The result is that speculative research on far-future technologies can be deemed more valuable than improving the lives of billions of people in the present.
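A stylised calculation shows how this arithmetic tilts (the figures here are illustrative, not Becker's): suppose a far-future civilisation could support $10^{30}$ lives, and assign even a one-in-a-billion chance that some intervention today helps secure them. The expected value is then

\[
10^{-9} \times 10^{30} = 10^{21} \ \text{future lives},
\]

a number that swamps the roughly $8 \times 10^{9}$ people alive now. And because the hypothetical population can always be made larger, the product stays enormous no matter how speculative the probability becomes.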
Becker highlights how this line of reasoning can tilt moral attention away from existing inequality. By design, the calculus makes present harms mathematically negligible. The problem, for Becker, is not the idea that future people matter; it is that certain abstractions allow present suffering to disappear from view entirely.
The Singularity and the promise of accelerating returns
A second component of the worldview Becker analyses is the conviction that exponential technological growth will soon produce artificial general intelligence or “superintelligence.” This belief rests on the assumption that current trends—most notably Moore’s Law—will continue indefinitely.
Becker counters this with empirical observations familiar to researchers: semiconductor scaling is slowing, research productivity is falling in many fields, and biological intelligence cannot be reduced to a neat digital analogy. These points are not novel in themselves, but Becker uses them to show how the narrative of inevitable acceleration persists even when material constraints suggest a more complex picture. It is the persistence of the abstraction, not the details of the trend line, that interests him.
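A back-of-the-envelope extrapolation gives a sense of the scale involved (the numbers are illustrative, not drawn from the book): Moore's Law in its classic form posits a doubling of transistor density roughly every two years, a growth factor of

\[
2^{t/2}
\]

after $t$ years. Fifty more years of uninterrupted doubling would therefore mean a factor of $2^{25} \approx 3.4 \times 10^{7}$; the singularity narrative depends on curves of this kind continuing, which is precisely where the empirical evidence gives out.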
Existential risk and the displacement of present harms
Becker also traces how certain elite groups have made “AI existential risk” a central organising story. The well-known paperclip thought experiment functions here as an emblem: a simplified model of an artificial agent pursuing a goal without regard for human welfare.
Becker’s concern is the opportunity cost of attention. While researchers such as Timnit Gebru, Emily Bender, and others document present harms—carbon costs, linguistic biases, discriminatory outputs—public debate is repeatedly pulled toward speculative future catastrophes. Meredith Whittaker is quoted to good effect: when the problem is framed in a fantasy world, the solutions migrate there as well. Becker shows how existential narratives can crowd out governance discussions grounded in material impacts.
Utopias built on controlled environments
Becker then turns to the futures envisioned by figures like Ray Kurzweil and Elon Musk. These imagined worlds involve space settlements, large-scale planetary engineering, or the conversion of matter into computational substrate. Becker’s point is not to dismiss ambition but to highlight a recurring preference for controlled, predictable environments over the complexities of natural systems. The result is a form of utopian thinking in which nature becomes either a problem to escape or a resource to reorganise for technical ends.
The critique here is structural: a worldview built on simplifying assumptions can lead to futures that are inhospitable to the very forms of life it aims to preserve.
The intellectual lineage behind the rhetoric
Becker closes by mapping some of the contested intellectual history surrounding these communities: the rationalist and effective altruist movements, their entanglements with controversial claims about intelligence, and the echoes of earlier techno-futurist manifestos. His argument is that when a worldview is constructed from highly abstract models of humanity, it can become vulnerable to reductionist conclusions about people themselves. The point is not that the movements are uniform—they are not—but that certain baseline assumptions can create openings for ideas that flatten human complexity.
A book for governance discussions, not just tech criticism
More Everything Forever is not an attack on technology or future-thinking. It is a study of how certain influential groups reason about the future and how those modes of reasoning scale into public discourse. For students of AI governance, the value of the book lies in its account of how narratives take shape: how specific epistemic communities define risks; how simplified models travel into policy debates; and how speculative futures can overshadow present harms.
Silicon Valley’s most ambitious visions are not only technical blueprints but political imaginaries. Becker’s book is useful for understanding the assumptions that sit underneath them—and for recognising that the governance of algorithmic systems requires grounding decisions in empirical harms rather than in abstractions that treat complexity as a nuisance to be engineered away.