{"id":3513723,"date":"2025-06-12T09:09:33","date_gmt":"2025-06-12T09:09:33","guid":{"rendered":"https:\/\/www.resilience.org\/?p=3513723"},"modified":"2025-06-12T09:09:33","modified_gmt":"2025-06-12T09:09:33","slug":"ai-utopia-ai-apocalypse-and-ai-reality","status":"publish","type":"post","link":"https:\/\/www.resilience.org\/stories\/2025-06-12\/ai-utopia-ai-apocalypse-and-ai-reality\/","title":{"rendered":"AI Utopia, AI Apocalypse, and AI Reality"},"content":{"rendered":"<p>Recent articles and books about artificial intelligence (AI) offer images of the future that align like iron filings around two magnetic poles\u2014<em>utopia <\/em>and <em>apocalypse<\/em>.<\/p>\n<p>On one hand, AI is said to be leading us toward a perfect future of ease, health, and broadened understanding. We, aided by our machines and their large language models (LLMs), will know virtually everything and make all the right choices to usher in a permanent era of enlightenment and plenty. On the other hand, AI is poised to thrust us into a future of unemployment, environmental destruction, and delusion. Our machines will gobble scarce resources while churning out disinformation and making deadly weapons that <a href=\"https:\/\/www.ibm.com\/think\/topics\/ai-agents\">AI agents<\/a> will use to wipe us out once we\u2019re of no further use to them.<\/p>\n<p>Utopia and apocalypse have long exerted powerful pulls on human imagination and behavior. (My first book, published in 1989 and updated in 1995, was <em>Memories and Visions of Paradise: Exploring the Universal Myth of a Lost Golden Age; <\/em>it examined the history and meaning of the utopian archetype.) New technologies tend to energize these two polar attractors in our collective psyche because toolmaking and language are humanity\u2019s two <a href=\"https:\/\/power.postcarbon.org\/\">superpowers<\/a>, which have enabled our species to take over the world, while also bringing us to a point of <a href=\"https:\/\/www.postcarbon.org\/publications\/welcome-to-the-great-unraveling\/\">existential peril<\/a>. New technologies increase some people\u2019s power over nature and other people, producing benefits that, mentally extrapolated forward in time, encourage expectations of a grand future. But new technologies also come with costs (resource depletion, pollution, increased economic inequality, accidents, and misuse) that evoke fears of an ultimate reckoning. Language supercharges our toolmaking talent by enabling us to learn from others; it is also the vehicle for formulating and expressing our hopes and fears. AI, because it is both technological and linguistic, and because it is being adopted at a frantic pace and so disruptively, is especially prone to triggering the utopia\/apocalypse reflex.<\/p>\n<p>We humans have been ambivalent about technology at least since our adoption of writing. Tools enable us to steal fire from the gods, like the mythical Prometheus, whom the gods punished with eternal torment; they are the wings of Icarus, who flies too close to the sun and falls to his death. AI promises to make technology autonomously intelligent, thus calling to mind still another cautionary tale, \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/The_Sorcerer%27s_Apprentice\">The Sorcerer\u2019s Apprentice<\/a>.\u201d<\/p>\n<p>What could go right\u2014or wrong? After summarizing both the utopian and apocalyptic visions for AI, I\u2019ll explore two questions: first, how do these extreme visions help or mislead us in our attempts to understand AI? 
And second, whom do these visions serve? As we\u2019ll see, there are some early hints of AI\u2019s ultimate limits, which suggest a future that doesn\u2019t align well with many of the highest hopes or deepest fears for the new technology.<\/p>\n<h3>AI Utopia<\/h3>\n<p>As a writer, I generally don\u2019t deliberately use AI. Nevertheless, in researching this article I couldn\u2019t resist asking Google\u2019s free AI Overview, \u201cWhat is the utopian vision for AI?\u201d This came back a fraction of a second later:<\/p>\n<blockquote><p>\u201cThe utopian vision for AI envisions a future where AI seamlessly integrates into human life, boosting productivity, innovation, and overall well-being. It\u2019s a world where AI solves complex problems like climate change and disease, and helps humanity achieve new heights.\u201d<\/p><\/blockquote>\n<p>Google Overview\u2019s first sentence needs editing to remove verbal redundancy (vision, envisions), but AI does succeed in cobbling together a serviceable summary of its promoters\u2019 dreams.<\/p>\n<p>The same message is on display in longer form in the article \u201c<a href=\"https:\/\/medium.com\/r-planet-together\/visions-of-ai-utopia-bb0002174e3a#:~:text=Perhaps%20one%20of%20the%20key%20utopian%20areas,ability%20to%20accelerate%20medical%20and%20scientific%20advancements.&amp;text=AI%20will%20be%20able%20to%20point%20policymakers,heavily%20skewed%20by%20the%20wants%20of%20lobbyists.\">Visions of AI Utopia<\/a>\u201d by Future Sight Echo, who informs us that AI will soften the impacts of economic inequality by delivering resources more efficiently and \u201cin a way that is dynamic and able to adapt instantly to new information and circumstances.\u201d Increased efficiency will also reduce humanity\u2019s impact on the environment by minimizing energy requirements and waste of all kinds.<\/p>\n<p>But that\u2019s only the start. Education, creativity, health and longevity, translation and cultural understanding, companionship and care, governance and legal representation\u2014all will be revolutionized by AI.<\/p>\n<p>There is abundant evidence that people with money share these hopes for AI. The hottest stocks on Wall Street (notably <a href=\"https:\/\/www.fool.com\/investing\/2025\/06\/01\/prediction-nvidia-stock-will-soar-in-2025\/\">Nvidia<\/a>) are AI-related, as are many of the corporations that contribute significantly to the NPR station I listen to in Northern California, thereby gaining naming rights at the top of the hour.<\/p>\n<p>Capital is being shoveled in the general direction of AI so rapidly (roughly <a href=\"https:\/\/www.cnbc.com\/2025\/02\/08\/tech-megacaps-to-spend-more-than-300-billion-in-2025-to-win-in-ai.html\">$300 billion<\/a> just this year, in the US alone) that, if its advertised potential is even half believable, we should all rest assured that most human problems will soon vanish.<\/p>\n<p>Or will they?<\/p>\n<h3>AI Apocalypse<\/h3>\n<p>Strangely, when I initially asked Google\u2019s AI, \u201cWhat is the vision for AI apocalypse?\u201d, its response was, \u201cAn AI Overview is not available for this search.\u201d Maybe I didn\u2019t word my question well. Or perhaps AI sensed my hostility. Full disclosure: I\u2019ve <a href=\"https:\/\/www.resilience.org\/stories\/2024-03-21\/why-artificial-intelligence-must-be-stopped-now\/\">gone on record<\/a> calling for AI to be banned immediately. 
(Later, AI Overview was more cooperative, offering a lengthy summary of \u201ccommon themes in the vision of an AI apocalypse.\u201d) My reason for proposing an AI ban is that AI gives us humans more power, via language and technology, than we already have; and that, collectively, we already have way too much power vis-\u00e0-vis the rest of nature. We\u2019re overwhelming ecosystems through resource extraction and waste dumping to such a degree that, if current trends continue, wild nature may disappear <a href=\"https:\/\/www.independent.co.uk\/climate-change\/news\/wilderness-wild-land-disappear-amazon-sahara-anthropocene-endangered-animals-a7232311.html\">by the end of the century<\/a>. Further, the most powerful humans are increasingly <a href=\"https:\/\/www.wired.com\/story\/editor-letter-rich-men-rule-the-world\/\">overwhelming everyone else<\/a>, both economically and militarily. Exerting our power more intelligently probably won\u2019t help, because we\u2019re already <a href=\"https:\/\/www.resilience.org\/stories\/2025-01-28\/are-we-too-smart-for-our-own-good\/\">too smart for our own good<\/a>. The last thing we should do is cut language off from biology so that it can exist entirely in a simulated techno-universe.<\/p>\n<p>Let\u2019s be specific. What, exactly, could go wrong because of AI? For starters, AI could make some already bad things worse\u2014in both nature and society.<\/p>\n<p>There are many ways in which humanity is already destabilizing planetary environmental systems; climate change is the way that\u2019s most often discussed. Through its <a href=\"https:\/\/www.energypolicy.columbia.edu\/projecting-the-electricity-demand-growth-of-generative-ai-large-language-models-in-the-us\/\">massive energy demand<\/a>, AI could accelerate climate change by generating more carbon emissions. According to the <a href=\"https:\/\/www.iea.org\/news\/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works\">International Energy Agency<\/a>, \u201cDriven by AI use, the US economy is set to consume more electricity in 2030 for processing data than for manufacturing all energy-intensive goods combined, including aluminum, steel, cement and chemicals.\u201d The world also faces worsening water shortages; AI needs <a href=\"https:\/\/www.bloomberg.com\/graphics\/2025-ai-impacts-data-centers-water-data\/\">vast amounts<\/a>. Nature is already reeling from humanity\u2019s accelerating rates of resource extraction and depletion. AI requires millions of tons of copper, steel, cement, and other raw materials, and suppliers are targeting <a href=\"https:\/\/restofworld.org\/2025\/ai-resource-extraction-chile-indigenous-communities\/\">Indigenous lands<\/a> for new mines.<\/p>\n<p>We already have plenty of social problems, too, headlined by worsening economic inequality. AI could <a href=\"https:\/\/www.cgdev.org\/blog\/three-reasons-why-ai-may-widen-global-inequality\">widen the divide between rich and poor<\/a> by replacing lower-skilled workers with machines while greatly increasing the wealth of those who control the technology. 
Many people worry that corporations have gained too much political influence; AI could <a href=\"https:\/\/www.theverge.com\/23667752\/ai-progress-2023-report-stanford-corporate-control\">accelerate<\/a> this trend by making the gathering and processing of massive amounts of data on literally everyone cheaper and easier, and by facilitating the consolidation of monopolies. Unemployment is always a problem in capitalist societies, but AI threatens to quickly throw millions of white-collar workers off payrolls: <a href=\"https:\/\/www.anthropic.com\/\">Anthropic\u2019s<\/a> CEO Dario Amodei <a href=\"https:\/\/www.axios.com\/2025\/05\/28\/ai-jobs-white-collar-unemployment-anthropic?utm_source=substack&amp;utm_medium=email\">predicts<\/a> that AI could eliminate half of entry-level white-collar jobs within five years, while Bill Gates <a href=\"https:\/\/jasondeegan.com\/bill-gates-predicts-only-three-jobs-will-survive-the-ai-revolution\/#google_vignette\">forecasts<\/a> that only three job fields will survive AI: energy, biology, and AI system programming.<\/p>\n<p>However, the most horrific visions for AI go beyond just making bad things worse. The title of a recent episode of <em>The Bulwark Podcast<\/em>, \u201c<a href=\"https:\/\/www.youtube.com\/watch?v=OfvoyF1PV8Q\">Will Sam Altman and His AI Kill Us All<\/a>?\u201d, states the worst-case scenario bluntly. But how, exactly, could AI kill us all? One way is by automating military decisions while making weapons cheaper and more lethal (a recent <a href=\"https:\/\/www.brookings.edu\/articles\/how-unchecked-ai-could-trigger-a-nuclear-war\/\">Brookings commentary<\/a> was titled, \u201cHow Unchecked AI Could Trigger a Nuclear War\u201d). Veering toward dystopian sci-fi, some AI philosophers opine that the technology, once it\u2019s significantly smarter than people, might come to view biological humans as pointless wasters of resources that machines could use more efficiently. At that point, AI could pursue multiple pathways to <a href=\"https:\/\/www.bbc.com\/news\/uk-65746524\">terminate humanity<\/a>.<\/p>\n<h3>AI Reality<\/h3>\n<p>I don\u2019t know the details of how AI will unfold in the months and years to come. But the same could be said for AI industry leaders. They certainly understand the technology better than I do, but their AI forecasts may miss a crucial factor. You see, I\u2019ve trained myself over the years to look for <a href=\"https:\/\/www.scientificamerican.com\/article\/the-delusion-of-infinite-economic-growth\/\">limits<\/a> in resources, energy, materials, and social systems. Most people who work in the fields of finance and technology tend to ignore limits, or even to <a href=\"https:\/\/www.adamsmith.org\/blog\/of-course-you-can-have-infinite-growth-on-a-finite-planet\">believe<\/a> that there are none. This leads them to absurdities, such as Elon Musk\u2019s expectation of <a href=\"https:\/\/defector.com\/neither-elon-musk-nor-anybody-else-will-ever-colonize-mars\">colonizing Mars<\/a>. Earth is finite, humans will be confined to this planet forever, and therefore lots of things we can imagine doing just won\u2019t happen. 
I would argue that discussions about AI\u2019s promise and peril need a dose of limits awareness.<\/p>\n<p>Arvind Narayanan and Sayash Kapoor, in an essay titled \u201c<a href=\"https:\/\/knightcolumbia.org\/content\/ai-as-normal-technology\">AI Is Normal Technology<\/a>,\u201d offer some of that awareness. They argue that AI development will be constrained by the speed of human organizational and institutional change and by \u201chard limits to the speed of knowledge acquisition because of the social costs of experimentation.\u201d However, the authors do not take the position that, because of these limits, AI will have only minor impacts on society; they see it as an amplifier of systemic risks.<\/p>\n<p>In addition to the social limits Narayanan and Kapoor discuss, there will also (as mentioned above) be environmental limits to the energy, water, and materials that AI needs, a subject explored at a <a href=\"https:\/\/itcc.ieee.org\/blog\/the-hidden-cost-of-ai-unpacking-its-energy-and-water-footprint\/\">recent conference<\/a>.<\/p>\n<p>Finally, there\u2019s a crucial limit to AI development that\u2019s inherent in the technology itself. Large language models need vast amounts of high-quality data. However, as more information workers are replaced by AI, or start using AI to help generate content (both trends are accelerating), <a href=\"https:\/\/futurism.com\/ai-models-falling-apart\">more of the data available to AI will be AI-generated<\/a> rather than being produced by experienced researchers who are constantly checking it against the real world. Which means AI could become trapped in a cycle of declining information quality. Tech insiders call this \u201cAI model collapse,\u201d and there\u2019s no realistic plan to stop it. AI itself can\u2019t help.<\/p>\n<p>In his article \u201c<a href=\"https:\/\/www.theregister.com\/2025\/05\/27\/opinion_column_ai_model_collapse\/\">Some Signs of AI Model Collapse Begin to Reveal Themselves<\/a>,\u201d Steven J. Vaughan-Nichols argues that this is already happening. There have been widely reported instances of AI inadvertently generating <a href=\"https:\/\/misinforeview.hks.harvard.edu\/article\/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation\/\">fake scientific research documents<\/a>. The <em>Chicago Sun-Times<\/em> recently published a \u201cBest of Summer\u201d feature that included <a href=\"https:\/\/www.npr.org\/2025\/05\/20\/nx-s1-5405022\/fake-summer-reading-list-ai\">forthcoming novels that don\u2019t exist<\/a>. And the Trump administration\u2019s widely heralded \u201cMake America Healthy Again\u201d report included citations (evidently AI-generated) for <a href=\"https:\/\/www.notus.org\/health-science\/make-america-healthy-again-report-citation-errors\">non-existent studies<\/a>. Most of us have come to expect that new technologies will have bugs that engineers will gradually remove or work around, resulting in improved performance. With AI, errors and hallucination problems may just get worse, in a cascading crescendo.<\/p>\n<p>Just as there are limits to <a href=\"https:\/\/mahb.stanford.edu\/library-item\/fossil-fuels-run\/\">fossil-fueled<\/a> utopia, <a href=\"https:\/\/discoveryalert.com.au\/news\/uranium-supply-challenges-2025-industry-issues\/\">nuclear<\/a> utopia, and <a href=\"https:\/\/dothemath.ucsd.edu\/2012\/04\/economist-meets-physicist\/\">perpetual-growth capitalist<\/a> utopia, there are limits to AI utopia. 
By the same token, <a href=\"https:\/\/www.scientificamerican.com\/article\/could-ai-really-kill-off-humans\/\">limits<\/a> may prevent AI from becoming an all-powerful grim reaper.<\/p>\n<p>What will be the real future of AI? Here\u2019s a broad-brush prediction (details are currently unavailable due to my failure to upgrade my crystal ball\u2019s operating system). Over the next few years, corporations and governments will continue to invest rapidly in AI, driven by its ability to cut labor costs. We will become systemically dependent on the technology. AI will reshape society\u2014employment, daily life, knowledge production, education, and wealth distribution. Then, speeding up as it goes, AI will degenerate into a hallucinating, blithering cacophony of little voices spewing nonsense. Real companies, institutions, and households will suffer as a result. Then, we\u2019ll either figure out how to live without AI, or confine it to relatively limited tasks and data sets. America got a small foretaste of this future recently, when Musk-led DOGE fired tens of thousands of federal workers with the expectation of replacing many of them with AI\u2014without knowing whether AI could do their jobs (oops: thousands are being <a href=\"https:\/\/www.cbsnews.com\/news\/federal-workers-fired-rehired-job-uncertainty-confusing\/\">rehired<\/a>).<\/p>\n<p>A messy neither-this-nor-that future is not what you\u2019d expect if you spend time reading documents like \u201c<a href=\"https:\/\/ai-2027.com\/\">AI 2027<\/a>,\u201d five industry insiders\u2019 detailed speculative narrative of the imminent AI future, which allows readers to choose the story\u2019s ending. Option A, \u201cslowdown,\u201d leads to a future in which AI is merely an obedient, super-competent helper; while in option B, \u201crace,\u201d humanity is extinguished by an AI-deployed bioweapon because people take up land that could be better used for more data centers. Again, we see the persistent, binary utopia-or-apocalypse stereotype, here presented with impressive (though misleading) specificity.<\/p>\n<p>At the start of this article, I attributed AI utopia\/apocalypse discourse to a deep-seated tic in our collective human unconscious. But there\u2019s probably more going on here. In her recent book <a href=\"https:\/\/www.penguinrandomhouse.com\/books\/743569\/empire-of-ai-by-karen-hao\/\"><em>Empire of AI<\/em><\/a>, tech journalist Karen Hao traces polarized AI visions back to the founding of OpenAI by Sam Altman and Elon Musk. Both were, by turns, dreamers and doomers. Their consistent message: <em>we <\/em>(i.e., Altman, Musk, and their peers) are the only ones who can be trusted to shepherd the process of AI development, including its regulation, because we\u2019re the only ones who understand the technology. Hao makes the point that messages about both the promise and the peril of AI are often crafted by powerful people seeking to consolidate their control over the AI industry.<\/p>\n<p>Utopia and apocalypse feature prominently in the rhetoric of all cults. It\u2019s no surprise, but still a bit of a revelation, therefore, to hear Hao conclude in a podcast interview that <a href=\"https:\/\/www.youtube.com\/watch?v=6ovuMoW2EGk\">AI is a cult<\/a> (if it walks, quacks, and swims like a cult . . . ). And we are all being swept up in it.<\/p>\n<p>So, how should we think about AI in a non-cultish way? 
In his article, \u201c<a href=\"https:\/\/theconversation.com\/we-need-to-stop-pretending-ai-is-intelligent-heres-how-254090\">We Need to Stop Pretending AI Is Intelligent<\/a>,\u201d Guillaume Thierry, a professor of cognitive neuroscience, writes, \u201cWe must stop giving AI human traits.\u201d Machines, even apparently smart ones, are not humans\u2014full stop. Treating them as if they <em>are<\/em> human will bring dehumanizing results for real, flesh-and-blood people.<\/p>\n<p>The collapse of civilization won\u2019t be AI-generated. That\u2019s because environmental-social decline was already happening without any help from LLMs. AI is merely adding a novel factor in humanity\u2019s larger reckoning with limits. In the short run, the technology will further concentrate wealth. \u201cLike empires of old,\u201d writes Karen Hao, \u201cthe new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.\u201d In the longer run, AI will deplete scarce resources faster.<\/p>\n<p>If AI is unlikely to be the bringer of destruction, it\u2019s just as unlikely to deliver heaven on Earth. Just last week I heard from a writer friend who used AI to improve her book proposal. The next day, I went to my doctor for a checkup, and he used AI to survey my vital signs and symptoms; I may experience better health maintenance as a result. That same day, I read a just-published <a href=\"https:\/\/ml-site.cdn-apple.com\/papers\/the-illusion-of-thinking.pdf\">Apple research paper<\/a> that concludes LLMs cannot reason reliably. Clearly, AI can offer tangible benefits within some fields of human pursuit. But we are fooling ourselves if we assume that AI can do our thinking for us. If we can\u2019t build an equitable, sustainable society on our own, it\u2019s pointless to hope that a machine that can\u2019t think straight will do it for us.<\/p>\n<p>I\u2019m not currently in the job market and therefore can afford to sit on the sidelines and cast judgment on AI. For many others, economic survival depends on adopting the new technology. Finding a personal modus vivendi with new tools that may have dangerous and destructive side effects on society is somewhat analogous to charting a sane and survivable daily path in <a href=\"https:\/\/www.nytimes.com\/2025\/06\/09\/opinion\/trump-shock-exhaustion.html?searchResultPosition=2\">a nation succumbing to authoritarian rule<\/a>. We all want to avoid complicity in awful outcomes, while no one wants to be targeted or denied opportunity. Rhetorically connecting AI with dictatorial power makes sense: one of the most likely uses of the new technology will be for mass surveillance.<\/p>\n<p>Maybe the best advice for people concerned about AI would be analogous to <a href=\"https:\/\/protectdemocracy.org\/how-to-protect-democracy\/\">advice that democracy advocates are giving<\/a> to people worried about the destruction of the social-governmental scaffolding that has long supported Americans\u2019 freedoms and rights: identify your circles of concern, influence, and control; scrutinize your sources of information and tangibly support those with the most accuracy and courage, and the least bias; and forge communitarian bonds with real people.<\/p>\n<p>AI seems to present a spectacular new slate of opportunities and threats. But, in essence, much of what was true before AI remains so now. Human greed and desire for greater control over nature and other people may lead toward paths of short-term gain. 
But, if you want a good life when all\u2019s said and done, learn to live well within limits. Live with honesty, modesty, and generosity. AI can\u2019t help you with that.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI seems to present a spectacular new slate of opportunities and threats. But, in essence, much of what was true before AI remains so now.<\/p>\n","protected":false},"author":128238,"featured_media":3513727,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[79720,213535],"tags":[],"class_list":["post-3513723","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-society","category-society-featured"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/posts\/3513723","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/users\/128238"}],"replies":[{"embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/comments?post=3513723"}],"version-history":[{"count":3,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/posts\/3513723\/revisions"}],"predecessor-version":[{"id":3513729,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/posts\/3513723\/revisions\/3513729"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/media\/3513727"}],"wp:attachment":[{"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/media?parent=3513723"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/categories?post=3513723"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/tags?post=3513723"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}