{"id":3500284,"date":"2024-03-21T10:52:19","date_gmt":"2024-03-21T10:52:19","guid":{"rendered":"https:\/\/www.resilience.org\/?p=3500284"},"modified":"2024-03-25T20:12:43","modified_gmt":"2024-03-25T20:12:43","slug":"why-artificial-intelligence-must-be-stopped-now","status":"publish","type":"post","link":"https:\/\/www.resilience.org\/stories\/2024-03-21\/why-artificial-intelligence-must-be-stopped-now\/","title":{"rendered":"Why Artificial Intelligence Must Be Stopped Now"},"content":{"rendered":"<p>The promise of AI is eclipsed by its perils, which include our own annihilation.<\/p>\n<h3>Introduction<\/h3>\n<p>Those advocating for artificial intelligence tout the huge benefits of using this technology. For instance, an article in CNN points out how AI is helping Princeton scientists solve\u00a0<a class=\"external text\" href=\"https:\/\/www.cnn.com\/2024\/02\/21\/climate\/nuclear-fusion-ai-climate-solution\/index.html\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>\u201ca key problem\u201d with fusion energy<\/u><\/a>. AI that can translate text to audio and audio to text is making information more accessible. Many digital tasks can be done faster using this technology.<\/p>\n<p>However, any advantages that AI may promise are eclipsed by the cataclysmic dangers of this controversial new technology. 
Humanity has a narrow chance to stop a technological revolution whose unintended negative consequences will vastly outweigh any short-term benefits.<\/p>\n<p>In the early 20th century, people (notably in the United States) could conceivably have stopped the proliferation of automobiles by focusing on improving public transit, thereby saving enormous amounts of energy, avoiding billions of tons of greenhouse gas emissions, and preventing the loss of more than\u00a0<a class=\"external text\" href=\"https:\/\/www.usatoday.com\/money\/blueprint\/auto-insurance\/fatal-car-crash-statistics\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>40,000<\/u>\u00a0<u>lives<\/u><\/a>\u00a0in car accidents each year in the U.S. alone. But we didn\u2019t do that.<\/p>\n<p>At mid-century, we might have been able to stave off the development of the atomic bomb and avert the apocalyptic dangers we now face. We missed that opportunity, too. (New nukes are still being\u00a0<a class=\"external text\" href=\"https:\/\/www.defense.gov\/News\/Releases\/Release\/Article\/3571660\/department-of-defense-announces-pursuit-of-b61-gravity-bomb-variant\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>designed and built<\/u><\/a>.)<\/p>\n<p>In the late 20th century, regulations guided by the\u00a0<a class=\"external text\" href=\"https:\/\/www.sciencedirect.com\/topics\/earth-and-planetary-sciences\/precautionary-principle\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>precautionary principle<\/u><\/a>\u00a0could have prevented the spread of\u00a0<a class=\"external text\" href=\"https:\/\/richardheinberg.com\/museletter-366-why-2-is-the-most-dangerous-number\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>toxic chemicals<\/u><\/a>\u00a0that now poison the entire planet. 
We failed in that instance as well.<\/p>\n<p>Now we have one more chance.<\/p>\n<p>With AI, humanity is outsourcing its executive control of nearly every key sector\u2014finance, warfare, medicine, and agriculture\u2014to algorithms with no moral capacity.<\/p>\n<p>If you are wondering what could go wrong, the answer is plenty.<\/p>\n<p>The window of opportunity for stopping AI, if it still exists, will soon close. AI is being commercialized\u00a0<a class=\"external text\" href=\"https:\/\/blog.box.com\/state-of-enterprise-ai-adoption-in-2024\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>faster<\/u><\/a>\u00a0than other major technologies. Indeed, speed is its essence: It self-evolves through machine learning, with each iteration far outdistancing\u00a0<a class=\"external text\" href=\"https:\/\/ourworldindata.org\/moores-law\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>Moore\u2019s Law<\/u><\/a>.<\/p>\n<p>And because AI is being used to accelerate all things that have major impacts on the planet (manufacturing, transport, communication, and resource extraction), it is an uber-threat not only to the survival of humanity but also to all life on Earth.<\/p>\n<h3>AI Dangers Are Cascading<\/h3>\n<p>In June 2023, I wrote an\u00a0<a class=\"external text\" href=\"https:\/\/www.commondreams.org\/opinion\/if-youre-driving-off-a-cliff-do-you-need-a-faster-car\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>article<\/u><\/a>\u00a0outlining some of AI\u2019s dangers. Now, that article is quaintly outdated. 
In just a brief period, AI has revealed more dangerous implications than many of us could have imagined.<\/p>\n<p>In an article titled \u201c<a class=\"external text\" href=\"https:\/\/www.scanthehorizon.org\/p\/dnai-the-artificial-intelligence\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>DNAI\u2014The Artificial Intelligence\/Artificial Life Convergence<\/u><\/a>,\u201d Jim Thomas reports on the prospects for \u201cextreme genetic engineering\u201d provided by AI. If artificial intelligence is good at generating text and images, it is also super-competent at reading and rearranging the letters of the genetic alphabet. Already, AI tech giant Nvidia has developed what Thomas calls \u201ca first-pass ChatGPT for virus and microbe design,\u201d and applications for its use are being found throughout life sciences, including medicine, agriculture, and the development of bioweapons.<\/p>\n<p>How would biosafety precautions for new synthetic organisms work, considering that the entire design system creating them is inscrutable? How can we adequately defend ourselves against the dangers of thousands of new AI-generated proteins when we are already doing an abysmal job of assessing the dangers of new chemicals?<\/p>\n<p>Research is advancing at warp speed, but oversight and regulation are moving at a snail\u2019s pace.<\/p>\n<p>Threats to the\u00a0<a class=\"external text\" href=\"https:\/\/www.cnn.com\/2023\/12\/14\/economy\/ai-danger-financial-system\/index.html\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>financial system<\/u><\/a>\u00a0from AI are just beginning to be understood. In December 2023, the U.S. 
Financial Stability Oversight Council (FSOC), composed of leading regulators across the government, classified AI as an \u201cemerging vulnerability.\u201d<\/p>\n<p>Because AI acts as a \u201cblack box\u201d that hides its internal operations, banks using it could find it harder \u201cto assess the system\u2019s conceptual soundness.\u201d According to a\u00a0<a class=\"external text\" href=\"https:\/\/www.cnn.com\/2023\/12\/14\/economy\/ai-danger-financial-system\/index.html\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>CNN article<\/u><\/a>, the FSOC regulators pointed out that AI \u201ccould produce and possibly mask biased or inaccurate results, [raising] worries about fair lending and other consumer protection issues.\u201d Could AI-driven stock and bond trading\u00a0<a class=\"external text\" href=\"https:\/\/money.usnews.com\/investing\/articles\/how-ai-could-spark-next-financial-crisis-gensler\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>tank securities markets<\/u><\/a>? We may not have to wait long to find out. Securities and Exchange Commission Chair Gary Gensler, in May 2023, spoke \u201cabout AI\u2019s potential to induce a [financial] crisis,\u201d according to a U.S. 
News\u00a0<a class=\"external text\" href=\"https:\/\/money.usnews.com\/investing\/articles\/how-ai-could-spark-next-financial-crisis-gensler\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>article<\/u><\/a>, calling it \u201ca potential systemic risk.\u201d<\/p>\n<p>Meanwhile, ChatGPT recently spent the better part of a day\u00a0<a class=\"external text\" href=\"https:\/\/arstechnica.com\/information-technology\/2024\/02\/chatgpt-alarms-users-by-spitting-out-shakespearean-nonsense-and-rambling\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>spewing bizarre nonsense<\/u><\/a>\u00a0in response to users\u2019 questions, and it often has \u201challucinations,\u201d episodes in which the system \u201cstarts to make up stuff\u2014stuff that is not [in line] with reality,\u201d said Jevin West, a professor at the University of Washington, according to a CNN\u00a0<a class=\"external text\" href=\"https:\/\/www.cnn.com\/2023\/08\/29\/tech\/ai-chatbot-hallucinations\/index.html\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>article<\/u><\/a>\u00a0that quoted him. What happens when AI starts hallucinating financial records and stock trades?<\/p>\n<p>Lethal\u00a0<a class=\"external text\" href=\"https:\/\/futureoflife.org\/project\/lethal-autonomous-weapons-systems\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>autonomous weapons<\/u><\/a>\u00a0are already being used on the battlefield. 
Add AI to these weapons, and whatever human accountability, moral judgment, and compassion still persist in warfare will tend to vanish.\u00a0<a class=\"external text\" href=\"https:\/\/www.thenation.com\/article\/world\/killer-robots-drone-warfare\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>Killer robots<\/u><\/a>\u00a0are already being tested in a spate of bloody new conflicts worldwide\u2014in Ukraine and Russia, Israel and Palestine, as well as in Yemen and elsewhere.<\/p>\n<p>It was obvious from the start that AI would worsen economic inequality. In January, the\u00a0<a class=\"external text\" href=\"https:\/\/www.imf.org\/en\/Blogs\/Articles\/2024\/01\/14\/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>IMF forecasted that<\/u><\/a>\u00a0AI would affect nearly 40 percent of jobs globally (around 60 percent in wealthy countries). Wages will be impacted, and jobs will be eliminated. These are undoubtedly underestimates since the technology\u2019s capability is constantly increasing.<\/p>\n<p>Overall, the result will be that people who are positioned to benefit from the technology will get wealthier (some spectacularly so), while most others will fall even further behind. More specifically,\u00a0<a class=\"external text\" href=\"https:\/\/www.techpolicy.press\/monopoly-power-is-the-elephant-in-the-room-in-the-ai-debate\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>immensely wealthy and powerful<\/u><\/a>\u00a0digital technology companies will grow their social and political clout far beyond already absurd levels.<\/p>\n<p>It is sometimes claimed that AI will help solve climate change by speeding up the development of low-carbon technologies. 
But AI\u2019s\u00a0<a class=\"external text\" href=\"https:\/\/www.scientificamerican.com\/article\/the-ai-boom-could-use-a-shocking-amount-of-electricity\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>energy usage<\/u><\/a>\u00a0could soon eclipse that of many smaller countries. And AI data centers also tend to gobble up\u00a0<a class=\"external text\" href=\"https:\/\/www.youtube.com\/watch?v=fAPusgiz4B8\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>land and water<\/u><\/a>.<\/p>\n<p>AI is even invading our love lives, as presaged in the 2013 movie \u201c<a class=\"external text\" href=\"https:\/\/www.imdb.com\/title\/tt1798709\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>Her<\/u><\/a>.\u201d While the internet has reshaped relationships via online dating, AI has the potential to replace human-to-human partnering with human-machine intimate relationships. Already,\u00a0<a class=\"external text\" href=\"https:\/\/replika.ai\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>Replika<\/u><\/a>\u00a0is being marketed as the \u201c<a class=\"external text\" href=\"https:\/\/theconversation.com\/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>AI companion who cares<\/u><\/a>\u201d\u2014offering to engage users in deeply personal conversations, including sexting. 
Sex\u00a0<a class=\"external text\" href=\"https:\/\/www.cosmopolitan.com\/uk\/love-sex\/sex\/a36480612\/sex-robots\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>robots<\/u><\/a>\u00a0are being developed,\u00a0<a class=\"external text\" href=\"https:\/\/www.theguardian.com\/technology\/2017\/jul\/05\/sex-robots-promise-revolutionary-service-but-also-risks-says-study\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>ostensibly<\/u><\/a>\u00a0for elderly and disabled folks, though the first customers seem to be wealthy men.<\/p>\n<p>Face-to-face human interactions are\u00a0<a class=\"external text\" href=\"https:\/\/www.hplusjournal.com\/home\/the-dangers-of-decreased-face-to-face-communication\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>becoming rarer<\/u><\/a>, and couples are reporting a\u00a0<a class=\"external text\" href=\"https:\/\/www.scientificamerican.com\/article\/people-have-been-having-less-sex-whether-theyre-teenagers-or-40-somethings\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>lower frequency of sexual intimacy<\/u><\/a>. With AI, these worrisome trends could grow exponentially. Soon, it\u2019ll just be you and your machines against the world.<\/p>\n<p>As the U.S. 
presidential election nears, the potential release of a spate of\u00a0<a class=\"external text\" href=\"https:\/\/www.wsj.com\/tech\/ai\/new-era-of-ai-deepfakes-complicates-2024-elections-aa529b9e\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>deepfake audio and video recordings<\/u><\/a>\u00a0could have the nation\u2019s democracy\u00a0<a class=\"external text\" href=\"https:\/\/www.nytimes.com\/2022\/09\/17\/us\/american-democracy-threats.html\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>hanging by a thread<\/u><\/a>. Did the candidate really say that? It will take a while to find out. But will the fact-check itself be AI-generated? India is experimenting with AI-generated political content in the run-up to its national elections, which are scheduled to take place in 2024, and the results are\u00a0<a class=\"external text\" href=\"https:\/\/www.aljazeera.com\/news\/2024\/2\/20\/deepfake-democracy-behind-the-ai-trickery-shaping-indias-2024-elections\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>weird, deceptive, and subversive<\/u><\/a>.<\/p>\n<p>A comprehensive look at the situation reveals that AI will likely accelerate all the negative trends currently threatening nature and humanity. But this indictment still fails to account for its ultimate ability to render humans, and perhaps all living things, obsolete.<\/p>\n<p>AI\u2019s threats aren\u2019t a series of easily fixable bugs. They are inevitable expressions of the technology\u2019s inherent nature\u2014its hidden inner workings and self-evolution of function. 
And these aren\u2019t trivial dangers; they are existential.<\/p>\n<p>The fact that some AI developers, who are the people most familiar with the technology, are its most\u00a0<a class=\"external text\" href=\"https:\/\/www.vox.com\/the-highlight\/23447596\/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>strident critics<\/u><\/a>\u00a0should tell us something. In fact, policymakers, AI experts, and journalists have issued a\u00a0<a class=\"external text\" href=\"https:\/\/www.safe.ai\/statement-on-ai-risk#open-letter\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>statement<\/u><\/a>\u00a0warning that \u201cmitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.\u201d<\/p>\n<h3>Don\u2019t Pause It, Stop It<\/h3>\n<p>Many AI-critical opinion pieces in the mainstream media call for a\u00a0<a class=\"external text\" href=\"https:\/\/time.com\/6295879\/ai-pause-is-humanitys-best-bet-for-preventing-extinction\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>pause<\/u><\/a>\u00a0in its development \u201cat a safe level.\u201d Some critics call for regulation of the technology\u2019s \u201cbad\u201d applications\u2014in weapons research, facial recognition, and disinformation. 
Indeed, European Union officials took a step in this direction in December 2023, reaching a provisional deal on the\u00a0<a class=\"external text\" href=\"https:\/\/www.bbc.com\/news\/world-europe-67668469\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>world\u2019s first comprehensive laws to regulate AI<\/u><\/a>.<\/p>\n<p>Whenever a new technology is introduced, the usual practice is to wait and see its positive and negative outcomes before implementing regulations. But if we wait until AI has developed further, we will\u00a0<a class=\"external text\" href=\"https:\/\/www.forbes.com\/sites\/forbestechcouncil\/2023\/11\/10\/are-we-ready-to-face-down-the-risk-of-ai-singularity\/?sh=22166773308d\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>no longer be in charge<\/u><\/a>. We may find it impossible to regain control of the technology we have created.<\/p>\n<p>The argument for a total AI ban arises from the technology\u2019s very nature\u2014its technological evolution involves acceleration to speeds that defy human control or accountability. A total ban is the solution that AI pioneer Eliezer Yudkowsky advised in his pivotal\u00a0<a class=\"external text\" href=\"https:\/\/time.com\/6266923\/ai-eliezer-yudkowsky-open-letter-not-enough\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>op-ed in TIME<\/u><\/a>:<\/p>\n<blockquote><p>\u201c[T]he most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. 
Not as in \u2018maybe possibly some remote chance,\u2019 but as in \u2018that is the obvious thing that would happen.\u2019\u201d<\/p><\/blockquote>\n<p>Yudkowsky goes on to\u00a0<a class=\"external text\" href=\"https:\/\/time.com\/6266923\/ai-eliezer-yudkowsky-open-letter-not-enough\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>explain<\/u><\/a>\u00a0that we are currently unable to imbue AI with caring or morality, so we will get AI that \u201cdoes not love you, nor does it hate you, and you are made of atoms it can use for something else.\u201d<\/p>\n<p>Underscoring and validating Yudkowsky\u2019s warning, a U.S. State Department-funded study published on March 11 declared that unregulated AI poses an \u201c<a class=\"external text\" href=\"https:\/\/time.com\/6898967\/ai-extinction-national-security-risks-report\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\">extinction-level threat<\/a>\u201d to humanity.<\/p>\n<p>To stop further use and development of this technology would require a global treaty\u2014an enormous hurdle to overcome. Shapers of the agreement would have to identify the key technological elements that make AI possible and ban research and development in those areas, anywhere and everywhere in the world.<\/p>\n<p>There are only a few historical precedents in which something like this has happened. A millennium ago, Chinese leaders shut down a\u00a0<a class=\"external text\" href=\"https:\/\/press.uchicago.edu\/ucp\/books\/book\/chicago\/P\/bo5975947.html\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>nascent industrial revolution<\/u><\/a>\u00a0based on coal and coal-fueled technologies (hereditary aristocrats feared that upstart industrialists would eventually take over political power). 
During the Tokugawa Shogunate period (1603-1867) in Japan, most guns were banned,\u00a0<a class=\"external text\" href=\"https:\/\/www.businessinsider.com\/gun-control-how-japan-has-almost-completely-eliminated-gun-deaths-2017-10\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>almost completely eliminating gun deaths<\/u><\/a>. And in the 1980s, world leaders convened at the United Nations to\u00a0<a class=\"external text\" href=\"https:\/\/rapidtransition.org\/stories\/back-from-the-brink-how-the-world-rapidly-sealed-a-deal-to-save-the-ozone-layer\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>ban most CFC chemicals<\/u><\/a>\u00a0to preserve the planet\u2019s atmospheric ozone layer.<\/p>\n<p>The banning of AI would likely present a greater challenge than was faced in any of these three historical instances. But if it\u2019s going to happen, it has to happen now.<\/p>\n<p>Suppose a movement to ban AI were to succeed. In that case, it might break our collective fever dream of neoliberal capitalism so that people and their governments finally recognize the need to set limits. This should already have happened with regard to the climate crisis, which demands that we strictly limit fossil fuel extraction and energy usage. 
If the AI threat, being so acute, compels us to set limits on ourselves, perhaps it could spark the institutional and intergovernmental courage needed to act on\u00a0<a class=\"external text\" href=\"https:\/\/www.postcarbon.org\/publications\/welcome-to-the-great-unraveling\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\"><u>other existential threats<\/u><\/a>.<\/p>\n<p><em>\u201c<a class=\"external text\" href=\"https:\/\/observatory.wiki\/Why_Artificial_Intelligence_Must_Be_Stopped_Now\" target=\"_blank\" rel=\"nofollow noreferrer noopener\">Why Artificial Intelligence Must Be Stopped Now<\/a>\u201d by\u00a0<a title=\"Richard Heinberg\" href=\"https:\/\/observatory.wiki\/Richard_Heinberg\">Richard Heinberg<\/a>\u00a0is licensed by\u00a0<a class=\"external text\" href=\"https:\/\/observatory.wiki\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\">the Observatory<\/a>\u00a0under a\u00a0<a class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/4.0\/\" target=\"_blank\" rel=\"nofollow noreferrer noopener\">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)<\/a>. For permissions requests beyond the scope of this license, please see\u00a0<a class=\"external text\" href=\"https:\/\/observatory.wiki\/Project:Content_reuse_and_reprint_rights\" target=\"_blank\" rel=\"nofollow noreferrer noopener\">Observatory.wiki\u2019s Reuse and Reprint Rights guidance<\/a>.\u00a0<span class=\"navigation-not-searchable\"><span class=\"text-nowrap\">Last edited: March 20, 2024<\/span><\/span><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>With AI, humanity is outsourcing its executive control of nearly every key sector \u2014finance, warfare, medicine, and agriculture\u2014to algorithms with no moral capacity. 
If you are wondering what could go wrong, the answer is plenty.<\/p>\n","protected":false},"author":128238,"featured_media":3500288,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[79720,213535],"tags":[],"class_list":["post-3500284","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-society","category-society-featured"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/posts\/3500284","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/users\/128238"}],"replies":[{"embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/comments?post=3500284"}],"version-history":[{"count":0,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/posts\/3500284\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/media\/3500288"}],"wp:attachment":[{"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/media?parent=3500284"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/categories?post=3500284"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.resilience.org\/wp-json\/wp\/v2\/tags?post=3500284"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}