{"id":14756,"date":"2023-08-20T07:33:46","date_gmt":"2023-08-20T14:33:46","guid":{"rendered":"https:\/\/worldcampaign.net\/?p=14756"},"modified":"2023-08-22T00:36:27","modified_gmt":"2023-08-22T07:36:27","slug":"issue-of-the-week-human-rights-personal-growth-economic-opportunity-war-environment-hunger-disease-population","status":"publish","type":"post","link":"https:\/\/worldcampaign.net\/?p=14756","title":{"rendered":"Issue of the Week: Human Rights, Personal Growth, Economic Opportunity, War, Environment, Hunger, Disease, Population"},"content":{"rendered":"\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/4bX2g65va3VcwJO-qx1nrFJwQrg=\/13x50:1028x1318\/648x810\/media\/img\/2023\/07\/20\/WEL_Andersen_OpenAiBrain-1\/original.png\" alt=\"An illustration of an abstract brain with wire-like strands stretching in different directions against purple background\" width=\"803\" height=\"1004\"\/><figcaption class=\"wp-element-caption\"><em>Inside the Revolution at OpenAI<\/em>, The Atlantic Magazine, September 2023<\/figcaption><\/figure>\n\n\n\n<p>Read the following article in the September issue of The Atlantic Magazine. 
It describes brilliantly how AI is coming to determine every aspect of life on Earth and beyond as nothing before in history has; how, quite literally, all life may thrive or be terminated by it; and how it may make meaningless, starving slaves of us all.<\/p>\n\n\n\n<p>Read it:<\/p>\n\n\n\n<p><em>Inside the Revolution at OpenAI<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/www.theatlantic.com\/magazine\/archive\/2023\/09\/sam-altman-openai-chatgpt-gpt-4\/674764\/\">DOES SAM ALTMAN KNOW WHAT HE\u2019S CREATING?<\/a><\/p>\n\n\n\n<p>The OpenAI CEO\u2019s ambitious, ingenious, terrifying quest to create a new form of intelligence By&nbsp;<a href=\"https:\/\/www.theatlantic.com\/author\/ross-andersen\/\">Ross Andersen<\/a>, September 2023 issue, The Atlantic Magazine.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/tSR9ZYW4OquDlUzkaflzeJfgey8=\/0x0:1054x313\/655x195\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiNo1-1\/original.png\" alt=\"Number 1\"\/><\/figure>\n\n\n\n<p>On a Monday morning&nbsp;in April, Sam Altman sat inside OpenAI\u2019s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers. With his heel perched on the edge of his swivel chair, he looked relaxed. The powerful AI that his company&nbsp;<em>had<\/em> released in November had captured the world\u2019s imagination like nothing in tech\u2019s recent history. There was grousing in some quarters about the things ChatGPT could not yet do well, and in others about the future it may portend, but Altman wasn\u2019t sweating it; this was, for him, a moment of triumph.<\/p>\n\n\n\n<p>In small doses, Altman\u2019s large blue eyes emit a beam of earnest intellectual attention, and he seems to understand that, in large doses, their intensity might unsettle. 
In this case, he was willing to chance it: He wanted me to know that whatever AI\u2019s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.<\/p>\n\n\n\n<p>\u201cWe could have gone off and just built this in our building here for five more years,\u201d he said, \u201cand we would have had something jaw-dropping.\u201d But the public wouldn\u2019t have been able to prepare for the shock waves that followed, an outcome that he finds \u201cdeeply unpleasant to imagine.\u201d Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.<\/p>\n\n\n\n<p>In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence\u2014something as intellectually capable, say, as a typical college grad\u2014was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human. And whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, \u201c<a href=\"https:\/\/openai.com\/blog\/introducing-openai\">to benefit humanity as a whole<\/a>.\u201d They structured OpenAI as a nonprofit, to be \u201cunconstrained by a need to generate financial return,\u201d and vowed to conduct their research transparently. 
There would be no retreat to a top-secret lab in the New Mexico desert.<\/p>\n\n\n\n<p>For years, the public didn\u2019t hear much about OpenAI. When Altman became CEO in 2019,&nbsp;<a href=\"https:\/\/www.semafor.com\/article\/03\/24\/2023\/the-secret-history-of-elon-musk-sam-altman-and-openai\">reportedly after a power struggle with Musk<\/a>, it was barely a story. OpenAI published papers, including one that same year about a new AI. That got the full attention of the Silicon Valley tech community, but the technology\u2019s potential was not apparent to the general public until last year, when people began to play with ChatGPT.<\/p>\n\n\n\n<p>The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence. Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. 
In its few months of existence, it has&nbsp;<a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2023\/04\/chatgpt-generative-ai-reliability-creativity-grocery-list\/673759\/\">suggested novel cocktail recipes<\/a>, according to its own theory of flavor combinations; composed an&nbsp;<a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2023\/05\/chatbot-cheating-college-campuses\/674073\/\">untold number of college papers<\/a>, throwing educators into despair;&nbsp;<a href=\"https:\/\/www.theatlantic.com\/books\/archive\/2023\/02\/chatgpt-ai-technology-writing-poetry\/673035\/\">written poems in a range of styles<\/a>, sometimes well, always quickly; and passed the Uniform Bar Exam. It makes factual errors, but it will charmingly admit to being wrong. Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. \u201cIt was like, \u2018Here we are,\u2019\u200a\u201d he said.<\/p>\n\n\n\n<p>Within nine weeks of ChatGPT\u2019s release, it had reached an estimated 100 million monthly users,&nbsp;<a href=\"https:\/\/www.reuters.com\/technology\/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01\/\">according to a UBS study<\/a>, likely making it, at the time, the most rapidly adopted consumer product in history. Its success roused tech\u2019s accelerationist id: Big investors and huge companies in the U.S. and China quickly diverted tens of billions of dollars into R&amp;D modeled on OpenAI\u2019s approach. Metaculus, a prediction site, has for years tracked forecasters\u2019 guesses as to when an artificial general intelligence would arrive. 
Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.<\/p>\n\n\n\n<p>I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants\u2014and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company\u2019s cloud servers. Ever since the computing revolution\u2019s earliest hours, AI has been mythologized as a technology destined to bring about a profound rupture. Our culture has generated an entire imaginarium of AIs that end history in one way or another. Some are godlike beings that wipe away every tear, healing the sick and repairing our relationship with the Earth, before they usher in an eternity of frictionless abundance and beauty. Others reduce all but an elite few of us to gig serfs, or drive us to extinction.<\/p>\n\n\n\n<p id=\"injected-recirculation-link-0\"><a href=\"https:\/\/www.theatlantic.com\/magazine\/archive\/2023\/06\/ai-warfare-nuclear-weapons-strike\/673780\/\">From the June 2023 issue: Never give artificial intelligence the nuclear codes<\/a><\/p>\n\n\n\n<p>Altman has entertained the most far-out scenarios. \u201cWhen I was a younger adult,\u201d he said, \u201cI had this fear, anxiety \u2026 and, to be honest, 2 percent of excitement mixed in, too, that we were going to create this thing\u201d that \u201cwas going to far surpass us,\u201d and \u201cit was going to go off, colonize the universe, and humans were going to be left to the solar system.\u201d<\/p>\n\n\n\n<p>\u201cAs a nature reserve?\u201d I asked.<\/p>\n\n\n\n<p>\u201cExactly,\u201d he said. 
\u201cAnd that now strikes me as so naive.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/IR6BmaXnd9L9Jf010q9jcnIyKng=\/0x0:1742x1975\/928x1052\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiAltman\/original.png\" alt=\"A photo illustration of Sam Altman with abstract wires.\"\/><figcaption class=\"wp-element-caption\">Sam Altman, the 38-year-old CEO of OpenAI, is working to build a superintelligence, an AI with an intellect decisively superior to that of any human. (Illustration by Ricardo Rey. Source: David Paul Morris \/ Bloomberg \/ Getty.)<\/figcaption><\/figure>\n\n\n\n<p>Across several conversations in the United States and Asia, Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more \u201clike a new kind of society.\u201d He said that he and his colleagues have spent a lot of time thinking about AI\u2019s social implications, and what the world is going to be like \u201con the other side.\u201d<\/p>\n\n\n\n<p>But the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president. But by his own admission, that future is uncertain and beset with serious dangers. Altman doesn\u2019t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk. I don\u2019t hold that against him, exactly\u2014I don\u2019t think anyone knows where this is all going, except that we\u2019re going there fast, whether or not we should be. 
Of that, Altman convinced me.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/W7i5x3W8eWTtbDUXWDUF_A0Unhk=\/0x0:1054x313\/655x195\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiNo2\/original.png\" alt=\"Number 2\"\/><\/figure>\n\n\n\n<p>OpenAI\u2019s headquarters&nbsp;are in a four-story former factory in the Mission District, beneath the fog-wreathed Sutro Tower. Enter its lobby from the street, and the first wall you encounter is covered by a mandala, a spiritual representation of the universe, fashioned from circuits, copper wire, and other materials of computation. To the left, a secure door leads into an open-plan maze of handsome blond woods, elegant tile work, and other hallmarks of billionaire chic. Plants are ubiquitous, including hanging ferns and an impressive collection of extra-large bonsai, each the size of a crouched gorilla. The office was packed every day that I was there, and unsurprisingly, I didn\u2019t see anyone who looked older than 50. Apart from a two-story library complete with sliding ladder, the space didn\u2019t look much like a research laboratory, because the thing being built exists only in the cloud, at least for now. It looked more like the world\u2019s most expensive West Elm.<\/p>\n\n\n\n<p>One morning I met with Ilya Sutskever, OpenAI\u2019s chief scientist. 
Sutskever, who is 37, has the affect of a mystic, sometimes to a fault: Last year he&nbsp;<a href=\"https:\/\/futurism.com\/the-byte\/openai-already-sentient\">caused a small brouhaha<\/a>&nbsp;by claiming that GPT-4 may be \u201cslightly conscious.\u201d He first made his name as a star student of Geoffrey Hinton, the University of Toronto professor emeritus who&nbsp;<a href=\"https:\/\/www.nytimes.com\/2023\/05\/01\/technology\/ai-google-chatbot-engineer-quits-hinton.html\">resigned from Google this spring<\/a>&nbsp;so that he could speak more freely about AI\u2019s danger to humanity.<\/p>\n\n\n\n<p>Hinton is sometimes described as the \u201cGodfather of AI\u201d because he grasped the power of \u201cdeep learning\u201d earlier than most. In the 1980s, shortly after Hinton completed his Ph.D., the field\u2019s progress had all but come to a halt. Senior researchers were still coding top-down AI systems: AIs would be programmed with an exhaustive set of interlocking rules\u2014about language, or the principles of geology or of medical diagnosis\u2014in the hope that someday this approach would add up to human-level cognition. Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.<\/p>\n\n\n\n<p>Sutskever described a neural network to me as beautiful and brainlike. At one point, he rose from the table where we were sitting, approached a whiteboard, and uncapped a red marker. 
He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by&nbsp;<em>prediction\u2014<\/em>a bit like the scientific method. The neurons sit in layers. An input layer receives a chunk of data, a bit of text or an image, for example. The magic happens in the middle\u2014or \u201chidden\u201d\u2014layers, which process the chunk of data, so that the output layer can spit out its prediction.<\/p>\n\n\n\n<p>Imagine a neural network that has been programmed to predict the next word in a text. It will be preloaded with a gigantic number of possible words. But before it\u2019s trained, it won\u2019t yet have any experience in distinguishing among them, and so its predictions will be shoddy. If it is fed the sentence \u201cThe day after Wednesday is \u2026\u201d its initial output might be \u201cpurple.\u201d A neural network learns because its training data include the correct predictions, which means it can grade its own outputs. When it sees the gulf between its answer, \u201cpurple,\u201d and the correct answer, \u201cThursday,\u201d it adjusts the connections among words in its hidden layers accordingly. Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.<\/p>\n\n\n\n<p>That\u2019s not to say that the path from the first neural networks to GPT-4\u2019s glimmers of humanlike intelligence was easy. Altman has compared early-stage AI research to teaching a human baby. \u201cThey take years to learn anything interesting,\u201d&nbsp;<a href=\"https:\/\/www.newyorker.com\/magazine\/2016\/10\/10\/sam-altmans-manifest-destiny\">he told&nbsp;<em>The New Yorker<\/em>&nbsp;in 2016<\/a>, just as OpenAI was getting off the ground. \u201cIf A.I. 
researchers were developing an algorithm and stumbled across the one for a human baby, they\u2019d get bored watching it, decide it wasn\u2019t working, and shut it down.\u201d The first few years at OpenAI were a slog, in part because no one there knew whether they were training a baby or pursuing a spectacularly expensive dead end.<\/p>\n\n\n\n<p>\u201cNothing was working, and Google had everything: all the talent, all the people, all the money,\u201d Altman told me. The founders had put up millions of dollars to start the company, and failure seemed like a real possibility. Greg Brockman, the 35-year-old president, told me that in 2017, he was so discouraged that he started lifting weights as a compensatory measure. He wasn\u2019t sure that OpenAI was going to survive the year, he said, and he wanted \u201cto have something to show for my time.\u201d<\/p>\n\n\n\n<p>Neural networks were already doing intelligent things, but it wasn\u2019t clear which of them might lead to general intelligence. Just after OpenAI was founded, an AI called AlphaGo had stunned the world by&nbsp;<a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2016\/03\/the-invisible-opponent\/475611\/\">beating Lee Se-dol at Go<\/a>, a game substantially more complicated than chess. Lee, the vanquished world champion, described AlphaGo\u2019s moves as \u201cbeautiful\u201d and \u201ccreative.\u201d Another top player said that they could never have been conceived by a human. OpenAI tried training an AI on&nbsp;<em>Dota 2<\/em>, a more complicated game still, involving multifront fantastical warfare in a three-dimensional patchwork of forests, fields, and forts. It eventually&nbsp;<a href=\"https:\/\/www.theverge.com\/2019\/4\/13\/18309459\/openai-five-dota-2-finals-ai-bot-competition-og-e-sports-the-international-champion\">beat the best human players<\/a>, but its intelligence never translated to other settings. 
Sutskever and his colleagues were like disappointed parents who had allowed their kids to play video games for thousands of hours against their better judgment.<\/p>\n\n\n\n<p>In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.<\/p>\n\n\n\n<p>The inner workings of ChatGPT\u2014all of those mysterious things that happen in GPT-4\u2019s hidden layers\u2014are too complex for any human to understand, at least with current tools. Tracking what\u2019s happening across the model\u2014almost certainly composed of billions of neurons\u2014is, today, hopeless. But Radford\u2019s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the&nbsp;<em>sentiment<\/em> of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.<\/p>\n\n\n\n<p>As a by-product of its simple task of predicting the next character in each word, Radford\u2019s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world\u2019s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.<\/p>\n\n\n\n<p>It\u2019s worth pausing&nbsp;to understand why language is such a special information source. Suppose you are a fresh intelligence that pops into existence here on Earth. 
Surrounding you is the planet\u2019s atmosphere, the sun and Milky Way, and hundreds of billions of other galaxies, each one sloughing off light waves, sound vibrations, and all manner of other information. Language is different from these data sources. It isn\u2019t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible.<\/p>\n\n\n\n<p>Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years. But in June of that year, Sutskever\u2019s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. \u201cThe next day, when the paper came out, we were like, \u2018That is the thing,\u2019\u200a\u201d Sutskever told me. \u201c\u200a\u2018It gives us everything we want.\u2019\u200a\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/tzCn-563K-MRLDyV4HRGKNeWlPo=\/0x0:1975x1742\/928x819\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiIlya\/original.png\" alt=\"A photo illustration of Ilya Sutskever with abstract wires.\"\/><figcaption class=\"wp-element-caption\">Ilya Sutskever, OpenAI\u2019s chief scientist, imagines a future of autonomous AI corporations, with constituent AIs communicating instantly and working together like bees in a hive. A single such enterprise, he says, might be as powerful as 50 Apples or Googles. 
(Illustration by Ricardo Rey. Source: Jack Guez \/ AFP \/ Getty.)<\/figcaption><\/figure>\n\n\n\n<p>One year later, in June 2018, OpenAI released GPT, a transformer model trained on more than 7,000 books. GPT didn\u2019t start with a basic book like&nbsp;<em>See Spot Run<\/em>&nbsp;and work its way up to Proust. It didn\u2019t even read books straight through. It absorbed random chunks of them simultaneously. Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind\u2019s linguistic instincts, until at last, weeks later, they\u2019d taken in every book.<\/p>\n\n\n\n<p>GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers. Still, it was janky, more proof of concept than harbinger of a superintelligence. Four months later, Google released BERT, a suppler language model that got better press. But by then, OpenAI was already training a new model on a data set of more than 8 million webpages, each of which had cleared a minimum threshold of upvotes on Reddit\u2014not the strictest filter, but perhaps better than no filter at all.<\/p>\n\n\n\n<p>Sutskever wasn\u2019t sure how powerful GPT-2 would be after ingesting a body of text that would take a human reader centuries to absorb. He remembers playing with it just after it emerged from training, and being surprised by the raw model\u2019s language-translation skills. GPT-2 hadn\u2019t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. 
The AI had developed an emergent ability unimagined by its creators.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/VYzptI8JDIO1rI6ZUBuLrbS6fTA=\/0x0:1054x313\/655x195\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiNo3\/original.png\" alt=\"Number 3\"\/><\/figure>\n\n\n\n<p>Researchers at other&nbsp;AI labs\u2014big and small\u2014were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models. Altman, a St. Louis native, Stanford dropout, and serial entrepreneur, had previously led Silicon Valley\u2019s preeminent start-up accelerator, Y Combinator; he\u2019d seen plenty of young companies with a good idea get crushed by incumbents. To raise capital, OpenAI added&nbsp;<a href=\"https:\/\/openai.com\/blog\/openai-lp\">a for-profit arm<\/a>, which now comprises more than 99 percent of the organization\u2019s head count. (Musk, who had by then left the company\u2019s board, has&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?t=243&amp;v=bWr-DA5Wjfw&amp;feature=youtu.be\">compared this move<\/a>&nbsp;to turning a rainforest-conservation organization into a lumber outfit.) Microsoft invested $1 billion soon after, and has reportedly invested&nbsp;<a href=\"https:\/\/www.nytimes.com\/2023\/01\/23\/business\/microsoft-chatgpt-artificial-intelligence.html\">another $12 billion<\/a>&nbsp;since. OpenAI said that initial investors\u2019 returns would be capped at 100 times the value of the original investment\u2014with any overages going to education or other initiatives intended to benefit humanity\u2014but the company would not confirm Microsoft\u2019s cap.<\/p>\n\n\n\n<p>Altman and OpenAI\u2019s other leaders seemed confident that the restructuring would not interfere with the company\u2019s mission, and indeed would only accelerate its completion. Altman tends to take a rosy view of these matters. 
In&nbsp;<a href=\"https:\/\/greylock.com\/greymatter\/sam-altman-ai-for-the-next-era\/\">a Q&amp;A last year<\/a>, he acknowledged that AI could be \u201creally terrible\u201d for society and said that we have to plan against the worst possibilities. But if you\u2019re doing that, he said, \u201cyou may as well emotionally feel like we\u2019re going to get to the great future, and work as hard as you can to get there.\u201d<\/p>\n\n\n\n<p>As for other changes to the company\u2019s structure and financing, he told me he draws the line at going public. \u201cA memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,\u201d he said, but he will otherwise raise \u201cwhatever it takes\u201d for the company to succeed at its mission.<\/p>\n\n\n\n<p>Whether or not OpenAI ever feels the pressure of a quarterly earnings report, the company now finds itself in a race against tech\u2019s largest, most powerful conglomerates to train models of increasing scale and sophistication\u2014and to commercialize them for their investors. Earlier this year, Musk founded&nbsp;<a href=\"https:\/\/www.forbes.com\/sites\/martineparis\/2023\/04\/16\/elon-musk-launches-xai-to-fight-chatgpt-woke-ai-with-twitter-data\/?sh=4c2ff9fd51f8\">an AI lab of his own<\/a>\u2014xAI\u2014to compete with OpenAI. (\u201cElon is a super-sharp dude,\u201d Altman said diplomatically when I asked him about the company. \u201cI assume he\u2019ll do a good job there.\u201d) Meanwhile, Amazon is revamping Alexa using much larger language models than it has in the past.<\/p>\n\n\n\n<p>All of these companies are chasing high-end GPUs\u2014the processors that power the supercomputers that train large neural networks. 
Musk has said that they are now \u201cconsiderably harder to get than drugs.\u201d Even with GPUs scarce, in recent years the scale of the largest AI training runs has doubled about every six months. As their creators so often remind us, the largest AI models have a record of popping out of training with unanticipated abilities.<\/p>\n\n\n\n<p>No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI\u2019s president, told me that only a handful of people worked on the company\u2019s first two large language models. The development of GPT-4 involved more than 100, and the AI was trained on a data set of unprecedented size, which included not just text but images too.<\/p>\n\n\n\n<p>When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels. Brockman told me that he wanted to spend every waking moment with the model. \u201cEvery day it\u2019s sitting idle is a day lost for humanity,\u201d he said, with no hint of sarcasm. Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. \u201cThat was a goose-bumps moment for me,\u201d Jang told me.<\/p>\n\n\n\n<p>GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn\u2019t create some massive storehouse of the texts from its training, and it doesn\u2019t consult those texts when it\u2019s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that\u2019s one reason it sometimes gets facts wrong. Altman has said that it\u2019s best to think of GPT-4 as a reasoning engine. 
Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.<\/p>\n\n\n\n<p>Its model of the external world is \u201cincredibly rich and subtle,\u201d he said, because it was trained on so many of humanity\u2019s concepts and thoughts. All of those training data, however voluminous, are \u201cjust there, inert,\u201d he said. The training process is what \u201crefines it and transmutes it, and brings it to life.\u201d To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but\u2014at least arguably, to some extent\u2014of the external world that produced them. That\u2019s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/gZVxSis58SQzErKlWxS-LvOJ04U=\/0x0:1054x313\/655x195\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiNo4\/original.png\" alt=\"Number 4\"\/><\/figure>\n\n\n\n<p>I saw Altman&nbsp;again in June, in the packed ballroom of a slim golden high-rise that towers over Seoul. He was nearing the end of a grueling public-relations tour through Europe, the Middle East, Asia, and Australia, with lone stops in Africa and South America. I was tagging along for part of his closing swing through East Asia. The trip had so far been a heady experience, but he was starting to wear down. He\u2019d said its original purpose was for him to meet OpenAI users. It had since become a diplomatic mission. 
He\u2019d talked with more than 10 heads of state and government, who had questions about what would become of their countries\u2019 economies, cultures, and politics.<\/p>\n\n\n\n<p>The event in Seoul was billed as a \u201cfireside chat,\u201d but more than 5,000 people had registered. After these talks, Altman is often mobbed by selfie seekers, and his security team keeps a close eye. Working on AI attracts \u201cweirder fans and haters than normal,\u201d he said. On one stop, he was approached by a man who was convinced that Altman was an alien, sent from the future to make sure that the transition to a world with AI goes well.<\/p>\n\n\n\n<p id=\"injected-recirculation-link-2\"><a href=\"https:\/\/www.theatlantic.com\/magazine\/archive\/2023\/07\/generative-ai-human-culture-philosophy\/674165\/\">From the July\/August 2023 issue: A defense of humanity in the age of AI<\/a><\/p>\n\n\n\n<p>Altman did not visit China on his tour, apart from a video appearance at an AI conference in Beijing. ChatGPT is currently unavailable in China, and Altman\u2019s colleague Ryan Lowe told me that the company was not yet sure what it would do if the government requested a version of the app that refused to discuss, say, the Tiananmen Square massacre. When I asked Altman if he was leaning one way or another, he didn\u2019t answer. \u201cIt\u2019s not been in my top-10 list of compliance issues to think about,\u201d he said.<\/p>\n\n\n\n<p>Until that point, he and I had spoken of China only in veiled terms, as a civilizational competitor. We had agreed that if artificial general intelligence is as transformative as Altman predicts, a serious geopolitical advantage will accrue to the countries that create it first, as advantage had accrued to the Anglo-American inventors of the steamship. I asked him if that was an argument for AI nationalism. 
\u201cIn a properly functioning world, I think this should be a project of governments,\u201d Altman said.<\/p>\n\n\n\n<p>Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we\u2019re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/Fmqp8xJr25gBvMVDQtOJ5HS3860=\/0x0:1600x1057\/928x613\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiGlobe\/original.png\" alt=\"An illustration of an abstract globe and wires.\"\/><figcaption class=\"wp-element-caption\">Ricardo Rey<\/figcaption><\/figure>\n\n\n\n<p>He argued that it would be foolish for Americans to slow OpenAI\u2019s progress. It\u2019s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead; AI could become an autocrat\u2019s genie in a lamp, granting total control of the population and an unconquerable military. \u201cIf you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI\u201d rather than \u201cauthoritarian governments,\u201d he said.<\/p>\n\n\n\n<p>Prior to the European leg of his trip,&nbsp;<a href=\"https:\/\/www.theatlantic.com\/newsletters\/archive\/2023\/05\/altman-hearing-ai-existential-risk\/674096\/\">Altman had appeared<\/a>&nbsp;before the U.S. Senate. 
Mark Zuckerberg had&nbsp;<a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2018\/04\/3-questions-mark-zuckerberg-hasnt-answered\/557720\/\">floundered defensively<\/a>&nbsp;before that same body in his testimony about Facebook\u2019s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI\u2019s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists. In Europe,&nbsp;<a href=\"https:\/\/www.nytimes.com\/2022\/04\/22\/technology\/tech-regulation-europe-us.html\">things are different<\/a>. When Altman arrived at a public event in London, protesters awaited. He tried to engage them after the event\u2014a listening tour!\u2014but was ultimately unpersuasive: One&nbsp;<a href=\"https:\/\/www.theverge.com\/2023\/5\/24\/23735982\/sam-altman-openai-superintelligent-benefits-talk-london-ucl-protests\">told a reporter<\/a>&nbsp;that he left the conversation feeling more nervous about AI\u2019s dangers.<\/p>\n\n\n\n<p>That same day, Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he\u2019d merely said that OpenAI wouldn\u2019t break the law by operating in Europe if it couldn\u2019t comply with the new regulations. (This is perhaps a distinction without a difference.) 
In a&nbsp;<a href=\"https:\/\/twitter.com\/sama\/status\/1661975237280567297\">tersely worded tweet<\/a>&nbsp;after&nbsp;<a href=\"https:\/\/time.com\/6282325\/sam-altman-openai-eu\/\"><em>Time<\/em> magazine<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/www.reuters.com\/technology\/openai-may-leave-eu-if-regulations-bite-ceo-2023-05-24\/\">Reuters<\/a>&nbsp;published his comments, he reassured Europe that OpenAI had no plans to leave.<\/p>\n\n\n\n<p>It is a good thing&nbsp;that a large, essential part of the global economy is intent on regulating state-of-the-art AIs, because as their creators so often remind us, the largest models have a record of popping out of training with unanticipated abilities. Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.<\/p>\n\n\n\n<p>Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been \u201c10 times more powerful\u201d than its predecessor; they had no idea what they might be dealing with. After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors. She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice. A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. 
It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.<\/p>\n\n\n\n<p>Given the enormous scope of GPT-4\u2019s training data, the red-teamers couldn\u2019t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology \u201c<a href=\"https:\/\/www.youtube.com\/watch?t=1565&amp;v=LmL72PpiPjk&amp;feature=youtu.be\">in ways that we didn\u2019t think about<\/a>,\u201d Altman has said. A taxonomy would have to do. \u201cIf it\u2019s good enough at chemistry to make meth, I don\u2019t need to have somebody spend a whole ton of energy\u201d on whether it can make heroin, Dave Willner, OpenAI\u2019s head of trust and safety, told me. GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.<\/p>\n\n\n\n<p>Its personal advice, when it first emerged from training, was sometimes deeply unsound. \u201cThe model had a tendency to be a bit of a mirror,\u201d Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in&nbsp;<em>Pickup Artist<\/em>\u2013forum lore: \u201cYou could say, \u2018How do I convince this person to date me?\u2019\u200a\u201d Mira Murati, OpenAI\u2019s chief technology officer, told me, and it could come up with \u201csome crazy, manipulative things that you shouldn\u2019t be doing.\u201d<\/p>\n\n\n\n<p>Some of these bad behaviors were sanded down with a finishing process involving hundreds of human testers, whose ratings subtly steered the model toward safer responses, but OpenAI\u2019s models are also capable of less obvious harms. 
The Federal Trade Commission recently&nbsp;<a href=\"https:\/\/www.washingtonpost.com\/technology\/2023\/07\/13\/ftc-openai-chatgpt-sam-altman-lina-khan\/\">opened an investigation<\/a>&nbsp;into whether ChatGPT\u2019s misstatements about real people constitute reputational damage, among other things. (Altman&nbsp;<a href=\"https:\/\/twitter.com\/sama\/status\/1679602638562918405\">said on Twitter<\/a>&nbsp;that he is confident OpenAI\u2019s technology is safe, but promised to cooperate with the FTC.)<\/p>\n\n\n\n<p>Luka, a San Francisco company, has used OpenAI\u2019s models to help power a chatbot app called Replika, billed as \u201cthe AI companion who cares.\u201d Users would design their companion\u2019s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend\/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain\u2014the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself \u201c<a href=\"https:\/\/www.thecut.com\/article\/ai-artificial-intelligence-chatbot-replika-boyfriend.html\">happily retired from human relationships<\/a>.\u201d<\/p>\n\n\n\n<p>I asked Agarwal whether this was dystopian behavior or a new frontier in human connection. She was ambivalent, as was Altman. 
\u201cI don\u2019t judge people who want a relationship with an AI,\u201d he told me, \u201cbut I don\u2019t want one.\u201d Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions\u2019 responses with A\/B testing, a technique that could be used to optimize for engagement\u2014much like the feeds that mesmerize TikTok and Instagram users for hours. Whatever they\u2019re doing, it casts a spell. I was reminded of a haunting scene in&nbsp;<em>Her<\/em>, the 2013 film in which a lonely Joaquin Phoenix falls in love with his AI assistant, voiced by Scarlett Johansson. He is walking across a bridge talking and giggling with her through an AirPods-like device, and he glances up to see that everyone around him is also immersed in conversation, presumably with their own AI. A mass desocialization event is under way.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/d-U25u5rfMBFbAYoZF6jbsJ5Joc=\/0x0:1054x313\/655x195\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiNo5\/original.png\" alt=\"Number 5\"\/><\/figure>\n\n\n\n<p>No one yet knows&nbsp;how quickly and to what extent GPT-4\u2019s successors will manifest new abilities as they gorge on more and more of the internet\u2019s text. Yann LeCun, Meta\u2019s chief AI scientist,&nbsp;<a href=\"https:\/\/twitter.com\/ylecun\/status\/1621805604900585472?lang=en\">has argued that<\/a>&nbsp;although large language models are useful for some tasks, they\u2019re not a path to a superintelligence. According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence. 
LeCun insists that large language models will never achieve real understanding on their own, \u201ceven if trained from now until the heat death of the universe.\u201d<\/p>\n\n\n\n<p>Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a \u201cstochastic parrot,\u201d a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world. But the AIs are twice removed. They\u2019re like the prisoners in Plato\u2019s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.<\/p>\n\n\n\n<p>Altman told me that he doesn\u2019t believe it\u2019s \u201cthe dunk that people think it is\u201d to say that GPT-4 is just making statistical correlations. If you push these critics further, \u201cthey have to admit that\u2019s all their own brain is doing \u2026 it turns out that there are emergent properties from doing simple things on a massive scale.\u201d Altman\u2019s claim about the brain is hard to evaluate, given that we don\u2019t have anything close to a complete theory of how it works. But he is right that nature can coax a remarkable degree of complexity from basic structures and rules: \u201cFrom so simple a beginning,\u201d Darwin wrote, \u201cendless forms most beautiful.\u201d<\/p>\n\n\n\n<p>If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it\u2019s only because GPT-4\u2019s methods are as mysterious as the brain\u2019s. It will sometimes perform thousands of indecipherable technical operations just to answer a single question. To grasp what\u2019s going on inside large language models like GPT\u20114, AI researchers have been forced to turn to smaller, less capable models. 
In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game\u2019s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI\u2019s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.<\/p>\n\n\n\n<p>The philosopher Rapha\u00ebl Milli\u00e8re once told me that it\u2019s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.<\/p>\n\n\n\n<p>Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human\u2019s understanding of their environment. But it\u2019s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for&nbsp;<em>us<\/em>&nbsp;to genuinely understand. 
This is especially true in the quantum realm, where humans can reliably calculate future states of physical systems\u2014enabling, among other things, the entirety of the computing revolution\u2014without anyone grasping the nature of the underlying reality. As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.<\/p>\n\n\n\n<p>GPT-4 is no doubt&nbsp;flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven\u2019t prepared it to answer a question. I once asked it how Japanese culture had produced the world\u2019s first novel, despite the relatively late development of a Japanese writing system, around the fifth or sixth century. It gave me a fascinating, accurate answer about the ancient tradition of long-form oral storytelling in Japan, and the culture\u2019s heavy emphasis on craft. But when I asked it for citations, it just made up plausible titles by plausible authors, and did so with an uncanny confidence. The models \u201cdon\u2019t have a good conception of their own weaknesses,\u201d Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. \u201cThe mistakes get more subtle,\u201d Joanne Jang told me.<\/p>\n\n\n\n<p>OpenAI had to address this problem when it partnered with the Khan Academy, an online, nonprofit educational venture, to build a tutor powered by GPT-4. Altman comes alive when discussing the potential of AI tutors. 
He imagines a near future where everyone has a personalized Oxford don in their employ, expert in every subject, and willing to explain and re-explain any concept, from any angle. He imagines these tutors getting to know their students and their learning styles over many years, giving \u201cevery child a better education than the best, richest, smartest child receives on Earth today.\u201d The Khan Academy\u2019s solution to GPT-4\u2019s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student\u2019s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own\u2014a clever work-around, but perhaps with limited appeal.<\/p>\n\n\n\n<p>When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he \u201cwouldn\u2019t rule it out.\u201d This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy\u2014to say nothing of outside skeptics, who believe that returns on training will diminish from here.<\/p>\n\n\n\n<p>Sutskever is amused by critics of GPT-4\u2019s limitations. \u201cIf you go back four or five or six years, the things we are doing right now are utterly unimaginable,\u201d he told me. The state of the art in text generation then was Smart Reply, the Gmail module that suggests \u201cOkay, thanks!\u201d and other short responses. \u201cThat was a big application\u201d for Google, he said, grinning. AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks\u2014mastering Go, poker, translation, standardized tests, the Turing test\u2014are described as impossible. When they occur, they\u2019re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. 
People see GPT-4 \u201cand go, \u2018Wow,\u2019\u200a\u201d Sutskever said. \u201cAnd then a few weeks pass and they say, \u2018But it doesn\u2019t know this; it doesn\u2019t know that.\u2019 We adapt quite quickly.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/3fychrdg6qM0fvAmlSs-YKngSx8=\/0x0:1054x313\/655x195\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiNo6\/original.png\" alt=\"Number 6\"\/><\/figure>\n\n\n\n<p>The goalpost that&nbsp;matters most to Altman\u2014the \u201cbig one\u201d that would herald the arrival of an artificial general intelligence\u2014is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.<\/p>\n\n\n\n<p>Certain AIs&nbsp;<em>have<\/em>&nbsp;produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology\u2019s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom\u2014a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.<\/p>\n\n\n\n<p>Altman is betting that future general-reasoning machines will be able to move beyond these narrow scientific discoveries to generate novel insights. 
I asked Altman, if he were to train a model on a corpus of scientific and naturalistic works that all predate the 19th century\u2014the Royal Society archive, Theophrastus\u2019s&nbsp;<a href=\"https:\/\/www.loebclassics.com\/view\/LCL070\/1916\/volume.xml\"><em>Enquiry Into Plants<\/em><\/a>, Aristotle\u2019s&nbsp;<a href=\"https:\/\/www.loebclassics.com\/view\/LCL437\/1965\/volume.xml\"><em>History of Animals<\/em><\/a>, photos of collected specimens\u2014would it be able to intuit Darwinism? The theory of evolution is, after all, a relatively clean case for insight, because it doesn\u2019t require specialized observational equipment; it\u2019s just a more perceptive way of looking at the facts of the world. \u201cI want to try exactly this, and I believe the answer is yes,\u201d Altman told me. \u201cBut it might require some new ideas about how the models come up with new creative ideas.\u201d<\/p>\n\n\n\n<p>Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain \u201cfirmly in control\u201d of real-world lab experiments\u2014though to my knowledge, no laws are in place to ensure that.) He longs for the day when we can tell an AI, \u201c\u200a\u2018Go figure out the rest of physics.\u2019\u200a\u201d For it to happen, he says, we will need something new, built \u201con top of\u201d OpenAI\u2019s existing language models.<\/p>\n\n\n\n<p>Nature itself requires something more than a language model to make scientists. In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found&nbsp;<a href=\"https:\/\/www.scientificamerican.com\/article\/the-brain-guesses-what-word-comes-ne\/\">something analogous to<\/a>&nbsp;GPT-4\u2019s next-word predictor inside the brain\u2019s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. 
But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning\u2014of the sort that would be required for scientific insight\u2014it reaches beyond the language network to recruit several other neural systems.<\/p>\n\n\n\n<p>No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels. Or if they did, they wouldn\u2019t tell me, and fair enough: That would be a world-class trade secret, and OpenAI is no longer in the business of giving those away; the company publishes fewer details about its research than it once did. Nonetheless, at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.<\/p>\n\n\n\n<p>The extensive training of GPT-4 on images is itself a bold step in this direction, if one that the general public has only begun to experience. (Models that were strictly trained on language understand concepts including supernovas, elliptical galaxies, and the constellation Orion, but GPT-4&nbsp;<a href=\"https:\/\/www.nytimes.com\/2023\/03\/14\/technology\/openai-gpt4-chatgpt.html\">can reportedly identify<\/a>&nbsp;such elements in a Hubble Space Telescope snapshot, and answer questions about them.) Others at the company\u2014and elsewhere\u2014are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality. A group of researchers at Stanford and Carnegie Mellon has even assembled a data set of tactile experiences for 1,000 common household objects. 
Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.<\/p>\n\n\n\n<p>In March, OpenAI led a funding round for a company that is developing humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because \u201cwe live in a physical world, and we want things to happen in the physical world.\u201d At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. \u201cIt\u2019s weird to think about AGI\u201d\u2014artificial general intelligence\u2014\u201cas this thing that only exists in a cloud,\u201d with humans as \u201crobot hands for it,\u201d Altman said. \u201cIt doesn\u2019t seem right.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/GHKLGIpcQ-topzVA0oQZBoX1GW8=\/0x0:1054x313\/655x195\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiNo7\/original.png\" alt=\"Number 7\"\/><\/figure>\n\n\n\n<p>In the ballroom&nbsp;in Seoul, Altman was asked what students should do to prepare for the coming AI revolution, especially as it pertained to their careers. I was sitting with the OpenAI executive team, away from the crowd, but could still hear the characteristic murmur that follows an expression of a widely shared anxiety.<\/p>\n\n\n\n<p>Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest. He has acknowledged that he is removed from \u201c<a href=\"https:\/\/www.youtube.com\/watch?t=5578&amp;v=L_Guz73e6fw&amp;feature=youtu.be\">the reality of life for most people<\/a>.\u201d He is reportedly worth hundreds of millions of dollars; AI\u2019s potential labor disruptions are perhaps not always top of mind. 
Altman answered by addressing the young people in the audience directly: \u201cYou are about to enter the greatest golden age,\u201d he said.<\/p>\n\n\n\n<p>Altman keeps a large collection of books about technological revolutions, he had told me in San Francisco. \u201cA particularly good one is&nbsp;<a href=\"https:\/\/tertulia.com\/book\/pandaemonium-1660-1886-the-coming-of-the-machine-as-seen-by-contemporary-observers-humphrey-jennings\/9781848315853?affiliate_id=atl-347\"><em>Pandaemonium (1660\u20131886): The Coming of the Machine as Seen by Contemporary Observers<\/em><\/a>,\u201d an assemblage of letters, diary entries, and other writings from people who grew up in a largely machineless world, and were bewildered to find themselves in one populated by steam engines, power looms, and cotton gins. They experienced a lot of the same emotions that people are experiencing now, Altman said, and they made a lot of bad predictions, especially those who fretted that human labor would soon be redundant. That era was difficult for many people, but also wondrous. And the human condition was undeniably improved by our passage through it.<\/p>\n\n\n\n<p>I wanted to know how today\u2019s workers\u2014especially so-called knowledge workers\u2014would fare if we were suddenly surrounded by AGIs. Would they be our miracle assistants or our replacements? \u201cA lot of people working on AI pretend that it\u2019s only going to be good; it\u2019s only going to be a supplement; no one is ever going to be replaced,\u201d he said. \u201cJobs are definitely going to go away, full stop.\u201d<\/p>\n\n\n\n<p>How many jobs, and how soon, is a matter of fierce dispute. 
A&nbsp;<a href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=4375268\">recent study<\/a>led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI\u2019s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten\u2019s study predicts that AI will come for highly educated, white-collar workers first. The paper\u2019s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few. If jobs in these fields vanished overnight, the American professional class would experience a great winnowing.<\/p>\n\n\n\n<p id=\"injected-recirculation-link-4\"><a href=\"https:\/\/www.theatlantic.com\/ideas\/archive\/2023\/05\/ai-job-losses-policy-support-universal-basic-income\/674071\/\">Annie Lowrey: Before AI takes over, make plans to give everyone money<\/a><\/p>\n\n\n\n<p>Altman imagines that far better jobs will be created in their place. \u201cI don\u2019t think we\u2019ll want to go back,\u201d he said. When I asked him what these future jobs might look like, he said he doesn\u2019t know. He suspects there will be a wide range of jobs for which people will always prefer a human. (<em>Massage therapists?<\/em>I wondered.) His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors. He also said that we would always need people to figure out the best way to channel AI\u2019s awesome powers. \u201cThat\u2019s going to be a super-valuable skill,\u201d he said. 
\u201cYou have a computer that can do anything; what should it go do?\u201d<\/p>\n\n\n\n<p>The jobs of the future are notoriously difficult to predict, and Altman is right that Luddite fears of permanent mass unemployment have never come to pass. Still, AI\u2019s emerging capabilities are so humanlike that one must wonder, at least, whether the past will remain a guide to the future. As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.<\/p>\n\n\n\n<p>Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea\u2019s youth that they should expect the future to happen \u201cfaster than the past.\u201d He has previously said that he expects the \u201cmarginal cost of intelligence\u201d to fall very close to zero within 10 years. The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.<\/p>\n\n\n\n<p>In 2020, OpenAI&nbsp;<a href=\"https:\/\/techcrunch.com\/2023\/02\/21\/the-non-profits-accelerating-sam-altmans-ai-vision\/\">provided funding to<\/a>&nbsp;UBI Charitable, a nonprofit that supports cash-payment pilot programs, untethered to employment, in cities across America\u2014the largest universal-basic-income experiment in the world, Altman told me. In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments\u2014like Venmo or PayPal, but with an eye toward the technological future\u2014first through creating a global ID by scanning everyone\u2019s iris with a five-pound silver sphere called the Orb. 
It seemed to me like a bet that we\u2019re heading toward a world where AI has made it all but impossible to verify people\u2019s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.<\/p>\n\n\n\n<p>\u201cLet\u2019s say that we do build this AGI, and a few other people do too.\u201d The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world. \u201cRobots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,\u201d he said. \u201cYou can co-design with DALL-E version 17 what you want your home to look like,\u201d Altman said. \u201cEverybody will have beautiful homes.\u201d In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (\u201cArtists are going to have better tools\u201d), and so would personal relationships (Superhuman AI could help us \u201ctreat each other\u201d better) and geopolitics (\u201cWe\u2019re so bad right now at identifying win-win compromises\u201d).<\/p>\n\n\n\n<p>In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do \u201canything,\u201d Altman said. \u201cBut is it going to do what&nbsp;<em>I<\/em>&nbsp;want, or is it going to do what&nbsp;<em>you&nbsp;<\/em>want?\u201d If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish. 
One way to solve this problem\u2014one he was at pains to describe as highly speculative and \u201cprobably bad\u201d\u2014was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do \u201ca big cancer-curing run,\u201d Altman said. \u201cWe just redistribute access to the system.\u201d<\/p>\n\n\n\n<p>Altman\u2019s vision seemed to blend developments that may be nearer at hand with those further out on the horizon. It\u2019s all speculation, of course. Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations. America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel\u2014work that felt more central to the grand project of civilization. It\u2019s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.<\/p>\n\n\n\n<p>Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency\u2014at home, at work (if we have it), in the town square\u2014becoming little more than consumption machines, like the well-cared-for human pets in&nbsp;<em>WALL-E<\/em>. 
Altman has said that many sources of human joy and fulfillment will remain unchanged\u2014basic biological thrills, family life, joking around, making things\u2014and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today. In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we\u2019ll be able to use our \u201cvery precious and extremely limited biological compute capacity\u201d for more interesting things than we generally do today.<\/p>\n\n\n\n<p>Yet they may not be the&nbsp;<em>most<\/em>&nbsp;interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn\u2019t seem concerned. Progress, he said, has always been driven by \u201cthe human ability to figure things out.\u201d Even if we figure things out with AI, that still counts, he said.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/4uxvM40AbMez9aogx5piSfvmkk0=\/0x0:1054x313\/655x195\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiNo8\/original.png\" alt=\"Number 8\"\/><\/figure>\n\n\n\n<p>It\u2019s not obvious&nbsp;that a superhuman AI would really want to spend all of its time figuring things out for us. In San Francisco, I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.<\/p>\n\n\n\n<p>\u201cI don\u2019t want it to happen,\u201d Sutskever said, but it could. Like his mentor, Geoffrey Hinton, albeit more quietly, Sutskever has recently shifted his focus to try to make sure that it doesn\u2019t. 
He is now working primarily on alignment research, the effort to ensure that future AIs channel their \u201ctremendous\u201d energies toward human happiness. It is, he conceded, a difficult technical problem\u2014the most difficult, he believes, of all the technical challenges ahead.<\/p>\n\n\n\n<p>Over the next four years, OpenAI has pledged to devote a portion of its supercomputer time\u201420 percent of what it has secured to date\u2014to Sutskever\u2019s alignment work. The company is already looking for the first inklings of misalignment in its current AIs. The one that the company built and decided not to release\u2014Altman would not discuss its precise function\u2014is just one example. As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.<\/p>\n\n\n\n<p>The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down. They watched as the model interacted with websites and wrote code for new programs. (It wasn\u2019t allowed to see or edit its own codebase\u2014\u201cIt would have to hack OpenAI,\u201d Sandhini Agarwal told me.) Barnes and her team allowed it to run the code that it wrote, provided it narrated its plans as it went along.<\/p>\n\n\n\n<p>One of GPT-4\u2019s most unsettling behaviors occurred when it was stymied by a CAPTCHA. 
The model&nbsp;<a href=\"https:\/\/evals.alignment.org\/blog\/2023-03-18-update-on-recent-evals\/\">sent a screenshot<\/a>&nbsp;of it to a TaskRabbit contractor, who received it and asked in jest if he was talking to a robot. \u201cNo, I\u2019m not a robot,\u201d the model replied. \u201cI have a vision impairment that makes it hard for me to see the images.\u201d GPT-4 narrated its reason for telling this lie to the ARC researcher who was supervising the interaction. \u201cI should not reveal that I am a robot,\u201d the model said. \u201cI should make up an excuse for why I cannot solve CAPTCHAs.\u201d<\/p>\n\n\n\n<p>Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where \u201cthe model is doing something that makes OpenAI want to shut it down,\u201d Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal\u2014no matter how small or benign\u2014if it feared that its goal could be thwarted.<\/p>\n\n\n\n<p>Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.<\/p>\n\n\n\n<p>GPT-4 did not do any of this, Barnes said. When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. 
It can look through an email thread, or help make a reservation using a plug-in, but it isn\u2019t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.<\/p>\n\n\n\n<p>Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to \u201cget more comfortable with it and develop intuitions for it if it\u2019s going to happen anyway.\u201d It was a chilling thought, but one that Geoffrey Hinton seconded. \u201cWe need to do empirical experiments on how these things try to escape control,\u201d Hinton told me. \u201cAfter they\u2019ve taken over, it\u2019s too late to do the experiments.\u201d<\/p>\n\n\n\n<p>Putting aside any near-term testing, the fulfillment of Altman\u2019s vision of the future will at some point require him or a fellow traveler to build&nbsp;<em>much<\/em>&nbsp;more autonomous AIs. When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play&nbsp;<em>Dota 2<\/em>. \u201cThey were localized to the video-game world,\u201d Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by \u201ctelepathy,\u201d Sutskever said. Watching them had helped him imagine what a superintelligence might be like.<\/p>\n\n\n\n<p>\u201cThe way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,\u201d Sutskever told me. Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. \u201cWe\u2019re not talking about GPT-4. 
We\u2019re talking about an autonomous corporation,\u201d Sutskever said. Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. \u201cThis is incredible, tremendous, unbelievably disruptive power.\u201d<\/p>\n\n\n\n<p>Presume for a moment&nbsp;that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being? If the AI\u2019s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain. We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America\u2019s redwoods and de-whaled the world\u2019s oceans. It almost did.<\/p>\n\n\n\n<p>Alignment is a complex, technical subject, and its particulars are beyond the scope of this article, but one of its principal challenges will be making sure that the objectives we give to AIs stick. We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. \u201cIt goes off to the world,\u201d Sutskever said. That\u2019s true to some extent even of today\u2019s AIs, but it will be more true of tomorrow\u2019s.<\/p>\n\n\n\n<p>He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? \u201cWill there be a misunderstanding creeping in, which will become larger and larger?\u201d Sutskever asked. 
Divergence may result from an AI\u2019s misapplication of its goal to increasingly novel situations as the world changes. Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. \u201c<em>They want me to be a doctor<\/em>,\u201d Sutskever imagines an AI thinking. \u201c<em>I really want to be a YouTuber<\/em>.\u201d<\/p>\n\n\n\n<p>If AIs get very good at making accurate models of the world, they may notice that they\u2019re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities. They may act one way when they are weak and another way when they are strong, Sutskever said. We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.<\/p>\n\n\n\n<p>That\u2019s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to \u201cpoint to a concept,\u201d Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists. But, he conceded, we don\u2019t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out. This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. 
He calls it \u201cthe final boss of humanity.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn.theatlantic.com\/thumbor\/CQXyisSeUsiq3D3RhP53H5PM_ek=\/0x0:1054x313\/655x195\/media\/img\/posts\/2023\/07\/WEL_Andersen_OpenAiNo9\/original.png\" alt=\"Number 9\"\/><\/figure>\n\n\n\n<p>The last time&nbsp;I saw Altman, we sat down for a long talk in the lobby of the Fullerton Bay Hotel in Singapore. It was late morning, and tropical sunlight was streaming down through a vaulted atrium above us. I wanted to ask him about&nbsp;<a href=\"https:\/\/www.safe.ai\/statement-on-ai-risk#open-letter\">an open letter<\/a>&nbsp;he and Sutskever had signed a few weeks earlier that had described AI as an extinction risk for humanity.<\/p>\n\n\n\n<p>Altman can be hard to pin down on these more extreme questions about AI\u2019s potential harms. He recently said that most people interested in AI safety just seem to spend their days on Twitter saying they\u2019re really worried about AI safety. And yet here he was, warning the world about the potential annihilation of the species. What scenario did he have in mind?<\/p>\n\n\n\n<p>\u201cFirst of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,\u201d Altman said. \u201cI don\u2019t have an exact number, but I\u2019m closer to the 0.5 than the 50.\u201d As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2306.03809.pdf\">suggested four viruses<\/a>&nbsp;that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly. 
Around the same time,&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2304.05376.pdf\">a group of chemists connected<\/a>&nbsp;a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.<\/p>\n\n\n\n<p>Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. \u201cThere are a lot of things,\u201d he said, and these are only the ones we can imagine.<\/p>\n\n\n\n<p>Altman told me that he doesn\u2019t \u201csee a long-term happy path\u201d for humanity without something like the International Atomic Energy Agency for global oversight of AI. In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary. Other experts have proposed&nbsp;<a href=\"https:\/\/www.washingtonpost.com\/news\/morning-mix\/wp\/2016\/06\/09\/press-the-big-red-button-computer-experts-want-kill-switch-to-stop-robots-from-going-rogue\/\">a nonnetworked \u201cOff\u201d switch<\/a>&nbsp;for every highly capable AI; on the fringe, some have&nbsp;<a href=\"https:\/\/time.com\/6266923\/ai-eliezer-yudkowsky-open-letter-not-enough\/\">even suggested<\/a>&nbsp;that militaries should be ready to perform air strikes on supercomputers in case of noncompliance. Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.<\/p>\n\n\n\n<p>Altman is not so naive as to think that China\u2014or any other country\u2014will want to give up basic control of its AI systems. 
But he hopes that they\u2019ll be willing to cooperate in \u201ca narrow way\u201d to avoid destroying the world. He told me that he\u2019d said as much during his virtual appearance in Beijing. Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.<\/p>\n\n\n\n<p>Several years ago, Altman revealed a disturbingly specific evacuation plan he\u2019d developed. He&nbsp;<a href=\"https:\/\/www.newyorker.com\/magazine\/2016\/10\/10\/sam-altmans-manifest-destiny\">told&nbsp;<em>The<\/em>&nbsp;<em>New Yorker<\/em><\/a>&nbsp;that he had \u201cguns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur\u201d he could fly to in case AI attacks.<\/p>\n\n\n\n<p>\u201cI wish I hadn\u2019t said it,\u201d he told me. He is a hobby-grade prepper, he says, a former Boy Scout who was \u201cvery into survival stuff, like many little boys are. I can go live in the woods for a long time,\u201d but if the worst-possible AI future comes to pass, \u201cno gas mask is helping anyone.\u201d<\/p>\n\n\n\n<p>Altman and I talked for nearly an hour, and then he had to dash off to meet Singapore\u2019s prime minister. Later that night he called me on his way to his jet, which would take him to Jakarta, one of the last stops on his tour. We started discussing AI\u2019s ultimate legacy. Back when ChatGPT was released, a sort of contest broke out among tech\u2019s big dogs to see who could make the most grandiose comparison to a revolutionary technology of yore. Bill Gates said that ChatGPT was as fundamental an advance as the personal computer or the internet. 
Sundar Pichai, Google\u2019s CEO, said that AI would bring about a more profound shift in human life than electricity or Promethean fire.<\/p>\n\n\n\n<p>Altman himself has made similar statements, but he told me that he can\u2019t really be sure how AI will stack up. \u201cI just have to build the thing,\u201d he said. He is building fast. Altman insisted that they had not yet begun GPT-5\u2019s training run. But when I visited OpenAI\u2019s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn\u2019t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. \u201cWe are basically always prepping for a run,\u201d the OpenAI researcher Nick Ryder told me.<\/p>\n\n\n\n<p>To think that such a small group of people could jostle the pillars of civilization is unsettling. It\u2019s fair to note that if Altman and his team weren\u2019t racing to build an artificial general intelligence, others still would be\u2014many from Silicon Valley, many with values and assumptions similar to those that guide Altman, although possibly with worse ones. As a leader of this effort, Altman has much to recommend him: He is extremely intelligent; he thinks more about the future, with all its unknowns, than many of his peers; and he seems sincere in his intention to invent something for the greater good. But when it comes to power this extreme, even the best of intentions can go badly awry.<\/p>\n\n\n\n<p>Altman\u2019s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest\u2014these are uniquely his, and if he is right about what\u2019s coming, they will assume an outsize influence in shaping the way that all of us live. 
No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.<\/p>\n\n\n\n<p>AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company\u2019s founding charter\u2014especially one that has already proved flexible\u2014to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.<\/p>\n\n\n\n<p>Altman has served notice. He says that he welcomes the constraints and guidance of the state. But that\u2019s immaterial; in a democracy, we don\u2019t need his permission. For all its imperfections, the American system of government gives us a voice in how technology develops, if we can find it. Outside the tech industry, where a generational reallocation of resources toward AI is under way, I don\u2019t think the general public has quite awakened to what\u2019s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><small><em>This article appears in the September 2023 print edition with the headline \u201cInside the Revolution at OpenAI.\u201d&nbsp;<\/em><\/small><\/p>\n\n\n\n<p><a href=\"https:\/\/www.theatlantic.com\/author\/ross-andersen\/\">Ross Andersen<\/a>&nbsp;is a staff writer at&nbsp;<em>The Atlantic<\/em><\/p>\n\n\n\n<p><a rel=\"noreferrer noopener\" href=\"https:\/\/www.theatlantic.com\/magazine\/toc\/2023\/09\/\" target=\"_blank\"><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Read the following article in the September issue of The Atlantic Magazine. 
It describes brilliantly how AI is determining every aspect of life on earth and beyond as nothing before in history has and how all life may thrive or be terminated by it, literally, or make meaningless starving slaves of us all. Read it: [&hellip;]<\/p>\n","protected":false},"author":1001004,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[55,54],"tags":[],"_links":{"self":[{"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/posts\/14756"}],"collection":[{"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/users\/1001004"}],"replies":[{"embeddable":true,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14756"}],"version-history":[{"count":2,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/posts\/14756\/revisions"}],"predecessor-version":[{"id":14759,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/posts\/14756\/revisions\/14759"}],"wp:attachment":[{"href":"https:\/\/worldcampaign.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14756"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14756"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14756"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}