{"id":12840,"date":"2021-12-10T01:36:54","date_gmt":"2021-12-10T09:36:54","guid":{"rendered":"https:\/\/worldcampaign.net\/?p=12840"},"modified":"2021-12-22T20:39:46","modified_gmt":"2021-12-23T04:39:46","slug":"issue-of-the-week-134","status":"publish","type":"post","link":"https:\/\/worldcampaign.net\/?p=12840","title":{"rendered":"Issue of the Week: Human Rights, Personal Growth, Economic Opportunity, War, Environment, Disease, Hunger, Population"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-12845\" src=\"https:\/\/worldcampaign.net\/wp-content\/uploads\/2021\/12\/image-2.jpeg\" alt=\"\" width=\"320\" height=\"320\" srcset=\"https:\/\/worldcampaign.net\/wp-content\/uploads\/2021\/12\/image-2.jpeg 320w, https:\/\/worldcampaign.net\/wp-content\/uploads\/2021\/12\/image-2-150x150.jpeg 150w, https:\/\/worldcampaign.net\/wp-content\/uploads\/2021\/12\/image-2-300x300.jpeg 300w\" sizes=\"(max-width: 320px) 100vw, 320px\" \/><\/p>\n<p><span style=\"font-size: 8pt;\"><em>The Biggest Event In Human History<\/em>, BBC Radio 4, 1 December 2021<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>Here we go again.<\/p>\n<p>If this doesn&#8217;t get your attention, what will?<\/p>\n<p><em>The Biggest Event In Human History.<\/em><\/p>\n<p>That&#8217;s the title of the most recent Reith Lecture on the BBC World Service.<\/p>\n<p>Its the first lecture in a series titled, <em>Living with Artificial Intelligence<\/em>.<\/p>\n<p>So, the big picture of the topic is obvious.<\/p>\n<p>We&#8217;ll leave the details to the lecturer. Not the first, last or only word on the subject, to be sure. 
But one that is current as we write, along with its first follow-up, with two additional follow-ups coming momentarily, in a distinguished context.<\/p>\n<p><a href=\"https:\/\/www.bbc.co.uk\/programmes\/m00127t9\">You can access them here<\/a>\u00a0and listen or read as you please.<\/p>\n<p>The transcript of the first one follows:<\/p>\n<div class=\"row\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>Downloaded from www.bbc.co.uk\/radio4<\/p>\n<p>THIS TRANSCRIPT WAS TYPED FROM A RECORDING AND NOT COPIED FROM AN ORIGINAL SCRIPT. BECAUSE OF THE RISK OF MISHEARING AND THE DIFFICULTY IN SOME CASES OF IDENTIFYING INDIVIDUAL SPEAKERS, THE BBC\u00a0CANNOT VOUCH FOR ITS COMPLETE ACCURACY.<\/p>\n<p>BBC REITH LECTURES 2021 \u2013\u00a0LIVING WITH ARTIFICIAL INTELLIGENCE<\/p>\n<p>With Stuart Russell, Professor of Computer Science and founder of the Center for Human-Compatible Artificial Intelligence at the\u00a0University of California, Berkeley<\/p>\n<p>Lecture 1: The Biggest Event in Human History<\/p>\n<p>ANITA ANAND: Welcome to the 2021 BBC Reith Lectures. We\u2019re at the British Library in the heart of London and as well as housing more than 14 million books, we are also home here to the Alan Turing Institute, the national centre for data science and artificial intelligence. Set up in 2015, it was, of course, named after the famous English mathematician, one of the key figures in breaking the Nazi enigma code, thereby saving countless lives. We couldn\u2019t really think of a better venue to place this year\u2019s Reith Lectures, which will explore the role of artificial intelligence and what it means for the way we live our lives.<\/p>\n<p>Our lecturer has called the development of artificial intelligence \u201cthe most profound change in human history,\u201d so we\u2019ve given him four programmes to explain why. He\u2019s going to be addressing our fears. 
He\u2019s going to be explaining the likely impact on jobs and the economy and, hopefully, he will answer the most important question of all: who is ultimately going to be in control, is it us or is it the machines?<\/p>\n<p>Let\u2019s meet him now. Please welcome the 2021 BBC Reith Lecturer, Professor Stuart Russell.<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>1<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 2\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>(AUDIENCE APPLAUSE)<\/p>\n<p>ANITA ANAND: Stuart, it\u2019s wonderful that we\u2019re going to be hearing from you. I just wonder, actually, when you first became aware of artificial intelligence because for many of us our introduction would have been through sci-fi, so at what point did you think, actually, this will be a real-life career for me?<\/p>\n<p>STUART RUSSELL: So I think it was when my grandmother bought me one of the first programmable calculators, the Sinclair Cambridge programmable, a little tiny, white calculator, and once I understood that you could actually get it to do things by writing these programs, I just wanted to make it intelligent. But if you\u2019ve ever had one of those calculators you know that there\u2019s only 36 keystrokes that you can put in the program, and you can do various things, you can calculate square roots and signs and logs, but you couldn\u2019t really make it intelligent with that much. So, I ended up actually borrowing the giant supercomputer at Imperial College, the CDC 6600, which was about as big as this room and far less powerful than what\u2019s on your cell phone today.<\/p>\n<p>ANITA ANAND: Well, I mean, this obviously marks you out as very different to the rest of us. We were all writing rude words and turning our calculator upside down.<\/p>\n<p>Can we even measure how fast AI is developing?<\/p>\n<p>STUART RUSSELL: I actually think it\u2019s very difficult. Machines don\u2019t have an IQ. 
A common mistake that some commentators make is to predict that machine IQ will exceed human IQ on some given date, but if you think about it, so AlphaGo, which is this amazing Go-playing program that was developed just across the road, is able to beat the human world champion at playing Go but it can\u2019t remember anything, and then the Google search engine remembers everything, but it can\u2019t plan its way out of a paper bag. So, to talk about the IQ of a machine doesn\u2019t make sense.<\/p>\n<p>Humans, when they have a high IQ, typically can do lots of different things. They can play games and remember things, and so it sort of makes sense. Even for humans there\u2019s not a particularly good way of describing intelligence, but for machines it makes no sense at all. So, we see big progress on particular tasks. Machine translation, for example, speech recognition is another one, recognising objects in images, these were things that we, in AI, have been trying to do for 50 or 60 years, and in the last 10 years we\u2019ve actually pretty much solved them. That makes you think that the problems are not insolvable, and we can actually knock them over one by one.<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>2<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 3\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>ANITA ANAND: Well, I\u2019m really looking forward to your first lecture. It is entitled The Biggest Event in Human History. Stuart, over to you.<\/p>\n<p>STUART RUSSELL: Thank you, Anita. Thank you to the audience for being here. Thank you to the BBC for inviting me. It really is an enormous and a unique honour to give these lectures. We are at the Alan Turing Institute, named for this man who is now on the 50-pound note. 
The BBC couldn\u2019t afford a real one, so I printed out a fake one.<\/p>\n<p>In 1936, in his early twenties, Turing wrote a paper describing two new kinds of mathematical objects, machines and programs. They turned out to be the most powerful ever found, even more so than numbers themselves. In the last few decades those mathematical objects have created eight of the 10 most valuable companies in the world and dramatically changed human lives. Their future impact through AI may be far greater.<\/p>\n<p>Turing\u2019s 1950 paper \u201cComputing Machinery and Intelligence\u201d is at least as famous as his 1936 paper. It introduced many of the core ideas of AI, including machine learning. It proposed what we now call the Turing Test as a thought experiment, and it demolished several standard objections to the very possibility of machine intelligence.<\/p>\n<p>Perhaps less well known are two lectures he gave in 1951. One was on the BBC\u2019s Third Programme, but this is going out on Radio 4 and the World Service, so I\u2019ll quote the other one, given to a learned society in Manchester. He said:<\/p>\n<p>\u201cOnce the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.\u201d<\/p>\n<p>Let me repeat that: \u201cAt some stage therefore we should have to expect the machines to take control.\u201d<\/p>\n<p>I must confess that for most of my career I didn\u2019t lose much sleep over this issue, and I was not even aware, until a few years ago, that Turing himself had mentioned it.<\/p>\n<p>I did include a section in my textbook, written with Peter Norvig in 1994, on the subject of \u201cWhat if we succeed?\u201d but it was cautiously optimistic. 
In subsequent years, the alarm was raised more frequently, but mostly from outside AI.<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>3<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 4\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>But by 2013, with the benefit of some time to think during a sabbatical in Paris, I became convinced that the issue not only belonged in the mainstream but was possibly the most important question that we faced. I gave a talk at the Dulwich Picture Gallery in which I stated that:<\/p>\n<p>\u201cSuccess would be the biggest event in human history and perhaps the last event in human history.\u201d<\/p>\n<p>A few months later, in April 2014, I was at a conference in Iceland, and I got a call from National Public Radio asking if they could interview me about the new film Transcendence. It wasn\u2019t playing in Iceland, but I was flying to Boston the next day, so I went straight from the airport to the nearest cinema.<\/p>\n<p>I sat in the second row and watched as a Berkeley AI professor, possibly me, played by Johnny Depp, naturally, was gunned down by activists worried about, of all things, super-intelligent AI. Perhaps this was a call from the Department of Magical Coincidences? Before Johnny Depp\u2019s character dies, his mind is uploaded to a quantum supercomputer and soon outruns human capabilities, threatening to take over the world.<\/p>\n<p>A few days later, a review of Transcendence appeared in the Huffington Post, which I co-authored along with physicists Max Tegmark, Frank Wilczek, and Stephen Hawking. It included the sentence from my Dulwich talk about the biggest event in human history. From then on, I would be publicly committed to the view that success for my field would pose a risk to my own species.<\/p>\n<p>Now, I\u2019ve been talking about \u201csuccess in AI,\u201d but what does that mean? 
To answer, I\u2019ll have to explain what AI is actually trying to do. Obviously, it\u2019s about making machines intelligent, but what does that mean?<\/p>\n<p>To answer this question, the field of AI borrowed what was, in the 1950s, a widely accepted and constructive definition of human intelligence:<\/p>\n<p>\u201cHumans are intelligent to the extent that our actions can be expected to achieve our objectives.\u201d<\/p>\n<p>All those other characteristics of intelligence, perceiving, thinking, learning, inventing, listening to lectures, and so on, can be understood through their contributions to our ability to act successfully.<\/p>\n<p>Now, this equating of intelligence with the achievement of objectives has a long history. For example, Aristotle wrote:<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>4<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 5\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>\u201cWe deliberate not about ends, but about means. A doctor does not deliberate whether he shall heal, nor an orator whether he shall persuade. 
They assume the end and consider how and by what means it is attained, and if it seems easily and best produced thereby.\u201d<\/p>\n<p>And then, between the sixteenth and twentieth centuries, almost entirely for the purpose of analysing gambling games, mathematicians refined this deterministic view of \u201cmeans achieving ends\u201d to allow for uncertainty about the outcomes of actions and to accommodate the interactions of multiple decision-making entities, and these efforts culminated in the work of von Neumann and Morgenstern on an axiomatic basis for rationality, published in 1944.<\/p>\n<p>From the very beginnings of AI, intelligence in machines has been defined in the same way:<\/p>\n<p>\u201cMachines are intelligent to the extent that their actions can be expected to achieve their objectives.\u201d<\/p>\n<p>But because machines, unlike humans, have no objectives of their own, we give them objectives to achieve. In other words, we build objective-achieving machines, we feed objectives into them, or we specialise them for particular objectives, and off they go. The same general plan applies in control theory, in statistics, in operations research, and in economics. In other words, it underlies a good part of the 20th century\u2019s technological progress. It\u2019s so pervasive, I\u2019ll call it the \u201cstandard model.\u201d<\/p>\n<p>Operating within this model, AI has achieved many breakthroughs over the past seven decades. Just thinking of intelligence as computation led to a revolution in psychology and a new kind of theory, programs instead of simple mathematical laws. 
It also led to a new definition of rationality that reflects the finite computational powers of any real entity, whether artificial or human.<\/p>\n<p>AI also developed symbolic computation, that is, computing with symbols representing objects such as chess pieces or aeroplanes, instead of the purely numerical calculations that had defined computing since the seventeenth century.<\/p>\n<p>Also following Turing\u2019s suggestion from 1950, we developed machines that learn, that is they improve their achievement of objectives through experience. The first successful learning program was demonstrated on television in 1956. Arthur Samuel\u2019s draughts-playing program had learned to beat its own creator, and that program was the progenitor of DeepMind\u2019s AlphaGo, which taught itself to beat the human world champion in 2017.<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>5<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 6\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>Then in the sixties and seventies, systems for logical reasoning and planning were developed, and they were embodied to create autonomous mobile robots. Logic programming and rule-based expert systems supported some of the first commercial applications of AI in the early eighties, creating an immense explosion of interest in the US and Japan. The first self-driving Mercedes drove on the autobahn in 1987. Britain, on the other hand, had to play catch-up, having stopped nearly all AI research in the early seventies.<\/p>\n<p>Then, in the 1990s, AI developed new methods for representing and reasoning about probabilities and about causality in complex systems, and those methods have spread to nearly every area of science.<\/p>\n<p>Over the last decade, so-called deep learning systems appear to have learned to recognise human speech very well; to recognise objects in images, to translate between hundreds of different human languages. 
In fact, I use machine translation every year because I\u2019m still paying taxes in France. It does a perfect job of translating quite impenetrable French tax instructions into equally impenetrable English tax instructions. Despite this setback, AI is increasingly important in the economy, running everything from search engines to autonomous delivery planes.<\/p>\n<p>But as AI moves into the real world, it collides with Francis Bacon\u2019s observation from The Wisdom of the Ancients in 1609:<\/p>\n<p>\u201cThe mechanical arts may be turned either way and serve as well for the cure as for the hurt.\u201d<\/p>\n<p>\u201cThe hurt,\u201d with AI, includes racial and gender bias, disinformation, deepfakes, and cybercrime. And as Bacon also noted:<\/p>\n<p>\u201cOut of the same fountain come instruments of death.\u201d<\/p>\n<p>Algorithms that can decide to kill human beings, and have the physical means to do so, are already for sale. I\u2019ll explain in the next lecture why this is a huge mistake. It\u2019s not because of killer robots taking over the world; it\u2019s simply because computers are very good at doing the same thing millions of times over.<\/p>\n<p>All of these risks that I\u2019ve talked about come from simple, narrow, application-specific algorithms. But let\u2019s not mince words. The goal of AI is and always has been general-purpose AI: that is, machines that can quickly learn to perform well across the full range of tasks that humans can perform. And one<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>6<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 7\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>must acknowledge that a species capable of inventing both the gravitational wave detector and the Eurovision song contest exhibits a great deal of generality.<\/p>\n<p>Inevitably, general-purpose AI systems would far exceed human capabilities in many important dimensions. 
This would be an inflection point for civilisation.<\/p>\n<p>I want to be clear that we are a long way from achieving general-purpose AI. Furthermore, we cannot predict its arrival based on the growth of data and computing power. Running stupid algorithms on faster and faster machines just gives you the wrong answer more quickly. Also, I think it\u2019s highly unlikely that the present obsession with deep learning will yield the progress its adherents imagine. Several conceptual breakthroughs are still needed, and those are very hard to predict.<\/p>\n<p>In fact, the last time we invented a civilisation-ending technology, we got it completely wrong. On September 11, 1933, at a meeting in Leicester, Lord Rutherford, who was the leading nuclear physicist of that era, was asked if, in 25 or 30 years\u2019 time, we might unlock the energy of the atom. His answer was:<\/p>\n<p>\u201cAnyone who looks for a source of power in the transformation of the atoms is talking moonshine.\u201d<\/p>\n<p>The next morning, Leo Szilard, a Hungarian physicist and refugee who was staying at the old Imperial Hotel on Russell Square, 10 minutes\u2019 walk from here, read about Rutherford\u2019s speech in The Times. He went for a walk and invented the neutron-induced nuclear chain reaction. The problem of liberating atomic energy went from \u201cimpossible\u201d to essentially solved in less than twenty-four hours.<\/p>\n<p>The moral of this story is that betting against human ingenuity is foolhardy, particularly when our future is at stake. Now, because we need multiple breakthroughs and not just one, I don\u2019t think I\u2019m falling into Rutherford\u2019s trap if I say that it\u2019s quite unlikely we\u2019ll succeed in the next few years. It seems prudent, nonetheless, to prepare for the eventuality.<\/p>\n<p>If all goes well, it will herald a golden age for humanity. 
Our civilisation is the result of our intelligence; and having access to much greater intelligence could enable a much better civilisation.<\/p>\n<p>One rather prosaic goal is to use general-purpose AI to do what we already know how to do more effectively, at far less cost, and at far greater scale. We could, thereby, raise the living standard of everyone on Earth, in a sustainable<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>7<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 8\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>way, to a respectable level. That amounts to a roughly tenfold increase in global GDP. The cash equivalent, or the net present value as economists call it, of the increased income stream is about 10 quadrillion pounds or $14 quadrillion. All of the huge investments happening in AI are just a rounding error in comparison.<\/p>\n<p>If 10 quadrillion pounds doesn\u2019t sound very concrete, let me try to make this more concrete by looking back at what happened with transportation. If you wanted to go from London to Australia in the 17th century, it would have been a huge project costing the equivalent of billions of pounds, requiring years of planning and hundreds of people, and you\u2019d probably be dead before you got there. Now we are used to the idea of transportation as a service or TaaS. If you need to be in Melbourne tomorrow, you take out your phone, you go tap-tap-tap, spend a relatively tiny amount of money, and you\u2019re there, although they won\u2019t let you in.<\/p>\n<p>General-purpose AI would be everything as a service, or XaaS. There would be no need for armies of specialists in different disciplines, organised into hierarchies of contractors and subcontractors, to carry out a project. All embodiments of general-purpose AI would have access to all the knowledge and skills of the human race. 
In principle, politics and economics aside, everyone could have at their disposal an entire organisation composed of software agents and physical robots, capable of designing and building bridges, manufacturing new robots, improving crop yields, cooking dinner for a hundred guests, separating the paper and the plastic, running an election, or teaching a child to read. It is the generality of general-purpose AI that makes this possible.<\/p>\n<p>Now that\u2019s all fine if everything goes well. Although, as I will discuss in the third lecture, there is the question of what\u2019s left for us humans to do.<\/p>\n<p>On the other hand, as Alan Turing warned, in creating general-purpose AI, we create entities far more powerful than humans. How do we ensure that they never, ever have power over us? After all, it is our intelligence, individual and collective, that gives us power over the world and over all other species.<\/p>\n<p>Turing\u2019s warning actually ends as follows:<br \/>\n\u201cAt some stage therefore, we should have to expect the machines to take\u00a0control in the way that is mentioned in Samuel Butler\u2019s Erewhon.\u201d<br \/>\nButler\u2019s book describes a society in which machines are banned,\u00a0precisely\u00a0because of the prospect of subjugation. His prose is very 1872:<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>8<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 9\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>\u201cAre we not ourselves creating our successors in the supremacy of the Earth? In the course of ages, we shall find ourselves the inferior race. Our bondage will steal upon us noiselessly and by imperceptible approaches.\u201d<\/p>\n<p>Is that the end of the story, the last event in human history? Surely, we need to understand why making AI better and better makes the outcome for humanity worse and worse. 
Perhaps if we do understand, we can find another way.<\/p>\n<p>Many films such as Terminator and Ex Machina would have you believe that spooky emergent consciousness is the problem. If we can just prevent it, then the spontaneous desire for world domination and the hatred of humans can\u2019t happen. There are at least two problems with this.<\/p>\n<p>First, no one has any idea how to create, prevent, or even detect consciousness in machines or, for that matter, in functioning humans.<\/p>\n<p>Second, it has absolutely nothing to do with it. Suppose I give you a program and ask, \u201cDoes this program present a threat to humanity?\u201d You analyse the code and indeed, when run, it will form and carry out a plan to destroy humanity, just as a chess program forms and carries out a plan to defeat its opponent. Now suppose I tell you that the code, when run, also creates a form of machine consciousness. Will that change your prediction? No, not at all. It makes absolutely no difference. It\u2019s competence, not consciousness, that matters.<\/p>\n<p>To understand the real problem with making AI better, we have to examine the very foundations of AI, the \u201cstandard model\u201d which says that:<\/p>\n<p>\u201cMachines are intelligent to the extent that their actions can be expected to achieve their objectives.\u201d<\/p>\n<p>For example, you tell a self-driving car, \u201cTake me to Heathrow,\u201d and the car adopts the destination as its objective. It\u2019s not something that the AI system figures out for itself; it&#8217;s something that we specify. This is how we build all AI systems today.<\/p>\n<p>Now the problem is that when we start moving out of the lab and into the real world, we find that we are unable to specify these objectives completely and correctly. 
In fact, defining the other objectives of self-driving cars, such as how to balance speed, passenger safety, sheep safety, legality, comfort, politeness, has turned out to be extraordinarily difficult.<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>9<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 10\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>This should not be a surprise. We\u2019ve known it for thousands of years. For example, in the ancient Greek legend, King Midas asked the gods that everything he touch should turn to gold. This was the objective he specified, and the gods granted his objective. They are the AI in this case. And of course, his food, his drink, and his family all turn to gold, and he dies in misery and starvation.<\/p>\n<p>We see the same plot in the Sorcerer&#8217;s Apprentice by Goethe, where the apprentice asks the brooms to help him fetch water, without saying how much water. He tries to chop the brooms into pieces, but they\u2019ve been given their objective, so all the pieces multiply and keep fetching water.<\/p>\n<p>And then there are the genies who grant you three wishes. And what is your third wish? It&#8217;s always, \u201cPlease undo the first two wishes because I&#8217;ve ruined the world.\u201d<\/p>\n<p>Talking of ruining the world, let\u2019s look at social media content-selection algorithms, the ones that choose items for your newsfeed or the next video to watch. They aren\u2019t particularly intelligent, but they have more power over people\u2019s cognitive intake than any dictator in history.<\/p>\n<p>The algorithm\u2019s objective is usually to maximise click-through, that is, the probability that the user clicks on presented items. 
The designers thought, perhaps, that the algorithm would learn to send items that the user likes, but the algorithm had other ideas.<\/p>\n<p>Like any rational entity, it learns how to modify the state of its environment, in this case the user\u2019s mind, in order to maximise its own reward, by making the user more predictable. A more predictable human can be fed items that they are more likely to click on, thereby generating more revenue. Users with more extreme preferences seem to be more predictable. And now we see the consequences of growing extremism all over the world.<\/p>\n<p>As I said, these algorithms are not very intelligent. They don\u2019t even know that humans exist or have minds. More sophisticated algorithms could be far more effective in their manipulations. Unlike the magic brooms, these simple algorithms cannot even protect themselves, but fortunately they have corporations for that.<\/p>\n<p>In fact, some authors have argued that corporations themselves already act as super-intelligent machines. They have human components, but they operate as profit-maximising algorithms.<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>10<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 11\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>The ones that have been creating global heating for the last hundred years have certainly outsmarted the human race, and we seem unable to interfere with their operation. Again, the objective here, profit neglecting externalities, is the wrong one.<\/p>\n<p>Incidentally, blaming an optimising machine for optimising the objective that you gave it is daft. It\u2019s like blaming the other team for scoring against England in the World Cup. We\u2019re the ones who wrote the rules. 
Instead of complaining, we should rewrite the rules so it can\u2019t happen.<\/p>\n<p>What we see from these lessons is that with the standard model and mis-specified objectives, \u201cbetter\u201d AI systems or better soccer teams produce worse outcomes. A more capable AI system will make a much bigger mess of the world in order to achieve its incorrectly specified objective, and, like the brooms, it will do a much better job of blocking human attempts to interfere.<\/p>\n<p>And so, in a sense we&#8217;re setting up a chess match between ourselves and the machines, with the fate of the world as the prize. You don\u2019t want to be in that chess match.<\/p>\n<p>Earlier Anita asked me, \u201cDoes everyone in AI agree with me?\u201d Amazingly, not, or at least not yet. For some reason, they can be quite defensive about it. There are many counterarguments, some so embarrassing it would be unkind to repeat them.<\/p>\n<p>For example, it\u2019s often said that we needn\u2019t put in objectives such as self-preservation and world domination. But remember the brooms: the apprentice\u2019s spell doesn\u2019t mention self-preservation, but self-preservation is a logical necessity for pursuing almost any objective, so the brooms preserve themselves and even multiply themselves in order to fetch water.<\/p>\n<p>Then there\u2019s the Mark Zuckerberg\u2013Elon Musk \u201csmackdown\u201d that was so eagerly reported in the press. Elon Musk had drawn the analogy between creating super-intelligent AI and \u201csummoning the demon.\u201d<\/p>\n<p>Mark Zuckerberg replied, \u201cIf you\u2019re arguing against AI, then you\u2019re arguing against safer cars and being able to diagnose people when they&#8217;re sick.\u201d Of course, Elon Musk isn\u2019t arguing against AI. 
He\u2019s arguing against uncontrollable AI.<\/p>\n<\/div>\n<\/div>\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>If a nuclear engineer wants to prevent the uncontrolled nuclear reactions that we saw at Chernobyl, we don\u2019t say she\u2019s \u201carguing against electricity.\u201d It\u2019s not \u201canti-AI\u201d to talk about risks. Elon Musk isn\u2019t a Luddite, and nor was Alan Turing,<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>11<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 12\">\n<p>even though we were all jointly given the Luddite of the Year Award in 2015 for asking, \u201cWhat if we succeed?\u201d The genome editors and the life extenders and the brain enhancers should also ask:<\/p>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>What if we succeed? What then? In the case of AI, how do you propose to retain power, forever, over entities more powerful\u00a0than ourselves?<\/p>\n<p>One option might be to ban AI altogether, just as Butler\u2019s anti-machinists in Erewhon banned all mechanical devices after a terrible civil war. In Frank Herbert\u2019s Dune, the Butlerian Jihad had been fought to save humanity from machine control, and now there is an 11th commandment:<\/p>\n<p>\u201cThou shalt not make a machine in the likeness of a human mind.\u201d<br \/>\nBut then I imagine all those corporations and countries with their eyes on\u00a0that 10 quadrillion-pound prize, and I think, \u201cGood luck with that.\u201d<\/p>\n<p>The right answer is that if making AI better and better makes the problem worse and worse, then we\u2019ve got the whole thing wrong. 
We think we want machines that achieve the objectives we give them, but actually we want something else. Later in the series I\u2019ll explain what that \u201csomething else\u201d might be, a new form of AI that will be provably beneficial to the human race, as well as all the questions that it raises for our future.<\/p>\n<p>Thank you very much.<\/p>\n<p>(AUDIENCE APPLAUSE)<\/p>\n<p>ANITA ANAND: Stuart, thank you very much indeed. Before we open this up to the audience at the Alan Turing Institute, you touched on this chat we had beforehand about whether people agree with you. Can we drill down into that a bit more because you\u2019re based at Berkeley, Silicon Valley is a stone\u2019s throw away.<\/p>\n<p>STUART RUSSELL: Yes.<\/p>\n<p>ANITA ANAND: The majority of people who work in your field, do they regard you as a sage, a Cassandra? I suppose what I\u2019m asking, are you a bit of a Billy No-Mates in Silicon Valley?<\/p>\n<p>STUART RUSSELL: One response is quite understandable, which is I am a machine-learning researcher working at the coalface of AI. It\u2019s really difficult to get my machines to do anything. Just leave me alone and let me make progress on solving the problem that my boss asked me to solve. Stop talking about the future. But the problem is, this is just a slippery slope. If you keep doing that, as<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>12<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 13\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>happened with the climate, I\u2019m sure the people who produce petrol are saying, \u201cJust leave me alone. People need to drive. I\u2019m making petrol for them,\u201d but that\u2019s a slippery slope.<\/p>\n<p>I do think that there is a sea change in the younger generation of researchers. 
Five years ago, I would say most people going into machine learning had dollar signs in their eyes, but now they really want to make the world a better place.<\/p>\n<p>ANITA ANAND: Is that sea change enough if we carry on down this slope? You mentioned Chernobyl, I wonder whether you\u2019d go as far as to say that there needs to be a Chernobyl-type event in AI before everyone listens to you?<\/p>\n<p>STUART RUSSELL: Well, I think what\u2019s happening in social media is already worse than Chernobyl. It has caused a huge amount of dislocation.<\/p>\n<p>ANITA ANAND: Well, if that\u2019s a little bit to chew on, let us chew on it now. Let\u2019s take some questions from the floor.<\/p>\n<p>CLAIRE FOSTER-GILBERT: Claire Foster-Gilbert from Westminster Abbey Institute. Thank you very much indeed for your lecture. I wanted to ask you if you had any wisdom to share with us on the kinds of people we should try and be ourselves as we deal with, work with, direct, live with AI?<\/p>\n<p>STUART RUSSELL: I\u2019m not sure I have any wisdom on any topic, and that\u2019s an incredibly interesting question that I\u2019ve not heard before. I\u2019m going to give a little preview of what I\u2019m going to say in the later lecture. The process that we need to have happen is that there\u2019s a flow of information from humans to machines about what those humans want the future to be like, and I think introspection on those preferences that we have for the future would be extremely valuable. So many of our preferences are unstated because we all share them.<\/p>\n<p>For example, a machine might decide, okay, I\u2019ve got this way of fixing the carbon dioxide concentration in the atmosphere to help with the climate, but it changes the colour of the sky to a sort of dirty, green ochre colour. Is that all right? Well, most of us have never thought about our preferences for the colour of the sky because we like the blue sky that we have. 
We don\u2019t make these preferences explicit because we all share them and also because we don\u2019t expect that aspect of the world to be changed, but introspecting on what makes a good future for us, our families and the world, and noticing, I think, that actually we all share far more than we disagree on about what that future should be like would be extremely valuable.<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>13<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 14\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>I notice that there is now a really active intellectual movement, or even a set of intellectual movements, around trying to make explicit what does human wellbeing mean? What is a good life? And I think it\u2019s just in time because for almost all of history in almost all parts of the world the main thing has been how do we not die, and if things go well, that time comes to an end and we actually have a breathing space then if we\u2019re not faced with imminent death, starvation, whatever, we have a breathing space to think about what should the future be. We finally have a choice, and we haven\u2019t really yet had enough discussion about that.<\/p>\n<p>So that\u2019s what I would like everyone to do. If we have a choice, what should the future look like? If you could choose, if you weren\u2019t constrained by history or by resources, what would it be?<\/p>\n<p>ANITA ANAND: Let us take a question from this side?<\/p>\n<p>PAUL INGRAM: Paul Ingram soon to start at the Cambridge University Centre for the Study of Existential Risk. Stuart, I wanted to invite you to draw a comparison with another existential risk that you mentioned in your lecture, namely the emergence of splitting of the atom and the potential for nuclear war and the Cold War. 
We managed to survive, although looking back that was more luck than judgment, do you draw any analogies for the emergence of artificial intelligence?<\/p>\n<p>STUART RUSSELL: I think it is an absolutely fascinating subject. What happened after Leo Szilard had this inspiration, he actually was crossing at the traffic light at Southampton Row, and I tried walking backwards and forwards across that crossing and I haven\u2019t had any inspiration at all.<\/p>\n<p>He realised very soon that this was a bad time to have had this discovery because there was already the beginnings of an arms race with Nazi Germany. He was a refugee. And he figured out how to make a nuclear reactor with all of its feedback control systems to keep the subcritical reaction going. He patented that in 1934 but he kept the patent secret because he did not want it to fall into the wrong hands, but fairly soon the Germans also figured this out.<\/p>\n<p>Otto Hahn, Lise Meitner, were German physicists who were, I think, the first to actually demonstrate a fission reaction, and when it happened in the US, I think Szilard and Zinn were able to get a fission reaction to happen in their lab, and he went home and wrote in his diary:<\/p>\n<p>\u201cTonight I felt that the world was headed for grief.\u201d<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>14<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 15\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>I think we have been incredibly lucky not to have suffered nuclear annihilation, and after the war the United States had a window of complete power and they set up the International Atomic Energy Agency and very strict standards for developing peacetime nuclear power, and that enabled the sharing of designs because we could be sure that the design safety rules would be followed, inspection and regulation and so on, and I think there\u2019s a lot of lessons in all of those phases for how we 
think about AI and a key is not to think of it as an arms race. That\u2019s what we\u2019re doing right now. We have Putin, we have US Presidents, Chinese Secretaries, talking about this as if, \u201cWe are going to win. We\u2019re going to use AI and that will enable us to rule the world,\u201d and I think that\u2019s a huge mistake.<\/p>\n<p>One is that it causes us to cut corners. If you\u2019re in a race, then safety is the last thing on your mind. You want to get there first and so you don\u2019t worry about making it safe. But the other is that general purpose or super-intelligent AI would be, essentially, an unlimited source of wealth and arguing over who has it would be like arguing over who has a digital copy of the Daily Telegraph or something, right? If I have a digital copy, it doesn\u2019t prevent other people from having digital copies and it doesn\u2019t matter how many I have, it doesn\u2019t do me a lot of good.<\/p>\n<p>So, I think we\u2019re seeing, on the corporate side, actually a willingness to share super-intelligent AI technology, if and when it\u2019s developed, on a global scale, and I think that\u2019s a really good development. We just have to get the governments on board with that principle.<\/p>\n<p>ANITA ANAND: Thank you very much. We have many hands going up but actually, my eye has been caught by one of the fathers of the World Wide Web, the father of the World Wide Web, Tim Berners-Lee is with us, and I hope you don\u2019t mind, I\u2019m just sort of zeroing in on you. Are you optimistic or pessimistic when it comes to the future of AI?<\/p>\n<p>TIM BERNERS-LEE: I am hopeful about the power of it, but I think all of these concerns are very real. 
When things go wrong in terms of social network, the sort of same tipping point happens when people end up getting polarised and afterwards we take the pieces apart, but there are lots of other systems in the world where the world is very connected and some of them are in government, and some of them are in big companies. Some are in, for example, investment companies.<\/p>\n<p>If you\u2019re a fast trader, for example, humans need not apply because you have to be too fast. So, we\u2019ve already got some jobs, and a lot of jobs in banks,<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>15<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 16\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>you have to be fast and so therefore it has to be run by AI already. Could it be that we get AI suddenly much more quickly if we build competitive AI systems?<\/p>\n<p>STUART RUSSELL: It might. I would have to say that the whole field of evolutionary computation has been a field full of optimism for a long time. The idea that you could use nature\u2019s amazing process of evolving creatures to instead evolve algorithms hasn\u2019t really paid off yet. The drawback of doing things that way is that you have absolutely no idea how it works and creating very powerful machines that work on principles you don\u2019t understand at all seems to be pretty much the worst way of going about this.<\/p>\n<p>TABITHA GOLDSTAUB: Hello, Stuart. Thank you. I\u2019m Tabitha, the Chair of the government\u2019s AI Council. I can\u2019t help but ask, what should we be teaching in school?<\/p>\n<p>STUART RUSSELL: I mean, not everyone needs to understand how AI works any more than I need to understand how my car engine works in order to drive it. 
They should understand what AI can and cannot do presently, and I hope they will understand the need to make sure that when AI is deployed, it\u2019s deployed in a way that\u2019s actually beneficial.<\/p>\n<p>This is the big change, right, to think not just about how can I get a machine to do X, but what happens when I put a machine that does X into society, into schools, into hospitals, into companies, into governments, what happens, and there\u2019s really not much of a discipline answering that question right now.<\/p>\n<p>ANITA ANAND: Let\u2019s take some more?<\/p>\n<p>STEPHANIE: Hi, Professor Russell. This is Stephanie here. I\u2019m interested in your views on the role of regulation for artificial intelligence and how we get the balance right between regulating and not constraining innovation, particularly if we do that in a liberal democracy and other countries around the world that are not liberal democracies do not regulate? Thank you.<\/p>\n<p>STUART RUSSELL: I think it depends what you\u2019re talking about regulating. I think there are things that we should regulate now, and I\u2019m happy to say that the EU is in the process of doing that, such as the impersonation of human beings by machines. So, that could be, for example, a phone call that you get that sounds exactly like your husband or your wife or one of your children, asking you to send some money or they\u2019ve forgotten the password for your account or whatever it might be, that\u2019s quite feasible to do now. 
But generally, a machine impersonating a human is a lie and I don\u2019t see why we should authorise lies for<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>16<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 17\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>commercial purposes, and I\u2019m happy to say the EU is explicitly banning that in the new legislation, and that should be something that is a global agreement.<\/p>\n<p>There are other things we should be very restrictive of, such as deep fakes, material that convinces people that some event happened that didn\u2019t actually happen, but the question of safety, how we regulate to make sure that AI systems don\u2019t produce disastrous outcomes where humanity loses control, we don\u2019t know how to write that rule yet.<\/p>\n<p>ANITA ANAND: One of the phrases you used when you were doing your lecture was, \u201cGood luck with that.\u201d I mean, we can\u2019t get people to agree on most things, how are you going to agree a framework for this?<\/p>\n<p>STUART RUSSELL: When it\u2019s in their self-interest, right, so everyone agrees on TCP\/IP, which is the protocol that allows machines to communicate on the internet, because if they don\u2019t agree with that the machine at the other end doesn\u2019t understand them and so you can\u2019t send your message. So, everyone agrees on that protocol because it works. Same with wi-fi and standards for cell phones and all this stuff, so there\u2019s a huge process. 
It\u2019s invisible to almost everybody but there are giant committees and annual meetings that go on and on and on, and they argue about the most tiny details of all these standards until they hammer it all out and then that standard is incredibly beneficial.<\/p>\n<p>So, we could do the same thing for how you design AI systems to ensure that they are actually beneficial to humans, but we\u2019re not ready to say what the standards should be.<\/p>\n<p>ANITA ANAND: There is one here?<\/p>\n<p>JANE BONHAM CARTER: Jane Bonham Carter. I\u2019m a Liberal Democrat politician but I, for years, worked in television and when TV\/radio came along, and that was an intrusion into people\u2019s lives in a way that had never existed before, but it covered the ground of what I think you were talking about, which is what people shared. So, can AI not be directed towards a more benign curation, I suppose, is my question?<\/p>\n<p>STUART RUSSELL: Absolutely, and as I said, I think some of the social media companies are genuinely interested. I don\u2019t think it\u2019s just a window dressing or a self-washing or anything, it\u2019s that they are genuinely interested in how their products, which are incredibly powerful, how they can be actually beneficial to people. 
I can\u2019t say very much at the moment but we, among others, are developing research relationships and we\u2019re finding openness and willingness<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>17<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 18\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>to share data and algorithms so that we can actually understand how to do this right.<\/p>\n<p>It actually turns out to be one of the most difficult questions because if you think about driving, for example, it\u2019s difficult but probably not impossible to figure out how we should trade off safety versus getting to your destination, versus politeness to other drivers and so on, but what the algorithms are doing is actually changing our preferences, so it\u2019s changing what we want.<\/p>\n<p>The person who first ventures into social media, having never touched it before, might be horrified by the person that they have become 12 months later. But the person 12 months later isn\u2019t horrified by themselves, right, they are actually really happy that they\u2019re now a diehard ecoterrorist and they\u2019re out there doing this, that and the other, and we don\u2019t even have a basic philosophical understanding of how to make decisions on behalf of someone who\u2019s going to be different when those decisions have impact. Do I help the person achieve what they want now, or do I help the person achieve what they\u2019re going to want when I achieve it?<\/p>\n<p>It\u2019s a puzzle and philosophers have started writing about it, but we just don\u2019t have an answer and so this manipulation of human preferences by social media algorithms is actually getting at the hardest thing to understand in the AI problem, as far as I can see.<\/p>\n<p>ANITA ANAND: Let\u2019s take another question here from this row?<\/p>\n<p>STEVE McMANUS: Hi, my name\u2019s Steve McManus, a lifelong NHS employee. 
Arguably you are one of the thought leaders in this field, given also some of the other members of the audience we\u2019ve got here today; where do you draw your thought leadership on this subject?<\/p>\n<p>STUART RUSSELL: Another good question. I have found, actually, reading outside of my field, reading outside AI, in economics, particularly philosophy, has been enormously useful, although economics has this \u2013 it\u2019s called \u201cthe dismal science,\u201d I think that\u2019s a bit unfair. It\u2019s a very hard problem. It\u2019s, in many ways, a lot harder than physics and chemistry, but economists actually do try to think about this question: How should the world be arranged?<\/p>\n<p>I was really shocked going back to read Adam Smith, who\u2019s widely reviled as \u201cThe Apostle of Greed,\u201d and so on and so forth, but actually what Adam Smith says at the beginning of his first book is that:<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>18<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 19\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>\u201cIt\u2019s so obvious to everyone that each of us cares deeply about other people that it hardly merits saying it, but I\u2019m going to say it anyway,\u201d and then he says it.<\/p>\n<p>That\u2019s the beginning of his first book. So, I\u2019ve learned a great deal from economists, from philosophers, trying to understand a question that AI is now going to have to answer.<\/p>\n<p>If AI systems are going to be making decisions on behalf of the human race, what does that mean? How do you tell whether a decision is a good or a bad decision when it\u2019s being made on behalf of the human race, and that\u2019s something that philosophers have grappled with for thousands of years?<\/p>\n<p>ROLY KEATING: Roly Keating from British Library. It\u2019s wonderful to have you here. Thank you for the lecture. 
I was interested in the language and vocabulary of human intellectual life that seems to run around AI, and I\u2019m hearing data gathering, pattern recognition, knowledge, even problem solving, but I think an earlier question used the word \u201cwisdom,\u201d which I\u2019ve not heard so much around this debate, and I suppose I\u2019m trying to get a sense of where you feel that fits into the equation. Is AI going to help us as a species gradually become wiser or is wisdom exactly the thing that we have to keep a monopoly on? Is that a purely human characteristic, do you think?<\/p>\n<p>STUART RUSSELL: Or the third possibility would be that AI helps us achieve wisdom without actually acquiring wisdom of its own, and I think, for example, my children have helped me acquire wisdom without necessarily having wisdom of their own. They certainly help me achieve humility. So, AI could help, actually, by asking the questions, right, because in some ways AI needs us to be explicit about what we think the future should be, that just the process of that interrogation could bring some wisdom to us.<\/p>\n<p>ANITA ANAND: And the final question, apologies if we didn\u2019t get to you, so many fantastic questions, but the final one with you?<\/p>\n<p>GILA SACKS: Hi. Gila Sacks. It seems that one of the most scary things about this future is that if individuals feel powerless in the face of machines and corporations, it will be a self-fulfilling prophecy, we will be powerless. So, how can individuals have power in the future that you see playing out, either as consumers or as citizens?<\/p>\n<p>STUART RUSSELL: I wish that the entire information technology industry had a different structure. If you take your phone out and look at it, there are 50 or a hundred corporate representatives sitting in your pocket busily sucking out as much money and knowledge and data as they can. 
None of the things on your phone really represent your interests at all.<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>19<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 20\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>What should happen is that there\u2019s one app on your phone that represents you that negotiates with the information suppliers, and the travel agencies and whatever else, on your behalf, only giving the information that\u2019s absolutely necessary and even insisting that the information be given back, that transactions be completely oblivious, that the other party retains no record whatsoever of the transaction, whether it\u2019s a search engine query or a purchase or anything else.<\/p>\n<p>This is technologically feasible but the way the market has evolved, it\u2019s completely the other way around. As individuals, you\u2019re right, we have no power. You have to sign a 38-page legal agreement to breathe and that, I really think, needs to change and the people who are responsible for making that change are the regulators.<\/p>\n<p>Just to give a simple example, right, it\u2019s a federal crime in the US to make a phone call, a robocall, to someone who is on the federal Do Not Call list. I am on the federal Do Not Call List. I get 15 or 20 phone calls a day from robocalls. When you add that up, that is billions of crimes a day or trillions of crimes every year, trillions of federal crimes occurring, and there hasn\u2019t been a single prosecution, as far as I know, this whole year. I think there was one last year where they took one group down, but there is a total failure. We are in the wild west and there isn\u2019t a Sheriff in sight. So, as individuals, ask your representatives to do something about it.<\/p>\n<p>We are also responsible, the technologists are also responsible, because we developed the internet in a very benign mindset. 
I can remember, when I was a computer scientist at Stanford, we could actually map our screens to anybody else\u2019s screen in the building and see what was on their screen. We thought that was cool, right? It just never occurred to anyone that that might be not totally desirable. We built technology with just open doors and completely fictitious IDs and all the rest of it. I think on the technology side, allowing real authentication of individuals, traceability, responsibility, and then regulations with teeth, would help a great deal.<\/p>\n<p>ANITA ANAND: Well, with thoughts of teeth, robocalls and crowded pockets, I\u2019m afraid we\u2019re going to have to leave it there. Next time Stuart is going to be asking: What AI means for conflict and war. That is from Manchester, but for now a big thanks to our audience, to the Alan Turing Institute for hosting us, and, of course, to our Reith Lecturer, Stuart Russell.<\/p>\n<p>(AUDIENCE APPLAUSE)<\/p>\n<\/div>\n<\/div>\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>20<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 21\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>END OF TRANSCRIPT<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The Biggest Event In Human History, BBC Radio 4, 1 December 2021 &nbsp; Here we go again. If this doesn&#8217;t get your attention, what will? The Biggest Event In Human History. That&#8217;s the title of the most recent Reith Lecture on the BBC World Service. 
It&#8217;s the first lecture in a series titled, Living with [&hellip;]<\/p>\n","protected":false},"author":1001004,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[55],"tags":[],"_links":{"self":[{"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/posts\/12840"}],"collection":[{"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/users\/1001004"}],"replies":[{"embeddable":true,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=12840"}],"version-history":[{"count":7,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/posts\/12840\/revisions"}],"predecessor-version":[{"id":12900,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=\/wp\/v2\/posts\/12840\/revisions\/12900"}],"wp:attachment":[{"href":"https:\/\/worldcampaign.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=12840"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=12840"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/worldcampaign.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=12840"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}