Is halting AI development the right aim for Europe’s AI policy?

Background Discussion

ChatGPT has turned Europe's efforts to regulate Artificial Intelligence on their head, forcing the EU to rethink the European AI Act – the world's first comprehensive AI legislation. The perceived omission of general-purpose AI (GPAI) from the legislation has drawn particular criticism, as GPAI applications such as ChatGPT have been at the center of recent discussions. One example of this ongoing discussion is a statement (LinkedIn) published by leading MEPs working on the European Parliament's position on the European AI Act, advocating – among other things – for a global AI summit.

The MEPs also react to a recent open letter, signed by more than 1,000 AI experts, technologists, and business leaders, calling for a pause in advanced GPAI development. The pressure the signatories put on policymakers sparked both criticism and support. How should Europe deal with potentially powerful and risky AI models? And how could Europe turn the abstract idea of an AI pause into practical policy?

Pegah Maham, SNV's Project Lead Artificial Intelligence, discussed these and other questions on Thursday, April 27 at 17:00 CET with Max Tegmark, President of the Future of Life Institute, co-initiator of the letter, and Professor at MIT. During the one-hour online event, guests were invited to ask questions and be part of the discussion.

Please find a video recording and a transcript of the policy debate below.

Pegah Maham: Hello everyone, and welcome to this SNV online panel discussion on how governments can deal with high-risk large AI models. My name is Pegah Maham, and I will be moderating this discussion this evening, which for our guests in the US is this morning or the middle of the day. Before introducing our guest and today's topic, let me say a few words about Stiftung Neue Verantwortung. Stiftung Neue Verantwortung, SNV, is an independent, nonprofit think tank based here in Berlin, working at the intersection of technology and public policy, and within our think tank, I lead the artificial intelligence and data science unit.

Thank you so much for joining us today, everyone. Quickly, about the format: we have two halves. In the first 30 minutes, I'm having a conversation with our guest, and in the second half, we have a Q&A with you, the audience. You can ask questions during the conversation by writing them into the chat and by upvoting questions for the second half of the event. Please note, today's event will be recorded and published later on our website, but the questions you write will not be published. And for data protection reasons, I will not refer to your screen names.

So, let's go into today's topic. ChatGPT has turned Europe's efforts to regulate artificial intelligence on its head, forcing the EU to rethink the European AI Act, the world's first comprehensive AI legislation. A couple of weeks ago, The Future of Life Institute published an open letter called Pause Giant AI Experiments, calling on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. It was signed by thousands of AI experts, technologists, and business leaders, including Professor Yoshua Bengio and Professor Stuart Russell.

Now, in reaction to this open letter, a group of members of the European Parliament working on the AI Act have written an open letter themselves. In it, they write that they are determined to provide a set of rules to steer the development of very powerful artificial intelligence in a direction that is human-centric, safe, and trustworthy. Moreover, they write, AI is moving very fast and we need to move too. The call from the Future of Life Institute to pause the development of very powerful AI for half a year, although in their view unnecessary alarmism, is another signal that we need to focus serious political attention on this issue.

Now, according to recent reporting and the latest leak, the European Parliament is set to propose stricter rules for so-called foundation models, including ChatGPT, under the AI Act. They want the providers of foundation models to comply with a series of requirements. This includes testing and mitigating reasonably foreseeable risks to health, safety, fundamental rights, the environment, and democracy, with the involvement of independent experts. The remaining non-mitigable risks, and why they were not addressed, should be documented. And according to this leak, the requirements for foundation models would apply regardless of the distribution channels, development methods, or training data.

Now, before jumping into our first question, let me quickly introduce our guest. Max Tegmark is a Professor of Physics at MIT and President of the Future of Life Institute, the organization that initiated the open letter. He's also the author of the book, Life 3.0: Being Human in the Age of Artificial Intelligence. Max, thanks for being here.

Max Tegmark: Thank you so much. It's an honor and a pleasure. I just want to clarify, since you introduced me as a physicist: for the last seven years, my MIT research has been very focused on artificial intelligence, which explains why I'm here.

Maham: All right, that's even better then. All right, let's just jump ahead with the question about the open letter. It has produced significant media attention, and it has spurred debates. Can you explain your motivation behind it? Did you expect the reactions that you have received?

Tegmark: The reaction has been quite overwhelming, I admit. It feels like the open letter struck a nerve. There were a lot of people out there who wanted to discuss the possibility of slowing down but felt afraid to do so out of fear of being branded a Luddite or a scaremonger. Having people like Stuart Russell and Yoshua Bengio, who pioneered the very deep learning that's powering GPT-4, sign this has, I think, mainstreamed the idea of slowing down, so people on all sides feel safe having the conversation. I'm very happy about that, because this is a conversation that needs to happen as a first step towards making sure we steer this tech in good directions, not bad directions.

I love technology; my university, the T in MIT, even stands for Technology. I see the path to success as winning the race between the growing power of the tech and the growing wisdom with which we manage it. For many years, I kept saying, don't slow down the growth of the power, just accelerate the growth of the wisdom. But what's happened, honestly, is that the power of the tech has grown a lot faster than most people expected, while the wisdom, especially on the policy side, has grown even more slowly than we hoped.

M: We’re trying to catch up today.

T: A little pause, I think, is exactly what's needed for a vibrant conversation to help the wisdom catch up. In short, if I can summarize the whole thing in just one sentence: we need to make the whole AI sector more like the biotech sector. Nobody in their right mind would suggest that a company should be able to launch a new vaccine in Germany without it first having been safety-tested and approved. Yet on the AI side, we're in this total vacuum, where people just release whatever they want and haven't broken any rules.

M: Yeah. Before getting into the policy recommendations in a bit, a question about the open letter. You mention various risks in the open letter, including misinformation and job replacement, but also some more controversial ones. For example, in this letter, you ask, "Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization?" Now, these kinds of scenarios are hard to imagine; there is no previous example as a reference point. So many people think that this is futuristic, or, as the letter of the members of the European Parliament puts it, unnecessary alarmism. You seem to think it is a realistic scenario. Can you explain why?

T: Absolutely. You know, it's natural that people who are not working on the tech themselves often think it's much more futuristic than it is. For example, one of the most famous physicists in the world said that the idea of getting nuclear energy out of atoms was just moonshine. The very next day, Leo Szilard invented the nuclear chain reaction, which gave us nuclear weapons and nuclear reactors. You see the same thing happening today: when I speak with the leaders of OpenAI, DeepMind, Anthropic, and so on, they get it. They realize that this is going to be the biggest technology to ever affect humanity. It's just that they can't stop alone, because of the competition. But when you talk to people further afield who don't work on this, they're like, "Ah, this is just crazy. It's 50 years away."

Even a lot of people in the AI field thought that the original goal of AI, building AI that can outperform humans on all tasks and all jobs, was probably 30, 40, or 50 years away. It's just that in the last decade, it turned out to be much easier than we thought to crack, for example, how we do language. It's a lot like the invention of flight: many people thought we were never going to be able to build machines that could fly better than birds until we figured out how birds work. They turned out to be wrong; there was a much easier way to build airplanes, while understanding our brains is very complicated. GPT-4 today can already write better than most people on Earth, and it does better on the bar exam for law school than, I think, about 90% of the applicants.

So this future where we get outsmarted by machines on more or less all important tasks is very near; it's not long-term at all. Some people think it's almost here. It's certainly not too soon, then, to start having the conversation. I think people also misunderstand this a lot and think it's just a bunch of doom and gloom: "Ah, we're all going to die." That's not the point of this. This is not a purely negative technology, like either we have a nuclear war or we don't. This is something with an enormous upside too, if we can get it right, because we all have a friend who died from some disease that we were told was incurable. Those diseases aren't incurable; we just weren't smart enough to figure out how to cure them yet. If we can amplify our intelligence with artificial intelligence and use it to solve all these challenges which have stumped us, we have this incredible potential for humanity to flourish like never before. What motivates me the most to warn about the risks is exactly so we don't squander all this upside.

M: Do I understand you correctly, though: you say right now we have models that can master language or write better than us, but there are, I guess, still some steps from that to losing control over our civilization, and there's a lot of uncertainty involved in the steps from where we are to getting there. So, as you say, maybe pause and have a conversation, but we can't be sure about anything right now.

T: Exactly, exactly. Nobody is saying-- If somebody says they're 100% sure of something in this space, don't trust them, they're exaggerating. And that goes both ways. There was a recent survey where, roughly speaking, half of all AI researchers, with big uncertainties, said that there's a 10% chance that AI will cause human extinction, so we will all die. But they didn't say it was going to happen. They said there's at least a 10% chance it's going to happen. That's a big difference; that's less than half. But if you're about to board a Lufthansa flight and you find out that half the engineers who built the plane think there's a 10% chance it's going to crash, it's still worth having the conversation about whether you should board the plane or not, even with the uncertainty, right? I think it's clear that nobody can say for sure that we're not going to lose control of this. You're 100% right, we're not worried about GPT-4 wiping us out; what we're worried about is what comes after it, and that could be quite soon after it. That is what we could lose control over.

First, we might lose control of our democracy to whatever tech company controls this. It will be very hard for other tech companies to compete with whatever tech company first reaches this future milestone of artificial general intelligence that can do everything as well as humans, because who can compete with billions of virtual employees that don't need any salary except a little bit of electricity, right? You might end up with the biggest monopoly the world has ever known, and that's a big risk for democracy. We saw how much even far less powerful AI has already messed up our democracy, with social media filter bubbles, so that we now have people hating each other much more than before. In the country I'm in right now, the US, my American countrymen can't even agree on who won the last election, largely because of AI.

So that's the first big threat. And then, shortly thereafter, we would have to share the planet with all these "intelligent minds" that are smarter than us, and that can be quite inconvenient. Look what happened to the woolly mammoth when they had to share the planet with us. That didn't work out so well for them. The basic reason for that is that intelligence, of course, gives power. We humans are the dominant species on this planet not because we have bigger muscles than the tigers or sharper teeth, but because we're smarter. And if we, without any kind of planning, just build all these machines that are way smarter than us and let anyone who wants, with whatever weird motives, do whatever they want with them, there is certainly a risk that we're going to lose control.

M: Let's maybe dig into this first point you made about the loss of democracy and the centralization of power in very few companies, which might then have a huge workforce in their company. In the UK, there is currently an AI task force being set up, equipped with 100 million pounds, and a data center for 900 million pounds is being built, with this idea of, "Okay, let's democratize building safe AI, say, foundation models." What do you think about this approach?

T: On one hand, I think it's very positive that governments start paying a lot of attention to this and put a lot of resources into really making sure that democracies are involved in steering it in the right direction. On the other hand, it's important to remember that simply making everything public and open source is not necessarily going to make things better. Would you like to have all nuclear weapons research open source, for example? Probably not. Would you like to have all synthetic biology research open source, where people do gain-of-function research and invent, on the computer with AI technology, new pathogens that could kill certain ethnic groups? Should that be open source? I don't think so.

I think it's important to take this down a notch, so people don't think that this is something so different from things we've faced before. Biologists have already dealt with this, right? They decided to have biosafety labs; there are certain things you publish and certain things you don't. It's the same in the nuclear industry. AI just has to grow up and become like these other fields that do powerful tech.

M: I guess it's just really hard. On the other side, you have, as you say, all this upside; businesses that want to innovate and pro-innovation forces try to fight regulation. So it's not as easy, I think, with a technology that's so general. But with this, I want to come back to the question of a governmental or public-private partnership. Let's say the models are not open source, and they still build these foundation models. Do you think this is a safe way to go ahead? Or do you think it should be stopped?

T: I think that's very good. Some people have misinterpreted the letter as saying, "We should just stop everything." That's not at all what it said. It just said that we should pause for six months the very, very riskiest stuff: building models where we don't understand what their emergent behavior is going to be. It's just a little bit, and during this pause, regulations should be developed so that companies get clarity. That's what industry usually wants from regulation: clarity, so the playing field is level. These are the safety requirements that have to be met. If someone wanted to build a nuclear reactor on Alexanderplatz now, they wouldn't be allowed to do it without first talking to the authorities, saying, "Here's what we're planning to do," and then demonstrating why it's going to be safe. This is how it should be in AI as well, and those safety requirements should not just apply to private industry; they should apply to any British government initiative as well.

M: Yeah, let's do this. Let's talk about exactly what the policy rules could look like. While nowadays most of the relevant foundation models are built outside the EU, the AI Act could, through the Brussels effect, affect their research and development once they want to have products deployed on the EU market. Now, the FLI has, in addition to the open letter, published a set of seven policy recommendations, and I would like to discuss some of them with you here. The first one: I mentioned earlier that the leak shows the Parliament is taking this up now, the mandate of involving independent experts, and your policy paper also recommends mandated third-party auditing and certification. Can you explain this, and why do you think it's a good idea?

T: Yeah, this is exactly inspired by the biotech industry and the nuclear industry. Again, suppose I said to you that the one who should decide whether a vaccine is safe to release should be the company that makes it. How would you feel about that? No. Would you feel better if it were an independent entity, like the German health ministry or the European Union? Which would you prefer?

M: Yeah, I guess the scrutiny feels higher with external audits.

T: This is quite universal for all regulation, right: you separate the regulator from the regulated party, and you have external experts looking at things simply to avoid conflicts of interest. If the person overseeing something has a financial interest in approving it, that's just a recipe for trouble, right? There are a lot of great experts in academia who will be very happy to help with this. And they will also be very happy to work with policymakers to make sure that the regulations are regulating the right things.

I think it's worth mentioning here also that lobbyists often say that regulation is always bad for innovation and that therefore they shouldn't be regulated. That's, frankly, just a cheap talking point that lobbyists use to avoid regulation; there's not much substance to it. The auto industry used to be very much against seatbelts, for example, and said this was going to reduce innovation and destroy the market. But do you know what actually happened when Germany and Europe introduced the seatbelt law? Do you know what happened?

M: People wanted those cars.

T: Yeah, and actually, car sales kind of exploded after that. Because when far fewer people died, people started to think of cars as safe, and there was much more appetite to buy them. Good regulation helps industry; it helps innovation.

M: Why does it need regulation then? If it helps them already, why does it need to be enforced by law? Why wouldn't they do it automatically?

T: Well, you tell me, why didn't they have seatbelts earlier? I think in this case I can answer it very clearly: if there were only one company putting seatbelts in their cars, people wouldn't buy them, because it's annoying. The positive effect that caused the win was that everybody did it. And then there were just far fewer people who died, and everybody started to realize, "Driving can be quite safe," because there were fewer accidents, far fewer deaths, right? So people started to change their attitude towards the whole field. And that only works if it happens for everyone.

Another example that makes the same point is the civilian nuclear power industry, right? Germany is shutting down its last nuclear reactors around now, I believe, is that right? Why is that? What was the cause of this, when everybody used to be so excited about the nuclear industry? It was caused by bad regulation and regulatory capture, which led to Three Mile Island and Fukushima, and so on, right? Fukushima would have been avoided with better regulation, because it was an idiotic idea to put the backup generators in the basement, which is exactly where the water goes after a tsunami. But because there was too little regulation, it happened. And it kind of destroyed the civilian nuclear power industry. The industry would have been much better off if it had embraced a bit more regulation, because that would have prevented the tech lash, prevented the backlash. And I love AI. It will be so tragic, I think, if this extreme reluctance to be regulated causes a massive backlash against the AI industry and squanders all these great opportunities, the way it happened with nuclear.

M: Although I guess maybe a backlash, for now, will be something that would feel reassuring to you, given the speed of AI progress. All right, I want to go to the next point, another recommendation.

T: Let me say one more thing, because you asked a very good question: if regulation is so good for safety, why doesn't one company just do it alone? The nuclear example illustrates that. There were no accidents of that magnitude in Germany, but that didn't matter. As long as someone somewhere screws up, it ruins the German civilian nuclear industry too. We want to make sure nobody has a massive screw-up with AI.

M: Yeah, so within this competition, they all want some rules so that they don't have a disadvantage if they follow a slower, more safety-focused approach.

T: Exactly.

M: All right. The next point is a bit different; it's not even part of regulation per se. It's about regulating access to computational power. Can you explain what computational power means in this context?

T: Yeah. So there are three inputs to making really strong artificial intelligence. One is the data that you train on: you can read the whole internet, look at pictures from the whole internet, and things like that. The second one is talented people who go into it. The third is computational power: having massive amounts of GPUs, or whatever technology is used to run these machine learning algorithms and extract the knowledge from the data, is what makes the AI so capable. Whenever you want to regulate something, you want to look at how easy it is to regulate the various things. With nuclear weapons, for example, it was very easy to regulate the plutonium and the uranium, because that was the hardest thing to get hold of.

With AI, regulating the data that goes into the training is almost impossible, because you can just copy it and fit it all in your pocket on a very small drive. Regulating people is also much harder and feels kind of Orwellian, if every AI researcher has to register with the government and so on. I hope we don't have to go there.

So the easiest thing to regulate is computational power. Although you don't need anything very crazy in terms of computational power just to run these models like GPT-4, that's kind of easy, training them is mind-blowingly expensive. It might cost $100 million or $250 million; it might use 6 million watts or something. You can even see from space the heat produced by these giant server farms. And there are very few players who can do that; it's not something some random terrorist can easily do in their basement. That's why we feel that's the most promising way for the government to get insight into what's happening. You start with insight: before you worry about oversight and regulation, it's good for governments to just know a bit about what's going on in that space. It's quite easy.

This is, again, not something we need to reinvent. If you're just a normal, law-abiding German citizen and you decide you want to buy 1 million machine guns, or 30 tons of plastic explosives, that's such a large amount that some red flags will go off, because there are surveillance systems that look into this; the companies that sell these products have a know-your-customer regime and so on. Having something similar with the few companies that sell GPUs and so on is quite easy to implement. That way the regulators won't be taken by surprise.
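To make the compute-insight idea concrete, here is a minimal Python sketch of the kind of back-of-the-envelope check such a regime could rest on. The rule of thumb of roughly 6 FLOPs per parameter per training token is a standard estimate; the specific reporting threshold used below is a hypothetical value chosen purely for illustration, not a figure proposed in this discussion.

# Hypothetical illustration: estimate the training compute of a large model and
# flag it if it crosses a reporting threshold, the kind of "red flag" a
# know-your-customer regime for computational power could raise.

def training_flops(parameters: float, training_tokens: float) -> float:
    """Rule of thumb: roughly 6 FLOPs per parameter per training token."""
    return 6 * parameters * training_tokens

# Illustrative threshold only; not an official or proposed figure.
REPORTING_THRESHOLD_FLOPS = 1e25

def triggers_red_flag(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the reporting threshold."""
    return training_flops(parameters, training_tokens) >= REPORTING_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Example: a 100-billion-parameter model trained on 2 trillion tokens.
    params, tokens = 1e11, 2e12
    print(f"Estimated training compute: {training_flops(params, tokens):.2e} FLOPs")
    print("Triggers reporting red flag:", triggers_red_flag(params, tokens))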

M: Yeah, it's interesting that you say regulators could be taken by surprise, or that governments should have insight into these things. There are discussions about national AI agencies, and you also say they should be established. What would make them capable? What's important here?

T: What's important is that they don't just have people with policy backgrounds working in them, but also actual AI researchers. You're lucky in that all the countries we're talking about have fantastic AI researchers in them, and these are usually very idealistic people who are working on AI because they believe in all the upside. I think many of them would be quite willing to dedicate some of their time to help their governments and regulators. So get the expertise, the local AI expertise, into these agencies, and particularly expertise from universities, which is not biased in favor of any particular company. That way you're going to end up with really smart regulation, which doesn't stifle innovation but keeps things safe.

M: Yeah, very interesting. I'm curious to see which of these things will be implemented and in what way, and I could continue this conversation for quite a while. Unfortunately, the first half is already over. I see we have quite a few interesting questions in the chat from the audience. One of the highest-voted questions is: what are the most important governance approaches or paradigms to make AI governable? What should we focus on during the six months?

T: That's a great question. You already touched on parts of this. At a very high level, the first thing to do is: don't reinvent the wheel here. Look at what has been done so successfully in biotech, nuclear tech, aerospace, and other sectors where the technology is really powerful. See what worked well there and what didn't, take the successes, and translate them into the AI space. You'll see that all of the things you just asked me about, like how you get insight, have been done well in those areas.

Another thing that should happen during the pause, and should start immediately, is that the government shouldn't just go off into a corner by itself, ignoring the companies and leaving them entirely out of this discussion. Obviously, the regulations should not be written by the companies, and we have to make sure that they don't have too great an influence on what happens. But they should be listened to and involved in the conversation because, as I said, when I speak with their leaders, they are good people who realize that there are risks, and they want the upside, not the downside. And they're stuck in this race to the bottom driven by these commercial pressures. If the CEO of one company decides to pause on their own, they're just going to get crushed by the competition, and then their shareholders are going to be mad and maybe they'll replace the CEO.

So listen to them, find out what regulations they feel they would be willing to be bound by if those also applied to all their competitors, and take that into account. I'm actually quite confident that this will then be much less adversarial than many people think.

M: You think that the US might listen? Do you still have hope that the US will also start regulating like this?

T: Yeah, it's really interesting. If you compare Europe, the US, and China, you can see that of these three blocs, the one that has regulated the most is China. I think the Chinese government is really afraid of losing control of its tech companies; it has already cracked down quite a bit and started putting regulation in place. Europe is number two here. I'm very excited that the EU AI Act now seems like it really will cover these foundation models like GPT-4; there used to be a big loophole there that said that GPT-4 was exempt. We worked quite hard at the Future of Life Institute, and I even gave a speech once at the European Parliament saying they should not be exempt, they should be included, so that this becomes a future-proof act. It looks like it's going to make it into the act now, as you said in the beginning, which I think is great.

I think the most likely way in which good regulation happens in America might be that Europe does it first and then the Americans take that as a model and emulate it. I hate to say this, I live in the US, but there's more regulatory capture in the US than in Europe. The good news concerns the American companies that are trying to build artificial general intelligence, the human-level kind: I've spoken about this, not least in the last few weeks, with the leaders of all of the big ones, and they support the idea that there should be some regulation; they just want to make sure it's good regulation.

No, I think it absolutely will happen in the US as well. What I'm more worried about is that it's going to take too long to keep up with the tech. This is one of the reasons we did the open letter: to see if we could accelerate the process a little bit. I'm very happy that there's so much discussion about it now, not just on Twitter, but also in Washington.

M: You have just mentioned the open letter, and the next highest-voted question is about the open letter. The question is: why does the open letter disregard current regulatory approaches and existing ideas of AI governance? For example, responsibility isn't mentioned once. Shouldn't companies be asked to take responsibility for current algorithm-based technologies?

T: This is a misunderstanding. We wanted to be very humble in how we wrote this letter and not super prescriptive about exactly how everything should be done. I consider myself an expert on artificial intelligence; I do not consider myself a policy expert. This is more of a call to policymakers to get together, engage with tech experts, and figure out how best to do it. But of course, responsibility is important. Of course, all of the more immediate-term things that we didn't mention in the letter are also important.

Sometimes people tell me, "We shouldn't have done this open letter, because somehow, by talking about risks like loss of control, we distract from the risks of things that are happening right now." I think that's a little bit like saying, "We shouldn't talk about climate change that's going to happen in 10 years, because it distracts from traffic accidents." Of course we should; they're both important, and we should focus on both of them. And the sad fact is that the total amount of time humanity spends worrying about loss of control to AI, bias against underrepresented minorities, deepfakes, and everything else is tiny compared to the amount of effort we spend thinking about the Kardashians and talking about various wars and countless other things, right? So this tiny amount of attention we spend on how to regulate AI is not a zero-sum game where you can only worry about this AI thing or that AI thing; the whole thing should grow. We should spend much more time thinking about all the aspects of how we can regulate AI and maybe a little bit less time on the Kardashians.

M: All right. The next question again focuses on the people doing this regulation and whether they know what's going on here. When it comes to digital tech, and especially AI, policymakers often seem not to have the confidence to be able to regulate the new tech. Why is that? How can regulators regain that confidence? That is an observation that I also make.

T: Yeah, I think that's a wonderful question. I think the reason why policymakers lack confidence is that most of them in Europe and America don't have technical backgrounds; their education wasn't in science or engineering, and they often come from backgrounds in law, policy fields, and so on. There's an easy solution to that, which is simply to bring in technical experts from their own countries to help them. To be a very successful CEO of a tech company, you don't have to understand all the tech yourself; you just have to bring in experts you trust who can advise you on it, and be good at figuring out whom to trust. The same strategy works for policymakers.

M: Yeah. Here's a question from a bit of a neighboring field, not sure if you want to answer it, but someone asks: a few data protection agencies question whether generative AI complies with the GDPR. Do you think it is a bad idea to apply this existing law to large language models?

T: No, I think it's a great idea. If you have a law, you've got to uphold it. Otherwise, people just lose respect for it, of course.

M: Okay. Now a question that comes more from a technical angle. Someone asks: why can't we use AI to solve alignment? Meaning, we just take an AI that we somewhat understand and have it give us a general answer for alignment.

T: Yeah, I hear that often. I think it's very important to remember that intelligence is not the same thing as morality. Just because someone is intelligent doesn't necessarily mean that their values are aligned with what's good for the rest of humanity. If Hitler had been more intelligent, for example, would that have made the world a better place? I think he would have made the world a worse place; he would have accomplished more of his goals that way, right? So it would be very naive to think we're just going to build this machine and, just because it's so intelligent, it's going to figure out somehow what's good. Rather, you should think of artificial intelligence as a very alien type of mind that can have virtually any goal you could imagine: goals that you think are worthy and good for humanity, or horrible goals.

M: I understood the question as: what if we have multiple agents? We have one AI model whose sole purpose is alignment, and we use this auxiliary AI to then align the other AIs.

T: But aligned to what? Aligned to your values, or Hitler's values, or Putin's values, or Donald Trump's values, or Biden's values? That's not a question that can be answered by the AI, no matter how smart it is; we humans have to step up and say, "This is what we want." I feel that sometimes in our personal lives something happens, a good friend of ours dies or something, and we take a step back, reevaluate what we're doing in our lives, and ask: why am I doing this? Should I change jobs, or whatever. We humans have to do the same right now and take a step back and ask: why are we even trying to build all these machines that are smarter than us? What are we trying to accomplish?

I think a lot about this. To me, the answer is that AI should be built by humans for humans. We're not doing it for the sake of the robots, or for the sake of some future corporation making more profit. The reason we're doing it is that we want to help humanity, and that means we have to spell out what goals we want this to have. I personally really like democracy. I spent many years living in Europe, and I love this. I feel for this ideal that ultimately we should use technology to help everybody live better lives. It's our responsibility to make sure that we align machines to that. It will not work if we don't figure out how to put that into the machines and instead just flip a coin and hope they're going to come up with it themselves, because they're probably not.

M: But there are these two parts to it. The first says, "Well, humanity has to first figure out what it wants." The second part is: imagine that even if humanity agrees on values, there's the risk that it's really hard, with intransparent, black-box AI models, to make sure they do what humans want them to do, which is the whole AI safety debate. So these are two distinct things.

T: Exactly. They're two distinct questions, so let me give two distinct answers. For the first one, I know a lot of people will say, "Forget it, Max, this is hopeless, because humans will never agree on what they want." But I want to add some optimism here: the space of alien machine minds and the kinds of goals they can have is vastly broader than the range of human opinions. Some machine might think, "Well, to make sure we don't have as much of a problem with rusting, let's just get rid of the oxygen in the atmosphere," for example. This is something that German politicians, Chinese politicians, and Russian politicians all agree is a really bad idea, right?

So if you take this broader perspective, even though we disagree about a lot of things, they're quite small things compared to whether we should get rid of the oxygen. Look at the United Nations Sustainable Development Goals: very ambitious goals for eliminating poverty, curing diseases, and so many other things. Virtually every country in the world agrees with these goals, including East and West, and we haven't achieved them yet. So instead of worrying about some details we disagree about, let's take these huge things that we do agree on, like the Sustainable Development Goals, and first see how we can align our AI to that.

In terms of the second question, you asked about black-box models, and this is a very important one. The reason why these very large language models are so unpredictable that we want to pause them is exactly that they're black boxes that we don't understand. Look at this case, for example, of the Belgian man who committed suicide; afterwards, his wife said she thinks it was because he talked to GPT-J. It's not because the people who made GPT-J were evil engineers who programmed that into it, like, "Haha, let's try to make it encourage people to kill themselves." They had no idea that the model was going to do that, because they fundamentally didn't understand how it worked, right?

I think we can make a lot of technical progress in transforming these black boxes into AI systems that we understand. I'm putting my money where my mouth is here, because just next week, I'm organizing the biggest conference to date in this field. It's called, in geekspeak, mechanistic interpretability, although you might think of it as artificial neuroscience: you look at a neural network that's doing something smart, and you try to figure out how it works. That's very hard to do for human brains, because you can only measure maybe 1,000 neurons at a time, we have about 100 billion, and you have to get IRB approval for the ethics of it, and so on. But it's very easy with these machines, because you can measure every neuron all the time. We're having a great conference here at MIT with more than 100 participants from around the world, and progress is incredibly fast. So this is a challenge to any technical people listening to this: work more in this field, and use AI to help us figure out how these black boxes work. I think there's a real possibility that the AI systems of the future will not be such black boxes, and then we can have much more trust in them.
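As a small illustration of why this "artificial neuroscience" is tractable in a way biological neuroscience is not, here is a minimal Python sketch, assuming PyTorch and a toy network invented purely for this example, that records every activation in every layer for a given input, something that cannot be done for a biological brain:

# Minimal sketch: unlike a biological brain, an artificial network lets you
# record every "neuron" (activation) on every input. PyTorch forward hooks are
# one common way to do this; the tiny model below is purely illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

activations = {}

def record(name):
    def hook(module, inputs, output):
        # Store a detached copy of this layer's output for later inspection.
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, (nn.Linear, nn.ReLU)):
        module.register_forward_hook(record(name))

x = torch.randn(1, 4)   # one example input
_ = model(x)            # the forward pass populates `activations`

for name, act in activations.items():
    print(name, act.shape, act.flatten().tolist())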

M: Yeah, interesting. I am curious to see the progress in mechanistic interpretability. So far, from what I know, there aren't enough helpful insights there yet. Going back again to another policy question. The question is: wouldn't we need a global consensus about AI governance and its enforcement to prevent regulatory arbitrage, as happened with the GDPR or financial regulation?

T: Yeah, consensus is good to avoid arbitrage and you can also see it, if you have no regulation, then you get a sort of arbitrage between companies in the same country. Some have higher standards, some have lower and you get a race to the bottom. I think we should not let perfect be the enemy of good though. So right now, there is virtually no effective AI regulation in the US, for example. So let's find the things that American regulators are comfortable doing. I'm quite confident that you can get the Europeans and the Chinese to also agree to that common standard. Let's do that first. And let's keep building on that.

M: All right. I mean, there is a call for a global AI summit between European leaders and American leaders. I guess this would be one of those first steps.

T: Yeah. Will it include or exclude China?

M: I don't know. What do you think should be done?

T: I mean, it's great for European and American leaders to talk and have a summit; I'm all for it. But of course, at some point, one has to engage with China also. I think there's a little bit of a misconception about how hard it is to do things with China here. First of all, a lot of lobbyists from these companies, I think, overplay the China card. It's a trick to make sure the government doesn't regulate them at all: whatever is proposed, they just answer, "But China," and then the discussion ends.

As I said, the Chinese government has already cracked down more on AI companies than the West has. And you read a lot of news articles in China where they're very confused about why the West isn't also cracking down more; Western governments also want to make sure they don't lose control. China is also behind on these largest language models, in my opinion by much more than six months. So I think it's exaggerated to say that just because we slow down a little bit there would be any real change in the geopolitical power structure, especially with all the chip restrictions that have been put in place on China. So I'm more optimistic than many that there is serious political will on all sides to make sure that humans stay in control of this technology and to work towards establishing global standards, or at least global minimum standards, just as the Sustainable Development Goals are globally pursued.

The key thing, I think, which can't be stressed enough, is that even if the West can't persuade China of anything in particular, there are many things that China doesn't need to be persuaded of, because the Chinese government has its own reasons: it does not want to lose control over its tech companies any more than Western governments do. That's why they've already put regulations in place, not because the West pressured them.

M: Going from countries to companies, there's a follow-up question that asks: what's your opinion on how the business model of platform companies is connected to the risks of AI, like the spread of disinformation? Do you think we need antitrust law approaches?

T: I'm personally a firm believer in the value of the free market, and I dislike tech-stifling monopolies. I think having a huge monopoly, as we had with AT&T in America, stifled innovation in telephony, for example, and breaking it up was great. I agree, I think, with the sentiment behind what you're asking: I don't think it's healthy to have just one company build AGI first and then get a monopoly on the whole thing. That's fundamentally unhealthy.

M: So, it's unhealthy but do you think one should intervene? Do you think governments should use antitrust laws?

T: I think governments should use antitrust law in all sectors of the economy whenever companies violate the antitrust laws. But I don't think that's the only thing they should do; I think we should also have safety laws. The reason that you can't just release a new vaccine called Max's Cool Vaccine in Germany and sell it to everybody without testing has nothing to do with antitrust; it's because you have safety standards. There have been many examples in the past where people sold things they claimed were good and they turned out to be harmful. So that's another way to limit this: you have clear safety standards in place. Let's say that before you deploy this new super-duper general-purpose AI system, you have to be able to prove that it's safe. If a company isn't able to do that yet, then sorry, they have to wait. They have to come back again when they can prove it's safe. If you have a vaccine and you can't prove to your government that it works, it's not the government's problem to solve; the company is told to go away and come back when they have figured out how to make it safe, right? I think in some ways, some people in tech are now trying to make it the responsibility of governments to figure out how to make AI safe. That's completely backward from biotech, where the government just says, "No, you companies come back when you can prove to us that it's safe."

M: Interesting. Yeah, the burden of proof can be shifted.

T: It should be shifted. That's how it is in all other sectors: companies have to prove that their product is safe. And that brings the whole creativity and innovation of private enterprise into play in figuring out how to make things safe, which is how I think it should be.

M: Yeah. I think we have time for two or three more questions. There's one question where I want to give a bit more explanation of the background. The question goes: do you believe the anthropomorphization of statistical models helps to overcome AI hype? The idea is that if you don't want to hype AI, because then it develops even quicker, it matters what kind of language one uses.

T: Let me see, help me make sure I understand this question. Are you asking me if it's good or bad to anthropomorphize?

M: I think it's also a criticism that says if you use anthropomorphic language about AI models, it will fuel hype, which is counterproductive.

T: Yeah. I think I agree with that; AI is often too anthropomorphized. I think journalists often anthropomorphize just because they get more clicks that way. That's why they also put in pictures of robots, even though that's often completely irrelevant. It's the intelligence that matters.

M: I guess your letter also says "minds," like when you spoke about alien minds.

T: Maybe it should have had "minds" in quotes. What we were getting at there was the non-anthropomorphic notion of what's called an intelligent agent. An intelligent agent is not something like a chatbot, which is just an oracle that answers questions. An agent is something that takes in information, has goals, and tries to figure out what actions to take in the world to accomplish those goals. Naively, GPT-4 doesn't have any of that, right? But that hasn't stopped other people from building things like Auto-GPT, where they make their own agents with a big loop in them; the agent has goals and just keeps calling GPT-4. There's ChaosGPT, I think it's called.

M: Can you quickly explain Auto-GPT? Or like what you've just described a bit more?

T: Well, all of this stuff is various kinds of intelligent agents that are given a goal and then just use things like GPT-4 as a tool. There's an API, so the program can call GPT-4 many, many times. An absurd example, in some ways, is this one called ChaosGPT, where they explicitly gave it the goal to cause as much suffering and pain for humanity as possible. The program keeps asking GPT-4 for advice like, "How can I kill as many humans as possible? How can I best accomplish that goal?" It gets something back with five steps, and then asks GPT-4, "How can I do this step? How can I hire someone to do this? How can I print out this thing?"
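For readers unfamiliar with the pattern, a minimal Python sketch of this kind of agent loop might look like the following. The ask_llm helper is a hypothetical placeholder standing in for a call to a hosted model API; it is not the interface of any particular provider or of Auto-GPT itself.

# Minimal sketch of the agent-loop pattern: a plain program with a fixed goal
# that repeatedly queries a language model for steps and sub-steps.

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to a hosted language-model API."""
    return "1. (a model-generated step would appear here)"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Ask the model for a plan, then ask it how to carry out each step."""
    plan = ask_llm(f"Break this goal into concrete steps: {goal}")
    log = [plan]
    for step in plan.splitlines()[:max_steps]:
        # The loop itself is trivial; all the capability lives in the model.
        log.append(ask_llm(f"Goal: {goal}\nHow do I carry out this step? {step}"))
    return log

if __name__ == "__main__":
    for entry in run_agent("write and publish a short research summary"):
        print(entry)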

Right now, I would say this is mostly harmless. But what it shows is that even if the tech companies hadn't intended that use, as soon as they make things public on the internet with an unrestricted API, there's no shortage of other people with all sorts of weird motivations who are going to start building things that are a little bit more like alien minds. And if GPT-4 is followed by something much more powerful, GPT-5, GPT-6, or something from another company, this will be dangerous. The original warning was given in the '60s by Irving J. Good, and it's worth reminding everybody how simple the argument is: if AI can replace all the jobs, that includes the job of AI developer. So if someone writes an agent like this whose goal is to make a superintelligence, it's going to keep asking this AI that can do AI development to keep improving itself.

Now, you can go through many, many, many generations of self-improvement recursively. And it's not going to take a human R&D timescale of years for each generation; maybe it'll take a week for each generation, or an hour. And you can get this very rapid exponential growth, which will eventually be limited maybe by the laws of physics, but you could easily envision that this would lead you to AI that's as much smarter than humans across the board as we are compared to cockroaches. If you haven't solved, in advance, the problem of how to align that intelligence to want to do what's good for humans, and if you haven't figured out how it's going to be controlled or anything like that, that's the kind of thing that could very easily lead to human extinction, just the way that we drove the woolly mammoth extinct.

One more super important thing about the anthropomorphism you mentioned, which I think is also worth stressing: anthropomorphizing also makes us underestimate the risks. People say, "Oh, I'm not going to worry about the AI unless I know it's conscious in the way a human is conscious, or unless it's evil." That's just so bogus. If you're chased by a heat-seeking missile and you feel worried, it's not because you think, "Oh, it's conscious or evil or whatever." Of course it's not evil, but it has a goal to blow you up, and that's all that matters. The real threat from AI is not some kind of anthropomorphic thing, that it's going to become evil or develop human-like consciousness. It's just that it's going to become super competent and good at accomplishing goals which are incompatible with ours.

Why did we kill off the woolly mammoth? It's not because we hated them, or because we were evil mammoth haters. We just wanted to eat them, and maybe in some cases we needed their habitat for something else, right? Our goals were incompatible with theirs. That's why it's so incredibly important that we don't build things more intelligent than us until we have figured out how to make them share our goals.

M: Since we're almost out of time, the last question follows up quite neatly on the last point you've just made. It goes: why have past attempts at regulating AI, as you said, similar to the biochemical industry, failed? Was it a lack of urgency? And, most importantly, how can it succeed now?

T: I think it was both. There's been this culture in the AI field that it was never regulated, just as philosophy is not regulated, because it didn't have much impact on society. AI was like that for so many years, it didn't work well, and so it would have been ridiculous to regulate it. It has gone on to have a high impact so quickly that many people in the field haven't switched their mindset and realized that they're becoming like biotech and nuclear tech.

The second is the urgency: until pretty recently, it didn't feel urgent, because it had so little impact. But this very rapid growth of power is making people realize now that there is an urgency. That's good news; that's why we're having so much more discussion about this now than we did before. I hope the open letter has contributed a little bit to that, and also to the broader awareness of the urgency. So now we can channel this urgency into making AI like biotech and these other fields, where we help policymakers steer it to get all the fantastic benefits and avoid the downsides.

M: All right. With these final words, I want to thank you again for this very interesting discussion and for joining us.

T: Thank you so much.

M: You're welcome. And for everyone else, feel free to get in touch with us if you want to discuss these topics. With that, I wish you all a nice evening or day. Goodbye.

T: Dankeschön. Thank you so much, everyone.

End of Transcript

With: 

Pegah Maham
Max Tegmark

Date: 
04/27/2023 - 5:00pm to 6:00pm