Transcript of the SNV Policy Debate "Der Brüssel-Effekt: Wird Europas KI-Verordnung globalen Einfluss erzielen?" ("The Brussels Effect: Will Europe's AI Regulation Achieve Global Influence?")

Transcript

In Brussels, work continues on the draft law regulating Artificial Intelligence (the AI Act; AIA). It is the world's first comprehensive legal framework for Artificial Intelligence (AI) and one of the EU's most important digital policy initiatives.

While fundamental definitions are still being debated in Brussels, experts are already arguing over whether the law, once in force, will have global influence beyond the EU – the so-called Brussels Effect.

What is the AI regulation about? What does the draft law contain? Why might Europe's rules for Artificial Intelligence, of all things, achieve global influence? And what speaks against a Brussels Effect? Pegah Maham, Lead Data Scientist (SNV), discussed these questions in a virtual panel discussion on Wednesday, 9 November, with two distinguished experts: Alex C. Engler, Fellow at the Brookings Institution, and Markus Anderljung, Research Fellow at the Center for the Governance of AI.

 


– Start of transcript –

Pegah Maham, Lead Data Scientist (SNV): Hello, everyone, and welcome to this SNV online panel discussion on the global effects of the European AI Act. My name is Pegah Maham and I will be moderating this discussion this evening – or, for our guests from the US, this morning or midday.

And before introducing our guests, let me say some words about Stiftung Neue Verantwortung and myself. The Stiftung Neue Verantwortung (SNV) is an independent non-profit think tank based here in Berlin, working at the intersection of technology and public policy. Within our think tank, I lead the Data Science Unit, where we integrate data science and AI methods into our work and produce data-driven analyses and products. This work includes, for example, data-driven monitoring of AI trends, such as AI resources, questions of AI brain drain, or international semiconductor research developments. Thanks so much for joining us today for this discussion of the Brussels Effect of the European AI Act.

In April 2021, that is one and a half years ago, the European Commission proposed the first-ever comprehensive legal framework for Artificial Intelligence, also known as AI. It aims to address risks of specific uses of AI – a technology, including automated decision-making, computer vision or text summarization tools, that can be found in almost all areas of our lives. The European Commission describes its importance with the words: "The way we approach Artificial Intelligence (AI) will define the world we live in, in the future."

Now, where are we at? There are three institutions involved. The Commission is now waiting for the European Parliament and the Council to finalize their positions. The European Parliament's vote on its position will probably not take place before the end of this year, possibly not until early 2023. The Council is aiming to finalize its position in the next couple of weeks. And once those institutions have found their internal compromises, all three institutions will start negotiations to finalize the bill.

And while Brussels is negotiating the bill, the international community is already discussing the potential effects of the AI Act on a global scale and the so-called Brussels Effect. A potential global impact of the European AI regulation through the Brussels Effect (which we will explain in a minute) would not only mean that even more people are affected, but also has strategic implications for the design of the bill. How likely the Brussels Effect is, and what its implications are, is what we will be discussing today.

Alex Engler is a fellow in Governance Studies at the Brookings Institution, where he studies the implications of Artificial Intelligence and emerging data technologies for society and governance. He is also an Associate Fellow at the Centre for European Policy Studies in Brussels. Alex also teaches classes on data science and visualization at Georgetown's McCourt School of Public Policy, where he is an adjunct professor and affiliated scholar. Alex has published a paper called "The EU AI Act will have a global impact, but a limited Brussels Effect". Thanks for being here, Alex.

Alex Engler, Fellow at the Brookings Institution: Thanks for having me.

Pegah Maham: You're welcome. Our second guest, Markus Anderljung, is Head of Policy at the Center for the Governance of AI, based in Oxford. His research focuses on the potential global diffusion of EU AI policy, regulation of AI in general, compute governance, and responsible research norms in AI. He was previously seconded to the UK Cabinet Office as a senior policy specialist and has published a paper together with his co-author Charlotte Siegmann on the Brussels Effect and Artificial Intelligence, arguing for a likely Brussels Effect. Thank you both for joining us today. Thanks for joining us, Markus.

Markus Anderljung, Research Fellow at the Center for the Governance of AI: Thank you so much.

Pegah Maham: Let's introduce our audience to today's topics. And before I ask you, Markus, to explain the Brussels Effect to us: Alex, can you introduce our audience to the AI Act?

Alex Engler: Sure, excited to. And again, thanks to Pegah and thanks to SNV for hosting today. So, the EU AI Act is going to create a set of EU-wide requirements and restrictions on many prominent uses of algorithms – different requirements for different applications and different perceived levels of risk. This is what people mean when they say it's a risk-based approach, which is pretty important to the framing here. So, this means disclosure requirements for chatbots. If you're interacting with an algorithm that's talking to you on a website, it would have to be disclosed that it is an algorithm under EU law. It also means regulatory requirements for high-risk AI – I'm going to come back to that in a second because it's a really important and big part of this bill. It also means some bans that are a little bit vaguely defined at the moment, but for things like social scoring, which you may have heard of in reference to China, or certain deceptive uses, and potentially also types of predictive policing – these could get banned under the AI Act. Notably, it doesn't touch on military applications – it's completely civilian. And it also may lead to new regulatory capacity, things like funding for testing infrastructure, such as AI sandboxes, as well as new funding for government oversight.

So, to come back to this high-risk piece, this is a really big and critical part of the proposed act. I think of high-risk as three broad categories that are worth considering separately. One is private sector, or commercial, digital human services. This is stuff like using algorithms to hire people, determining how much education costs or who gets access to education, or financial services like mortgage approval or credit access – all of that falls into that first category of digital human services, and those are high risk, along with a bunch of other categories that fall in there. The second is the government's use of AI. So, a bunch of public services that use algorithms also fall into high risk. This includes access to public benefits, whether that's cash transfers, loans or some support service, judicial decisions (a lot of decisions made in courts), and also things that are related to critical infrastructure. All of those are typically government-run and government-controlled. But if they use certain algorithms, they'll have to meet these high-risk criteria. And then the third is more about physical products that are already regulated in the EU. This includes things like cars, medical devices, elevators – anything that's already a regulated product is going to have to essentially pay more attention and follow some specific rules for the algorithms it uses. So, those are the three high-risk categories. What they have to do is follow some new guidelines on accuracy and robustness. They need to be more careful about what data they use and how, they may need to create more thorough technical documentation, and they need record keeping. They may also need a risk management system – what to do when things go wrong – and potentially some level of human oversight. So, lots of new standards. Now, who actually writes exactly what the standards are – that's probably going to come down to EU standards bodies like CENELEC, and also possibly a new creation called an EU AI board, or essentially an EU AI office, though that's still up for debate.

Lastly, all of this will lead to a giant EU-wide government database of high-risk AI, which actually will be interesting – it'll be the most thorough documentation of the role of algorithms in society that exists anywhere. So, actually, it would be really impactful just to know what's out there. And that includes both the private sector systems and the government ones. To do all this, member states have to create a market surveillance authority, which will enforce the EU AI Act.

But other than that, there are still a lot of open questions. As Pegah mentioned, there are still debates – it's being reworked by the Council of the EU, and there are many amendments in Parliament. Things like the definition of AI, whether it narrowly means new modern methods like machine learning, or whether it's much broader and includes many algorithmic decision-making systems, are still unclear. The role of, and effect on, facial recognition is still a little unclear. Whether to include general-purpose AI, or what is sometimes called foundation models – the big models you might see creating beautiful images on Twitter – is also still open; it's a little unclear how this law is going to touch on all of those. So, lots to debate. But we're just starting to get a sense of how it's going to affect the rest of the world, too. And I'll leave it there.

Pegah Maham: Thank you so much for managing to fit such a complex topic into three minutes and giving such a nice overview. So now, the Brussels Effect is being debated with regard to the AI Act that Alex has just described. Markus, can you explain what is meant by this, including the de jure Brussels Effect and the de facto Brussels Effect?

Markus Anderljung: Yeah, of course. First of all, thank you so much for inviting me. I'm really excited to have this conversation and to chat more with Alex. Alex is one of the few people in the world who has spent a ton of time thinking about the extent to which we will see a Brussels Effect from the act.

Generally, the Brussels Effect can be used to refer to any case where EU regulation has an effect outside of its borders. There's some debate about this – people use the term slightly differently – but I think that's the simplest, easiest definition to use. And then you might break it down into a de facto effect and a de jure effect. So, the de facto effect is when you have this regulatory diffusion, where the regulation has an effect outside of the EU without laws outside of the EU changing. The dynamic here is something like: you have some new EU regulation, you have a company that's providing products to the EU, and rather than deciding to leave the EU market in light of these new regulations, it decides to stay in the market. The EU market is large, so it makes sense to stick around. And so, therefore, you need to develop an EU-compliant product. And once that's done, you're faced with another choice: whether to provide that product globally, or to have different product lines inside and outside of the EU.

And then there might be a bunch of reasons that make it profitable, or otherwise make sense, for you as a company to provide the same product globally. We'll go through those later. But to pump the intuition: in the case of cars, for example, we've seen this kind of dynamic with the regulation of car emissions in California, where California imposes higher requirements on the kinds of emissions your cars can produce. At that point, you need to change your entire factory to comply with these requirements. And then you might as well just provide that car globally, or in all of the US, because it seems very, very tricky and very, very complicated to have two of these production lines. And you might see similar dynamics in the AI case. So that's the de facto effect.

And then the de jure effect is when this kind of regulatory diffusion – EU regulation having an impact outside of the EU – happens through other countries adopting EU-like regulation because of what the EU has adopted. That may happen for a number of different reasons: other countries or jurisdictions might think that EU regulation is particularly well crafted, or that it actually meets some needs that they also have; the EU might change what's on the agenda in other jurisdictions as well, or what kinds of problems policymakers attend to. And there might also be some effects that connect to the de facto effects: it might be beneficial for other jurisdictions to make sure that their rules on how AI can be used are compatible with the EU's, particularly if they want to trade a lot with the EU, for example.

Pegah Maham: Thank you. To summarize: the de facto effect happens voluntarily – companies, faced with these market incentives, end up adapting to what the EU thinks should be done. And "de jure" means it actually becomes implemented in the legal systems of other countries. And there is a connection between these two, which makes sense. We'll go into that.

So, before we discuss why you have these different assessments of how significant the Brussels Effect might be, let's actually discuss the question of why we're debating this now. Because whether the effect will happen once the European institutions have agreed is a prediction about the future. So why not just wait and see what happens? What's the point of doing this now? Markus, what was the motivation to speculate about it and write a paper about it?

Markus Anderljung: Great question. I think a decent amount of the motivation for me and my co-author Charlotte Siegmann was, for one thing, that this is a claim that EU policymakers have been making. When they talk about pushing for the AI Act, quite often they do it in the context of saying: "We are going to be the first mover in the world in producing this comprehensive set of AI regulations. That's going to be really important because it's going to allow us to have a global impact in the AI space." They might even use phrases like "this is a chance for the EU to become a regulatory superpower in AI" or something like that. And with this claim being made, it just seemed like something that deserved a lot more attention, because at least when we were looking around, there didn't seem to be any sustained study of this question. So, that's one thing.

And then, how AI ends up being regulated across the world just seems very important to me. AI seems like it's probably one of the most important technologies of this coming century. And so, if the regulation that is put in place now by the EU is going to have long-lasting impacts, both in the EU and outside of it, that seems like something very important to figure out. And if that ends up being the case, one of the biggest things it suggests – at least to my mind – is that we should really make sure that this piece of regulation is doing what it needs to do. That you're not building in any specific mistakes, or distorting the market in ways that end up being undesirable. And so, at least for me, doing this work really made me feel more strongly that policymakers, and also civil society and researchers, engaging more with the nitty-gritty of getting this regulation right is something really, really important.

Pegah Maham: I see – both fact-checking whether the claim is true, and an even higher responsibility for the EU than it would carry for its own market alone. And also a signal for civil society and science to step in.

Markus Anderljung: I'm a bit more unsure about it, but I think there are ways in which you could choose to design this regulation to increase or decrease the chance that you see some kind of de facto Brussels Effect. And I think that might also be something that policymakers want to, or should, start taking into account when they design this regulation. So, sometimes there's talk about data localization, about trying to make sure that models are trained on local data. And that seems like one of those things where – if you as the EU think it's very important that you see this global effect – that might be a way to strongly undermine your chances of having a global de facto effect. For example, if someone needs to retrain their entire model only on EU data to be able to provide the product in the EU. So, I think there are some of those pretty nitty-gritty, concrete things that might change as well.

Alex Engler: Markus identified absolutely the two most important parts of this. There's certainly this, honestly, somewhat vague insinuation from EU policymakers that being this first mover and setting this global standard is really important. And I think it does seem sometimes like that's used as a justification for moving quickly. I personally am a little bit more invested in making sure that the EU AI Act is a really good mechanism for regulatory protections, especially around the civil rights, or fundamental rights, dimensions of AI. There is a trade-off between moving fast and dealing with some of these open questions. So I do think that's worth evaluating, as Markus was getting at.

And his second point is also completely right. I'll just mention that if you look at the broad state of EU digital governance – between the GDPR, the Digital Services Act, the Digital Markets Act, the Data Act, the Data Governance Act, the AI Act, the AI civil liability directive, and I'm sure I'm missing some – that is going to be very, very impactful on global digital governance. And so, taking the time to evaluate how it does or doesn't constrain or enable other countries' policymaking is really important, even if we might disagree a little bit on the impact of this specific law. There's no question that the broad state of European digital governance is going to be enormously impactful – I would say probably especially for platforms and websites. Just look at how we're still working out the GDPR between the US and the EU, right? So, I think the lesson there is to really think critically about what we expect the international outcomes of digital legislation to be.

Pegah Maham: So, I see you both agree about the global responsibility attached to this act. Now, I would be curious to zoom into the mechanisms behind why you believe the effect might be stronger or weaker. And I would ask you, Markus, to start and maybe show with an example why you believe it is likely to be significant. We've already touched upon this a bit.

Markus Anderljung: So, I guess my overall take in the report that we wrote is that it seems likely that we will see some kind of Brussels Effect – at the very least a de facto effect in certain industries and with certain kinds of AI systems. I guess, as Alex was kind of indicating, maybe it's the case that we agree on all the substance and just disagree about how to summarize our views. We'll see the extent to which that's the case.

So, in particular, I think it seems likely that we'll see some kind of de facto effect when it comes to certain high-risk systems. As an example, we could start with maybe a less interesting one. It seems likely to me that we'll see it with this third category that Alex was talking about: existing products that are already covered by a bunch of product safety regulation in the EU. These products often have the dynamic I was mentioning with regard to car manufacturing, where you have this production line, and ideally you just have one with which to provide your product globally. So it seems likely in those kinds of cases that we will see a de facto effect. Concretely, for example, with medical devices, it seems pretty likely to me that we will see a global diffusion of these requirements. And then I think we will also, potentially, see this for foundation models.

Maybe the easiest way for me to give you a better sense of how this mechanism works is by going through some of the criteria that we discuss in our report that make a de facto effect more likely. One of them is having what we call favorable market properties. Basically, it matters that the product is provided by multinational companies and that the EU market for this kind of product or service is large. If the industry is fully regionalized – EU products provided by EU providers and US products provided by US providers – you're not going to have the chance of some kind of de facto effect arising.

Another criterion is that the EU regulation is more stringent than that of other jurisdictions. Again, if that's not the case, how are you going to change the requirements imposed in other countries, if those are already higher? Another one is high regulatory capacity in the EU, which in various ways means that the regulation is enforced correctly – these are more capacity-oriented questions. And then I think there are two more interesting criteria. One is what we call inelasticity: it has to be the case that the demand for the products in question is not particularly responsive to the changes that the regulation puts in place. A classic example in thinking about regulatory diffusion and the global effects of regulation is taxes. In the case of corporate taxes, for example, you have this general problem where, if you tax corporations, they don't find it that difficult to change their jurisdiction – they can just register somewhere else. And then you're going to have problems making sure that your policies actually have the impact on taxes that you were aiming for. So, if you have that kind of dynamic, you're going to have an issue. The kind of regulation we have going on here benefits from the fact that that kind of regulatory flight – just moving the consumption to another jurisdiction – is difficult in this case, because a lot of what AI does applies to individuals in the EU who are engaging with AI systems. And so that regulatory flight dynamic becomes a little bit more difficult. But the thing that might matter is if consumption of these products goes down as a result of the regulation.

And then another important dynamic here is inelasticity in other jurisdictions. If the regulation as such makes these products better in some sense, or makes them more desirable for consumers, then it seems much more likely that you will see a de facto effect as well. Because you've already made the better product for the EU market, you might as well push it out to other markets.

And then another thing that we talk about as mattering quite a lot is what we call the cost of differentiation. This is the dynamic I talked about earlier: how costly it is to provide two different products rather than one product for the global market. One really important factor there is where in the tech tree, or where in your tech stack, you need to make changes. Do you need to make a change at the very base level of your technology or your production process that then reverberates through your entire production process? If you need to retrain your AI model from scratch, then the cost of differentiation – the cost of providing two different products – is going to be a lot higher than in cases where you can comply with the requirement by just changing the top layer. Oftentimes these AI systems have something like a filter that decides what kinds of outputs are allowed or not allowed from the AI system. If you can just change that filter, then the cost of differentiation will be a lot lower, and so it seems more likely that you would choose differentiation over non-differentiation. And if you take all those kinds of things together, we present some arguments about specific high-risk uses that seem likely to see a Brussels Effect. Maybe one interesting case is worker management systems.

And so, yeah, we were just talking before this call about the extent to which Uber's algorithms that decide who gets what ride would see a de facto effect. And I think they might, but it depends a lot on the details. If it's the case that Uber has one and only one algorithm, or one AI system, that makes these decisions globally, or in a lot of different markets, then it seems much more likely that you will see this. Whereas if they already have different algorithms or different AI systems making these decisions in different markets, then this global effect seems much, much less likely.

Pegah Maham: Well, okay, that's a lot of different factors and mechanisms, so maybe I'll just sum up the differentiation point because I find it very interesting. The harder it is to have separate products, and the more invested you are in the EU market, the more likely it is that you just stick to the version that meets the stricter regulation. And I guess, in the future, if foundation models are included – where training is a multi-million endeavor – they would be quite affected by this.

Now I'm very curious about this de facto effect, Alex. Given the five different mechanisms through which it could play out, which examples are going to be less affected, and why?

Alex Engler: I should first say that I learned a ton from Markus's report, and I do recommend that people read it – it is really, really thorough, both theoretically and intellectually, especially in breaking down these categories. Even in places where we might land differently, taking the categories from their report and applying them to the specific area of AI that you're concerned with is a really good model for making your thinking more robust. So I should endorse that.

So, it's hard to talk about these different types of AI and say anything that's consistently meaningful about both products and human services, so I'm going to try to do a little on both. I think the medical devices point on products is a really interesting one. I absolutely agree that there is a high cost of differentiation for products that are built on a factory floor, have a specific design process and have specific testing – you really don't want to shove a bunch of different algorithms into them, and your ideal as a company is to really build one process. But there will be mitigating impacts on that de facto Brussels Effect. And I think medical devices are a really interesting example, because the US is the largest creator of medical technology in the world, and its biggest export market is the EU. So it's an interesting example from a transatlantic perspective.

And the success of US medical devices has come from partially meeting and partially shaping EU standards. For instance, they largely weathered a much more dramatic change than this one – the first update in 20 years, the EU medical device directive, which just happened a few years ago. I think that will be much more impactful in that space than the AI Act, though there will be changes there too. So it's not really that there won't be an effect; it's that the US government and its many medical technology firms are already deeply invested in what happens to any standard on AI in medical devices. And that will probably somewhat mitigate how big the changes are and how much the EU really is unilaterally setting the rules. To what extent you decide that this is still a Brussels Effect – I think there's lots of room to disagree; it's a definitional issue. But that is the case for a lot of these already existing product markets with algorithms in them: they're already regulated, and there's already a bunch of international companies very engaged in the process. I don't think there's going to be a huge number of rules set that suddenly exclude these companies or really dramatically change those markets, even if there are more algorithmic rules. So, there is room to quibble. But I think that's actually an interesting example – and again, one where differentiation is very hard, where you really don't want to do that, right?

Now, when you turn to AI in these high-risk human services, this differentiation conversation is a little more confusing, a little more uncertain. I'll try one example where I think the cost of differentiation is pretty low: AI hiring systems. If you go into a lot of these companies that do AI hiring, they're building specific models – each of which would be considered an AI system under the law, sort of the individual unit of regulatory compliance. They're building them for each individual job posting within an individual company, which means they have many different models. They're already localized: to local data, to that specific company, to that specific job; they already might be specific to a language or to local hiring laws, right? And because they're already doing that, the cost of creating different models that meet different standards in different places is much, much lower. Your conveyor belt – a metaphorical conveyor belt – is already different in these various places, right? So, they're set up to have lots and lots of different versions of models created, and thus, by adapting only some of them, you might get different results.

Now, sometimes a company might say, to Markus's credit, "Hey, listen, it's easier if we just keep all of our records the same way, so we'll adopt the EU rules for all of our data storage, right?" I think that's totally likely in some cases for all of the record keeping. At the same time, they might say, "It's kind of expensive for us to adapt a risk management system and human oversight to all of these uses, so we're not going to, because no one's holding our feet to the fire." And so, parsing out which of those two things happens really might come down to individual rules and individual applications of AI.

I will throw in one last little piece of confusion here, to Markus's point about what happens when these systems become more platformized or more international. Interesting examples here are LinkedIn, which has high-risk AI hiring algorithms – that's a platform, and it will be much harder for them to build two versions of their algorithms because it's a global website. Microsoft Viva, which is worker management software used by multinational companies, has employee management algorithms – and that's not even public-facing, right? But if you're a multinational company, do you want different algorithms producing different productivity scores for your different employees? That can be a big problem. And then Uber, I think, is another example where it's not clear how localized versus how global some of their algorithms are: the more localized, the less of a Brussels Effect; the more global, the more of a Brussels Effect.

Pegah Maham: Thanks a lot. I think I got a good understanding of which factors are relevant. And then I guess it depends, case by case, on the AI system and where it falls. So, I guess the database that will be produced would be a good start to see where these things fall. And then we can do the integral over all of them and calculate the Brussels Effect. But I guess it's also becoming quite obvious how difficult it is to make this prediction.

So, now let's move a bit away from the mechanisms and the prediction itself and move on to what follows from this: the implications. I'm curious to hear what you would recommend to the current policymakers in Brussels. Alex, maybe we can start: what is your recommendation for what should be done?

Alex Engler: There are two broad categories here. One is: don't rush, and take the problems that the EU AI Act has very seriously. It's really important, because even if it has no global ramifications, it will be really important in Europe. And so, I think making sure that the definition of AI is targeted and specific and means what they intend it to mean is really important. And that debate is still ongoing.

Really thinking about who's setting standards: I share concerns from others that private-sector-dominated standards bodies should not be setting the standards for AI systems used in these human services, like hiring, educational access or financial technology. Finding some other way to do that – maybe it's a central EU AI board, maybe it's a consortium of the member state regulators. But the current process, I think, would be very bad there. Clarifying how the market surveillance works, the enforcement, all these things – I think this could use some more attention. And I would not rush these things for the sake of an intended first-mover, regulatory-superpower effect. I think that's really short-sighted of the EU, to the extent that that view is prevalent.

And the international aspects? I have a couple of concrete suggestions. I wouldn't regulate the making of models open source – though you should still regulate when they're used for high-risk purposes. Just building a model and putting it out into the world for free advances open science and advances our understanding of AI in very valuable ways. It's also a check on corporate power in the space. So, that's an example where regulating open source AI models would have an external effect, but I actually think it would be to the world's detriment and to the EU's detriment.

The other is the TTC – and the EU is already doing this. There's already a really engaged discussion with the US in the Trade and Technology Council. But maybe they should also invite comments from the rest of the world; it does seem very US-focused at the moment. So, I'm wondering whether there are other concerns that the rest of the world might have with what the EU is doing. Taking those concerns into consideration might be worth it at this stage, if countries are worried about being excluded or about it becoming more difficult to access and work in the EU market.

Pegah Maham: Thanks a lot, both for the object-level recommendations and the process recommendations you've just given. Markus, I'm curious to hear your thoughts – are they similar, do you agree?

Markus Anderljung: I mean, a lot of agreement. I think what's important is this "first mover" thing: I agree with Alex that this seems to be overemphasized. Even if you're the EU and you think, "Oh, we really, really want to produce this Brussels Effect" – even if that's true, rushing might be a mistake. Because what other jurisdiction is as likely to see a global, for example de facto, effect from its regulation, and is on its way to producing comprehensive regulation on a par with the EU's? The US will most definitely not produce something similar on the EU's timescale. And so, I think you're okay with taking your time here.

And another thing that I really agree with is that a lot of the most important decisions will be about what these requirements actually mean in practice. I think some of the most important things in this regulation are single words that will need to be clarified by the standard-setting bodies. So, you talk about data being appropriately representative for the context, or the model being appropriately accurate, and that kind of thing. What that even means is going to be a really, really important question going forward. And so, making sure that the standards bodies are set up in the right way, and that they have the breadth of expertise and interests they need to actually get this right, seems really, really important.

One thing I take away from this: I spend a lot of my time trying to figure out what AI's impact will be on society, or globally on the world. With that kind of view, this reasoning suggests to me that we're more likely to see a higher level of regulatory burden imposed on AI systems globally than you might naively have thought before thinking through these kinds of considerations. Otherwise, you might imagine that there's pressure towards lower regulatory burdens through competition between different jurisdictions. And I think this is a really important dynamic that pushes against that, and that's important to think about.

Another thing is other jurisdictions: it seems likely that they're going to have to pay attention to what's happening in the EU. In particular, it seems very likely that, as a different jurisdiction, you'll want to say that you're doing something slightly different from the EU – this is happening in the UK and Canada, for example. But I still think you're going to have to ensure certain bits of compliance, or coherence, with the EU regime, particularly on what kinds of requirements you're imposing on what would usually be termed high-risk systems. I think that's going to end up being something these other jurisdictions really have to think about. Because if you impose inconsistent requirements, that's when you impose higher regulatory burdens and force actors to provide different products for different markets than they would otherwise. Whereas the choice of which kinds of systems end up being covered – you can have a little bit more freedom with that.

Another really important question that's still being debated, which I'll just highlight, is the part of the AI Act that prohibits manipulative uses of AI of various kinds. The way it's currently phrased has been debated the whole time. But one thing that's very, very interesting and potentially important is that it's not obvious to me – even in the Council's recent proposal – whether the recommender algorithms used by social media companies would count as engaging in some kind of manipulation. At least by my read; maybe I'm wrong on this. But if they do count, that could have tremendous, really, really large impacts on the world. And we'll need to figure out the answers to questions like: what does it mean for a recommender system to be manipulating people into doing things that are not in their interest? That is going to be a huge, huge question, and it might be incredibly important to get that formulation right.

Pegah Maham: I hear you both agree that this should not be rushed, and that there are a lot of clarifications and definitions that need to be fleshed out more. We have a lot of questions in the Q&A, so I would now like to go to the audience questions.

Let's start with the first question: "Is there such a thing as authoritarian misuse of the Brussels Effect in the context of the AI Act? In the context of the NetzDG, the German Network Enforcement Act, parts of it were used as a blueprint in Singapore in the run-up to the elections to silence the opposition. Is it possible, for example, if real-time biometric surveillance of public spaces isn't excluded in the AI Act, that this could lead to problems in other parts of the world?"

Alex Engler: There is a somewhat consistent fear, especially from foreign policy people in the US and also some in the EU, that setting up a whole bunch of rules around digital technologies – including things like online platforms and the use of AI – will inspire authoritarians to take the same type of approach but go much further in restricting political opposition. I recognize that fear. At the same time, authoritarians have been very creative and innovative all on their own. And so, I no longer really think they need to be inspired or led by the EU or anyone else to go ahead and run really digitally oppressive states.

So, I mostly think that democracies should make informed and intelligent choices about how to maintain their own economies, their own civil society protections and their own democratic institutions, and not worry so much about whether authoritarians are going to adopt the same rules and then misuse them. I just think that's the wrong approach. And that hesitancy to regulate, to create rules within strong democratic institutions... you know, if we set rules that require and maintain strong democratic institutions, we're much more likely to maintain them than if we've got no rules and let all of this run rampant. Otherwise you find yourself closer to the US, which is losing some real ground here and is threatened by autocratic tendencies. And now, because of that, it's actually harder for us to regulate, harder for us to create executive powers around digital governance – because we waited too long, and because our broader democratic status has weakened. So, I understand that fear and largely would not succumb to it. I would say: build strong, democracy-preserving institutions and governance. Don't worry about it.

Markus Anderljung: I haven't thought a whole ton about this. One thought I might offer is that the effect could go the other way. There will be some regimes teetering at the edge of deciding which way to go. Say you want to solve this problem of doing surveillance, or helping law enforcement do their job better – which is a fine, very important goal. If you as the EU can be first in the world, or among the actors in the world, to show a path that does both that and protects civil liberties, then that could be a way to pull along some actors that would otherwise just go for the alternative that meets their law enforcement goal but doesn't protect civil liberties. And so, if you get this right, you could actually increase the extent to which civil liberties are protected in nations that sit at the edge between going more authoritarian and less authoritarian.

Alex Engler: It's a great point.  

Pegah Maham: Another question from the audience: "You mentioned foundation models and that it is not clear how they will be regulated by the AI Act. Isn't this a sign that the EU is trying to regulate AI too early?" And I might add that I would be curious whether you think foundation models should be regulated or not.

Markus Anderljung: I don't know if I'll be able to give a short answer. My guess is that, at some point, we will need regulation that specifically targets foundation models – that seems plausible to me. In the future, I imagine we might end up in a state where foundation models look more like utilities or something like that: a basic building block that is used in a whole bunch of different systems in the world. And just because of their place in the supply chain, it might be particularly important to regulate certain things. For example, if their cybersecurity is too weak, that's going to impose a whole bunch of externalities on the market as a whole. And there might be other considerations that provide reasons to regulate them as well.

I think that seems plausible. And you're pointing at something true: the fact that we're having these kinds of discussions now, and the fact that the first formulations of the AI Act didn't quite pay attention to this part of the supply chain, does suggest that we're quite early here – this is a moving target that we're trying to regulate. I do think the recent proposal from the Council gets things better, and it might be the right approach in terms of how to deal with general-purpose AI systems, or what you might call foundation models. I don't agree with everything, but I think it's moving in the right direction.

Alex Engler: Well, here we have some disagreement. I think the GPAI proposal from the Council is deeply confused. One, it has no real definition of general-purpose AI or foundation models – which would have to include that they deal with multiple types of data, or can do many different tasks, or are larger, but all of those are moving targets. So we have a law where we haven't yet defined AI well, and they want to add a new, more confusing, less well-understood category. They're also trying to do that without a clear justification of why. This moves away from the whole risk-based approach of the AI Act; it's all of a sudden saying the existence of these models is dangerous. You can kind of see why it might be bad if someone's using them for a high-risk purpose. But I think the approach they're taking, if that's your concern, is wrong. It's really based around creating standards for these models – and you can create standards for those models all you want, they'll be used in ways too diverse for that to necessarily improve their downstream use for the high-risk purpose. I'm just not convinced by that at all. I have an op-ed coming out tomorrow in Tech Policy Press about this very issue, which Markus was kind enough to critique and, frankly, give some nice feedback on. That's what I'd point you towards if you want more – I'll be posting it on Twitter as well.

And then there's a different category of harms that I think of as proliferation harms – things like non-consensual pornography, deepfakes, hate speech that can be generated with these models. And the unfortunate answer is that the Council proposal also does nothing for those. Right now, I'm not convinced: there's no clear relationship between the Council proposal and a real outcome that makes sense to me.

Pegah Maham: I see. So you think it could be a good idea, but the way it's implemented at the moment will not achieve that.

Alex Engler: I think we should be focusing on how different organizations come together to build AI systems. So, I'm more focused on this: if you have one company that builds part of an AI system and then sells or leases it to a second company, that second company – if it's using it for something that is regulated, like a high-risk purpose – should have all the information it needs. They're really the ones making the decision. They should get the information they need from the first company, and also largely bear the regulatory responsibilities, maybe with some contract language that goes back and forth. I think of this less as an engineering process, less like pieces of an airplane, and more like a decision-making process, where you really want to ensure the people making the decision have the information they need and bear the regulatory responsibilities for it.

Markus Anderljung: Yes, I definitely agree with this. There's a problem with this phrase "general-purpose AI systems"; I think that's probably the wrong regulatory target. The thing that creates the regulatory need is where in the supply chain the system sits – not whether it can be used for many different things. There are two things I care about here. One is: if I have a system that can do a whole bunch of different things, I want to make sure that people aren't using it for high-risk things without other actors, or the market surveillance authorities, being aware when they are non-compliant. And so cooperation from these producers of foundation models seems important to me.

The other thing is how you divvy up responsibilities in the supply chain, making sure that the provider, or ultimately the deployer, of the high-risk system has the information they need. My guess is that some of the things in this proposal will help that process.

Pegah Maham: Alright, thanks for this very quick answer to a very complicated topic. Our next question concerns something we haven't touched upon yet; one person from the audience asks about the ecological perspective: "The EU proposes an ecological perspective, let's say, on the use of AI systems. To what extent is that likely to cause a negative Brussels Effect?" Alex, I'll give this question to you.

Alex Engler: A negative Brussels Effect – do you mean literally the ecological impacts of AI?

Pegah Maham: Lacking regulation with regard to ecological impacts, training, etc.  

Alex Engler: I'm maybe in a weird place on this, and I could be convinced otherwise. I want to first say that I haven't thought that deeply about this. I am broadly unconvinced that you need environmental protections that are specific to AI – I just think we should be taxing carbon. The use of AI and computation isn't obviously, meaningfully distinct to me in a way that would make me specifically regulate at the model-building level; it just doesn't fall into this framework in my head. I'm all for pro-environmental choices – I would just tax carbon and fund renewable energy sources and not worry so much about model training. People sometimes say, oh, training this model is equivalent to a transatlantic flight, and that makes me not care very much. The estimates I've seen suggest to me that it is not a total sea change in the amount of carbon impact. The argument hasn't been clear to me about why you would create a separate set of rules, rather than just the normal environmental interventions. I could be convinced, though. I'm not as deep here as I am on the other issues.

Pegah Maham: What you're saying makes sense, though. All right, if that's okay for you, Markus, I would go to our next and probably last question: "Some might fear that the AI Act could be too heavy a burden for smaller companies and much easier to deal with for larger companies. In Germany, with its large number of small and medium-sized businesses, this is a concern. Is that a possible effect?"

Markus Anderljung: I think that's a very, very legitimate concern, and a very important one. It just seems likely to be true: big companies have a lot of resources to spend on making sure they can remain compliant with various complex regulatory regimes. And I think this is generally going to be an effect – a cost that the EU is imposing on itself by trying to have a more developed and more intense regulatory climate. There are some remedies that people are trying to include here. So, you could imagine cases where you have certain exceptions and whatnot – there has been discussion, for example, of companies below a certain size not facing as high burdens. Things like that sound reasonable to me to consider.

Pegah Maham: With this being said, we're reaching the end of this hour. I have learned that how large the Brussels Effect will be very much depends on the specifics of the AI system, and that there will likely be some Brussels Effect. You both agree that this should not be rushed, that the definitions need to be taken care of, and that there are still serious problems that need clarification. You also emphasized international cooperation – inviting in the whole world, and other countries starting to pay more attention to this, so not just the EU but other jurisdictions too. I'm curious to see how this will play out in the next couple of months and years. And with this being said, a huge thank you to our two panelists today. I wish you a nice morning in the US and a nice evening here, and thanks for being here.

– End of transcript –


Published by:
SNV
16 November 2022