Transcript of the background discussion "How social media could be regulated"

Transcript

To counter disinformation and hate speech on social networks, search engines and video platforms, governments in Europe have in recent years relied primarily on voluntary measures by the platforms. Since these measures have many weaknesses, however, regulatory interventions are now being considered, for example at the European Commission, in France and in the United Kingdom. The furthest-reaching ideas so far come from a French government commission and include, among other things, the creation of an oversight body for social media (the English-language report can be read here).

The computer scientist Serge Abiteboul was part of this expert group. In an SNV background discussion on November 27, 2019, he presented the French regulatory proposals in detail, discussed which challenges they actually address and made the case for a European approach.

What follows is the transcript of the background discussion, which was conducted in English. The text has been edited for readability. The spoken word takes precedence.

An audio recording (approx. 40 minutes; without the guests' questions) is available here.


- Start of transcript. -

Julian Jaursch: Hello everyone, welcome to SNV. My name is Julian. I'm leading a project here at SNV on the topic of disinformation and strengthening the digital public sphere. I'm very happy to have Serge Abiteboul here with us tonight. This evening there will be very brief introductory remarks from my side, in which I will just give a bit of context. I have a couple of questions for Serge after that. But then there'll be a lot of time and room for you guys to ask questions and engage in discussion with the expert here tonight.

So, we'll just jump right into it. Europe has been struggling with online disinformation, with hate speech, with maliciously amplified political campaigns and mass opinions for a while now. This is not necessarily a new problem: Disinformation has been around since the printing press and TV. But the way that we experience it online is different. The way that it's algorithmically curated and spread widely, but also at the same time to a very fragmented audience, is different. For a long time, the reaction to that was that we let platforms such as Google, YouTube, Facebook and Twitter, but also smaller platforms, handle that by themselves. That view has changed in the last couple of months and years. It's really not a debate anymore about whether we need rules for social media, but about how to do that. And that's part of the reason why we're here tonight. Just a very quick rundown of what's already out there. In Germany, the federal states are trying to establish some sort of oversight mechanism for social media platforms. We also have a fairly well-known law, the Network Enforcement Act, NetzDG, which deals with hate speech and content that's illegal under German law. The UK, for example, has a white paper on online harms ranging from revenge porn to disinformation to terrorist content, so it is very broad in scope. In the US, there are several bills addressing political ad transparency. And on the EU level, there's the voluntary Code of Practice against disinformation, which is a self-regulatory mechanism.

Right now, there are developments in France as well, which will be the topic of tonight's debate. There was an expert group of ten people who wrote a report. They were able to take an inside look at Facebook's content moderation practices and they made some recommendations based on that. So, I will not go into a lot of detail here because we have the expert sitting right next to me, but the basic premise was the proposal of a core regulatory body whose role is not to regulate specific pieces of content and make rules to delete them, but to check the content moderation processes of the company. This is very different from the German NetzDG, which makes the rules for deletion. And it's also very different from the EU's Code of Practice, which is self-regulatory. So, that's what I want to talk about tonight.

I'm very happy to have Serge with us here tonight. He was one of the experts of the group that I was just talking about. He led the so-called Facebook mission based on his many talents and professional expertise as a researcher. He's an often-cited computer scientist at France's National Institute for Research in Computer Science and Automation. He's also an activist for ethical data management. And he's also a regulator because he’s on the executive board of ARCEP, the French independent agency for regulating telecommunications. That's somewhat similar to the U.S. Federal Communications Commission and somewhat similar to Germany's Bundesnetzagentur, the Federal Network Agency. So, Serge has a lot of experience from a lot of different perspectives and that's what I want to tap into tonight and ask you to share some of these perspectives with us.

Before I have some questions for you, Serge, I would like to give you the chance to introduce yourself. Please tell us what background you brought to this Facebook mission and how you approached it.

Serge Abiteboul: [04:54 in the audio recording] Thank you, Julian. So, a bit of background. First of all, I'm a computer scientist. I'm a researcher in computer science and, of course, everything that I'm going to say is biased by that. I'm a strong believer in technology, but at the same time I realize that technology sometimes can be harmful. So, I'm not a blind believer, I believe that we should fight to build the world we like with this technology. That being said, when working on the topic of hate speech and social media, you immediately run into conflict. On one hand, there is freedom of speech. On the other hand, there is illegal content, content that can be harmful. On the one hand, freedom of speech is a religion in some countries. And on the other hand, it's easy to say the government should control content and we get rid of the problem. I tend to believe that it's a more complicated issue that will require a fight from civil society. So, essentially the conclusion is that we have to fight together to make this happen.

For some people, social media is the devil, Facebook is the devil, Twitter is the devil. Personally, I don’t think so.

Now, I want to come back to this idea that it's not all positive or all negative. For a number of people, social media is the devil, Facebook is the devil, Twitter is the devil. Personally, I don't think so. I have an account on Facebook. I'm very happy to meet my friends on Facebook. I also realize that Facebook and these social media are the only means for some people to be heard. We come from a civilization, from a culture where only the powerful people could say something on the radio, on TV or in the newspapers. With social media, you can be heard even as a small association. You can be a group of people in a suburb, where nobody has heard of you, but you have a good idea and it spreads. I strongly believe that this is a great achievement of social media, together with the fact that I can remain in touch with my friends and I can meet new people. So, I'm a strong believer in social media. That's point one.

Point two. I cannot agree with what you find on social media sometimes. I cannot agree to see terrorists using social media. I cannot get used to the idea that pedophiles use social media. I cannot accept the fact that a kid is using social media and is harassed. Now, some suggest getting rid of social media. But personally, I think that they are rather something that we should fight for.

Now, the context of the Facebook mission. Our President, Emmanuel Macron, met with Mark Zuckerberg, the head of Facebook, and they agreed there was a problem with social media. Mark Zuckerberg said, yes, okay, we should fix that together instead of fighting each other. He was willing to open his company to a team of people from our administration. So, France put together a group of people. There were computer scientists, who understand the technology. Lawyers, because a lot of it is how to have the law respected on the internet. The internet cannot be a space where everything is permitted. And there were also representatives from the police who fight cybercrime, and so on. In total, there were ten people. And Facebook said, you can ask all the questions you want, you can go wherever you like, you can get all the data you like.

In fact, it was not all that easy to get into the company. At the beginning, we decided that we were just going to observe. We forced ourselves not to be aggressive. But after a month, we started to challenge them, as we had established some kind of relationship.

Of course, they gave us the official story: that in the moderation centers, all the moderators are happy. When we went to Barcelona to visit their moderation center with its very happy moderators and then read the New York Times story about the moderators in Phoenix, we asked ourselves: which is the truth?

At first, it wasn’t easy to get into Facebook’s company structures, but we challenged them and established a relationship.

We became convinced after a while that Facebook was really trying hard to fix the problem, as otherwise it could damage their business. We got most of the information we wanted, though sometimes we had to insist a little. It was only three months and there were not enough of us to evaluate the code in depth. But they opened up their processes and showed us how they work. And I think we got a pretty good idea of that.

Then the government started insisting that our report go out quickly. So, we put together the report and we actually came up with a new way of doing the moderation. But before I tell you what it is, I want to tell you what I will not talk about, even though it is critically important: education. I think we have to educate people on how to use social media. I'm not leaving this out because it's not important but because I'm using the lens of a regulator.

Now, what else can you do? So, first, there is the idea that social media will self-regulate. This is a story that we've been told since the beginning of social media. That they are the good guys and don't do evil. They are going to fix the problems. I think it's been proven that this doesn't work, for a number of reasons. Fundamentally, even if they were doing their job perfectly – and they are very far from that – there would be issues with that. Who defines "perfectly"? And that definition is very important, because it is, in a way, about defining our society, as we are living with these things. We are looking at them all the time, we get information from them, we connect with people through them. Is it the role of a private company that has not been elected and that doesn't represent the people to define our society? Even if they were excellent, there are good reasons why society should not accept that a private company defines what a society should be. That's self-regulation, and I think that it doesn't work.

An alternative would be an extremely powerful government deciding everything. Which is even worse in my opinion: to have a government decide what's true, what's not true, what you can say, what you cannot say. In a way, that is the end of democracy. If you rule out these two solutions – our President put it this way – between the free American way and the Chinese or Russian totalitarian way, we have to design a European solution a bit more carefully. Because this solution should also satisfy countries in South America, it should satisfy many countries in the world. Europe can lead the way here, the same way that Europe led the way with the GDPR.

Let's get into some moderation issues. The regulation is performed jointly by the platform and the regulator. First of all, the platform has to be completely transparent. At the moment, they are not. They publish things, but we can't verify what they are really doing. It's very difficult for an outsider to know what is going on. So, transparency is really critical.

The second point is, together, the regulator and the platform set up some objectives. Say, well, this is what we want to do. And this can change regularly. Why does it have to change regularly? These social media are extremely agile; they change all the time. For instance, I had a PhD student who did a study on Facebook, and Facebook changed the privacy policy three times during her study. For a researcher, studying a moving object like that is impossible. So, we need transparency and a discussion with the regulator about objectives. And these are statistical objectives. The regulator shouldn't decide which specific content is no good. We have legal systems that define our values, and only the law can decide whether something is acceptable or not.

Now, what the regulator could do is tell the platform: when content is obviously wrong, take it out. And I'm not saying take it out within a certain given time frame, in a week or in a day. If content is viewed by 100,000 people in an hour, what good does it do if it's removed after 23 hours and 59 minutes? If it's obviously wrong, as soon as the platform knows about it, they should remove it. Of course, it's not possible to remove it immediately, as there are millions of pieces of content arriving every second. But we are talking about obligations to act in a way that gets the best results.

Even if self-regulation worked, there are good reasons why society should not accept that a private company defines what a society should be.

You put together the objectives, and the regulator should have the means to evaluate the results. This is something that's defining our society more and more, and governments look at that and say: oh, yes, that's interesting. Those things shouldn't be corporate secrets; they are important for society. The regulator should have access to the information, and they should be able to do a very serious evaluation. And in case the platform does not act properly to solve the issue, the regulator should be able to punish it – either with heavy penalties or by blocking the platform.

There is something we all have to realize. It's very easy to put the spotlight on one platform and say: you're not doing your job correctly, because this particular piece of content has been moderated wrongly. But maybe they did very well on the rest. And what do you do about the bad guys? Twitter – to change the target – has difficulties because they are not as rich as Google or Facebook, but with their means, they could still do better if you set objectives together with them. Now, there are other platforms that are much, much worse. vk.com, to name one, where all the fascists in France moved when they ran into problems with the big platforms. If you want to report to vk.com that a particular piece of content is illegal, be my guest. Even the people in charge of fighting cybercrime in France don't have a phone number to call. They can send emails, which are simply ignored.

The question is, do we want to fix the problems? To fix the problems, you sit down with the platform and tell them, this is the problem, what do you propose, what can you do about it? Can you improve the situation? If they don't, I would have no problem if some of the platforms are closed.

Julian Jaursch: You mentioned some of the things that this regulatory body should do. You said that it should have access to the company and that there should be transparency towards the regulator.

Serge Abiteboul: The means to evaluate what's going on.

Julian Jaursch: Can you specify that? What are the powers that this body would have to have in order to be able to evaluate properly?

Serge Abiteboul: [22:46 in the audio recording] It's an excellent question and that's what the regulator will have to discuss with the platform, because it changes all the time. I can give you an example. One of the problems of social media is that they keep saying: we're just offering content that other people create, we're not editors. For me, you're an editor when you decide that certain content is going to be pushed to a million people. Even when you decide that on Twitter this one is on top and this one is second, we all know that we're going to read the first one. We're never going to go to the end of the list. This gives you a responsibility.

And to come back to your question, how do you push content? What makes you prefer one to another? These companies change that all the time. This talk is not about business, but for businesses, this is very difficult because they change some policy and suddenly your content disappears, and you stop doing business. They can basically get you out of business by pushing you out of the first pages.

Julian Jaursch: Would it also be helpful for the proposed regulator to look at these algorithms that push some things to the top and leave out others? Or is that the company's own business, or something that the company should decide together with the regulator?

Serge Abiteboul: Do you think deciding that something is pushed can be part of harming people?

Julian Jaursch: Yeah, I think so.

Serge Abiteboul: Then the regulator should have access. Of course, that means that the regulator has to have the means to protect this information, because it is a business secret. But we do that all the time. As you said, I'm a telecom regulator in France. We constantly get information about the telecom companies that I am not allowed to share. This is covered as a business secret, perfect. But if you think that it is critical information, the regulator should have the means to access it.

Julian Jaursch: And this decision of what can be important or what can be harmful, who would make that? What I'm trying to get at is that in your report you envision a very large role for civil society and for academia. Maybe you can talk a little bit about how they would interact with the regulator and with the companies? Why did you think this role is important and who gets to decide what should be covered?

Serge Abiteboul: [26:13 in the audio recording] So, this goes back to something I said at the beginning. It's not sufficient that you do a good job, it's also critical that you are accepted by the citizens. Because, again, this is shaping our society, and citizens should feel comfortable about it. It's important that there is a discussion between the platform and the regulator to set up the objectives. But the objectives are for the benefit of society. So, society should be involved in the process.

Now, who in society? Not everybody, of course, as not every citizen has the time to go and look at those things. But associations at large should be involved in the process. Of course, this is not easy, because associations tend to have biases, like citizens do. So, if you have an association for the defense of civil rights, you're going to look at freedom of speech as the primary goal and be obsessed by that. Which is okay, because that is part of the picture. And if you're an association for the defense of gay people or for the defense of Muslims or Jews, then you're going to be obsessed with prohibiting any kind of racism or sexism or whatever on the internet. That's your business and that's correct.

The objectives for content moderation should be set by the regulator and the platform, but society should be involved in the process.

So, all these associations should sit together and define together. Because essentially what you're talking about is the definition of the world we want to live in. That is no simple business, but it's a very serious one.

You also mentioned researchers. That's also essential. And now we get into a bit of technology. How do you detect that content is bad? How do you detect that content should be observed, erased or blocked? The first way is by notification. If you go to social media and you see some content that you think is harmful, you can click and say: this shouldn't be tolerated. You can report it to the German authority under the NetzDG. And that's one flow of information to the platform.

Another flow of information comes from algorithms. Algorithms look at the content and detect harmful content in real time. It's very difficult to get – and that's information that we couldn't get from Facebook – an estimation of the quality of both. I can tell you one thing that someone told me off the record, an engineer, not from Facebook. At the moment, the algorithms they run are actually better than human beings at notification. Reports of hate speech or fake news by web users are actually very bad. These people are not lawyers, they don't know the law, they just report. The algorithm is less biased. It makes fewer mistakes from that point of view. There is also the advantage of being able to detect the harm of a message very early, before it can spread. This works very well for some content, child pornography or terrorism, for instance. With hate speech, the algorithm is still no worse than a human, but it's a more difficult task. So, we have to be a bit more precise there.

There is a zone that is obviously illegal. But then there is a grey zone, and I'll give you an example of that. Facebook explained to us which rules they were using and they showed us content. And there was one post where we were asked what to do about it. As a human being, I would say: I don't want to see that on the internet. Then I analyzed the content by the rules they gave us and I said: Facebook should block that. But would this content be considered illegal under French law or not? We had two lawyers with us, so we asked them: what do you think? One lawyer said it's clearly illegal under French law. And the other one said: I think it can be defended. And they started citing case law to each other. After ten minutes, we stopped them and asked: what's the summary of your discussion? The summary of the discussion was that this goes in front of a court and the judge decides.

The grey area is very big. And I can tell you other stories. We all have stories of mistakes that a platform made. So, it's difficult for a human being, it's difficult for a lawyer, it's difficult for an algorithm. While the algorithm can work much faster and keeps improving, don't expect perfection. And that's why it is commonly agreed that once an algorithm decides that content should be blocked, there should still be a human being behind that decision. To my knowledge, it's not blocked directly. As I said, there are exceptions for terrorism and child pornography.

Now, let's come back to civil society. How do the algorithms work? Pretty much all the same way. Machine learning algorithms learn from tons of cases. What's a case? It's a piece of content with an annotation. This content is good. This content is bad. This content is blocked. This content is good. This content is blocked. And so on. You train the algorithm, and then when you give it another piece of content, it tries to find similarities, and based on these similarities it says whether the content should be blocked or not. You have to realize the importance of the training data. The quality of the result is basically going to be proportional to the quality of the training data. Who can get the training data? For the moment, Facebook, Twitter and Google. The big guys.
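To make the training-data idea concrete, here is a minimal, purely illustrative sketch, not any platform's actual system: a toy classifier is trained on a handful of invented annotated examples and then asked to label new content. The examples, labels and choice of library (scikit-learn) are assumptions for illustration only.

```python
# Minimal illustrative sketch (not any platform's actual system): a toy text
# classifier learns from annotated examples ("ok" vs. "block") and then
# predicts a label for unseen content. Data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated training cases: (content, annotation) pairs.
training_cases = [
    ("have a nice day everyone", "ok"),
    ("let's meet for lunch tomorrow", "ok"),
    ("I will hurt you and your family", "block"),
    ("people like you should disappear", "block"),
]
texts, labels = zip(*training_cases)

# Bag-of-words features plus a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# New content is labeled based on similarity to the training cases; the
# quality of these predictions depends directly on the quality and
# coverage of the annotated training data.
print(model.predict(["you should disappear", "see you at lunch tomorrow"]))
```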

It would be great if researchers in philosophy or psychology could help. The difficulties we're talking about here are about culture. Things might be different in different countries. I also want to mention another problem related to this: How about the small guys, how about the small companies? If you are too strict, these small companies are basically going to be out of business very fast. They won't be able to do it. How can you do it for the small companies then? You can decide that training data sets are essentially public interest data that should be shared. This is an interest beyond the big companies, it's an interest for everybody. It should be built together with the civil society and with researchers and it should be accessible by the small companies.

The other thing that we proposed in our report is the three levels. We're very proud of them because they are original. The idea is that there are systemic platforms, the big guys with millions of users. Those should be controlled by the regulator. Below a certain threshold, you are not. That does not mean that this is an open door. We have laws in countries like France and Germany, and the law can punish a company that does not try to fix the problem. And in between, we're going to assume, a priori, that you are good guys and you're not going to be regulated. But if you misbehave regularly, then the regulator can decide that you're going to be part of the regulation as well.

Essentially, what we want to do is avoid cases like vk.com, for instance. After a certain level of bad behavior, the total number of users in Europe gets evaluated, and even if it is below the threshold, the regulator says: too bad, you're going to be regulated now.

Julian Jaursch: With that idea in mind and this three-pronged approach to it, we can open it up for everyone now. Thank you very much for the insights that you already gave. I personally thought there were a lot of points worthy of discussion. So, if there are any first questions, please state your name and your background, and let's hear it.

Guest 1: How do you define terrorism, and do you think that it's really so easy to detect terrorism? For example, we here in Europe see the Dalai Lama as a Nobel Peace Prize winner, but in China he's considered a terrorist. Or, to be a bit more specific, when I look at Hezbollah, even we in Europe don't have a clear definition of whether this is a terrorist organization or not. We say that there's a political arm and that there's a military arm, which is terrorist. How can an algorithm determine which is still the political arm and which is already the military arm, so that the content can be dropped?

Serge Abiteboul: That's an excellent question. I do not have an answer, but I can tell you how we can collectively get an answer. Collectively – that's where society comes in, that's where the discussion should be an open discussion, a team discussion. Now, there's something that I completely forgot to say before: I do not believe the answer should be national. Writing this report, we essentially had Europe in mind. Leaving aside the definition of terrorism – there will not be any worldwide definition of terrorism that would be accepted, as your examples show. But I believe that in Europe we have common ground. I'm not going to say it's easy, because there are differences between the countries. It's not going to be easy to get to these definitions, to the definition of what fake news is, for instance. But I think we can get to reasonable definitions if we sit together at the European level.

Finding common definitions isn't easy, but I think if we work together on a European level, we can get it done.

There are two reasons why this should be at the European level. The first reason is very practical. When you discuss with these big guys, a great country like France doesn't count, a great country like Germany doesn't count. Together, Europe is a big market, and Europe can say something that actually scares social media companies. As I said, to start, this is going to be a fight. It's not going to be something that's given to us. It's something that we have to accomplish together. What I'm saying is that, first, Europe has the scale to face the big platforms. The second reason is that we are a little afraid of what a national government could decide. Why? Because there is a lot of feeling involved in politics and among politicians. When they are insulted on the web, they feel it, they react to it, and they say: let's make a law. I don't like it that they insult my work on the internet. I've done these great things, I don't like it that they insult them on the internet, let me block that.

To come back to your question: terrorism is not going to be easy to define at the world level; even at the European level, it's not going to be easy to define. I actually think that at some point there might be consensus. For example, if I showed you terrorist material I have seen, I'm sure that everybody would agree that it is terrorist material. Actually, some social media platforms have thought about presenting some content and saying: this content is controversial. Again, there is a grey area that we have to define and discuss.

Guest 2: Thank you very much. I just wanted to get back to you on your three-pronged approach. I would disagree with you that below a certain threshold you don't need regulation. I think everybody should be regulated, that all social media companies should be regulated. But how we regulate them and the standards they need to meet should be different. Now, we just concluded a test where we bought tens of thousands of fake accounts on all platforms, and one of the takeaways we've seen is that size actually doesn't matter. Twitter is actually the most effective when it comes to blocking malicious content and removing it. Within the Facebook ownership, Instagram is significantly worse than Facebook. They're owned by the same company, but the amount of resources they put in makes a difference. So, I don't necessarily think that the size of the platform should define the consequences for the platform. It's the same way you would regulate fire safety: ask what the consequences are if this goes bad and set standards based on that. Have you thought of this different design set-up already?

Serge Abiteboul: We have thought of it. I'm a regulator; I know that there are so many Silicon Valley companies that we cannot keep track of everything, we cannot verify everything. There is a limit to the amount of effort and money the government can put into such a process. Now, that being said, I totally agree with you that you can't do everything you like just because you're small. And that's not at all what we're proposing. Of course, if you're small, you still have to obey the law, and if you don't obey the law, you will be prosecuted. Like I said, it's not an open door; you still have to obey the law.

So, in a world with infinite resources, I would regulate everybody. In a world where this is not the case, I focus on the systemic platforms, those that can potentially create the most harm. Then, at some kind of intermediate level, I say: we give you credit that you behave properly. If we realize there are too many cases of illegal behavior, you get promoted to the regulated group.

Let me give you an example. A group of four people has a common hobby, and they put videos on the internet that their members provide. I know of an example like that: when the GDPR arrived, they couldn't check all the videos, because they didn't have the means to do so. Essentially, they had to stop accepting videos from their members. That's not good, because I want these small associations to be able to continue. So, if they don't misbehave, if they don't do anything illegal, they don't have to be scrutinized all the time. Again, the question is: where do you put that threshold? Personally, as a regulator, I was thinking of the regulator's job. We tried to put together a proposal that is reasonably implementable.

Guest 3: My question is about the part of the Loi Avia bill that says: we want you to show good behavior by removing clearly illegal content within 24 hours. And if you show a pattern of bad behavior, a pattern of not doing that, we might levy a fine on you. And that might be, say, a percentage of your revenue, say, four percent. There are two problems with that. One is getting accurate information about what they're doing in order to judge whether their behavior is good or bad. The second is deciding when you start to fine. Is it when they're already doing 60 percent removal within 24 hours, 70 percent, 80 percent? So, we put together another proposal about what would be a safer thing to say: what we want you to do is make a reasonable effort to remove content within 24 hours, and what we want you to do is spend a percentage of your revenue on moderation. So, don't give us that four percent in fines; make sure you're putting that money into moderation and make a reasonable effort. That seems like a win-win. The regulator gets them to make a reasonable effort and the internet platform gets to put revenue into moderation, not fines. What are your thoughts on that?

Serge Abiteboul: So, this is very close to the kind of thing we're proposing. We don't want the regulator to fine you because you missed some content; we want to make sure that there is basically a discussion between the platform and the regulator and that they set up some targets together. As I said, this is not new. This is what's done in the financial world, for instance, or in the telecom business: you regulate by controlling the processes of the companies. This is also an agile method, because you can adapt to the company as well.

Basically, a big fine is not going to solve this issue. The point is making sure these platforms invest sufficient amounts of money and resources to get results.

About the fining. The regulator has to decide how bad your behavior is, how far you're from the target. So, for instance, in telecom in France, we rarely give penalties. It's usually sufficient to make it official and say what you did is not proper, we agree that you have to fix this problem and give us, within six months, the plan that you're proposing to fix it. And if you fix it, there's no reason to fine you. Basically, a big fine is not going to solve this issue. The point is making sure these platforms invest sufficient amounts of money and resources to get results. We don't care about giving them fines, that's only necessary when they refuse to play the game.

About what you said: we never proposed imposing a fine if content is not removed within 24 hours. We never said that. We said we want to set up goals together to make sure that the company invests enough. We didn't go as far as stating what percentage of its profit a company should pay, because that's difficult. Some of these companies don't make profits.

Guest 4: I like that you stress the necessity of choosing an EU-level approach to this. Even if we regulate processes as well as content, processes should be regulated against the backdrop of existing law, right? And national law, when it comes to content, is extremely diverse. So, I'm wondering what you envisage an EU-level mechanism looking like and what its responsibilities would be?

Serge Abiteboul: The vision that we propose is that every country has a national regulator and then there is a European regulator who coordinates the work of the different countries, sets up some directives and checks that the laws being enacted by national governments are in agreement with European law. We want to avoid a government over-regulating, because there is a temptation to over-regulate. Basically, we want to apply checks and balances.

Guest 5: I had a question about the training data. What were the conversations around access to that training data like? I imagine they were rather adversarial, based on your description. And what type of access to training data would you envision moving forward, if this were to be encoded in the regulatory framework?

Serge Abiteboul: That's a critical point; I'm not sure I have an answer. The bottom line is, we want this training data to be used beyond the company that developed it. Possibly, we may want the training data to be defined and obtained through cooperation between companies. I think, for instance, that for terrorism and child pornography, Google and Facebook coordinate. So, it's feasible. With respect to access to this training data, there is the difficulty of privacy and the GDPR. So, you may have to adjust the GDPR for that. Basically, you want this data to be accessible while maintaining the privacy of the content, which is a difficulty.

Guest 5: Quick follow-up question. Was that the conversation you all had with Facebook? You said, we are interested in access to the training data; they said, GDPR?

Serge Abiteboul: No. We didn't ask for access to the training data because it's too big, we couldn't use it, and we didn't know the machine learning algorithm that was used on it. They just showed us cases.

Guest 6: Do you think that your proposal could strengthen the cyber sovereignty of France or the EU? If so, what would be the consequences for global internet governance, and if not, what would be the difference from the Chinese position on internet sovereignty?

Serge Abiteboul: If by cyber sovereignty you mean the fact that Europe is responsible for defining its own future, I think our proposal does help. Because when you define what social media does, you try to define what society is doing. And that's why it's difficult; defining society is not easy.

Guest 6: Could it be comparable to what China proposes for the Chinese cyberspace?

Serge Abiteboul: There is this issue of defining the terminology of building your own cyberspace. Our proposal doesn't aim at all in the Chinese direction of locking things down and so on; it's just about making sure that your values and your laws are respected for users in Europe.

Guest 7: We talked a lot about how important it is to remove most or all of the illegal and hateful content. But we've also established that there are legal grey areas. And as you said in your introduction, it's an important space for many people and the only way for many people to connect and to make their voices heard. So, when you're talking about automatic algorithms, for example machine learning algorithms, these are deterministic systems, they're very well trained in most cases and you might even be able to get the accuracy to a very high percentage. But there will always be errors. So, my question would be: how do you make sure that these errors don't negatively affect the lives of people who depend on being able to connect via those platforms?

Serge Abiteboul: Algorithms detect content that the algorithm believes is not okay, maybe even illegal. It's forwarded to a human moderator who decides whether it's okay or not. The algorithm only flags the content. Now, to be honest, there is a catch, because after a while the human who is exposed to a well-working algorithm is just going to follow the algorithm's decisions. That's one problem. But let's say the process does not end there. Your content has been blocked, but you can look into it and appeal. And this appeal is going to be processed by a different human being, whom you can address and share your arguments with. Algorithms do make mistakes, we know that. Fewer than humans, but they do make mistakes. There is this grey area, so appeal is very important. Now, there is another side to that. As you said, these algorithms have a tendency to work in a rather deterministic way. So, it's very important to verify whether there is a bias. And that's the job of the company, but it's also the job of the regulator to verify that there is no bias in the algorithm – that it doesn't, for instance, penalize particular parties.
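A purely conceptual sketch of this flag-review-appeal flow could look as follows; the names and data structures below are hypothetical, not any platform's real system. The two constraints it encodes are that the algorithm only flags, and that an appeal goes to a different reviewer than the first decision.

```python
# Conceptual sketch of the flag-review-appeal flow described above. All names
# and data structures are hypothetical, not any platform's real system.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Item:
    content: str
    flagged: bool = False
    decision: Optional[str] = None         # "keep" or "block"
    reviewers: List[str] = field(default_factory=list)

def algorithm_flag(item: Item, score: float, threshold: float = 0.9) -> None:
    # The algorithm only flags content; it never blocks anything by itself.
    item.flagged = score >= threshold

def human_review(item: Item, moderator: str, verdict: str) -> None:
    item.decision = verdict
    item.reviewers.append(moderator)

def appeal(item: Item, moderator: str, verdict: str) -> None:
    # The appeal must be handled by someone other than the first reviewer.
    if moderator in item.reviewers:
        raise ValueError("appeal needs an independent reviewer")
    item.decision = verdict
    item.reviewers.append(moderator)

# Example: content is flagged, blocked by one moderator, restored on appeal.
post = Item("borderline political satire")
algorithm_flag(post, score=0.95)
if post.flagged:
    human_review(post, moderator="first_moderator", verdict="block")
appeal(post, moderator="second_moderator", verdict="keep")
print(post.decision)  # keep
```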

As a side issue, whenever you're using a machine learning algorithm for something important, you better have a process to verify that there is no bias in your algorithm! There are machine learning algorithms that are used in the penal system in the U.S. to decide about parole and it's been shown that they are biased. It's okay to live in a world where judges are biased, because they cannot do better; human judges are biased. But when you start using algorithms, you better check them properly.
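One simple form such a bias check could take, with made-up data and predictions purely for illustration, is to compare the false-positive rate of a hypothetical moderation classifier across groups of authors.

```python
# Illustrative bias check: compare the false-positive rate of a hypothetical
# moderation classifier across groups of authors. Data and predictions are
# entirely made up for the example.
from collections import defaultdict

# Each record: (author's group, model predicted "block", content truly illegal)
records = [
    ("group_a", True,  True),
    ("group_a", True,  False),   # false positive
    ("group_a", False, False),
    ("group_b", True,  False),   # false positive
    ("group_b", True,  False),   # false positive
    ("group_b", False, False),
]

false_positives = defaultdict(int)   # legitimate content wrongly blocked
legitimate = defaultdict(int)        # ground-truth legitimate content per group

for group, predicted_block, truly_illegal in records:
    if not truly_illegal:
        legitimate[group] += 1
        if predicted_block:
            false_positives[group] += 1

for group in sorted(legitimate):
    rate = false_positives[group] / legitimate[group]
    print(f"{group}: false-positive rate {rate:.2f}")
# A large gap between groups (here 0.50 vs. 0.67) would signal that the
# classifier penalizes one group more than the other and needs re-examination.
```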

Guest 8: I've got a question on the grey zone you mentioned, because the starting point for the whole discussion, as you said, and I think rightly so, is that we fear that social media platforms have a negative effect on our society and democracy if you let them loose as we are doing right now. There is terrorist content or child pornography that's clearly illegal, and there's the grey zone where we're not quite sure whether the content is illegal or not. But there's also a lot of discussion about content that is basically not illegal but should be taken down, moderated or reduced nevertheless, because at scale it can hurt society. Do you think we need special new rules for content that is not illegal but still worrisome?

Serge Abiteboul: The platforms are already blocking some content. The question is: should they alone decide what should be blocked or not? I think they shouldn't carry this responsibility on their own. If a platform decides that it's not going to show nudity, that's the company's choice. It's a private company, it can do what it likes. But if the platform decides that it's going to block hate speech, it's defining what counts as hate speech in our society. Therefore, we should be part of defining what they mean by hate speech. And they do that all the time. For example, when we were at Facebook, they introduced a new category regarding hate speech. They decided to add migrants as a subcategory of protected categories. They have such categories for Muslims or Jews, for instance. You may agree or disagree with that decision to add some protections for migrants – I agree in this case – but is it really the company's responsibility to decide? That's the real question. For fake news, it's the same thing.

If the platform decides that it's going to block hate speech, it's defining what counts as hate speech in our society. Therefore we should be part of defining what they mean by hate speech.

Guest 9: You were talking a lot about unlawful content. There's this whole subject of political advertisement, especially if it's coming from obscure sources. And we know that different platforms took different stances recently. During your project at Facebook, did you come across that subject and what's your opinion on it from a regulator's perspective?

Serge Abiteboul: In our work, we focused on hate speech on Facebook, but I think the topic you mentioned is another big problem for democracy. I also think that it's a very delicate problem, because politicians are closely involved. And that's an example where I would like a regulator to basically control and limit politicians who are pushing these ideas.

Guest 10: It's a tremendous report you worked on and it's quite an astonishing proposal you're making, but how likely is it to see the light of day in France or on the European level? Can you give us any context on that? Is a regulatory body going to be introduced in the next few months? Is there a discussion around this in France?

Serge Abiteboul: So, I think there are different sides to it. Society, in my opinion, is asking for something. You can hear more and more people being angry about different aspects. Whether it's political advertisements, fake news or hate speech, the platforms are being targeted more, because people feel like they are not good for society. On the other hand, I think that the platforms really understood the message after laws like the NetzDG or the Loi Avia that is being prepared in France.

They understand the risks. A couple of years ago, if you had asked a platform to come and tell you what they were doing, they wouldn't have paid attention. After a few bad articles in the press and after a couple of laws like the NetzDG, they now listen to us. I think we now need regulation in Europe, and I have insisted on the fact that it has to be European. I have met with members of the European Parliament, for instance, who really understood the need for that.

But, again, in the end, this has to come from the different countries.

Julian Jaursch: Well, I think that's an optimistic note to end on. To call for a push from research, from civil society, from national regulators to find a European solution to this. I want to thank you very much for laying out what that push could be. I think there are ideas on the table that we discussed today. So, thank you very much Serge, and thanks to all of you for your curiosity and your questions.

Serge Abiteboul: It was my pleasure.

 

- End of transcript. -