Regulatory Reactions to Disinformation

Policy Brief


Deliberate deception, the spreading of falsehoods and hate speech are not new phenomena of the digital media environment but age-old societal problems. Disinformation in the digital sphere reaches a new dimension, however, because social networks, video portals and search engines can easily be used and abused for such purposes. The risk is that political attitudes become distorted, extremist sentiments are reinforced and confidence in institutions such as elections, parliaments and the media is undermined. This poses serious challenges for democratic societies.

At the German state and federal levels, and at the EU level, political decision-makers are reacting to the disinformation dilemma. Attempts at crafting regulations and political solutions, however, have so far proved largely unsuitable for constraining disinformation. The German Network Enforcement Act (NetzDG), for example, which essentially aims to speed up the removal of content that is illegal under criminal law – such as an inciting tweet – does not apply to most disinformation online. This is mainly because disinformation often operates in a legal gray area: it is frequently unclear what is covered by freedom of expression and what is in fact illegal.

Existing measures by the European Commission are not a suitable means of curbing disinformation, either. The voluntary EU Code of Practice on Disinformation, for example, called on platforms to disclose online political advertisements. But the publicly accessible ad databases offered by Facebook, Google and Twitter were underdeveloped and incomplete, and the Code of Practice does not stipulate any sanctioning mechanisms. The EU itself has recognized these weaknesses of self-regulation and is now considering a “Digital Services Act” – a legislative package that could lay down clear rules for platforms as well as means to sanction noncompliance.

Effective measures against disinformation could also emerge in other regulatory areas such as media oversight, though this has not been the case so far. One example is the Interstate Media Treaty that Germany’s federal states are currently drafting. Under the draft treaty, some social networks, search engines and video portals would be placed under regulatory oversight for the first time. The proposed reporting obligations, under which companies would have to explain their search and sorting algorithms, could be helpful in addressing the problem of disinformation: rules on algorithm transparency could, in theory, provide a better understanding of how the algorithm-driven news environment works. But even those rules are not concrete enough, and the same can be said of the transparency rules for online political ads.

Taken as a whole, the approach to tackling disinformation has been uncoordinated and piecemeal. It does, however, mark a first attempt to deal with a societal problem that touches on questions of freedom of expression, the influence of information and communication technologies on political decision-making, the weakening of journalistic gatekeeping and the market power of big global corporations. In the future, precisely because so many difficult issues are involved, it makes sense to tackle disinformation by combining solutions from different legal and political fields.

First, existing rules need to be enforced more stringently in the short term. That pertains not only to criminal law but also to privacy law, because the personalized, attention-driven news environment of social networks, video portals and search engines – places where disinformation spreads particularly well – only functions by way of extensive tracking and profiling. Strict enforcement of privacy rules, such as those of the EU’s General Data Protection Regulation (GDPR), can at least limit such data collection. For party-political communication on the internet, including political ads, however, clear guidelines that combine the law on political parties, media regulation and data protection have yet to be created.

In the medium term, a closer integration of data protection and competition law, which has been under consideration for some time, could account for how the data and market power of a few large companies facilitates the spread of disinformation. That raises the question of how appropriate oversight mechanisms for digital services such as social networks and search engines could be designed in the future. Such debates are already taking place in other countries: a specialized agency for social networks is being discussed in France and the United Kingdom, for example. Such an agency could focus on the ways in which disinformation spreads (as opposed to removing individual pieces of content) and oversee whether and how companies have established the necessary processes.

Any political solution must be evidence-based. This means that research institutions and regulatory authorities must be able to analyze the extent and impact of disinformation in the digital realm. At present, such analyses are often impossible because large companies rarely grant researchers access to their data. To produce useful studies, academics and regulatory decision-makers need better access to data, in accordance with privacy rules.

October 15, 2019
Author:

Dr. Julian Jaursch, Project Director "Strengthening the Digital Public Sphere | Policy"