Securing Artificial Intelligence

Policy Brief


Over the last five years, many large companies have begun to integrate artificial intelligence systems into their IT infrastructure, with machine learning as one of the most widely used technologies. The spread and use of artificial intelligence will continue to grow and accelerate. According to IDC, a market research firm, worldwide industry spending on artificial intelligence will reach $35.8 billion in 2019 and is forecast to more than double to $79.2 billion by 2022, an annual growth rate of 38 percent. Today, 72 percent of business executives believe that artificial intelligence will be the most significant business advantage for their company, according to PwC, a consultancy. In the coming years, we can expect the investment boom in artificial intelligence to reach the public sector as well as the military. This will lead to artificial intelligence systems being integrated further into many sensitive areas of society, such as critical infrastructures, courts, surveillance systems and military assets.

For governments and policy-makers dealing with national security and cybersecurity matters, but also for industry, this poses a new challenge. The main reason is that the diffusion of machine learning extends the attack surface of our already vulnerable digital infrastructures. Vulnerabilities in conventional software and hardware are complemented by machine learning-specific ones. One example is the training data, which attackers can manipulate to compromise the machine learning model. This attack vector does not exist in conventional software, which does not learn from training data. Additionally, a substantial amount of this attack surface might be beyond the reach of the company or government agency using and protecting the system and its adjacent IT infrastructure. A machine learning system requires training data, potentially acquired from third parties, which may already have been manipulated. Similarly, certain machine learning models rely on input from the physical world, which also makes them vulnerable to the manipulation of physical objects. A facial recognition camera can be fooled into believing that people are not there if they wear specially crafted glasses or clothes.
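To make the training-data attack vector concrete, the following is a minimal, illustrative sketch of a label-flipping poisoning attack, written in Python and assuming the scikit-learn library is available. The synthetic dataset, the 20 percent flip rate and the choice of a logistic regression model are arbitrary stand-ins for illustration only; they are not drawn from the brief or from any specific incident.

```python
# Illustrative sketch: an attacker who can tamper with training data
# (e.g. data acquired from a third party) flips a fraction of the labels
# before training, degrading the resulting model. All numbers are toy values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic dataset standing in for externally sourced training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on the given labels and report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Baseline: model trained on untampered data.
print("clean accuracy:   ", train_and_score(y_train))

# Attack: flip 20% of the training labels before training.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned accuracy:", train_and_score(poisoned))
```

Even this crude attack measurably lowers accuracy; more targeted poisoning can introduce specific misclassifications while leaving overall accuracy almost unchanged, which makes it much harder to detect.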

The diffusion of machine learning systems not only creates more vulnerabilities that are harder to control but can also, if attacked successfully, trigger chain reactions affecting many other systems because of their inherent speed and automation. If several machine learning models rely on each other for decision-making, compromising one might automatically lead to wrong decisions by the subsequent systems, unless special safeguards are in place. One such safeguard could be that, for certain decisions, a human always has to approve a decision made by such a system before it triggers further actions. In addition, machine learning makes the detection and attribution of attacks harder. Detecting unusual behavior, distinguishing it from mistakes made, for example, by the developers, and tracing it back to the original point where the attacker manipulated the system is difficult, as it requires a full understanding of the model's decision-making process. Attribution is further complicated by the fact that interference can take place at many stages, in the virtual as well as the physical world. It might, for example, be impossible to prove who placed patches on a street to misdirect passing autonomous vehicles.
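A minimal sketch of such a human-approval safeguard might look as follows, assuming a hypothetical decision pipeline. The Decision structure, the 0.9 confidence threshold and the example actions are invented for illustration and are not part of the brief.

```python
# Illustrative human-in-the-loop safeguard: high-stakes or low-confidence
# decisions are queued for manual approval instead of automatically
# triggering downstream systems. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # downstream action the model wants to trigger
    confidence: float    # model confidence in the decision (0.0 - 1.0)
    high_stakes: bool    # whether the decision affects a sensitive system

def requires_human_approval(decision: Decision) -> bool:
    # Hold anything that is high-stakes or where the model is uncertain.
    return decision.high_stakes or decision.confidence < 0.9

def execute(decision: Decision, approved_by_human: bool = False) -> str:
    if requires_human_approval(decision) and not approved_by_human:
        return f"QUEUED for human review: {decision.action}"
    return f"EXECUTED: {decision.action}"

print(execute(Decision("reroute vehicle fleet", confidence=0.75, high_stakes=True)))
print(execute(Decision("adjust grid load forecast", confidence=0.97, high_stakes=False)))
```

The design choice here is simply to break the automated chain at defined points: a compromised upstream model can still produce a wrong recommendation, but it cannot propagate that decision into sensitive downstream systems without a human in the loop.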

In the past, neither the Internet infrastructure nor the technology built on top of it was necessarily secure by design. This offered militaries, intelligence agencies and criminal groups new avenues to pursue their respective goals. We should not repeat the same mistakes with machine learning. A key requirement is accurate threat modeling for machine learning applications that are to be deployed in high-stakes decision domains (military, critical infrastructure, public safety), combined with security by design as well as resilience mechanisms and safeguards.

Governments and policy-makers seeking to address the security risks of machine learning should, as a first step, focus on where machine learning is applied at the intersection with national security. This includes traditional areas like law enforcement and intelligence services (e.g. facial recognition in surveillance, riot control or crisis prediction and prevention) as well as applications in infrastructures, like process optimization in power grids or machine learning systems powering large fleets of autonomous vehicles. This domain is likely to show a large divergence between the low level of adversarial interference that was assumed in the design of machine learning systems until very recently and the real-life threat models of these use cases. Considering that security is a precondition for successful digitalisation, the security aspects of machine learning must be integrated at the level of national artificial intelligence strategies.

Even though it is difficult to predict whether information security will become a precondition for the successful development of machine learning, securing machine learning is indispensable, especially when it comes to high-stakes applications such as national security. The clock is ticking.

17 October 2019
Author:

Dr. Sven Herpig
