Transcript: Background Talk - Shaping the future of Artificial Intelligence – Comparing strategies from Brussels and DC #2 (US)
This is a transcript of the event “Shaping the future of Artificial Intelligence – Comparing strategies from Brussels and DC” with Ylli Bajraktari, Executive Director at the National Security Commission on Artificial Intelligence (NSCAI) and Kate Saslow, which was streamed on 22 June 2021 online.
You can rewatch the event here.
The text has been edited for better readability. The spoken word prevails.
- Start of the transcript -
Kate Saslow, project manager “AI Governance”, Stiftung Neue Verantwortung: Hello, and welcome to this SNV online background discussion. This is the second of two background discussions in the series, “Shaping the Future of Artificial Intelligence, Comparing Strategies From Brussels And DC”. Part one took place last week, where my colleague Philippe Lorenz spoke to Irina Orssich, a senior officer at the Directorate General CONNECT at the European Commission, working on AI policy and more specifically Europe’s Draft Proposal for the Artificial Intelligence Act.
While Part One focused mainly on the European regulatory landscape for AI technologies, today's talk will look across the pond to the United States to focus on the US approach to AI regulation and innovation. But before jumping in, I would like to make some introductions and share a few housekeeping rules.
My name is Kate Saslow, and I am the project manager for AI Governance here at the SNV, a think tank in Berlin working on various tech policy issues. My focus is on artificial intelligence and foreign policy, AI standardization, AI patents, and geopolitical and geo-economic affairs.
2021 has already been a very big year for AI policy. Even though we are only halfway through, there have been official documents, strategies and plans released both here in the European Union and in the United States. World leaders gathered just last week to discuss ways forward in multiple areas, including tech affairs. One outcome of these meetings was a tech alliance, to be established in the form of the EU-US Trade and Technology Council, for example. So, this is all a very timely and relevant backdrop for our two-part series of background talks. And today, I'm so excited to dive deeper into the United States AI policy landscape and understand how the National Security Commission on Artificial Intelligence, the NSCAI, seeks to guide the US and foster bipartisan activity around AI technologies.
I am so grateful to introduce today's guest, Ylli Bajraktari, who is Executive Director of the National Security Commission on AI. Before joining the NSCAI as Executive Director, he worked at the highest levels of US national security, for example as Chief of Staff to National Security Adviser Lieutenant General HR McMaster. He held a variety of leadership roles for former Deputy Secretary of Defense Robert Work, and served as special assistant to the Chairman of the Joint Chiefs of Staff, General Dempsey. He originally joined the Department of Defense in 2010, where he served in the Office of the Undersecretary for Policy as Country Director for Afghanistan and later India. Apart from this very impressive CV, he has also been extremely instrumental in the 700-plus page report from the NSCAI, whose mandate is to make recommendations to the President and Congress to advance the development of artificial intelligence, machine learning and associated technologies.
So, I briefly mentioned the final report published by the NSCAI. But maybe just to give some background context for the audience: the report presents an integrated national strategy to reorganize the government, reorient the nation, and rally America's closest allies and partners to defend and compete in the coming era of AI-accelerated competition and conflict. Ylli, you wrote a letter as a foreword to this report. I'd love to have you elaborate on the importance of the report and also on the Commission itself, its overall mission, and maybe how this mission may have changed within the three years since the Commission's inception.
Ylli Bajraktari, executive director, NSCAI: Thank you, Kate, so much for having me today. And I thank Stiftung Neue Verantwortung for hosting this timely session on artificial intelligence and the implications we face daily from this technology.
Let me talk a little bit about the Commission, because I think for the international audience these kinds of constructs, these kinds of Commissions, are really hard to understand, especially why the United States Congress creates a Commission like this. About three years ago, I think the United States Congress understood that things were moving so fast in the private sector with regard to artificial intelligence that we, the United States government, faced a kind of left-behind moment: things had moved so fast that the government had failed to adopt these kinds of technologies for national security purposes, and had failed to recruit the adequate talent that can help with this kind of technology. So, the Armed Services Committee of the United States Congress created the National Security Commission on Artificial Intelligence. For the international audience: when there is a gap or a lack of ownership among the three branches of government here in terms of policy or policy guidance, the United States Congress steps in. They create these independent Commissions. They pick Commissioners – in my case they picked 15 commissioners, ranging from well-known technologists to former government officials and leaders from the academic world. The thought here is that if you bring together such a broad and diverse group of individuals on a topic like this, they can help the government move forward in the adoption and acceleration of AI for national security purposes. So sometimes I have to remind the audience that our Commission was not “AI for humanity”, or “AI for increasing the economic benefits of the United States”. Our Commission was really stood up to look at what AI can do for national security purposes.
And what can our departments and agencies – in Germany's case, ministries and Cabinet members – do more of with such a promising technology, while also understanding the risks and challenges this technology poses to individual rights, privacy and everything else related to AI?
Three years ago, this Commission was stood up, and I was asked to bring in a great staff, which I have led for the last three years. And as you mentioned, we produced a final report on March 1. It is a big report – 759 pages – and it provides recommendations to all the departments and agencies on the ways we can do better in terms of increasing federal R&D spending, research and development spending, and how we can bring talent into the government faster, because there are a lot of challenges in how you bring talented individuals into the federal government to serve and help solve the many issues we have when it comes to technology. AI can be used for multiple things, for example how we address climate change, fires, health-related issues – not just national security issues. So how do you bring talented individuals who want to serve in government, and bring them into the government fast?
And obviously, we looked at what AI can do in terms of national security missions and operations, how it can help our soldiers better conduct their missions. I would also say that for the last three years, we did not wait until the final report to provide recommendations. This is a field of evolving technologies. And even in the middle of COVID, we started providing Congress with memos roughly every three months. And by the way, all these documents that I am talking about are available on our website. Because of the requirements of United States domestic laws, and the way our Commission was stood up, all our big meetings had to be public, and you can watch all these meetings online, or you could have participated online at the time they were taking place. Then we compiled all the input from the audience and from our Commissioners, and we wrote the final report. But as I said, even in the middle of COVID – AI played such an important role in the middle of COVID, whether it was contact tracing, coming up with therapeutics, or coming up with new vaccines. If you look at all the things that were happening around us in the middle of COVID, AI was there. And so, having led this Commission, we couldn't just stand by; we started producing a lot of memos even in the middle of COVID to try to influence the policy process. And after three years, we came up with this big report. These Commissions are also short-lived. We end in October, and I actually like that these Commissions don't enter this perpetual life cycle in which you can always find ways to keep existing. There was a clear mandate given to us by Congress and a clear timeline, and we delivered the report. Now it is up to the departments and agencies and Congress to either adopt or reject our recommendations. And we can talk about those dynamics later during the Q&A.
But as I said, since March, what I have been doing almost every day is talking to members of Congress, departments and agencies about what we meant by our recommendations, in one way or another, and how they can implement them. You enter this kind of phase in which you educate them, you try to explain why we thought this is the best path to take in terms of adoption of AI for national security purposes.
Kate Saslow: Could you briefly give some specific aspects of the report itself, maybe the most critical recommendations that it makes? You also briefly mentioned that you want to bring the best talent into all of the different ministries, all the different federal agencies. That's one thing I've noticed with the different US documents that have been produced around AI technology specifically: they very much follow this philosophy of taking a whole-of-government approach. And I think that's interesting when comparing the US and the EU system, because the EU, obviously, is a supranational body. So, what can EU policymakers learn from taking this whole-of-government approach, for example? Maybe giving some examples of the most important recommendations that come out of the final report could also be enlightening.
Ylli Bajraktari: Absolutely. So, you've asked two questions. The first is what some of the key findings of our final report are, and then there is the talent piece, which took a lot of our time and effort. I think, regardless of whether you're talking about AI, or quantum, or bio, or anything else, you have got to solve the talent piece. We're at the point now at which you have talented individuals who want to serve, but there are no ways for them to enter government fast. We have talked to a lot of people in this process – hundreds of private sector companies, academic leaders and institutions. And the one thing – I'm addressing your second question first – the one thing that a lot of people said to us is: “We would love to serve in government, we would love to help governments solve groundbreaking AI challenges.” But by the time the United States government responds to job openings, these individuals already have three offers from the private sector. They already have a salary offer, and they can join right away. On average, joining the United States government, especially in the areas we were working on, takes between eight and 24 months. And so, given this time lag, given the bureaucratic processes and the paperwork you have to submit, you lose a lot of talented individuals. So, what we looked at is: How can you speed up this process? How can you bring people into government faster? If you're looking at national security missions, this becomes even more difficult because of the sensitivity, the background checks, everything you have to undergo before you enter, let's say, the Department of Defense, or the FBI or CIA. So, how can we remove some of these barriers to bring in these talented people? That's one piece of the challenge we had to solve in our final report. And we recommend ways in which you can do that.
The second piece – and this is my favorite part – is that people don't have to join government full time. They can serve in a civilian Reserve Corps. The United States military has a Military Reserve Corps, which means you serve less than 40 days a year in a government position. And so, for example, we have a recommendation to establish a national Digital Reserve Corps: if you're a technologist out of Silicon Valley and you want to help government, either with these “cool” problem sets or with serious national security challenges, you can become a Digital Reserve Corps member. You would then serve in government less than 40 days a year, and the government can always call you in and say, “Okay, I need you to help me with this problem set, or this AI, or this quantum or bio problem.” I think we have to be creative in terms of how we utilize this base of technologists, because otherwise we will be stuck with a government that is reflective of the 90s and early 2000s, in an era where technology plays a critical role. I remember one of my best colleagues – his wife works for one of the federal government agencies. In the first week of COVID, only 200 people were able to log in online from home.
We, for example, decided early on to use different platforms. We went with private sector platforms, we went with different computers, not government computers. And so, we were able to test our systems weeks before the lockdown, and nothing happened – we were able to work unimpeded, because the platforms and technologies in the private sector now offer you opportunities that working inside the government doesn't. So, this is the challenge in terms of the people: how can the government attract the best and the brightest, either full time or part time, and bring them in faster. That's one piece when it comes to talent.
The second talent issue in the United States is really the challenge of talent and immigrants. You know, United States universities still remain the most attractive universities for bringing talent to study and research here. But the problem is, these individuals don't have a clear pathway towards legal immigration. So, we have a couple of recommendations there on how to make it possible for these individuals to stay here, contribute to our country, and remain in our country. And ultimately, if they want to, they can contribute to national security mission sets. One of my favorite stories from the middle of COVID: we were talking to a Stanford doctor about some of the algorithms he was using to do AI research. When I asked him about the immigration problem, he said: I have six researchers that are coming from abroad; one is Chinese, four or five are Europeans. If I lose all of them because of immigration paperwork, my lab will collapse. He said, “My lab depends on talented immigrants who come here to study at Stanford, and then they want to do research at Stanford.” So, the immigration piece is really critical to solve. And we have a lot of recommendations on how you incentivize and keep these talented individuals in the United States. That's a little bit on the talent piece, but I think this is a really critical piece for any society to get right.
Kate Saslow: Yeah, and I'm sure we will circle back to this idea of talent and immigration and research cooperation when we talk about US-EU cooperation. But I'd like to go back to what you said about the speed of adoption in government being so drastically different from the private sector. I would be curious to know if you have any examples of your recommendations turning into concrete policy. So, has anything within the last few years been adopted by Congress, or has it been slow to pick up speed in terms of concrete policy changes?
Ylli Bajraktari: This is an ongoing struggle, Kate, because no matter what you do – whether you're a think tank, a government body or an independent Commission – the recommendations you provide to Congress are non-binding. They can take them, they can leave them, they can ignore them. It depends on the political cycle or the political climate: when do you enter these cycles, and what is the mood on the Hill or in the executive branch for adopting the recommendations? I think we were fortunate in terms of timing, because there's a broad understanding here, at least in Washington, that we have not invested enough in research and development in the last couple of years; the government has not invested in basic R&D. Private sector companies now invest much more in R&D than the government does. That has not always been the case: since the end of World War Two, the United States government had been the biggest investor in basic R&D in the United States. And that's where most of the breakthroughs came from. If you look at what DARPA did during the Cold War – from maps to GPS to the Siri we use today – these were all created in government labs and then transferred to the private sector. Now, most innovation comes from the private sector, not from the government. So, I think there was a broad realization that we're behind in terms of investing in R&D, and a broad realization that we're not adopting these kinds of technologies fast throughout the government. And in the last two years, to be honest with you, because we didn't stop and wait until the final report, we submitted recommendations much, much earlier. I started providing Congress with recommendations that I call “low-hanging fruit” – things you can tweak today that don't need broad legislation.
For example, we have a lot of scholarship programs for entering the federal government, and we just asked Congress to extend those programs: instead of 500 places, expand them to 1,000. Or the cabinet members – in the case of Germany, the ministers – can issue memos to the departments to tweak things that don't require a broad regulatory change. And so, we entered that change phase by providing some low-hanging-fruit recommendations to get the momentum going. The final report has broad and big recommendations: if you want to move mountains, this is what you need to do.
And so, I divided our recommendations into three categories. First, the organizational aspects of our recommendations: how do we need to be organized for emerging tech, specifically for AI? Probably in Europe, as in the United States, we have institutions, government ministries, that are still reflective of the old thinking and the old systems, the Cold War systems. Emerging tech requires you to rethink the whole organizational structure. In that sense, we have recommendations to Congress – some of which are already happening – on how you change the Department of Defense and how you do some reorganization of the intelligence community. Those are the recommendations I bucket under organizational change.
The second category is related to resources. You need a lot of resources to stay ahead in the emerging tech competition. The recently adopted bill formerly known as the Endless Frontier Act, spearheaded by Senator Schumer and Senator Young, really increases the funding that goes towards the National Science Foundation, which goes back to my point about investing in basic R&D. That's one piece of it. The second piece of that legislation really has to do with the semiconductors, or the chips problem: if you open any daily newspaper anywhere in the world now, you see that there is a global chip shortage. Cars cannot be produced anymore, television sets cannot be produced anymore, because there's a global chip shortage. So, part of the Schumer bill that was recently adopted included an increase of funding for semiconductor manufacturing fabs, as they are called, both here in the United States and abroad. So, the second bucket of recommendations really revolves around resources.
And the third bucket of recommendations really is about people. You and I talked a little bit about this today. It has four categories, as I call them. One: how do you increase the domestic STEM talent pool? No matter how you look at the data, every year in the United States there are 70,000 people that graduate with a computer science background. At the same time, there are about 300,000 job openings every year for people with a computer science background. So, the gap is still big, right? And when you look at AI specifically, the number of people that graduate in the computer science field is even smaller. So how do you increase the domestic talent pool that these jobs can pull people from? That's one piece. The second piece is, as I said, how do you remain a global magnet for talent? You bring in the best and the brightest, you keep them, and you provide them with immigration pathways. The third piece is, once you have this talent, how do you bring them inside the government faster, and how do you keep them inside the government – because government is really notorious for losing talent fast? And the last piece is this: the federal government is big. There are, I think, more than 7 million people that work for the United States federal government. Many talented people work inside the federal government, but the government doesn't know how to utilize them – because, let's say, if you're in the military, the military changes what people do and where they serve every two years. And so, we want to provide a clear pathway for people that have digital skills.
So those are the three buckets, Kate, if you ask me about the final recommendations: organizational aspects, resources and people.
Kate Saslow: I would like to switch gears quickly, and ask you what you think about China's position in AI technologies and what this means for US-EU cooperation and US-EU tech relations.
Ylli Bajraktari: Okay, one thing that I failed to mention is that for the last three years, the way we organized ourselves was along six lines of effort. And one line of effort was designed specifically to look at what we can do more of with like-minded nations and partners. So, we looked at the areas where we can expand our cooperation through traditional mechanisms and institutions like the EU, the G7, the OECD, NATO and everybody else. And then we looked at new partners – India, Israel, the Scandinavian countries – countries that are at the cutting edge of technology. What can we do more of with those countries? From day one, our Commissioners were keen that the way we compete more effectively with China is by building a coalition of like-minded, democratic nations that share values and norms in the way we develop and deploy these kinds of technologies. And so, this played a central role from day one. A lot of recommendations, if you look at our final report – throughout all 16 chapters of our 700-page document – talk about areas where we can do more with our allies and partners. So that's one piece I failed to mention earlier.
The second piece, to your question about China: I think China has made its ambitions clear about how it sees technology and where it sees China's role in the development of these technologies. They have published their plans. And they have been open and transparent, I believe, about their ambitions. They want to have everything made in China by 2025, and they want to be a global AI leader by 2030. Now, the thing we have seen recently from China is how it deploys these kinds of technologies, whether for domestic purposes – against their minorities, in human rights abuses – or deployed internationally in some African countries or some Southeast Asian countries. And so, I think the biggest distinction between us – the US and the EU as partners – and China is the way we deploy these kinds of technologies, and the right of our citizens to challenge the government's use of these kinds of technologies. I think that's one thing that really makes us different.
I understand there are a lot of differences between the US and the EU when it comes to tech, and especially the use of tech. But I think there are so many things that bring us together, more than keep us apart. One of our biggest recommendations late last year was the need to establish a US-EU emerging tech dialogue – which, as you mentioned, we now formally have with the launch of the TTC that came out of the meeting between President Biden and President Ursula von der Leyen. At the cabinet level, between our biggest partners, we will now have a dialogue focused on trade and technology, the Trade and Technology Council. We recommended that last year – not that I would like to take credit, but yes, we spearheaded a lot of these kinds of initiatives. Because we thought that despite all the differences we have in how we develop and deploy these kinds of technologies, we should be able to have a formal channel of communication in which we talk about our differences and about the mutual aspects of these things. We want the rest of the world to use these technologies based on values and norms that are grounded in human rights, with the ability of the individual to address any abuse of these technologies by government. And so, I'm glad that, out of the US-EU summit, one of the key Development Objective Agreements (?) was the establishment of this kind of dialogue.
Kate Saslow: Absolutely. I have a few follow-up points on that that I'd love to address, but that would be an abuse of my power as moderator. So instead, I will go ahead and switch to the audience's questions. We have one question from an anonymous attendee that has been voted up the most: Do you feel the recent EU advances in general AI regulation – for example, the definition of high-risk AI systems – remove or create obstacles to forming alliances between the EU and the US?
Ylli Bajraktari: That's a great question from one of your participants. Obviously, there are challenges here, right? I mean, some of our Commissioners have spoken publicly about the recent EU advances in general AI regulation. The key balance here that I would like to note: How do you strike the perfect balance between regulating these kinds of technologies and not undermining innovation? And I think that is the one issue I would call on the European audience to think about. As I was preparing for this interview, I was looking at some recent articles in the Economist and the Financial Times. If you look at history, at the beginning of the last century some of the biggest companies were European – out of the 100 biggest companies, 45 were European. If you look at the data today, among the top 20 global companies, only one is European, and that is Louis Vuitton, for example. Out of the 20 global tech companies, only three are European; the rest are Chinese and American. So, the issue is: how much do you regulate AI before you get into the space of really stifling innovation? And I think the process that European leaders have put in front of themselves – allowing a couple of years to debate and analyze this regulation before it takes hold – should be used as a space in which Europeans debate this issue. You know, Europeans were at the forefront of innovation back in the day. But if Europe becomes just a massive regulatory body in which there is no more innovation happening, I think that will come at the cost of European citizens. So, the need to strike the right balance between innovation and regulation is the key point I would recommend. America operates differently, as you know. But the differences are really resolved when we sit down together and go over them. I think the beauty of friends and traditional partners is that these things can be resolved through formal and informal channels.
One of our Commissioners, Eric, has said that what Europe maybe needs is a Commission like ours, which would look over the next two years at some of the challenges associated with European R&D, European talent, European hardware and manufacturing – and obviously regulation, privacy and data use – and then come up with recommendations for the European Union. But anyway, I think what Europe needs to do is really ask itself: what is the right balance between innovation and regulation?
Kate Saslow: I think that's interesting, since most of the software environments and hardware are coming out of, for example, Silicon Valley, whose mantra is to move fast and break things. And that's not really conducive to the European style of policymaking. But I think that's also a great segue into our next question from an attendee: The NSCAI report called for an EU-US alliance in the AI field, particularly to counter growing Chinese clout. To what degree would or should such an alliance be part of a broader anti-China alliance and geopolitics, versus an issue-specific transatlantic cooperation, which could swing free of transatlantic differences regarding other China policies – i.e., how encompassing is the anti-China alliance being sought by Washington?
Ylli Bajraktari: So, I'm not going to speak on behalf of all of Washington; I'll just speak on behalf of our report. You know, we looked at the trends in terms of emerging tech. We looked at what makes AI what it is. One of our Commissioners defines AI as a stack of six components – data, hardware, algorithms, people – and how you integrate all these elements to make AI what AI is. And when we analyzed progress in these six areas, China has made a lot of progress. I mean, China leads in terms of data, and I think that's kind of self-explanatory: given the number of people that live there and the data they can generate, you can never compete with that, right? China has made huge progress in terms of talent, because they're growing an enormous domestic pool of AI talent. But they're also bringing talent back from abroad: Chinese students that have studied in Europe and America are being invited to go back and contribute to their country's economic and social development.
What we've seen is that the way China deploys its technology, first domestically and then internationally, is counter to the values that democratic nations aspire to. And so, this is not an anti-China coalition – it is just a group of like-minded, democratic nations that share basic values and norms. I read a quote this morning that says that in the next 10 years we will see much more advancement in technology and the social wellbeing of individuals than we have in the last 100 years. If that is true, then we are entering a phase in which technology holds great promise for healthcare, for education, for all aspects of our society. And then how we deploy this technology, how we use it, will matter. So, the issue is: are we going to attach norms and values to these technologies? Whether citizens will have the right to appeal the way governments use these technologies will matter. I think the way China does it is counter to the values and norms of democratic nations. So, this is not an anti-China coalition; it's just an alternative approach to what China is doing with technology.
Kate Saslow: I think that's interesting. If we look at the next question, from an anonymous attendee: How much do you look beyond the pond for US regulation on AI? Is it possible to incorporate common AI regulations? I think that's a very interesting question if we're talking about not necessarily wanting an anti-China Commission or anti-China alliance, but a way to counter norms and values coming from a different ecosystem. So, how much is the US looking for good practices or standards from beyond the pond, so to speak?
Ylli Bajraktari: No, absolutely. I mean, this is a good question. I think we will always have differences between the US and the EU in terms of the adoption and use of this technology. I think what makes us good is that our citizens have the right to appeal. In the United States, if somebody is using your face or your voice, you can always appeal that. If you believe that an algorithm is biased against you, you can appeal that, too. And I think Europeans expect the same of their governments. So, there are areas where we can take a joint approach to these kinds of technologies. But as you said, the private sector is also really different in the United States versus in Europe. The United States government cannot impose a lot of regulations on the private sector here. And so, there are a lot of efforts now underway, because emerging tech presents a new era of how we understand all the issues we've talked about today. The conversations that are happening inside the United States Congress, inside departments and agencies, and between the private and public sectors really require a look at how we can do things that allow innovation to happen and allow the private sector to excel, but at the same time give the government the right to step in when these things are not fully regulated, when they are out of control, when they pose a danger to human rights and individuals.
And so, I think there are a lot of commonalities, as I've said earlier, between the United States and Europe, and a lot of common ground that they can work on. But there are also differences that we have to live with, because the political systems of the two continents are just different.
Kate Saslow: Maybe just a brief follow-up on that, because you said that the US government can't impose that many regulations on the private sector, on the US private sector. I'd be curious to hear your take on recent activities coming out of the European Union to try and regulate American big tech. Do you think that activity from the EU might somehow harm EU-US tech relations? Or do you think it's a good thing: if they find a way to regulate the private sector, then that should be done? As you said, there should be a way to appeal.
Ylli Bajraktari: I think those conversations are still ongoing. I would not speak on behalf of the EU or how they envision regulating the American companies. As I said, these have to be conversations happening at the government level, whether through the formal channel now being established under the TTC, or through any of the other channels. So, I will not get into what the EU envisions with these kinds of regulations. That's outside of my mandate, to be honest. So, I'll stick to what I was asked to do here for the United States Congress.
Kate Saslow: Alright, then we'll get back to the audience questions. There's one also from an anonymous attendee: How does the US science system plan to compete in the war for talent against companies, and how can it attract and keep AI scientists in public R&D?
Ylli Bajraktari: That's a really good question. And that's one of the things we have really looked into deeply. The problem we have noticed is that in the United States, some of the biggest companies also have the best tools for AI because of, for example, the data and the algorithms. So, we wanted to create an environment – and some of our recommendations go after this – looking at how you democratize access to these tools. So, how do you enable access by the private sector, by universities, by individuals, by small and medium-sized companies to these kinds of tools, so they can come up with, like, the next best AI algorithm, right? So, we have recommendations to create these regional clusters – AI clusters in which the private and public sectors come together to have access to the tools and the data they need to come up with the next best AI algorithm. So that's one piece.
There's also another recommendation, adopted by the United States Congress, to create the National Research Cloud, which basically provides cloud access to small and medium-sized companies and academic institutions, so that they have access to these kinds of tools to come up with these kinds of AI applications. So, I think what we have to do is enable citizens and enable, you know, the innovation ecosystem, as we say here, so that people who don't have the necessary resources have access to the cloud system to make their ideas possible.
And sometimes coming up with the best AI applications really requires these expensive tools, and even if you don't have access to them directly, there have to be other ways to access them.
Kate Saslow: I'll move on to the next question from a participant: Working with other partners does require trust in the AI technologies they develop. Is there a plan to standardize, secure or certify AI technologies developed by partners?
Ylli Bajraktari: And so again, our mandate was really to look at what the United States government can do with its allies to develop or adopt AI technologies faster. What we have argued here is that there are several ways for the United States government, working in concert with its allies and partners, to have these kinds of capabilities. We produced a document called "Key Considerations". Basically, this is a checklist of things that you have to go through before you even deploy these kinds of technologies – whether it's based on regulation, based on testing, validation and verification, or based on the governance structure of those departments and agencies. There's a document on our website called "Key Considerations" that says every department and agency has to have this kind of checklist of items to go through before they even use any kind of AI technology. And so, I would just ask the participant to look at that document, because I think it's such a good document to have for any kind of ministry or department or agency before you develop and field this kind of emerging tech, with all the sensitivities that come with that.
Kate Saslow: Okay, and the next question, also from an anonymous attendee: Does the US need to share its AI technologies with its friends for a coherent defense strategy, for instance within NATO? I'm not sure whether the different advantages and regulations of each of these friends point toward steps into isolationism.
Ylli Bajraktari: Good question. So, when it comes to NATO, the first thing we argue is that NATO is our biggest security partner. And so, what we argue is that the way you adopt technology in the military is that you start with operational concepts. You do demonstrations, you do tabletop exercises, and you see how technology can help you solve a military or security problem. And so, the United States, in partnership with NATO, should have joint exercises where they bring all the countries together and introduce these kinds of technologies into their operational concepts, to see how it helps them solve any operational challenge they have. For example, we argue that AI can help military organizations along four lines of effort. Number one, it can help them prepare, for example, in terms of what we call in the United States the back-office responsibilities: human resources, business operations, and so on. The private sector has long adopted these kinds of AI applications to reduce costs, reduce manpower, and so on. For military purposes, that's the easiest adoption of AI.
The second is sense and understand. AI can enable NATO and the United States to understand the landscape better. If there are big, massive movements of troops from Russia towards Western Europe, you can detect this through analysis with the right AI application. It will help you better understand behaviors and patterns – facial recognition and all these things – if you apply it to the military problem you have.
The third one is really in terms of decision-making. AI will really enable humans to make better decisions, I would argue. For those of you who have not seen the Netflix documentary AlphaGo, I highly recommend it, because it's eye-opening to see how AI will provide humans with opportunities that we would not be able to see on our own. And so, in terms of decision-making, AI will enable our warfighters, our generals, our military leaders – it will give them more options, faster, including options that humans would not be able to see.
And then lastly, how we execute operations. I think AI will help NATO and the United States execute military operations in ways we have not seen before – and always in accordance with the law, international human rights law, and everything else that our militaries and NATO subscribe to.
Ylli Bajraktari: So, we did not look into this issue, just to be perfectly honest with you. And so, I would just leave this question as part of the ongoing conversation that will start between US and EU. But we didn't look at the EU-US Privacy Shield in our last year’s research.
Kate Saslow: Alright, then moving on to the next question from an attendee: In the race for supremacy, do you think some countries might not have strict regulation? Do you think this might pose a danger to human rights and democratic values?
Ylli Bajraktari: Thank you. It's a big question. I don't think this is a race for supremacy in anything. I think it's just that we're entering a phase in which we're dealing with a technology that is not fully explainable, so a lot of people are nervous about this. I think we have to tread carefully in how we adopt and apply this kind of technology. In some respects, as I say in the opening letter of our Final Report, this technology holds a lot of promise. It can help us cure some of the long-lasting illnesses that we have had. As I said, look at COVID. Every aspect of the COVID phase that we have seen in the last 16 months had some kind of AI component to it, either helping accelerate the development or the fielding of vaccines and therapeutics, right? So, there is a lot of promise in these technologies. But as I say in the opening letter of our Final Report, these technologies also bring a lot of challenges. And a lot of people are rightfully concerned, because what we've seen in the last year is that these technologies bring out the best and the worst in human nature. So how we use and deploy these technologies will really matter. And so, out of 16 chapters, two chapters address the ethical and privacy issues of government use of this technology, because that was the purpose of the Commission: how can government use this kind of technology? So, two out of 16 chapters really address this issue of ethical and responsible use of AI.
Kate Saslow: We try to avoid the rhetoric of calling it a race, just because it could imply a zero-sum game. So, I do really appreciate what the NSCAI has done in the Final Report: to talk about this age of competition, and about the competition accelerated by AI technologies, but still foster this idea that you mentioned of a coalition of like-minded states. And making sure, in particular, that it's not one state trying to lead this work alone, but rather working together with like-minded states to really promote democratic values. I think that's a really important initiative to push, especially in the US.
Next question from an attendee: What about the claim that regulation is the stumbling block to innovation, or at least to companies staying in the EU? The issue is access to capital, something which a CMU can solve and that needs to be more prioritized. Do you have any thoughts on that?
Ylli Bajraktari: Oh, no, I mean, look, the issue of capital is important. I think what we have tried to do in the United States is, as I said at the beginning, increase the resources for basic R&D that flow through the National Science Foundation and academic institutions, because that's where the most innovative initiatives in basic R&D come from. And that is the area where we have lagged for several years. So, the legislation that was passed in the Senate really is a step forward to increase resources through the National Science Foundation, which then flow through universities, academic institutions and labs, because that's where researchers will come up with the next generation of AI applications, or of emerging tech, that will, I believe, benefit all of humanity.
Kate Saslow: I'll move on to the next question from a participant: Sharing AI tools with allies nowadays has to meet EU data sovereignty principles, for instance with regard to transparency and OSS (open-source software). Will the US government and Congress be open to this?
Ylli Bajraktari: So, I'm not speaking on behalf of the United States government or Congress. But what we have looked at here domestically in the United States is what I was just talking about: the lack of access for anybody who does not have the tools and resources to have these kinds of applications handy when they want to do next-generation innovation, right? So, the initiative out of Stanford to create the National Research Cloud (a cloud-based system in which universities and small and medium-sized enterprises will have access to these groundbreaking technologies) was a step in the right direction. I would argue that you can expand that cloud – either create a separate EU cloud where universities or small companies in Europe would have that access, or create some kind of partnership within the United States cloud. Obviously, you need to follow the EU rules and regulations if you want to operate in the EU, but I think access is going to be fundamental: people who are really interested in coming up with next-generation applications should have access to this. Otherwise, we will end up in a really centralized place. So, we argue that we need to democratize access to the tools and data for AI applications.
Kate Saslow: Yeah. And I think that also relates to the question on GDPR that was brought up with European policy. If you look at GDPR as something that essentially raised the bar of access for US companies participating in the European market, I think that if there is this coalition, or if the TTC can address this, then exactly as you mentioned, anything that is to be rolled out in both markets will just have to abide by certain standards agreed upon by the two regions, or other regions that are involved.
Our next question from an anonymous attendee: What is your biggest fear when you think about AI late at night? I guess it goes in a dystopian direction …
Ylli Bajraktari: Oh, no, no, it's a really good question. I mean, the one thing I've learned in the last three years doing this job is that movies have had an influence on our perception of technologies. And usually, the discussion about AI quickly evolves into the doomsday scenarios, the Terminator scenarios. When you look at the current state of AI right now, I think we're really far, far from that scenario. And you know, some of our Commissioners, who are the top technologists in this space, would argue that we might never get there. But it's just interesting, because even if we start talking about any kind of AI application today, the conversation ends up right away in, "Well, you will have a killer robot, or you'll have a Terminator, or Skynet will happen". So that's one thing I've discovered in this job: we're still talking about narrow AI applications – basically facial recognition or behavior pattern recognition. And so, we're far, far from the point where we will end up in a Skynet situation. So, I'm not really worried about that. Any technology brings its own challenges, you know. I worry that somebody might be sentenced without being guilty because of the biases of an algorithm, which I think could happen. And so how do we ensure that that doesn't happen? And are we going to have mechanisms to improve these systems? Those are the kinds of things that I worry about right now. It's hard to predict the future, to be honest with you. But that's it.
Kate Saslow: Absolutely. Well, looking at the time, I would like to end on a slightly more positive, optimistic note, and give you the chance to answer one last question that I, and hopefully the audience, would be curious to hear about. Where do you see possibilities for cooperation between the US and the EU, in terms of concrete next steps, in a way that would be mutually beneficial to both ecosystems?
Ylli Bajraktari: I think there are many areas, as I mentioned, that bring us together. So, in the United States, we have these grand challenges. I think launching a joint Grand Challenge between the US and the EU – for example, on a healthcare problem, a natural disaster or a climate change problem – and bringing resources together, that would be a possibility. Doing a Grand Challenge in one of these areas would be a clear example that if you bring the best and the brightest together, give them a challenge, give them a technology and ask them to do something, they will come up with something that will benefit both of our societies. And then, the exchange of people and talent is always critical. I think that's what we have benefited from the most: having a free flow of people and talent across the pond for many, many decades. That just increases the innovation base, the exchange of ideas, and fosters an innovative culture. So those would be the areas that I would think of. On the formal channels, it's always good to have a healthy dialogue that addresses the differences and aims to narrow them. Because, as I said at the beginning, there are many more things that bring us together than separate us. And I'm an optimist. I would say we have a really good opportunity on both sides of the pond right now to address these challenges and come up with either regulations or initiatives that can take us to the next step and utilize these technologies.
Kate Saslow: Yeah, absolutely. As you were answering that, I was just thinking that this is such a unique format, in that both of us are examples of the flow of talent across borders and how directly beneficial that is to both ecosystems. But that is, unfortunately, the end of our webinar. I would just like to thank you again so much for taking time out of a surely busy morning and taking part. Thanks to everyone in the audience for being here. That's it from our side. So, thank you so much for being here. Thank you again, Ylli. And I hope you have a wonderful day.
Ylli Bajraktari: Thanks, Kate, for having me.
Kate Saslow: Thank you.
- End of Transcript -