The strategic and security implications of AI

With every new technology comes the potential for abuse. And while AI is clearly starting to deliver an awful lot of value, it’s also creating new systemic vulnerabilities that governments now have to worry about and address. Self-driving cars can be hacked. Speech synthesis can make traditional ways of verifying someone’s identity less reliable. AI can be used to build weapons systems that are less predictable.

As AI technology continues to develop and become more powerful, we’ll have to worry more about safety and security. But competitive pressures risk encouraging companies and countries to prioritize capabilities research over responsible AI development. Solving this problem will be a big challenge, and it will probably require new national AI policies and international norms and standards that don’t currently exist.

Helen Toner is Director of Strategy at the Center for Security and Emerging Technology (CSET), a US policy think tank that connects policymakers to experts on the security implications of new technologies like AI. Her work spans national security, technology policy, and international AI competition, and she has become an expert on AI in China in particular. Helen joined me for a special AI policy-themed episode of the podcast.

Here are some of my favourite take-homes from the conversation:

  • AI is poised to have transformative effects in a lot of strategic areas. But Helen also thinks that AI might not impact others as much as many expect. As an example, she mentions deepfakes, which are often cited as a potentially dangerous disinformation tool with wide-ranging implications for the integrity of the democratic process, among other things. Helen points out that deepfakes still require significant resources to deploy, and there is lower-hanging fruit for bad actors interested in influencing public opinion or elections: simple tools like Photoshop, for example, or hiring low-wage workers to put out massive quantities of tweets. As she sees it, there’s an opportunity cost to using tools like deepfakes that needs to be accounted for before we decide how worried we should be about their potential impact.
  • My assumption has generally been that governments move slowly, and that they might therefore struggle to keep policy up-to-date as the pace of development of AI technology continues to increase. While Helen agrees that the typically slow pace of government is something worth keeping in mind, she also points out that there are different tools available to policymakers and government bodies to move faster when a pressing need presents itself. She cites the massive COVID-19 relief package as an example of significant government action that came together on a surprisingly short timeline.
  • Helen discussed some common misconceptions about the state of AI competition internationally. One in particular had to do with China’s supposed “data advantage”: the idea that due to its large population, and near-ubiquitous application of AI technology, China has access to more data than its international rivals, and therefore, an important edge in AI. Helen doesn’t consider this to be a compelling argument, for two reasons.
  • First, data aren’t fungible: you can’t currently use facial recognition data to help train a chatbot, or text data to train a chess-playing RL agent. AI applications are still narrow, so the fact that China may have more “bulk data” doesn’t translate into a clear advantage where it counts (e.g. security and strategic AI applications). And second, the “China is big” argument tends to ignore the multinational character of many US tech firms: Facebook and Google aren’t just serving the US public — they’re used by billions of people worldwide. As a result, viewing data availability through a simple “US vs China” lens isn’t particularly helpful or informative.
  • Helen points out that talent is a critical input to AI development initiatives, both for countries and for companies. Historically, the US has had a huge talent advantage, owing to its attractiveness as a place to live and its status as a hub of technical talent. Helen is concerned that this may change, particularly as restrictions on skilled immigration have made it harder for AI developers, and tech workers more generally, to move to the States.

Listen to the podcast at Towards Data Science (TDS).

The post The strategic and security implications of AI appeared first on Center for Security and Emerging Technology.
