When you look at the more than 42 billion dollars invested in AI in 2020, according to the AI Index, it is not easy to see that a large part of these investments serves international peace (SDG 16), even though peace is one of the fundamental means of promoting social progress for all according to the United Nations (UN) Charter.
The dual nature of AI and frontier technologies partly explains why the deployment of pilots and scalable solutions in the sensitive, political context of international conflicts came after the deployment of AI solutions in more operational sectors. UN agencies and international actors with more operational mandates, such as the World Food Programme, the United Nations Refugee Agency (UNHCR), UNICEF, and UNFPA, began collaborating on what are now called the Principles for Digital Development as early as 2009, whereas innovation cells for peace only really emerged from 2019 onwards. Isolated initiatives existed beforehand, but there was no systemic approach to AI and frontier technology for global peace. For example, in 2018, UNHCR and Global Pulse used machine learning (ML) to develop a human rights-based tool detecting online xenophobia against refugees, to better understand what triggered dislike or hatred towards people perceived as outsiders.
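To make the approach concrete, the sketch below shows the kind of text classification such a system rests on. It is a hypothetical, minimal illustration, not the actual UNHCR/Global Pulse pipeline: a from-scratch Naive Bayes classifier over a toy bag-of-words dataset, where the labels, example sentences, and function names are all invented for illustration.

```python
# Minimal sketch (NOT the actual UNHCR/Global Pulse system): a bag-of-words
# Naive Bayes classifier labeling short texts as "xenophobic" or "neutral".
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Returns a simple model dict."""
    word_counts = {"xenophobic": Counter(), "neutral": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    return {"word_counts": word_counts, "label_counts": label_counts, "vocab": vocab}

def classify(model, text):
    """Pick the label with the highest log-probability (Laplace smoothing)."""
    total = sum(model["label_counts"].values())
    best_label, best_score = None, float("-inf")
    for label, n in model["label_counts"].items():
        score = math.log(n / total)  # prior
        counts = model["word_counts"][label]
        denom = sum(counts.values()) + len(model["vocab"])
        for w in tokenize(text):
            score += math.log((counts[w] + 1) / denom)  # smoothed likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy, invented training data for illustration only.
examples = [
    ("refugees steal our jobs go home", "xenophobic"),
    ("send them home they do not belong here", "xenophobic"),
    ("refugees enrich our communities", "neutral"),
    ("we welcome refugees to our city", "neutral"),
]
model = train(examples)
print(classify(model, "they steal jobs and should go home"))  # prints "xenophobic"
```

A production system would of course use far larger corpora, learned embeddings, and human review of flagged content, but the core idea of scoring text against learned class statistics is the same.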
The risks and threats of AI uses and misuses also explain this time gap. Though the same commonly recognized cross-disciplinary risks apply, they have a major ripple effect on peace: the destabilization of the rule of law and of democracies; mass surveillance; digital propaganda through deep fakes and micro-targeting; misinformation and disinformation amplified by echo chambers and digital profiling; enhanced manipulation of conflicts; new forms of conflict, including the incitement of ‘artificial’ conflicts by attacking the social cohesion of communities and nations; and the high risk that fragile and developing states are left out of the global digital competition, without the capacity to protect their sovereignty and populations.
Another factor that explains some continued reluctance to deploy and rely on AI solutions in conflict and peace settings is the role of cognition and emotions, and the importance of interpersonal relations in building trust, the so-called Human Factor. Can smart algorithms fully understand the complexity of human interactions? Can mediators, conflict experts, and political and legal advisors trust data-centered information when facing opposing parties in unstable environments where emotions, biases, and perceptions are exacerbated?
What also blurs our perception of whether AI is serving peace is that, in an AI era where data is pivotal and return on investment remains a key factor in deciding whether to fund the development of solutions, it remains difficult to assess the real cost of peace, and thus whether the deployment of technological solutions is a good investment. It may sound cynical to quantify peace and its benefits more precisely, yet doing so supports the idea that peace is everyone’s business. According to the Institute for Economics and Peace 2021 report, the economic impact of violence in 2019 was 14.4 trillion dollars, equivalent to 10.5% of the global gross domestic product (GDP), resources that could instead be re-invested in the conservation of nature, children, infrastructure, and the overall economy of fragile states.
At the same time, it is difficult to seriously promote global peace if you are lagging behind the evolution of society and struggling to gain additional insight into rapidly evolving settings, and are therefore not fully equipped for your actions to have a positive impact on peace.
Promoting and preserving peace is also a general commitment of the international community, grounded in various international conventions and protocols applicable to AI and frontier technologies, including the United Nations Charter. As stated in its preamble, “unit[ing] strength to maintain international peace (…)” by “combin[ing] efforts” is a commitment of the 193 member states within the multilateral forum of the United Nations. This core principle was reaffirmed by states during the negotiation of the UNESCO Recommendation on the Ethics of Artificial Intelligence, as well as by humanitarian actors calling for a “Do No Digital Harm” principle, extending into the digital space the principle of striving to minimize the harm inadvertently caused by providing, or not providing, assistance.
In fact, AI is already serving peace by creating bridges between cultures, using natural language processing (NLP) to translate dialects into mainstream languages so that the voices of minority groups are echoed in big data. AI-powered tools are also used for large-scale one-on-one dialogue, providing a real-time, at-scale survey of targeted groups and populations. For example, in the Yemen/Libya conflict, the start-up Remesh has successfully run conversations with 1,000 people at a time, giving negotiators a general idea of people’s perceptions, opinions, and needs regarding the recommendations proposed at the peace negotiation table. Such a tool can also amplify the voice of children and the young generation, as we know that children’s rights are still insufficiently represented in peace agreements.
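The aggregation step behind such at-scale dialogue can be sketched very simply. The snippet below is an invented illustration, not Remesh’s actual pipeline: it reduces a batch of free-text responses to their most frequent themes with plain word counts, where the stopword list and sample responses are assumptions made up for the example.

```python
# Hypothetical sketch (not Remesh's actual method): surface the most common
# themes across many free-text survey responses using simple word frequency.
from collections import Counter

# A tiny, illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "we", "to", "of", "and", "need", "want", "more", "is", "our", "in", "for"}

def top_themes(responses, k=3):
    """Count non-stopword terms across all responses; return the k most common."""
    counts = Counter()
    for text in responses:
        counts.update(w for w in text.lower().split() if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

# Invented sample responses standing in for thousands of real ones.
responses = [
    "we need clean water",
    "schools for our children",
    "water and security",
    "security and jobs",
    "clean water in every village",
]
print(top_themes(responses))  # "water" ranks first in this toy data
```

Real dialogue platforms cluster semantically similar answers rather than counting literal words, but the principle of compressing thousands of voices into a ranked summary for the negotiation table is the same.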
Analyses (context/conflict, predictive/real-time, sentiment/focus group) can also be enriched by AI-powered tools combining NLP and ML, by insights from automated speech recognition that monitors what political leaders, influencers, and mainstream media broadcast, and by satellite imagery, thus increasing the triangulation of information and, consequently, the accuracy of the analyses.
Simultaneously, AI-powered analyses help make processes more evidence-based by informing decision-making with a larger number and greater variety of data types (quantitative, qualitative, imagery, e.g., from drones) and by testing policy models, scenarios, or recommendations.
In addition to these specific peace-related AI applications, the biggest success of the Sustainable Development Goals (SDGs) is to have created a global momentum capable of steering the AI and frontier technology ecosystem, across multilateral organizations, companies, social entrepreneurs, and advisors, towards common goals that help leverage our collective intelligence to solve global challenges. By doing so, the prejudicial ripple effects of AI’s risks and threats can be mitigated and overcome.
In that respect, new tools that detect drivers of possible future crises, like the Famine Action Mechanism (World Bank) or the ‘Freshwater Explorer’ data platform (a UNEP, Google, and European Commission partnership), support early warning mechanisms, anticipatory financing, and actions towards peace, building up a virtuous tech circle for global peace.
AI is already serving peace, and it is our collective responsibility to continue to support the different AI for Good initiatives, thereby putting our collective intelligence into motion for global peace and a renewed social contract 5.0.