
Discussions on the international governance of artificial intelligence (AI) have mostly taken place in North America, Europe, and a handful of Asian countries. The Global South, however, is clearly underrepresented. Many countries in Africa, Latin America, and the Caribbean are entirely absent in terms of experts, practitioners, and policymakers engaged in this vital conversation. Should this be a cause for concern?
A recent survey conducted by the Global Partnership on AI (GPAI) and the Future Society, on areas for future action in the responsible AI ecosystem, mapped a total of 214 initiatives on AI ethics, governance, and social good, representing 38 different countries and regions. In their catalogue, over 58% of the initiatives are from Europe and North America; only 1.4% are from Africa. Moreover, initiatives in emerging and developing economies (excluding China) are heavily focused on advancing the Sustainable Development Goals (SDGs), while only five of the 179 initiatives in the AI ethics and governance categories (roughly 3%) originate from those countries.

What is at stake? AI technologies are expected to be a critical factor in competitiveness, investment, and GDP growth. A widening gap in prosperity and wealth would chiefly affect poor countries unable to develop the digital skills and infrastructure needed to reap the rewards of AI in business performance, productivity, and innovation. In the long run, AI-driven automation of repetitive, labor-intensive jobs in rich countries could displace the traditional comparative advantages of developing countries, such as a cheap workforce and raw materials. Global inequality was once gauged in terms of have and have-not countries. A new divide could be emerging between AI-ready and AI-unready ones.
In such a scenario, developing countries would be exposed to many vulnerabilities, lagging behind in economic, scientific, and technological development. They could become open ground for data predation and cyber-colonization. Small, tech-taking countries may turn into testbeds for dual-use technologies precisely because they lack the technical expertise, scale, and resources to take effective countermeasures against tech-leading powers.
When it comes to AI governance at the global level, the situation today is far from ideal. Amid geopolitical tensions, great-power rivalry, ideological divides, growing competition, and mistrust of multilateralism, major agreements are unlikely in the short term. Instead, a decentralized, fragmented AI landscape seems to be the norm. Comprehensive governance tools, such as normative instruments, institutions, technical standards, and safety measures, are still largely missing. There are numerous initiatives at the national and regional levels, and by the private sector and civil society, notably on AI principles, but they lack truly international coordination and often engage only a few partners while excluding others.

The dominance of policy narratives by wealthy countries raises several questions. Less attention to policy issues that matter for low- and middle-income countries, such as agriculture, education, healthcare, and infrastructure, could result in less research and investment in AI applications for these areas. Social imbalances, algorithmic bias, and power asymmetries can be magnified if vulnerable populations are left behind. This is of particular concern in countries that lack strong institutions, legislation, or civic space to protect their citizens from AI-generated disinformation campaigns, data manipulation, and digital abuses of all kinds, as examined in a report by the Konrad Adenauer Stiftung.
To put it bluntly, self-driving cars will not be around anytime soon in places where no GPS can show the way. Reliable connectivity should not be taken for granted, not to mention electricity. In some least developed countries, power cuts occur three to four times a day, and the Internet connection, when available, is chronically unstable. Where disenfranchised populations struggle for their livelihood, Zoom meetings are for the privileged (and the brave).

For AI governance to be genuinely global, all interested parties should be engaged more systematically and have their say. Global South leaders and researchers cannot afford to stand idle while others make decisions. AI risk, safety, and security may still seem disconnected from the day-to-day reality of many countries facing more urgent problems (poverty, hunger, violence, or environmental degradation, to name just a few). Yet this detachment will not insulate them from the consequences, unintended or not, of damaging situations brought about elsewhere.
A central theme in AI ethics is the notion of the common good: ideally, the technology should benefit everyone, everywhere. But for AI to be accessible to as many people as possible, it must respect, for instance, the rich cultural diversity of the world’s population and the particular needs of different societies across the globe. Without inclusion, and without moving beyond the so-called whiteness of AI, it will be much harder for AI developers to properly design the tools required to meet this challenge.
Discriminatory systems can reproduce inequalities found in the real world and have far-reaching implications. Diversity should be weighed at all stages of the life cycle of AI systems, from conception and development to actual deployment, and not only in design but in policymaking for global cooperation as well. Structural inequities related to gender, age, race, and culture can be exacerbated if no consideration is given to broad-based, diverse representation of actors in a plurality of settings.

Machines know little about the world humans live in and can come up with one-size-fits-all solutions as a quick fix, which raises the question of who designs, and for whom. Analyses of algorithmic colonialism, particularly in Africa, highlight the fact that technology is not a value-free tool: good ideas conceived in one culture may prove inappropriate in others. Western-based AI knowledge, blindly imported by societies with distinct religious, cultural, and social backgrounds, could produce unexpected consequences.
Technologies designed out of context, ignoring different sets of priorities, are bound to fail. Contextualized AI is needed to reach the right audience and achieve meaningful outcomes. This may require culture-dependent, human rights-based design and context awareness, that is, training AI models on local datasets that reflect social realities and features not present in other environments. This is essential to prevent flawed algorithmic decision-making, ethnic bias, and utter disregard for local needs.
Whatever their technological predicament, developing countries should not be relegated to the role of spectators or victims, their agency and autonomy denied. Here are five suggestions to foster greater participation:
1. Embrace true multi-stakeholderism. Engage civil society, private companies, AI researchers, and other stakeholders in genuine cross-cultural dialogue. Conveners should actively pursue geographical and gender balance to ensure broad representation at all levels.
2. Build capacity to empower people and boost trustworthy AI. This could include fostering AI literacy in poor countries and greater involvement in metrics, taxonomy, and collaborative initiatives to realize responsible AI through international standards.
3. Seek inputs from marginalized groups and neglected audiences. Much can be learned from multidisciplinary perspectives by bringing new voices to the table, either to hear their concerns or to exchange views and explore alternative approaches.
4. Promote normative leadership before harm has been done. Simply dismissing controversies will not build long-term trust in AI-driven solutions. A public backlash can quickly destroy a company’s reputation, even if its cause lies in faraway lands. Taking early action on regulation and forging partnerships can contribute to prevention and oversight.
5. Bring multilateralism back in. The United Nations has the experience, convening power, and universality to provide a legitimate platform for facilitating negotiations on AI in several domains. UNESCO, for instance, has launched a two-year process to adopt the first global standard-setting instrument on the ethics of AI, in the form of a recommendation, by the end of 2021.
Interestingly enough, in his Roadmap for Digital Cooperation, UN Secretary-General António Guterres identified, inter alia, three main challenges going forward: lack of inclusiveness in global AI discussions; inadequate overall coordination of AI-related initiatives, with few easily accessible channels for countries outside the existing groupings; and the need for AI capacity-building, especially in the public sector. Guterres also announced plans to establish a multi-stakeholder AI advisory body in due course. Consultations on the matter have shown that the UN is committed to making it a diverse and fully representative body.
International cooperation on AI policymaking will not be complete without the Global South. This crucial task must not be left solely to the most technologically advanced countries. When the stakes are so high and likely to entail worldwide externalities, all should get involved in the debate over our shared AI future.
ABOUT THE AUTHOR
Eugenio V. Garcia is a Tech Diplomat, Deputy Consul General, and Head of Science, Technology, and Innovation at the Consulate General of Brazil in San Francisco, USA, serving as a focal point for Silicon Valley and the Bay Area innovation ecosystem. He holds a Ph.D. in International Relations and researches the international governance of artificial intelligence.