Governing artificial intelligence (AI), including how it is developed and deployed, is already a topic of global concern – and not just within the confines of tech companies. In short, the AI lifecycle is in vogue. AI, though not a singular technology, is tied up in broader questions of automation and the economic rationalisation of government and private sector services – questions that are becoming more pronounced with the growing focus on inequality. Some of the issues wrapped up in AI touch on profound areas of political concern for communities. These range from the entrenchment of racial and gender biases in hiring decisions, owing to algorithms trained on skewed data-sets that themselves reflect prejudices, to the use of AI in facial recognition, including in authoritarian settings, to profile individuals and communities and curtail their freedom. Other frequently cited and examined algorithmic examples, including in relation to criminal justice, are more nuanced. Here, humans might already be ‘in-the-loop’, and old-school prejudices arguably play just as much of a role, if not more, as any technical system in determining who walks free and who stays behind bars.
It is, then, not surprising that in an age of ‘regulatory capitalism,’ characterised by ‘ideas originating from any point in a vast regulatory network and ricocheting across borders and into different capitals’, countries, multilateral organisations and companies themselves are attempting to tackle the core challenge of how we govern AI effectively and responsibly. How do we, for example, establish frameworks that embrace the economic, social and environmental promises of AI, while curtailing its potential negative impacts? Here, the OECD has articulated sound Principles for AI, whilst UNESCO is proceeding with a standards-setting instrument for AI. Canada, the United States, Australia and other governments have rallied together to form the Global Partnership on AI (GPAI) – which is unmistakably global in its ambitions. The European Commission has unveiled a comprehensive program for AI, with the Ethics Guidelines for Trustworthy AI, supported by the 2020 Rolling Plan for ICT Standardisation, and the UN Special Rapporteur on the Right to Privacy is consulting on draft Data Privacy Guidelines for AI. The World Economic Forum (WEF) has also partnered with others to undertake extensive work on the governance of AI, shaping resources in particular areas of focus. National governments from the United States, United Kingdom, Singapore, Canada, Australia and New Zealand have variously established their own frameworks, assurance processes and national principles for AI, whether broad or sector-based, often with a strong focus on ‘ethics’. A review of the many ethical frameworks for AI has demonstrated a common focus on some core value groupings, such as ‘beneficence’, ‘non-maleficence’ and ‘justice’. Some nation-states have strong investment ambitions too.
The United States, Germany and China, for example, have committed themselves to increased funding for AI research and development, and market-based AI investment across countries such as India, Japan, Singapore and Israel has grown strongly in recent years.
In the midst of all this activity, another area crucial to effectively governing AI is fast coming into its own: standards-setting. At an international level, this holds much promise – precisely because, in a rapidly growing area dependent on strong, but diverse, expertise it represents a form of crowd-sourcing across domains. This sees the work of people concerned with building neural networks and Machine Learning (ML) systems, assessing and managing risk, embedding information security, and communicating in complex organisations, brought together concretely, through robust structures and processes. Through this, they can effectively shape particular material that can have global, and not just local, applicability in governing how AI is developed and deployed. This material, in the form of Standards, Technical Specifications or Technical Reports, might well find its way into commercial contracts, company policies or be referenced in government procurement, in some circumstances.
The proposition that standards can play a role should not, in itself, be surprising. Standards have played a transformative, although under-appreciated, role in previous industrial revolutions. From the 1960s onwards, for example, shipping containers, standardised through the International Organisation for Standardisation (ISO), revolutionised the way we move goods at speed and scale. This played a critical role in opening up new markets and was a significant boost for free trade. Some might criticise the pace of standardisation, but in a world where ‘move fast and break things’ is slowly giving way to a mantra more akin to ‘come with us and build things,’ this is not a bad outcome. After all, moving too quickly in standards development doesn’t always enable trade and commerce. And when it comes to standards development, we need to keep timescales in perspective. It can take years, and sometimes a decade or longer, to negotiate bilateral free trade agreements, so two or three years for a robust international standard is hardly a crisis or a failure. In fact, in an increasingly fractured world, it’s quite the opposite.
Taking these considerations into account, what might the analogies be in relation to AI? Where might the next transformative opportunities be in our digital world when it comes to standards for AI? Getting the balance of regulation right is a global challenge, so reverting to narrow technical norms, without justification, is not appealing. In liberal democracies we have existing regulations that can, should, and often do, apply in relation to harms like breaches of privacy, discrimination, unlawful surveillance or dual-use applications of technology. So, how might Standards make a meaningful contribution that adds, doesn’t duplicate; that enables, doesn’t stifle; and that reflects agreement on core values, when it comes to AI? This latter point is particularly important.
How might we ensure the delivery of practical standards for AI internationally, and infuse liberal democratic values into the governance of AI at the same time? These are not opposing agendas, but they do imply different roles for different actors (companies, civil society, nation-states). In fact, they are highly complementary, if we get the mix of initiatives right and demarcate the lines between principles, laws, standards and accountability processes. The result might be a mutually interlocking framework for the governance of AI, one that supports international trade and the responsible use of technology.
The first step is for countries themselves to be clear on their values, their interests and their alignment. For example, what obligations do particular countries have, not only to their citizens, but internationally? These range from their domestic laws and associated regulations, to treaties or agreements they might already be party to, to the defence and intelligence arrangements they might maintain, necessary to their physical and economic security. From this vantage point, what particular harms, concerns or aspirations might the inhabitants of those countries have in relation to AI? How might these harms, concerns or aspirations be identified, assessed, catalogued and either mitigated or enabled? More importantly, in the event of gaps in responses, how might new norms or approaches be developed? There are novel approaches elsewhere that might provide practical guidance, straddling salient social concerns, human rights obligations and data analysis. In 2017, for example, the United Kingdom undertook a Race Disparity Audit across domains of public life, attending to a salient issue of public concern – racism. Aotearoa New Zealand researchers have similarly, over a long period of time, tracked inequities in health by ethnicity, identifying data collection and data quality concerns for Māori (tangata whenua/indigenous peoples), in the process turning the gaze on government. This has also seen the elaboration of new theoretical frameworks to guide responsible and ethical data collection, including through specific sampling techniques. Australia has also consulted on a set of AI Ethics Principles, which touch on a number of areas of common concern. How would similar, perhaps cyclical, mapping in relation to the digital divide, or hopes and fears around AI, look when disaggregated by relevant local areas of concern and by specific attributes and grounds under international human rights law?
Here, emerging methods for engagement in AI, including with affected communities, would play an important role. Getting specific, through these types of processes (with tangible outputs), as it turns out, usually prompts more concrete action, and it certainly supports accountability.
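As a sketch of what such disaggregated mapping might look like in practice, consider a minimal example in the spirit of the audits described above. The survey data, field names and attributes here are entirely invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses on attitudes to AI. In a real audit, the
# attributes would be chosen to reflect local concerns and grounds under
# international human rights law (e.g. ethnicity, disability, region).
responses = [
    {"region": "urban", "age_band": "18-34", "trust_in_ai": 4},
    {"region": "urban", "age_band": "55+",   "trust_in_ai": 2},
    {"region": "rural", "age_band": "18-34", "trust_in_ai": 3},
    {"region": "rural", "age_band": "55+",   "trust_in_ai": 1},
]

def disaggregate(rows, attribute, measure):
    """Group survey rows by an attribute and average the chosen measure."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[attribute]].append(row[measure])
    return {value: mean(scores) for value, scores in groups.items()}

# Marked differences between groups would flag a divide worth investigating.
print(disaggregate(responses, "region", "trust_in_ai"))
```

Cyclical repetition of this kind of exercise, as with the UK audit, is what turns a one-off snapshot into an accountability mechanism.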
Another task, as the OECD’s recent consultative process has shown, is to shape common global norms through recognised fora, including by developing specific, actionable principles to guide AI development and adoption. But we can, and should, take this a step further. We must ensure that principles that respond to the concerns of liberal democracies, as they navigate the growth and impact of AI, can be elevated and adopted as part of the global order through technical standards too. How, for example, can we encourage like-minded countries, and companies that operate within their borders at scale, to rally together, whether for targeted collaboration through a technology alliance, or for more focused ‘pre-standardisation’ work? The latter refers to forming a common set of practices so strong that they can influence more formal standards-setting processes, through bodies such as ISO and IEC. This might lead to the development of practical guidance material that is technically sound, but politically and socially attuned to the realities of the 21st century. Using the OECD principles on AI as a scaffold to do so seems logical, considering that these principles call for ‘consensus based standards development,’ as I have previously argued.
Finally, we need to prioritise and accelerate the work currently underway multilaterally, and within recognised Standards Development Organisations (SDOs), to develop common International Standards for AI. National approaches can play a role, but to truly move into the stack and embed responsible AI practices within supply chains, we need to work smartly across borders. The Australian AI Standards Roadmap, which I authored, strikes this balance by channelling local concerns into a global context. Critically, it also speaks to the fact that a number of nations committed to the OECD principles are actively participating in specific SDOs. Take, for example, ISO/IEC JTC 1/SC 42, the Sub-committee focused on AI. This Sub-committee is tackling challenges ranging from standardising the building blocks of AI, including through terminology, to developing frameworks that scaffold off existing, tried and tested standards used internationally, such as risk management (ISO 31000). Perhaps even more pointedly, this same Sub-committee is developing a Management System Standard for AI, again scaffolding off the approach of other management system standards like ISO 9001 and ISO/IEC 27001 (the latter being an existing standard that enables organisations of all sizes to identify, assess and manage their information security risks). The benefits of such a standard for AI within supply chains and in service agreements might well be significant. More of this work is outlined in Table 1 below. Beyond this, of course, is the work underway through the IEEE, with its Ethically Aligned Design initiative, which has seen the publication of a significant initial output, and the ITU, with its focus groups in similar areas.
| Standard | Broad focus and potential application |
| --- | --- |
| ISO/IEC WD 42001: Artificial Intelligence – Management System | Management System Standard. Based on the broad methodology of a range of existing standards used for certification. Entities of all sizes might use this standard to assess the impact of AI, manage its deployment and demonstrate, including to external partners, how they embed various requirements and/or controls throughout their operations. |
| ISO/IEC AWI TR 24368: Artificial Intelligence – Overview of ethical and societal concerns | A Technical Report on ethical and social concerns and considerations relating to AI, including how these could inform AI Standards development. Might map how salient concerns are being practically addressed within the JTC 1 community. |
| ISO/IEC CD 23894: Artificial Intelligence – Risk Management | Risk Management Framework for AI. This will leverage the insights of an internationally recognised risk management framework, ISO 31000 (Risk Management – Guidelines). |
| ISO/IEC CD 38507: Governance of IT — Governance implications of the use of artificial intelligence by organizations | Standard that addresses the roles and responsibilities of Boards and senior executives in relation to AI deployment in organisational settings. This might play an important role as companies start to pay increasing attention to AI, including at Board level, based on concerns around liability. |

Table 1: Select standards under development within ISO/IEC JTC 1/SC 42 (AI)
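To make the risk-management thread running through this work more concrete, here is a minimal, purely illustrative sketch of an AI risk register following the broad process steps of ISO 31000 (identify, analyse, evaluate, treat). The class names, rating scales and tolerance threshold are my own assumptions for illustration, not drawn from any standard’s text:

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    description: str     # identified risk, e.g. biased training data
    likelihood: int      # analysed likelihood, 1 (rare) to 5 (almost certain)
    impact: int          # analysed impact, 1 (minor) to 5 (severe)
    treatment: str = ""  # chosen treatment, e.g. re-sample data, add oversight

    @property
    def rating(self) -> int:
        # Simple likelihood x impact evaluation, as in many risk matrices.
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: AIRisk) -> None:
        self.risks.append(risk)

    def requiring_treatment(self, threshold: int = 12) -> list:
        # Evaluate: risks rated above the organisation's tolerance threshold
        # (an assumed value here) are flagged for treatment.
        return [r for r in self.risks if r.rating >= threshold]

register = RiskRegister()
register.add(AIRisk("Skewed hiring data entrenches bias", likelihood=4, impact=5,
                    treatment="re-sample and audit training data"))
register.add(AIRisk("Model drift degrades accuracy", likelihood=3, impact=2))

for risk in register.requiring_treatment():
    print(f"{risk.description}: rating {risk.rating}")
```

A management system standard of the kind SC 42 is developing would, broadly speaking, ask organisations to maintain and act on something like this register as part of a documented, auditable process.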
Effectively governing AI in an interconnected world will require strong international engagement and collaboration. There are clear roles for nation-states, civil society and companies themselves to play. AI standards-setting, as an activity that often brings different parties together, is an important function in accelerating the responsible development and deployment of AI at scale. But realising the full benefits of international AI standards-setting will require us to work with the logic of ‘regulatory capitalism’ for more beneficial ends. This means governments doing more targeted ‘steering’, in terms of policy frameworks and incentive structures (which includes regulatory certainty), as well as through multilateral fora, and the private sector doing the ‘rowing’ necessary, through commercial and technical expertise. This rowing will need to include continued and intensified engagement in standards-setting. Already, many responsible global companies, researchers and civil society organisations seem committed to this task, alongside some of their government counterparts. Through this process, we can all have greater confidence in the rules that will define the industrial revolution unfolding before our eyes. This is our chance to shape, leverage and use international standards to help realise a vision of responsible AI; one that moves from common aspiration to shared practice.
ABOUT THE AUTHOR
Jed Horner is an experienced public policy professional working at the intersection of tech and social issues, with a sound grounding in social, regulatory and, increasingly, foreign policy. Jed thinks laterally, being a product of three countries, a diverse education, and experience driving change from the ground up.