Lambert is the Chief of Data for Analytics and Emerging Technologies at the United Nations. He has worked for the UN for almost two decades, and we are fortunate to have an interview with him on his work for the UN and the organization’s relationship with technology. Caroline met Lambert some time ago when they were both panelists on Deep Learning AI, for a segment on The Impact of AI for Social Good.
Michelle Diaz: What types of tech teams exist in the UN? What do they do and why do they exist? Is there a specific team that handles AI?
Lambert Hogenhout: The UN is a very large organization. I work in the UN Secretariat, which is the core of the UN. Even there, we have many different technology teams, both in the central IT Department, focused on technical specializations, and in other departments, where they focus on the use of technology for a specific business function. I oversee the Data, Analytics, and Emerging Tech teams, all of which touch on AI, and we have built various AI applications over the past 5 years.
Right now, different forms of AI (such as NLP and machine learning) are at the point of becoming mainstream in the UN: several small teams are experimenting with these technologies across the organization. Because AI is still evolving rapidly, we continue to count it as “emerging tech”, but our focus is shifting toward governance and enablement.
For instance, we recently launched a toolkit of web services for common AI tasks such as entity recognition in texts (tailored to the UN context and vocabulary), as well as for topics of particular interest to us, such as automatically checking the gender neutrality of a text or recognizing personally identifiable information (PII) in datasets to help manage data privacy. We also run an Academy, where staff can learn data science from beginner to advanced level.
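The interview does not describe how the toolkit works internally, but the kind of PII scan Lambert mentions can be sketched in miniature. The patterns and function below are purely illustrative assumptions, not the UN’s implementation, which would more likely rely on trained NER models than on regular expressions:

```python
import re

# Illustrative patterns only; real systems use trained NER models
# and far more robust rules for each PII category.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def find_pii(text: str) -> list[tuple[str, str]]:
    """Return (kind, match) pairs for potential PII found in text."""
    hits = []
    for kind, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((kind, match.group()))
    return hits

sample = "Contact Jane at jane.doe@example.org or +41 22 917 1234."
print(find_pii(sample))
```

A service like this would typically flag matches for human review rather than redact automatically, since regex rules produce both false positives and misses.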
Michelle Diaz: What does your typical day look like?
Lambert Hogenhout: First thing in the morning, I catch up on the emails that have come in. The UN has staff all over the world, and Asia, Europe, and Africa are ahead of us in time zones, so I usually have quite a few messages waiting for me. I might have some meetings with technology providers or with academic researchers about emerging tech topics. So much is happening in emerging tech that these conversations help me stay up to date and determine possible new directions we need to explore.
I meet or message my staff – we talk a bit about ongoing projects, of course, but I am more interested in how they are doing. We are working remotely much of the time now, and I have some teams that are fully remote, spread out around the world, working asynchronously. So we are finding ways to do that and still feel like a “team”: ensuring there is peer support, giving people a sense of belonging, and avoiding loneliness. I also spend some time on policy and governance.
Right now, in the field of AI, we are sorting out ML governance: providing standardized frameworks for staff to develop and share models and to document their work, and inserting checkpoints into the development process for self-assessments of compliance with our principles for Responsible AI. These assessment methodologies are something we are working on right now.
Michelle Diaz: Tell us about the public tools developed by the UN tech & AI teams.
Lambert Hogenhout: Developing technical tools for the public is not the primary objective of the UN. Private sector companies, large and small, probably do a better job at creating AI tools. We are more of a normative organization, a convener and facilitator. Having said that, we do create some tools, including ones that involve AI. We recently collaborated with other partners to set up UNITAC, a technology accelerator for people-centered smart cities.
We created a proof of concept for a disaster risk management tool, using knowledge graphs and AI, and a tool for eThekwini municipality to better support informal settlements using AI techniques. We also developed open-source tools for governments to combat money laundering and for air travel safety. These are not meant for the general public but for government entities.
Michelle Diaz: How does the tech team collaborate with the public and private sectors? Are there partnerships in development with the big tech companies, do governments around the globe use tools developed by the UN? What about research and development?
Lambert Hogenhout: We are fortunate that many companies take an interest in our mission and in making a positive impact on the world. We collaborate on projects where we benefit from their expertise and sometimes their products. We have had a pro-bono partnership with Qlik for over 10 years that gives us access to data analytics and data integration tools and these days also auto-ML. In the case of the disaster risk management tool, we collaborated with Relational AI, and in another project, for humanitarian grant management, we worked with Slalom.
On research, we work with both the private sector and academia. We just completed a study with Waseda University in Japan on how youth think about AI and robots: what are their fears, hopes, and dreams? We organized workshops earlier this month with youth and representatives from Google, Nvidia, Honda, and other companies, which was fantastic.
Michelle Diaz: You’ve mentioned previously how the UN is measuring the effect of emerging tech on the world. Can you tell us more about that — what is the reasoning behind this endeavour? What do you expect and hope to find from the results? What is the timeline?
Lambert Hogenhout: Obviously, technology plays a big role in many of the changes we see in society these days, from cell phones and remote working tools to social media, decentralized finance, and autonomous vehicles. For all the big technological trends, we would like to imagine early on what the risks and opportunities are. For instance, right now we are looking at web3, quantum computing, and biotech, and of course AI remains a focus.
I lead a Foresight service that helps departments in the UN think about possible future scenarios on a 10-year timeframe. They then use that as input for their strategic planning. Where we identify possible scenarios that we think are negative outcomes for the world, we might think about what we can do to nudge developments in a positive direction instead. Obvious examples that are already very real are data privacy and the responsible use of AI. Others are a bit further in the future: for instance, data privacy and the safety of children in the Metaverse.
Michelle Diaz: What can the UN ask of its member nations when it comes to data and AI usage?
Lambert Hogenhout: Many countries and regions are putting in place regulations on data and AI. The European Union is the most progressive in this. The problem is that the platforms are often global – data travels around the world in seconds. At the moment, big social networks collect data from people all around the world – that is obviously not OK. Countries, especially in the Global South, need to become more proactive in protecting their citizens’ data. On the other hand, if every technology provider needs to comply with hundreds of different data privacy and AI regulations around the world, likely with contradicting requirements, it becomes unworkable. I think there might be a role for the UN there to get member nations together to work on common standards.
Technology is moving so fast these days that we don’t have time to collectively think through what we are doing before we adopt the latest gadget or platform. Regulation moves very slowly. And the big technology companies have become so powerful that their decisions have a major effect on the world and our lives. We need to find a way as a society to adopt technology more intentionally rather than leaving ourselves at the mercy of technological developments.
ABOUT THE AUTHOR
Michelle Diaz …