Back in July this year, we reported on Civic AI Lab (CAIL), a collaboration between the City of Amsterdam, Vrije Universiteit Amsterdam (VU), and the University of Amsterdam (UvA) to accelerate the creation of fair AI and ensure it is used responsibly. Fast forward to December 10, when the lab's official launch event was held online for an international audience. In an exclusive interview with Sennay Ghebreab, one of the lab's co-directors, we learn more about CAIL and its research and societal objectives.
Civic AI Lab: Using AI to offer equal opportunity in Amsterdam
In our interview, we asked Ghebreab about the founders’ roles and what the Civic AI Lab is all about. Ghebreab notes that apart from him, there are two other co-directors of CAIL: Hinda Haned from the University of Amsterdam and Jacco van Ossenbrugge from Vrije Universiteit Amsterdam. The directors ensure that the lab focuses on its main objective: the research and development of AI technology that promotes equal opportunity, with a focus on societal challenges. The lab will explore how AI can be leveraged in domains such as healthcare, education and the workplace to enhance equal opportunity.
To give a better idea of how the Civic AI Lab will focus on its objectives, Ghebreab delves deeper into the CAIL’s workings. “Let’s talk about the five projects that we are currently focusing on. These five projects are picked together with the city of Amsterdam, which also provides data for them to work with. Healthcare, well-being, mobility, education and environmental questions are the current projects,” says Ghebreab.
“If I focus on one project, for example, mobility, then we are trying to address the issue of mobility inequality, which affects certain groups in Amsterdam since they are less mobile than others. With AI, we try to get a better understanding of what causes mobility inequality, and how AI can help in reducing it and enabling equal opportunity for all in terms of mobility.”
How the lab works
Since the word ‘lab’ is present in the name Civic AI Lab, one’s thoughts could naturally gravitate towards a conventional laboratory setup. However, Ghebreab clears up such preconceived notions as he reveals that CAIL is an ICAI lab, ICAI standing for Innovation Center for Artificial Intelligence. ICAI has around 20 labs in Amsterdam, most of them public-private partnerships. CAIL, by contrast, is mostly focused on societal issues, whereas most other labs primarily focus on industrial ones.
All ICAI labs enrol at least five PhD students, and CAIL is no exception. These five students will each work on one of the projects mentioned earlier.
“So, as a lab, the PhD students have to do the work while the directors and lab managers need to supervise all the students. In ICAI’s network, we are also expected to work a bit at the partner’s site, which for us is the municipality, to get better data and understand the underlying questions. So, it’s a co-creation between partners and scientific institutions, and that’s essentially what we do as a lab,” notes Ghebreab.
Speaking about Civic AI Lab’s core mission statement, Ghebreab says, “Equality is at the core of our mission. We’re not developing AI technology to enhance processes for increasing financial gains. We’re here to develop AI technology that, at its core, uses equal opportunity and equal treatment as a basis.”
Can AI bias be fought?
It’s no secret that AI can be biased, depending on the type of data it is trained on. We ask the fundamental question of whether AI bias can be fought, or whether it will always be a point of concern. Ghebreab answers, “I’ve been working with AI for the last 30 years, and I’ve been in the field of AI bias and inequality for the last ten years. When I started working on my research and education in biased, racist machines, people thought it didn’t make sense. Now, ten years later, people attribute bias and discrimination to AI.”
He emphasises the fact that we humans created AI, and we did it in ways that make it mirror us, our communities and our societies. He notes, “Systems that we build using AI feature inherent biases, like historical or social biases, and so on. In a way, AI has quantified the bias in our society, as it works on our data. As for the question of whether AI can help us promote human rights and provide equal opportunities? I say yes, it can, if we decide to use it for that purpose.”