As the world continues to battle COVID-19, a lesser-known public health crisis endures: antimicrobial resistance (AMR). The World Health Organization (WHO) reports that AMR is one of the top public health threats to humanity and that a growing list of infections—including pneumonia and tuberculosis—are becoming “harder, and sometimes impossible, to treat as antibiotics become less effective”. Without concerted action now, some medical experts assess that AMR will be responsible for an additional 10 million deaths per year by 2050.
One of the leading causes? The excessive and unnecessary use of antimicrobial drugs, including antibiotics. AMR increases when microbes, including bacteria, change in response to the use of these medicines. This is accelerated when antimicrobial drugs are misused or wrongly prescribed.
In humanitarian crises or lower-resource settings where lab services are limited, health workers face delays or difficulties in diagnosing specific microbial infections and thus prescribing the right course of treatment. Few agencies know this better than Médecins Sans Frontières (MSF), a humanitarian agency renowned for the care it provides to crisis-affected populations around the world. Faced with increasing numbers of patients infected with drug-resistant microbes, MSF and the MSF Foundation designed an ingenious solution to tackle AMR in lower-resource settings.
Lab technicians typically use a disc-diffusion antibiotic susceptibility test (AST) to measure how susceptible or resistant microbes are to certain antibiotics. To do this, they place small paper discs (called pellets) infused with antibiotics into a petri dish containing a cultured bacteria sample from a patient. The antibiotics slowly diffuse from the pellets, decreasing in concentration the further they travel from the pellet. If an antibiotic stops the bacteria from growing or kills them, an area around the pellet called a “zone of inhibition” appears. The diameters of these zones are measured and then compared with information in databases on bacterial susceptibility.
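The interpretation step can be sketched in a few lines of code. The breakpoint values below are invented purely for illustration, not real clinical thresholds (those come from reference bodies such as EUCAST or CLSI), and the function name is hypothetical:

```python
# Illustrative sketch of disc-diffusion AST interpretation.
# Breakpoint values here are invented for demonstration only; real
# breakpoints are published by clinical bodies such as EUCAST or CLSI.

# Hypothetical breakpoints per (organism, antibiotic):
# (susceptible_min_mm, resistant_max_mm)
BREAKPOINTS = {
    ("E. coli", "ampicillin"): (17, 13),
    ("E. coli", "ciprofloxacin"): (26, 21),
}

def interpret_zone(organism: str, antibiotic: str, diameter_mm: float) -> str:
    """Classify a zone-of-inhibition diameter as S, I, or R."""
    s_min, r_max = BREAKPOINTS[(organism, antibiotic)]
    if diameter_mm >= s_min:
        return "S"  # susceptible: a wide zone, the antibiotic worked
    if diameter_mm <= r_max:
        return "R"  # resistant: little or no inhibition around the pellet
    return "I"      # intermediate: between the two breakpoints

print(interpret_zone("E. coli", "ampicillin", 20))  # S
print(interpret_zone("E. coli", "ampicillin", 10))  # R
```

The hard part in practice is not this comparison but obtaining accurate diameters in the first place, which is exactly where measurement errors creep in without specialist training.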
The AST is relatively low cost and requires no advanced hardware. But correctly measuring and interpreting the results of these tests requires specialised training and often support from a microbiologist, resources typically in short supply in humanitarian crises. Some health facilities in higher-resource settings use image-processing algorithms to measure AST results. But, until now, the hardware and infrastructure requirements of these tools have prevented their use in lower-resource settings.
Enter Antibiogo, a fully offline, machine learning-powered smartphone application. Antibiogo seeks to reduce AMR by helping non-expert lab technicians interpret AST results and prescribe the right course of antibiotics. Using a smartphone and the app, a lab technician takes an image of the petri dish. A machine learning (ML) and image-processing model measures and analyses the results in the petri dish. A clinically tested, third-party expert system interprets the measurements and generates results that can be shared with colleagues or other experts.
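That flow, capture, ML measurement, human verification, then expert-system interpretation, can be sketched as a minimal pipeline. Everything below is illustrative: the function names and thresholds are invented, and the real app's ML model and expert system are far more sophisticated:

```python
# Hypothetical sketch of an Antibiogo-style workflow: an ML step
# proposes zone measurements, a human verifies them, and a rule-based
# step interprets the confirmed values. All names and values invented.

def measure_zones(image) -> dict:
    """Stand-in for the ML/image-processing step: returns proposed
    zone-of-inhibition diameters (mm) per antibiotic pellet."""
    return {"ampicillin": 20.0, "ciprofloxacin": 14.0}

def human_review(proposed: dict) -> dict:
    """Human-in-the-loop step: the technician confirms or corrects
    each measurement before interpretation. Here we accept as-is."""
    return dict(proposed)

def expert_interpret(confirmed: dict) -> dict:
    """Stand-in for the expert system: maps diameters to S/R using
    a single invented threshold (not a real clinical breakpoint)."""
    return {ab: ("S" if mm >= 17 else "R") for ab, mm in confirmed.items()}

proposed = measure_zones(image=None)   # ML proposes measurements
confirmed = human_review(proposed)     # technician verifies them
report = expert_interpret(confirmed)   # expert system interprets
print(report)  # {'ampicillin': 'S', 'ciprofloxacin': 'R'}
```

The point of the structure is that the human review step sits between measurement and interpretation, which is precisely the human-in-the-loop design the MSF Foundation chose.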
Credit: Fondation MSF / Antibiogo
This is AI for Good at its best
The end goal of the app was never automation and a reduction in human oversight. So, while ML powers some elements of the app, a human is in the loop at all steps of the diagnosis, verifying the image classification and the measurements produced by the ML model.
As the app is more widely used in lower-resource settings, the MSF Foundation hopes it will increase the use and interpretation of ASTs, improve the accuracy of antibiotic prescriptions and patient outcomes, and reduce AMR.
Credit: Fondation MSF / Antibiogo
The Secrets of Success
When the MSF Foundation started designing Antibiogo nearly four years ago, success was by no means a foregone conclusion. So, what are the secrets to their triumph?
- The problem-owners were in the lead. Inputs from both aid workers and tech experts are required to trigger “lightbulb moments” and develop effective AI/ML tools for humanitarian contexts. Yet the voices of those who understand the operational and political risks and realities of humanitarian action remain largely absent from project design and from the public discourse on humanitarian AI. To leverage technological solutions to counter the world’s most pressing challenges, aid experts need to identify the problems and lead the design and development process.
- They designed a solution around the problem and not the tech. ML is only one of many components that contributed to the success of this app. Pilot projects using AI or ML should be designed around prioritised humanitarian need, drawing on a range of tools and resources, and not reverse-engineered to test a discrete capability.
- Everyone spoke the same language. Communication challenges between tech and aid experts abound. MSF staff speak about patients and microbiology, while technologists speak of human-centred design and user experience. The team of experts who developed Antibiogo worked from a shared understanding of the project’s aims and used plain language as much as possible, chipping away at the inherent barriers that exist between two very different industries.
- Google stumped up the cash and not just free tech. Funding for AI for Good projects is in short supply. Traditional development or humanitarian donors are risk averse and wary of providing funds to ventures that might fail to generate impact. And tech firms typically gift services and software, waive licence fees or provide other in-kind support without offering the cash necessary to pay for other project costs like aid worker salaries or data cleaning and management.
- MSF established the right relationships. Pro-bono tech support and cash from Google were not the only “x factor” at play. The MSF Foundation worked hard to establish and maintain ethical partnerships with others who brought critical expertise to the table, like Université Paris-Saclay, i2a, and the Laboratoire de Mathématiques et Modélisation d’Évry (LaMME), amongst others.
- Everyone prepared for a significant investment. Developing effective AI for Good systems requires valuable staff time, resources and strategic patience. It took the MSF Foundation close to four years to develop and evaluate Antibiogo, as well as millions of euros in grants, gifted software and staff support from Google.org.
While hackathons and innovation funds have a role to play in developing AI for Good, well-resourced, multi-industry collaborations led by aid experts may yield more projects as promising as Antibiogo.
# # #
Sarah W. Spencer is an independent consultant who specialises in public policy, development, humanitarian action, and technology for good. She is currently on sabbatical from the British Government. The views expressed in this article are wholly her own and do not represent the official policies or positions of the UK Government.