Sustainable breast cancer screening service delivery requires efficient patient and process management. Today we meet Mo Abdolell, CEO and Founder of Densitas, a Halifax, Canada-based company that uses AI to improve breast cancer screening.
Caroline Lair: Hi Mo, let’s talk about you, what led you to start Densitas back in 2013?
Mo Abdolell:
It’s a pleasure to meet you, Caroline. Before I begin, I would just like to say that The Good AI brings a very important voice (or rather voices) to the discourse on the use of AI. Actually, I have worked as a biostatistician for 25 years in academic teaching hospitals and in diagnostic radiology at Dalhousie University specifically for 15 of those years.
From as early as I can remember I wanted to help improve the health of underserved populations globally. I wanted to work for the World Health Organization in some capacity that made good use of my mathematical inclinations. Well, that never quite worked out. Instead, I studied biostatistics in graduate school, completing my thesis in machine learning before moving on to work in hospitals and health research institutions. I have worked with graduate students in diverse areas including biostatistics, epidemiology, biomedical engineering, and health/medical informatics with a focus on diagnostic imaging. This has afforded me a front-row seat at the intersection of diverse scientific disciplines and has given me a broad perspective on how artificial intelligence can be used to deliver better patient outcomes. What drew me to diagnostic imaging was the sheer volume of data that the specialty generates. Data is the currency of AI, and mammography in particular is well suited to AI solutions for clinical care.
The story actually begins earlier, in 2011 when I was speaking with Dr. Judy Caines — the founding medical director of the Nova Scotia Breast Screening program — and she insisted that the only way that breast density can be a useful measure is for it to be reliably reproducible. So that is when Densitas was founded. But it took a couple of years before work actually started in earnest.
Caroline Lair: With Densitas, you’re providing tools to radiologists to help with breast cancer screening. Which problem are you trying to solve?
Mo Abdolell:
I think the obsession with diagnostic applications of AI is going to change. There are plenty of other problems to solve well before the diagnosis, and they actually have an impact on the diagnosis. Indeed, clinical and diagnostic confidence is predicated on correctness, completeness, consistency, and good quality control.
Our view is that sustainable breast cancer screening service delivery requires efficient patient and process management, and that the ability of the radiologist or of computer-aided detection and diagnosis (often referred to as CAD software) to be effective is predicated on quality control. So our focus has been on all those processes, such as mammographic image quality and breast cancer risk assessment, that lead up to the radiologist actually reviewing the mammograms to make a diagnosis.
I will make an analogy for you. Breast cancer screening is like running a railway. Once you build the train and the rail station, and once you staff your railway system, what remains? It's all about managing your customers; it's about safety and efficiency. If you think about breast cancer screening, once you build the facility, purchase the mammography scanner, and staff your facility with radiologists, what remains? Patient and process management, and this ends up being an issue of safety, quality, efficiency, precision, and patient care. There are a lot of activities involved in managing a patient through their journey through breast cancer screening that add a lot of burden to the healthcare system and cost money, so there is a real requirement to provide sustainable healthcare service delivery.
Caroline Lair: We hear a lot about AI as a game-changer for cancer, especially breast cancer. Can you share a short overview of where we are today, what have been the key advances enabled by AI so far, and the challenges that come with it?
Mo Abdolell:
Typically, when people think of AI and breast cancer, CAD is the first thing that comes to mind. But CAD has been around for a long time, well before AI really took off. Now it is hard to find CAD solutions that don’t use AI because of the performance improvement that it is capable of producing. Sometimes you will hear the term “CAD 2.0” which simply means the transition from traditional image processing techniques to AI-based algorithms to solve the same detection and diagnosis problem.
But there are a lot of cautionary tales about how AI needs to be rigorously validated in the field. For instance, AI models can be quite susceptible to bias that is introduced by systematically skewed training data — this is especially an issue in the context of the great variability in detector plate models and post-processing proprietary algorithms across the numerous mammography scanner manufacturers. An AI model trained on mammograms from one scanner model does not simply work out of the box on mammograms from another scanner model.
The irony is that this is not unique to AI. These problems predate AI. It's just that AI is a lightning rod for surfacing these underlying modeling challenges. That's because AI can be more sensitive to this because, as it is often remarked, it is a black box. The reality is that AI is being commoditized now and even a 16-year-old with no knowledge of the underlying data can develop an AI algorithm in short order to detect breast cancer. Bottom line, garbage in, garbage out is an immutable rule of thumb when modeling data. That has always been the case pre-AI and continues to be the case for AI. AI is no substitute for good data hygiene. That is the biggest challenge for AI. It is data-hungry and it requires curated data, which means it has to be labeled by experts and it has to be representative of the real-world data that the models will encounter.
Everyone has acknowledged that AI won't ever replace radiologists. After all, AI algorithms are built on data that has been labeled by radiologists. As Garry Kasparov remarked about Centaur chess, the combination of AI and human intelligence will outperform both AI alone and humans alone. There have been studies that show this is true in radiology as well.
On the topic of bias, the interesting thing is that AI is somehow a lightning rod for these discussions when in fact these issues have always been present for even the simplest predictive models and algorithms predating what we now call AI. I think that's simply because AI-based models are more complex and opaque to the "average" person, whose data and statistical literacy are quite poor. Good study design and internal and external validation should root out bias. This includes properly curated data and appropriate sampling to establish the training data set. For example, there have been many studies showing just how poorly understood simple statistical tests are in the medical literature. Those same studies also show how, with the increasing complexity of the statistical models being used, there is a commensurate increase in the inappropriate use of those methods. So you can just imagine how that translates into even more complex machine learning and deep learning solutions. There is no substitute for rigorous and sound methodological foundations for building predictive models.
At its most basic, AI is simply more complex predictive modeling, and predictive modeling has been around far longer than what we now think of as AI. And all the issues related to good and poor performing predictive models port over to AI.
A poorly performing predictive model (or AI) is one that simply gets its predictions wrong. The more specific aspect of that is bias, whereby the predictions are poor systematically, not randomly.
So the challenge with AI, as with any other predictive modeling strategy, is to ensure that there is no systematic skewness in your training data that would steer the model to systematically misclassify new inputs. The example of algorithms systematically generating high-risk scores for individuals based on their race and socioeconomic status is well known, resulting in banks systematically rejecting business loan applications from those segments of the population.
In medicine, we can be prone to the same kinds of systematic bias, and so we need to be particularly diligent that we are methodologically rigorous when we train AI models. For example, modeling breast cancer risk on a training data set composed predominantly of women who are Caucasian, highly educated, and of high socioeconomic status, and then applying that trained model to women who are not Caucasian, are less educated, and have low socioeconomic status, can result in very poor predictive performance. This could be because the prevalence of breast cancer can be different between the two groups. Or it could be because the risk factors included in the training data have better data integrity in one group versus the other. Or women from different socioeconomic, cultural, and ethnic groups may not have access to or seek out medical care equivalently.
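The degradation described here can be sketched in a minimal, entirely hypothetical simulation (the group structure, effect sizes, and noise levels below are invented for illustration; it assumes NumPy and scikit-learn are available): a risk model is trained on a group with clean measurements and low disease prevalence, then evaluated on a second group where the same risk factor is measured with more noise and the prevalence differs. Discrimination (AUC) drops sharply on the second group even though nothing about the model changed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_group(n, prevalence, extra_noise_sd):
    """Simulate one subpopulation: a binary outcome with a given
    prevalence and a single risk factor whose measurement carries
    group-specific extra noise (poorer data integrity)."""
    y = (rng.random(n) < prevalence).astype(int)
    x = (y * 2.0                       # true effect of the outcome on the risk factor
         + rng.normal(0.0, 1.0, n)     # baseline biological variability
         + rng.normal(0.0, extra_noise_sd, n))  # group-specific measurement noise
    return x.reshape(-1, 1), y

# Group A: clean measurements, 10% prevalence (the training population)
X_train, y_train = make_group(5000, 0.10, 0.0)
X_test_a, y_test_a = make_group(5000, 0.10, 0.0)
# Group B: same construct, but noisier data and 30% prevalence
X_test_b, y_test_b = make_group(5000, 0.30, 3.0)

model = LogisticRegression().fit(X_train, y_train)
auc_a = roc_auc_score(y_test_a, model.predict_proba(X_test_a)[:, 1])
auc_b = roc_auc_score(y_test_b, model.predict_proba(X_test_b)[:, 1])
print(f"AUC on group A (like training data): {auc_a:.2f}")
print(f"AUC on group B (shifted population): {auc_b:.2f}")
```

The model itself is not "wrong"; it is simply being applied to data it never saw during training, which is exactly why external validation on the intended deployment population matters.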
Caroline Lair: What can't AI do for radiologists today?
Mo Abdolell:
To answer that question we need to consider first what humans can do. Humans can reason, infer causality, and port lessons learned from one domain to solve never-before-seen problems in an entirely different domain. Empathize. Make ethical decisions.
Linking disparate data sources under controlled research protocols to cobble together labeled data for training models is one thing; the environment in which models are deployed is quite another. Deploying a model with such extensive inputs is a non-trivial task. We are far from that being a reality.
A radiologist clinically interprets a case, bringing together all the patient’s clinical history and findings, including prior scans, and establishes a clinical diagnosis that requires thinking and reasoning based on many more inputs than simply image pixels. The ability to bring together all of that disparate information and process it is unmatched.
We have yet to see any AI that extends beyond simple “Narrow AI” — single task-based algorithms.
Caroline Lair: You were something of a pioneer in the area back in 2013 when you started, and it took you a few years to develop your first algorithm. What did you learn along the way?
Mo Abdolell:
Developing and introducing AI solutions in the medical space is not for the faint of heart. You don't do this in your basement. There is a standard formula for this, in which each step is critical, resource-intensive, and time-consuming:
First, you need access to data, and early on, you need to establish data-sharing agreements, managed in compliance with data privacy regulations.
Secondly, you have to collect your training data and get a large enough sample size, representative of the population to which you want to apply your algorithm, spanning the space of the particular construct that you are modeling.
Thirdly, you have to get the validation data, which raises the same issues as the training data, not to mention that it needs to be external to the training data. Data privacy is once again critical here in order to establish trust.
Fourthly, it's one thing to collect the data and images; it is another thing to get good quality data, so data curation is critical. Remember, garbage in, garbage out: if you have poorly curated data, the algorithm will produce nonsense output.
Fifth, clinical validation is critical as well; it is a sanity check. Do the algorithm's results demonstrate what we expect? Such studies take a lot of time and resources to conduct.
Sixth, there is regulatory clearance, whether it is FDA, CE mark, or Health Canada. That is key for the safety of the product, and there are actually good practices around securing regulatory clearance that make your product better.
Seventh, you must secure ISO 13485 certification for medical devices; otherwise, you can't get regulatory clearance, and many hospitals will not purchase your software.
Finally, you have to make sure that the system you have developed is interoperable with other medical information systems and hardware, and can integrate with the existing IT infrastructure.
Caroline Lair: You've recently announced the deployment of your intelliMammo™ platform throughout the Maritimes, which means Densitas® is now adopted by 30% of provincial health systems across Canada. Congratulations! What are the next steps for Densitas over the next 5 years? Expanding abroad, I guess?
Mo Abdolell:
Thank you! We are excited to be deploying our intelliMammo™ platform province-wide in the Maritime provinces. It means that every mammogram taken can be processed to generate breast density, clinical image quality and breast cancer risk assessments in a consistent manner for greater clinical confidence and better patient care. We are already expanding into global markets with some notable deployments in the USA and Europe.
We have a healthy pipeline of additional product features and solutions that will be released over the next year that will propel our platform even further to establish intelliMammo™ as a critical mammography enterprise solution for mammography facilities. And if we dream a little, we could be expanding further beyond mammography in the next few years.
Caroline Lair: To what extent can you help the underserved population with these tools?
Mo Abdolell:
We are actually partnering with RAD-AID International to establish an AI-based decision support program for breast health in medically underserved regions.
The first problem that I see generally with AI in underserved regions is that there needs to be an infrastructure to support the deployment of AI. Depending on whether AI is delivered on a local server or in the cloud, on the reliability of internet access, and on whether there is adequate hardware and technical expertise to manage onsite deployments, there are some real practical issues around how AI can be delivered to underserved regions.
Added to that are other challenges around data privacy and cultural perspectives on whether AI may or may not be trusted, an area we are aware of and one that RAD-AID is also working on very closely with their stakeholders across the world on their different projects.
Finally, it's one thing to provide a tool, whether it is AI or any other tool; it's another thing to make sure the user really understands how to use it. And that brings up the issue: is it there to replace the radiologist, to augment the radiologist, or to work in tandem with the radiologist? Those are questions we also have in well-resourced regions. But in underserved environments there is sometimes a million-to-one ratio of people to radiologists. There are often few radiologic technologists available, if any, so the question of replacing or augmenting the radiologist is not that relevant anymore. The question has to be a little different: in the absence of resources, what role can AI really play? These are very interesting and important topics to address when you want to deploy AI in underserved regions.
Caroline Lair: There are quite a lot of players in the industry, according to you what will it take to make a difference, and what would be your advice for the youngest startups emerging in the area?
Mo Abdolell:
You’re right. There are plenty of players in the industry. But as I was told years ago, it’s not the idea that differentiates you — it’s your ability to build a team that can execute on your vision. Stay focussed on what you do and do it better than anyone else.
That was the most important advice I got when we started. We’re no longer a startup, but this remains true. I am grateful to have a brilliant team who are truly good people, who care deeply about doing good and bring a truly collaborative and interdisciplinary perspective to their work. Our products reflect the character of our team.
My advice to any startup is to work on a problem that resonates with you, that you feel you can really make an impact on, find a mentor, stay focussed, work hard, surround yourself with people smarter than yourself, be grateful to those who support you, and constantly learn and re-calibrate when you make mistakes — because you will make lots of mistakes along the way.
ABOUT THE AUTHOR
Caroline Lair is the CEO and Founder of The Good AI. She is also a co-founder of the Women in AI non-profit. Her academic background is in International Relations, with a degree from Université Jean Moulin (Lyon III), along with a business management degree from Emlyon Business School.