Like any powerful technology, artificial intelligence requires a degree of oversight to ensure it is used in a sustainable, ethical, and responsible manner. Today, we are underlining this importance through an interview with one of our startup community members: Deeploy. Maarten Stolk, co-founder and CEO, answered questions on the inception, motivation, and exciting work currently underway at Deeploy, where they help AI scientists and practitioners deploy their AI models on a responsible AI platform without compromising on transparency, control, or compliance.
10 Questions
Michelle Diaz: Tell me more about yourself, your work, and your background. How did all of these factors lead to the start of Deeploy? What is the motivation and story behind the founding of the company?
Maarten Stolk: The journey of Deeploy started in 2020, but the ideas that laid the foundation for Deeploy were shaped in the years before. While working on multiple machine learning projects at previous companies, we realized that, with a growing number of ML models going to production, it was time for a platform that ensures manageable, explainable, and accountable models.
Michelle Diaz: Explain how artificial intelligence is used in your product.
Maarten Stolk: At Deeploy we support high-risk use cases in industries such as fintech, banking, and healthcare. Companies in these industries develop complex AI models that require an MLOps tool that ensures responsible usage of AI. As AI becomes a bigger part of our day-to-day lives, its responsible (and ethical) use becomes increasingly important. So we support companies that incorporate AI models by providing the necessary explainability functionality out of the box with our platform, Deeploy.
Deeploy Meetup Event
Michelle Diaz: Are there other technologies paired with AI/the platform? IoT, for example? If there are, explain how you make use of them.
Maarten Stolk: We offer cutting-edge technology by constantly doing research and listening to our customers' requirements. A great example is our conversational explainability method, which allows users to interact with a given model prediction. We will publish a white paper on this new technology in April.
New developments in generative AI are changing the public perspective drastically. On the one hand, we expect more from AI applications and their interfaces with humans. On the other hand, some worry, for good reasons, that we are losing control of AI.
Michelle Diaz: Which pain points were you facing when you came to build a Responsible AI Platform?
Maarten Stolk: We basically experienced two pain points. On the one hand, most AI models never made it to production, for various reasons. Most importantly, after the first steps of innovation, the hardest part is integration, both into products and into the way people work. That is the point where people start to think about potential risks, and innovation often stops due to a lack of transparency, limited understanding of the outcomes, and other important reasons to halt innovation in AI.
On the other hand, whenever AI does get to production, we don't always have full control. A good example here is explainability to end users and the feedback loop. Take healthcare: AI is being used more and more in healthcare, but AI does make mistakes. Now, if the outcome is explainable to medical experts, those experts can overrule decisions, give feedback, and steer AI algorithms. But often, the model is to a certain extent a black box, leading to false and opaque outcomes. There is no way to steer the algorithm in the right direction, even though in some cases we do know the outcome is wrong.
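To make this concrete, here is a minimal sketch of what per-prediction explainability can look like. It assumes scikit-learn and a simple linear model (the interview does not name Deeploy's tooling, so this is purely illustrative): each prediction is decomposed into per-feature contributions that a domain expert can review and, if needed, overrule.

```python
# A minimal sketch of per-prediction explainability, assuming scikit-learn
# and a linear model. Names are illustrative, not Deeploy's actual API.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Train a simple classifier on a public medical dataset.
data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

# For a linear model, each feature's contribution to the log-odds
# is simply coefficient * (standardized) feature value.
x = X[0]
contributions = model.coef_[0] * x

# Surface the top drivers so a medical expert can judge whether the
# reasoning is plausible and overrule the prediction if it is not.
top = np.argsort(np.abs(contributions))[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: {contributions[i]:+.3f}")
print("model prediction:", model.predict(x.reshape(1, -1))[0])
```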
Workshop at Deeploy
Michelle Diaz: What does Responsible AI mean to you? And why is it now crucial that companies take Responsible AI seriously?
Maarten Stolk: We are facing a tremendous amount of innovation in the AI landscape: not only the launch of GPT-4, DALL-E, and other generative AI solutions, but also developments in how to optimize business processes. Alongside these developments, machine learning is becoming ever more prevalent, and keeping control of these models after deployment while ensuring responsible usage remains a significant challenge for companies. With AI regulation coming up across the globe, it is also a pressing one: we need solutions to make AI more transparent to everyone.
Responsible AI to me means that AI is being used for the greater good and that everyone understands well enough how AI comes to its conclusions. AI should be a buddy, a colleague, not a scary black box system.
Michelle Diaz: Where do you see the usage of Responsible AI heading? How do companies incorporate it, and which factors are especially important?
Maarten Stolk: Usage of Responsible AI is of great importance for every company that has an impact on our day-to-day lives. Not only that: it is also required from an ethical perspective. Putting the human in the loop is an essential requirement, as it results in a conversation between both parties. Enabling AI-human interaction is beneficial for further development. However, many challenges lie ahead:
- The complexity of model deployments (e.g., giant generative AI)
- AI as a service, without access to source code or source data
- The frequent handovers between development teams
- Maintaining accountability within the model and the Data Science Team
- Understandable explanations, presenting model outputs in an easily digestible manner
- Model steering through human feedback loops
For me, the explainability and feedback loop is the most important here, as it has been in other technological developments of past decades. By making sure everyone is able to steer AI algorithms, we make sure AI is aligned with human values. In that sense, this works much better than extensive responsible AI frameworks.
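As a rough illustration of such a feedback loop, here is a minimal sketch under the assumption that expert overrides are logged and folded back into the training data at retraining time; all names are hypothetical, not Deeploy's API.

```python
# A minimal sketch of a human feedback loop: expert overrides are
# collected and later reused as corrected labels. Hypothetical names.
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    """Collects expert corrections used to steer the model on retrain."""
    corrections: list = field(default_factory=list)

    def record(self, features, model_prediction, expert_label):
        # Only store the cases where the expert disagreed with the model;
        # these disagreements are the signal that steers the algorithm.
        if model_prediction != expert_label:
            self.corrections.append((features, expert_label))

store = FeedbackStore()
store.record(features=[0.3, 1.2], model_prediction=1, expert_label=0)

# At retraining time the corrections are appended to the training set,
# nudging the model toward the experts' judgment.
print(len(store.corrections), "override(s) queued for the next retrain")
```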
Michelle Diaz: What is the impact of Deeploy within these high-risk use cases/industries?
Maarten Stolk: There are quite a few important steps in the MLOps lifecycle, like statistical monitoring, human interaction/explainability, and governance. Several challenges lie ahead in our industry, and stakeholders expect transparency, explainability, and accountability of AI models in a responsible manner while reducing risks. This is also reflected in new legislation, which restricts opaque or irresponsible use of AI in high-risk applications.
Michelle Diaz: And who are your target customers? Can you outline a typical customer?
Maarten Stolk: As high-risk industries especially require ethical decision-making, and thus usage of AI models in a responsible manner, our focus areas are fintech, banking, and healthcare. We already have a great impact on these industries by keeping a human-centric approach. A great example of this is bunq. Bunq uses AI in its transaction monitoring system for anti-money laundering. Every stakeholder in the organization needs to understand how the AI model arrives at its predictions: not only bunq's consumers but also its customer service team, as they need to understand, explain, and give feedback.
Deeploy Building a Community
Michelle Diaz: What makes Deeploy unique, taking into account the competitors, if any? Are there any other vendors similar to Deeploy? What distinguishes them from Deeploy? And what are your plans to address the competition and goals for expansion?
Maarten Stolk: With Deeploy, deploying machine learning models is made simple and straightforward, allowing data scientists to deploy models in just a matter of minutes. Deeploy automatically logs and stores all information related to model changes and predictions, ensuring full traceability for years to come. Furthermore, we do lots of research on the explainability of such complex models, giving us a unique position in the market.
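As an illustration of what prediction-level logging for traceability can look like, here is a minimal sketch; the field names and the append-only JSON-lines format are assumptions for the example, not Deeploy's actual schema.

```python
# A minimal sketch of prediction-level audit logging for traceability.
# Field names and file format are illustrative assumptions, not
# Deeploy's actual schema.
import hashlib
import json
from datetime import datetime, timezone

def log_prediction(model_version, inputs, output, log_path="audit.log"):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hashing the inputs makes the record tamper-evident without
        # requiring the raw (possibly sensitive) data to be re-read.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "inputs": inputs,
        "output": output,
    }
    # Append-only log: one JSON line per prediction.
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_prediction("fraud-model-v1", {"amount": 950.0, "country": "NL"}, "flagged")
```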
Michelle Diaz: What makes the work of Deeploy important for society at large? What is your long-term vision/ambition?
Maarten Stolk: AI is the biggest revolution of our time. AI is outsmarting humans and rapidly changing the way we live and work. But AI makes mistakes and doesn't explain why. If we (society) don't understand why, we cannot appeal against or steer AI. Our long-term goal is to make human-AI interaction the norm: people are able to check AI decisions, leading to higher adoption and more trust in AI. Deeploy makes AI decisions explainable to everyone, bridging the gap between data teams and society.
—
Opportunities at Deeploy
Accountability is important in any field, particularly for technologies that affect our daily lives. So if you or someone you know is interested in joining Deeploy in their mission for responsible use of AI, check out their current vacancies on The Good AI's Job Board; there are full-time positions available.