Ivan Danielov Ivanov is the Chief Technology Officer of Humans in the Loop, a venture that provides model training and validation services for machine learning, with the goal of helping companies and AI practitioners use the technology ethically. Humans in the Loop does this by going beyond the gold-standard dataset collection and annotation required for initial model training. Learn all about their motivation and work in this interview with Ivan, brought to you by The Good AI team!
10 Questions:
Michelle Diaz: Tell me about Humans in the Loop (HITL) & how it started. What was the motivation behind its founding?
Ivan Danielov Ivanov: Humans in the Loop provides data labeling and human-in-the-loop services for AI companies, on the one hand, and job opportunities for people from conflict-affected countries, on the other. The company was started by Iva Gumnishka, our CEO, back in 2017. She was just finishing her degree in Human Rights at Columbia University and wanted to write her thesis about refugees in Bulgaria. She saw that refugees were facing major challenges integrating into the labor market, so she started Humans in the Loop as a way to provide them with work and learning opportunities.
Michelle Diaz: Humans in the Loop is classified as a social enterprise, having both a for-profit and non-profit entity. Can you expand on how this works, and the decision behind it?
Ivan Danielov Ivanov: Our for-profit branch finds opportunities on the market for data labeling and human-in-the-loop services and organizes the delivery process so that high-quality results are delivered on time. The non-profit branch develops partnerships with other NGOs and provides training to people from conflict-affected countries. The two branches work together to ensure that the workforce has jobs and that clients are happy with the results.
Michelle Diaz: Can you explain precisely the different services you offer? And the machine learning behind them?
Ivan Danielov Ivanov: The typical services that we offer are dataset annotation and dataset collection for computer vision. Our well-trained, dedicated workforce is capable of collecting and annotating high-quality, bias-free datasets, which helps make AI models fairer and more robust. Additionally, we have just launched three new types of services: human-in-the-loop for active learning, real-time edge case handling, and reinforcement learning with human feedback. The new services are powered by our brand-new Humans in the Loop Platform, which is accessible to our customers through an API. We are also building internal machine learning capabilities to make the annotation process more efficient, improve the quality of annotations, and organize the work process optimally.
Michelle Diaz: What does your typical customer look like?
Ivan Danielov Ivanov: Our typical customers are companies that already have a computer vision solution in production. They are usually looking to continuously improve the quality of their AI models or to ensure the accuracy of AI decisions in production by adding a human reviewer as part of the ML pipeline. Some of the industries that we operate in are healthcare, automotive, surveillance, and agritech.
Michelle Diaz: At what stages of the Machine Learning lifecycle do human workers get involved and why is it important to involve a human-in-the-loop?
Ivan Danielov Ivanov: Our services can support multiple stages of the machine learning lifecycle. Our team can be involved right from the start of a machine learning project to collect and annotate a high-quality dataset for training or testing purposes. Once the model passes the proof-of-concept stage, our team can monitor the production model in real time to handle edge cases. This is especially valuable during the early production stages, when the model is still not very stable. Perhaps the biggest value our team can bring is when it is plugged into the re-training pipeline of the ML model. Our API is well suited to active learning pipelines and closes the re-training loop to ensure efficient, continuous improvement of ML models over time. This approach reduces the risk of model deterioration due to data drift, and it also reduces the overall amount of annotated data needed for model re-training.
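To make the active learning idea concrete, here is a minimal, generic sketch of such a re-training loop, in which the model's least-confident predictions are routed to human annotators and fed back into training. The dataset, model, and `request_human_labels` helper are illustrative assumptions for this article, not Humans in the Loop's actual platform or API.

```python
# Generic uncertainty-sampling active learning loop (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic features standing in for a real computer-vision dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

labeled_idx = list(range(50))              # small labeled seed set
pool_idx = list(range(50, len(X)))         # "unlabeled" production pool
labels = {i: y[i] for i in labeled_idx}    # labels known so far

def request_human_labels(indices):
    """Hypothetical stand-in for sending samples to human annotators.
    In this simulation we simply reveal the ground-truth labels."""
    return {i: y[i] for i in indices}

model = LogisticRegression(max_iter=1000)

for round_ in range(5):
    train_idx = sorted(labels)
    model.fit(X[train_idx], [labels[i] for i in train_idx])

    # Uncertainty sampling: query the pool samples the model is least sure about.
    confidence = model.predict_proba(X[pool_idx]).max(axis=1)
    queried = [pool_idx[i] for i in np.argsort(confidence)[:25]]

    # Close the loop: human-provided labels flow back into the training set.
    labels.update(request_human_labels(queried))
    pool_idx = [i for i in pool_idx if i not in set(queried)]

    print(f"round {round_}: {len(labels)} labeled samples")
```

In a production setting, the "human" step would be the annotation workforce Ivan describes, and the selection strategy, batch size, and re-training cadence would all depend on the application.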
Michelle Diaz: In which sector or use case will the services of humans in the loop be most impactful?
Ivan Danielov Ivanov: We see the highest impact of our services in use cases where a single false-positive or false-negative decision made by AI can have a significant cost. Take, for example, a camera equipped with a burglary detection AI. A false positive means the security team has to handle a false alarm, which can be costly. A false negative, on the other hand, means a burglary goes unnoticed. Both unwanted outcomes can be avoided by including a real-time human in the loop to verify the model's decisions.
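As a rough illustration of that kind of real-time verification, the sketch below routes low-confidence detections to a human reviewer instead of acting on them automatically. The threshold, the `Detection` type, and the `human_review` function are hypothetical examples, not a description of Humans in the Loop's system.

```python
# Minimal sketch: escalate uncertain, high-cost detections to a human reviewer.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90   # below this, a human confirms before alarming

@dataclass
class Detection:
    label: str         # e.g. "burglary"
    confidence: float  # model score in [0, 1]

def human_review(detection: Detection) -> bool:
    """Hypothetical stand-in for sending the frame to a human reviewer
    and waiting for their verdict."""
    print(f"Sent for review: {detection.label} ({detection.confidence:.2f})")
    return True  # the reviewer either confirms or rejects the detection

def handle_detection(detection: Detection) -> bool:
    """Raise an alarm only for confident detections or human-confirmed ones."""
    if detection.confidence >= CONFIDENCE_THRESHOLD:
        return True                      # confident: alarm immediately
    return human_review(detection)       # uncertain: a person decides

# A borderline detection is escalated to a person instead of triggering a
# costly false alarm or being silently dropped.
print(handle_detection(Detection(label="burglary", confidence=0.62)))
```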
Michelle Diaz: Principally, HITL addresses bias in AI; it focuses on ensuring responsible use of the technology. Given how quickly the technology is improving (ChatGPT, for example), do you think companies will bake this consideration into their products, or will ventures like HITL be needed to continuously account for the ethical use of the technology?
Ivan Danielov Ivanov: Ethics has become a very important topic in the AI community in recent years. More and more companies are directing significant resources toward making their AI solutions fairer and more trustworthy. However, many of the ethical problems with AI come from the data used to train the models, and for many use cases that training data is the result of deliberate human work. This means the workforce needs to be capable of producing fair and ethical datasets, and this is exactly where the power of Humans in the Loop lies.
Michelle Diaz: Can you explain how HITL avoids bias in the implementation of its services? In computer vision or data labeling in general, for example.
Ivan Danielov Ivanov: We approach the problem of reducing bias in our datasets from multiple sides. For example, we ensure that our workforce is demographically diverse in order to avoid any inherent cultural biases. We also pay great attention to properly defining the dataset taxonomy in order to reduce subjectivity to a minimum. This is especially important when annotating data about humans. We also work with a dedicated workforce that is trained to strictly follow our ethical standards. The work process may also include additional steps for ensuring the quality of results, such as manual QA, consensus steps, and validation examples.
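For readers unfamiliar with these quality-control terms, here is a small, generic sketch of two of them: a consensus (majority-vote) step across several annotators, and validation ("gold") examples with known answers used to track annotator accuracy. The data and function names are illustrative only, not the company's internal tooling.

```python
# Two common annotation quality-control steps, in miniature.
from collections import Counter

# Labels from three annotators for the same images.
annotations = {
    "img_1": ["car", "car", "car"],
    "img_2": ["car", "truck", "car"],
    "img_3": ["bus", "bus", "truck"],
}

def consensus(labels, min_agreement=2):
    """Return the majority label if enough annotators agree, else flag for QA."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_agreement else "NEEDS_MANUAL_QA"

for image, labels in annotations.items():
    print(image, "->", consensus(labels))

# Validation examples: items with known answers mixed into the task,
# used to estimate each annotator's accuracy over time.
gold = {"img_1": "car", "img_3": "bus"}
annotator_0 = {img: labels[0] for img, labels in annotations.items()}
accuracy = sum(annotator_0[i] == g for i, g in gold.items()) / len(gold)
print(f"annotator_0 gold accuracy: {accuracy:.0%}")
```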
Michelle Diaz: What are the main hurdles the venture is facing, as a start-up, particularly, in the field of AI?
Ivan Danielov Ivanov: One of our biggest challenges is that, due to active conflicts in multiple parts of the world, our pool of available workers from conflict-affected countries is growing faster than we can currently provide work for. We are addressing this by opening new markets, for example in the active learning domain.
[Photo: the Humans in the Loop team]
Michelle Diaz: What is the roadmap for the company in the next 2-3 years?
Ivan Danielov Ivanov: On the technological side, we are in the process of building our Humans in the Loop Platform, which connects our human workforce to the customer's ML infrastructure. In the coming months and years, it will help our workforce become more efficient and flexible in working on projects of diverse scales and domains. We expect to open new markets for continuous monitoring and improvement of AI solutions, which will in turn provide more jobs for our workforce. On the impact side, our goal is to expand our ability to provide jobs and professional training to people from conflict-affected countries.
—
Opportunities at Humans in the Loop
If you or someone you know is interested in working for Humans in the Loop, you can check out their current vacancies on The Good AI's Job Board, where full-time positions are available. Effectively utilizing AI for good can be tricky, but we are confident that ventures like Humans in the Loop will allow society to maximize the potential of AI while limiting its possible downsides.