Thu, 02/18/2021 - 11:43

COMPANIES INTERVIEWS: CLEARBOX AI

Investigating how companies are approaching ethics in Europe, in this article we hear the experience of Luca Gilli, CTO and co-founder of Clearbox AI, where he leads R&D and Product Engineering.

His role is to apply mathematical methodologies to develop and improve the company's core technology, on which its Model Management platform is based.

Tell us about your company

Clearbox AI offers a model assessment and management platform for data scientists and managers. It plugs into MLOps pipelines to enable trustworthy, human-centric AI by providing actionable insights on data drift, robustness and explainability for the AI models companies use in their business.
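As an illustration of the data-drift insights mentioned above, a pipeline can compare the distribution of a feature in live data against the training data. The sketch below uses the two-sample Kolmogorov–Smirnov statistic; the function name and threshold are illustrative assumptions, not Clearbox AI's actual method.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples a and b (0 = identical, 1 = disjoint)."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])          # evaluate both ECDFs at every sample point
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

# Hypothetical usage: flag a feature whose live distribution has shifted.
train_feature = np.array([1.0, 2.0, 3.0, 4.0])
live_feature = np.array([5.0, 6.0, 7.0, 8.0])
drift_detected = ks_statistic(train_feature, live_feature) > 0.2  # threshold is arbitrary
```

In practice a monitoring tool would run such a check per feature on a schedule and surface the features whose statistic exceeds a calibrated threshold.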
 

 

There is great hype around AI, and some expectations are probably exaggerated. What impact should we expect from this innovation, in your opinion? Is there any particular example of a positive impact of AI, from your company or your field, that you would like to share?

What we observed when talking to practitioners in the field is that, despite the big hype, many AI projects and ideas come to a sudden stop at the end of the proof-of-concept (PoC) phase. This is because of the hurdles that need to be overcome when putting an idea into production, such as explainability, monitoring, uncertainty analysis and active learning. Our company aims to help data scientists 'convert' PoCs into useful and robust tools by addressing these issues.

One of the core topics of the European Strategy is Trustworthy AI, with its human-centric vision. What does Trustworthy AI mean in your view, and how does it translate into your everyday research and business?

For us, Trustworthy AI means two main things: making sure we understand model behaviour by using a set of post-hoc interpretability techniques, and performing real-time uncertainty analysis. We should not expect models to be infallible, but we should have robust estimates of the probability of a mistake based on real-time data. Models should be able to say 'I don't know' without being under- or overconfident. At Clearbox AI we focus our research and development efforts on solving these two important problems.
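One simple way a model can "say I don't know" is to abstain whenever its top-class probability falls below a confidence threshold. The sketch below assumes well-calibrated probabilities and an arbitrary threshold; it is a minimal illustration of the idea, not Clearbox AI's implementation.

```python
import numpy as np

def predict_with_abstention(probs, threshold=0.8):
    """Return class predictions, replacing them with "I don't know"
    whenever the model's top-class probability is below the threshold."""
    probs = np.asarray(probs, dtype=float)
    top = probs.max(axis=1)                      # confidence of the top class
    preds = probs.argmax(axis=1).astype(object)  # object dtype so we can mix ints and strings
    preds[top < threshold] = "I don't know"
    return preds

# Hypothetical probability vectors over two classes for three inputs.
probs = [[0.95, 0.05],   # confident -> predict class 0
         [0.55, 0.45],   # uncertain -> abstain
         [0.10, 0.90]]   # confident -> predict class 1
predictions = predict_with_abstention(probs)
```

A calibrated threshold (chosen on validation data) is what keeps such a model from being under- or overconfident about when to abstain.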

As a matter of fact, working on AI raises ethical issues. What are the ethical challenges in your job, and what efforts do you and your team make to address them? Ideally, what resources (e.g. expert advice, training, software tools) would help you address them better?

Rather than facing the challenges ourselves, we believe we are addressing them through our solution: we consider ourselves a solution provider enabling Trustworthy AI. While we don't address every ethical aspect of AI, we raise awareness among model builders and managers, in terms of usefulness, validation, testing, robustness and interpretability, about how to use the information or decisions provided by an AI system in a responsible and effective way. In addition, we are involved in global initiatives to contribute our share to developing frameworks for assessing AI models in fields like digital healthcare, where AI ethics are of the utmost importance. We recently published a joint paper on the topic.

Would you like to share a personal reflection or experience in your work that has made you realize the importance of an ethical approach? Which values inspire you more at work?

It is not directly related to our work, but I find the concept of AI nudging a bit scary. For example, right now there is a high risk of creating online echo chambers due to model recommendations that are generated in order to maximize a few KPIs. Making these models more transparent is very important to avoid dangerous situations in this context.

AI has been shown to have important social (e.g. racial bias) and environmental (e.g. energy consumption) consequences. What is your position on that, and how does it influence your job?

I feel the energy consumption aspect is not stressed enough. I think there is an issue with using overly complex models for problems that could be tackled with simpler ones; as they say, "cracking a nut with a sledgehammer". Another issue relates to putting models into production on cloud infrastructure: I really think there is a big margin for improvement in the computational resources required to deploy a model, in terms of storage, CPU and memory usage.

 

Source: Luca Gilli