XMANAI Hybrid model
The EU-funded XMANAI project brings explainable AI to industry, and this asset is an example of the models developed during the project.
The EU-funded XMANAI project focuses on explainable AI, a concept that counters the 'black box' problem in machine learning, where even the designers cannot explain why the AI reaches a specific decision. XMANAI carves out a 'human-centric', trustful approach that is tested in real-life manufacturing cases. The aim is to transform the manufacturing value chain with 'glass box' models that are explainable to a 'human in the loop' and produce value-based explanations.
Despite the indisputable benefits of AI, humans typically have little visibility into how AI systems make decisions or predictions, due to the so-called "black-box effect": many machine learning and deep learning algorithms cannot be examined after execution to understand specifically how and why a decision was made. The inner workings of machine learning and deep learning are not transparent, and as algorithms become more complicated, fears of undetected bias, mistakes, and miscomprehensions creeping into decision making naturally grow among manufacturers and practically every stakeholder.

In this context, Explainable AI (XAI) is an emerging field that aims to address how black-box decisions of AI systems are made, inspecting and attempting to understand the steps and models involved in decision making in order to increase human trust.

XMANAI aims to place the power of Explainable AI at the service of manufacturing and human progress, carving out a "human-centric", trustful approach that is respectful of European values and principles, and adopting the mentality that "our AI is only as good as we are". Demonstrated in four real-life manufacturing cases, XMANAI will help the manufacturing value chain shift towards the amplifying-AI era by coupling (hybrid and graph) AI "glass box" models, which are explainable to a "human-in-the-loop" and produce value-based explanations, with complex AI asset (data and model) management, sharing, and security technologies that multiply latent data value in a trusted manner, and with targeted manufacturing apps that solve concrete, high-impact manufacturing problems.
This regression model exploits historical data measured by machine sensors to perform inference on future usage and detect possible future faults in the machine itself. Explainability metrics target sensor groups and are powered by the SHAP library.
This recurrent neural network model exploits historical data measured by machine sensors to perform inference on future usage and detect possible future faults in the machine itself. Explainability metrics target sensor groups and are powered by the SHAP library.
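To illustrate what "explainability metrics targeting sensor groups" can look like, the following is a minimal, self-contained sketch. The actual asset uses the SHAP library; here, plain permutation importance computed over whole sensor groups stands in for SHAP-style group attributions so the example runs with NumPy alone. All data, sensor names, and the linear stand-in model are illustrative assumptions, not part of the XMANAI asset.

```python
# Hypothetical sketch: group-level attribution for machine-sensor data.
# A simple linear regressor stands in for the project's regression/RNN
# models, and permutation importance over sensor groups stands in for
# the SHAP-powered explanations described above.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historical sensor data": 500 readings from 6 sensors,
# organised into 2 illustrative groups.
X = rng.normal(size=(500, 6))
# The target (e.g. a wear indicator) depends mostly on the
# "temperature" group (sensors 0-2) in this toy setup.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 4] + rng.normal(scale=0.05, size=500)

# Fit the stand-in model by least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

baseline = mse(X, y, w)

# Sensor groups as they might be defined for a machine (illustrative).
sensor_groups = {"temperature": [0, 1, 2], "vibration": [3, 4, 5]}

def group_importance(X, y, w, cols):
    """Jointly permute one sensor group's rows and report the MSE increase."""
    Xp = X.copy()
    Xp[:, cols] = Xp[rng.permutation(len(Xp))][:, cols]
    return mse(Xp, y, w) - baseline

for name, cols in sensor_groups.items():
    print(f"{name}: +{group_importance(X, y, cols=cols, w=w):.3f} MSE when permuted")
```

A larger MSE increase for a group means the model leans on those sensors more, which is the same kind of group-level signal the SHAP-based explanations surface, only computed differently.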