Nowadays, decision-making processes for real-world problems rely on ML models that learn a function mapping observed features to decision outcomes. However, these models usually do not convey causal information about the associations in observational data, and are therefore not easily understandable for the average user. This makes it impossible to retrace the models' steps or rely on their reasoning. Hence, it is natural to investigate more explainable methodologies, such as causal discovery approaches, since they apply processes that mimic human reasoning. For this reason, we used such methodologies to build more explicable models that replicate human thinking and are easier for the average user to understand. More specifically, we proposed applying causal discovery within methods such as decision trees and random forests, which are by themselves highly explainable, correlation-based methods.
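To make the idea concrete, the following is a minimal, hypothetical sketch of combining a causal-style screening step with an ordinary decision tree: features that appear related to the outcome only through another variable (here tested with a crude partial-correlation surrogate for a conditional-independence test) are dropped before fitting. The data, variable names, and the 0.1 threshold are illustrative assumptions, not the project's actual method.

```python
# Illustrative sketch only: a toy causal-style feature screen before
# fitting a decision tree. All names and thresholds are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
cause = rng.normal(size=n)                      # directly influences the outcome
proxy = cause + rng.normal(scale=0.1, size=n)   # correlated with y only via `cause`
noise = rng.normal(size=n)                      # irrelevant feature
y = (cause + 0.3 * rng.normal(size=n) > 0).astype(int)
X = np.column_stack([cause, proxy, noise])

def partial_corr(x, t, z):
    """Correlation of x and t after linearly regressing out z
    (a crude stand-in for a conditional-independence test)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    rt = t - np.polyval(np.polyfit(z, t, 1), z)
    return np.corrcoef(rx, rt)[0, 1]

# Keep `cause`; keep `proxy` only if it still relates to y given `cause`;
# keep `noise` only if it is marginally correlated with y.
keep = [0]
if abs(partial_corr(X[:, 1], y.astype(float), X[:, 0])) > 0.1:
    keep.append(1)
if abs(np.corrcoef(X[:, 2], y)[0, 1]) > 0.1:
    keep.append(2)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, keep], y)
print("kept feature indices:", keep)
```

In this toy setup the screen tends to discard `proxy` and `noise`, so the fitted tree splits only on the feature that actually drives the outcome, which is the kind of behavior that makes the resulting model easier to interpret causally.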
1 Conference Paper
This Humane-AI-Net micro-project was carried out by INESC TEC (Joao Gama), Università di Pisa (UNIPI, Dino Pedreschi) and Consiglio Nazionale delle Ricerche (CNR, Fosca Giannotti).