Additional information: Explainability is a key requirement for human-centred AI approaches. By using classic unsupervised learning techniques and an enhanced similarity metric to wander among the transparent trees of a forest, following breadcrumbs, the interpretable essence of tree ensembles emerges. With our approach, which we call “LionForests”, the interpretation these systems provide can be a single, comprehensive rule.
The interpretation technique implemented in LionForests is intended to provide explanations for Random Forest classifiers, towards a trustworthy machine learning component. LionForests actively takes the internal structure of the model it explains into consideration.
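To illustrate the general idea of exploiting a forest's internal structure, the sketch below extracts a conjunctive rule for one instance by intersecting the feature thresholds along the decision paths of the trees that agree with the ensemble prediction. This is a simplified illustration using scikit-learn, not the actual LionForests algorithm (which additionally involves unsupervised learning and a similarity metric, omitted here).

```python
# Hedged sketch: NOT the LionForests algorithm itself, only an illustration of
# turning the decision paths of a random forest into a single conjunctive rule.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
X, y = data.data, data.target
rf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

x = X[0:1]                      # the instance to explain
prediction = rf.predict(x)[0]   # ensemble decision

# For each tree that agrees with the ensemble, tighten per-feature bounds
# using the split thresholds visited on the instance's decision path.
bounds = {}  # feature index -> [lower, upper]
for tree in rf.estimators_:
    if tree.predict(x)[0] != prediction:
        continue
    t = tree.tree_
    for node in tree.decision_path(x).indices:
        f = t.feature[node]
        if f < 0:               # leaf node, no split
            continue
        thr = t.threshold[node]
        lo, hi = bounds.setdefault(f, [-np.inf, np.inf])
        if x[0, f] <= thr:      # instance went left: threshold is an upper bound
            bounds[f][1] = min(hi, thr)
        else:                   # instance went right: threshold is a lower bound
            bounds[f][0] = max(lo, thr)

rule = " AND ".join(
    f"{lo:.2f} < {data.feature_names[f]} <= {hi:.2f}"
    for f, (lo, hi) in sorted(bounds.items())
)
print(f"IF {rule} THEN class {data.target_names[prediction]}")
```

The resulting conjunction of feature ranges is the kind of simple, comprehensive rule the text refers to; the real method additionally reduces the paths and features involved to keep the rule compact.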
The LionForests component supports GDPR compliance (Articles 13–15) by providing a way to interpret the decisions of a random forest model in the form of rules. This approach addresses the “explicability” requirement that the GDPR sets for automated decision-making processes that have an impact on humans.