LionForests
Local Interpretation of Random Forests
@ Intelligent Systems Lab - Ioannis Mollas
License
GNU General Public License (GPL) v3
Main Characteristic
LionForests is a methodology for shedding light on the predictions of the often-misjudged family of tree ensemble algorithms, addressing the drawbacks of their black-box, uninterpretable nature.
Research areas
Explainable AI
Technical Categories
Machine learning
Business Categories
Earth Observation
Last updated
27.05.2021 - 10:04
Detailed Description
Additional information: Explainability is a key requirement for human-centred AI approaches. Using classic unsupervised learning techniques and an enhanced similarity metric, our approach, which we call “LionForests”, wanders among the transparent trees inside a forest, following breadcrumbs, to bring out the interpretable essence of tree ensembles. The interpretation it provides can be a simple, comprehensive rule.
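As a rough sketch of the underlying idea (this is not the official LionForests implementation; the scikit-learn dataset, model, and attribute names below are illustrative assumptions), the following Python snippet keeps the trees that vote for the predicted class, walks each tree's decision path for the instance, and intersects the split thresholds into one conjunctive rule:

# Rough sketch of path aggregation, not the authors' code: keep the
# trees that vote for the predicted class, walk each decision path,
# and intersect the split thresholds into one conjunctive rule.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
X, y, names = data.data, data.target, data.feature_names
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

x = X[:1]                                  # the instance to explain
pred = rf.predict(x)[0]
pred_idx = np.where(rf.classes_ == pred)[0][0]

bounds = {}                                # feature -> [lower, upper]
for est in rf.estimators_:
    if est.predict(x)[0] != pred_idx:      # sub-trees predict class indices
        continue
    tree = est.tree_
    for node in est.decision_path(x).indices:
        f = tree.feature[node]
        if f < 0:                          # leaf node, no split here
            continue
        thr = tree.threshold[node]
        lo, hi = bounds.setdefault(f, [-np.inf, np.inf])
        if x[0, f] <= thr:                 # went left: tightens upper bound
            bounds[f][1] = min(hi, thr)
        else:                              # went right: tightens lower bound
            bounds[f][0] = max(lo, thr)

conds = []
for f, (lo, hi) in sorted(bounds.items()):
    if np.isfinite(lo):
        conds.append(f"{names[f]} > {lo:.2f}")
    if np.isfinite(hi):
        conds.append(f"{names[f]} <= {hi:.2f}")
print("if " + " and ".join(conds) + f" then class {pred}")

The actual method additionally applies the unsupervised reduction step described above to shrink the set of trees before the rule's feature ranges are composed; the sketch omits that step.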
Trustworthy AI
The interpretation technique implemented in LionForests is intended to provide explanations for random forest classifiers, towards a trustworthy machine learning component. LionForests actively takes into consideration the internal structure of the model it explains.
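As a small illustration of that internal structure (assuming a scikit-learn random forest; the attribute names below are scikit-learn's, not part of LionForests), every tree in the ensemble exposes the split features, thresholds, and node counts an explainer can traverse:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Each estimator carries a full description of its structure: node
# count, the feature tested at each split, and the tree's depth.
for i, est in enumerate(rf.estimators_):
    t = est.tree_
    splits = int((t.feature >= 0).sum())   # leaves are marked with -2
    print(f"tree {i}: {t.node_count} nodes, {splits} splits, depth {t.max_depth}")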
GDPR Requirements
The LionForests component supports GDPR compliance (Articles 13–15) by providing a way to interpret the decisions of a random forest model in the form of rules. This addresses the "explicability" requirement of the GDPR, which imposes obligations on automated decision processes that have an impact on humans.