MANOLO_OnlineSurveyResults_Dataset4_20240731_v1
The dataset includes the results from the Online Survey activity in the framework of the project's "D1.1 Trustworthy Efficient AI for the Cloud-edge Deep Dive".
MANOLO_OnlineSurveyQuestionnaire_Dataset3_20240731_v1
The dataset includes the questions used to collect feedback from the general public through the Online Survey activity in the framework of the project's "D1.1 Trustworthy Efficient AI for the Cloud-edge Deep Dive".
MANOLO_DeskResearchResults_Dataset2_20240731_v1
The dataset includes the input collected from MANOLO partners through the Desk Research activities in the framework of the project's "D1.1 Trustworthy Efficient AI for the Cloud-edge Deep Dive".
MANOLO_DeskResearchQuestionnaire_Dataset1_20240731_v1
The dataset includes the questions used to collect feedback from the MANOLO consortium regarding the project's "D1.1 Trustworthy Efficient AI for the Cloud-edge Deep Dive".
Decision Difficulty EEG Dataset
Dataset of EEG data acquired during our experiment. Participants had to decide, with support from an AI model, what is shown in a series of noisy images, either with or without AI explanations. From the EEG data, we can infer how difficult the decision was.
GRU + SHAP - Explainable Predictive Maintenance for Irregular Multivariate Time Series
This recurrent neural network model exploits historical data measured from machine sensors to perform inference on future usage and detect possible future faults in the machine itself. Explainability metrics target sensor groups and are powered by the SHAP library.
Facebook Prophet + SHAP - Explainable Predictive Maintenance for Regular Multivariate Time Series
This regression model exploits historical data measured from machine sensors to perform inference on future usage and detect possible future faults in the machine itself. Explainability metrics target sensor groups and are powered by the SHAP library.
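The SHAP attributions used by the two predictive-maintenance assets above are based on Shapley values, which average a feature's marginal contribution over all coalitions of the other features. A minimal, self-contained sketch of the exact computation, using a toy linear wear model and hypothetical sensor names (not the MANOLO code, which relies on the SHAP library):

```python
from itertools import combinations
from math import factorial

# Toy predictive-maintenance model: predicted wear as a weighted sum of
# three sensor readings (hypothetical weights, for illustration only).
WEIGHTS = {"temperature": 0.5, "vibration": 2.0, "pressure": -0.3}

def model(x):
    """Predict machine wear from a dict of sensor readings."""
    return sum(WEIGHTS[k] * v for k, v in x.items())

def shapley_values(x, baseline):
    """Exact Shapley values: average marginal contribution of each feature
    over all coalitions, with absent features replaced by the baseline."""
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                # Shapley weight for a coalition of this size
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_f = {g: x[g] if (g in coalition or g == f) else baseline[g]
                          for g in features}
                without_f = {g: x[g] if g in coalition else baseline[g]
                             for g in features}
                total += w * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

baseline = {"temperature": 20.0, "vibration": 1.0, "pressure": 5.0}
reading = {"temperature": 80.0, "vibration": 3.5, "pressure": 5.0}
phi = shapley_values(reading, baseline)
# For a linear model, phi[f] equals weight[f] * (x[f] - baseline[f]),
# and the values sum to model(reading) - model(baseline).
```

The exhaustive loop is exponential in the number of features; libraries such as SHAP use model-specific shortcuts and sampling to make this tractable for real sensor groups.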
Audio Deepfake Detection with GradCam Saliency Map Explainability
This app identifies whether an audio file is synthetic or real and uses GradCAM saliency maps to visualize the key audio features that influenced its classification.
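Grad-CAM, as used in the audio deepfake detector above, weights a convolutional layer's feature maps by the average gradient of the class score and keeps only the positive evidence; for audio, the maps typically sit over a spectrogram. A minimal NumPy sketch of the core computation, with toy activations and gradients standing in for a real network (not the app's implementation):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's feature maps and the gradients
    of the target class score w.r.t. those maps; both shaped (K, H, W)."""
    alpha = gradients.mean(axis=(1, 2))             # per-channel weights (GAP of grads)
    cam = np.tensordot(alpha, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                      # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                       # normalize to [0, 1]
    return cam

# Toy example: 2 channels over a 4x4 (time x frequency) spectrogram patch
rng = np.random.default_rng(0)
acts = rng.random((2, 4, 4))
grads = rng.random((2, 4, 4))
heatmap = grad_cam(acts, grads)  # 4x4 saliency map in [0, 1]
```

In a real pipeline the activations and gradients would be captured from the classifier (e.g. via framework hooks) and the heatmap upsampled onto the input spectrogram.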
Unique Concept Vectors through Latent Space Decomposition
To boost interpretability with concept vectors, a reverse-engineering approach automates concept identification by analyzing the latent space of deep neural networks using Singular Value Decomposition. This framework combines factorization, latent space c...
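The SVD step this asset describes can be sketched in a few lines: stack latent activations into a matrix, center it, and take the top right-singular vectors as candidate concept directions. This is an illustrative sketch only, and the variable names are hypothetical; the actual framework layers further factorization and identification steps on top:

```python
import numpy as np

# Latent activations of 100 inputs in a 16-dimensional latent space
# (random stand-in for real network activations).
rng = np.random.default_rng(42)
Z = rng.normal(size=(100, 16))

Zc = Z - Z.mean(axis=0)                 # center before decomposing
U, S, Vt = np.linalg.svd(Zc, full_matrices=False)

concepts = Vt[:3]                       # top-3 singular directions as candidate concepts
scores = Zc @ concepts.T                # per-sample alignment with each concept
```

The rows of `concepts` are orthonormal, so each score is simply the projection of a (centered) latent vector onto that concept direction.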
L-CAM
Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism
XMANAI Hybrid model
The EU-funded XMANAI project focuses on bringing explainable AI to industry; this asset is an example of the models developed during the project.
Explainable Video Summarization
This library lists the outcomes of our research on video summarization and explainable AI-based summarization.
Explainable AI Lecture
Nowadays, Artificial Intelligence, notably Advanced Machine Learning (ML), drives scientific and economic growth worldwide. These are essentially massive ‘learning by experience/examples’ systems. However, as our tasks and the world change, such systems sho...
Smart Industry Operations
This course, offered by the University of Groningen, provides introductory knowledge and coverage of Industry 4.0 technologies and their interrelation with humans, with a specific focus on technologies such as the Internet of Things and machine learning / artificial intelligence.
Advanced Deep Learning Module
Nowadays, Artificial Intelligence, notably Advanced Machine Learning (ML), drives scientific and economic growth worldwide. These are essentially massive ‘learning by experience/examples’ systems. However, as our tasks and the world change, such systems sho...
The TAILOR Handbook of Trustworthy AI
An encyclopedia of the major scientific and technical terms related to Trustworthy Artificial Intelligence. This book (to be consolidated in the second phase of the project) represents the first-period deliverable of the TAILOR project, providing an encyclopedia of these terms.
MANOLO xSTAE: A generative framework
Style-transfer autoencoding for explainable classification, developed in collaboration with FDI for MANOLO T5.3.
STAR Booklet available online!
A series of STAR blogs for better understanding Trusted Artificial Intelligence in the Industry 5.0 era.
The Effect of Perceived Decision Difficulty on Human Overreliance on AI
We used EEG to assess how people's reliance on AI and the effectiveness of explanations depend on perceived decision difficulty.