[TMP-106] Robustness verification for Concept Drift Detection
This project explores proactive concept drift detection in data streams by monitoring changes near decision boundaries using neural network verification.
Real-world data streams are often non-stationary and subject to concept drift, where the distribution of observations changes over time. To maintain model accuracy, concept drift must be monitored so new models can be trained as needed. Traditional methods detect drift retroactively, by monitoring performance and triggering retraining after a significant drop, which delays the response.
Neural network verification determines whether a neural network is susceptible to an adversarial attack, i.e., whether a given input image can be perturbed within a given epsilon such that the output of the network changes. If it can, the input lies close to the decision boundary. When the distribution of inputs close to the decision boundary changes significantly, this indicates that concept drift is occurring, and the model can be retrained proactively, before performance drops.
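As a rough illustration of the idea, the sketch below flags inputs whose prediction can be flipped by an epsilon-bounded perturbation. It uses a single fast-gradient-sign step in PyTorch as a cheap empirical stand-in for a formal verifier (a complete verification tool would give a sound yes/no answer instead); `model`, `x`, and `epsilon` are placeholders for the deployed classifier, a batch of stream items, and the chosen perturbation budget.

```python
import torch
import torch.nn.functional as F


def is_near_boundary(model, x, epsilon):
    """Flag inputs whose prediction flips under an epsilon-bounded FGSM step."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1)
    # Push each input in the direction that most increases the loss of its own prediction.
    grad, = torch.autograd.grad(F.cross_entropy(logits, pred), x)
    with torch.no_grad():
        x_adv = x + epsilon * grad.sign()
        flipped = model(x_adv).argmax(dim=1) != pred
    return flipped  # boolean tensor: True means the input sits near the decision boundary
```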
The short-term goal of this micro-project is to define ways to a) monitor the distribution of images close to the decision boundary, and b) define control systems that can act on this signal. A key challenge is that neural network verification is computationally intensive and requires significant optimization to handle high-throughput data streams efficiently.
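One possible control scheme, sketched below under the assumption that the near-boundary probe above is available, is to track the fraction of flagged inputs in a sliding window and trigger retraining when it deviates from a reference window recorded under the current concept. The window sizes, threshold, and the `retrain` hook are illustrative placeholders, not values or components defined by the project.

```python
from collections import deque


class BoundaryDriftMonitor:
    """Signal drift when the rate of near-boundary inputs shifts noticeably."""

    def __init__(self, reference_size=500, window_size=200, threshold=0.15):
        self.reference = deque(maxlen=reference_size)  # flags gathered under the current concept
        self.window = deque(maxlen=window_size)        # most recent flags
        self.threshold = threshold                     # allowed shift in the near-boundary rate

    def update(self, near_boundary: bool) -> bool:
        """Feed one flag; return True when proactive retraining should be triggered."""
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(near_boundary)       # still filling the reference window
            return False
        self.window.append(near_boundary)
        if len(self.window) < self.window.maxlen:
            return False
        ref_rate = sum(self.reference) / len(self.reference)
        cur_rate = sum(self.window) / len(self.window)
        return abs(cur_rate - ref_rate) > self.threshold


# Hypothetical usage, combined with the robustness probe above:
# monitor = BoundaryDriftMonitor()
# for x, _ in stream:
#     if monitor.update(bool(is_near_boundary(model, x.unsqueeze(0), epsilon=0.03))):
#         retrain(model)                      # placeholder retraining hook
#         monitor = BoundaryDriftMonitor()    # reset the reference after retraining
```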
Partners
- Leiden University, Holger Hoos
- INESC TEC, João Gama, jgama@fep.up.pt