Multilayer Perceptron and Backpropagation Lecture
Nowadays, Artificial Intelligence drives scientific and economic growth worldwide. This is largely due to advances in Machine Learning (ML), notably in Deep Neural Networks (DNNs), which are essentially massive ‘learning by experience/examples’ systems. Their applications span and revolutionize almost every human activity:
- Autonomous Systems (cars, drones, vessels),
- Media Content and Art Creation (including fake data creation/detection), Social Media Analytics,
- Medical Imaging and Diagnosis,
- Financial Engineering (forecasting and analytics), Big Data Analytics,
- Broadcasting, Internet and Communications,
- Robotics/Control,
- Intelligent Human-Machine Interaction, Anthropocentric (human-centered) Computing,
- Smart Cities/Buildings and Assisted Living,
- Scientific Modeling and Analytics.
Several DNN advances and challenges hit the news almost every day, sparking discussions on AI ethics, privacy protection, and the societal impact of AI.
This lecture covers the basic concepts and architectures of the Multi-Layer Perceptron (MLP), activation functions, and the Universal Approximation Theorem. Training MLP neural networks is presented in detail: loss types, gradient descent, and error backpropagation. Common training problems are reviewed, together with their solutions, e.g., Stochastic Gradient Descent, adaptive learning rate algorithms, regularization, evaluation, and generalization methods.
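The training pipeline outlined above (forward pass, loss computation, error backpropagation, gradient-descent update) can be sketched in a few lines of NumPy. This is a minimal illustrative example, not material from the lecture: the layer sizes, learning rate, tanh/sigmoid activations, and the XOR toy dataset are all assumptions chosen for brevity.

```python
import numpy as np

# Toy dataset: XOR, a classic problem a single-layer perceptron cannot solve
# but a one-hidden-layer MLP can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5                             # learning rate (illustrative choice)
for epoch in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)         # hidden-layer activations
    out = sigmoid(h @ W2 + b2)       # network output
    loss = np.mean((out - y) ** 2)   # MSE loss

    # Backward pass (error backpropagation via the chain rule)
    d_z2 = 2 * (out - y) / len(X) * out * (1 - out)  # dL/dz2, sigmoid'
    dW2 = h.T @ d_z2
    db2 = d_z2.sum(axis=0)
    d_z1 = (d_z2 @ W2.T) * (1 - h ** 2)              # dL/dz1, tanh'
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0)

    # Gradient-descent parameter update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```

Replacing the full-batch loop with updates on randomly sampled minibatches would turn this into Stochastic Gradient Descent, the first of the training refinements mentioned above.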