Simultaneous Localization and Mapping Lecture
Our long-held dream of making machines sense and perceive (notably, see) is coming true: Computer Vision now enables diverse applications:
- Autonomous Systems Perception (cars, drones, vessels),
- Robotics Perception and Control,
- Intelligent Human-Machine Interaction,
- Anthropocentric (human-centered) Computing,
- Smart Cities/Buildings and Assisted Living.
Advances in Computer Vision, coupled with AI (notably Machine Learning and Deep Neural Networks), hit the news almost every day.
The lecture covers the essential knowledge of how robots and drones obtain the 2D and/or 3D maps they need, taking measurements with appropriate sensors that allow them to perceive their environment. Semantic mapping covers how to add semantic annotations to these maps, such as Points of Interest (POIs), roads, and landing sites. The Localization section explains how to find the 3D location of a drone or a target from sensor data, specifically using Simultaneous Localization and Mapping (SLAM). Finally, drone localization fusion describes how accuracy in localization and mapping is improved by exploiting the synergies between different sensors.
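As a minimal sketch of the sensor-fusion idea above, two independent position estimates with Gaussian noise can be combined by inverse-variance weighting (the 1D special case of a Kalman update). The sensor names, values, and variances below are illustrative assumptions, not part of the lecture material:

```python
# Minimal sketch: fusing two independent 1D Gaussian estimates.
# All sensor values and variances below are hypothetical.

def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent Gaussian estimates by inverse-variance weighting."""
    w_a = var_b / (var_a + var_b)   # more weight goes to the lower-variance sensor
    w_b = var_a / (var_a + var_b)
    fused = w_a * est_a + w_b * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)  # always <= min(var_a, var_b)
    return fused, fused_var

# Example: a GPS-like estimate (noisier) fused with a visual-odometry-like one.
gps_pos, gps_var = 10.0, 4.0   # hypothetical GPS position, variance in m^2
vo_pos, vo_var = 11.0, 1.0     # hypothetical visual-odometry position

pos, var = fuse(gps_pos, gps_var, vo_pos, vo_var)
print(pos, var)  # -> 10.8 0.8
```

The fused estimate lands closer to the lower-variance sensor, and its variance is smaller than either input's, which is the core reason fusing complementary sensors improves localization accuracy.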