Neural Semantic 3D World Modeling and Mapping Lecture
Our dream of making machines sense and perceive (notably, see) is coming true: Computer Vision now enables diverse applications:
- Autonomous Systems (cars, drones, vessels) Perception,
- Robotics Perception and Control,
- Intelligent Human-Machine Interaction,
- Anthropocentric (human-centered) Computing,
- Smart Cities/Buildings and Assisted Living.
Computer Vision advances, coupled with AI (notably Machine Learning and Deep Neural Networks), hit the news almost every day.
This lecture overviews neural semantic 3D world modeling and mapping, which underpins many applications in 3D world mapping and in attaching semantics to world maps. It covers the following topics in detail: neural disparity/depth estimation, and joint 3D scene geometry and semantics estimation. Their results are then transferred into semantic 3D world maps (e.g., semantic octomaps). Dynamic and static semantic map annotations (e.g., no-fly zones, crowd areas) can also be attached to such 3D world maps as KML documents.
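To make the first step concrete: once a neural network predicts a disparity map for a calibrated stereo pair, metric depth follows from the standard triangulation relation Z = f·B/d. The sketch below illustrates this conversion; the focal length and baseline values are hypothetical placeholders, not parameters from the lecture.

```python
import numpy as np

# Hypothetical stereo calibration values, for illustration only.
FOCAL_PX = 700.0   # focal length, in pixels
BASELINE_M = 0.12  # stereo baseline, in metres

def disparity_to_depth(disparity, focal_px=FOCAL_PX, baseline_m=BASELINE_M):
    """Convert a disparity map (pixels) to a depth map (metres) via Z = f*B/d.

    Pixels with non-positive disparity are marked as infinitely far,
    since the triangulation formula is undefined there.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```

In practice the disparity map would come from a neural stereo-matching network; the conversion itself is the same regardless of how the disparities were estimated.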
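The idea of a semantic 3D world map can be sketched with a toy stand-in for a semantic octomap: space is discretized into voxels, and each voxel accumulates per-class votes from labeled 3D points (e.g., back-projected pixels carrying semantic-segmentation labels). This is a minimal illustration, not the OctoMap library's actual API; the voxel size and class names are hypothetical.

```python
from collections import defaultdict

VOXEL_SIZE = 0.5  # voxel edge length in metres (hypothetical resolution)

def to_voxel(point):
    """Map a 3D point (x, y, z) to its integer voxel index."""
    return tuple(int(c // VOXEL_SIZE) for c in point)

class SemanticVoxelMap:
    """Toy semantic voxel map: each voxel keeps per-class vote counts."""

    def __init__(self):
        self.votes = defaultdict(lambda: defaultdict(int))

    def insert(self, point, label):
        """Register one labeled 3D observation into its voxel."""
        self.votes[to_voxel(point)][label] += 1

    def label(self, point):
        """Return the majority class of the voxel containing `point`, or None."""
        voxel_votes = self.votes.get(to_voxel(point))
        if not voxel_votes:
            return None
        return max(voxel_votes, key=voxel_votes.get)
```

A real semantic octomap additionally stores occupancy probabilities in an octree and fuses label evidence probabilistically, but the vote-per-voxel picture captures the core of how joint geometry and semantics estimates end up in the map.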
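Finally, semantic map annotations such as no-fly zones can be serialized as KML, since KML represents geographic regions as polygons of longitude/latitude coordinates. The helper below is a hypothetical sketch that emits a minimal KML document for one polygonal zone; real annotation pipelines would typically use a KML library and richer styling.

```python
def no_fly_zone_kml(name, lon_lat_ring):
    """Build a minimal KML document with one named polygon annotation.

    `lon_lat_ring` is a closed ring of (longitude, latitude) pairs,
    i.e. the first and last vertex should coincide.
    """
    coords = " ".join(f"{lon},{lat},0" for lon, lat in lon_lat_ring)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        f'  <Placemark><name>{name}</name>\n'
        '    <Polygon><outerBoundaryIs><LinearRing>\n'
        f'      <coordinates>{coords}</coordinates>\n'
        '    </LinearRing></outerBoundaryIs></Polygon>\n'
        '  </Placemark>\n'
        '</kml>'
    )
```

Such a document can be attached to the 3D world map as metadata and opened in any KML-aware viewer (e.g., Google Earth) to visualize the annotated zone.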