Monocular-SLAM algorithm and architecture for FPGA acceleration

Research group on Embedded Architecture for Multisensing

Monocular SLAM (Simultaneous Localization and Mapping) is the problem of constructing a 3D map of an environment while simultaneously tracking a single moving camera within that map. It is the keystone of several useful applications: augmented reality, robotic systems, autonomous vehicles, etc.

SLAM with a monocular camera
In the last 30 years, the SLAM community has made astonishing progress in robustness and accuracy under the monocular formulation. Unfortunately, this accuracy and robustness came at the price of high computational cost. As a result, in real-world applications most of the available computational resources are spent on the SLAM process itself, leaving little capacity for other tasks such as control, navigation, and path planning. In this project, we address this problem via a smart-camera formulation. We believe a monocular-SLAM smart camera is highly promising, since it would free all of the host system's computational resources. Furthermore, it would make it possible to fuse cooperative information from several cameras and to integrate other image-understanding algorithms inside the camera fabric, yielding a better visual representation of the world.
Feature Extraction
Padawan DREAM
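To make the feature-extraction stage concrete, below is a minimal NumPy-only sketch of a Harris-style corner response, a common first step in monocular-SLAM front ends for detecting trackable image features. This is an illustrative example, not the group's FPGA implementation; the function name `harris_response` and the 3x3 box smoothing are assumptions chosen for brevity.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Compute a Harris corner-response map for a grayscale image.

    High positive values indicate corners (good features to track),
    negative values indicate edges, and values near zero indicate
    flat regions.
    """
    img = img.astype(np.float64)

    # Image gradients via central finite differences (borders left at zero).
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):
        # 3x3 box filter implemented with edge padding and shifted views.
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    # Smoothed structure-tensor components.
    Sxx = box3(Ix * Ix)
    Syy = box3(Iy * Iy)
    Sxy = box3(Ix * Iy)

    # Harris response: det(M) - k * trace(M)^2 per pixel.
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr
```

In a full pipeline, the response map would be thresholded and non-maximum-suppressed to yield a sparse set of keypoints; in practice, smart-camera designs often favor FAST-style binary tests over Harris because they map more naturally onto FPGA fabric.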