Robot localization using visual information

1 Dec 2017 – 30 Nov 2019

Numerous automotive and small aircraft companies have announced promising new applications in the field of autonomous vehicles. Alongside self-driving cars, in the near future small micro aerial vehicles could be used for goods delivery (Amazon Prime Air, DHL, Alibaba, Matternet, Swiss Post), in healthcare (Matternet, Flirtey, Wingtra, RedLine), for various inspection and surveillance tasks (SenseFly, Skycatch), or could be deployed at accident sites as remote-controlled first-responder devices (Drone Aventures, Microdrones). For these technologies to one day become reality, several scientific and technical questions must be answered. In urban environments, the GPS signal is often shadowed by surrounding buildings or is completely unavailable, so it cannot be relied on for accurate localization. In addition, safe driving and flight require semantic recognition and understanding of the environment.

Our research plan focuses on an important issue within this field: designing, implementing, and testing algorithms that achieve robust positioning of vehicles in urban environments using only monocular camera images. We plan to reach this goal by developing new machine learning methods and by using dense, detailed 3D models. Our other goal is to research robust Simultaneous Localization and Mapping (SLAM) algorithms that allow an autonomous vehicle to explore previously unknown environments even when the measurements are affected by significant errors.