Change detection and event recognition with fusion of images and Lidar measurements

 
2 Oct 2017 – 30 Sep 2019
External identifier
NKFIH KH-125681
 

Key aspects of machine-based environment interpretation include the automatic detection and recognition of objects, obstacle avoidance in navigation, and object tracking in certain applications. Integrating visual sensors, such as video cameras, with sensors providing direct 3D spatial measurements, such as Lidars, may offer various benefits (high spatial and temporal resolution, direct distance measurement, color or illumination invariance). However, fusing the different data modalities often raises sensor-specific challenges. Since the data characteristics of recently introduced 3D laser scanners differ significantly from those of earlier sensors, the options for data fusion have not yet been widely exploited in the literature. At the same time, the results are interesting not only from a scientific point of view, but also provide useful information on possible future applications, which may benefit hardware manufacturers.

The main goal of the project is to combine the newest 3D sensors with traditional high-resolution cameras in order to develop new pattern recognition, scene understanding, event and change detection methodologies, to extend the validity of existing methods, or to make them more accurate. Although sensors other than optical cameras (radars, Lidars, sonars) are suitable to a certain degree (depending on the sensor-to-object distance) for detecting obstacles and field objects, fusing them with image features can significantly enhance their performance. Fusing the implicit depth maps and 3D spatial data provided by active sensors, such as Lidars, with camera images aligned to the point clouds may provide particular benefits in object detection and recognition. Such fusion processes can rely not only on standard registration methodologies between colored camera images and Lidar point clouds, but may also use image-content-based feature maps (saliency maps, relative depth maps, focus regions, etc.) integrated with the point clouds. While the former can efficiently contribute to recognition and classification tasks, the latter may facilitate the solution of obstacle detection and avoidance problems, as illustrated by the sketch below.
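As an illustration of the registration-based fusion mentioned above, the following minimal Python sketch projects a Lidar point cloud into a camera image with a pinhole model and builds a sparse depth map. The calibration parameters (R, t, K) and all function names are assumptions introduced here for illustration only; they are not part of the project's published methodology.

    import numpy as np

    def project_lidar_to_image(points_xyz, R, t, K, image_shape):
        """Project Nx3 Lidar points into the camera image plane.

        Returns pixel coordinates and depths of the points that fall inside
        the image, plus a boolean mask over the original points.
        (Illustrative sketch: R, t are the assumed Lidar-to-camera extrinsics,
        K the camera intrinsic matrix, all obtained from offline calibration.)
        """
        # Rigid transform from the Lidar frame to the camera frame.
        pts_cam = points_xyz @ R.T + t                 # (N, 3)
        depths = pts_cam[:, 2]

        # Pinhole projection; points behind the camera are filtered below.
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = pts_cam @ K.T                       # (N, 3)
            u = proj[:, 0] / depths
            v = proj[:, 1] / depths

        h, w = image_shape[:2]
        valid = (depths > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        return u[valid], v[valid], depths[valid], valid

    def sparse_depth_map(points_xyz, R, t, K, image_shape):
        """Splat projected Lidar depths onto a camera-resolution depth image."""
        u, v, d, _ = project_lidar_to_image(points_xyz, R, t, K, image_shape)
        depth = np.zeros(image_shape[:2], dtype=np.float32)
        # Nearest-pixel splatting; a full pipeline would additionally handle
        # occlusions and interpolate the sparse measurements.
        depth[v.astype(int), u.astype(int)] = d
        return depth

In such a setup, the projected pixel coordinates can be used to sample image colors or content-based feature-map values (e.g. saliency) for each 3D point, while the resulting sparse depth map can serve as an extra input channel for image-based detectors.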

Department

Manager
Csaba Benedek
Email
benedek.csaba@sztaki.hun-ren.hu
Phone
+36 1 279 6097
+36 1 279 7194