The project aims to process the data of novel 3D sensors (e.g. Microsoft Kinect, Lidar, MRI, CT), available in a wide range of application fields, and to fuse them with 2D image modalities in order to build saliency models that automatically and efficiently emphasize visually dominant regions. Such models not only tighten the region of interest for further image processing steps, but also facilitate and increase the efficiency of segmentation in application fields where 3D sensor data are available.
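The fusion idea can be illustrated by a minimal sketch that combines a simple global color-contrast saliency cue with a depth-based nearness prior. The weighting scheme and the function name are illustrative assumptions, not the project's actual model; the inputs are assumed to be an RGB image and an aligned depth map as NumPy arrays.

```python
import numpy as np

def rgbd_saliency(rgb, depth):
    """Toy RGB-D saliency sketch: global color contrast modulated by a
    nearness prior from depth (illustrative only)."""
    img = rgb.astype(float)
    # color contrast: distance of each pixel from the mean image color
    contrast = np.linalg.norm(img - img.mean(axis=(0, 1)), axis=2)
    # depth prior: map depth to [0, 1]; nearer surfaces weighted higher
    d = depth.astype(float)
    d = (d - d.min()) / (np.ptp(d) + 1e-9)
    sal = contrast * (1.0 - d)
    m = sal.max()
    return sal / m if m > 0 else sal
```

A real model would replace both cues with learned or multi-scale features, but the structure (per-pixel 2D cue times a 3D prior) stays the same.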
Numerous automotive and small-aircraft companies have announced promising new applications in the field of autonomous vehicles. Alongside self-driving cars, small micro aerial vehicles could in the near future be used for goods delivery (Amazon Prime Air, DHL, Alibaba, Matternet, Swiss Post), in healthcare (Matternet, Flirtey, Wingtra, RedLine), for various inspection and surveillance tasks (SenseFly, Skycatch), or be deployed at accident sites as remote-controlled first aid/responder devices (Drone Aventures, Microdrones).
The aim of the project is to develop an image fusion and processing method that uses images from cameras of different modalities to track various objects, taking into account the needs of border-surveillance end-users.
Key aspects of machine-based environment interpretation include the automatic detection and recognition of objects, obstacle avoidance in navigation, and object tracking in certain applications. Integrating visual sensors, such as video cameras, with sensors that provide direct 3D spatial measurements, such as Lidar, can offer various benefits (high spatial and temporal resolution, distance measurement, color or illumination invariance). However, fusing the different data modalities often poses sensor-specific challenges.
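A basic building block of such camera-Lidar fusion is projecting the 3D point cloud into the camera image via a pinhole model, so that color and depth can be associated per pixel. The sketch below assumes known extrinsics (rotation `R`, translation `t` from the Lidar frame to the camera frame) and intrinsics `K`; the function name and calibration conventions are assumptions for illustration.

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project Nx3 Lidar points into image coordinates with a pinhole
    camera model. Returns (u, v) pixel coordinates and a mask of points
    lying in front of the camera (illustrative sketch)."""
    cam = points @ R.T + t           # Lidar frame -> camera frame
    in_front = cam[:, 2] > 0         # keep points with positive depth
    proj = cam @ K.T                 # apply camera intrinsics
    uv = proj[:, :2] / proj[:, 2:3]  # perspective divide by depth
    return uv, in_front
```

In practice the projected points would additionally be clipped to the image bounds and undistorted with the lens model before color values are sampled.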
In this project we address a new and very important issue: the observation of small backcountry wetland areas surrounded by land of other types, which host important species and deliver essential ecosystem services and biodiversity. Although each patch is small on its own, together they contribute a large share of the total wetland cover area, so their protection and mapping is a pressing need.
Recent Simultaneous Localization and Mapping (SLAM) algorithms are essentially developed for environments that are stable in time; dynamic scenes introduce a strong bias into the localization models. We will therefore extend conventional SLAM with a statistical optimization of the models of the changing parts and of their neighborhood connections; this leads to a semantic-connectedness analysis of the models, which requires good classification methods for the scalable cluster structure.
State-of-the-art 3D sensors have revolutionized the acquisition of environmental information. The 3D vision systems of self-driving vehicles can be used, apart from safe navigation, for real-time mapping of the environment and for detecting and analyzing static (traffic signs, power lines, vegetation, street furniture) and dynamic (traffic flow, crowd gathering, unusual events) scene elements.
The MPLab laboratory is involved in the joint SCOPIA project, in which the task of our colleagues is to develop accurate image-registration techniques for multi-spectral images, with the goal of predicting the chances of a successful embryo transfer using a minimally invasive endoscopic device.
Based on the APIS project, with extended goals: "To study, define, analyse a new system concept for implementing and demonstrating ISAR imaging capability in a plug-in multistatic array passive radar finalized to target recognition."