The Industry 4.0 National Technology Platform was established under the leadership of the Institute for Computer Science and Control (SZTAKI) of the Hungarian Academy of Sciences, with the participation of research institutions, companies, universities and professional organizations based in Hungary, and with the full support and commitment of the Government of Hungary, in particular the Ministry of National Economy.
Higher education has to keep pace with global market demand for ICT (Information and Communications Technology) skills and for an overall understanding of the complexity of 21st-century industries. Companies on the global market have to deal effectively with the constant, parallel evolution of products, processes and production systems, which can be monitored, developed and upgraded more easily using digital applications based on the digital twin concept and on Virtual Reality (VR) and Augmented Reality (AR) simulations.
The EOSC-hub project creates the integration and management system of the future European Open Science Cloud that delivers a catalogue of services, software and data from the EGI Federation, EUDAT CDI, INDIGO-DataCloud and major research e-infrastructures. This integration and management system (the Hub) builds on mature processes, policies and tools from the leading European federated e-Infrastructures to cover the whole life-cycle of services, from planning to delivery.
The goal of the project is to reduce the death rate of neonatal and prenatal infants and to increase their chances of survival by developing vision-based, non-contact devices that monitor physiological signals such as pulse rate, breathing rate, blood oxygenation, activity and body temperature using remote photoplethysmographic methods.
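The core idea of remote photoplethysmography is that subtle, pulse-synchronous color changes of the skin can be measured by an ordinary camera. A minimal sketch of such a pulse-rate estimator is given below; the function name, the region-of-interest handling and the 0.7–4 Hz physiological band are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

def estimate_pulse_rate(frames, fps, roi=None):
    """Estimate pulse rate (bpm) from a stack of RGB frames via rPPG.

    frames: array of shape (T, H, W, 3); roi: optional (y0, y1, x0, x1).
    Hypothetical sketch, not the project's actual method.
    """
    if roi is not None:
        y0, y1, x0, x1 = roi
        frames = frames[:, y0:y1, x0:x1, :]
    # Mean green-channel intensity per frame: blood volume changes
    # modulate skin reflectance most strongly in the green band.
    signal = frames[..., 1].mean(axis=(1, 2))
    signal = signal - signal.mean()
    # Find the dominant frequency within the physiological band.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)  # roughly 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak
```

Real systems add skin detection, motion compensation and temporal filtering on top of this spectral-peak step, but the band-limited FFT peak is the common starting point.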
The project aims to process data from novel 3D sensors (e.g. Microsoft Kinect, Lidar, MRI, CT), which are available in a wide range of application fields, and to fuse them with 2D image modalities to build saliency models that automatically and efficiently emphasize visually dominant regions. Such models not only tighten the region of interest for further image-processing steps, but also facilitate and increase the efficiency of segmentation in application fields where 3D sensor data are available.
Numerous automotive and small aircraft companies have announced promising new applications in the field of autonomous vehicles. Alongside self-driving cars, in the near future small-size micro aerial vehicles could be used for goods delivery (Amazon Prime Air, DHL, Alibaba, Matternet, Swiss Post), in healthcare (Matternet, Flirtey, Wingtra, RedLine), to carry out various inspection and surveillance tasks (SenseFly, Skycatch), or deployed at accident scenes as remote-controlled first-aid/first-responder devices (Drone Adventures, Microdrones).
The aim of the project is to develop an image fusion and processing method that uses images of cameras with different modalities to track various objects, taking into account the needs of border surveillance end-users.
Key aspects of machine-based environment interpretation include the automatic detection and recognition of objects, obstacle avoidance in navigation, and object tracking in certain applications. Integrating visual sensors, such as video cameras, with sensors providing direct 3D spatial measurements, such as Lidars, may offer various benefits (high spatial and temporal resolution; invariance to distance, color or illumination). However, fusing the different data modalities often raises sensor-specific challenges.
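A basic step in any camera–Lidar fusion pipeline is registering the two modalities by projecting 3D points into the image plane. The sketch below illustrates this under standard assumptions (a pinhole camera with known intrinsics and extrinsics); the function name and parameters are hypothetical, not the project's actual interface.

```python
import numpy as np

def project_lidar_to_image(points, K, R, t):
    """Project Nx3 Lidar points (sensor frame) into pixel coordinates.

    K: 3x3 camera intrinsics; R (3x3), t (3,): Lidar-to-camera extrinsics.
    Returns (uv, mask): pixel coordinates of points in front of the
    camera and the boolean mask selecting them.
    Hypothetical pinhole-model sketch.
    """
    cam = points @ R.T + t          # Lidar frame -> camera frame
    in_front = cam[:, 2] > 0        # keep points in front of the camera
    cam = cam[in_front]
    uv = cam @ K.T                  # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]     # perspective division by depth
    return uv, in_front
```

Once each surviving Lidar point has a pixel coordinate, its range measurement can be attached to the corresponding image region, which is where the sensor-specific challenges (sparsity of the point cloud, occlusions, differing fields of view) appear in practice.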
The mission of CloudiFacturing is to optimize production processes and producibility using Cloud/HPC-based modelling and simulation, leveraging online factory data and advanced data analytics, thus contributing to the competitiveness and resource efficiency of manufacturing SMEs and ultimately fostering the vision of Factories 4.0 and the circular economy. CloudiFacturing will empower over 60 European organizations (many of them manufacturing SMEs) and will support about 20 cross-national application experiments, selected primarily via two Open Calls.
Digital holographic microscopy can be used efficiently for monitoring sparse samples. From a single recorded hologram, the whole illuminated volume can be reconstructed by numerical simulation of wave propagation, so several objects at different depths can be recovered at once. This avoids the small depth-of-field constraint of conventional microscopes: a volume up to 200 times larger can be observed from a single exposure.
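Numerical refocusing of a hologram is commonly done with the angular spectrum method: the recorded field is Fourier-transformed, multiplied by the free-space transfer function for the chosen depth, and transformed back. The sketch below illustrates this under standard assumptions (square pixels, monochromatic illumination); the function name and parameters are illustrative, not the project's actual implementation.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2D field by distance z (all lengths in metres)
    using the angular spectrum method; refocusing a hologram to a chosen
    depth amounts to choosing z. Hypothetical sketch.
    """
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)          # spatial frequencies (1/m)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; frequencies beyond 1/wavelength are
    # evanescent and are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Reconstructing objects at several depths then reduces to calling this routine with different values of z and examining the refocused amplitude at each plane.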