The Industry 4.0 National Technology Platform was established under the leadership of the Institute for Computer Science and Control (SZTAKI), Hungarian Academy of Sciences, with the participation of research institutions, companies, universities, and professional organizations based in Hungary, and with the full support and commitment of the Government of Hungary, in particular the Ministry of National Economy.
The aim of the present study is to develop and evaluate computer-based methods for the automated, improved detection and classification of colorectal lesions, especially polyps. To this end, pit-pattern and vascularization features of up to 1,000 polyps with a size of 10 mm or smaller will first be recorded with zoom BLI colonoscopy and stored in our web-based image database. These polyps will be imaged and subsequently removed for histological analysis. The polyp images are then analyzed by a newly developed deep-learning algorithm.
MTA SZTAKI participated in the development and implementation of the Digital Repository on Water Management, whose eLearning part covers 40 subjects. Besides the textual parts of the training materials, it contains several thousand multimedia elements (pictures, videos, and animations) and a large number of mathematical formulas, making fully online learning possible. The training materials comply with the international SCORM standard to support reusability.
FameLab is the world’s leading science communication competition. It spotlights the science communicators of tomorrow, who can show off their area of expertise in a truly engaging way. Cheltenham Festivals launched the first FameLab in 2005 and in 2007 partnered with the British Council to take the model worldwide. Since then, more than 9,000 scientists from all around the globe have participated.
The goal of the project is to reduce the mortality of premature and newborn infants and to increase their chances of survival by developing vision-based, non-contact devices that monitor physiological signals such as pulse rate, breathing rate, blood oxygenation, activity, and body temperature using remote photoplethysmographic (rPPG) methods.
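The core rPPG idea, recovering the pulse from tiny periodic intensity changes of the skin as seen by a camera, can be sketched as below. This is an illustrative minimal example, not the project's actual pipeline; the function name, the green-channel choice, and the 0.7-4 Hz heart-rate band are assumptions.

```python
import numpy as np

def estimate_pulse_rate(green_means, fps):
    """Estimate pulse rate (BPM) from a time series of mean green-channel
    intensities over a skin region, by picking the dominant frequency
    in a plausible heart-rate band (0.7-4 Hz, i.e. 42-240 BPM)."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)       # restrict to heart-rate band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                           # Hz -> beats per minute

# Synthetic check: 10 s of a 1.2 Hz (72 BPM) pulse plus noise, sampled at 30 fps
fps = 30
t = np.arange(0, 10, 1.0 / fps)
rng = np.random.default_rng(0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(len(t))
bpm = estimate_pulse_rate(trace, fps)
```

In practice the raw trace would come from face detection and skin-region averaging over video frames; band-pass filtering and motion compensation are the hard parts the project addresses.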
The EOSC-hub project creates the integration and management system of the future European Open Science Cloud that delivers a catalogue of services, software and data from the EGI Federation, EUDAT CDI, INDIGO-DataCloud and major research e-infrastructures. This integration and management system (the Hub) builds on mature processes, policies and tools from the leading European federated e-Infrastructures to cover the whole life-cycle of services, from planning to delivery.
Numerous automotive and small-aircraft companies have announced promising new applications in the field of autonomous vehicles. Alongside self-driving cars, in the near future small micro aerial vehicles could be used for goods delivery (Amazon Prime Air, DHL, Alibaba, Matternet, Swiss Post), in healthcare (Matternet, Flirtey, Wingtra, RedLine), to carry out various inspection and surveillance tasks (SenseFly, Skycatch), or be deployed at accident scenes as remote-controlled first-responder devices (Drone Aventures, Microdrones).
The project aims to process data from novel 3D sensors (e.g. Microsoft Kinect, Lidar, MRI, CT) available in a wide range of application fields and to fuse them with 2D image modalities to build saliency models that automatically and efficiently emphasize visually dominant regions. Such models not only narrow the region of interest for subsequent image-processing steps, but also facilitate and improve segmentation in application fields where 3D sensor data are available.
The aim of the project is to develop an image fusion and processing method that uses images from cameras of different modalities to track various objects, taking into account the needs of border-surveillance end-users.
Key aspects of machine-based environment interpretation include the automatic detection and recognition of objects, obstacle avoidance during navigation, and, in certain applications, object tracking. Integrating visual sensors such as video cameras with sensors providing direct 3D spatial measurements, such as Lidar, offers complementary benefits (high spatial and temporal resolution, direct distance measurement, invariance to color or illumination). However, fusing the different data modalities often raises sensor-specific challenges.
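A recurring step in camera-Lidar fusion is projecting 3D points into the image plane so that range and color measurements can be associated. The sketch below assumes a standard pinhole model; the calibration values are hypothetical, not those of any particular sensor rig.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project 3D Lidar points into a camera image using the extrinsic
    rotation R and translation t (Lidar frame -> camera frame) and the
    pinhole intrinsic matrix K. Returns pixel coordinates and depths
    for the points in front of the camera."""
    pts_cam = points_lidar @ R.T + t        # transform into the camera frame
    in_front = pts_cam[:, 2] > 0            # keep points with positive depth
    pts_cam = pts_cam[in_front]
    uvw = pts_cam @ K.T                     # apply the intrinsics
    pixels = uvw[:, :2] / uvw[:, 2:3]       # perspective division
    return pixels, pts_cam[:, 2]

# Hypothetical calibration: identity extrinsics, 640x480 camera, f = 500 px
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
points = np.array([[0.0, 0.0, 10.0],       # straight ahead -> image center
                   [1.0, 0.0, 10.0]])      # 1 m to the right at 10 m depth
px, depth = project_lidar_to_image(points, R, t, K)
# px[0] ≈ (320, 240); px[1] ≈ (370, 240)
```

Once points carry pixel coordinates, each Lidar return can be colored from the image, and image regions can be assigned depth, which is the basis for the fusion tasks described above.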