Partner projects
Project KI4HE
The KI4HE project is funded under the R&D programme “Informations- und Kommunikationstechnik” of the Free State of Bavaria and managed by the project agency VDI/VDE Innovation + Technik GmbH.
The research partner DLR and the industrial partner Roboception GmbH develop and evaluate the technologies required for the safe, autonomous transport of food in crisis areas.
The technical work packages in the KI4HE project extend four of the five technology work packages of the AHEAD application, while a clear demarcation between the two efforts is maintained.
As part of the project, the partner Roboception GmbH will extend its rc_visard-based visual odometry towards odometry with 360° coverage. This odometry is complemented by a novel 360° 3D environment model, which enables AI-based semantic annotation as well as interfaces for semi-autonomous operation under remote control.
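As an illustration of what a semantically annotated 3D environment model can look like under the hood, the following Python sketch maintains a voxel grid with majority-voted labels fed by labeled point clouds. It is a minimal sketch under assumed parameters (voxel size, label voting); the actual KI4HE model is not specified in this text.

```python
# Illustrative sketch only: a minimal voxel-based 3D environment model with
# per-voxel semantic labels, one conceivable backing structure for a
# 360-degree model. All names and parameters here are hypothetical.
from collections import defaultdict
import numpy as np

class SemanticVoxelMap:
    def __init__(self, voxel_size=0.1):  # assumed 10 cm resolution
        self.voxel_size = voxel_size
        # voxel index -> {label: hit count}; majority vote gives the label
        self.voxels = defaultdict(lambda: defaultdict(int))

    def integrate(self, points_world, labels):
        """Fuse a labeled point cloud (already in the world frame) into the map."""
        idx = np.floor(points_world / self.voxel_size).astype(int)
        for key, label in zip(map(tuple, idx), labels):
            self.voxels[key][label] += 1

    def label_of(self, point):
        """Majority-vote semantic label of the voxel containing `point`."""
        key = tuple(np.floor(np.asarray(point) / self.voxel_size).astype(int))
        counts = self.voxels.get(key)
        return max(counts, key=counts.get) if counts else None

# Usage: clouds from several cameras covering 360 degrees are integrated one
# after another, each transformed into a common world frame beforehand.
m = SemanticVoxelMap()
cloud = np.array([[1.02, 0.50, 0.00], [1.07, 0.51, 0.02]])
m.integrate(cloud, labels=["road", "road"])
print(m.label_of([1.05, 0.5, 0.0]))  # -> "road"
```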
The partner DLR is represented in the KI4HE project by two institutes, the Institute of Robotics and Mechatronics (DLR-RM) and the Institute of Communications and Navigation (DLR-KN). DLR-RM acts as project manager and participates in all work packages, focusing on environment perception, semantic annotation and teleoperation; DLR-KN focuses on localization and the associated intelligent GNSS fusion and processing.
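To make the localization task concrete, the following Python sketch fuses relative odometry increments with absolute GNSS fixes using a textbook linear Kalman filter. All noise parameters are assumed placeholders; the project's actual “intelligent GNSS fusion” is not described in this text.

```python
# Illustrative sketch only: a 2D linear Kalman filter that dead-reckons with
# odometry and corrects with GNSS position fixes (measurement matrix H = I).
import numpy as np

class GnssOdomFilter:
    def __init__(self):
        self.x = np.zeros(2)        # state: position (x, y) in a local frame
        self.P = np.eye(2) * 10.0   # covariance, large initial uncertainty
        self.Q = np.eye(2) * 0.05   # assumed odometry (process) noise per step
        self.R = np.eye(2) * 2.0    # assumed GNSS measurement noise

    def predict(self, delta_odom):
        """Dead-reckoning step: apply a relative odometry increment."""
        self.x = self.x + delta_odom
        self.P = self.P + self.Q

    def update_gnss(self, z):
        """Correction step: blend in an absolute GNSS position fix."""
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P

f = GnssOdomFilter()
f.predict(np.array([1.0, 0.0]))       # vehicle drove ~1 m east (odometry)
f.update_gnss(np.array([1.3, 0.1]))   # GNSS reports a slightly different fix
print(f.x, np.diag(f.P))              # fused estimate and reduced variance
```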
The associated partner “WFP Incubator Munich” supports the project by specifying the application scenario and can play a major role for KI4HE, especially in the later exploitation of the results.
The project addresses the following work packages:
- Advanced Environment Modeling
- Advanced Localization
- AI-Based Semantic Annotation
- Telerobotics with Haptic Reproduction
Project MaiSHU
The MaiSHU project is likewise funded under the R&D programme “Informations- und Kommunikationstechnik” of the Free State of Bavaria and managed by the project agency VDI/VDE Innovation + Technik GmbH.
The research partners DLR-RM and Blickfeld GmbH are jointly investigating the use of innovative solid-state laser scanning systems (LiDAR). Because they dispense with moving parts, such LiDARs are more cost-effective and more robust against vibrations, which makes them well suited to the application area outlined. In addition to the complementarity of the laser sensor to the existing camera systems, aspects such as modeling, calibration, registration and the integration of the systems into a simulation environment are examined, and the 360° laser-based environment perception is validated in tests.
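To illustrate the registration aspect mentioned above, the sketch below computes the closed-form rigid alignment (Kabsch/SVD step) that sits at the core of ICP-style scan registration. Correspondences between the two clouds are assumed known for the sketch; a real pipeline adds correspondence search and iteration.

```python
# Illustrative sketch only: one least-squares rigid-alignment step of the kind
# used inside ICP to register a LiDAR scan against a reference cloud.
import numpy as np

def rigid_align(src, dst):
    """Rotation R and translation t minimizing ||dst - (R @ src + t)||."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Usage with matched pairs: recover a known 10-degree yaw and a translation.
src = np.random.rand(50, 3)
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 0.0])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true, atol=1e-6), t)        # -> True [ 0.5 -0.2  0. ]
```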
The research partners DLR-RM and SENSODRIVE GmbH are investigating the modalities of an extended human-machine interface that, by means of intelligent algorithms and “confidence” information derived from the entire processed sensor data, offers the teleoperator a smooth transition via haptic and visual interfaces from direct remote control up to partially autonomous or augmented assistance functions. Extending access to the vehicle's gear shift and switch panels enables operation at higher speeds.
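A minimal sketch of such confidence-based arbitration could look as follows: the operator command and an assistance command are blended by a scalar perception confidence, and a simple force cue pulls the input device toward the executed command. The blending law, gains and function names are assumptions for illustration, not the MaiSHU design.

```python
# Illustrative sketch only: linear arbitration between direct remote control
# and assistance, driven by a scalar confidence, plus a basic haptic cue.
import numpy as np

def blend_command(operator_cmd, assist_cmd, confidence):
    """High confidence hands more authority to the assistance function.

    confidence in [0, 1]; 0 = pure direct remote control,
    1 = fully autonomous / augmented assistance.
    """
    a = float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - a) * np.asarray(operator_cmd) + a * np.asarray(assist_cmd)

def haptic_force(operator_cmd, blended_cmd, gain=5.0):
    """Force feedback pulling the input device toward the executed command."""
    return gain * (np.asarray(blended_cmd) - np.asarray(operator_cmd))

op = np.array([0.8, 0.0])    # operator: steer hard left, hold speed
ai = np.array([0.2, -0.1])   # assistance: gentler curve, slow down
cmd = blend_command(op, ai, confidence=0.7)
print(cmd, haptic_force(op, cmd))
```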
The research partners DLR-DFD, specifically the ZKI (Center for Satellite-Based Crisis Information), and the WFP are developing methods for extending global mission planning as well as three-dimensional situation representation and assessment. For this purpose, high-resolution satellite images and local drone data are displayed in a 3D situation picture in the global mission operations center (GMOC) and, together with other collected geographic information, communicated to the local mission operations center (LMOC) in the form of a “dashboard” to support the operator. By combining various data modalities, the confidence in the planned route is increased in advance. Planning, pre-evaluation and live evaluation are continuously supported by intelligent algorithms, e.g. AI-based flood detection, as well as by the operational expert knowledge of the WFP. Three-dimensional processing of all available geodata after the vehicle mission allows a detailed assessment and evaluation, both in retrospect and for preparing new missions and improving the methods used.
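As a concrete, classical stand-in for the flood detection mentioned above (the project's AI method is not described in this text), the following sketch derives a water mask from the normalized difference water index (NDWI) over green and near-infrared bands; band arrays and the threshold are assumed inputs.

```python
# Illustrative sketch only: water masking via NDWI = (green - NIR) / (green + NIR),
# a classical index-based technique, not the project's AI-based detector.
import numpy as np

def water_mask(green, nir, threshold=0.2):
    """Boolean mask of likely open water from green and near-infrared bands."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    ndwi = (green - nir) / np.maximum(green + nir, 1e-9)  # avoid div by zero
    return ndwi > threshold

# Usage on a toy 2x2 "scene": water reflects green strongly and NIR weakly.
green = np.array([[0.30, 0.05], [0.28, 0.06]])
nir   = np.array([[0.05, 0.30], [0.06, 0.29]])
print(water_mask(green, nir))  # True where water is likely
```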
The methodological and technological developments are tested in continuous simulations and evaluated in a joint demonstration.
The work is divided into the following work packages:
- Simulation
- Diversity for robust perception and navigation
- Artificial intelligence for shared autonomy
- Human-machine interface, control and visualization
- Resource planning, operation and evaluation using a multimodal 3D situation map