CMU researchers train autonomous drones using cross-modal simulated data

A novel method developed by Carnegie Mellon University researchers allows drones to learn perception and action separately.

The two-stage approach bridges the “simulation-to-reality gap”, creating a way to safely deploy drones trained entirely on simulated data to navigate real-world courses.
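The article stops short of implementation details, but the two-stage idea can be illustrated with a minimal sketch. Assuming PyTorch, with all names, layer sizes, and the reconstruction objective chosen purely for illustration (not the researchers' actual method), stage one learns a compact latent representation of simulated camera frames, with no control labels involved:

```python
import torch
import torch.nn as nn

# Hypothetical stage 1: learn perception on its own. An autoencoder
# compresses simulated 3x64x64 camera frames into a small latent vector;
# no control labels are involved, so perception stays decoupled from action.
class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.net(img)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def perception_step(sim_frames: torch.Tensor) -> float:
    """One reconstruction step on a batch of simulated frames."""
    z = encoder(sim_frames)
    loss = nn.functional.mse_loss(decoder(z), sim_frames)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because this stage never sees control labels, the perception model can be trained, evaluated, or adapted on its own before any control learning begins.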

Rogerio Bonatti, a doctoral student in the School of Computer Science’s Robotics Institute, says: “Typically drones trained on even the best photorealistic simulated data will fail in the real world because the lighting, colors and textures are still too different to translate…”

Bonatti wants to push current technology to approach a human’s ability to interpret environmental cues.

He says: “Most of the work on autonomous drone racing so far has focused on engineering a system augmented with extra sensors and software with the sole aim of speed.

“Instead, we aimed to create a computational fabric, inspired by the function of a human brain, to map visual information to the correct control actions going through a latent representation.”
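Continuing the hypothetical sketch above, the second stage can be pictured as freezing the perception encoder and training only a small policy head that maps the latent vector to control actions. Behavior cloning against expert velocity commands recorded in simulation is assumed here, since the article does not name the training objective, and the four-component action is likewise an assumption:

```python
# Hypothetical stage 2, continuing the sketch above: freeze the stage-1
# encoder and train only a small policy head that maps the latent vector
# to four control outputs (assumed here: vx, vy, vz, yaw rate), supervised
# by expert actions recorded in simulation (behavior cloning).
for p in encoder.parameters():
    p.requires_grad_(False)

policy = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),  # 16 matches the encoder's latent_dim
    nn.Linear(64, 4),
)
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def control_step(sim_frames: torch.Tensor, expert_actions: torch.Tensor) -> float:
    """One behavior-cloning step: frame -> latent -> action vs. the expert."""
    with torch.no_grad():
        z = encoder(sim_frames)  # perception is fixed; only control learns
    loss = nn.functional.mse_loss(policy(z), expert_actions)
    policy_opt.zero_grad()
    loss.backward()
    policy_opt.step()
    return loss.item()
```

In a layout like this, only the latent vector crosses the stage boundary, so the control policy never touches raw pixels; that separation is what the quote describes as mapping visual information to control actions through a latent representation.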