Transferring the content of one video into the style of another relies on a cycle-GAN, a type of artificially intelligent mimicry that creates images by studying consistencies across examples and transforming one into the other.
New technology developed at Carnegie Mellon University takes that cycle-GAN model a step further, copying not only facial expressions but also the movements and cadence of a performance. Its creator, Aayush Bansal, a Ph.D. student in CMU's Robotics Institute, says he started the project with applications in film, entertainment and autonomous driving in mind, but quickly realized its potential for deepfakes, simulations used to intentionally mislead.