Let’s face it: we’re an empire built on illusions. Media, social media, politics: take your pick. But what if you could program any video, of any person, to say and do what you wanted, becoming a digital puppeteer of sorts? Right now, face-swapping on Snapchat may be cool for petty amusement, but manipulating Donald Trump’s Fox News interview to advocate building mosques in San Diego instead of a wall, in real time, might be even cooler.
Welcome to the world of Face2Face.
Visiting Stanford scientist Matthias Niessner built upon existing Stanford research using web cameras that capture 3D facial gestures as a means of “live facial re-enactment.” As reported in The New York Times, Dr. Niessner’s goal was to use this technique to improve modern technology: instantaneous language translation over Skype, and better-quality virtual reality and movie dubbing.
In very simple terms, Face2Face works by tracking the facial expressions and gestures of two people: a target actor (say, Donald Trump in a recorded interview) and a source actor (you or me, sitting in front of a commodity webcam), and mapping one onto the other. Once that mapping is established, the source actor can seamlessly re-enact the target actor’s face in real time, and the puppeteering can begin.
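The core idea behind this kind of expression mapping can be sketched with a parametric face model: each face is a neutral mesh plus a weighted sum of “blendshape” offsets (smile, open jaw, raised brow, and so on). Track the source actor, fit the blendshape weights that explain their current expression, then apply those same weights to the target’s blendshapes. The toy meshes and blendshapes below are made-up illustrations, not Face2Face’s actual model, which fits a dense 3D face template per frame.

```python
import numpy as np

# Hypothetical parametric face model: a face is a neutral mesh plus a
# weighted sum of "blendshape" offsets. The meshes here are toy
# 2-vertex examples; a real model has thousands of vertices.
NEUTRAL = {
    "source": np.array([[0.0, 0.0], [1.0, 0.0]]),
    "target": np.array([[0.0, 0.1], [1.2, 0.0]]),
}
BLENDSHAPES = {
    "source": np.array([[[0.0, 0.2], [0.0, 0.2]],    # shape 0: raise both vertices
                        [[0.1, 0.0], [-0.1, 0.0]]]), # shape 1: squeeze inward
    "target": np.array([[[0.0, 0.3], [0.0, 0.3]],
                        [[0.2, 0.0], [-0.2, 0.0]]]),
}

def fit_expression(identity, observed):
    """Least-squares fit of weights w so that
    neutral + sum_i w[i] * blendshape_i best matches the observed mesh."""
    A = BLENDSHAPES[identity].reshape(len(BLENDSHAPES[identity]), -1).T
    b = (observed - NEUTRAL[identity]).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def reenact(weights, identity="target"):
    """Apply expression weights (fit on the source) to the target's face."""
    offsets = np.tensordot(weights, BLENDSHAPES[identity], axes=1)
    return NEUTRAL[identity] + offsets

# Per webcam frame: track the source, fit weights, re-pose the target.
source_frame = NEUTRAL["source"] + 0.8 * BLENDSHAPES["source"][0]
w = fit_expression("source", source_frame)  # recovers roughly [0.8, 0.0]
target_face = reenact(w)                    # the target now "smiles" too
```

Because the weights live in a shared expression space rather than raw pixels, the same smile transfers across faces with different shapes, which is roughly what makes real-time re-enactment feasible.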