Face2Face is an approach for real-time facial reenactment of a monocular target video sequence (e.g., a Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time. This live setup has also been shown at SIGGRAPH Emerging Technologies 2016 by Thies et al.

In recent years, real-time markerless facial performance capture based on commodity sensors has been demonstrated. Impressive results have been achieved, both on Red-Green-Blue (RGB) and on RGB-D data. These techniques have become increasingly popular for the animation of virtual Computer Graphics (CG) avatars in video games and movies, and it is now feasible to run these face capture and tracking algorithms from home, which is the foundation for many Virtual Reality (VR) and Augmented Reality (AR) applications such as teleconferencing. In this paper, we employ a new dense markerless facial performance capture method based on monocular RGB data, similar to state-of-the-art methods. However, instead of transferring facial expressions to virtual CG characters, our main contribution is monocular facial reenactment in real time. In contrast to previous reenactment approaches that run offline, our goal is the online transfer of the facial expressions of a source actor, captured with an RGB sensor, to a target actor. The target sequence can be any monocular video; for example, legacy video footage downloaded from Youtube with a facial performance. We aim to modify the target video in a photo-realistic fashion, such that it is virtually impossible to notice the manipulations.
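The dense photometric consistency measure used for tracking can be written, in simplified form, as an energy over the face-model parameters. The notation below is ours, not the paper's exact formulation, and it omits details such as visibility handling and the precise norm and weighting choices:

$$
E(\mathcal{P}) \;=\; w_{\mathrm{col}} \sum_{\mathbf{p} \in \mathcal{V}} \big\| C_{\mathrm{synth}}(\mathbf{p}) - C_{\mathrm{input}}(\mathbf{p}) \big\|_2^2 \;+\; w_{\mathrm{lan}} \sum_{j} \big\| \mathbf{f}_j - \Pi(\mathbf{v}_j) \big\|_2^2 \;+\; w_{\mathrm{reg}}\, E_{\mathrm{reg}}(\mathcal{P})
$$

Here \(\mathcal{P}\) stacks pose, identity, expression, albedo, and illumination parameters; \(C_{\mathrm{synth}}\) is the rendered model image and \(C_{\mathrm{input}}\) the video frame, compared over the visible face pixels \(\mathcal{V}\); \(\mathbf{f}_j\) are detected 2D facial landmarks with \(\Pi(\mathbf{v}_j)\) the projections of the corresponding model vertices; and \(E_{\mathrm{reg}}\) is a statistical regularizer that keeps the recovered parameters plausible.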
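Because source and target are tracked with the same parametric face model, the expression-transfer step can be illustrated by recombining coefficients. This is only a minimal sketch: the paper performs a fast sub-space deformation transfer rather than naively copying coefficients, and all names below (`shape_mean`, `id_basis`, etc.) are hypothetical stand-ins for a linear blendshape model:

```python
import numpy as np

def reenact_vertices(shape_mean, id_basis, exp_basis, alpha_target, delta_source):
    """Naive expression transfer under a shared linear face model.

    vertices = shape_mean + id_basis @ alpha + exp_basis @ delta
    Keeping the target's identity coefficients while substituting the
    source's tracked expression coefficients moves the source actor's
    performance onto the target face geometry.
    """
    # Hypothetical shapes: shape_mean (3N,), id_basis (3N, K_id),
    # exp_basis (3N, K_exp), alpha_target (K_id,), delta_source (K_exp,)
    return shape_mean + id_basis @ alpha_target + exp_basis @ delta_source
```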
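The mouth-retrieval step can be approximated as a nearest-neighbor lookup in expression-parameter space over the frames of the pre-tracked target sequence; the actual method additionally enforces appearance similarity and temporal coherence, which this sketch omits. Names are again illustrative:

```python
import numpy as np

def retrieve_mouth_frame(delta_retargeted, target_deltas):
    """Return the index of the pre-tracked target frame whose
    expression coefficients best match the re-targeted expression.

    target_deltas: (num_frames, K_exp) expression coefficients tracked
    once, offline, over the whole target sequence.  The mouth interior
    of the chosen frame is then warped into the final composite.
    """
    distances = np.linalg.norm(target_deltas - delta_retargeted, axis=1)
    return int(np.argmin(distances))
```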
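Finally, the synthesized face is re-rendered under the illumination estimated from the target footage and blended over the original frame. A minimal alpha-composite sketch, assuming a pre-computed soft face matte (the hedged simplification here is that illumination-consistent rendering makes a plain composite sufficient):

```python
import numpy as np

def composite_face(frame, rendered_face, face_matte):
    """Alpha-blend the re-rendered face over the original video frame.

    face_matte: (H, W) soft matte in [0, 1]; a feathered border hides
    the seam.  Because the face is rendered under the illumination
    estimated from the target video itself, the blended region already
    matches the surrounding real-world pixels.
    """
    alpha = face_matte[..., None]          # (H, W, 1) for broadcasting
    return alpha * rendered_face + (1.0 - alpha) * frame
```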