Intel Corp. demonstrated its RealSense 3D camera running an application developed by Swiss startup faceshift during the opening keynote address of Computex Taipei, the largest ICT show in Asia.
The live demonstration offered a vision of the future of online virtual communication and avatar-based gaming on consumer devices. During Intel’s keynote address, company president Renée James spoke to Michelle Xu, but it was Xu’s avatar that spoke back. The live avatar experience combined Intel and faceshift technology. The new Intel RealSense 3D camera provides both depth and video but, unlike the similar Microsoft Kinect, is small enough to be embedded in consumer devices and is designed for close-range human-computer interactions such as gesture tracking. faceshift’s software analyses this data and outputs meaningful expression and eye-gaze information, which is used to drive the virtual character.
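To illustrate the pipeline described above, the sketch below shows how per-frame face-tracking output (expression coefficients plus eye-gaze angles) could be smoothed and applied to a virtual character. All names and data structures here are hypothetical illustrations, not faceshift's actual SDK or Intel's API:

```python
# Hypothetical sketch of a tracker-to-avatar pipeline; the real
# faceshift/RealSense interfaces are not shown here.
from dataclasses import dataclass, field

@dataclass
class TrackedFrame:
    # Expression coefficients in [0, 1], e.g. {"smile": 0.8}; names assumed.
    blendshapes: dict
    gaze_yaw: float    # degrees; sign convention assumed
    gaze_pitch: float

@dataclass
class Avatar:
    blendshapes: dict = field(default_factory=dict)
    eye_yaw: float = 0.0
    eye_pitch: float = 0.0

    def apply(self, frame: TrackedFrame, smoothing: float = 0.5) -> None:
        # Exponentially smooth incoming weights to damp sensor jitter
        # before the renderer reads them.
        for name, weight in frame.blendshapes.items():
            prev = self.blendshapes.get(name, 0.0)
            self.blendshapes[name] = prev + smoothing * (weight - prev)
        self.eye_yaw += smoothing * (frame.gaze_yaw - self.eye_yaw)
        self.eye_pitch += smoothing * (frame.gaze_pitch - self.eye_pitch)

avatar = Avatar()
avatar.apply(TrackedFrame({"smile": 0.8}, gaze_yaw=10.0, gaze_pitch=0.0))
print(round(avatar.blendshapes["smile"], 2))  # 0.4 after one smoothed step
```

In a real application this `apply` step would run once per camera frame, with the avatar's blendshape weights handed to the rendering engine each tick.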
The demo showed the power and versatility of next-generation 3D depth sensors, particularly when embedded in consumer devices. In this case Intel used an ASUS laptop, although other devices will be shown at the Intel booth (4F, TWTC Nangang Hall).
Previously, this type of face-tracking technology was restricted to large visual-effects and game studios because of the high price of the specialised hardware and software. With embedded Intel RealSense 3D cameras, consumers will have this capability in their own devices for the first time. “The combination of Intel RealSense technology and faceshift technology provides a new level of robustness and fidelity on a consumer device. This is a game changer for online communication and in-game avatar use,” said Mooly Eden, senior vice-president and general manager of perceptual computing at Intel.
“The combination of faceshift and Intel RealSense puts animation-studio technology in the hands of consumers,” said faceshift CEO Thibaut Weise.