Project Starline: A high-fidelity telepresence system

J. Lawrence, D. Goldman, S. Achar, G. Blascovich, J. Desloge, T. Fortes, E. Gomez, S. Häberling, H. Hoppe, A. Huibers, C. Knaus, B. Kuschak, R. Martin-Brualla, H. Nover, A. Russell, S. Seitz, K. Tong.
ACM Trans. Graphics (SIGGRAPH Asia), 40(6), 2021.
Chat with a remote person as if they were copresent.
Abstract: We present a real-time bidirectional communication system that lets two people, separated by distance, experience a face-to-face conversation as if they were copresent. It is the first telepresence system that is demonstrably better than 2D videoconferencing, as measured using participant ratings (e.g., presence, attentiveness, reaction-gauging, engagement), meeting recall, and observed nonverbal behaviors (e.g., head nods, eyebrow movements). This milestone is reached by maximizing audiovisual fidelity and the sense of copresence in all design elements, including physical layout, lighting, face tracking, multi-view capture, microphone array, multi-stream compression, loudspeaker output, and lenticular display. Our system achieves key 3D audiovisual cues (stereopsis, motion parallax, and spatialized audio) and enables the full range of communication cues (eye contact, hand gestures, and body language), yet does not require special glasses or body-worn microphones/headphones. The system consists of a head-tracked autostereoscopic display, high-resolution 3D capture and rendering subsystems, and network transmission using compressed color and depth video streams. Other contributions include a novel image-based geometry fusion algorithm, free-space dereverberation, and talker localization.
Hindsights: My contributions focused on the real-time compression and rendering technologies. Related patent publications include "Spatially adaptive video compression for multiple streams of color and depth" and "Image-based geometric fusion of multiple depth images using ray casting".
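The geometry fusion mentioned above combines depth images from several capture pods into a single surface for the target view. As a rough intuition only, the per-pixel blending step can be sketched as below; this is a hedged simplification, not the paper's actual ray-casting algorithm, and the function name, confidence weights, and validity convention (depth 0 = invalid) are all assumptions for illustration.

```python
import numpy as np

def fuse_depth_maps(depth_maps, confidences, min_conf=0.1):
    """Fuse per-pixel depth samples from several sensors into one map.

    Illustrative sketch: assumes every depth map has already been
    reprojected into the target camera's view (the system performs
    that reprojection by ray casting). Samples with depth 0 are
    treated as invalid and excluded from the blend.
    """
    depths = np.stack(depth_maps)              # (k, h, w)
    confs = np.stack(confidences)              # (k, h, w)
    confs = np.where(depths > 0, confs, 0.0)   # zero out invalid samples
    total = confs.sum(axis=0)
    # Confidence-weighted average where enough evidence exists, else invalid.
    fused = np.where(total > min_conf,
                     (depths * confs).sum(axis=0) / np.maximum(total, 1e-9),
                     0.0)
    return fused
```

In the real system this blending must also resolve occlusions and depth discontinuities between views, which is where the ray-casting formulation of the patent comes in; the sketch above only shows the weighted-combination step.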