Learning Long-Term Motion Embeddings for Efficient Kinematics Generation
Authors: Nick Stracke†‡, Kolja Bauer†‡, Stefan Andreas Baumann†‡, Miguel Ángel Bautista, Josh Susskind, Björn Ommer†‡
Understanding and predicting motion is a fundamental component of visual intelligence. Although modern video models exhibit strong comprehension of scene dynamics, exploring multiple possible futures through full video synthesis remains prohibitively inefficient. We model scene dynamics orders of magnitude more efficiently by directly operating on a long-term motion embedding that is learned from large-scale trajectories obtained from tracker models. This enables efficient generation of long, realistic motions that fulfill goals specified via text prompts or spatial pokes. To achieve this, we first learn a highly compressed motion embedding with a temporal compression factor of 64×. In this space, we train a conditional flow-matching model to generate motion latents conditioned on task descriptions. The resulting motion distributions outperform those of both state-of-the-art video models and specialized task-specific approaches.
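The abstract describes training a conditional flow-matching model over compressed motion latents. As a minimal sketch of what one such training objective could look like, the snippet below implements a standard conditional flow-matching loss with a linear interpolation path; the function and argument names (`model`, `z1`, `cond`) are illustrative assumptions, not the paper's actual interfaces.

```python
import torch

def flow_matching_loss(model, z1, cond):
    """One conditional flow-matching training step on motion latents.

    z1:   batch of target motion latents (e.g. from a frozen motion encoder)
    cond: task conditioning (e.g. a text embedding or spatial-poke encoding)
    Illustrative sketch only; the paper's architecture and path may differ.
    """
    z0 = torch.randn_like(z1)                             # Gaussian noise sample
    t = torch.rand(z1.shape[0], *([1] * (z1.dim() - 1)))  # per-sample time in [0, 1)
    zt = (1 - t) * z0 + t * z1                            # point on the linear path
    v_target = z1 - z0                                    # constant target velocity
    v_pred = model(zt, t.flatten(), cond)                 # predicted velocity field
    return ((v_pred - v_target) ** 2).mean()              # regression loss
```

At sampling time, the learned velocity field would be integrated from noise to a motion latent, which a decoder then expands back into long trajectories; because the latent is temporally 64× compressed, this is far cheaper than synthesizing full video frames.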
On the Role of Lip Articulation in Visual Speech Perception
April 24, 2023 · research areas: Human-Computer Interaction; Speech and Natural Language Processing · conference: ICASSP
* = Equal Contribution
Generating realistic lip motion from audio to simulate speech production is critical for driving natural character animation. Previous research has shown that traditional metrics used to optimize and assess models for generating lip motion from speech are not a good indicator of subjective opinion of animation quality. Devising metrics that align with subjective opinion first requires understanding what impacts human…
Naturalistic Head Motion Generation From Speech
April 24, 2023 · research areas: Human-Computer Interaction; Speech and Natural Language Processing · conference: ICASSP
Synthesizing natural head motion to accompany speech for an embodied conversational agent is necessary for providing a rich interactive experience. Most prior works assess the quality of generated head motion by comparing them against a single ground-truth using an objective metric. Yet there are many plausible head motion sequences to accompany a speech utterance. In this work, we study the variation in the perceptual quality of head motions…

