Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations

Author(s):

Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung and Matt Huenerfauth

Institution:

Rochester Institute of Technology, Rochester, NY, USA

Abstract:

To enable more websites to provide content in the form of sign language, we investigate software to partially automate the synthesis of American Sign Language (ASL) animations from a human-authored message specification. We automatically select: where prosodic pauses should be inserted (based on syntactic or other features), the time duration of these pauses, and variations in the speed at which individual words are performed (e.g., slower at the end of phrases). Based on an analysis of a corpus of multi-sentence ASL recordings with motion-capture data, we trained machine-learning models, which we evaluated in a cross-validation study. The best model outperformed a prior state-of-the-art ASL timing model. In a study in which native ASL signers evaluated animations generated either by our new model or by a simple baseline (uniform speed and no pauses), participants preferred the speed and pausing of the ASL animations produced by our model.
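To make the pipeline described above concrete, the following is a minimal, hypothetical sketch of how such a timing model might be structured: a classifier decides whether to insert a pause at each boundary between words in the message, and regressors estimate the pause duration and a speed multiplier for the preceding word. The feature names, toy data, and choice of random-forest models here are illustrative assumptions for exposition only, not the models or features used in the paper (those are described in the full text).

    # Hypothetical sketch (not the authors' implementation): predicting
    # pause insertion, pause duration, and word-speed scaling from
    # hand-crafted features at each boundary between words.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(0)

    # Assumed per-boundary features: [clause-boundary depth, phrase-final
    # position, words since last pause, sentence length]. A real system
    # would derive these from the syntax of the message specification.
    X = rng.integers(0, 8, size=(200, 4)).astype(float)

    # Toy labels: pauses tend to follow deeper clause boundaries.
    y_pause = (X[:, 0] > 4).astype(int)
    # Toy targets: pause duration in milliseconds, and a speed multiplier
    # for the preceding word (slower, i.e. < 1.0, at phrase-final positions).
    y_duration = 150.0 + 40.0 * X[:, 0] + rng.normal(0, 10, size=200)
    y_speed = 1.0 - 0.03 * X[:, 1] + rng.normal(0, 0.01, size=200)

    pause_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y_pause)
    duration_reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_duration)
    speed_reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_speed)

    # At synthesis time, score each boundary in a new message:
    boundary = np.array([[6.0, 3.0, 2.0, 9.0]])
    if pause_clf.predict(boundary)[0] == 1:
        print(f"insert pause of {duration_reg.predict(boundary)[0]:.0f} ms")
    print(f"scale preceding word speed by {speed_reg.predict(boundary)[0]:.2f}x")

The baseline mentioned in the abstract corresponds to skipping these predictions entirely: every word plays at a uniform speed (multiplier 1.0) with no inserted pauses.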
