How2Sign is a multimodal, multiview continuous American Sign Language (ASL) dataset comprising a parallel corpus of more than 80 hours of sign language videos with a set of corresponding modalities, including speech, English transcripts, and depth. A three-hour subset was additionally recorded in the Panoptic studio, enabling detailed 3D pose estimation.
30 PAPERS • 3 BENCHMARKS
ASL-Skeleton3D introduces a representation that maps the coordinates of the signers in the ASLLVD dataset into three-dimensional space. This enables more accurate observation of body parts and sign articulation, allowing researchers to better understand the language and explore other approaches to the Sign Language Recognition (SLR) field.
0 PAPER • NO BENCHMARKS YET