no code implementations • 23 Jan 2024 • Md Asif Jalal, Pablo Peso Parada, George Pavlidis, Vasileios Moschopoulos, Karthikeyan Saravanan, Chrysovalantis-Giorgos Kontoulis, Jisi Zhang, Anastasios Drosou, Gil Ho Lee, Jungin Lee, Seokyeong Jung
During training, a list of biasing phrases is selected from a large pool of phrases following a sampling strategy.
Automatic Speech Recognition (ASR) +1
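A minimal sketch of such a per-utterance sampling strategy, assuming a hypothetical split between "positive" phrases (those present in the reference transcript) and random distractors — the paper's actual strategy, function names, and ratios are not specified here:

```python
import random

def sample_biasing_phrases(reference_phrases, phrase_pool,
                           n_phrases=10, p_positive=0.5, seed=None):
    """Sample a biasing list for one training utterance (illustrative).

    Mixes phrases that appear in the reference transcript with random
    distractors from the large pool, so the model learns both to use
    and to ignore biasing context.
    """
    rng = random.Random(seed)
    positives = [p for p in phrase_pool if p in reference_phrases]
    n_pos = min(len(positives), int(n_phrases * p_positive))
    chosen = rng.sample(positives, n_pos)
    distractors = [p for p in phrase_pool if p not in chosen]
    chosen += rng.sample(distractors, min(n_phrases - n_pos, len(distractors)))
    rng.shuffle(chosen)
    return chosen
```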
no code implementations • 22 Jan 2024 • Jisi Zhang, Vandana Rajan, Haaris Mehmood, David Tuckey, Pablo Peso Parada, Md Asif Jalal, Karthikeyan Saravanan, Gil Ho Lee, Jungin Lee, Seokyeong Jung
On-device Automatic Speech Recognition (ASR) models trained on speech data of a large population might underperform for individuals unseen during training.
Automatic Speech Recognition (ASR) +1
no code implementations • 14 Aug 2020 • Taewoo Lee, Min-Joong Lee, Tae Gyoon Kang, Seokyeoung Jung, Minseok Kwon, Yeona Hong, Jungin Lee, Kyoung-Gu Woo, Ho-Gyeong Kim, Jiseung Jeong, Ji-Hyun Lee, Hosik Lee, Young Sang Choi
We propose an adapter-based multi-domain Transformer language model (LM) for Transformer ASR.
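The core idea of an adapter-based multi-domain LM can be sketched as a per-domain bottleneck module inserted into a shared Transformer. The shapes, names, and placement below are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

def adapter_forward(x, domain_id, adapters):
    """Apply a per-domain bottleneck adapter with a residual connection.

    x: (seq_len, d_model) hidden states from a shared Transformer layer.
    adapters: dict mapping domain_id -> (W_down, W_up), where
      W_down: (d_model, d_bottleneck) and W_up: (d_bottleneck, d_model).
    Only the small adapter weights are trained per domain; the backbone
    stays shared across domains.
    """
    W_down, W_up = adapters[domain_id]
    h = np.maximum(0.0, x @ W_down)   # down-project + ReLU
    return x + h @ W_up               # up-project + residual add
```

Initializing `W_up` near zero makes each adapter start as an identity mapping, so adding a new domain does not perturb the shared LM.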
no code implementations • 2 Jan 2020 • Kwangyoun Kim, Kyungmin Lee, Dhananjaya Gowda, Junmo Park, Sungsoo Kim, Sichen Jin, Young-Yoon Lee, Jinsu Yeo, Daehyun Kim, Seokyeong Jung, Jungin Lee, Myoungji Han, Chanwoo Kim
In this paper, we present a new on-device automatic speech recognition (ASR) system based on monotonic chunk-wise attention (MoChA) models trained on a large (> 10K hours) corpus.
Automatic Speech Recognition (ASR) +3
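At decoding time, MoChA-style attention moves monotonically over encoder frames, picks a boundary frame, and attends softly only over a small chunk ending there — which is what makes it streamable on device. A greedy, illustrative sketch (the actual MoChA training uses expected attention, not this hard rule):

```python
import numpy as np

def mocha_attend(energies, chunk_size=4):
    """Greedy MoChA-style attention at decoding time (illustrative).

    energies: (num_decoder_steps, num_encoder_frames) monotonic energies.
    For each output step, advance monotonically until the selection
    probability sigmoid(energy) exceeds 0.5, then softmax over the
    chunk_size frames ending at that boundary.
    Returns a list of (chunk_start, boundary, weights) per output step.
    """
    T_out, T_in = energies.shape
    contexts = []
    t = 0  # monotonic boundary: never moves backwards
    for i in range(T_out):
        while t < T_in - 1 and 1.0 / (1.0 + np.exp(-energies[i, t])) < 0.5:
            t += 1
        start = max(0, t - chunk_size + 1)
        e = energies[i, start:t + 1]
        w = np.exp(e - e.max())
        contexts.append((start, t, w / w.sum()))
    return contexts
```

Because each step only looks back `chunk_size` frames from a forward-moving boundary, latency and memory stay bounded regardless of utterance length.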