Speech-to-Speech Translation
29 papers with code • 3 benchmarks • 5 datasets
Speech-to-speech translation (S2ST) is the task of translating speech in one language into speech in another language. It can be done with a text-centric cascade of automatic speech recognition (ASR), text-to-text machine translation (MT), and text-to-speech (TTS) synthesis sub-systems. Recently, work on S2ST that does not rely on an intermediate text representation has been emerging.
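The text-centric cascade can be sketched as three functions composed in sequence. This is a minimal illustration of the pipeline structure only: the three stage functions are hypothetical stand-ins for real ASR, MT, and TTS models, not any particular system's API.

```python
# Minimal sketch of a cascaded S2ST pipeline (ASR -> MT -> TTS).
# All three stage functions are hypothetical stubs; only the
# composition pattern (text as the intermediate representation)
# is the point.

def asr(source_audio: bytes) -> str:
    """Hypothetical ASR stage: source-language audio -> source text."""
    return "hello world"  # stand-in transcript

def mt(source_text: str) -> str:
    """Hypothetical MT stage: source text -> target text."""
    lexicon = {"hello": "bonjour", "world": "monde"}  # toy word lexicon
    return " ".join(lexicon.get(w, w) for w in source_text.split())

def tts(target_text: str) -> bytes:
    """Hypothetical TTS stage: target text -> target-language audio."""
    return target_text.encode("utf-8")  # stand-in waveform bytes

def cascaded_s2st(source_audio: bytes) -> bytes:
    """Text-centric cascade: each stage consumes the previous stage's output."""
    return tts(mt(asr(source_audio)))

print(cascaded_s2st(b"..."))  # b'bonjour monde'
```

Direct S2ST models replace this composition with a single model that maps source speech to target speech (or to discrete speech units), avoiding error accumulation across the intermediate text.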
Libraries
Use these libraries to find Speech-to-Speech Translation models and implementations
Most implemented papers
SeamlessM4T: Massively Multilingual & Multimodal Machine Translation
What does it take to create the Babel Fish, a tool that can help individuals translate speech between any two languages?
Direct speech-to-speech translation with a sequence-to-sequence model
We present an attention-based sequence-to-sequence neural network which can directly translate speech from one language into speech in another language, without relying on an intermediate text representation.
Towards Automatic Face-to-Face Translation
As today's digital communication becomes increasingly visual, we argue that there is a need for systems that can automatically translate a video of a person speaking in language A into a target language B with realistic lip synchronization.
ESPnet-ST: All-in-One Speech Translation Toolkit
We present ESPnet-ST, which is designed for the quick development of speech-to-speech translation systems in a single framework.
Direct speech-to-speech translation with discrete units
When target text transcripts are available, we design a joint speech and text training framework that enables the model to generate dual modality output (speech and text) simultaneously in the same inference pass.
Multimodal and Multilingual Embeddings for Large-Scale Speech Mining
Using a similarity metric in that multimodal embedding space, we perform mining of audio in German, French, Spanish and English from Librivox against billions of sentences from Common Crawl.
CVSS Corpus and Massively Multilingual Speech-to-Speech Translation
In addition, CVSS provides normalized translation text which matches the pronunciation in the translation speech.
LibriS2S: A German-English Speech-to-Speech Translation Corpus
In contrast, activity in the area of speech-to-speech translation is still limited, although it is essential for overcoming the language barrier.
Leveraging Pseudo-labeled Data to Improve Direct Speech-to-Speech Translation
Direct speech-to-speech translation (S2ST) has attracted increasing attention recently.
TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation
Specifically, a sequence of discrete representations derived in a self-supervised manner is predicted by the model and passed to a vocoder for speech reconstruction, while still facing the following challenges: 1) acoustic multimodality: the discrete units derived from speech with the same content can be indeterministic due to acoustic properties (e.g., rhythm, pitch, and energy), which degrades translation accuracy; 2) high latency: current S2ST systems use autoregressive models that predict each unit conditioned on the previously generated sequence, failing to take full advantage of parallelism.
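The latency point in challenge 2) can be made concrete by counting decoding steps. This toy sketch (with stub "models", no real decoder) contrasts autoregressive unit decoding, where each unit waits on the previous one, with a non-autoregressive pass that emits all units at once, as TranSpeech-style NAR decoding aims to do.

```python
# Toy step-count comparison: autoregressive vs. non-autoregressive
# discrete-unit decoding. The decoders are hypothetical stubs; only
# the number of sequential inference steps is the point.

def autoregressive_decode(length: int) -> tuple[list[int], int]:
    """Emit one unit per step, each conditioned on the prefix so far."""
    units: list[int] = []
    steps = 0
    for t in range(length):
        units.append(t % 100)  # stand-in for model(units[:t])
        steps += 1             # one sequential step per unit
    return units, steps

def non_autoregressive_decode(length: int) -> tuple[list[int], int]:
    """Emit the whole unit sequence in a single parallel pass."""
    units = [t % 100 for t in range(length)]  # all positions at once
    return units, 1                           # one inference pass

_, ar_steps = autoregressive_decode(50)
_, nar_steps = non_autoregressive_decode(50)
print(ar_steps, nar_steps)  # 50 sequential steps vs. 1
```

The trade-off is that the non-autoregressive pass gives up conditioning on previously generated units, which is why techniques such as the bilateral perturbation in TranSpeech are needed to keep translation accuracy up.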