This is a research project, not an official NVIDIA product.
OpenSeq2Seq's main goal is to allow researchers to explore various sequence-to-sequence models as efficiently as possible. This efficiency comes from full support for distributed and mixed-precision training. OpenSeq2Seq is built on TensorFlow and provides all the necessary building blocks for training encoder-decoder models for neural machine translation and automatic speech recognition. We plan to extend it to other modalities in the future.
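To make the encoder-decoder idea concrete, here is a minimal, purely illustrative sketch in NumPy: an encoder reads the source token sequence into a fixed-size state, and a decoder generates output tokens from that state. All names, dimensions, and the simple RNN cell are hypothetical and are not OpenSeq2Seq APIs.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, emb, hid = 10, 8, 16

# Hypothetical parameters for a tiny encoder-decoder model.
E = rng.normal(0, 0.1, (vocab, emb))          # shared token embedding
W_enc = rng.normal(0, 0.1, (emb + hid, hid))  # encoder RNN weights
W_dec = rng.normal(0, 0.1, (emb + hid, hid))  # decoder RNN weights
W_out = rng.normal(0, 0.1, (hid, vocab))      # projection to output vocab

def rnn_step(x, h, W):
    # One step of a plain tanh RNN cell.
    return np.tanh(np.concatenate([x, h]) @ W)

def encode(src_ids):
    h = np.zeros(hid)
    for t in src_ids:            # read the source left to right
        h = rnn_step(E[t], h, W_enc)
    return h                     # final state summarizes the source

def decode(h, max_len=5, bos=0):
    out, tok = [], bos
    for _ in range(max_len):     # greedy decoding, one token at a time
        h = rnn_step(E[tok], h, W_dec)
        tok = int(np.argmax(h @ W_out))
        out.append(tok)
    return out

print(decode(encode([3, 1, 4])))
```

Real models in OpenSeq2Seq replace these pieces with trained TensorFlow layers, but the encoder-then-decoder data flow is the same.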
- Sequence to sequence learning
- Neural Machine Translation
- Automatic Speech Recognition
- Data-parallel distributed training
- Multi-GPU
- Multi-node
- Mixed-precision training on NVIDIA Volta GPUs
Documentation: https://nvidia.github.io/OpenSeq2Seq/
The speech-to-text workflow uses some parts of the Mozilla DeepSpeech project.
The text-to-text workflow uses some functions from Tensor2Tensor and the Neural Machine Translation (seq2seq) Tutorial.