Volume 2, Issue 1, 2022
Articles

Universal Speech Translator

Sai Prakash R
Kristu Jayanti College (Autonomous), Bengaluru, Karnataka
Ayesha ML
Kristu Jayanti College (Autonomous), Bengaluru, Karnataka
Srusti
Kristu Jayanti College (Autonomous), Bengaluru, Karnataka
Divya MO
Kristu Jayanti College (Autonomous), Bengaluru, Karnataka

Published 2022-05-25

How to Cite

R, S. P., ML, A., Srusti, & MO, D. (2022). Universal Speech Translator. Kristu Jayanti Journal of Computational Sciences (KJCS), 2(1), 88–96. https://doi.org/10.59176/kjcs.v2i1.2240

Abstract

Meta describes this all-in-one translator as "driven by the goal of breaking down language barriers on a global scale" using machine learning technology. From a technical standpoint, it is a provisional computational prototype based on a Sparsely Gated Mixture-of-Experts model, trained on data obtained through novel and effective data-mining methods tailored to low-resource languages; this model forms the basis of universal translation. Translation quality in the system is evaluated with BLEU (Bilingual Evaluation Understudy) scores.
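The abstract names a Sparsely Gated Mixture-of-Experts architecture. As a rough illustration of the routing idea only (not the paper's actual model; every name, shape, and dimension below is a hypothetical placeholder), a top-k gating layer scores the experts per token and dispatches each token to just the k best, so only a small fraction of the network's parameters run per input:

```python
import numpy as np

def top_k_gating(x, w_gate, k=2):
    """Sparse top-k gating: score all experts, keep only the k best
    per token, and softmax-renormalize over the selected experts."""
    logits = x @ w_gate                         # (tokens, experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]  # indices of the k best experts
    gates = np.zeros_like(logits)
    for t in range(logits.shape[0]):
        sel = logits[t, topk[t]]
        sel = np.exp(sel - sel.max())           # stable softmax over k scores
        gates[t, topk[t]] = sel / sel.sum()
    return gates, topk

def moe_forward(x, w_gate, experts, k=2):
    """Combine each token's selected experts, weighted by the gates."""
    gates, topk = top_k_gating(x, w_gate, k)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for e in topk[t]:
            out[t] += gates[t, e] * experts[e](x[t])
    return out

# Toy demo: 3 tokens, 4 linear "experts", hidden size 8.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=(3, d))
w_gate = rng.normal(size=(d, n_experts))
experts = [lambda v, W=rng.normal(size=(d, d)): v @ W for _ in range(n_experts)]
y = moe_forward(x, w_gate, experts)
print(y.shape)  # (3, 8)
```

The gating weights are zero for all but k experts per token, which is what makes the mixture "sparsely gated": capacity grows with the number of experts while per-token compute stays roughly constant.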
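BLEU, the metric named in the abstract, scores a candidate translation by its modified n-gram precision against a reference, damped by a brevity penalty for short outputs. A minimal single-reference sketch (deliberately simplified; production toolkits such as sacreBLEU add smoothing, multi-reference support, and standardized tokenization):

```python
from collections import Counter
import math

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions for n = 1..max_n, times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(candidate[i:i + n])
                              for i in range(len(candidate) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed BLEU is zero if any precision is zero
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty discourages overly short candidates.
    bp = (1.0 if len(candidate) >= len(reference)
          else math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)

hyp = "the cat sat on the mat".split()
ref = "the cat sat on the mat".split()
print(round(bleu(hyp, ref), 3))  # identical sentences score 1.0
```

BLEU is usually reported at the corpus level (aggregating n-gram counts over all sentences before taking precisions), but the sentence-level form above shows the core computation.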

References

[1] T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, (Lisbon, Portugal), pp. 1412–1421, Association for Computational Linguistics, Sept. 2015.

[2] P. Brown, S. Della Pietra, V. Pietra, and R. Mercer, “The mathematics of statistical machine translation: Parameter estimation,” Computational Linguistics, vol. 19, pp. 263–311, 01 1993.

[3] A. W. Black, H. Zen, and K. Tokuda, “Statistical parametric speech synthesis,” in 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP ’07, vol. 4, pp. IV–1229–IV–1232, 2007.

[4] Y. Jia, R. J. Weiss, F. Biadsy, W. Macherey, M. Johnson, Z. Chen, and Y. Wu, “Direct speech-to-speech translation with a sequence-to-sequence model,” CoRR, vol. abs/1904.06037, 2019.

[5] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, “Lstm: A search space odyssey,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2222–2232, 2017.

[6] J. Zhang, Z. Ling, L. Liu, Y. Jiang, and L. Dai, “Sequence-to-sequence acoustic modeling for voice conversion,” CoRR, vol. abs/1810.06865, 2018.

[7] Y. Jia, M. Johnson, W. Macherey, R. J. Weiss, Y. Cao, C. Chiu, N. Ari, S. Laurenzo, and Y. Wu, “Leveraging weakly supervised data to improve end-to-end speech-to-text translation,” CoRR, vol. abs/1811.02050, 2018.

[8] C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, K. Gonina, N. Jaitly, B. Li, J. Chorowski, and M. Bacchiani, “State-of-the-art speech recognition with sequence-to-sequence models,” CoRR, vol. abs/1712.01769, 2017.

[9] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems 27 (Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds.), pp. 3104–3112, Curran Associates, Inc., 2014.

[10] S. Benk, Y. Elmir, and A. Dennai, “A study on automatic speech recognition,” vol. 10, pp. 77–85, 08 2019.

[11] F. J. Och, C. Tillmann, and H. Ney, “Improved alignment models for statistical machine translation,” in 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999.

[12] P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst, “Moses: Open source toolkit for statistical machine translation,” in Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, (Prague, Czech Republic), pp. 177–180, Association for Computational Linguistics, June 2007.