[1] B. Babaali, A state-of-the-art and efficient framework for Persian speech recognition, Signal and Data Processing, Vol. 13, pp. 51-62, 2016. [DOI:10.18869/acadpub.jsdp.13.3.51]
[2] Y. Samareh, Persian Language Phonology, Tehran, University Publishing Center, 1985.
[3] M. Rahiminezhad, S. A. Seyyedsalehi, Comparison and assessment of different feature extraction and normalization methods in speaker-independent speech recognition, Amirkabir Journal of Science and Research, 2000.
[4] S. A. Seyyedsalehi, I. Nejadgholi, F. Tohidkhah, Boosting pattern recognition performance of neural networks by developing bidirectional methods, independent research report, 2004.
[5] S. Karami, Speaker-independent Persian phone recognition using a neural network model with a combination of steady and transition parts of phones, M.Sc. thesis, Biomedical Engineering Faculty, Amirkabir University, 2000.
[6] M. Yazdiyan, Persian continuous speech recognition based on discrete acoustic events modeling, M.Sc. thesis, Biomedical Engineering Faculty, Amirkabir University, 2001.
[7] S. Alisamir, S. M. Ahadi, and S. Seyedin, An end-to-end deep learning model to recognize Farsi speech from raw input, 4th Iranian Conference on Signal Processing and Intelligent Systems, pp. 1-5, 2018. [DOI:10.1109/ICSPIS.2018.8700538]
[8] N. Amini, S. A. Seyyedsalehi, Manipulation of attractors in feed-forward autoassociative neural networks for robust learning, Iranian Conference on Electrical Engineering (ICEE), 2017. [DOI:10.1109/IranianCEE.2017.7985469]
[9] Z. Ansari and S. A. Seyyedsalehi, Toward growing modular deep neural networks for continuous speech recognition, Neural Computing and Applications, pp. 1177-1196, 2017. [DOI:10.1007/s00521-016-2438-x]
[10] S. Babaei, A. Geranmayeh, and S. A. Seyyedsalehi, Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks, Computer Methods and Programs in Biomedicine, 100(3), pp. 237-247, 2010. [DOI:10.1016/j.cmpb.2010.04.005]
[11] M. Bijankhan, J. Sheikhzadegan, M. R. Roohani, FARSDAT, the speech database of Farsi spoken language, Proceedings of the Australian Conference on Speech Science and Technology, 1994.
[12] S. Borys and M. Hasegawa-Johnson, SVM-HMM landmark based speech recognition, 2009.
[13] Z. Chen, Y. Luo, and N. Mesgarani, Deep attractor network for single-microphone speaker separation, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 246-250, 2017. [DOI:10.1109/ICASSP.2017.7952155]
[14] J. Chorowski, D. Bahdanau, K. Cho and Y. Bengio, End-to-end continuous speech recognition using attention-based recurrent NN: first results, arXiv preprint arXiv:1412.1602, 2014.
[15] G. Dahl, M. A. Ranzato, A. R. Mohamed and G. E. Hinton, Phone recognition with the mean-covariance restricted Boltzmann machine, Advances in Neural Information Processing Systems, pp. 469-477, 2010.
[16] Z. D. Doolab, S. A. Seyyedsalehi, and N. S. Dehaghani, Nonlinear normalization of input patterns to handwritten character variability in handwriting recognition neural network, International Conference on Biomedical Engineering and Biotechnology, pp. 848-851, 2012. [DOI:10.1109/iCBEB.2012.284]
[17] L. Dehyadegary, S. A. Seyyedsalehi and I. Nejadgholi, Nonlinear enhancement of noisy speech using continuous attractor dynamics formed in recurrent neural networks, Neurocomputing, 2011. [DOI:10.1016/j.neucom.2010.12.044]
[18] B. Delgutte and N. Y. Kiang, Speech coding in the auditory nerve: IV. Sounds with consonant-like dynamic characteristics, The Journal of the Acoustical Society of America, pp. 897-907, 1984. [DOI:10.1121/1.390599]
[19] S. Firooz, F. Almasganj, and Y. Shekofteh, Improvement of automatic speech recognition systems via nonlinear dynamical features evaluated from the recurrence plot of speech signals, Computers & Electrical Engineering, pp. 215-226, 2017. [DOI:10.1016/j.compeleceng.2016.07.006]
[20] D. Gillick, S. Wegmann and L. Gillick, Discriminative training for speech recognition is compensating for statistical dependence in the HMM framework, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4745-4748, 2012. [DOI:10.1109/ICASSP.2012.6288979]
[21] A. H. Hadjahmadi and M. M. Homayounpour, Robust feature extraction and uncertainty estimation based on attractor dynamics in cyclic deep denoising autoencoders, Neural Computing and Applications, 31(11), pp. 7989-8002, 2019. [DOI:10.1007/s00521-018-3623-x]
[22] M. Hasegawa-Johnson, J. Baker, S. Borys, K. Chen, E. Coogan, S. Greenberg, A. Juneja, K. Kirchhoff, K. Livescu, S. Mohan and J. Muller, Landmark-based speech recognition: report of the 2004 Johns Hopkins summer workshop, International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2005.
[23] D. He, B. P. Lim, X. Yang, M. Hasegawa-Johnson and D. Chen, Acoustic landmarks contain more information about the phone string than other frames for automatic speech recognition with deep neural network acoustic model, The Journal of the Acoustical Society of America, pp. 3207-3219, 2018. [DOI:10.1121/1.5039837]
[24] D. He, X. Yang, B. P. Lim, Y. Liang, M. Hasegawa-Johnson and D. Chen, When CTC training meets acoustic landmarks, International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5996-6000, 2019. [DOI:10.1109/ICASSP.2019.8683607]
[25] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath and B. Kingsbury, Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups, IEEE Signal Processing Magazine, pp. 82-97, 2012. [DOI:10.1109/MSP.2012.2205597]
[26] A. Juneja and C. Espy-Wilson, A probabilistic framework for landmark detection based on phonetic features for automatic speech recognition, The Journal of the Acoustical Society of America, pp. 1154-1168, 2008. [DOI:10.1121/1.2823754]
[27] J. Kahn, A. Lee and A. Hannun, Self-training for end-to-end speech recognition, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7084-7088, 2020. [DOI:10.1109/ICASSP40776.2020.9054295]
[28] M. A. Kermanshahi and M. M. Homayounpour, Improving phoneme sequence recognition using phoneme duration information in DNN-HSMM, Journal of AI and Data Mining, pp. 137-147, 2019.
[29] R. Kumar, Y. Luo, and N. Mesgarani, Music source activity detection and separation using deep attractor network, Interspeech, pp. 347-351, 2018. [DOI:10.21437/Interspeech.2018-2326]
[30] J. W. Lee, J. Y. Choi and H. G. Kang, Classification of stop place in consonant-vowel contexts using feature extrapolation of acoustic-phonetic features in telephone speech, The Journal of the Acoustical Society of America, Vol. 131, 2012. [DOI:10.1121/1.3672706]
[31] Y. Luo, Z. Chen and N. Mesgarani, Speaker-independent speech separation with deep attractor network, IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(4), pp. 787-796, 2018. [DOI:10.1109/TASLP.2018.2795749]
[32] M. Meister and M. J. Berry, The neural code of the retina, Neuron, pp. 435-450, 1999. [DOI:10.1016/S0896-6273(00)80700-X]
[33] N. Morgan, J. Cohen, S. H. Krishnan, S. Chang and S. Wegmann, Final report: OUCH project (Outing Unfortunate Characteristics of HMMs), 2013.
[34] T. S. Nguyen, S. Stüker, J. Niehues, and A. Waibel, Improving sequence-to-sequence speech recognition training with on-the-fly data augmentation, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020. [DOI:10.1109/ICASSP40776.2020.9054130]
[35] C. Niu, J. Zhang, X. Yang and Y. Xie, A study on landmark detection based on CTC and its application to pronunciation error detection, Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, pp. 636-640, 2017. [DOI:10.1109/APSIPA.2017.8282103]
[36] S. Parveen and P. Green, Speech enhancement with missing data techniques using recurrent neural networks, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vol. 1, pp. I-733, 2004.
[37] M. Ravanelli, T. Parcollet and Y. Bengio, The PyTorch-Kaldi speech recognition toolkit, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6465-6469, 2019. [DOI:10.1109/ICASSP.2019.8683713]
[38] S. Reza, S. A. Seyyedsalehi, S. Z. Seyyedsalehi, A Persian language phone recognition based on robust extraction of acoustic landmarks, 27th Iranian Conference on Biomedical Engineering (ICBME), 2020. [DOI:10.1109/ICBME51989.2020.9319436]
[39] S. Reza, S. A. Seyyedsalehi, S. Z. Seyyedsalehi, Attractor manipulation in denoising autoencoders for robust phone recognition, 29th Iranian Conference on Electrical Engineering (ICEE), 2021. [DOI:10.1109/ICEE52715.2021.9543707]
[40] T. N. Sainath, Island-driven search using broad phonetic classes, IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), pp. 287-292, 2009. [DOI:10.1109/ASRU.2009.5373547]
[41] T. N. Sainath, B. Kingsbury and B. Ramabhadran, Auto-encoder bottleneck features using deep belief networks, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4153-4156, 2012. [DOI:10.1109/ICASSP.2012.6288833]
[42] L. Sarı, N. Moritz, T. Hori, and J. Le Roux, Unsupervised speaker adaptation using attention-based speaker memory for end-to-end ASR, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020.
[43] S. A. Seyyedsalehi, A modular neural network speech recognizer based on the both acoustic steady portions and transitions, International Conference on Spoken Language Processing (ICSLP), 2000. [DOI:10.21437/ICSLP.2000-408]
[44] S. Z. Seyyedsalehi and S. A. Seyyedsalehi, Attractor analysis in associative neural networks and its application to facial image analysis, Computational Intelligence in Electrical Engineering, Vol. 9, No. 1, 2018.
[45] K. N. Stevens, S. J. Keyser, and H. Kawasaki, Toward a phonetic and phonological theory of redundant features, Ph.D. thesis, MIT, Cambridge, 1986.
[46] K. N. Stevens, From acoustic cues to segments, features and words, International Conference on Spoken Language Processing (ICSLP), pp. A1-A8, 2000.
[47] J. Vaněk, J. Michálek and J. Psutka, Recurrent DNNs and its ensembles on the TIMIT phone recognition task, International Conference on Speech and Computer, pp. 728-736, 2018. [DOI:10.1007/978-3-319-99579-3_74]
[48] H. Veisi and A. H. Mani, Persian speech recognition using deep learning, International Journal of Speech Technology, 23(4), pp. 893-905, 2020. [DOI:10.1007/s10772-020-09768-x]
[49] P. Vincent, H. Larochelle, Y. Bengio and P. A. Manzagol, Extracting and composing robust features with denoising autoencoders, Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103, 2008. [DOI:10.1145/1390156.1390294]
[50] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion, Journal of Machine Learning Research, pp. 3371-3408, 2010.
[51] M. Zapotoczny, P. Pietrzak, A. Lancucki and J. Chorowski, Lattice generation in attention-based speech recognition models, Interspeech, pp. 2225-2229, 2019. [DOI:10.21437/Interspeech.2019-2667]
[52] T. Yoshimura, T. Hayashi, K. Takeda and S. Watanabe, End-to-end automatic speech recognition integrated with CTC-based voice activity detection, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020. [DOI:10.1109/ICASSP40776.2020.9054358]
[53] N. Zeghidour, N. Usunier, I. Kokkinos, T. Schatz, G. Synnaeve and E. Dupoux, Learning filterbanks from raw speech for phone recognition, International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5509-5513, 2018. [DOI:10.1109/ICASSP.2018.8462015]