[1] Rajpurkar, P., Zhang, J., Lopyrev, K., & Liang, P. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016. DOI: 10.18653/v1/D16-1264
[2] Li, Y., & Zhang, Y. Question answering on SQuAD 2.0 dataset. Stanford University, 2018.
[3] d'Hoffschmidt, M., Belblidia, W., Brendlé, T., Heinrich, Q., & Vidal, M. FQuAD: French question answering dataset. arXiv preprint arXiv:2002.06071, 2020. DOI: 10.18653/v1/2020.findings-emnlp.107
[4] Möller, T., Risch, J., & Pietsch, M. GermanQuAD and GermanDPR: Improving non-English question answering and passage retrieval. arXiv preprint arXiv:2104.12741, 2021. DOI: 10.18653/v1/2021.mrqa-1.4
[5] Lim, S., Kim, M., & Lee, J. KorQuAD: A Korean question answering dataset for machine reading comprehension. Proceedings of the Korea Information Science Society Conference, 539-541, 2018.
[6] Kim, Y., Lim, S., Lee, H., Park, S., & Kim, M. KorQuAD 2.0: A Korean question answering dataset for machine reading comprehension of web documents. Journal of KIISE, 47(6), 577-586, 2020. DOI: 10.5626/JOK.2020.47.6.577
[7] So, B., Byun, K., Kang, K., & Cho, S. JaQuAD: Japanese question answering dataset for machine reading comprehension. arXiv preprint arXiv:2202.01764, 2022.
[8] Ayoubi, S., & Davoodeh, M. Y. PersianQA: A dataset for Persian question answering. https://github.com/SajjjadAyobi/PersianQA, 2021.
[9] Mozafari, J., Fatemi, A., & Nematbakhsh, M. A. BAS: An answer selection method using BERT language model. arXiv preprint arXiv:1911.01528, 2019.
[10] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[11] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[12] Farahani, M., Gharachorloo, M., Farahani, M., & Manthouri, M. ParsBERT: Transformer-based model for Persian language understanding. Neural Processing Letters, 53(6), 3831-3847, 2021. DOI: 10.1007/s11063-021-10528-4
[13] Sanh, V., Debut, L., Chaumond, J., & Wolf, T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
[14] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[15] Lample, G., & Conneau, A. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
[16] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116, 2019. DOI: 10.18653/v1/2020.acl-main.747
[17] Persian Wikipedia Dataset. Available from: https://github.com/miladfa7/Persian-Wikipedia-Dataset