Volume 16, Issue 3 (12-2019)                   JSDP 2019, 16(3): 37-48




Hosseini M M, Zahedi M, Hassanpour H. A New Statistical Model for Evaluating Interactive Question Answering Systems Using Regression. JSDP 2019; 16(3): 37-48
URL: http://jsdp.rcisp.ac.ir/article-1-814-en.html
Shahrood University of Technology
Abstract:
The growth of computer systems and the pervasive use of information technology in everyday life have made quick access to information increasingly important. As the volume of information grows, it becomes harder to manage and control, so tools are needed to exploit it. A question answering (QA) system automatically returns correct answers to questions posed by humans in natural language. In such systems, once an answer has been returned, there is no way for the user and the system to exchange further information, to ask follow-up questions, or to obtain related answers, even when the answer is not what the user expected or more detail is needed. Interactive question answering (IQA) systems were created to solve this problem. IQA systems must cope with ambiguous linguistic structures, and they are therefore more accurate than plain QA systems. Whenever ambiguity arises (in the user's question or in the answer the system provides), the exchange can be repeated until clarity is reached. No standard methodology has been established for evaluating IQA systems; the existing evaluation methods are adapted from those used for QA and dialogue systems. Evaluating an IQA system requires qualitative evaluation in addition to quantitative evaluation: users must take part in the evaluation process to determine how successful the interaction between the system and the user is. Evaluation thus plays an important role in IQA systems, yet there is essentially no specific, general methodology for it. The main difficulty in designing an evaluation method for IQA systems is that the interactive part can rarely be predicted, which is why humans need to be involved in the evaluation process.
In this paper, an appropriate model is presented by introducing a set of built-in features for evaluating IQA systems. To conduct the evaluation, four IQA systems were considered, based on the conversations exchanged between users and the systems, and 540 samples were used to create training and test sets. After preprocessing, the statistical characteristics of each conversation were extracted and assembled into a feature matrix. Linear and nonlinear regression models were then fitted to predict the human judgments. The nonlinear power regression performed best, with a root mean square error (RMSE) of 0.13.
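The pipeline described above (extract per-conversation features, fit a regression, score it with RMSE) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the single feature, the synthetic "human" scores, and the power-law form y = a·x^b (fitted by ordinary least squares in log-log space) are all assumptions made for the example.

```python
import numpy as np

# Hypothetical data: one conversation-level feature (e.g. mean turns per
# dialogue) and a synthetic human evaluation score per sample. These values
# are illustrative only; the paper's feature matrix and judgments differ.
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, size=60)            # feature values
y = 0.8 * x ** 0.5 + rng.normal(0, 0.05, 60)   # noisy "human" scores

# Power regression y = a * x^b, linearized as:
#   log y = log a + b * log x
# and fitted with degree-1 least squares.
b, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)

# Evaluate the fit with root mean square error, as in the paper.
pred = a * x ** b
rmse = np.sqrt(np.mean((y - pred) ** 2))
print(f"a={a:.3f}, b={b:.3f}, RMSE={rmse:.3f}")
```

In practice one would fit on the training portion of the 540 samples and report RMSE on the held-out test set; fitting in log space is only valid when all targets are positive, otherwise a direct nonlinear least-squares fit is needed.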
Full-Text [PDF 3283 kb]
Type of Study: Research | Subject: Paper
Received: 2017/11/24 | Accepted: 2019/01/09 | Published: 2020/01/07 | ePublished: 2020/01/07


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

© 2015 All Rights Reserved | Signal and Data Processing