Aspect-Based Sentiment Analysis (ABSA) is a fine-grained subfield of sentiment analysis that detects sentiment toward specific aspects of entities, such as product features or service attributes, rather than assigning a single overall polarity. This granular understanding is essential in domains such as customer feedback evaluation, social media opinion mining, and intelligent recommendation systems. However, capturing the syntactic and semantic dependencies required for accurate ABSA remains a challenge for conventional models. In this study, we propose an ensemble-based approach built on Graph Convolutional Networks (GCNs), which are particularly effective at learning structural relationships from sentence-level dependency trees. Our method integrates four GCN-based models: DualGCN, RDGCN, SSEGCN, and R-GAT. Each offers distinct strengths, ranging from dual-graph encoding and reinforcement-driven attention mechanisms to syntax-aware semantic enhancements. The models are trained individually and then aggregated through a majority voting mechanism to create a robust ensemble for aspect-level sentiment prediction.

The models were evaluated on benchmark datasets including SemEval-2014 (Rest14 and Laptops subsets) and Twitter, covering both formal and informal text. Extensive preprocessing was conducted to standardize input formats and ensure a fair comparison across models, and training was performed with both GloVe and BERT embeddings, allowing the ensemble to draw on a diverse range of semantic features. The proposed majority voting strategy aggregates the predictions of the individual models and assigns the final sentiment class based on the most frequent output; in case of a tie, the prediction of the model with the highest validation accuracy takes precedence (a minimal sketch of this scheme is given below). This strategy combines the complementary capabilities of the GCN variants, leading to improved performance and stability across diverse datasets.

Experimental results show that the proposed ensemble significantly outperforms both baseline models and recent state-of-the-art methods. On the Rest14 dataset, the ensemble achieved an accuracy of 88.47%, improving upon the best recent model (SAGCN + BERT) by 1.34%. On the Laptops dataset, it attained 85.44%, exceeding SAGCN's 85.12% by 0.32%. On the Twitter dataset, it reached 82.12%, surpassing SAGCN's 81.45% by 0.67%. Compared with the individual baseline models, the proposed method improved accuracy and F1-score by 2.15% and 2.8% on Rest14, 9.2% and 11.74% on Laptops, and 7.8% and 8.7% on Twitter, respectively. These improvements highlight the robustness of the ensemble in handling varying linguistic structures and domains.

We also explored alternative ensemble strategies, including weighted voting, neural fusion, and combined embedding approaches, but none matched the majority voting strategy in consistency or accuracy, underscoring the effectiveness and simplicity of the proposed method. In conclusion, this research introduces a novel and practical ensemble technique for ABSA that couples multiple GCN models with a majority voting strategy. The method achieves state-of-the-art accuracy across multiple benchmarks and demonstrates strong generalization, making it a valuable contribution to aspect-level sentiment analysis.
Future work may extend this approach to multilingual and domain-specific contexts or integrate large pretrained language models such as RoBERTa or GPT to further enhance contextual understanding.
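As a rough illustration of the majority-voting scheme described above, the following Python sketch shows how per-aspect predictions from the four GCN models could be aggregated, with ties resolved in favour of the model with the highest validation accuracy. The function name, structure, and accuracy values are hypothetical placeholders, not the authors' released code or reported numbers.

```python
from collections import Counter

# Hypothetical validation accuracies for the four GCN-based models
# (illustrative values only, not the figures reported in the paper).
VAL_ACCURACY = {"DualGCN": 0.84, "RDGCN": 0.85, "SSEGCN": 0.83, "R-GAT": 0.84}


def majority_vote(predictions: dict[str, int], val_accuracy: dict[str, float]) -> int:
    """Return the sentiment class chosen by most models for one aspect.

    predictions: mapping model name -> predicted class (e.g. 0 = negative,
    1 = neutral, 2 = positive). Ties are broken by the prediction of the
    model with the highest validation accuracy.
    """
    counts = Counter(predictions.values())
    top_count = max(counts.values())
    tied_classes = [label for label, c in counts.items() if c == top_count]
    if len(tied_classes) == 1:
        return tied_classes[0]
    # Tie: defer to the most accurate model among those voting for a tied class.
    best_model = max(
        (m for m in predictions if predictions[m] in tied_classes),
        key=lambda m: val_accuracy[m],
    )
    return predictions[best_model]


# Example: two models predict positive (2) and two predict neutral (1);
# the tie is resolved by RDGCN, the most accurate model in this example.
print(majority_vote({"DualGCN": 2, "RDGCN": 1, "SSEGCN": 2, "R-GAT": 1}, VAL_ACCURACY))  # -> 1
```

Restricting the tie-break to models that voted for one of the tied classes is an assumption made in this sketch; the abstract states only that the model with the highest validation accuracy takes precedence.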
Rights and permissions

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.