Trust management systems are used in interactive environments where an agent must decide whether to use a service. Because these systems are so widespread, malicious entities have strong incentives to influence them and divert their decisions. Although previous trust models have proposed approaches to mitigate malicious activity, many cannot cope with the problem efficiently; handling the variable behavior of agents, for example, is a common failure point. Moreover, no robust, flexible, and adaptive general approach has yet been presented, so the problem largely remains open.
This paper presents a novel approach to preventing malicious actions and identifying anomalies using an entropy-based trust management system. The system recognizes the intrinsic characteristics of actions and determines whether they are malicious. To achieve this, the information environment is divided into four main parts based on entropy changes. Trust calculation in this system relies on an entropy structure derived from information ethics theory. To strengthen the system's resistance and resilience against malicious behavior, it is important to understand the nature of the actions performed by the agents. To this end, we define patterns of entropy change for the four parts and use these patterns to identify and refine the nature of actions as good, bad, or insignificant. Simulation-based experimental results indicate that the proposed system performs well in accurately calculating trust and detecting malicious behavior. Specifically, it shows a 10 percent advantage over well-known trust systems in adapting swiftly to environmental changes and diverse agent behaviors. Moreover, the experiments display a notable trend in the distribution of good, bad, and insignificant actions: the number of good actions consistently increases while bad actions correspondingly decrease. Put simply, the method improves over time across repeated runs of the system. This improvement can be attributed to the agents' heightened honesty as they gain a better understanding of the nature of their actions. In addition, feedback on their behavior plays a pivotal role in reinforcing more accurate decision-making within the system.
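To make the classification idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: following the information-ethics framing, an action that reduces entropy in the information environment is treated as good, one that increases it as bad, and one with a negligible change as insignificant. The function names and the threshold `eps` are hypothetical choices for illustration only.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def classify_action(entropy_before, entropy_after, eps=0.05):
    """Classify an action by the entropy change it causes.

    Illustrative rule (assumed, not from the paper): an entropy
    decrease beyond eps is 'good', an increase beyond eps is 'bad',
    and anything in between is 'insignificant'.
    """
    delta = entropy_after - entropy_before
    if delta < -eps:
        return "good"
    if delta > eps:
        return "bad"
    return "insignificant"

# Example: an action that sharpens a uniform belief toward certainty
# lowers entropy and would be labeled 'good' under this rule.
before = shannon_entropy([0.5, 0.5])   # 1.0 bit
after = shannon_entropy([0.9, 0.1])    # ~0.47 bits
label = classify_action(before, after)  # -> "good"
```

A real trust system would aggregate many such labeled actions per agent into a trust score; this sketch only shows how an entropy delta could map to the three action classes described above.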
Type of Study: Research
Paper Received: 2023/04/01 | Accepted: 2023/07/18 | Published: 2024/04/25 | ePublished: 2024/04/25