Description
As intelligent autonomous agents and multiagent system applications become more pervasive, it becomes increasingly important to understand the risks associated with using these systems. Incorrect or inappropriate agent behavior can have harmful effects, including financial cost, loss of data, and injury to humans or systems. For example, NASA has proposed missions where multiagent systems, working in space or on other planets, will need to do their own reasoning about safety issues that concern not only themselves but also their mission. Likewise, industry is interested in agent systems that can search for new supply opportunities and engage in (semi-)automated negotiations over new supply contracts. These systems should be able to negotiate such arrangements securely and decide which credentials can be requested and which may be disclosed. Such systems may encounter environments that are only partially understood, where they must learn for themselves which aspects of their environment are safe and which are dangerous. Security and safety are thus two central issues when developing and deploying such systems. We refer to a multiagent system's security as its ability to deal with threats that are intentionally caused by other intelligent agents and/or systems, and to its safety as its ability to deal with any other threats to its goals.