Learning the Value Systems of Agents with Preference-based and Inverse Reinforcement Learning
AI Summary
This paper proposes a novel method for automatically learning agents' value systems from observations and demonstrations, targeting multi-agent negotiation scenarios.
Main Contributions
- Proposes a formal model of the value system learning problem
- Instantiates value system learning in sequential decision-making domains based on multi-objective MDPs
- Designs tailored preference-based and inverse reinforcement learning algorithms to infer value grounding functions and value systems
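To make the formal model concrete, here is a minimal sketch (not the paper's actual formalization) of how the two learned objects can be represented: a value grounding function per moral value scoring state-action pairs, and a value system as a weight vector that scalarizes the per-value rewards of a multi-objective MDP. The class and value names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

State = str
Action = str

@dataclass
class ValueSystemMDP:
    # One grounding function per moral value: r_v(s, a) scores how well
    # taking action a in state s aligns with value v. (Illustrative only.)
    groundings: Sequence[Callable[[State, Action], float]]
    # The value system: relative weights of the values, assumed to sum to 1.
    weights: Sequence[float]

    def scalarized_reward(self, s: State, a: Action) -> float:
        # Linear scalarization of the multi-objective reward vector.
        return sum(w * g(s, a) for w, g in zip(self.weights, self.groundings))

# Toy example with two hypothetical values, "fairness" and "efficiency".
mdp = ValueSystemMDP(
    groundings=[
        lambda s, a: 1.0 if a == "share" else 0.0,  # fairness grounding
        lambda s, a: 1.0 if a == "keep" else 0.2,   # efficiency grounding
    ],
    weights=[0.6, 0.4],
)
```

Under this representation, learning a value grounding amounts to recovering the `groundings` entries, while learning a value system amounts to recovering `weights`.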
Methodology
Value systems are modeled with multi-objective Markov decision processes (MDPs); value grounding functions and value systems are then inferred with preference-based and inverse reinforcement learning algorithms.
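As a rough illustration of the preference-based side of this pipeline (a sketch under assumptions, not the paper's algorithm), the weights of a value system can be fit from pairwise trajectory preferences with a Bradley-Terry model: each trajectory is summarized by its vector of per-value returns, and the probability that one trajectory is preferred over another is a logistic function of the weighted return difference. All data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

true_w = np.array([0.7, 0.3])  # hidden value system generating preferences

# Per-value returns of 200 synthetic trajectories (2 moral values).
returns = rng.normal(size=(200, 2))

# Noise-free pairwise preferences: i preferred over j iff its
# scalarized return under true_w is higher.
pairs = [(i, j) for i, j in rng.integers(0, 200, size=(500, 2)) if i != j]
prefs = [(i, j) if returns[i] @ true_w > returns[j] @ true_w else (j, i)
         for i, j in pairs]

# Gradient ascent on the Bradley-Terry log-likelihood:
#   P(i preferred over j) = sigmoid(w @ (R_i - R_j)).
w = np.zeros(2)
lr = 0.5
for _ in range(200):
    grad = np.zeros(2)
    for i, j in prefs:
        d = returns[i] - returns[j]
        grad += d * (1.0 - 1.0 / (1.0 + np.exp(-w @ d)))  # d * (1 - P(i>j))
    w += lr * grad / len(prefs)

# Preferences identify w only up to positive scale, so normalize
# to the probability simplex to read it as a value system.
w_hat = np.abs(w) / np.abs(w).sum()
```

The learned `w_hat` should rank the two values in the same order as `true_w`; the inverse-reinforcement-learning component of the paper additionally recovers the grounding functions themselves, which this sketch takes as given.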
Original Abstract
Agreement Technologies refer to open computer systems in which autonomous software agents interact with one another, typically on behalf of humans, in order to come to mutually acceptable agreements. With the advance of AI systems in recent years, it has become apparent that such agreements, in order to be acceptable to the involved parties, must remain aligned with ethical principles and moral values. However, this is notoriously difficult to ensure, especially as different human users (and their software agents) may hold different value systems, i.e. they may differently weigh the importance of individual moral values. Furthermore, it is often hard to specify the precise meaning of a value in a particular context in a computational manner. Methods to estimate value systems based on human-engineered specifications, e.g. based on value surveys, are limited in scale due to the need for intense human moderation. In this article, we propose a novel method to automatically learn value systems from observations and human demonstrations. In particular, we propose a formal model of the value system learning problem, its instantiation to sequential decision-making domains based on multi-objective Markov decision processes, as well as tailored preference-based and inverse reinforcement learning algorithms to infer value grounding functions and value systems. The approach is illustrated and evaluated by two simulated use cases.