Moral Machine and the Moral Prospect of Value Alignment
YUE Jin1, TIAN Hai-ping2
1. Department of Philosophy and Science, Southeast University, Nanjing, Jiangsu, 211189; 2. Value and Culture Research Center, Beijing Normal University, Beijing, 100875
Abstract: The "trolley problem" confronting AI agents highlights the urgency of the value alignment problem in AI development. At its core, value alignment seeks to close the value gap faced by "artificial moral agents" (AMAs). Can "moral machines" emerge in a world shared with humans? Beyond their attribute of machine normalism, do they also conceal a certain attribute of spiritual humanism? As new ideas, schemes, and institutions of "value alignment" in artificial intelligence continue to reinforce the logic of "machine normalism", they also raise deep concerns of "spiritual humanism". By implanting ethical systems into artificial intelligence or through machine learning, a "moral machine" may acquire ethical capabilities similar to those of human beings and perform ethical behaviors comparable to theirs. If such exploration remains at the level of functional moral construction, it amounts only to a form of moral materialization and cannot be called a moral machine in the strict sense. If, however, it breaks through the boundary of human ethics and keeps pursuing instrumental exploitation in this dimension, it is itself an "immoral" conception, largely an ideological exploration of an "impossible possibility", and therefore a false proposition in reality. Nevertheless, the concept of the "moral machine" offers a valuable opportunity to reflect on human characteristics, especially human moral characteristics and moral prospects.
YUE Jin, TIAN Hai-ping. Moral Machine and the Moral Prospect of Value Alignment. Journal of Shenzhen University (Humanities & Social Sciences), 2024, 41(4): 125-133.