Unpacking the Ethical Value Alignment in Big Models
Abstract
We explore the emerging challenges presented by artificial intelligence (AI) development in the era of big models, with a focus on large language models (LLMs) and ethical value alignment. Big models have greatly advanced AI's ability to understand, generate, and manipulate information and content, enabling numerous applications. However, as these models become increasingly integrated into everyday life, their embedded ethical values and potential biases pose unforeseen risks to society. We provide an overview of the risks and challenges associated with big models, survey existing AI ethics guidelines, and examine the ethical implications arising from the limitations of these models. Taking a normative-ethics perspective, we propose a reassessment of recent normative guidelines and highlight the importance of collaborative efforts within academia to establish a unified and universal AI ethics framework. Furthermore, we investigate the ethical inclinations of current mainstream LLMs using Moral Foundations Theory, analyze existing big-model alignment algorithms, and outline the unique challenges of aligning moral values within them. To address these challenges, we introduce a novel conceptual paradigm for ethically aligning the values of big models and discuss promising research directions for alignment criteria, evaluation, and methods, representing an initial step toward the interdisciplinary construction of morally aligned general artificial intelligence.