Advances in AI, robotics and so-called ‘autonomous’ technologies¹ have raised a range of increasingly urgent and complex moral questions. Current efforts to answer the ethical, societal and legal challenges that these technologies pose, and to orient them towards the common good, amount to a patchwork of disparate initiatives. This underlines the need for a collective, wide-ranging and inclusive process of reflection and dialogue, one that focuses on the values around which we want to organise society and on the role that technologies should play in it.
This statement calls for the launch of a process that would pave the way towards a common, internationally recognised ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics and ‘autonomous’ systems. It also proposes a set of fundamental ethical principles, based on the values laid down in the EU Treaties and the EU Charter of Fundamental Rights, that can guide the development of such a framework.