Call for Papers for Springer's Topical Collection - Deadline: 15 December 2022
Published: 1 October 2022
In recent years, the European Union has committed itself to responsible and sustainable Artificial Intelligence research, development and innovation. In 2019, the High-Level Expert Group on AI (AI HLEG) delivered the Ethics Guidelines for Trustworthy AI, and in 2021 the Commission put forward a proposal for a regulatory framework addressing different AI risk levels, known as the AI Act. Beyond rules and principles, building a Trustworthy AI culture poses several challenges to the whole AI ecosystem, such as:
1) how to create meaningful and constructive debates involving experts with multidisciplinary backgrounds, but also citizens and people who might be directly or indirectly affected by AI systems;
2) what cultural equipment is needed to help future AI experts cope with the complexity of the societal and ethical changes generated by AI and data-intensive applications;
3) how to translate these cultural resources into working experience with a view to creating a mutual and beneficial interaction between the theory and the practice of Trustworthy AI.
This topical collection aims to explore how we can get closer to a Trustworthy AI culture by sharing investigations and good practices along the trajectories suggested by the AI HLEG guidelines: public debate, education and practical learning. This topical collection calls for research papers, project reports, or position papers addressing, but not limited to, the following topics:
- Experiences of multidisciplinary perspectives and methodologies that contribute to building a Trustworthy AI culture;
- Critical and constructive analysis of ideas and strategies aimed at building an ecosystem of trust;
- Contributions to the identification of disciplinary gaps (conceptual, language, skills and social diversity) and how to address them;
- Analysis of methodologies or approaches that can help AI experts address tensions and trade-offs among ethical principles in play;
- Approaches to the definition of educational strategies, content and skills to be included in courses dealing with Trustworthy AI;
- Approaches that can contribute to a better integration of the humanities into AI research and development;
- Methods to apply Trustworthy AI concepts and requirements into practice and processes to validate and verify them;
- Proposals for participatory methods that involve all stakeholders of the AI system life-cycle, including developers, researchers, policy-makers, governments, the private and public sectors, and society at large.