

When working with AI, I adhere to the "Ethics Guidelines for Trustworthy AI" developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG) appointed by the European Commission. These guidelines aim to ensure that AI systems are developed and deployed in a manner that is ethical, trustworthy, and aligned with fundamental rights. Here's a summary of the key points from the guidelines:
1. Definition of Trustworthy AI
Trustworthy AI is defined by three components:
Lawful: Complying with all applicable laws and regulations.
Ethical: Adhering to ethical principles and values.
Robust: Technically robust and reliable, ensuring that the AI system behaves as expected, even under adverse conditions.
2. Four Ethical Principles
The guidelines identify four ethical principles that should guide the development and deployment of AI:
Respect for Human Autonomy: AI systems should support human decision-making, promote individual agency, and not undermine human autonomy.
Prevention of Harm: AI systems should not harm individuals, society, or the environment. This includes protecting privacy and security and avoiding biases that could lead to unfair treatment.
Fairness: AI systems should be fair, ensuring equal treatment, non-discrimination, and the ability for individuals to contest decisions made by AI.
Explicability: AI systems should be transparent and explainable, enabling understanding of how decisions are made.
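Explicability is the most directly operationalizable of the four principles. As a purely illustrative sketch (the guidelines do not prescribe any particular technique), the Python snippet below shows one simple form of explanation: a transparent linear scoring model whose per-feature contributions can be reported to the person affected. All feature names, weights, and inputs are hypothetical.

```python
# Minimal sketch of explicability for a transparent (linear) model:
# each feature's contribution to a score can be reported directly.
# Feature names, weights, and inputs are all hypothetical.

WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
BIAS = 0.1

def score_with_explanation(applicant: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the model score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    return BIAS + sum(contributions.values()), contributions

if __name__ == "__main__":
    score, explanation = score_with_explanation(
        {"income": 1.2, "debt": 0.5, "years_employed": 3.0}
    )
    print(f"Score: {score:.2f}")
    # Report contributions largest-first, so the dominant factors come first.
    for feature, contribution in sorted(explanation.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {contribution:+.2f}")
```

For opaque models, the same principle would call for post-hoc explanation methods instead; the point here is only that a decision must be accompanied by an understandable account of how it was reached.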
3. Seven Key Requirements for Trustworthy AI
The guidelines outline seven key requirements that AI systems should meet to be considered trustworthy:
Human Agency and Oversight: AI systems should empower individuals, allow for human oversight, and respect human autonomy.
Technical Robustness and Safety: AI systems should be reliable, secure, and resilient to attacks, ensuring they function as intended.
Privacy and Data Governance: AI systems should ensure privacy protection, data governance, and respect for data rights.
Transparency: AI systems should be transparent, with clear communication about how they work, the data they use, and the decisions they make.
Diversity, Non-discrimination, and Fairness: AI systems should avoid bias, ensure fairness, and be inclusive, considering the needs of diverse user groups (a minimal example of such a check is sketched after this list).
Societal and Environmental Well-being: AI systems should benefit society, contribute to environmental sustainability, and avoid harmful societal impacts.
Accountability: Mechanisms should be in place to ensure responsibility and accountability for AI systems and their outcomes.
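As one concrete illustration of how the fairness requirement might be checked in practice, the sketch below computes a demographic parity difference between two groups of model decisions. The data, the group labels, and the 0.10 tolerance are all hypothetical; the guidelines themselves do not mandate any particular fairness metric.

```python
# Minimal sketch: checking one fairness signal (demographic parity)
# on a set of model decisions. All data and the 0.10 threshold are
# hypothetical; the guidelines do not mandate a specific metric.

def positive_rate(decisions: list[int]) -> float:
    """Fraction of decisions that were positive (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in positive-decision rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

if __name__ == "__main__":
    # Hypothetical model decisions (1 = approved, 0 = denied) per group.
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]

    gap = demographic_parity_difference(group_a, group_b)
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.10:  # illustrative tolerance, not a normative threshold
        print("Gap exceeds tolerance; review the system for bias.")
```

A real audit would look at several metrics and at the data pipeline itself, but even a simple check like this makes the requirement testable rather than aspirational.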
4. Assessment List for Trustworthy AI
The guidelines include an Assessment List that organizations can use to evaluate whether their AI systems meet the requirements for trustworthiness. This list helps ensure that AI systems are developed and deployed responsibly.
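A hypothetical sketch of how a team might encode a small excerpt of such an assessment internally is given below; the question texts paraphrase the spirit of the list and are not the official Assessment List (ALTAI) wording.

```python
# Hypothetical sketch: an internal checklist keyed to the seven key
# requirements, with a simple coverage summary. Question texts
# paraphrase the spirit of the assessment, not its official wording.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    requirement: str   # one of the seven key requirements
    question: str      # what the team must be able to answer
    satisfied: bool    # outcome of the team's self-assessment

CHECKLIST = [
    ChecklistItem("Human Agency and Oversight",
                  "Can a human intervene or halt the system?", True),
    ChecklistItem("Technical Robustness and Safety",
                  "Has the system been tested under adverse inputs?", True),
    ChecklistItem("Privacy and Data Governance",
                  "Is personal data minimized and access-controlled?", False),
    ChecklistItem("Transparency",
                  "Are decisions traceable and explainable to users?", True),
]

def coverage(items: list[ChecklistItem]) -> float:
    """Fraction of checklist items the self-assessment satisfied."""
    return sum(item.satisfied for item in items) / len(items)

if __name__ == "__main__":
    for item in CHECKLIST:
        status = "OK " if item.satisfied else "GAP"
        print(f"[{status}] {item.requirement}: {item.question}")
    print(f"Coverage: {coverage(CHECKLIST):.0%}")
```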
5. Stakeholder Involvement and Continuous Improvement
The guidelines emphasize the importance of involving stakeholders throughout the AI lifecycle, including developers, users, and those affected by AI systems. Continuous monitoring and improvement of AI systems are also stressed to ensure they remain trustworthy over time.
6. Implementation and Operationalization
The guidelines provide recommendations for implementing and operationalizing trustworthy AI, including the need for education and awareness, the development of standards, and the promotion of research and innovation in ethical AI.
7. Global and European Perspectives
While the guidelines are designed within the European context, they are intended to be globally relevant and contribute to the development of a shared understanding of trustworthy AI worldwide.
Reference:
High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419

