The EU Artificial Intelligence Act (EU AIA) classifies AI systems into four levels of risk based on their potential harm to society, and each level is important to address.
Principle #1: Human agency and oversight. The EU AIA team said, “AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights.” This @DrSpotfire dashboard shows how.
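One way teams put this principle into practice is a human-in-the-loop gate that keeps people in control of consequential decisions. The sketch below is a minimal illustration in Python; the confidence threshold, the review queue, and the loan-case names are assumptions made for the example, not anything prescribed by the EU AIA.

```python
# Illustrative sketch: route low-confidence model decisions to a human reviewer.
# The 0.85 threshold and the queue are assumptions for the example, not EU AIA requirements.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReviewQueue:
    """Holds cases that a human must confirm before any action is taken."""
    pending: List[dict] = field(default_factory=list)

    def add(self, case: dict) -> None:
        self.pending.append(case)


def decide(case_id: str, score: float, queue: ReviewQueue, threshold: float = 0.85) -> str:
    """Auto-approve only confident predictions; everything else waits for a person."""
    if score >= threshold:
        return "auto-approved"
    queue.add({"case_id": case_id, "score": score})
    return "sent to human review"


queue = ReviewQueue()
print(decide("loan-1042", 0.91, queue))  # auto-approved
print(decide("loan-1043", 0.62, queue))  # sent to human review
```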
Principle #2: Technical robustness and safety. The EU AIA team explains, “AI systems need to be resilient and secure. They need to be safe, have a fallback plan when something goes wrong, and be reliable and reproducible.”
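The “fallback plan” part of this principle can be as simple as a second, conservative code path that takes over when the model cannot answer. Here is a minimal sketch under that assumption; the scoring service and the rule used as a fallback are invented for the example.

```python
# Illustrative sketch: a fallback path so the service degrades safely
# when the primary model is unavailable. Both scoring functions are stand-ins.
import logging

logging.basicConfig(level=logging.WARNING)


def primary_model_score(features: dict) -> float:
    """Stand-in for a call to the production model (may raise at runtime)."""
    raise ConnectionError("model endpoint unreachable")


def conservative_rule(features: dict) -> float:
    """Simple, auditable rule used only when the model cannot answer."""
    return 0.0 if features.get("prior_defaults", 0) > 0 else 0.5


def score_with_fallback(features: dict) -> float:
    try:
        return primary_model_score(features)
    except Exception as exc:
        logging.warning("Falling back to rule-based score: %s", exc)
        return conservative_rule(features)


print(score_with_fallback({"prior_defaults": 0}))  # 0.5, via the fallback rule
```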
Principle #3: Privacy and data governance. The EU AI group warns, “AI systems must also ensure adequate data governance mechanisms, taking into account the quality and integrity of the data and ensuring legitimized access to data.”
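In practice, data governance starts with checks on quality and integrity before records ever reach a model. The following is an illustrative sketch only; the required fields and value ranges are assumptions, not part of the EU guidance.

```python
# Illustrative sketch: a minimal data-quality gate run before records enter training.
# The schema and limits are assumptions for the example.
from typing import List, Tuple

REQUIRED_FIELDS = {"customer_id", "age", "income"}


def validate_record(record: dict) -> Tuple[bool, List[str]]:
    """Return (is_valid, problems) so failures can be logged and reviewed."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "age" in record and not 0 <= record["age"] <= 120:
        problems.append(f"age out of range: {record['age']}")
    if "income" in record and record["income"] < 0:
        problems.append(f"negative income: {record['income']}")
    return (not problems, problems)


ok, issues = validate_record({"customer_id": "c-77", "age": 230})
print(ok, issues)  # False, with two problems reported
```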
Principle #4: Transparency. The EU AI team advises that data, systems, and AI business models be transparent and that traceability mechanisms help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned.
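A basic traceability mechanism is an append-only decision record that ties every prediction to its inputs and model version. The sketch below shows one possible shape for such a record; the field names, file format, and model version string are assumptions for illustration.

```python
# Illustrative sketch: an append-only decision record so each prediction can be traced
# back to its inputs and model version. Field names are assumptions for the example.
import hashlib
import json
from datetime import datetime, timezone


def record_decision(model_version: str, features: dict, prediction: float,
                    path: str = "decisions.log") -> str:
    """Append one traceability record and return its content hash."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    line = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256(line.encode()).hexdigest()
    with open(path, "a", encoding="utf-8") as log:
        log.write(f"{line}\t{digest}\n")
    return digest


print(record_decision("credit-model-1.4.2", {"age": 41, "income": 52000}, 0.73))
```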
Principle #5: Diversity, non-discrimination, and fairness. Unfair bias must be avoided because it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination.
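Teams often make “unfair bias” measurable by comparing outcomes across groups. The sketch below computes selection rates per group and their gap (a demographic-parity style check); the data and the 0.1 review threshold are invented for the example and are not a legal standard.

```python
# Illustrative sketch: compare selection rates across groups and flag a large gap.
# The 0.1 threshold is an assumption for the example, not a legal standard.
from collections import defaultdict
from typing import Iterable, Tuple


def selection_rates(outcomes: Iterable[Tuple[str, int]]) -> dict:
    """outcomes is (group, decision) pairs, where decision is 1 if approved."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        approved[group] += decision
    return {g: approved[g] / totals[g] for g in totals}


def parity_gap(rates: dict) -> float:
    """Difference between the highest and lowest group selection rates."""
    return max(rates.values()) - min(rates.values())


rates = selection_rates([("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)])
print(rates)                    # group A approved at twice the rate of group B
print(parity_gap(rates) > 0.1)  # True: the gap is large enough to warrant review
```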
Principle #6: Societal and environmental well-being. AI systems should benefit all humans, including future generations. Firms using AI should consider the environment and other living beings, as well as their broader social and societal impact.
Principle #7: Accountability. The subject of regulation, oversight, and accountability for AI ethics is massive. Modern data fabric and Model Operationalization (ModelOps) tools provide the technological foundation for a new AI culture.
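On the tooling side, accountability usually begins with a registry that ties every deployed model version to a named owner and an auditable fingerprint. The sketch below shows one minimal shape such a record might take; the field names are assumptions and do not reflect any specific ModelOps product's API.

```python
# Illustrative sketch: a minimal model-registry entry so every deployed model has a
# named owner and an auditable fingerprint. Field names are assumptions, not a real
# ModelOps product API.
import hashlib
import json
from datetime import datetime, timezone


def register_model(name: str, version: str, owner: str, artifact: bytes) -> dict:
    """Return a registry record that ties a model version to an accountable owner."""
    return {
        "name": name,
        "version": version,
        "owner": owner,
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }


record = register_model("churn-model", "2.0.1", "ml-governance-team", b"<serialized model bytes>")
print(json.dumps(record, indent=2))
```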