EU publishes hotly-anticipated AI Ethics Guidelines

Background to and purpose of the Guidelines

On 8 April 2019, the EU's High-Level Expert Group on Artificial Intelligence (AI HLEG) published its ethical guidelines for “Trustworthy AI” (the Guidelines).  Following a public consultation, the hotly-anticipated Guidelines set out the EU’s framework to promote the development and deployment of AI while minimising the risks associated with this emerging area of technology, including those we have written about elsewhere on The Cookie Jar and in our whitepaper on public perception of AI.

The AI HLEG was set up by the European Commission and consists of 52 representatives drawn from academia, industry and society.  The purpose of the Guidelines is to assist organisations developing and deploying AI in ensuring that their systems are ethical, robust and trustworthy.  The European Commission has announced that businesses working with AI systems can sign up to a pilot phase, taking place over this summer, to test the Guidelines.

Summary of the Guidelines

The AI HLEG has based its Guidelines around three key characteristics that it believes any “trustworthy” AI system should meet, namely that it should always be:
  1. Lawful - respecting all applicable laws and regulations;
  2. Ethical - respecting ethical principles and values; and
  3. Robust - both from a technical perspective and taking into account its social environment.

To promote these characteristics, the Guidelines set out seven key requirements that AI systems should meet in order to be considered trustworthy.  Underlying these seven requirements is an “AI trustworthiness assessment list”, intended to be a practical set of tools and criteria that organisations can use to help verify whether their development or use of AI meets each of the key requirements.  The seven requirements are as follows:
  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches;
  • Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible;
  • Privacy and data governance: in addition to ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be put in place, taking into account the quality and integrity of the data, and ensuring legitimised access to data;
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations;
  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups, to the exacerbation of prejudice and discrimination;
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. They must therefore be sustainable and environmentally friendly. Moreover, they should take into account the wider environment, including other living beings, and their social and societal impact should be carefully considered; and
  • Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

Key implications for organisations working with and using AI systems

The Guidelines represent a welcome contribution from the EU to help place Europe as a leader in producing and promoting legal, social and ethical standards for the use of AI.  Although they are not legally binding, the Guidelines clearly demonstrate the EU’s approach to regulating this emerging space and will likely form the basis of any future AI legislation, as well as informing the specific application of existing laws to AI, such as data protection/GDPR, consumer protection, and liability and safety rules.  This is in line with the EU’s stated aim to ensure global competitiveness in the “AI race” by promoting safe and secure AI that maximises opportunity while mitigating risk.

Of particular interest to organisations developing or using AI (or planning to do so) should be the “AI trustworthiness assessment list”, which can be used as a practical checklist in AI risk assessments.  This type of operational guidance may be helpful to both developers and users of AI in assessing whether their own risk management and governance processes are in line with the EU’s thinking in this area.  Such organisations might consider signing up to pilot the assessment list on their AI developments and providing feedback on its applicability and appropriateness in an open consultation this summer, which will inform the revised assessment list that the AI HLEG intends to publish next year.

However, given how context-specific many AI systems and their use cases can be, a tick-box exercise against a general assessment list is unlikely to be sufficient to comply with laws and standards going forward.  In this vein, the AI HLEG is planning to publish further reports on sector-specific approaches to its Guidelines.  In the meantime, organisations should continue to seek to comply with existing laws and regulations, as well as the best-practice risk and governance processes that are fast-developing in the AI sphere, including those we have written about.

We are pleased to see the EU develop practical guidance that is much-needed by organisations seeking to design and deploy AI systems in a safe, responsible and compliant way.  Bristows’ AI team will continue to monitor developments in AI laws and standards as they emerge while providing our clients with strategic advice on compliance and best-practice in AI development and use.