- Since the approval of the European Union General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the GDPR once it comes into force in 2018.
- However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR mandates only that data subjects receive meaningful, but properly limited, information (Articles 13–15) about the logic involved in, and the significance and envisaged consequences of, automated decision-making systems; we term this a ‘right to be informed’.
- The ambiguity and limited scope of the ‘right not to be subject to automated decision-making’ contained in Article 22 (from which the alleged ‘right to explanation’ stems) raise questions over the protection actually afforded to data subjects.
- These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless.
- We propose a number of legislative steps that, if implemented, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.
By Professor Sandra Wachter – Turing Fellow, Professor Luciano Floridi – Turing Fellow, and Dr Brent Mittelstadt – Turing Fellow