WP29 Publishes New Guidelines on Profiling and Automated Decision-Making

With recent technological developments, especially AI and machine learning, the use of profiling and automated decision-making has been increasing across all sectors of society (e.g. finance, healthcare, insurance, and marketing).

The GDPR expressly addresses these processing activities, but there has been considerable uncertainty and many questions about how the GDPR provisions will apply in practice. This week, the WP29 released its proposed guidelines on profiling and automated decision-making, which attempt to answer some of these questions.

In these guidelines, the WP29 analyses separately the two legal frameworks set out in the GDPR for these processing activities: (i) Article 22 GDPR, which specifically deals with solely automated decision-making, including profiling, which produces legal or similarly significant effects on individuals, and (ii) the general legal framework, which is applicable to general profiling (including when it is used for non-solely automated decision-making).

In the first part of the guidelines, the WP29 makes interesting clarifications about the applicability of Article 22 GDPR.

  • It expressly states that Article 22 prohibits solely automated decision-making, including profiling, that produces legal or similarly significant effects, unless one of the three listed exceptions applies (contractual necessity, authorisation by law, or explicit consent). By doing so, the WP29 closes a longstanding debate on the scope of the provision: legitimate interest is not a valid legal basis for this type of processing.
  • To be considered not solely automated, the decision-making process needs meaningful oversight by someone who has the authority and competence to change the decision – human involvement cannot be fabricated.
  • A “legal effect” refers to the impact on the individual’s statutory as well as contractual rights. This includes, for example, refused entry at the border, denial of social benefit, automatic disconnection from phone service, as well as increased surveillance by the authorities.
  • Most typical cases of targeted advertising will not be considered as having “similarly significant effects,” although they may under certain circumstances (e.g., targeted advertising directed at a minority group or vulnerable persons).
  • When solely automated decision-making is permitted, controllers need to implement appropriate safeguards to protect the rights and freedoms of individuals. The WP29 provides good practice recommendations on what these safeguards should consist of, including, for example, quality checks to prevent errors and unfair or discriminatory results, algorithm auditing and testing, data minimisation measures, anonymisation and pseudonymisation, and the right for individuals to obtain human intervention and to contest the decision.
  • When solely automated decision-making is used, controllers need to provide individuals with “meaningful information about the logic involved.” In most cases, this will require controllers to give information about the data used in the automated decision-making, the source of the data, how profiles are built, why they are relevant to the particular decision-making, and how they are used for the decision. The WP29 insists that this information (i.e. the rationale) will often be more relevant for individuals than a complex explanation of how algorithms or machine learning technologies work (although this should also be provided if necessary).

The second part of the guidelines analyses how the general principles of the GDPR, such as transparency, fairness, data minimisation, accuracy, purpose compatibility, and storage limitation, apply to general profiling activities. It also describes how data subjects can exercise their rights under the GDPR in this particular context and specifies that these rights will be actionable against both the controller that creates the profiles and the controller that uses them for automated decision-making (where these are different entities).

Annex I of the proposed guidelines provides further good practice recommendations on how to meet the GDPR requirements in the context of profiling.

The proposed guidelines are open for consultation until 28 November 2017.

Later this week, OneTrust will publish a white paper further detailing the content of these proposed guidelines and what they mean, in practice, for organisations.

How OneTrust Helps

OneTrust provides a tool to facilitate the process of creating, distributing and analysing Data Protection Impact Assessments (DPIAs) that review high-risk activities, such as automated decision-making. Our DPIAs are designed to increase organisation-wide adoption through role-based templates and self-service tools that are integrated into project lifecycles. All privacy projects across the organisation are consolidated into a central dashboard for a complete record of data protection activities.