The Recommendations on Data Protection in the Field of Artificial Intelligence (the "Recommendations") were published by the Turkish Personal Data Protection Authority (the "DPA")1 on its website on 15 September 2021.
The Recommendations address Developers, Manufacturers, Service Providers and Decision Makers within the framework of the Law on the Protection of Personal Data numbered 6698 and its secondary legislation (the "Law"). This is the first time the DPA has published a document on data protection in AI-based applications.
The Recommendations consist of three parts, namely: (i) general recommendations; (ii) recommendations for developers, manufacturers and service providers; and (iii) recommendations for decision makers.
The concept of "Artificial Intelligence" within the scope of the Recommendations
Under the Recommendations, the term Artificial Intelligence ("AI") is defined as the analysis of human-specific abilities and their transfer to machines. AI focuses on creating algorithms and computer software that can think, interpret and make decisions like humans.
The concepts of the "Developers", "Manufacturers", "Service Providers", and "Decision Makers" within the scope of the Recommendations
The Recommendations put forward definitions of the Developer, Manufacturer and Service Provider but do not define the Decision Maker. Considering the European Union documents on the issue, we believe that the Decision Maker corresponds to legislative organs and policy makers.
Developers are defined as real persons or legal entities who develop content or applications for AI systems, whereas Manufacturers are real persons or legal entities who produce products, such as the software and hardware systems, that constitute these systems.
Service Providers are defined under the Recommendations as real persons or legal entities who offer a product and/or service using AI-based systems, data collection systems, software or devices.
The General Recommendations section emphasizes the importance of protecting the fundamental rights and freedoms of real persons whose data are being processed (the "Data Subject") in the process of developing and applying AI.
In this context, the right to protection of human dignity should be respected, and the principles of "compliance with the law, fairness, proportionality, transparency, accuracy and currency of personal data, and specific and limited purpose of the use of personal data" should form the basis for AI developments relying on the processing of personal data and data collection.
Considering the individual and social effects of the data processing activities conducted by AI, the Data Subject should have control over such processing. The Recommendations include further guidelines on the issue for everyone working in the field.
Regarding AI developments relying on the processing of personal data:
- where the processing poses a high risk to data protection, privacy impact assessments should be conducted and legal compliance should be supervised accordingly;
- the compliance with the Law should be ensured;
- a data protection compliance program specific to each project should be established and implemented;
- if special categories of personal data2 are being processed, measures should be taken within this framework by complying with the special data protection rules; and
- the status of being a data controller and a data processor should be determined at the beginning of the project.
While developing and applying artificial intelligence technologies, if the same result can be achieved without processing personal data, the data should be anonymized3.
Recommendations for Developers, Manufacturers and Service Providers
Pursuant to the Recommendations, AI-oriented designs should adopt an approach that complies with national and international regulations and respects data privacy. In addition, Data Subjects' rights regarding their personal data arising from both national and international regulations should be preserved.
Together with these, the points below are stressed in the scope of the Recommendations:
- the risk of discrimination and other adverse impacts that may result from any data processing phases must be prevented;
- the data minimisation principle should be considered and the accuracy of the developed model should be monitored;
- the risks of causing individual and social negative effects of algorithms that deviate from the design purpose should be evaluated;
- opinions of impartial experts and organizations should be considered;
- individuals should have the right to object in relation to processing that affects their personal development;
- Data Subject rights arising from national and international legislation should be protected;
- risk assessment based on the active participation of individuals should be encouraged and ensured;
- products and services should be designed ensuring that the individuals are not subject to a decision significantly affecting them based solely on an automated processing, regardless of their own views;
- alternatives should be offered in production and it should be ensured that users can make choices involving less intrusion into their personal rights;
- algorithms should be designed to provide accountability mechanisms for each stakeholder in compliance with the data protection legislation;
- the Data Subject should have the right to cease the data processing;
- it should be possible to delete, destroy or anonymize personal data; and
- the Data Subject should be informed about the grounds, methods and results of personal data processing, and a data processing consent mechanism should be designed.
The Recommendations for Decision Makers
This section includes advice for Decision Makers working in the field of personal data protection.
Pursuant to the Recommendations, the Decision Makers should:
- observe the principle of accountability;
- adopt the risk assessment procedures;
- provide an application matrix based on sector/application/hardware and software;
- take measures such as establishing codes of conduct and certification mechanisms;
- determine the role of human intervention;
- preserve the right of individuals not to rely fully on the suggestions presented by AI-based applications;
- consult the supervisory authorities when there is a possibility that the AI-based applications may interfere with personal rights;
- ensure the cooperation between supervisory authorities and other bodies regarding data privacy, consumer protection, promotion of competition and anti-discrimination;
- support implementation research based on measuring the human rights, ethical, sociological and psychological effects of artificial intelligence applications;
- encourage individuals, groups and stakeholders to discuss the roles that artificial intelligence and big data systems have over the society;
- promote open software-based mechanisms to create a digital ecosystem that adheres to the aforementioned principles;
- invest in digital literacy and educational resources; and
- encourage education on data privacy for the AI application developers.
As noted, these are the DPA's first published recommendations regarding AI-based applications. Since the EU has been focusing heavily on AI-related work, we believe the Recommendations published by the DPA are a result of this influence. When considering the DPA's Recommendations, the principles of human agency and oversight; privacy and data governance; transparency; diversity, non-discrimination and fairness; and accountability should also be taken into account.
Although the recommendations presented are general and each is a separate matter of debate, this document signals that AI ethics is a rising topic about which we can expect to hear more.
2 According to the Article 6 of the Law, "personal data relating to the race, ethnic origin, political opinion, philosophical belief, religion, religious sect or other belief, appearance, membership to associations, foundations or trade-unions, data concerning health, sexual life, criminal convictions and security measures, and the biometric and genetic data" are deemed to be special categories of personal data.
3 According to the Article 3 of the Law, anonymization is defined as "rendering personal data impossible to link with an identified or identifiable natural person, even through matching them with other data".
Selin Kaledelen (Associate), Elif Engin (Legal Intern) and Deniz Alkan (Summer Intern) of GKC Partners authored this publication.
This publication is provided for your convenience and does not constitute legal advice. This publication is protected by copyright.
© 2021 White & Case LLP