Enhancing Data Privacy in Artificial Intelligence : A Study on Corporate Practices and Regulatory Compliance
Nguyen, Khanh (2024-05-25)
This publication is subject to copyright. The work may be read and printed for personal use; use for commercial purposes is prohibited.
Access: open
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe2024060645735
Abstract
The rapid development of Artificial Intelligence (AI) has created innovative possibilities and efficiencies across many sectors. At the same time, AI's reliance on personal data during training raises significant privacy and data-security concerns. This thesis explores the contemporary landscape of data-privacy techniques and challenges in AI development, including corporate strategies and the application of regulatory rules.
The thesis employs a mixed-methods approach, combining a survey of AI professionals with case studies of AI companies. It reveals an interconnected web of technical, legal, and ethical challenges surrounding data privacy. Key challenges include obtaining informed consent ethically, preserving privacy in data usage, and ensuring transparency in AI decision-making. The thesis then analyses whether regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) adequately address these AI privacy concerns. While these laws and principles lay the groundwork for data protection, they fall short in the context of AI development.
Based on the overall findings, the thesis outlines a set of guidelines and recommendations for improving data protection in AI development. These measures include adopting Privacy-by-Design (PbD) principles, carrying out Privacy Impact Assessments (PIAs) regularly, fostering a culture of privacy within the organization, and establishing industry-wide privacy standards. The thesis also contributes to the ongoing discussion on ethics in AI development and offers practical guidance on data privacy for companies and policymakers in the AI era.