With recent advances in technology, artificial intelligence chatbots have, like many new tools, entered our daily lives and evolved to respond to a wide range of needs. As a natural consequence, these AI-based systems, which continuously collect data from their users, have raised various concerns and have become a frequent topic of discussion both for data protection authorities and for professionals working in data security and privacy.
While the question of how artificial intelligence chatbots use and store the data they collect from users remains on the agenda through published decisions, reports and press releases, the Personal Data Protection Authority published an information note titled “Information Note on Chat Robots (ChatGPT Example)” on 08.11.2024, focusing on the intensive personal data collection and processing activities of these chatbots, which mediate the interaction between humans and technology. The note, which contains various explanations on AI-supported chatbots, also sheds light on what practitioners and developers must do within the framework of personal data protection law.
Chatbots can be defined as artificial intelligence applications capable of interacting with users to answer questions and provide information. The Authority defines a chatbot as “software that simulates a human conversation with the end user, attempting to fulfill the tasks/instructions given to it by the user through an interface”.
Data of all kinds is an indispensable resource for artificial intelligence systems, since AI-supported systems by their nature require large and varied volumes of data in order to perform at their best. In this context, although the nature of the data may vary, it can be processed for many purposes, such as improving the user experience, performing analyses, enhancing services and developing new programs. The processed data include, but are not limited to: name, contact information, payment card information, the Internet Protocol (IP) address automatically sent by the browser or device, browser type and settings, location information, access times, type of computer or mobile device, search history, cookie information, and text, voice and similar data transmitted by individuals.
Different countries take different approaches to the protection of personal data in AI chatbot applications such as ChatGPT. In Turkey, AI chatbots cannot be treated as independent of the principles and provisions of the Law on the Protection of Personal Data; their practices must be meticulously examined and evaluated within the framework of the Law. Indeed, while these provisions aim to protect individuals’ rights to data privacy and security, they also impose various obligations on chatbot providers. Such AI-based applications should be evaluated in particular against the fundamental principles governing data processing set out in Article 4 of the Law, titled “General Principles”.
Under the Law on the Protection of Personal Data, obtaining explicit consent for the collection and processing of user data is of great importance. Obtaining explicit consent from users interacting with chatbots and transparently explaining data processing operations are among the basic requirements for compliance with the KVKK. Transparency in data processing is achieved by adequately informing users about matters such as how and for what purposes the processed data are used, with whom they will be shared, which data will be stored and for how long, and the rights of individuals. These privacy notices, which must be drafted meticulously so that individuals do not lose control over their data, must be written in clear and plain language and must comply with the guidance and legislation on the fulfillment of the disclosure obligation.
Beyond transparent disclosure to individuals, another dimension of the requirements imposed on data processing is that the processing itself must comply with the general principles laid down by the Law. In this context, personal data must be processed for specific, explicit and legitimate purposes and must be relevant, limited and proportionate to the purposes for which they are processed.
In the case of ChatGPT, the privacy policy states that personal data collected from users are processed for purposes such as providing services, performing analyses, developing new product features, communicating with users, preventing misuse of the services, fulfilling legal obligations and protecting the company’s legitimate interests. Users are also able to change the relevant settings if they do not want their data to be used to train models as part of service improvement. At this point, it can be said that data processing is based on legal grounds and that users are given control over whether their chat history may be used for the development of the application. Giving users this choice is a positive step for personal data protection. Regarding data destruction, it is stated that if the temporary chat mode is selected, data will be kept for a maximum of 30 days for security purposes; this raises the question of whether such a commitment provides sufficient protection for users in Turkey. The issue may also need to be examined under the Law’s principle of retention for no longer than is necessary for the purpose of processing.
In conclusion, AI-based chatbot applications undeniably involve uncertainties regarding the protection of personal data, and greater care must be taken to protect personal data in the development and use of such applications. The study published by the Authority in the form of an information note is an important starting point in this respect. Indeed, in the note, the Authority warns users not to overshare their personal data and sets out important principles to be observed when developing a chatbot, such as “conducting a risk assessment before handling personal data”, “acting in accordance with the principle of accountability and the principles set forth in the Law”, and “complying with the obligation to inform”.
It is of great importance that applications such as chatbots are developed in line with the principles described above and that the necessary steps are taken to protect users’ data; data protection authorities should conduct regular audits and studies in this area to ensure that such systems operate in compliance with the legislation. Given the ever-increasing complexity of AI technologies, the preparation of more detailed and comprehensive information notes would contribute to the protection of individuals’ personal data and privacy, and would provide guidance for the development of applications in the sector in accordance with personal data protection principles.
Att. Selin Ünverdi