The concept of artificial intelligence can be defined as machines that imitate human capabilities, so that the problem-solving ability inherent to humans can also be realized by machines.
Nowadays, with the advancement of technology, artificial intelligence is gradually gaining a place in many areas of human life. So much so that artificial intelligence appears in our lives in the form of driverless cars, search engines, smartphones, robot vacuum cleaners and the like, as well as AI chatbots such as ChatGPT.
At this point, the need to determine the legal status of artificial intelligence, and its legal responsibility in the context of that status, has come to the fore. When the discussions in this context are examined, there are opinions that i) artificial intelligence should remain classified as goods/products, ii) artificial intelligence should be granted legal personality, or iii) artificial intelligence should be recognized as a non-human person, an electronic person, or an artificial human.
In European Law, the prevailing view is that artificial intelligence is a “product” offered to end-users by producers, and that damage caused by such a product should be evaluated within the scope of the producer’s liability. Council Directive 85/374/EEC on liability for defective products is an example of this view.
In Turkish Law, the Product Safety and Technical Regulations Law No. 7223 (UGTDK), which entered into force on March 12, 2021, defines intangible goods, and therefore artificial intelligence systems, as “products”. Given the parallelism between the European Directive and the UGTDK, the predominant opinion is that the manufacturer may be held liable for damages caused by artificial intelligence; however, there is currently no specific regulation on such liability.
In addition, a report by the European Parliament set out a series of proposals and recommendations on granting personality to artificial intelligence. The report is the first official document to propose personality status for an AI entity, and it also introduced the concept of “electronic personality”.
When artificial intelligence is evaluated in terms of criminal liability, since it is out of the question to punish artificial intelligence or a machine connected to it, punishing the manufacturer, programmer, owner or user for the actions of the artificial intelligence would not be in line with the principle of “individual criminal responsibility”.
For this reason, first of all, the crimes that can be committed through artificial intelligence and the corresponding penalties should be clearly defined in the law, and punishment should then be based on the intent or negligence of the manufacturer, programmer, owner or user whose algorithms cause harm to people. In this context, it is important to ensure transparency in data sources: clearly documenting where the data comes from, how it is collected and what pre-processing it undergoes will make it easier to identify and reduce potential biases.
As a result, the absence of parallel regulation in national and international legislation keeps the debate on the legal nature and responsibility of artificial intelligence alive. Given the unpredictable spread and development of artificial intelligence, it is obvious that the need for a defined legal status and corresponding regulation has increased.