Rapid advances in artificial intelligence (AI) in recent years have made it an active driver of transformation across many industries. The insurance sector, too, is evolving into a new dimension with the integration of AI. Insurance companies draw on the technology's data analytics, automation, prediction, and analysis capabilities to create more efficient, customer-focused, and profitable processes. However, involving AI in these processes brings not only technical transformation but also significant legal and ethical responsibilities.
Use of Artificial Intelligence in Damage Assessment
Damage assessment and evaluation are time-consuming and costly processes for insurance companies, and customer expectations around them are often high. AI provides time and cost savings by automating claims processes and damage analysis. Beyond accelerating the loss-adjustment process through comprehensive analysis and evaluation and increasing customer satisfaction, it also creates new sales opportunities. In recent years, insurance companies have actively used AI applications in their claims management and customer service processes.
- Computer Vision: Using image processing technologies, the extent of damage and repair costs can be estimated from photos sent by the customer after an accident. It is also possible to detect whether the images are original or have been manipulated.
- Natural Language Processing (NLP): AI can accurately classify customer requests by interpreting policyholder claim notifications, conversations with customer representatives, or email correspondences.
- Machine Learning Models: Algorithms trained on previous damage data can predict damages in similar events. Machine learning algorithms are also used to classify damages by severity and legal process.
- Voice Recognition Systems: Damage records can be created from call center interactions. Additionally, the system can help detect customers using different names and phone numbers for fraudulent purposes.
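The machine-learning idea above, predicting the severity of a new claim from previous damage data, can be illustrated with a minimal sketch. This is a toy k-nearest-neighbors classifier in plain Python; the feature names (estimated repair cost, vehicle age, airbag deployment) and the sample records are illustrative assumptions, not real insurer data or any company's actual model.

```python
# Toy sketch: classify a new claim's severity by majority vote of the
# k most similar prior claims. All features and records are illustrative.
import math
from collections import Counter

# Prior claims: ((repair_cost_estimate, vehicle_age, airbag_deployed), label)
PRIOR_CLAIMS = [
    ((800.0, 2.0, 0.0), "minor"),
    ((1200.0, 5.0, 0.0), "minor"),
    ((4000.0, 4.0, 0.0), "moderate"),
    ((3500.0, 6.0, 1.0), "moderate"),
    ((6500.0, 3.0, 1.0), "major"),
    ((9000.0, 8.0, 1.0), "major"),
]

def classify_severity(features, k=3):
    """Predict a severity label by majority vote of the k nearest prior claims."""
    neighbors = sorted(PRIOR_CLAIMS, key=lambda c: math.dist(features, c[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

print(classify_severity((7000.0, 4.0, 1.0)))  # -> major
print(classify_severity((1000.0, 3.0, 0.0)))  # -> minor
```

A production system would, among other things, normalize the features (here the raw repair cost dominates the distance) and be trained on far larger datasets, but the principle is the same: new events are judged by their similarity to previously resolved claims.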
Emerging Risks
AI provides significant advantages for both insurance companies and customers in terms of efficiency, speed, more objective decision-making, cost and time savings, customer satisfaction, and fraud detection. However, alongside these benefits, AI processes and stores vast amounts of data, raising concerns about the protection of customer information and compliance with legal regulations.
- Data Privacy and Security: One of the most critical risks of using AI is related to data privacy and security. Biometric authentication methods allow access to systems without passwords. However, the ability of AI to replicate a person’s face or voice presents serious security risks. Due to the lack of transparency in AI algorithms and the rise in cybercrimes, both personal and corporate data are at risk.
- Liability in Case of Errors: If an error occurs in AI-based detection, such as underpayment or delayed payments to the policyholder, the question arises as to who will bear responsibility. For example, in such a scenario, should the insurance company or the software provider that developed the AI system be held accountable?
Legal Liability in the Use of Artificial Intelligence
There are arguments that AI could be subject to tort liability and, in this context, treated as a fictitious legal person. However, whether AI can genuinely be granted such legal personality remains a subject of debate. This naturally makes it difficult to determine the liable party, both in terms of legal personality and in terms of liability.
Regulatory bodies are increasingly demanding more transparency and accountability in AI’s decision-making processes. This may result in potential sanctions and financial consequences. When benefiting from such AI applications, the industry processes, stores, and transfers personal data. Therefore, AI and machine learning applications must be designed in a way that does not violate the personality rights of customers. At the same time, data subjects must act with awareness of their legal rights.
In Turkey, there is currently no comprehensive legal regulation specifically addressing the use of AI. Consequently, legal responsibilities and any arising disputes must be assessed under the Turkish Code of Obligations, the Turkish Commercial Code, the Personal Data Protection Law (KVKK), the regulations of the Insurance and Private Pension Regulation and Supervision Agency (SEDDK), and other relevant legislation.
Regulations in the United States and European Union
The United States and the European Union have taken various initiatives to regulate AI-related activities. In the U.S., efforts such as the American AI Initiative, the Blueprint for an AI Bill of Rights, and the Algorithmic Accountability Act aim to ensure the ethical and secure development of AI technologies.
The European Union, with the adoption of the Artificial Intelligence Act (AI Act) in 2024, regulates AI systems based on risk levels. The Act introduces strict rules and transparency obligations for high-risk AI applications. These include transparency requirements for datasets used in training AI systems, and a mandate that AI respects copyright laws and complies with ethical standards.
According to an announcement by the Directorate for EU Affairs of the Republic of Turkey’s Ministry of Foreign Affairs, this regulation aims to improve the functioning of the internal market while promoting human-centered and trustworthy AI technologies. The EU thus seeks to encourage innovation while protecting health, safety, the environment, democracy, the rule of law, and fundamental rights from potential harmful effects of AI systems.
The AI Act applies to all sectors of the economy, including insurance, and it categorizes certain insurance applications as "high risk." In particular, AI systems used for risk assessment and underwriting in life and health insurance fall into this category, on the grounds that such systems can affect individuals' financial access and health and may therefore pose a risk of discrimination or exclusion. The Act sets out a number of requirements that providers and users of high-risk AI systems must meet, which means insurance companies will be subject to strict obligations when using AI in these areas. For example, if an insurance company sets a life insurance premium with an AI-powered algorithm, it will have to establish a comprehensive risk management system for that algorithm, use high-quality and non-discriminatory datasets, conduct a Fundamental Rights Impact Assessment (FRIA), and ensure human oversight.