Report outlines strategies for managing AI risks in insurance
November 6, 2023
Cynthia Ezekwe
The growing sophistication of AI will have a major impact on the insurance industry, from marketing and customer service to underwriting and claims management, analysts say.
While AI could improve efficiency and accuracy, they note that it also raises questions about bias, privacy, and accountability, particularly around consumer trust. Because the insurance sector is highly regulated, transparency, explainability, and auditability are critical to regulators.
Although AI can potentially automate the entire insurance process, from marketing and customer service to underwriting and claims management, there is substantial public and scholarly debate around the lack of transparency and explainability in many algorithms. Trust is essential to insurance, and the ‘black box’ nature of these algorithms has created scepticism about their use in the industry.
To address these concerns, IntellectAI, an insurance technology company, recently published a report titled “To be or not to be – AI ready”, which offers insights into the technological and application-related risks and limitations of AI in the insurance industry.
IntellectAI stated that the risks related to AI in insurance can be divided into two broad categories: technological risks and usage risks.
According to the report, data confidentiality is the main technological risk. The scale at which data can be collected, stored, and processed by AI is unprecedented. Moreover, the emergence of generative AI, which can create new content from existing data, introduces an additional risk to corporate data confidentiality, which in turn increases scepticism in the industry.
The report noted that if the parameters of an algorithm are disclosed, third parties may be able to copy the model, resulting in economic and intellectual property losses for the model’s owner.
“Additionally, should the parameters of the AI algorithm model be modified illegally by a cyber attacker, it will cause the performance deterioration of the AI model and lead to undesirable consequences,” the report noted.
The report also highlighted the lack of transparency as a key technological risk of using AI in insurance. Because AI systems, especially generative AI, are ‘black boxes’, the decision-making process of these algorithms is difficult to understand.
The report also mentioned the usage risks associated with AI in insurance, particularly the issue of inaccuracy. It noted that the performance of an AI system depends heavily on the data from which it learns: if the data is inaccurate, biased, or plagiarised, the AI system may produce poor results even if it is technically sound. In addition, there is a risk of abuse, as operators may misuse the system and program it to produce undesirable outcomes.
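The report does not prescribe how insurers should test for these data problems, but a minimal sketch in Python, assuming a tabular claims dataset with a hypothetical ‘fraud_flag’ label column, might run checks along these lines:

```python
import pandas as pd

def basic_data_quality_checks(df: pd.DataFrame, label_col: str) -> dict:
    """Run simple pre-training checks on a tabular insurance dataset."""
    return {
        # Share of missing cells per column; high values point to inaccurate data.
        "missing_ratio": df.isna().mean().to_dict(),
        # Exact duplicate rows may indicate copied or plagiarised records.
        "duplicate_rows": int(df.duplicated().sum()),
        # Label distribution; heavy skew is a common source of biased models.
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }

# Hypothetical usage on a claims dataset with a 'fraud_flag' label column:
# claims = pd.read_csv("claims.csv")
# print(basic_data_quality_checks(claims, label_col="fraud_flag"))
```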
Beyond these risks, the report identified over-reliance as a usage risk of AI, stating that users may start accepting incorrect AI recommendations, resulting in errors of commission. This risk arises from users’ limited understanding of the capabilities and limitations of AI systems, and over-reliance can also weaken users’ own skills.
To mitigate the risks of AI adoption, the report recommended a governance approach that addresses both technological and usage risks. It proposed a human-centric governance strategy that includes the following three approaches:
– Managing AI system performance by assessing system bias and error and validating performance (a simple bias-assessment sketch follows this list).
– Managing AI deployment by maintaining quality data, developing reliable systems, and developing contingency plans for system failure.
– Managing AI users by providing relevant information on the system’s capabilities and limitations, training users on system use, and monitoring users’ actions.
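The report stops at the level of principles, but the first approach, assessing system bias and error, can be illustrated with a short Python sketch that compares a model’s error rate across groups; the groups, decisions, and data below are invented for illustration:

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the model's error rate per group.

    Each record is a (group, prediction, actual) triple; a large gap
    in error rates between groups is one signal of system bias.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, prediction, actual in records:
        totals[group] += 1
        if prediction != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical underwriting decisions: (group, model decision, correct decision).
decisions = [
    ("A", "approve", "approve"),
    ("A", "deny", "approve"),
    ("B", "approve", "approve"),
    ("B", "deny", "deny"),
]
print(error_rate_by_group(decisions))  # {'A': 0.5, 'B': 0.0}
```

A disparity like the one above would not prove the model is unfair on its own, but it is the kind of measurable signal a governance team can track and investigate.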
In addition to human-centric governance, the report recommended a technology-centric approach to mitigate the technological risks of AI adoption. This approach would require expanding IT governance to include:
– A detailed data and system taxonomy to track the AI model’s inputs and usage, validation and testing cycles, and expected outputs.
– A risk register to assess the likelihood and impact of potential AI risks (a minimal example follows this list).
– A comprehensive analytics and testing strategy to assess and monitor the AI system’s inputs, outputs, and model components.
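The report does not specify a format for such a risk register, but a minimal sketch, with hypothetical entries and scores loosely based on the risks discussed above, could look like this:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in an AI risk register; score = likelihood x impact."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Illustrative entries; names and scores are hypothetical, not from the report.
register = [
    AIRisk("Data confidentiality breach", likelihood=3, impact=5),
    AIRisk("Model parameter tampering", likelihood=2, impact=4),
    AIRisk("Biased or plagiarised training data", likelihood=4, impact=4),
    AIRisk("User over-reliance on AI output", likelihood=4, impact=3),
]

# Review the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score}")
```

Ranking entries by likelihood times impact gives governance teams a simple, auditable way to prioritise mitigation effort.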
The report urged insurers to keep pace with the rapid development and deployment of AI technologies.
“Insurers should not shy from technology. Instead, insurers should contribute their insurance domain expertise to the development of AI technologies. Their ability to inform input data provenance and ensure data quality will contribute towards a safe and controlled application of AI to the insurance industry,” the insurtech company advised.