Written by Porter Peery, Esq.
Edited by Bill Pfund, Esq.
Advances in artificial intelligence (AI) over the last few years have impacted several industries, including insurance. In a survey by Accenture, “a full 75% of 550 insurance executives said they believe that AI will either significantly alter or completely transform the overall insurance industry in the next three years.” Machine learning algorithms already play a key role in product design, sales, service, fraud detection, risk evaluation and claims resolution. In another Accenture survey, the two keys to a satisfactory customer claims experience were speed of settlement and transparency of process. AI can improve and streamline the claims process through automated data entry, compliance tracking, fraud screening, and even analysis and predictive modeling. By removing what used to be manual steps, adjusters are freed to apply their experience where it counts most. Increasing the efficiency of claims processing and reducing loss adjustment expenses will lower overall costs and help keep the carrier’s premiums competitive. (1)
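To make the fraud-screening step above concrete, the following is a minimal, purely illustrative sketch of how an automated pipeline might score a claim before routing it either to fast-track settlement or to human review. All feature names, weights and thresholds here are invented for illustration; real carrier systems are far more sophisticated and typically use trained models rather than fixed rules.

```python
# Hypothetical illustration only: a toy rule-based fraud screen that an
# automated claims pipeline might run before involving an adjuster.
# Every indicator and weight below is an assumption made for this sketch.

def fraud_risk_score(claim: dict) -> float:
    """Return a 0-1 risk score from a few simple (invented) indicators."""
    score = 0.0
    if claim.get("reported_days_after_loss", 0) > 30:
        score += 0.3  # late reporting is a commonly cited red flag
    if claim.get("amount", 0) > 50_000:
        score += 0.3  # unusually large claim
    if claim.get("prior_claims", 0) >= 3:
        score += 0.2  # frequent claimant
    if not claim.get("police_report", True):
        score += 0.2  # missing supporting documentation
    return min(score, 1.0)

def route_claim(claim: dict, threshold: float = 0.5) -> str:
    """Flag higher-risk claims for human review; fast-track the rest."""
    return "manual_review" if fraud_risk_score(claim) >= threshold else "fast_track"
```

For example, a large claim reported 45 days after the loss would score above the threshold and be routed to manual review, while a small, promptly documented claim would be fast-tracked. Note that even this trivial sketch raises the article’s core question: who is accountable when the threshold or weights are wrong?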
Despite these benefits, the use of AI presents hidden dangers for insurance companies, particularly in areas such as regulatory compliance, law and privacy. It is often far from apparent how an AI system reaches its conclusions or solves problems while performing tasks. These concerns are especially relevant for the insurance industry, which must comply with numerous industry and government regulations. Algorithms do malfunction, and although their mistakes may differ from those typically made by humans, they will likely carry legal ramifications. Who will be held responsible if a machine reaches an incorrect conclusion that results in the mishandling of a claim? Is it the designer, the programmer, or the company or claims professional using the technology? Issues such as evidence, responsibility, authentication and attestation will need to be examined. (2)
The insurance industry can take steps to safeguard against the potential legal and compliance risks associated with the use of AI. Companies should have a solid understanding of just how a machine reaches its decisions; courts and juries would likely frown upon reliance on a system that is not properly understood by those who use it. Companies should also consider whether they can track a system’s performance to a degree that would satisfy regulators or legal requirements. Companies and insurance industry organizations should work with regulatory agencies to encourage the development of realistic guidelines. In addition, there should be an awareness of when it is prudent to rely on AI determinations versus human decision making when liabilities are at issue. Insurance companies also need to consider how to allocate liability; if AI development is outsourced, this can be addressed in the contract with the developer. Other concerns include cybersecurity and privacy. (3)
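One practical form the performance-tracking suggestion above can take is an append-only audit trail that records every automated determination together with the inputs and model version behind it, so the carrier can later reconstruct why a claim was handled as it was. The sketch below is a hypothetical illustration; the class name, fields and export format are assumptions, not an industry or regulatory standard.

```python
# Hypothetical sketch: logging automated claim decisions so they can be
# reviewed later by regulators, courts, or internal compliance teams.
# All names and fields here are invented for illustration.

import json
import datetime

class DecisionLog:
    """Append-only record of automated claim decisions."""

    def __init__(self):
        self.entries = []

    def record(self, claim_id: str, inputs: dict, decision: str, model_version: str):
        """Store one decision with the data the model actually saw."""
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "claim_id": claim_id,
            "inputs": inputs,                # exact inputs behind the decision
            "decision": decision,
            "model_version": model_version,  # ties the outcome to a specific model
        })

    def export(self) -> str:
        """Serialize the log, e.g. for a discovery request or audit."""
        return json.dumps(self.entries, indent=2)
```

Capturing the model version alongside each decision matters because models are retrained over time; without it, a carrier may be unable to show which version of a system produced a disputed outcome.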
Both Google and Microsoft published sets of principles in 2018 to guide AI development. Each seeks “to promote fairness, safety, reliability, privacy, security, inclusiveness, transparency and accountability.” New legislation is likely to develop along these lines, since traditional tort remedies may prove inadequate. Insurance carriers should consider such principles in their AI development and use, because AI certainly appears to be here to stay in the insurance industry. (4)
1. Joel Makhluf (Director of the Property Innovation Summit), Using AI and Automation to Transform Claims Handling, Instant Insights: Artificial Intelligence in Insurance, January 9, 2018
2. Bob Violino, Risky AI Business: Navigating Regulatory and Legal Dangers to Come, CIO, February 19, 2018
3. Id.
4. Richard Kemp, Legal Aspects of Artificial Intelligence, September 2018