When Life Insurance Gives You AI, Should You Make Lemonade?

Artificial intelligence is increasingly being used by insurers to make claims determinations. But is AI appropriate for life insurance?

Artificial intelligence is becoming increasingly popular among insurance businesses. According to the advertisements promoting these tools, AI lets customers sign up for coverage faster, have claims processed more efficiently, and get help around the clock.

A recent Twitter thread from Lemonade, an AI-powered insurance company, offers insight into the practice’s possible drawbacks. To many observers, the episode demonstrated how the same technology can either harm or benefit people, depending on how it is used.

Transparency on Twitter Causes Concern

Many businesses are secretive about how they employ AI. Keeping the AI hidden conveys the impression of a futuristic service while protecting the company’s proprietary technology.

When Lemonade took to Twitter to demonstrate how its AI works, the company began by outlining how it uses data, revealing, for example, that it gathers 100 times more data than typical insurance firms.

The thread went on to explain that the company’s AI chatbot poses 13 questions to clients and collects over 1,600 data points in the process. “This is in comparison to the 20-40 other insurers get,” the message added. Lemonade uses this data to assess a customer’s risk, which helps the firm reduce its operational expenses and loss ratio.
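Lemonade hasn’t published how it turns those data points into decisions, but the pattern it describes, many onboarding signals feeding a risk estimate that drives pricing, resembles ordinary supervised learning. Here is a minimal sketch of that general idea, assuming a scikit-learn-style workflow with entirely invented features and labels:

```python
# A hypothetical sketch of data-driven underwriting, not Lemonade's
# actual system: its models and inputs are not public. All data
# here is randomly generated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each applicant's chatbot session yields many numeric
# data points (answers, metadata, and so on).
n_applicants, n_features = 1000, 50
X = rng.normal(size=(n_applicants, n_features))

# Historical label: 1 if the policy later produced a claim.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_applicants) > 1.5).astype(int)

# Fit a simple risk model on past applicants.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new applicant: the predicted claim probability can feed
# pricing, which is how better risk estimates lower the loss ratio.
new_applicant = rng.normal(size=(1, n_features))
claim_prob = model.predict_proba(new_applicant)[0, 1]
premium = 100 * (1 + 2 * claim_prob)  # toy pricing rule
print(f"claim probability ~{claim_prob:.2f} -> premium ${premium:.2f}")
```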

The fourth tweet in the seven-message chain raised even more eyebrows by implying that Lemonade’s AI analytics can detect nonverbal cues associated with bogus claims. As part of the company’s claims procedure, customers use their phones to record videos describing what happened.

Users on Twitter questioned the ethics of this technique, pointing out the problem of unaccountable machines making decisions about life-altering claims, such as those for burned-down houses. One observer called it “a much more obviously pseudoscientific version of a standard lie detector exam.”

Even AI makes mistakes

Insurance claims aren’t the only place AI is used to detect suspicious indicators and patterns. Many financial institutions, for example, use it to flag unusual charges. However, the technology can and does misread situations. Even the most proficient programmers cannot guarantee faultless results.

Most individuals have had the uncomfortable experience of trying to purchase something and hearing the cashier say the transaction failed, despite having plenty of money in their account. Resolving the issue is typically as easy as the cardholder calling the issuer, explaining the circumstances, and approving the charge.

When a claim concerns someone’s vital property, however, the stakes are far higher. What if the AI makes a mistake and labels a policyholder’s real disaster as fraudulent? Someone who pays their premiums on time, expecting the coverage to provide peace of mind in the event of a disaster, may find themselves unprotected after all. A human programming error could produce an incorrect conclusion for an AI insurer’s customer.

Lemonade allows consumers to cancel their policies at any time and receive a refund for the remaining paid term. Many people openly expressed their desire to switch providers after reading the controversial Twitter thread, though it’s too early to say how many will follow through.

Profiting at the expense of customers?

Lemonade’s tweets also indicated that the firm had a loss ratio of 368 percent in the first quarter of 2017. By the first quarter of 2021, that figure had dropped to just 71 percent.
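For context, the loss ratio is a standard insurance metric measuring claims paid out against premiums taken in:

```latex
\[
\text{loss ratio} = \frac{\text{incurred claims}}{\text{earned premiums}}
\]
```

A 368 percent loss ratio means paying out roughly $3.68 in claims for every $1.00 of premium collected, while 71 percent means paying out about $0.71, which is why insurers see better risk assessment as a direct path to profitability.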

The insurance business isn’t alone in increasing its AI spending in order to boost earnings, and the steps corporate executives take when deploying AI influence the outcomes. According to BDO research, companies that invested more in IT during AI adoption saw a 16 percent increase in revenue; without greater IT resources, average growth was just 5 percent.

Whatever steps a company’s management takes when deploying artificial intelligence, the Lemonade debacle generated legitimate public concern. One of AI’s major drawbacks is that algorithms often cannot explain the factors that led to their conclusions.

Even the engineers who build these systems are often unable to pinpoint the factors that lead an AI tool to one judgment over another. This is a sobering truth for insurance AI products and the many other companies that rely on AI to make important choices. Writing in HDSR, some AI analysts sensibly advise against the use of black-box models.

According to Lemonade’s website, its AI makes a substantial portion of claims determinations in seconds. That’s great news when the conclusion benefits the consumer, but picture the added stress on an already distressed policyholder if the AI denies a genuine claim in under a minute. Lemonade and other AI-driven insurers may not mind when the system helps them make money, but customers certainly will when it makes incorrect decisions.

Lemonade Backpedals

Lemonade’s managers promptly removed the contentious tweets and replaced them with an apology. According to the statement, Lemonade’s AI never automatically declines claims and does not judge them based on a person’s gender or appearance.

Users were quick to point out, however, that the company’s initial tweets had described using AI to analyze nonverbal cues. The situation became even more complicated when a Lemonade blog post claimed that the firm does not use artificial intelligence to reject claims based on physical or personal characteristics.

According to the post, Lemonade uses facial recognition to flag situations where the same person submits claims under multiple identities. The original tweet, however, emphasized nonverbal cues, which are distinct from scanning a person’s face to verify their identity.
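Identity-focused uses of facial recognition typically compare face embeddings across claims rather than reading expressions. Here is a minimal sketch of that general technique, assuming the embeddings (unit vectors) have already been extracted by some off-the-shelf face-recognition model; nothing here reflects Lemonade’s actual internals:

```python
# Hypothetical sketch of duplicate-identity flagging via face
# embeddings. Assumes each claim video has already been reduced to
# a unit-length embedding vector by an external face model.
import numpy as np

def flag_duplicate_identities(embeddings: dict[str, np.ndarray],
                              threshold: float = 0.9) -> list[tuple[str, str]]:
    """Return pairs of claim IDs whose face embeddings are nearly
    identical, suggesting one person filing under multiple names."""
    flagged = []
    ids = list(embeddings)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            # Cosine similarity of unit vectors is a dot product.
            if float(embeddings[a] @ embeddings[b]) >= threshold:
                flagged.append((a, b))
    return flagged

# Demo with synthetic unit vectors: claims A and C share a "face".
rng = np.random.default_rng(1)
face1 = rng.normal(size=128)
face1 /= np.linalg.norm(face1)
face2 = rng.normal(size=128)
face2 /= np.linalg.norm(face2)
claims = {"claim_A": face1, "claim_B": face2, "claim_C": face1}
print(flag_duplicate_identities(claims))  # [('claim_A', 'claim_C')]
```

Note that this checks who appears in the videos, not how they behave, which is exactly the distinction commenters felt the original tweets blurred.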

A plainer statement, such as “Lemonade AI employs face recognition for identity verification during the claims process,” would have kept many individuals from drawing frightening conclusions. The post also cited behavioral research showing that watching oneself talk, such as through a phone’s selfie camera, makes people less likely to lie. It states that the method allows the company to pay “valid claims faster while lowering expenses.” Other insurance firms, however, are likely to apply AI differently.

A possible risk with any AI insurance technology is that stressed users may exhibit behaviors resembling those of dishonest people. While recording a claims video, a policyholder may stutter, talk fast, repeat themselves, or look around. They can display such signs because they’re distressed, not because they’re lying. The same problem arises when AI is used to screen job interviews in human resources: anyone under duress tends to act in ways that are out of character.

Use of AI and the Risk of Data Breach

An AI algorithm’s performance usually improves as the tool gains access to more data. In its original tweets, Lemonade claimed to collect over 1,600 data points per customer. That much data causes anxiety.

To begin with, you may wonder what the algorithm infers about you and whether it has reached any wrong judgments. Another concern is whether Lemonade and other insurance AI startups secure their data sufficiently.

When targeting victims, cybercriminals attempt to do the most harm possible, which frequently means trying to penetrate the networks and systems that hold the most data. Online criminals are also aware that AI requires a large amount of data to function properly, and they like stealing data to sell on the dark web later.

A data breach happened in February 2020 at Clearview AI, a face recognition startup. Unauthorized individuals gained access to the company’s entire client list as well as information about those clients’ activities. The FBI and the Department of Homeland Security were among the company’s customers.

Customers suffer from data breaches because they lose trust and are exposed to identity theft. Because data theft and mishandling occur so regularly, many consumers may be hesitant to let an AI insurance product collect information about them in the background, especially if a company’s data security and cybersecurity policies aren’t clearly spelled out.

Convenience combined with caution

The benefits of AI in the insurance industry are numerous. Many individuals prefer entering their questions into chatbots and receiving near-instant replies over waiting on hold for a person.

There are clear benefits when an AI insurance claims tool reaches the right conclusions and companies keep data safe. This episode, however, cautions consumers that AI is not a perfect answer and that businesses may misuse it to increase profits. As more insurers experiment with AI, tech analysts and customers must raise legitimate concerns to keep these institutions honest and ethical. Doing so will also help keep consumers’ data more secure against cybercrime.
