
Take your medicine when the app tells you to, exercise, and eat well. As long as you “show good compliance” and share your data, you will reduce both your health risks and your insurance premiums.

This is how Xie Guotong, chief healthcare scientist at the Chinese insurer Ping An, described the company’s combination of insurance with a digital “disease management” service for patients with type 2 diabetes. Powered by artificial intelligence, it is just one example of a major shift under way in the industry.

Artificial intelligence, software that sifts data and aims to learn as humans do, allows insurers to build highly personalized customer risk profiles and update them in real time. In some markets it is being used to refine or replace the traditional annual premium model, and to create policies priced on factors such as customer behavior.

In some cases, insurers are using it first to decide whether to take on a customer at all.

Root, a New York-listed auto insurer, offers potential customers a “test drive”: it tracks them through its app and then chooses whether to insure them. It says driving behavior is also the number-one factor in the price of its policies.

The British startup Zego specializes in vehicle insurance for gig-economy workers such as Uber drivers. It offers a product that monitors customers after they buy cover and promises safer drivers lower renewal prices.

The theory behind such policies is that customers end up paying a fairer price for their individual risk, while insurers can predict losses more accurately. Some insurers say it also gives them more scope to influence behavior and even to prevent claims from occurring.

Root, a New York-listed auto insurer, offers potential customers a test drive, tracking them through its app before choosing whether to insure them © Root Insurance

Cristiano Borean, chief financial officer of Generali, Italy’s largest insurer, said: “The insurance industry is shifting from paying claims after the event to prevention.”

For ten years, Generali has offered pay-as-you-drive policies that reward safer drivers with lower premiums. In its home market it also provides AI-enabled driver feedback through an app, and it plans pilots in other countries. “Everything that allows you to interact and reduce the risk is in our interest as an insurance company,” Borean said.

But the rise of AI-driven insurance has researchers worried that the new approach will create unfairness, and may even undermine the risk-pooling model on which the industry depends, leaving some people unable to find cover.

“Yes, you won’t pay for the claims of accident-prone neighbors, but then again, no one else will pay for your claims. Only you,” said Duncan Minty, an independent consultant on industry ethics. He added that there is a danger of “social classification”: people deemed higher-risk becoming unable to buy insurance at all.

Behavior-driven cover

Ping An’s type 2 diabetes insurance product is supported by AskBob, its artificial-intelligence-driven “clinical decision support system” used by doctors across China.

For diabetes, the artificial intelligence is trained on data showing the incidence of complications such as stroke. It then analyzes an individual customer’s health through the app to draw up a care plan, which the doctor and the patient review and adjust.

The AI monitors patients through the app and a blood glucose monitor, fine-tuning its predictions of the likelihood of complications as it goes. If a patient who buys the linked insurance follows the plan, Ping An promises a lower premium at renewal.
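The renewal logic described above can be sketched as a simple rule. This is only an illustration: the thresholds, field names, and discount size are invented assumptions, not Ping An’s actual pricing.

```python
# Sketch of a behavior-linked renewal discount, as described above.
# All thresholds and numbers are illustrative assumptions, not Ping An's.

def renewal_premium(base_premium: float, adherence_rate: float,
                    predicted_complication_risk: float) -> float:
    """Return next year's premium given plan adherence and AI-predicted risk."""
    # Load the base premium by the model's predicted complication risk.
    premium = base_premium * (1 + predicted_complication_risk)
    # "Good compliance" with the care plan earns a discount at renewal.
    if adherence_rate >= 0.8:
        premium *= 0.9
    return round(premium, 2)

print(renewal_premium(1000.0, 0.85, 0.10))  # compliant patient -> 990.0
print(renewal_premium(1000.0, 0.40, 0.10))  # non-compliant patient -> 1100.0
```

The point of the sketch is the coupling: the same tracked behavior feeds both the risk prediction and the price the customer is offered next year.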

Chart: leaders in AI adoption also intend to invest more in the near future

But artificial intelligence experts worry about the consequences of using health data to calculate insurance premiums.

Mavis Machirori, a senior researcher at the Ada Lovelace Institute, said this approach “entrenches the view that health is not human wellbeing and flourishing, but something driven by targets and costs.”

She added that the approach may favor those who are digitally connected and live near open spaces, and that “the lack of clear regulations on the definition of health data opens the door to abuse.”

Zego’s “smart cover,” as the company calls it, offers discounts to drivers who register for monitoring. Its pricing model uses a mix of inputs, including information such as age, plus a machine learning model that analyzes real-time data such as hard braking and cornering. Zego says safer driving should reduce the cost of renewal. It also plans to give customers feedback through its app to help them manage their risk.
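A usage-based pricing model of this kind can be sketched as follows. Everything here, the weights, the event definitions, the 0–1 score, is an invented toy, not Zego’s model; it only shows how fixed attributes and real-time driving signals might be combined.

```python
# Toy usage-based pricing: mix a fixed attribute (age) with telematics
# signals (hard braking, sharp cornering). All weights are invented.

from dataclasses import dataclass

@dataclass
class Trip:
    km: float
    hard_brakes: int   # sudden decelerations detected by the telematics unit
    sharp_turns: int   # high lateral-acceleration events

def risk_score(age: int, trips: list[Trip]) -> float:
    """Return a 0-1 risk score; higher means riskier driving."""
    total_km = sum(t.km for t in trips) or 1.0
    events_per_100km = 100 * sum(t.hard_brakes + t.sharp_turns for t in trips) / total_km
    age_factor = 0.2 if age < 25 else 0.0        # young drivers priced higher
    behavior_factor = min(events_per_100km / 50, 0.8)
    return round(age_factor + behavior_factor, 3)

def monthly_premium(base: float, score: float) -> float:
    return round(base * (1 + score), 2)

smooth = [Trip(km=200, hard_brakes=1, sharp_turns=0)]
jerky  = [Trip(km=200, hard_brakes=12, sharp_turns=8)]
print(monthly_premium(80.0, risk_score(30, smooth)))  # 80.8
print(monthly_premium(80.0, risk_score(30, jerky)))   # 96.0
```

Because the score updates with every trip, the same structure supports the feedback loop Zego describes: show the driver which events are raising the score, and the renewal price falls if they change.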

“If you have a monthly renewal policy with us, we will look at tracking with you over time and showing you what you can do to reduce your monthly costs,” said Vicky Wills, the startup’s chief technology officer.

She added: “I think this is a trend we will see more and more: insurance increasingly becoming a proactive risk management tool, rather than just the safety net it used to be.”

Monitoring bias

However, campaigners warn that data can be taken out of context; there are often good reasons for braking hard. Some also worry about the long-term consequences of collecting so much data.

“Will your insurer use the Instagram picture of you with a powerful car as a sign that you are an adventurous driver? They might,” said Nicolas Kayser-Bril, a reporter at AlgorithmWatch, a non-profit organization that researches “automated decision-making.”

Regulators are clearly concerned that artificial intelligence systems may embed discrimination. In a working paper in May this year, Eiopa, the EU’s top insurance regulator, said companies should “make reasonable efforts to monitor and mitigate biases in data and artificial intelligence systems.”

Experts say problems spread when artificial intelligence replicates an already biased human decision-making process, or is trained on unrepresentative data.

Bar chart: the percentage of surveyed US insurers planning to increase spending in this area shows that insurers are stepping up investment in artificial intelligence and data

Shameek Kundu, director of financial services at TruEra, a company that analyzes AI models, proposes four checks for insurers: that data is interpreted correctly in context; that models work for different groups of people; that customer consent is sought through transparent communication; and that customers have recourse if they believe they have been treated unfairly.
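One of those checks, that a model works for different groups of people, is often approximated in practice by comparing error rates across subgroups. A minimal sketch of that idea, with made-up group labels and toy data:

```python
# Compare a model's error rate across subgroups; a large gap between
# groups is a red flag worth investigating. Data here is illustrative.

from collections import defaultdict

def error_rate_by_group(records):
    """records: (group, predicted, actual) triples; returns error rate per group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 0), ("urban", 0, 0),
    ("rural", 1, 0), ("rural", 1, 0), ("rural", 0, 0), ("rural", 1, 1),
]
rates = error_rate_by_group(records)
print(rates)  # e.g. the model misfires twice as often for one group
```

This only covers one of Kundu’s four checks; consent, context, and recourse are governance questions that no metric can answer on its own.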

Fraud detection

Insurers such as Root are also using artificial intelligence to identify false claims, for example by spotting discrepancies between the time and place of an accident and the information contained in the claim.

Meanwhile, third-party providers such as France’s Shift Technology offer insurers a service that can identify whether the same photo, such as an image of a damaged car, has been used in multiple claims.
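The core idea behind that kind of duplicate-photo check can be sketched with a perceptual “average hash”: downscale the image to a tiny grid, threshold each cell against the mean brightness, and compare hashes between claims. Real systems, Shift Technology’s included, are far more sophisticated; this toy uses hand-written pixel grids purely to show the principle.

```python
# Minimal perceptual "average hash" for spotting reused claim photos.
# Pixel grids stand in for downscaled grayscale images.

def average_hash(pixels):
    """pixels: 2D list of grayscale values; returns a bit string."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v > mean else "0" for v in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

claim_a = [[10, 200], [220, 15]]   # a downscaled photo of a damaged car
claim_b = [[12, 198], [215, 20]]   # the same photo, slightly re-encoded
claim_c = [[200, 10], [30, 240]]   # a different photo

h_a, h_b, h_c = map(average_hash, (claim_a, claim_b, claim_c))
print(hamming(h_a, h_b))  # 0 -> likely the same image despite re-encoding
print(hamming(h_a, h_c))  # 4 -> clearly different
```

Hashing tolerates re-compression and small edits, which is exactly what defeats a naive byte-for-byte comparison of uploaded photos.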

Lemonade, which is listed in the United States, is also a heavy user of AI. Co-founder Daniel Schreiber said insurance is “a business that uses past data to predict future events,” and that “the more predictive data the insurance company has… the better.” It uses AI to speed up claims processing and reduce its cost.

But it caused a stir on social media earlier this year when it posted a video on Twitter about how its artificial intelligence used “non-verbal cues” to look for signs of fraud.

Lemonade later clarified that it uses facial recognition software to try to detect whether the same person has made multiple claims under different identities. It added that it does not allow AI to reject claims automatically, and that it has never used “phrenology” or “physiognomy,” assessing someone’s character from their facial features or expressions.

But the incident underlined concerns about the industry building an ever more detailed picture of its customers.

“People often ask how ethical a company’s artificial intelligence is,” Minty said. “What they should ask is how much thought the people who design the AI, supply the data and use it for decisions have given to ethics.”
