Automating insurance claims management: how to do it right

When journey design makes all the difference


Automating claims management is achievable, cost-effective, and fully compatible with European regulation — provided you do not focus solely on operational efficiency. Insurers who succeed at this transformation work on three dimensions simultaneously: the regulatory framework (AI Act + GDPR), the behavioural design of customer journeys, and workforce transformation. Those who neglect even one of these dimensions get disappointing results, regardless of the technology they deploy.

The numbers speak for themselves. According to Insurance Nexus, only 22% of insurers consider their claims processes best-in-class. And according to Bain & Company, 87% of policyholders view their claims experience as decisive in their loyalty decision. In a market where acquiring a new customer costs five times more than retaining an existing one, automating badly is riskier than not automating at all.

This article explores the three dimensions that make the difference and shows how successful organisations stop asking “how do we automate more?” and start asking “how do we automate better?”


Two insurers, same technology, opposite results

Two insurers deploy the same AI to automate their claims management. Same algorithm, same budget, same ambition. Six months later, the first shows 20% adoption with declining customer satisfaction. The second achieves 60% adoption with a rising NPS.

What changed? Not the technology. The journey design.

Insurer A presents their service as follows: “You have been involved in a car accident. Would you like assistance from our chatbot to expedite your claim? [Yes] [No, I prefer to speak with an advisor]”

Insurer B presents the same service differently: “Your digital assistant will process your request immediately. You can speak with an advisor at any time by clicking here.”

Same technology. Radically different results. This case illustrates precisely why automation is first a design question, and only then a technology question.

Dimension 1: understanding the regulatory framework before automating

Many organisations hesitate to automate out of fear of sanctions. This fear is legitimate, but it should not be paralysing. The European framework is strict but workable.

What the AI Act says for insurance

The European Artificial Intelligence Regulation, which came into force on 1 August 2024, classifies AI systems according to their level of risk. For insurance, the picture is clear: risk assessment and pricing in life and health insurance are classified as high-risk systems. This classification imposes three obligations: full transparency on how the algorithm works, traceability of every decision, and a fundamental rights impact assessment.

The penalties for non-compliance? Up to €35 million or 7% of global annual turnover for the most serious breaches, whichever is higher.

Routine claims management, however, is not automatically classified as high risk. A windshield claim processed automatically with an appeal mechanism? Acceptable. A coverage refusal fully automated with no possibility of human intervention? Problematic.

What GDPR Article 22 says

Since 2018, the GDPR has established a fundamental principle in Article 22: every person has the right not to be subject to a decision based solely on automated processing that produces legal effects concerning them or similarly significantly affects them. Three exceptions exist: explicit consent, contractual necessity, or legal authorisation. Even when an exception applies, the policyholder must be able to obtain an explanation of the decision, contest it, and request human intervention.

The workable model: segmenting by complexity

The pragmatic solution emerging across the industry is straightforward in principle, more demanding in practice: segment claims by complexity and stakes, applying the Human-in-the-loop principle — human supervision within the decision-making process.

The segmentation in practice (volume of cases, type of claims, processing mode):

  • 70 to 80% of cases: windshield claims, standard breakdown assistance, routine information requests (typically under €500). Supervised automation: AI handles the claim end-to-end, while an algorithm monitors for anomalies and escalates questionable cases to a human.
  • 15 to 20% of cases: claims with shared liability, high amounts, unusual circumstances. AI-human collaboration: AI collects, analyses, and suggests; the final decision belongs to the claims adjuster.
  • 5 to 10% of cases: vulnerability situations, contentious disputes, serious claims with personal injury. Exclusive human intervention: empathy and nuanced judgement from the very first contact.

This architecture is not merely a regulatory constraint. It is also a quality guarantee: complex cases deserve a human eye, simple cases deserve immediate resolution.
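The three-tier segmentation above can be sketched in a few lines of code. The claim types, the €500 threshold as the sole stakes criterion, and the field names below are illustrative assumptions for the example, not an insurer's actual business rules:

```python
# Illustrative sketch of the three-tier Human-in-the-loop segmentation.
# Claim types, thresholds, and field names are assumptions for this example.
from dataclasses import dataclass

SIMPLE_TYPES = {"windshield", "breakdown_assistance", "information_request"}
SENSITIVE_TYPES = {"bodily_injury", "death", "dispute", "vulnerability"}

@dataclass
class Claim:
    claim_type: str
    amount_eur: float
    shared_liability: bool = False

def triage(claim: Claim) -> str:
    """Route a claim to one of the three processing modes."""
    # Tier 3 (5-10%): sensitive situations go to a human from the first contact.
    if claim.claim_type in SENSITIVE_TYPES:
        return "human_only"
    # Tier 1 (70-80%): simple, low-stakes claims are automated under supervision.
    if (claim.claim_type in SIMPLE_TYPES
            and claim.amount_eur < 500
            and not claim.shared_liability):
        return "supervised_automation"
    # Tier 2 (15-20%): everything else is AI-assisted, with a human decision.
    return "ai_human_collaboration"

print(triage(Claim("windshield", 320.0)))           # supervised_automation
print(triage(Claim("car_accident", 8000.0, True)))  # ai_human_collaboration
print(triage(Claim("bodily_injury", 200.0)))        # human_only
```

Note that the sensitive-case check comes first: whatever the amount, a vulnerability situation never falls through to automation.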

Is your automation project compliant? The decision path, step by step:

  • Does the decision produce significant legal effects? If no: free automation.
  • If yes: is human recourse possible and clearly visible? If no: GDPR Article 22 violation.
  • Is the system classified as high risk under the AI Act? If no: compliant, with recourse in place.
  • If yes: are transparency and traceability ensured for every decision? If no: non-compliant with the AI Act.
  • Has a fundamental rights impact assessment been completed? If no: deployment blocked. If yes: the project is compliant with the AI Act and GDPR Article 22.

Three rules to follow absolutely:

  • Always maintain the possibility of human intervention, even if it is used in only 5% of cases.
  • Guarantee transparency and explainability: the policyholder must be able to understand why a decision was made.
  • Segment intelligently: not everything can be automated in the same way.

Key takeaway

The AI Act and the GDPR do not block automation: they set the conditions. Before deploying a solution, ask three questions: does this case have a significant legal effect? Can the policyholder contest the decision? Is the decision explainable in plain language? If all three answers are yes, the project is on the right track. If any one is no, that is a risk to correct before deployment, not after.
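The three-question check above can be expressed as a simple pre-deployment gate. This is an illustrative sketch of the decision logic only, not a compliance engine and not legal advice:

```python
# Sketch of the three-question pre-deployment compliance check.
# Illustration only: it encodes the article's checklist, nothing more.

def compliance_precheck(significant_legal_effect: bool,
                        contestable_by_policyholder: bool,
                        explainable_in_plain_language: bool):
    """Return (ok, risks): risks must be corrected before deployment, not after."""
    if not significant_legal_effect:
        # No significant legal effect: the case can be automated freely.
        return True, []
    risks = []
    if not contestable_by_policyholder:
        risks.append("no human recourse (GDPR Article 22)")
    if not explainable_in_plain_language:
        risks.append("decision not explainable in plain language (AI Act transparency)")
    return not risks, risks

ok, risks = compliance_precheck(significant_legal_effect=True,
                                contestable_by_policyholder=True,
                                explainable_in_plain_language=True)
print(ok)     # True: the project is on the right track
print(risks)  # []
```

The useful property of writing the gate down is that a failing answer produces a named risk, not a vague feeling of non-compliance.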

Dimension 2: designing journeys that policyholders actually accept

Legal compliance sets the framework. But between what is permitted and what actually works, there is a gap that behavioural economics helps to understand. Returning to the example of the two insurers: the difference in results does not come from the technology. It comes from the default effect.

The default effect: the invisible force of design

Research in behavioural economics (Johnson & Goldstein, 2003) shows that individuals have a 27% higher probability of choosing the option presented as the default, even when they are free to change it. In a moment of stress such as a claim, presenting the fastest option as the natural path simplifies the decision. The policyholder retains full control, but is no longer paralysed by indecision.

And here is the paradox: users placed in an opt-out position (like Insurer B) feel more autonomous than those placed in an opt-in position (like Insurer A). Because they did not have to justify their choice from the outset. They simply moved forward, knowing they can exit at any time.

The 4 golden rules of automated journey design

Rule 1: present automation as the natural path, not an option requiring justification

❌ “Would you like to use our automated assistant, or would you prefer to speak with an advisor?”
✅ “Your request is being processed. You can speak with an advisor at any time.”

The difference? The first formulation forces a choice. The second offers reassurance.

Rule 2: make the exit easy and visible, not shameful

The “Speak with an advisor” button must be permanently visible, positively worded, and accessible without friction — no justification required. A Forrester study shows that users are 2.5 times more likely to use an automated system when they can clearly see how to exit it.

Rule 3: explain without overwhelming

The policyholder does not want a lecture on AI. They want to know three things: what is happening now, how long it will take, and how to exit if needed.

❌ “Our artificial intelligence system is analysing your file using machine learning algorithms compliant with GDPR standards…”
✅ “We are analysing your file. Response within 2 minutes. You can speak with an advisor here.”

Rule 4: recognise when a human is necessary

This is where Moravec’s paradox comes in: what is difficult for humans is easy for AI, and vice versa. Formulated in 1988 by researcher Hans Moravec, this observation applies directly to insurance.

What AI does better — and what humans do better

AI excels at | Humans excel at
Processing 10,000 invoices in 30 seconds | Understanding that an isolated elderly person needs reassurance before any procedure
Detecting inconsistencies in a claims declaration | Recognising that a customer with a disability needs an adapted communication format
Identifying potential fraud patterns (€628M detected in 2024, +35% vs 2023, according to AGIRA) | Adjudicating an edge case where two contractual principles conflict
Proposing an indemnification amount based on 50,000 similar cases | Accompanying a family with empathy after a serious claim

The problem with many automation projects? They invert this logic. They route technically simple cases to AI, without asking whether those cases are simple in human terms. A windshield claim is technically simple and emotionally neutral: perfect for automation. A car accident with minor injuries is also technically simple, but emotionally charged. A person in shock after an accident may not be ready to interact with a chatbot, however capable.

The decision matrix: technical complexity × emotional load

Successful organisations do not ask “can we automate this case?” but rather “does this case deserve human intervention from the outset?” They cross two criteria: technical complexity and the emotional load of the claim.

  • Low technical complexity, low emotional load: full automation (e.g. windshield claim).
  • Low technical complexity, high emotional load: automation with a highly visible exit (e.g. car accident without injury).
  • High technical complexity, low emotional load: AI-human collaboration (e.g. claim with shared liability).
  • High technical complexity, high emotional load: human from the outset (e.g. death of a policyholder, claim with bodily injury).
[Figure: Which automation decision for each claim type? Claims mapped by technical complexity and emotional load. Full automation (70–80% of claims): windshield claims, standard breakdown assistance, routine information requests, simple mobile theft, appliance breakdowns. Automation with a very visible exit (15–20%): car accidents without injury, minor water damage, burglary, limited fire. AI-human collaboration (AI prepares, the human decides): shared-liability claims, complex agricultural claims, professional disputes, suspected fraud. Human from the first contact (5–10%): claims with bodily injury, death of a policyholder, isolated elderly persons, disability situations. Source: Armatis, based on Moravec's paradox (H. Moravec, 1988).]

The numbers confirm this approach. According to Capgemini, automation reduces processing times by 30% on average, but only when applied to the right cases. And according to Deloitte, 96% of insurers accelerating their digital transformation acknowledge that the primary barrier is not the technology: it is the buy-in of teams and customers.
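The two-by-two matrix is straightforward to operationalise as a lookup table. Scoring each axis (from claim metadata, declared circumstances, keywords) is the hard part in practice and is assumed to happen upstream here; the sketch only illustrates the routing decision itself:

```python
# The complexity x emotional-load matrix as a lookup table.
# Axis scoring is assumed to happen upstream; only the routing is shown.

ROUTING = {
    ("low", "low"):   "full_automation",               # e.g. windshield claim
    ("low", "high"):  "automation_with_visible_exit",  # e.g. car accident without injury
    ("high", "low"):  "ai_human_collaboration",        # e.g. shared-liability claim
    ("high", "high"): "human_from_first_contact",      # e.g. claim with bodily injury
}

def route(technical_complexity: str, emotional_load: str) -> str:
    """Pick the processing mode for one claim from the two axis scores."""
    return ROUTING[(technical_complexity, emotional_load)]

print(route("low", "high"))  # automation_with_visible_exit
```

The point of the table form is that emotional load is a first-class input: a technically simple claim does not default to full automation just because it is cheap to process.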

Key takeaway

A good journey design is not tested on a mockup: it is tested with real users under simulated stress. Two indicators to monitor from the pilot: the exit rate to an advisor (if it exceeds 40%, the automated journey is not reassuring enough) and the drop-off point (if policyholders consistently leave at the same step, that is where the design is failing). These two signals allow adjustments before large-scale deployment, and avoid emergency corrections after launch.
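Both pilot indicators can be computed from basic journey telemetry. The session format and step names below are assumptions made for the example; the 40% threshold is the one quoted above:

```python
# Computing the two pilot indicators from journey telemetry.
# The session dictionary format and step names are assumptions for this example.
from collections import Counter

def pilot_indicators(sessions):
    """sessions: dicts with 'exited_to_advisor', 'completed', 'last_step' keys."""
    n = len(sessions)
    exit_rate = sum(s["exited_to_advisor"] for s in sessions) / n
    # Drop-off point: the step where non-completed sessions most often end.
    drop_offs = Counter(s["last_step"] for s in sessions if not s["completed"])
    main_drop_off = drop_offs.most_common(1)[0][0] if drop_offs else None
    return {
        "exit_rate": exit_rate,
        "journey_reassuring_enough": exit_rate <= 0.40,  # the 40% threshold above
        "main_drop_off_step": main_drop_off,
    }

sessions = [
    {"exited_to_advisor": True,  "completed": False, "last_step": "photo_upload"},
    {"exited_to_advisor": False, "completed": True,  "last_step": "confirmation"},
    {"exited_to_advisor": False, "completed": False, "last_step": "photo_upload"},
    {"exited_to_advisor": False, "completed": True,  "last_step": "confirmation"},
]
print(pilot_indicators(sessions))
# exit rate 0.25 (under the 40% threshold); most common drop-off: photo_upload
```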

Dimension 3: transforming roles, not just processes

The legal framework and behavioural design are necessary conditions. But they are not sufficient. The third dimension, often underestimated, is the workforce transformation that must accompany automation. Without it, the first two dimensions remain fragile.

The hidden cost of the status quo

Today, a claims adjuster spends an average of 70% of their time on administrative tasks: data entry, compliance verification, requests for additional information, and following standard procedures. This work is necessary, but neither rewarding nor differentiating.

Worse: it generates what specialists call brown-out: a loss of meaning linked to the repeated execution of tasks perceived as disconnected from the true purpose of the role. The symptoms are well-known: chronic fatigue, cynicism, high turnover. And the cost of replacing an experienced adjuster often exceeds 50% of their annual salary, without counting the loss of institutional knowledge.

The Centaur model: AI + human, not AI versus human

Well-designed automation does not eliminate jobs: it transforms them. The term “Centaur model” comes from chess. In 1997, Deep Blue defeated Kasparov. But that was not the end of the story: it was the beginning of “Centaur Chess”, where a human assisted by AI beats both the human alone and the AI alone. Translated to insurance, this produces a clear distribution of responsibility.

AI handles (70 to 80% of volume):

  • Automatic data collection and verification
  • First-level analysis and categorisation
  • Anomaly and fraud detection
  • Full processing of simple claims (typically under €500)
 

The claims adjuster focuses on (20 to 30% of volume, but 80% of the value):

  • Supporting complex or emotionally charged situations
  • Negotiating in sensitive disputes
  • In-depth analysis of atypical files
  • Decisions requiring contextual judgement and empathy

[Figure: The Centaur model: how the adjuster's time is redistributed. Before automation (50 simple claims per day): 70% administrative tasks, 20% partial analysis, 10% expertise and relationship, i.e. 10% differentiating value. After automation (10 complex claims per day): 10% AI supervision, 30% analysis and preparation, 60% expertise and client relationship (empathy, disputes, sensitive situations), i.e. 60% differentiating value. Source: Capgemini World Insurance Report 2024.]

Instead of processing 50 simple files per day with little satisfaction, the adjuster handles 10 complex files with real impact. This is not a loss of position. It is an elevation of the role.

The three measurable benefits of a well-managed transformation

For policyholders: simple files are processed in minutes rather than days. Complex files receive the attention of an expert, not an overwhelmed administrator. Result: a 30% average reduction in processing times, according to Capgemini, with a concurrent improvement in customer satisfaction.

For employees: freed from repetitive tasks, adjusters rediscover the meaning of their work. The valued skills shift: emotional intelligence, deep technical expertise, capacity for analysis under ambiguity. Several insurers that have undergone this transformation report significant drops in turnover and improved attractiveness to younger talent.

For the organisation: automation reduces operational costs and creates a competitive advantage that is difficult to replicate. Anyone can buy the same technology. Deploying it well, with the right behavioural design, the right segmentation, and the right workforce transformation, is another matter entirely.

The three errors that cause automation projects to fail

Error 1: believing technology is enough. Deploying a chatbot without rethinking the customer journey is like buying a racing car to drive on a dirt track. Technology is only a tool. Journey design is what makes the difference.

Error 2: neglecting change management. Automation profoundly transforms roles. If employees are not trained, supported, and reassured, they will resist, consciously or not. Change management means explaining why automation is being introduced, training teams on new tools and expected new skills, and involving them in the design and continuous improvement process.

Error 3: automating without discernment. Not everything can be automated in the same way. The complexity/emotion matrix is not a theoretical concept: it is an operational decision-making tool. Automating a car accident with bodily injury in the same way as a windshield claim is a major risk to customer satisfaction and reputation.

Key takeaway

Workforce transformation cannot be improvised at deployment time. It must be prepared upstream, with three concrete questions: which tasks will adjusters stop doing? What new skills will they need to master? And how will their increased value be measured over time? Without answers to these three questions before launch, resistance to change is not a hypothetical risk — it is a certainty.

What Armatis observes in practice

With over 20 years of experience in customer relationship management for insurance players and 600 advisors trained in the specific challenges of the sector, Armatis supports its insurance clients through this transition. What we observe consistently: the projects that succeed are those that have worked on all three dimensions in parallel, not sequentially.

Regulatory compliance is established upstream, not scrambled at the last minute. Journey design is tested with real users before large-scale rollout. And team transformation is anticipated, with upskilling programmes focused on empathy, complex case analysis, and the management of sensitive situations.

A concrete example: a major insurance player we support has simultaneously improved the speed of its claims processing and the satisfaction of its policyholders, by combining specialist teams, CRM integration, and adaptive flow management. The customer retention rate exceeds 90%.

Conclusion: automate better, not more

The question is no longer “should we automate?” but “how do we automate intelligently?” The technology exists. The regulatory framework is established. The behavioural principles are documented. What separates success from failure is the ability to address all three dimensions together: respecting the legal framework, designing journeys that policyholders genuinely accept, and transforming roles rather than bypassing them.

In tomorrow’s insurance, the premium offer will not be absolute speed or maximum automation. It will be precision: being helped by a machine when it is efficient, being supported by a human expert when it is necessary. Between these two extremes, everything comes down to design.

Would you like to assess your claims management journeys? Our experts are available for a personalised diagnostic.

Frequently asked questions about automating claims management

What legal obligations apply to insurers before automating claims management?

Two texts define the framework: GDPR Article 22, which prohibits fully automated decisions with significant legal effects without any possibility of human recourse, and the European AI Act (in force since August 2024), which classifies certain AI systems in insurance as high risk, with obligations of transparency, traceability, and impact assessment. Compliance by design, built into journey design from the outset, is more effective and less costly than forced compliance after the fact.

What percentage of claims can realistically be automated?

The industry consensus is built around the Human-in-the-loop model: 70 to 80% of simple claims (windshield, standard breakdown assistance, routine information requests) can be processed automatically with algorithmic supervision. 15 to 20% require AI-human collaboration. And 5 to 10% require exclusive human intervention from the outset, particularly vulnerability situations or claims with personal injury.

How can insurers prevent policyholders from rejecting automated journeys?

Behavioural economics provides a clear answer: the default effect. Presenting automation as the natural path (rather than an option requiring justification), making the exit to an advisor visible and frictionless, and explaining the journey in three simple sentences (what, how long, how to exit) significantly increases adoption. A Forrester study shows that users are 2.5 times more likely to use an automated system when they can clearly see how to exit it.

Does automating claims management eliminate jobs?

Not if it is well designed. The Centaur model (AI + human) shows that automating administrative tasks frees adjusters to do what they do better than AI: supporting complex situations, negotiating sensitive disputes, and bringing empathy to difficult moments. Several insurers that have undergone this transformation report drops in turnover and improved role attractiveness.

What is the primary barrier to digital transformation in insurance?

Technology is not the problem. According to Deloitte, 96% of insurers accelerating their digital transformation acknowledge that the primary barrier is the buy-in of teams and customers. Change management (explaining, training, involving) is just as critical as the choice of technology.

Sources

Bain & Company (2023) — Customer Experience in Insurance: The Critical Role of Claims — bain.com

Capgemini (2024) — World Insurance Report 2024 — capgemini.com

Deloitte (2024) — Insurance Industry Outlook — deloitte.com

Forrester Research (2023) — The State of Customer Experience in Insurance — forrester.com

Insurance Nexus (2024) — Claims Processing Benchmark Study — insurancenexus.com

AGIRA (2024) — Annual fraud report — agira.asso.fr

Johnson, E. J. & Goldstein, D. G. (2003) — Do Defaults Save Lives? — Science, 302(5649) — doi.org

Moravec, H. (1988) — Mind Children: The Future of Robot and Human Intelligence — Harvard University Press

Regulation (EU) 2024/1689 — AI Act — eur-lex.europa.eu

Regulation (EU) 2016/679 — GDPR, Article 22 — eur-lex.europa.eu


Armatis is one of the leading European outsourcing service providers (BPO) in the field of customer experience. For over 35 years, it has been supporting large companies and SMEs in managing and transforming their customer service. Present in France, Tunisia, Portugal, Poland, Madagascar and Germany, the group combines sector expertise, multi-site European presence and cutting‑edge technological integration to meet the requirements of European and international markets.

Want to discuss your automation project?

Contact our experts for a personalized assessment of your journeys and a roadmap adapted to your context.
