Insurance: Automating Without Dehumanizing Claims Management

When journey design makes all the difference


Two insurers deploy the same AI technology to automate their claims management. Same algorithm, same budget, same ambition.

Six months later, the first shows 20% adoption with declining customer satisfaction. The second achieves 60% adoption with rising NPS scores.

What changed? Not the technology. The journey design.

This difference is not anecdotal. An Insurance Nexus study reveals that only 22% of insurers consider their claims processes “best-in-class,” while 87% of policyholders (according to Bain & Company) view their “claims experience” as decisive in their loyalty decision. In a market where acquiring a new customer costs five times more than retaining an existing one, this equation changes everything.

Yet many insurers continue to automate by focusing solely on operational efficiency, neglecting three dimensions that are critical to success: the legal framework, human behavior, and workforce transformation.

This article explores these three dimensions and shows how successful organizations don’t ask “how to automate more,” but “how to automate better.”


Dimension 1: The Legal Framework

Many organizations hesitate to automate for fear of sanctions. This fear is legitimate, but it shouldn’t be paralyzing. The regulatory framework is strict, but it’s workable.

Clear Rules for Insurance

The regulatory landscape varies by jurisdiction, but common principles emerge globally:

In the European Union, the AI Act (which entered into force on August 1, 2024, with obligations phasing in over the following years) classifies AI systems by risk level. For insurance, the situation is clear: risk assessment and pricing in life and health insurance are classified as high-risk systems.

This classification imposes three major obligations: full transparency on algorithm functioning, traceability of each decision, and fundamental rights impact assessment.

The penalties? Up to €15 million or 3% of global annual revenue for breaches of the high-risk obligations, and up to €35 million or 7% for prohibited practices, whichever is higher.

But routine claims management isn’t automatically classified as “high risk.” A windshield claim processed automatically with appeal rights? Acceptable. A fully automated coverage denial without human intervention? Problematic.

In the United States, while there’s no federal AI Act equivalent, a patchwork of state regulations and industry guidelines applies:

  • The NAIC’s Model Bulletin on Artificial Intelligence provides guidance on algorithmic transparency
  • State insurance departments increasingly scrutinize automated decision-making
  • Federal regulations like the Fair Credit Reporting Act (FCRA) apply to automated underwriting decisions

Globally, the trend is clear: automated decisions affecting consumers must be explainable, contestable, and subject to human review.
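
To make those three properties tangible, here is a minimal Python sketch, with hypothetical field names (no specific regulation or vendor schema is implied), of what a traceable, explainable, and contestable automated decision record could look like:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedClaimDecision:
    """Hypothetical record of an automated claims decision.

    Field names are illustrative. The point is that every automated decision
    carries its own plain-language explanation, an appeal path, and room for
    human review: the three properties regulators converge on.
    """
    claim_id: str
    outcome: str            # e.g. "approved", "referred_to_adjuster"
    explanation: str        # plain-language reason shown to the policyholder
    model_version: str      # which model or ruleset produced the outcome (traceability)
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_url: str = "/claims/appeal"   # always exposed: the decision is contestable
    human_reviewed: bool = False         # flipped to True when an adjuster confirms or overrides

decision = AutomatedClaimDecision(
    claim_id="CLM-2024-0001",
    outcome="approved",
    explanation="Windshield damage matches your glass coverage; the repair invoice is within policy limits.",
    model_version="glass-claims-rules-v3",
)
```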

The Practical Model: Segment by Complexity

The pragmatic solution emerging across the industry? Segment cases by complexity and stakes, applying the “Human-in-the-loop” principle.

70-80% of cases: Supervised automation
Glass claims, standard roadside assistance, routine information requests. AI handles these cases end to end, while a monitoring algorithm watches for anomalies and escalates questionable cases to a human.

You can: Fully automate these simple claims (typically < $500), provided you maintain appeal rights and explanation capabilities.

15-20% of cases: AI-human collaboration
Claims with shared liability, high amounts, unusual circumstances. AI collects information, performs initial analysis, suggests an assessment. But a claims adjuster makes the final decision.

You can: Use AI to pre-analyze and prepare complex files, with mandatory human decision.

5-10% of cases: Exclusive human intervention
Vulnerable situations (isolated elderly policyholders, people with disabilities), contentious disputes, serious claims involving significant bodily injury. These cases require empathy and nuanced judgment from the start.

You must: Guarantee direct human access for these sensitive situations.
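
As a rough illustration of this segmentation, here is a Python sketch of the triage logic. Thresholds, flags, and category names are invented for the example; a real insurer would tune all of them.

```python
from enum import Enum

class HandlingMode(Enum):
    SUPERVISED_AUTOMATION = "ai_end_to_end_with_monitoring"   # 70-80% of cases
    AI_HUMAN_COLLABORATION = "ai_prepares_human_decides"      # 15-20% of cases
    HUMAN_ONLY = "direct_human_handling"                      # 5-10% of cases

# Illustrative thresholds and claim types: every real insurer would tune these.
SIMPLE_CLAIM_TYPES = {"glass", "roadside_assistance", "info_request"}
AUTO_APPROVAL_LIMIT = 500  # dollars

def triage(claim_type: str, amount: float, *,
           vulnerable_policyholder: bool = False,
           bodily_injury: bool = False,
           disputed_liability: bool = False,
           anomaly_detected: bool = False) -> HandlingMode:
    """Route a claim to a handling mode following the human-in-the-loop model."""
    # Sensitive situations bypass automation entirely.
    if vulnerable_policyholder or bodily_injury:
        return HandlingMode.HUMAN_ONLY

    # Shared liability, high amounts, or flagged anomalies: AI prepares, a human decides.
    if disputed_liability or anomaly_detected or amount > AUTO_APPROVAL_LIMIT:
        return HandlingMode.AI_HUMAN_COLLABORATION

    # Routine, low-stakes claims: automated end to end, under monitoring.
    if claim_type in SIMPLE_CLAIM_TYPES:
        return HandlingMode.SUPERVISED_AUTOMATION

    # Anything unrecognized defaults to human collaboration rather than full automation.
    return HandlingMode.AI_HUMAN_COLLABORATION

assert triage("glass", 320) is HandlingMode.SUPERVISED_AUTOMATION
assert triage("auto_collision", 12_000, disputed_liability=True) is HandlingMode.AI_HUMAN_COLLABORATION
assert triage("auto_collision", 800, bodily_injury=True) is HandlingMode.HUMAN_ONLY
```

The deliberate design choice is the fallback: anything the rules do not recognize goes to AI-human collaboration, never to full automation.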

The Essentials for Action

The legal framework doesn’t block automation. It sets the boundaries. Three absolute rules to follow:

  1. Always maintain the possibility of human intervention, even if used in only 5% of cases
  2. Guarantee transparency and explainability: the policyholder must understand why a decision was made
  3. Segment intelligently: not everything can be automated the same way
 

This architecture isn’t just a constraint. It’s also a quality guarantee: complex cases deserve a human eye, simple cases deserve immediate resolution.

💡 Key Takeaway

Global regulatory frameworks don’t prohibit automation; they guide it. The key: segment cases by complexity level, always maintain the possibility of human intervention, and guarantee transparency and explainability. The “Human-in-the-loop” model reconciles compliance and efficiency.

Dimension 2: Behavioral Design

Legal compliance sets the framework. But between “what’s permitted” and “what actually works,” there’s a world. A world that behavioral economics helps us understand.

The Default Effect: The Most Powerful Invisible Force in Design

Consider two insurers automating their auto claims assistance service.

Insurer A presents their journey this way:
“You’ve had an auto accident. Would you like assistance from our chatbot to expedite your claim? [Yes] [No, I prefer to speak with an advisor]”

Insurer B presents the same journey differently:
“Your digital assistant will process your request immediately. You can speak with an advisor at any time by clicking here.”

Same technology. Same processing capability. But radically different results: 20% adoption for Insurer A, 60% for Insurer B.

Why? The default effect.

Behavioral economics research on defaults (Johnson & Goldstein, 2003) shows that individuals are roughly 27% more likely to choose the option presented as the “default”, even when they’re free to change.

This isn’t manipulation. It’s thoughtful design. In a moment of stress (a claim), presenting the fastest option as the default choice simplifies the decision. The policyholder always retains control, but is no longer paralyzed by indecision.

And here’s the paradox: users placed in opt-out (like Insurer B) feel more autonomous than those in opt-in (like Insurer A). Why? Because they didn’t have to “justify” their choice from the start. They simply moved forward, knowing they could exit at any time.

The 4 Golden Rules of Automated Journey Design

Based on these findings, here are four principles that separate automation that is merely endured from automation that is genuinely adopted.

Rule 1: Present automation as the natural path, not an option to justify

❌ Bad design:
“Would you like to use our automated assistant or would you prefer to speak with an advisor?”

✅ Good design:
“Your request is being processed. At any time, you can speak with an advisor.”

The difference? The first forces a choice. The second offers security.

Rule 2: Make the exit easy and visible, not shameful

The “Speak to an advisor” button must be:

  • Permanently visible (not hidden in a menu)
  • Positively worded (not “Are you unsatisfied?”)
  • Frictionless (no justification required)
 

A Forrester study reveals that users are 2.5 times more likely to use an automated system when they can clearly see how to exit it.
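
To illustrate Rules 1 and 2 together, here is a hypothetical journey configuration (the structure and field names are invented, not any particular chatbot platform’s API): automation is the default path, and the human exit stays visible on every screen, positively worded and free of justification forms.

```python
# Hypothetical journey configuration: automation is the default path,
# and the human exit is permanently visible, positively worded, frictionless.
CLAIM_INTAKE_JOURNEY = {
    "default_path": "digital_assistant",       # Rule 1: automation presented as the natural path
    "opening_message": (
        "Your digital assistant will process your request immediately. "
        "You can speak with an advisor at any time."
    ),
    "human_exit": {
        "visible": "always",                    # Rule 2: not hidden in a menu
        "label": "Speak with an advisor",       # positively worded
        "requires_justification": False,        # frictionless: no "why?" form
    },
}

def render_footer(journey: dict) -> str:
    """Every screen of the automated journey repeats the exit affordance."""
    exit_cfg = journey["human_exit"]
    return f"[{exit_cfg['label']}]" if exit_cfg["visible"] == "always" else ""

print(CLAIM_INTAKE_JOURNEY["opening_message"])
print(render_footer(CLAIM_INTAKE_JOURNEY))
```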

Rule 3: Explain without drowning

The policyholder doesn’t want an AI course. They want to know three things:

  1. What’s happening now?
  2. How long will it take?
  3. How do I exit if I want?

Keep explanations short, concrete, reassuring.

❌ “Our artificial intelligence system is analyzing your file using machine learning algorithms compliant with privacy regulations…”

✅ “We’re analyzing your file. Response in 2 minutes. You can speak with an advisor here.”

Rule 4: Recognize when human intervention is necessary

And this is where a fascinating concept from AI research comes in: Moravec’s paradox.

Moravec’s Paradox: When AI Fails Where Humans Excel

In 1988, roboticist Hans Moravec formulated a counter-intuitive observation: what’s hard for humans is often easy for machines, and what’s effortless for humans is often hard for machines.

Concrete examples in insurance:

AI excels at:

  • Processing 10,000 invoices in 30 seconds
  • Detecting inconsistencies in a claim declaration
  • Identifying potential fraud patterns ($628 million detected in 2024, +35% vs 2023 according to industry data)
  • Proposing a settlement amount based on 50,000 similar cases
 

Humans excel at:

  • Understanding that an isolated elderly person needs reassurance before any procedure
  • Detecting that a customer with a disability needs an adapted communication format
  • Arbitrating an edge case where two contractual principles conflict
  • Accompanying a family with empathy after a serious claim

The problem with many automations? They reverse this logic. They send “simple” cases (in the technical sense) to AI, without asking if these cases are “simple” in the human sense.

A windshield claim? Technically simple, emotionally neutral. Perfect for automation.

An auto claim with minor injury? Technically simple too, but emotionally charged. A person in shock after an accident may not be ready to interact with a chatbot, however capable it may be.

The Design That Reconciles AI and Empathy

Successful organizations don’t ask “Can we automate this case?” but “Does this case deserve human intervention from the start?”

They cross two criteria:

  1. Technical complexity (can AI handle it?)
  2. Emotional charge (does the policyholder need immediate empathy?)
 

This yields a simple matrix:

                   | Low emotional charge        | High emotional charge
Low complexity     | ✅ Full automation           | ⚠️ Automation + easy exit
High complexity    | ⚠️ AI-human collaboration    | 🚫 Human from the start

Examples:

  • Windshield claim (low complexity + low emotion) → full automation
  • Auto claim without injury (low complexity + medium emotion) → automation with highly visible exit
  • Auto claim with bodily injury (medium complexity + high emotion) → human routing from start
  • Policyholder death (high complexity + high emotion) → human exclusively
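
Expressed as code, the matrix becomes a simple lookup. The Python sketch below assumes each claim has already been scored as “low” or “high” on both axes; that scoring is the hard part and is not shown here.

```python
from enum import Enum

class Route(Enum):
    FULL_AUTOMATION = "full_automation"
    AUTOMATION_WITH_EASY_EXIT = "automation_plus_visible_human_exit"
    AI_HUMAN_COLLABORATION = "ai_prepares_human_decides"
    HUMAN_FROM_START = "human_from_the_start"

# The complexity/emotion matrix as a lookup table: (complexity, emotional charge) -> route.
ROUTING_MATRIX = {
    ("low", "low"):   Route.FULL_AUTOMATION,
    ("low", "high"):  Route.AUTOMATION_WITH_EASY_EXIT,
    ("high", "low"):  Route.AI_HUMAN_COLLABORATION,
    ("high", "high"): Route.HUMAN_FROM_START,
}

def route_claim(technical_complexity: str, emotional_charge: str) -> Route:
    """Cross technical complexity with emotional charge to pick a handling route."""
    return ROUTING_MATRIX[(technical_complexity, emotional_charge)]

# Two of the article's examples:
assert route_claim("low", "low") is Route.FULL_AUTOMATION       # windshield claim
assert route_claim("high", "high") is Route.HUMAN_FROM_START    # policyholder death
```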

 

What the Data Confirms

Industry figures support this approach:

  • A Capgemini study reveals that automation reduces processing times by 30% on average, but only when applied to the right cases.
  • But according to Deloitte, 96% of insurers accelerating their digital transformation recognize that the main obstacle isn’t technology; it’s adoption by teams and customers.

Successful automation doesn’t replace humans. It frees them for what they do best.

💡 Key Takeaway

The default effect shows that two identical journeys can produce radically different results.

The 4 golden rules:

  • present automation as the natural path,
  • make the exit easy,
  • explain without drowning,
  • recognize when human intervention is necessary. 
 

Moravec’s paradox reminds us that AI excels at technical complexity, humans at empathy and contextual judgment. Good design reconciles both.

Dimension 3: Workforce Transformation

The legal framework and behavioral design are essential. But they’re not enough. The third, often underestimated dimension is the workforce transformation that accompanies automation.

The Hidden Cost of Status Quo

Today, a claims adjuster spends an average of 70% of their time on administrative tasks: data entry, compliance verification, supplemental information requests, standard procedure follow-up.

This work is necessary. But it’s neither fulfilling nor differentiating.

Worse: it generates what specialists call “brown-out” — that loss of meaning linked to the repeated execution of tasks perceived as disconnected from the real purpose of the job.

The symptoms? Chronic fatigue, cynicism, high turnover. The cost? Replacing an experienced adjuster often costs more than 50% of their annual salary, not counting the loss of business knowledge and impact on service quality.

The Centaur Model: AI + Human, Not AI vs Human

Well-designed automation offers a way up and out. It doesn’t eliminate jobs: it transforms them.

The term “Centaur model” comes from chess. In 1997, Deep Blue beat Kasparov. The end of the story? No: the beginning of a new era, “centaur chess,” in which a human assisted by AI beats both the human alone and the AI alone.

Transposed to insurance, this yields:

AI handles (70-80% of volume):

  • Automatic data collection and verification
  • First-level analysis and categorization
  • Anomaly and fraud detection
  • Complete processing of simple claims (< $500)

The adjuster focuses on (20-30% of volume, but 80% of value):

  • Supporting complex or emotionally charged situations
  • Negotiating sensitive disputes
  • In-depth analysis of atypical files
  • Decisions requiring contextual judgment and empathy

The result? Instead of processing 50 simple files per day with little satisfaction, the adjuster handles 10 complex files with real impact.
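
As a minimal sketch of that division of labor (hypothetical data structures and function names), the AI’s pre-analysis can be modeled as an input the adjuster reviews, never a decision that ships on its own:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIPreAnalysis:
    """What the AI hands to the adjuster: data gathered, checks run, a suggestion."""
    claim_id: str
    documents_verified: bool
    fraud_score: float           # 0.0 (no signal) to 1.0 (strong signal)
    suggested_settlement: float  # based on comparable historical cases
    rationale: str               # plain-language summary for the adjuster and policyholder

@dataclass
class AdjusterDecision:
    claim_id: str
    settlement: float
    accepted_ai_suggestion: bool
    notes: str

def adjuster_review(pre: AIPreAnalysis, adjuster_settlement: Optional[float] = None,
                    notes: str = "") -> AdjusterDecision:
    """The human decision step: the AI's suggestion is an input, never the final word."""
    final = adjuster_settlement if adjuster_settlement is not None else pre.suggested_settlement
    return AdjusterDecision(
        claim_id=pre.claim_id,
        settlement=final,
        accepted_ai_suggestion=(adjuster_settlement is None),
        notes=notes or pre.rationale,
    )

pre = AIPreAnalysis("CLM-2024-0042", True, 0.08, 2_450.0,
                    "Damage consistent with the declaration; amount in line with similar historical cases.")
decision = adjuster_review(pre, adjuster_settlement=2_600.0,
                           notes="Increased to cover documented loss of use.")
```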

Three Measurable Benefits

When this transformation is well-executed, three benefits clearly emerge:

1. For policyholders: speed + quality

Simple files are processed in minutes instead of days. Complex files benefit from expert attention, not an overwhelmed administrator.

Result: according to Capgemini, 30% reduction in processing times on average, with concurrent customer satisfaction improvement.

2. For employees: rediscovered meaning

Freed from repetitive tasks, adjusters rediscover the meaning of their profession. They’re no longer “form processors,” but advisors, negotiators, experts.

The skills that are valued change: emotional intelligence, sharp technical expertise, and the ability to analyze ambiguous situations.

Several insurers who have made this transformation report a significant reduction in turnover and improved career attractiveness to young talent.

3. For the company: efficiency + differentiation

Automation reduces operational costs (fewer errors, less re-entry, less wasted time). But above all, it creates a hard-to-copy competitive advantage: superior customer experience.

Anyone can buy the same technology. But deploying it well, with the right behavioral design, the right segmentation, and the right workforce transformation, is another story.

The 3 Failure-Causing Errors

If the benefits are clear, why do so many projects fail? Three errors recur systematically:

Error 1: Believing technology is enough

Deploying a chatbot without rethinking the customer journey is like buying a race car and driving on a dirt road. Technology is just a tool. Journey design makes the difference.

Error 2: Neglecting change management

Automation profoundly transforms jobs. If employees aren’t trained, supported, reassured, they’ll resist — consciously or not. And they’re right: no one adopts what they don’t understand.

Change management means:

  • Explaining why we automate (not to eliminate jobs, but to enhance them)
  • Training on new tools and expected new skills
  • Involving teams in design and continuous improvement

Error 3: Automating indiscriminately

Not everything can be automated the same way. The complexity/emotion matrix seen above isn’t just a theoretical concept: it’s an operational decision tool.

Automating an auto claim with bodily injury the same way as a windshield claim means taking major risks on customer satisfaction and reputation.

💡 Key Takeaway

Automation transforms jobs profoundly.

The Centaur model (AI + Human) frees employees from administrative tasks to refocus them on expertise and customer relations.

Three benefits:

  • speed + quality for policyholders,
  • rediscovered meaning for employees,
  • efficiency + differentiation for the company.
 

Three errors to avoid: believing technology is enough, neglecting change management, automating indiscriminately.

Conclusion: Automation as a Means, Human as the End

At the end of this journey, one thing is clear: the question is no longer “should we automate?” but “how do we automate intelligently?”

The technologies are here. The regulatory framework is established. Behavioral principles are documented.

What makes the difference between success and failure? Three elements:

  1. Respect for the legal framework — Regulations aren’t obstacles, but guardrails that protect insurers and policyholders. Compliance by design is more effective and less costly than forced compliance.
  2. Application of behavioral science — The default effect can triple adoption without changing technology. Transparency and escalation possibilities create trust. These principles aren’t cosmetic: they’re decisive.
  3. Human-centeredness — Automation only makes sense if it frees humans for what they do best. Moravec’s paradox reminds us: AI excels at technical complexity, humans at empathy and contextual judgment.

In tomorrow’s insurance, luxury won’t be absolute speed or maximum automation. It will be appropriateness: being helped by a machine when it’s efficient, being accompanied by an expert human when it’s necessary.

Between these two extremes, everything is a matter of design. Design that respects the law, relies on science, and places the human, policyholder and employee alike, at the center.

That’s what it means to automate without dehumanizing.

Sources and References

Industry Studies and Reports

Bain & Company (2023)
“Customer Experience in Insurance: The Critical Role of Claims”
https://www.bain.com/insights/topics/customer-experience/

Capgemini (2024)
“World Insurance Report 2024”
https://www.capgemini.com/insights/research-library/world-insurance-report/

Deloitte (2024)
“Insurance Industry Outlook”
https://www2.deloitte.com/us/en/pages/financial-services/articles/insurance-industry-outlook.html

Forrester Research (2023)
“The State of Customer Experience in Insurance”
https://www.forrester.com/research/

Insurance Nexus (2024)
“Claims Processing Benchmark Study”
https://www.insurancenexus.com/research/

Coalition Against Insurance Fraud (2024)
Annual Fraud Report
https://insurancefraud.org/

Academic Research

Johnson, E. J., & Goldstein, D. G. (2003)
“Do Defaults Save Lives?”
Science, 302(5649), 1338-1339.
https://doi.org/10.1126/science.1091721

Moravec, H. (1988)
“Mind Children: The Future of Robot and Human Intelligence”
Harvard University Press

Regulatory Framework

European Union

Regulation (EU) 2024/1689 (AI Act)
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

Regulation (EU) 2016/679 (GDPR) – Article 22
https://eur-lex.europa.eu/eli/reg/2016/679/oj

United States

NAIC Model Bulletin on Use of Artificial Intelligence
National Association of Insurance Commissioners
https://content.naic.org/

Fair Credit Reporting Act (FCRA)
15 U.S.C. § 1681
https://www.ftc.gov/legal-library/browse/statutes/fair-credit-reporting-act

State Regulations on AI and Algorithmic Decision-Making
Various state insurance departments


Want to discuss your automation project?

Contact our experts for a personalized assessment of your journeys and a roadmap adapted to your context.
