
Automating claims management is achievable, cost-effective, and fully compatible with European regulation — provided you do not focus solely on operational efficiency. Insurers who succeed at this transformation work on three dimensions simultaneously: the regulatory framework (AI Act + GDPR), the behavioural design of customer journeys, and workforce transformation. Those who neglect even one of these dimensions get disappointing results, regardless of the technology they deploy.
The numbers speak for themselves. According to Insurance Nexus, only 22% of insurers consider their claims processes best-in-class. And according to Bain & Company, 87% of policyholders view their claims experience as decisive in their loyalty decision. In a market where acquiring a new customer costs five times more than retaining an existing one, automating badly is riskier than not automating at all.
This article explores the three dimensions that make the difference and shows how successful organisations stop asking “how do we automate more?” and start asking “how do we automate better?”
Two insurers deploy the same AI to automate their claims management. Same algorithm, same budget, same ambition. Six months later, the first shows 20% adoption with declining customer satisfaction. The second achieves 60% adoption with a rising NPS.
What changed? Not the technology. The journey design.
Insurer A presents their service as follows: “You have been involved in a car accident. Would you like assistance from our chatbot to expedite your claim? [Yes] [No, I prefer to speak with an advisor]”
Insurer B presents the same service differently: “Your digital assistant will process your request immediately. You can speak with an advisor at any time by clicking here.”
Same technology. Radically different results. This case illustrates precisely why automation is first a design question, and only then a technology question.
Many organisations hesitate to automate out of fear of sanctions. This fear is legitimate, but it should not be paralysing. The European framework is strict but workable.
The European Artificial Intelligence Regulation, which came into force on 1 August 2024, classifies AI systems according to their level of risk. For insurance, the picture is clear: risk assessment and pricing in life and health insurance are classified as high-risk systems. This classification imposes three obligations: full transparency on how the algorithm works, traceability of every decision, and a fundamental rights impact assessment.
The penalties for non-compliance? For the most serious breaches, up to €35 million or 7% of global annual turnover, whichever is higher.
Routine claims management, however, is not automatically classified as high risk. A windshield claim processed automatically with an appeal mechanism? Acceptable. A coverage refusal fully automated with no possibility of human intervention? Problematic.
Since 2018, the GDPR has established a fundamental principle in Article 22: every person has the right not to be subject to a decision based solely on automated processing that produces significant legal effects, without any possibility of human intervention. Three exceptions exist: explicit consent, contractual necessity, or legal authorisation. In all cases, the policyholder must be able to obtain an explanation of the decision, contest it, and request a human review.
The pragmatic solution emerging across the industry is straightforward in principle, more demanding in practice: segment claims by complexity and stakes, applying the Human-in-the-loop principle — human supervision within the decision-making process.
| Volume of cases | Type of claims | Processing mode |
|---|---|---|
| 70 to 80% | Windshield claims, standard breakdown assistance, routine information requests (typically <€500) | Supervised automation: AI handles end-to-end, an algorithm monitors for anomalies and escalates questionable cases to a human. |
| 15 to 20% | Claims with shared liability, high amounts, unusual circumstances | AI-human collaboration: AI collects, analyses, and suggests. The final decision belongs to the claims adjuster. |
| 5 to 10% | Vulnerability situations, contentious disputes, serious claims with personal injury | Exclusive human intervention: empathy and nuanced judgement from the very first contact. |
This architecture is not merely a regulatory constraint. It is also a quality guarantee: complex cases deserve a human eye, simple cases deserve immediate resolution.
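The three-tier segmentation above can be sketched as a simple routing rule. This is a minimal sketch: the field names, the €500 threshold, and the mode labels are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass

# Hypothetical claim record; the fields and the €500 threshold below are
# assumptions made for this sketch, not production values.
@dataclass
class Claim:
    claim_type: str          # e.g. "windshield", "breakdown", "collision"
    amount_eur: float
    shared_liability: bool
    bodily_injury: bool
    vulnerable_customer: bool

def route(claim: Claim) -> str:
    """Return a processing mode following the three-tier segmentation."""
    # Tier 3 (5-10%): exclusive human intervention from the first contact
    if claim.bodily_injury or claim.vulnerable_customer:
        return "human_only"
    # Tier 2 (15-20%): AI collects and suggests; the adjuster decides
    if claim.shared_liability or claim.amount_eur >= 500:
        return "ai_human_collaboration"
    # Tier 1 (70-80%): AI handles end-to-end, with anomaly escalation
    return "supervised_automation"
```

The design choice worth noting: the human-only checks come first, so no combination of other attributes can route a vulnerable customer or a bodily-injury claim to automation.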
Key takeaway
The AI Act and the GDPR do not block automation: they set the conditions. Before deploying a solution, ask three questions: does this case have a significant legal effect? Can the policyholder contest the decision? Is the decision explainable in plain language? If all three answers are yes, the project is on the right track. If any one is no, that is a risk to correct before deployment, not after.
Legal compliance sets the framework. But between what is permitted and what actually works, there is a gap that behavioural economics helps to understand. Returning to the example of the two insurers: the difference in results does not come from the technology. It comes from the default effect.
A well-known study in behavioural economics (Johnson & Goldstein, 2003) shows that individuals are significantly more likely to choose the option presented as the default, even when they are free to change. In a moment of stress such as a claim, presenting the fastest option as the natural path simplifies the decision. The policyholder retains full control, but is no longer paralysed by indecision.
And here is the paradox: users placed in an opt-out position (like Insurer B) feel more autonomous than those placed in an opt-in position (like Insurer A). Because they did not have to justify their choice from the outset. They simply moved forward, knowing they can exit at any time.
Rule 1: present automation as the natural path, not an option requiring justification
“Would you like to use our automated assistant, or would you prefer to speak with an advisor?”
“Your request is being processed. You can speak with an advisor at any time.”
The difference? The first formulation forces a choice. The second offers reassurance.
Rule 2: make the exit easy and visible, not shameful
The “Speak with an advisor” button must be permanently visible, positively worded, and accessible without friction — no justification required. A Forrester study shows that users are 2.5 times more likely to use an automated system when they can clearly see how to exit it.
Rule 3: explain without overwhelming
The policyholder does not want a lecture on AI. They want to know three things: what is happening now, how long it will take, and how to exit if needed.
“Our artificial intelligence system is analysing your file using machine learning algorithms compliant with GDPR standards…”
“We are analysing your file. Response within 2 minutes. You can speak with an advisor here.”
Rule 4: recognise when a human is necessary
This is where Moravec’s paradox comes in. Formulated in 1988 by researcher Hans Moravec, it observes that what is hard for humans (rapid, large-scale computation) is often easy for machines, while what humans do effortlessly (perception, empathy, contextual judgement) remains hard for machines. The observation applies directly to insurance.
| AI excels at | Humans excel at |
|---|---|
| Processing 10,000 invoices in 30 seconds | Understanding that an isolated elderly person needs reassurance before any procedure |
| Detecting inconsistencies in a claims declaration | Recognising that a customer with a disability needs an adapted communication format |
| Identifying potential fraud patterns (€628M detected in 2024, +35% vs 2023, according to AGIRA) | Adjudicating an edge case where two contractual principles conflict |
| Proposing an indemnification amount based on 50,000 similar cases | Accompanying a family with empathy after a serious claim |
The problem with many automation projects? They invert this logic. They route technically simple cases to AI, without asking whether those cases are simple in human terms. A windshield claim is technically simple and emotionally neutral: perfect for automation. A car accident with minor injuries is also technically simple, but emotionally charged. A person in shock after an accident may not be ready to interact with a chatbot, however capable.
Successful organisations do not ask “can we automate this case?” but rather “does this case deserve human intervention from the outset?” They cross two criteria: technical complexity and the emotional load of the claim.
| | Low emotional load | High emotional load |
|---|---|---|
| Low technical complexity | Full automation (e.g. windshield claim) | Automation with highly visible exit (e.g. car accident without injury) |
| High technical complexity | AI-human collaboration (e.g. claim with shared liability) | Human from the outset (e.g. death of a policyholder, claim with bodily injury) |
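The matrix above can be expressed as a simple lookup. This is an illustrative sketch: the mode labels and the two-level scales are assumptions made for the example.

```python
# Complexity/emotion matrix as a decision table. Keys are
# (technical_complexity, emotional_load); values are processing modes.
# All labels are illustrative assumptions, not a standard taxonomy.
MATRIX = {
    ("low", "low"):   "full_automation",                # e.g. windshield claim
    ("low", "high"):  "automation_with_visible_exit",   # e.g. accident, no injury
    ("high", "low"):  "ai_human_collaboration",         # e.g. shared liability
    ("high", "high"): "human_from_outset",              # e.g. bodily injury, death
}

def processing_mode(technical_complexity: str, emotional_load: str) -> str:
    """Look up the processing mode for a claim's position in the matrix."""
    return MATRIX[(technical_complexity, emotional_load)]
```

Encoding the matrix as data rather than nested conditionals makes the routing policy auditable at a glance, which matters for the traceability obligations discussed earlier.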
The numbers confirm this approach. According to Capgemini, automation reduces processing times by 30% on average, but only when applied to the right cases. And according to Deloitte, 96% of insurers accelerating their digital transformation acknowledge that the primary barrier is not the technology: it is the buy-in of teams and customers.
Key takeaway
A good journey design is not tested on a mockup: it is tested with real users under simulated stress. Two indicators to monitor from the pilot: the exit rate to an advisor (if it exceeds 40%, the automated journey is not reassuring enough) and the drop-off point (if policyholders consistently leave at the same step, that is where the design is failing). These two signals allow adjustments before large-scale deployment, and avoid emergency corrections after launch.
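As a rough sketch, the two pilot indicators could be computed from a session log like this. The event shape and field names are assumptions for the example; real journey analytics would draw on the insurer's own tracking data.

```python
from collections import Counter

def pilot_kpis(sessions: list[dict]) -> dict:
    """Compute the two pilot indicators: exit rate to an advisor,
    and the most common drop-off step among abandoned sessions."""
    total = len(sessions)
    exits = sum(1 for s in sessions if s["exited_to_advisor"])
    exit_rate = exits / total if total else 0.0
    # Most frequent last step among sessions that did not complete
    drop_offs = Counter(s["last_step"] for s in sessions if not s["completed"])
    drop_off_point = drop_offs.most_common(1)[0][0] if drop_offs else None
    return {
        "exit_rate": exit_rate,
        "exit_rate_alert": exit_rate > 0.40,  # the 40% threshold above
        "drop_off_point": drop_off_point,
    }
```

Run weekly during the pilot, these two numbers give an early warning before large-scale deployment: a rising exit rate signals a journey that does not reassure, and a stable drop-off point localises the failing step.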
The legal framework and behavioural design are necessary conditions. But they are not sufficient. The third dimension, often underestimated, is the workforce transformation that must accompany automation. Without it, the first two dimensions remain fragile.
Today, a claims adjuster spends an average of 70% of their time on administrative tasks: data entry, compliance verification, requests for additional information, and following standard procedures. This work is necessary, but neither rewarding nor differentiating.
Worse: it generates what specialists call brown-out, a loss of meaning linked to the repeated execution of tasks perceived as disconnected from the true purpose of the role. The symptoms are well known: chronic fatigue, cynicism, high turnover. And the cost of replacing an experienced adjuster often exceeds 50% of their annual salary, not counting the loss of institutional knowledge.
Well-designed automation does not eliminate jobs: it transforms them. The term “Centaur model” comes from chess. In 1997, Deep Blue defeated Kasparov, but that was not the end of the story. It was the beginning of “Centaur Chess”, in which a human assisted by AI beats both the human alone and the AI alone. Translated to insurance, this produces a clear division of responsibility.
AI handles (70 to 80% of volume):

- Data entry, document collection, and compliance verification
- Routine information requests and standard procedure follow-up
- Anomaly detection and fraud pattern identification, with escalation of questionable cases

The claims adjuster focuses on (20 to 30% of volume, but 80% of the value):

- Complex cases: shared liability, high amounts, unusual circumstances
- Sensitive situations: vulnerability, contentious disputes, claims with personal injury
- Empathetic support for policyholders after a serious claim
Instead of processing 50 simple files per day with little satisfaction, the adjuster handles 10 complex files with real impact. This is not a loss of position. It is an elevation of the role.
For policyholders: simple files are processed in minutes rather than days. Complex files receive the attention of an expert, not an overwhelmed administrator. Result: a 30% average reduction in processing times, according to Capgemini, with a concurrent improvement in customer satisfaction.
For employees: freed from repetitive tasks, adjusters rediscover the meaning of their work. The valued skills shift: emotional intelligence, deep technical expertise, capacity for analysis under ambiguity. Several insurers that have undergone this transformation report significant drops in turnover and improved attractiveness to younger talent.
For the organisation: automation reduces operational costs and creates a competitive advantage that is difficult to replicate. Anyone can buy the same technology. Deploying it well, with the right behavioural design, the right segmentation, and the right workforce transformation, is another matter entirely.
Error 1: believing technology is enough. Deploying a chatbot without rethinking the customer journey is like buying a racing car to drive on a dirt track. Technology is only a tool. Journey design is what makes the difference.
Error 2: neglecting change management. Automation profoundly transforms roles. If employees are not trained, supported, and reassured, they will resist, consciously or not. Change management means explaining why automation is being introduced, training teams on new tools and expected new skills, and involving them in the design and continuous improvement process.
Error 3: automating without discernment. Not everything can be automated in the same way. The complexity/emotion matrix is not a theoretical concept: it is an operational decision-making tool. Automating a car accident with bodily injury in the same way as a windshield claim is a major risk to customer satisfaction and reputation.
Key takeaway
Workforce transformation cannot be improvised at deployment time. It must be prepared upstream, with three concrete questions: which tasks will adjusters stop doing? What new skills will they need to master? And how will their increased value be measured over time? Without answers to these three questions before launch, resistance to change is not a hypothetical risk — it is a certainty.
With over 20 years of experience in customer relationship management for insurance players and 600 advisors trained in the specific challenges of the sector, Armatis supports its insurance clients through this transition. What we observe consistently: the projects that succeed are those that have worked on all three dimensions in parallel, not sequentially.
Regulatory compliance is established upstream, not scrambled at the last minute. Journey design is tested with real users before large-scale rollout. And team transformation is anticipated, with upskilling programmes focused on empathy, complex case analysis, and the management of sensitive situations.
A concrete example: a major insurance player we support has simultaneously improved the speed of its claims processing and the satisfaction of its policyholders, by combining specialist teams, CRM integration, and adaptive flow management. The customer retention rate exceeds 90%.
The question is no longer “should we automate?” but “how do we automate intelligently?” The technology exists. The regulatory framework is established. The behavioural principles are documented. What separates success from failure is the ability to address all three dimensions together: respecting the legal framework, designing journeys that policyholders genuinely accept, and transforming roles rather than bypassing them.
In tomorrow’s insurance, the premium offer will not be absolute speed or maximum automation. It will be precision: being helped by a machine when it is efficient, being supported by a human expert when it is necessary. Between these two extremes, everything comes down to design.
Would you like to assess your claims management journeys? Our experts are available for a personalised diagnostic.
What does European law say about automated claims decisions?
Two texts define the framework: GDPR Article 22, which prohibits fully automated decisions with significant legal effects without any possibility of human recourse, and the European AI Act (in force since August 2024), which classifies certain AI systems in insurance as high risk, with obligations of transparency, traceability, and impact assessment. Compliance by design, built into journey design from the outset, is more effective and less costly than forced compliance after the fact.
Which claims can be automated, and which require a human?
The industry consensus is built around the Human-in-the-loop model: 70 to 80% of simple claims (windshield, standard breakdown assistance, routine information requests) can be processed automatically with algorithmic supervision. 15 to 20% require AI-human collaboration. And 5 to 10% require exclusive human intervention from the outset, particularly vulnerability situations or claims with personal injury.
How do you get policyholders to accept automation?
Behavioural economics provides a clear answer: the default effect. Presenting automation as the natural path (rather than an option requiring justification), making the exit to an advisor visible and frictionless, and explaining the journey in three simple sentences (what, how long, how to exit) significantly increases adoption. A Forrester study shows that users are 2.5 times more likely to use an automated system when they can clearly see how to exit it.
Will automation eliminate claims adjusters’ jobs?
Not if it is well designed. The Centaur model (AI + human) shows that automating administrative tasks frees adjusters to do what they do better than AI: supporting complex situations, negotiating sensitive disputes, and bringing empathy to difficult moments. Several insurers that have undergone this transformation report drops in turnover and improved role attractiveness.
Why do so many automation projects stall?
Technology is not the problem. According to Deloitte, 96% of insurers accelerating their digital transformation acknowledge that the primary barrier is the buy-in of teams and customers. Change management (explaining, training, involving) is just as critical as the choice of technology.
Bain & Company (2023) — Customer Experience in Insurance: The Critical Role of Claims — bain.com
Capgemini (2024) — World Insurance Report 2024 — capgemini.com
Deloitte (2024) — Insurance Industry Outlook — deloitte.com
Forrester Research (2023) — The State of Customer Experience in Insurance — forrester.com
Insurance Nexus (2024) — Claims Processing Benchmark Study — insurancenexus.com
AGIRA (2024) — Annual fraud report — agira.asso.fr
Johnson, E. J. & Goldstein, D. G. (2003) — Do Defaults Save Lives? — Science, 302(5649) — doi.org
Moravec, H. (1988) — Mind Children: The Future of Robot and Human Intelligence — Harvard University Press
Regulation (EU) 2024/1689 — AI Act — eur-lex.europa.eu
Regulation (EU) 2016/679 — GDPR, Article 22 — eur-lex.europa.eu
Armatis is one of the leading European outsourcing service providers (BPO) in the field of customer experience. For over 35 years, it has been supporting large companies and SMEs in managing and transforming their customer service. Present in France, Tunisia, Portugal, Poland, Madagascar and Germany, the group combines sector expertise, multi-site European presence and cutting‑edge technological integration to meet the requirements of European and international markets.
Contact our experts for a personalized assessment of your journeys and a roadmap adapted to your context.
Join the leaders who trust our multilingual and technological expertise.