Should AI Decide What Your Customers Can Buy? A Strategic Issue for Marketplaces


The Digital Services Act (DSA), false positives, over-moderation: how marketplaces strike the right balance between AI and human judgment.

Insights from Stéphanie Akriche, Armatis Head of International Sales.


Every marketplace that scales beyond a certain volume ends up facing the same inflection point: automation is no longer just an efficiency gain, it becomes a governance decision. And governance decisions have consequences.

Between a system that misses too much and one that blocks too much, the real question is not whether to automate, but how to automate without destroying what you are trying to protect. Across the platforms Armatis supports, misaligned moderation does not just create compliance risk: it erodes seller trust, degrades the buyer experience, and quietly hollows out catalogue depth.


When compliance creates new risks

The Digital Services Act, applicable since February 2024 to all platforms operating in Europe, sets a clear framework: illegal content must be removed swiftly, appeal and redress mechanisms must be transparent, and for very large platforms (those exceeding 45 million monthly users), independent audits and risk assessments are mandatory.

The implication is direct: no marketplace can operate at scale without significant automation. Manual review of hundreds of thousands of daily listings is not a realistic option. But the same European Commission research that underpins the DSA is unambiguous about what automated systems cannot yet do: contextualise, interpret cultural variation, or handle ambiguity with consistent accuracy.

The result is a generation of moderation tools that are extremely efficient on clear-cut cases and structurally unreliable on everything else.

The real cost of false positives

When an algorithm casts too wide a net, the damage is not abstract. Legitimate sellers are blocked. Compliant products are pulled. A supplier who has been on a platform for three years receives an automated suspension notice for a kitchen utensil flagged as “suspicious content.”

Research cited by Automates Intelligents found that some AI moderation systems ban up to 40% of innocent users, a rate that no marketplace can absorb without consequences. Reduced catalogue depth, seller churn, eroded trust, and a buyer experience that becomes unpredictable: the cost of over-moderation is just as strategic as the cost of under-moderation.

A platform that systematically blocks legitimate content is not protecting its regulatory environment. It is damaging its own ecosystem.

Why AI alone is not enough

Algorithms can identify patterns, not intentions. They excel at large-scale detection, but frequently fail when it comes to understanding context. Academic research consistently confirms that the most reliable moderation remains hybrid, combining the efficiency of AI with the nuance of human judgement.

In short: AI detects, humans understand.

Stéphanie Akriche, Head of International Sales: “Some cases absolutely require human intervention: complex arbitration decisions involving conflicting rules, adaptation to a cultural context (particularly for multilingual marketplaces), and user reports flagging potential fraud. Depending on their nature, human intervention is necessary in 5 to 10% of cases reviewed.”

That 5 to 10% is not a failure of automation. It is where the most consequential decisions happen.

Towards an optimal model: symbiotic moderation

The highest-performing moderation frameworks we observe are built on three distinct layers.

1. Intelligent prevention (before publication)

The most advanced platforms act upstream: clear seller guidelines, automated pre-publication checks, and intelligent flagging of unusual listing volumes. In practice, up to 60% of problematic content can be intercepted before going live, simply by giving sellers better tools and distinguishing legitimate catalogues from flooding attempts at source.
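To make that upstream layer more concrete, here is a minimal sketch of what an automated pre-publication check might look like. It is an illustration only, not Armatis tooling: the `Listing` fields, the term list, and the volume threshold are all hypothetical placeholders.

```python
from dataclasses import dataclass

# Illustrative placeholders; a real platform would maintain far richer policy rules.
BANNED_TERMS = {"counterfeit", "replica"}
MAX_DAILY_LISTINGS = 500  # hypothetical per-seller flooding threshold


@dataclass
class Listing:
    seller_id: str
    title: str
    category: str


def pre_publication_check(listing: Listing, listings_today: int) -> list[str]:
    """Return the reasons, if any, to hold a listing back before it goes live."""
    reasons = []
    title = listing.title.lower()
    if any(term in title for term in BANNED_TERMS):
        reasons.append("policy term found in title")
    if listings_today > MAX_DAILY_LISTINGS:
        reasons.append("unusual listing volume for this seller")
    return reasons


# Example: a compliant listing from a low-volume seller passes with no flags.
listing = Listing(seller_id="S-123", title="Stainless steel kitchen utensil", category="home")
print(pre_publication_check(listing, listings_today=12))  # -> []
```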

2. Hybrid detection (during publication)

A well-designed system distributes responsibilities clearly: AI alone for manifestly illegal content; AI with human escalation, within a defined processing window, for uncertain or ambiguous cases; and a human expert for sensitive arbitrations or those requiring sector-specific contextualisation. European research consistently confirms that human oversight is essential to managing ambiguity and correcting structural biases in automated models.
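One simple way to picture that distribution of responsibilities is a routing function over a model confidence score. The sketch below is purely illustrative: the thresholds, the sensitive-category list, and the outcome labels are assumptions, not a description of any specific platform's pipeline.

```python
SENSITIVE_CATEGORIES = {"health", "childcare"}  # illustrative examples only


def route_decision(illegality_score: float, category: str) -> str:
    """Decide who handles a flagged listing (thresholds are hypothetical)."""
    if category in SENSITIVE_CATEGORIES:
        return "human_expert"       # sensitive or sector-specific arbitration
    if illegality_score >= 0.98:
        return "auto_remove"        # manifestly illegal: AI acts alone
    if illegality_score >= 0.60:
        return "human_escalation"   # ambiguous: human review within a defined window
    return "publish"                # no credible signal: listing goes live


print(route_decision(0.72, "home"))    # -> human_escalation
print(route_decision(0.30, "health"))  # -> human_expert
```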

3. Transparency and appeals (after the decision)

The DSA now mandates clear explanations of decisions, an effective right of appeal, and documented moderation practices. The most mature platforms go further: they publish their metrics, including false positive rates, and communicate proactively about their governance frameworks. Transparency here is not just a compliance requirement. It is a competitive differentiator.
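One way to make that explainability concrete is to record every decision as a structured statement of reasons. The fields below are a simplified illustration of the kind of information the DSA expects platforms to communicate to sellers, not a compliance template.

```python
from dataclasses import dataclass


@dataclass
class StatementOfReasons:
    """Simplified decision record, loosely modelled on DSA transparency duties."""
    listing_id: str
    decision: str              # e.g. "removal" or "visibility restriction"
    ground: str                # the legal or terms-of-service basis invoked
    automated_detection: bool  # was the case detected by automated means?
    automated_decision: bool   # was the decision taken without human review?
    appeal_channel: str        # how the seller can contest the decision


sor = StatementOfReasons(
    listing_id="L-4812",
    decision="removal",
    ground="prohibited product category under platform terms",
    automated_detection=True,
    automated_decision=False,
    appeal_channel="internal complaint-handling system",
)
```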

The future: predictive moderation

Stéphanie Akriche, Head of International Sales: “By 2027, AI tools will integrate significantly more sophisticated contextual understanding. Human intervention will continue, but in a less volume-driven and more strategic capacity. Humans will need to supervise, arbitrate, and guarantee the fairness of AI decisions. They will become architects of the trust system. This is all the more relevant in the CX space, where our offerings will need to include model auditing, bias management, and fair arbitration of sensitive cases. The key challenges will be identifying and anticipating increasingly sophisticated fraud, ensuring the transparency demanded by regulators, and maintaining the balance between economic performance and user protection.”

Platforms are entering a new era. The challenge is no longer simply removing problematic content, but anticipating risks before they materialise, through detailed analysis of seller behaviour, detection of unusual patterns, and proactive identification of early warning signals. The next generation of AI tools will not replace human moderators: it will extend their capacity to anticipate, prioritise, and act with discernment.

Conclusion: Moderate better

In 2026, the winning marketplaces will not be those that moderate the most, or the fastest, but those that moderate the best. Artificial intelligence is a powerful accelerator, but without human intelligence to guide, contextualise, and correct it, it remains an incomplete tool.

The future does not lie in an AI versus human opposition, but in their complementarity: AI for speed and scale, humans for discernment and judgement. Both, together, for trust.

Armatis supports marketplaces in designing and operating content moderation and Trust & Safety frameworks that combine technological performance with human expertise. For retail and e-commerce players, we operate across the full chain: fraud prevention, DSA compliance, sensitive case management, and quality governance. Discover our expertise.

Sources

Automates Intelligents: AI moderation banning 40% of innocent users

European Commission, Digital Services Act: Digital Services Act – Keeping us safe online

AI4Media, Factsheet AI and Content Moderation: AI and Content Moderation factsheet


Is your marketplace ready for the Digital Services Act?

Armatis supports marketplace operators throughout the entire moderation process: preventing false positives, ensuring DSA compliance, managing sensitive cases, and providing human arbitration. For over 35 years, we have partnered with platforms that refuse to choose between performance and trust.

Armatis is one of Europe’s leading providers of business process outsourcing (BPO) services in the field of customer experience. For over 35 years, it has supported large enterprises and SMEs in managing and transforming their customer service operations. With a presence in France, Tunisia, Portugal, Poland, Madagascar, and Germany, the group combines industry expertise, a multi-site European presence, and cutting-edge technology integration to meet the demands of European and international markets.
