Tags: AI Act, liability, SME, legislation, chatbot

AI Liability: Who Pays When Your Chatbot Makes a Mistake?

ZeroCode Ventures · 25 March 2026 · 7 min read


Your chatbot gives a customer the wrong advice. Your AI tool sends a quote with the wrong price. Your automated scheduling puts three technicians on the same job. Who's responsible — you, the software vendor, or "the AI"?

It's a question that concerns a growing number of SME owners. And with the EU AI Act taking effect in stages since 2025, it's no longer a theoretical issue.

The EU AI Act: What You Need to Know

The European AI Regulation is the world's first comprehensive AI law. The first prohibitions have been in effect since February 2025, and more rules are taking effect throughout 2026. The law classifies AI systems by risk:

  • Unacceptable risk: prohibited (think social scoring, manipulative AI)
  • High risk: strict requirements (medical diagnosis, credit assessment, HR selection)
  • Limited risk: transparency obligation (chatbots must disclose they're AI)
  • Minimal risk: no additional rules

Most AI tools used by SMEs — chatbots, scheduling tools, automated email — fall into the limited or minimal risk category. But that doesn't mean you're off the hook entirely.

You Are the Deployer

Here's the crux: the AI Act distinguishes between the provider (who builds the AI) and the deployer (who uses the AI). If you put a chatbot on your website, you're the deployer. That comes with obligations:

  • Transparency: You must tell customers they're communicating with AI
  • Oversight: You must monitor the output and intervene when errors occur
  • AI literacy: Since 2025, you're required to ensure your team understands how the AI works

That last one is new and often underestimated. AI literacy isn't vague advice — it's literally in the law. Your employees must understand what the AI does, where its limitations are, and when they need to step in.

Three Scenarios That Go Wrong

1. The Chatbot That Over-Promises

An installation company deploys an AI chatbot for customer inquiries. The bot promises a customer that a heat pump installation will be "sorted within two weeks." In reality, the lead time is six weeks. The customer holds the company to the promise.

Who's liable? The installation company. The chatbot is your representative. What the bot says, you say. Just like you're responsible for what an employee promises on the phone.

2. The AI That Sends Wrong Prices

A wholesaler automates quotes with AI. Due to a product data error, the system generates quotes with prices that are 40% too low. Three customers accept the quote before it's discovered.

Who's liable? It depends. If the AI tool had a bug, you can hold the provider accountable. But if the error was in your product data, you're responsible yourself. In practice: you either deliver, or you negotiate. But the loss is yours.

3. The Automated Rejection

A recruiter uses AI to screen CVs. The tool systematically rejects candidates over 50 without a clear reason. A rejected applicant takes it to court.

Who's liable? The recruiter as deployer. CV screening with AI falls under the AI Act as a high-risk application. Strict requirements for transparency, human oversight, and non-discrimination apply here.

How to Protect Yourself as an SME

You don't need to hire a legal team. But you do need to take a few basic measures:

1. Set boundaries on what your AI can do

Don't let your chatbot make hard promises about delivery times, prices, or guarantees. Build in fallbacks: "For an exact price, I'll get in touch with you." That's not a limitation — it's professional.
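The boundary-setting described above can be sketched in code: a simple guardrail layer that sits between the model and the customer, and swaps any draft reply that makes a hard commitment for a safe fallback. This is an illustrative sketch, not a production filter — the topics, regular expressions, and fallback text are assumptions you would tailor to your own business.

```python
import re

# Topics the bot must never commit to directly.
# These patterns are illustrative examples, not a complete rule set.
RESTRICTED = {
    "delivery": re.compile(r"\b(lead time|delivery|within \d+ (days|weeks))\b", re.I),
    "price": re.compile(r"[€$]\s?\d|price|quote", re.I),
    "guarantee": re.compile(r"\bguarantee[ds]?\b", re.I),
}

FALLBACK = "For an exact answer on that, a colleague will get in touch with you."

def guard(draft_reply: str) -> str:
    """Replace any draft reply that makes hard commitments with a safe fallback."""
    for topic, pattern in RESTRICTED.items():
        if pattern.search(draft_reply):
            return FALLBACK
    return draft_reply
```

The point of the design: the model can draft whatever it likes, but nothing about prices, delivery times, or guarantees reaches the customer without a human in the loop.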

2. Monitor the output

"Set and forget" doesn't work with AI. Regularly check what your AI tools produce. Read the chat conversations. Review the generated quotes. It costs half an hour per week and prevents thousands of euros in problems.

3. Train your team

AI literacy is mandatory, but it's also just smart. Make sure everyone who works with AI tools understands:

  • What the tool can and can't do
  • When they need to step in
  • How to recognize and report errors

4. Document what you do

Record which AI tools you use, what for, and what measures you take. If a complaint ever comes in, you want to be able to show that you acted carefully.
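A record like that doesn't need special software; a simple AI-tool register you can export as CSV is enough. The sketch below shows one possible shape — the field names are an illustrative assumption, not a legally prescribed format.

```python
import csv
import io

# Minimal AI-tool register. Fields are an illustrative assumption.
REGISTER_FIELDS = ["tool", "purpose", "risk_category",
                   "oversight_measure", "last_reviewed"]

def add_entry(register: list[dict], **entry) -> None:
    """Append an entry, refusing incomplete records."""
    missing = set(REGISTER_FIELDS) - entry.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    register.append(entry)

def to_csv(register: list[dict]) -> str:
    """Export the register as CSV, ready to show when a complaint comes in."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=REGISTER_FIELDS)
    writer.writeheader()
    writer.writerows(register)
    return buf.getvalue()
```

For example, an entry might read: chatbot, customer inquiries, limited risk, weekly transcript review, last reviewed 2026-03-01. Whatever the format, the goal is the same: being able to show you acted carefully.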

5. Check your vendor

Ask your AI vendor: does the system comply with the AI Act? What risk category? What documentation do they provide? A reliable vendor has answers to these questions.

The Good News

The AI Act isn't a punitive law designed to make life difficult for SMEs. It's a quality standard. Companies that implement AI well — with the right guardrails, human oversight, and transparency — have nothing to fear. In fact, they stand out from competitors who deploy AI recklessly.

Fines for violations can reach up to 35 million euros or 7% of revenue. But those are aimed at serious violations by large players. For SMEs, it's much more relevant to prevent civil liability: customers holding you accountable for your AI's mistakes.

Conclusion: Using AI Isn't the Risk — Ignoring AI Is

The companies that think about liability now and take their AI implementation seriously will be the ones that win their customers' trust. Not because they don't use AI, but because they do it right.

Want to know if your AI tools comply with the regulations? Or want to deploy AI with the right guardrails from day one? Take the free AI Scan and discover where the opportunities are — without the risks.
