Resources · 5-min read

When should an IT issue escalate from AI to a human?

Published 19 April 2026 · IT Support Helpdesk

The short version

The AI handles everything it's confident about immediately. When it's not confident — or when the problem is in a category it's designed never to handle alone — it escalates to a UK engineer with the full conversation already attached. You don't have to ask, and you don't have to start over. You can also skip to a human at any time by typing "human please".

A reasonable concern when you first hear "AI IT support" is: what happens when the AI gets it wrong, or doesn't know what to do? Does it keep trying, make things worse, or quietly pretend everything is fine?

The answer is none of the above. Here's how escalation actually works, and the three categories of problem the AI uses to decide.

Category 1: Things the AI always handles

These are the straightforward, high-frequency tickets that account for the majority of support volume at any SME. The AI resolves them directly, typically within seconds.

  • Password resets and MFA re-enrolment
  • Software installation and update failures
  • Common Outlook, Teams, and Microsoft 365 errors
  • VPN connection issues with known resolutions
  • Printer queue jams and driver problems
  • Slow machine diagnostics (reading actual process and disk data from the endpoint agent)
  • Browser and application configuration questions

Example: Sarah at a 25-person accountancy practice messages at 8:47 am: "Outlook keeps crashing when I try to open attachments." The AI reads her message, checks the endpoint agent data on her laptop (which shows Outlook running an outdated build with a known attachment-related crash), pushes the update silently in the background, and replies to let her know it's fixed and why. Total elapsed time: 40 seconds. No human involved.

Category 2: Things the AI always escalates

There are categories of problem where the AI is designed to escalate immediately, regardless of whether it thinks it knows what to do. This is a deliberate policy decision, not a limitation.

  • Suspected security incidents: Any report of unusual account activity, unexpected emails sent from an account, ransomware symptoms, or login alerts from unfamiliar locations. The AI does not attempt to handle security incidents alone.
  • Hardware failures with data risk: If the endpoint diagnostics show a drive with failing SMART data or error patterns consistent with imminent failure, a human engineer is looped in immediately — not after the AI tries a few things.
  • Regulated workflow changes: Anything that touches how data is processed, stored, or shared in a way that might have compliance implications. A human makes the call, not the AI.
  • Anything affecting multiple users simultaneously: A widespread outage affecting half the office is an incident, not a ticket. It goes straight to a human.

Example: James messages: "My computer is acting really strangely and I got an email from IT saying my password was just changed — I didn't change it." The AI immediately flags this as a potential account compromise, escalates to an engineer, and responds to James: "This looks like it may be a security issue. I've escalated this to an engineer who'll be with you in the next few minutes. Please don't click anything else on that machine for now." No heroics. Straight to a human.

Category 3: The in-between — confidence-based escalation

Most of the interesting cases sit between the two clear categories. The AI uses a confidence score to decide what to do.

If the AI is highly confident it knows the resolution — based on the problem description, the endpoint data, and prior ticket history — it proceeds. If its confidence is below a threshold, it escalates rather than guessing. The threshold is deliberately set conservatively: we'd rather escalate something the AI could have handled than have it attempt something and make it worse.

  • Unusual error codes not in the AI's training data → escalate
  • Problems where the user's description is ambiguous and endpoint data doesn't clarify → escalate
  • Issues that have already been attempted and failed once → escalate (don't try the same thing again)
  • Problems that look routine but have anomalous context → escalate
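For readers who like to see logic rather than prose, the triage rules across all three categories can be sketched in a few lines of Python. This is an illustrative sketch only: the category names, the `Ticket` fields, and the `ESCALATE_THRESHOLD` value are assumptions for the example, not the product's actual implementation.

```python
# Illustrative sketch of the escalation rules described in this article.
# Category names, field names, and the threshold value are assumed.
from dataclasses import dataclass

# Category 2: always escalated by policy, regardless of confidence.
ALWAYS_ESCALATE = {
    "security_incident",
    "hardware_data_risk",
    "regulated_change",
    "multi_user_outage",
}

ESCALATE_THRESHOLD = 0.85  # deliberately conservative (assumed value)

@dataclass
class Ticket:
    category: str                # classified from the message + endpoint data
    confidence: float            # AI's confidence it knows the resolution
    prior_attempts: int = 0      # fixes already tried and failed
    user_requested_human: bool = False  # typed "human please"

def route(ticket: Ticket) -> str:
    """Return 'human' or 'ai' for a classified ticket."""
    if ticket.user_requested_human:         # a human request always wins
        return "human"
    if ticket.category in ALWAYS_ESCALATE:  # policy decision, not confidence
        return "human"
    if ticket.prior_attempts > 0:           # don't retry the same fix
        return "human"
    if ticket.confidence < ESCALATE_THRESHOLD:
        return "human"                      # escalate rather than guess
    return "ai"
```

Note that the confidence check comes last: security incidents and repeated failures go to a human even if the AI is highly confident, which is exactly the policy described above.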

Example: Priya messages: "My Excel keeps giving error code 0x800A03EC when I try to run a macro." The AI looks this up — it's a common VBA permission error and there are known resolutions. Confidence: high. It walks Priya through the fix. But if the error code isn't one the AI recognises, or if Priya has tried the standard fixes already and they haven't worked, the AI says: "I want to make sure this gets properly resolved. I'm handing this to an engineer who'll have our full conversation in front of them."

You can always ask for a human

If you'd rather not go through the AI for a particular issue — or if you just want a human — type "human please" in the chat. That's it. You don't have to justify it, and the AI won't try to talk you out of it. The full context of your conversation is handed over, so you don't repeat yourself.

We're transparent about what the AI sees and how it makes decisions. You can read more on the Trust Centre — including what data the endpoint agent collects and what access we have to your machines.

Summary

  • Routine, well-understood problems: the AI resolves directly, usually in seconds.
  • Security incidents, hardware failures with data risk, regulated changes, and widespread outages: always escalated to a human immediately.
  • Everything else: decided by confidence score. If the AI isn't confident, it escalates rather than guessing.
  • You can always request a human by typing "human please" — no friction, full context handed over.

Ready to try AI-led IT support?

Sign up, install the agent, and start raising tickets. £10 per user per month. Cancel any time.