Gartner Says AI Will Cost More Than Offshore Agents by 2030
Gartner's January 2026 prediction: AI cost per resolution will exceed offshore human agent costs by 2030. Here's why, and what it means for your support budget.
AI Resolution Costs Will Exceed $3 by 2030. Offshore Agents Cost Less.
On January 26, 2026, Gartner published a prediction that caught the customer service industry off guard: by 2030, the cost per generative AI resolution will exceed $3, more than many B2C offshore human agents cost per resolution. Offshore agents in the Philippines and India average $2.50 to $4.00 per resolution depending on complexity. Gartner's models show those lines crossing within four years as AI costs rise and subsidies disappear.
The reaction from AI vendors was predictable. "That won't apply to us." "Our costs are going down, not up." "Gartner doesn't understand our architecture." But Gartner's reasoning is grounded in three specific trends that are already visible.
Why AI Costs Are Rising, Not Falling
The first driver is data center economics. Training and running large language models requires GPU clusters that cost tens of millions of dollars. During the growth phase (2023-2025), AI companies subsidized pricing to grab market share. OpenAI, Anthropic, Google, and others priced API calls below cost. That subsidy era is ending. OpenAI raised prices on GPT-4 Turbo by 20% in late 2025. Anthropic's Claude API pricing has increased twice since launch.
The second driver is complexity creep. Early AI support use cases were simple: FAQ deflection, password resets, order status lookups. Those are cheap to automate. But companies are pushing AI into harder territory: multi-step troubleshooting, billing disputes, returns with exceptions, cross-system workflows. Harder problems require more tokens, more tool calls, more reasoning steps. A simple FAQ deflection might cost $0.03 in API calls. A complex billing resolution can cost $2.00 or more when you factor in multiple LLM calls, retrieval augmentation, and tool execution.
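To make the gap concrete, here's a minimal sketch of a per-resolution cost model. The function and its parameters (calls, tokens per call, a per-1k-token price, tool-call costs) are illustrative assumptions, not any vendor's actual rates; they're tuned so the two scenarios land near the $0.03 and ~$2.00 figures above.

```python
# Rough per-resolution cost model for an LLM-based workflow.
# All prices and token counts are illustrative assumptions,
# not any vendor's actual rates.

def llm_cost(calls: int, tokens_per_call: int, price_per_1k_tokens: float,
             tool_calls: int = 0, tool_cost: float = 0.0) -> float:
    """Estimate the API cost of resolving one ticket."""
    token_cost = calls * tokens_per_call * price_per_1k_tokens / 1000
    return token_cost + tool_calls * tool_cost

# Simple FAQ deflection: one short call.
faq = llm_cost(calls=1, tokens_per_call=1500, price_per_1k_tokens=0.02)

# Complex billing resolution: many calls, retrieval context, tool execution.
billing = llm_cost(calls=12, tokens_per_call=6000, price_per_1k_tokens=0.02,
                   tool_calls=5, tool_cost=0.10)

print(f"FAQ deflection:     ${faq:.2f}")      # ~$0.03
print(f"Billing resolution: ${billing:.2f}")  # ~$1.94
```

The point of the sketch: cost grows multiplicatively with calls and tokens, so pushing AI into harder workflows doesn't add pennies, it adds dollars.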
The third driver is vendor consolidation. The chatbot and AI support market has over 200 players right now. By 2028, Gartner expects that number to drop below 50. As smaller competitors fold or get acquired, the survivors will have pricing power. The race-to-the-bottom phase is temporary.
The Offshore Comparison Isn't Apples to Apples
Gartner's comparison is specifically about cost per resolution, which doesn't capture the full picture. Offshore agents have costs that don't show up in per-resolution metrics: management overhead, quality assurance, training, turnover (which runs 30-45% annually in offshore contact centers), and the 6-8 week ramp time for new agents.
AI doesn't have turnover. It doesn't need training on your product every quarter. It works at 3 AM on a Saturday. These advantages are real, and they're why the per-resolution cost comparison alone is misleading.
But Gartner's point isn't that offshore is better than AI. Their point is that the AI cost savings narrative that vendors have been selling since 2023 is based on temporarily subsidized pricing and simple use cases. As pricing normalizes and use cases get harder, the economics shift.
What This Means for Your Budget Planning
If you're building a business case for AI support in 2026, don't use today's API pricing as your baseline for 2028 costs. Build in a 15-25% annual price increase for LLM-based solutions. That might sound aggressive, but OpenAI's pricing trajectory supports it, and Gartner's models assume similar increases across vendors.
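Compounding that 15-25% range forward is a one-liner. Starting from the ~$2.00 complex-resolution figure, a quick sketch of the 2026-to-2030 projection:

```python
def project_price(base: float, annual_increase: float, years: int) -> float:
    """Compound an annual price increase over a planning horizon."""
    return base * (1 + annual_increase) ** years

base_2026 = 2.00  # today's rough cost of a complex LLM resolution
for rate in (0.15, 0.20, 0.25):
    cost_2030 = project_price(base_2026, rate, years=4)
    print(f"{rate:.0%} annual increase -> ${cost_2030:.2f} per resolution in 2030")
```

At 15% the 2030 figure is about $3.50; at 25% it's about $4.88. Every scenario clears Gartner's $3 threshold and lands inside or above the $2.50-$4.00 offshore band.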
Also separate your use cases by complexity. Simple classification and routing will stay cheap. Complex multi-turn resolutions powered by LLMs will get more expensive. Your cost model should reflect this split.
Here's a rough framework:

| Use case | AI cost per resolution (2026) | AI cost trajectory to 2030 | Offshore cost per resolution |
| --- | --- | --- | --- |
| FAQ deflection | ~$0.03 | Roughly flat | $2.50-$4.00 |
| Password resets | ~$0.03 | Roughly flat | $2.50-$4.00 |
| Order status lookups | ~$0.03 | Roughly flat | $2.50-$4.00 |
| Multi-step troubleshooting | $2.00+ | +15-25% per year | $2.50-$4.00 |
| Billing disputes | $2.00+ | +15-25% per year | $2.50-$4.00 |
| Returns with exceptions | $2.00+ | +15-25% per year | $2.50-$4.00 |

The bottom three rows are where the lines cross. Simple automation stays cheaper than humans for the foreseeable future. Complex resolution is where Gartner's prediction bites.
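A cost model that reflects this split is just a weighted average over your ticket mix. The mix percentages below are hypothetical; the per-category costs reuse the figures from this article ($0.03 simple deflection, $0.20 classification, $2.00+ complex LLM resolution, $3.25 as an offshore midpoint).

```python
# Blended cost per resolution for a mixed ticket queue.
# Mix shares are hypothetical; per-category costs follow the
# ballpark figures quoted in the article.

mix = {
    # category: (share of volume, cost per resolution)
    "faq_deflection":         (0.40, 0.03),  # simple automation
    "classification_routing":  (0.30, 0.20),  # small-model routing
    "complex_llm_resolution":  (0.20, 2.00),  # token-heavy workflows
    "human_escalation":        (0.10, 3.25),  # offshore agent midpoint
}

blended = sum(share * cost for share, cost in mix.values())
print(f"Blended cost per resolution: ${blended:.2f}")
```

With this (hypothetical) mix, the blend comes out around $0.80, and you can see where the money goes: the 20% of tickets in complex LLM resolution contribute half the total, and they're the slice exposed to the 15-25% annual increases.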
The Architecture Matters More Than the Vendor
Gartner's prediction applies specifically to LLM-powered resolution, where a large language model generates responses, makes decisions, and executes multi-step workflows. That's the architecture most "AI agent" platforms use, and it's the architecture whose costs scale with model complexity and token volume.
Classification-based systems have a different cost structure. Supp runs a purpose-built classifier on our own infrastructure. It doesn't call OpenAI or Anthropic APIs. There's no per-token cost that scales with conversation length. Classification costs $0.20 regardless of whether the customer wrote two words or two paragraphs. Resolution actions cost $0.30.
Those prices aren't subsidized growth-phase pricing. They reflect the actual cost of running a small, specialized model. A purpose-built classifier is orders of magnitude cheaper to operate than a large generative model.
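The structural difference is easiest to see side by side: flat per-classification pricing versus token-metered pricing that grows with conversation length. The classifier rates below are the ones quoted above; the per-1k-token LLM price is an illustrative assumption.

```python
def classifier_cost(actions: int = 1) -> float:
    """Flat pricing: $0.20 per classification plus $0.30 per resolution
    action (the rates quoted above), independent of message length."""
    return 0.20 + 0.30 * actions

def llm_cost(total_tokens: int, price_per_1k: float = 0.02) -> float:
    """Token-metered pricing: scales with conversation length.
    The per-1k-token price is an illustrative assumption."""
    return total_tokens * price_per_1k / 1000

for tokens in (500, 5_000, 50_000):
    print(f"{tokens:>6} tokens: classifier ${classifier_cost():.2f} "
          f"vs LLM ${llm_cost(tokens):.2f}")
```

The classifier column never moves; the LLM column grows linearly with tokens. Which architecture is cheaper depends entirely on how long and how complex your conversations run.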
This isn't a knock on LLM-powered platforms. They can do things a classifier can't. But if your primary need is understanding what customers want and routing them to the right outcome, you don't need a 175B parameter model to do it. And you definitely don't want your costs tied to that model's pricing trajectory.
The Smart Play for 2026-2028
Use AI where it's structurally cheap: classification, routing, intent detection, priority scoring. These tasks use small models with fixed costs that won't follow Gartner's upward curve.
Use humans (onshore or offshore) where AI costs scale unpredictably: complex disputes, emotionally charged situations, multi-system troubleshooting that requires judgment calls.
Use LLM-powered tools selectively, for specific high-value tasks where the ROI justifies the cost even at 2x or 3x current pricing. Draft generation for human review. Summarization of long ticket histories. Suggested responses that agents can edit.
The companies that'll have the best support economics in 2030 won't be all-AI or all-human. They'll be the ones who matched each task to the right tool based on cost structure, not hype cycle positioning.