
Why per-task integration pricing breaks when your users are AI agents

Per-task iPaaS pricing worked when humans generated the load. AI agents do not. A teardown of why the economics break, how other pricing models fare, and the questions to ask when evaluating integration pricing that survives agent workloads.


Integration platforms like Zapier and Make were priced for a world where a "task" was a recognizable unit of human intent. Someone filled out a form. Someone clicked a button in Slack. Someone updated a row in a spreadsheet. One input, one task, one charge. The pricing model tracked user behavior cleanly because users were humans and humans do a bounded number of things per day.

AI agents don't behave like humans. A single agent invocation can trigger dozens or hundreds of downstream actions. Per-task pricing was built assuming a human in the loop generating a roughly linear workload. Take the human out and the workload stops being linear. It's not that it gets expensive — plenty of things get expensive. It's that the cost model and the usage model stop tracking each other, and once that happens the pricing structure isn't just unpleasant, it's broken.

This is a teardown of why, with enough detail to help a buyer evaluate whether their current integration pricing survives agent workloads.

What a task used to mean

Zapier and Make — along with most iPaaS platforms built in the same era — use roughly the same billing primitive. An event enters the platform, the platform runs it through a workflow (Zap, Scenario, etc.), and each step that touches a connected app counts as a task. Lookup in Salesforce: one task. Create a HubSpot contact: one task. Send a Slack message: one task. A five-step Zap costs five tasks per run.

This maps cleanly to human usage because humans generate input events at a predictable rate. A sales team of twenty people might produce a few thousand tasks a month between form submissions, CRM updates, and notifications. Pricing tiers were built around those ranges. Zapier, Make, and equivalents at other vendors all structure tiers around monthly task counts in the low-to-mid thousands for small teams, tens of thousands for mid-market.

The model has two properties worth naming explicitly:

  1. Cost scales with output steps, not input events. A workflow that touches five destinations costs five times as much per run as a workflow that touches one.
  2. Input rate is bounded by human behavior. Humans generate input events at a rate bounded by the number of humans times how active each one is.

Both properties were load-bearing for the pricing model. Remove either and the economics stop working.
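The billing primitive can be sketched in a few lines. The workflow shapes and run counts below are hypothetical, not any vendor's actual tiers:

```python
# Per-task billing: each workflow step that touches a connected app is one
# billable task. A five-step Zap costs five tasks per run.

def tasks_per_run(steps: int) -> int:
    # Cost scales with output steps, not input events.
    return steps

def monthly_tasks(runs_per_month: int, steps: int) -> int:
    # Input rate is bounded by human behavior, so runs_per_month is small.
    return runs_per_month * tasks_per_run(steps)

# A small team generating ~1,000 input events a month through 5-step workflows:
team_monthly = monthly_tasks(runs_per_month=1_000, steps=5)   # 5,000 tasks
```

Doubling the destinations a workflow touches doubles the bill even though input volume is unchanged, which is the first load-bearing property in action.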

What a task means with agents in the loop

An AI agent isn't an input event. It's a loop.

Consider a realistic agent workflow: a sales research agent that takes a list of companies and enriches each with data from several sources. For each company, the agent might:

  • Look up the company in Salesforce
  • Pull recent activity from HubSpot
  • Check support history in Intercom
  • Cross-reference usage data from the product database
  • Append findings back to Salesforce
  • Notify the account owner in Slack

That's six destination touches per company. Run it across 200 companies and one agent invocation produces 1,200 tasks. Now imagine that agent runs on a schedule: every morning, every time a new lead enters the pipeline, or continuously as part of a larger orchestration.

The numbers escalate fast because nothing in the system is bounded by how many humans are on the team. An agent workflow isn't "one user's activity." It's closer to a cron job crossed with a decision tree, and it can fan out as wide as the underlying LLM and the surrounding code decide to.

Both of the properties that made per-task pricing work in the human era now invert:

  1. Cost still scales with output steps — but output steps are now multiplied by whatever the agent decides to do, which isn't knowable in advance.
  2. Input rate is no longer bounded by humans — it's bounded by compute, which is approximately unbounded for this purpose.

The math, concretely

Walk through a specific failure mode without inventing dollar amounts, because the argument doesn't depend on exact prices. It depends on the ratio between input events and billable units.

Imagine a SaaS product that fires four product events per customer per day: signup, usage update, billing change, cancellation. For 1,000 customers, that's 4,000 events per day, or roughly 120,000 per month. Each event needs to fan out to four destinations: CRM, marketing platform, support tool, data warehouse.

Under a per-task model, each fan-out destination is a separate task. 120,000 events × 4 destinations = 480,000 tasks per month. That number is already well into mid-market pricing tiers, and no agent activity has been added yet.

Now add one AI agent that does account enrichment research, touching six tools per account, running nightly on the top 20% of accounts. That's 200 accounts × 6 tools × 30 nights = 36,000 tasks. Not catastrophic on its own.

But that agent is one of many. A customer health scoring agent that touches four tools per customer, weekly, across all 1,000 customers. An expansion opportunity agent that touches five tools per customer, twice a week on active accounts. A support triage agent that touches three tools per ticket. These aren't hypothetical — they're the shapes of agent workflows teams are already running.

Total task count stops being a predictable function of customer count. It becomes a function of how many agents are running and what each agent chooses to do, which changes every time the underlying LLM gets better or the prompts get tuned.

This is the part per-task pricing cannot absorb. The number isn't just going up — it's no longer proportional to anything forecastable from business metrics.
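The scenario above is straight arithmetic. The counts come from the text; the health-scoring line assumes roughly four weekly runs per month, which the text doesn't fix:

```python
# Billable-unit counts for the scenario in the text, under a per-task model.
# No dollar amounts, only task counts.

customers = 1_000
events_per_customer_per_day = 4
destinations = 4

monthly_events = customers * events_per_customer_per_day * 30
per_task_baseline = monthly_events * destinations   # every fan-out delivery metered

# Nightly enrichment agent: top 20% of accounts, six tools each, 30 nights.
agent_tasks = int(customers * 0.2) * 6 * 30

# Weekly health-scoring agent, four tools per customer, ~4 runs a month
# (an illustrative assumption; the run count isn't pinned down above).
health_scoring_tasks = customers * 4 * 4

total_tasks = per_task_baseline + agent_tasks + health_scoring_tasks
print(monthly_events, per_task_baseline, agent_tasks)   # 120000 480000 36000
```

Each additional agent is another additive term, and each term has its own multiplier that changes whenever the agent's behavior changes.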

Why per-event with fan-out behaves differently

The structural alternative is simple: charge per input event, include fan-out in the price. One signup event costs one event, whether it's delivered to one destination or ten.

Under this model, 120,000 monthly product events remain 120,000 billable units regardless of how many destinations consume each one. Adding a fifth destination doesn't change the bill. Adding a sixth destination doesn't change the bill.

For agent workloads, the comparison gets sharper. If agents read from the same event stream rather than generating new tasks, cost doesn't grow with agent count. If agents emit new events into the system, the event is paid for once, and whatever fan-out it triggers is included.

Platforms that use this model — Meshes among a handful of others in the event routing and delivery category — price the input event as the billable unit and treat fan-out as infrastructure rather than metered output. The economic argument isn't that they're cheaper in all cases. The argument is that the cost curve matches the usage curve: both scale with input volume, and input volume is something a product team can forecast.
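A minimal comparison of the two billing units, using the 120,000-event figure from the text. The functions are illustrative simplifications, not any vendor's actual meter:

```python
def per_task_units(events: int, destinations: int) -> int:
    # Per-task: every fan-out delivery is a separate billable unit.
    return events * destinations

def per_event_units(events: int, destinations: int) -> int:
    # Per-event with fan-out included: only the input event is billed.
    return events

events = 120_000

# Adding a fifth destination moves the per-task bill by a full month of
# event volume; the per-event bill doesn't move at all.
delta_per_task = per_task_units(events, 5) - per_task_units(events, 4)
delta_per_event = per_event_units(events, 5) - per_event_units(events, 4)
print(delta_per_task, delta_per_event)   # 120000 0
```

The asymmetry is the whole argument: one model's bill is a function of integration surface area, the other's is a function of product volume alone.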

How other pricing models fare

Per-task isn't the only model. A few other structures show up in integration pricing, each with different behavior under agent load.

Flat seat-based pricing. Charges per user regardless of volume. Survives agent workloads trivially because agents don't have seats. But it tends to be priced for low-volume, internal-team use cases — not product event routing at scale.

Per-run or per-execution pricing. Charges per workflow invocation, not per task within the workflow. Better than per-task because a single agent invocation is billed once, but still breaks when agents invoke workflows in loops or in parallel across many records.

Tiered volume pricing (per-event buckets). Charges in volume brackets — the first 100K events at one price, the next 500K at another, etc. Predictable for a given month, less predictable across months when agent activity is spiky.

Metered per-event with fan-out included. Charges per input event. Fan-out to N destinations is included in the per-event price. Scales with product volume, not integration surface area.

Usage-based on data volume. Charges per GB or per API call size. Rare in integration pricing, common in data platforms. Can work for agents, but ties cost to payload shape rather than business volume, which is its own forecasting problem.

Per-task and per-execution are the two that specifically fail under agent load. Both assume the billable unit is proportional to something a human did. Neither is anymore.
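The difference between the models can be made concrete with one agent workload: a single invocation looping over 200 records with six destination touches each, the enrichment shape from earlier. The unit counts below are a simplified reading of each model, not any vendor's exact meter:

```python
records = 200   # accounts the agent loops over
steps = 6       # destination touches per record

billable_units = {
    "per_task": records * steps,         # every destination touch metered
    "per_run": records,                  # one unit per workflow invocation in the loop
    "per_event_fanout_included": 1,      # one input event kicked off the agent
    "seat_based": 0,                     # agents don't have seats
}
# per_task (1,200) and per_run (200) both grow with whatever the agent
# decides to do; the per-event count tracks only the input that started it.
```

The first two rows scale with agent behavior, which is exactly the quantity a buyer can't forecast; the last two scale with things a buyer controls.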

A buyer's checklist

If the plan is to run AI agents against integration infrastructure — whether agents the team runs internally or agents the product exposes to customers — these are the questions that separate pricing models that survive from pricing models that don't.

What is the billable unit? If it's "task" or "operation" or "action," it's per-task pricing. If it's "event" or "message," it's per-event pricing. Naming tends to be consistent across vendors in a category.

Does the billable unit scale with fan-out? Ask directly: if one event is delivered to five destinations, is that one billable unit or five? If five, fan-out is metered. If one, fan-out is included.

What happens to the bill if a sixth destination is added? The fan-out question stated as a forecasting question. Under per-task pricing, a new destination multiplies cost. Under per-event with fan-out included, it doesn't.

How does pricing behave under spiky or unpredictable load? Agents generate spikier traffic than humans. Ask about overages, burst pricing, and how rate limiting is handled. Flat overage rates are predictable; tiered overage rates with steep cliffs aren't.

Is there a separate rate for retries, deduplication, or DLQ replays? Some platforms bill retries as new tasks. Under heavy load this can double or triple effective cost. Platforms that treat retry delivery as infrastructure don't charge again.

What's the pricing model's assumption about input rate? Read the tier descriptions. If they're framed around team size or user seats, the platform is priced for human-bounded workloads. If they're framed around event volume, the platform is priced for machine-bounded workloads.

What does the cost look like at 10x current volume? Build a forecast at current event volume and one ten times larger. Under per-task pricing, that usually triggers a jump to enterprise pricing. Under per-event pricing, it usually stays in the tiered band structure.
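The 10x exercise is simple enough to script. The volumes and destination count below are placeholders to swap for real figures:

```python
def forecast_units(events: int, destinations: int) -> dict:
    # Billable units under the two models for a given monthly event volume.
    return {
        "per_task": events * destinations,
        "per_event": events,
    }

current = forecast_units(events=120_000, destinations=4)
at_10x = forecast_units(events=1_200_000, destinations=4)

# Both units grow 10x with volume, but only per-task also jumps again with
# every destination added along the way.
print(current["per_task"], at_10x["per_task"])   # 480000 4800000
```

Running both forecasts side by side makes the tier-cliff question concrete before any contract conversation starts.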

The core question behind all of these is the same: does the vendor's pricing model assume a human is producing the load, or does it assume the load is produced by code? If it's the first, agent workloads will break the economics — not immediately, and not visibly at first, but soon enough that it's worth knowing before the platform is chosen.

Shopping for integration pricing that survives agent workloads? Join Meshes — one price per input event, fan-out to every destination included.