
Idempotent Event Delivery - Why Your Webhooks Process Duplicates (And How to Stop)

Duplicate webhook deliveries are normal in at-least-once systems. Learn idempotency keys, dedup strategies, and Node.js patterns that prevent double-processing.


If you send webhooks or product events long enough, you will deliver duplicates.

Duplicates are a normal consequence of building on networks, queues, timeouts, retries, and independent systems that do not share one transaction boundary. If your event producer retries aggressively enough to be reliable, your consumer will eventually see the same logical event more than once.

That is why idempotency matters. An idempotent consumer can process the same event twice and still produce one correct result.

This is the practical answer to duplicate delivery. Not magical "exactly once" marketing language. Not hoping your queue or webhook sender never retries. Just a system designed so duplicates are harmless.

Why duplicates happen in the first place

Most delivery systems operate with at-least-once semantics. The producer keeps trying until it sees evidence that the consumer accepted the message.

That sounds reasonable, but there is a gap between "the consumer processed the event" and "the producer knows it processed the event."

Here are a few common ways duplicates appear:

  • The consumer completes the work, but its HTTP response times out before the producer receives it.
  • The producer receives a 500 or connection reset after the consumer already committed the write.
  • A worker crashes after processing the event but before checkpointing success.
  • A queue redelivers the message because the acknowledgement was lost.
  • A retry engine intentionally resends after transient failures. That is exactly what a good retry system should do. We cover the retry side in Webhook Retry Logic Done Right.

In every case, the producer does not have enough information to safely assume "that event definitely did not apply." So it retries.

That is correct. The consumer still has to handle repeat delivery.

What an idempotency key actually is

An idempotency key is a stable identifier for the logical operation you want to happen once.

For webhooks and event delivery, that identifier usually comes from one of three places:

  • A provider event ID, such as evt_123 or a UUID in the payload
  • A delivery ID added by the event infrastructure
  • A deterministic key you derive from your own domain object, such as workspace_id + order_id + event_type

The important property is not where the key came from. The important property is that retries of the same logical event produce the same key.
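For the derived-key case, the derivation has to be deterministic. A minimal sketch in TypeScript, where the field names and the `idempotencyKey` helper are illustrative, not a required schema:

```typescript
import { createHash } from 'node:crypto';

// Derive a stable idempotency key from domain fields.
// The field names here are illustrative, not a required schema.
function idempotencyKey(event: {
  workspaceId: string;
  orderId: string;
  eventType: string;
}): string {
  // Join with a separator, then hash so the key has a
  // fixed, index-friendly length regardless of field sizes.
  const raw = `${event.workspaceId}:${event.orderId}:${event.eventType}`;
  return createHash('sha256').update(raw).digest('hex');
}

// A retry of the same logical event yields the same key.
const first = idempotencyKey({
  workspaceId: 'ws_1',
  orderId: 'ord_9',
  eventType: 'order.paid',
});
const retry = idempotencyKey({
  workspaceId: 'ws_1',
  orderId: 'ord_9',
  eventType: 'order.paid',
});
// first === retry
```

Because the key is a pure function of the domain fields, it survives process restarts, redeliveries, and backfills without any coordination.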

That lets the consumer ask a simple question:

Have I already applied the side effects for this key?

If yes, return success.

If no, apply the work exactly once from the consumer's point of view, then record that the key has been processed.
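That decision can be sketched as a small guard around the handler. This shows the control flow only: the interface and names are placeholders, and the in-memory store is a stand-in for a durable one. Note that recording the key and doing the work are not atomic here; production handlers should tie them together in one transaction, as discussed below.

```typescript
// The store answers: have I already seen this key?
// markProcessed returns false if the key was already recorded.
interface IdempotencyStore {
  markProcessed(key: string): Promise<boolean>;
}

// Illustration only: an in-memory store is fine for tests
// but is not durable dedup (see the failure modes below).
class InMemoryStore implements IdempotencyStore {
  private seen = new Set<string>();
  async markProcessed(key: string): Promise<boolean> {
    if (this.seen.has(key)) return false;
    this.seen.add(key);
    return true;
  }
}

// Run the side effect at most once per key.
async function processOnce(
  store: IdempotencyStore,
  key: string,
  work: () => Promise<void>,
): Promise<{ duplicate: boolean }> {
  const firstTime = await store.markProcessed(key);
  if (!firstTime) return { duplicate: true };
  await work();
  return { duplicate: false };
}
```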

Common failure modes

Teams usually understand the idea of idempotency before they get the implementation right. The bugs show up in the gap between the slogan and the storage strategy.

1. Treating payload equality as deduplication

Two deliveries with the same JSON body are not always the same logical event. Field ordering can differ. Timestamps can differ. Retries may include new transport metadata. Hashing the entire payload often creates false negatives and false positives.

Prefer a stable event or operation ID over payload comparison.
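A quick illustration of why payload hashing misfires. Two deliveries of the same logical event, differing only in field order and a transport timestamp, hash to different values, while the embedded event ID still matches (the payloads below are made up for the example):

```typescript
import { createHash } from 'node:crypto';

const sha = (s: string) => createHash('sha256').update(s).digest('hex');

// Two deliveries of the same logical event. Only the field
// order and a transport timestamp differ.
const firstDelivery =
  '{"id":"evt_123","type":"invoice.paid","sent_at":"2026-01-01T00:00:00Z"}';
const retryDelivery =
  '{"type":"invoice.paid","id":"evt_123","sent_at":"2026-01-01T00:00:05Z"}';

// Payload hashing sees two different events, so the
// duplicate slips through.
const sameHash = sha(firstDelivery) === sha(retryDelivery); // false

// A stable event ID catches the duplicate.
const sameId =
  JSON.parse(firstDelivery).id === JSON.parse(retryDelivery).id; // true
```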

2. Using only an in-memory cache

An in-memory Set or process-local LRU cache looks fine in development. It falls apart in production:

  • a deploy clears it
  • a crash clears it
  • a second replica never had it
  • a retry that lands an hour later is outside the process lifetime

In-memory dedup can be a performance optimization. It is not your source of truth.

3. Recording the key outside the business transaction

This is one of the easiest ways to get subtle duplicates.

If you insert the dedup key after writing the business row, two concurrent workers can both reach the write before either one marks the event as processed. If you insert the dedup key before the business work, then fail midway, you may permanently mark an event as handled even though the side effect never completed.

The durable dedup record and the business write need to succeed or fail together.

4. Confusing time-windowed dedup with durable idempotency

A Redis TTL window can stop rapid duplicate storms. It does not protect you against a replay tomorrow, a backfill next week, or a queue redelivery after a long outage.

For low-risk notifications, a time window may be enough. For money movement, CRM updates, account state, or anything customer-visible, durable dedup is usually the safer choice.

5. Believing exactly-once claims too literally

Some brokers can provide exactly-once guarantees within a narrow boundary, such as a producer writing to a specific log with a specific transaction model. That is not the same thing as end-to-end exactly-once side effects across HTTP, your database, another queue, and a third-party API.

Once the event crosses multiple systems, you are back in the real world: partial failures, retries, and duplicated attempts. The practical pattern is still at-least-once delivery plus idempotent consumers.

Practical patterns that work in production

There is no single dedup pattern for every workload. The right choice depends on how expensive duplicates are and how long duplicate risk persists.

Pattern | Best for | Strength | Weakness
--- | --- | --- | ---
Redis or cache TTL window | High-volume, low-risk duplicate bursts | Fast and cheap | Not durable; misses late replays
Database dedup table | Business events with real side effects | Durable and easy to reason about | Adds a write path and table growth
Unique constraint on business object | Natural upsert cases like customer_id + external_id | Simple when the domain has a natural key | Not every side effect maps cleanly
Infrastructure delivery IDs | Standardized producer-side event identity | Keeps IDs consistent across destinations | Still requires consumer-side idempotent handling

Pattern 1: database-backed dedup with a unique constraint

For most webhook consumers, the most dependable pattern is a durable table with a unique key, wrapped in the same transaction as the business update.

Example schema:

create table processed_webhook_events (
  source text not null,
  idempotency_key text not null,
  processed_at timestamptz not null default now(),
  primary key (source, idempotency_key)
);

And a realistic Node.js handler using Postgres:

import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

type BillingWebhook = {
  id: string;
  customerId: string;
  invoiceId: string;
  amount: number;
};

export async function handleInvoicePaid(event: BillingWebhook) {
  const client = await pool.connect();

  try {
    await client.query('BEGIN');

    const dedupResult = await client.query(
      `
        insert into processed_webhook_events (source, idempotency_key)
        values ($1, $2)
        on conflict do nothing
      `,
      ['billing-provider', event.id],
    );

    if (dedupResult.rowCount === 0) {
      await client.query('ROLLBACK');
      return { status: 200, duplicate: true };
    }

    await client.query(
      `
        insert into invoices (provider_invoice_id, customer_id, amount, status)
        values ($1, $2, $3, 'paid')
        on conflict (provider_invoice_id)
        do update set
          amount = excluded.amount,
          status = 'paid'
      `,
      [event.invoiceId, event.customerId, event.amount],
    );

    await client.query('COMMIT');
    return { status: 200, duplicate: false };
  } catch (error) {
    await client.query('ROLLBACK');
    throw error;
  } finally {
    client.release();
  }
}

A few things matter here:

  • The dedup insert and the invoice write are in the same transaction.
  • If the transaction rolls back, the dedup record rolls back too.
  • A second delivery of the same event sees the unique key conflict and exits cleanly.
  • The domain write itself is also idempotent because it uses an upsert keyed on the provider invoice ID.

That last point matters. The strongest handlers usually combine transport-level dedup with domain-level idempotency.

Pattern 2: Redis for short duplicate windows

If your main problem is rapid-fire duplicate attempts during retries, a cache-based window can help:

import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);

// SET with EX and NX is atomic: the key is written only if it
// does not already exist, and it expires after one hour.
const wasInserted = await redis.set(
  `dedup:${event.id}`,
  '1',
  'EX',
  3600,
  'NX',
);

if (!wasInserted) {
  // SET ... NX returns null when the key already existed.
  return { status: 200, duplicate: true };
}

This is useful for:

  • noisy retry bursts
  • rate-limited consumers
  • high-volume events where a one-hour window is good enough

It is less useful for:

  • financial events
  • systems that replay history
  • queues that can redeliver far outside the TTL

Redis is often a good first layer. It is rarely the only layer you want for important state changes.

Pattern 3: natural idempotency in the domain model

Sometimes the cleanest solution is not a separate dedup table. It is a unique constraint on the business object itself.

If every external order maps to one internal order, then external_order_id can be unique. If every upstream invoice maps to one invoice row, make that ID unique and use upserts.

This is especially effective when the side effect is "materialize or update a record" rather than "fire an external side effect."
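As a sketch, assuming an orders table keyed by the upstream order ID (table and column names are illustrative), the unique constraint plus an upsert does the dedup work with no separate tracking table:

```sql
create table orders (
  external_order_id text not null unique,
  customer_id text not null,
  status text not null
);

insert into orders (external_order_id, customer_id, status)
values ($1, $2, $3)
on conflict (external_order_id)
do update set status = excluded.status;
```

A redelivered event simply re-applies the same update, which lands the row in the same final state.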

Pattern 4: stable event IDs from delivery infrastructure

Good delivery infrastructure can help by attaching stable identifiers, normalizing retries, and giving you clearer observability around which logical event is being retried.

That reduces ambiguity, but it does not remove the need for consumer-side idempotency. A stable delivery ID is only useful if your consumer actually checks it before doing work.

At-least-once vs exactly-once: what is actually true

This is where a lot of teams get misled.

If someone says their webhook system is "exactly once," ask what boundary they mean.

Inside one broker? Maybe.

Across a queue acknowledgement, an HTTP request, your database commit, and a downstream API call? Usually no.

The producer can know it attempted the delivery. The consumer can know it committed a write. Neither side can retroactively make the network perfect.

That is why at-least-once delivery remains the standard design for reliable integration systems:

  • retry when the outcome is uncertain
  • include a stable event identifier
  • require the consumer to be idempotent

This is not a compromise. It is the pattern that survives contact with real distributed systems.

If you want a deeper view of what happens when retries finally stop helping, the next concept to understand is dead-letter queues for failed webhooks.

When infrastructure helps

You can absolutely implement idempotent handling yourself, and you should. This is consumer logic, not something a platform can magically do on your behalf.

Where infrastructure becomes useful is around the edges of the problem:

  • generating stable delivery IDs
  • standardizing retry behavior
  • isolating failed destinations
  • surfacing event and attempt history
  • storing dead letters and replay context

That is where a delivery layer or integration platform starts earning its keep. Meshes fits that category: it can centralize routing, retries, and delivery observability so you are not rebuilding the same transport plumbing for every destination. But the consumer still needs to treat duplicate delivery as normal and handle the idempotency key correctly.

If you are deciding whether to keep hand-rolling that delivery layer, our DIY comparison is the more relevant place to evaluate the build-vs-buy tradeoff.

The practical takeaway

Duplicate webhook delivery is not a bug to eliminate. It is a condition to design for.

The winning pattern is boring in the best possible way:

  1. accept at-least-once delivery as normal
  2. attach or consume a stable idempotency key
  3. make the business write and the dedup record succeed together
  4. return success on duplicates instead of reprocessing them

Do that consistently and duplicates stop being scary. They become just another retry outcome your system already knows how to absorb.

If this post was useful, the next two pieces that usually matter are how to implement retries without causing new failures and what to do when delivery still fails permanently.

Want reliable delivery without rebuilding the transport layer? Join Meshes and handle retries, routing, and observability in one place while your consumers stay idempotent.