Comparisons·9 min read·2025-09-15·Colm Byrne, Technical Product Manager

How RabbitMQ's Acknowledgement Model Guarantees At-Least-Once Delivery

The message stays in the queue until you acknowledge it. That one design decision makes RabbitMQ one of the most reliable message brokers ever built. Here's how it shapes webhook architectures.

RabbitMQ has been running in production at serious companies since 2007. Nearly two decades of uptime, incident post-mortems, and hard-won operational knowledge have accumulated around one core design decision: the message stays in the queue until the consumer explicitly acknowledges it. The RabbitMQ documentation covers the full ack model in detail.

That sounds simple. The implications are not.

When a consumer pulls a message and starts processing it, RabbitMQ does not delete the message. It marks it as unacknowledged. If the consumer crashes mid-processing — connection drops, process killed, exception thrown — RabbitMQ detects the disconnection and requeues the message. The next available consumer picks it up. No message coordinator, no separate retry scheduler, no application code required. The broker handles it.

This is at-least-once delivery at the protocol level. It is one of the cleanest expressions of that guarantee ever built into a production system.


The Acknowledgement Model in Detail

AMQP 0-9-1, the protocol RabbitMQ implements, exposes three methods that matter for reliability:

basic.ack — "I processed this successfully, you can delete it."

basic.nack — "I could not process this." With requeue=true the message goes back on the queue; with requeue=false it is routed to the dead-letter exchange (or discarded if none is configured).

basic.reject — The original AMQP method with the same semantics, but limited to a single message. basic.nack is a RabbitMQ extension that adds bulk rejection.

Here is a consumer that demonstrates the full flow:

import amqp from 'amqplib';

async function startConsumer() {
  const connection = await amqp.connect(process.env.RABBITMQ_URL);
  const channel = await connection.createChannel();

  const queue = 'webhook.stripe';

  await channel.assertQueue(queue, {
    durable: true, // survives broker restart
    arguments: {
      'x-dead-letter-exchange': 'webhook.dlx', // poison messages go here
    },
  });

  // prefetch: don't deliver more than 5 unacked messages to this consumer
  // prevents one slow consumer from accumulating an unbounded backlog
  await channel.prefetch(5);

  channel.consume(queue, async (msg) => {
    if (!msg) return;

    const payload = JSON.parse(msg.content.toString());

    try {
      await processWebhookEvent(payload);

      // Only ack after successful processing
      // If this line is never reached, RabbitMQ requeues the message
      channel.ack(msg);
    } catch (err) {
      logger.error({ err, eventId: payload.id }, 'Webhook processing failed');

      const attempts = payload._attempts || 0;

      if (attempts >= 3) {
        // Poison message: stop retrying. requeue=false routes it to the DLX.
        channel.nack(msg, false, false);
      } else {
        // nack with requeue=true would redeliver the ORIGINAL message, so a
        // counter bumped on the parsed copy is lost. Republish a copy with
        // the incremented counter, then drop the old delivery.
        channel.sendToQueue(queue, Buffer.from(JSON.stringify({
          ...payload,
          _attempts: attempts + 1,
        })), { persistent: true });
        channel.ack(msg);
      }
    }
  });
}

The prefetch setting deserves attention. Without it, RabbitMQ will push all available messages to the consumer as fast as it can. If your consumer is slow — doing database writes, calling external APIs — you end up with thousands of unacked messages sitting in the consumer's memory while the queue appears empty. prefetch(5) tells RabbitMQ: send me at most 5 unacked messages at a time. The rest stay in the queue where they belong.

The ack goes out only after processWebhookEvent resolves successfully. If processing keeps failing, the consumer republishes the event with an incremented attempt counter; once that counter reaches the limit, nack(msg, false, false) routes the message out of the main queue and into the dead-letter exchange. That is poison message handling built into the ack model, with no separate mechanism required.


At-Least-Once vs. At-Most-Once

The alternative to explicit acks is consuming with noAck: true (amqplib's auto-acknowledge option), where RabbitMQ deletes the message the moment it is delivered to the consumer — before the consumer has done anything with it.

That is at-most-once delivery. The message will never be processed twice. It also might never be processed at all, because if the consumer crashes after receiving the message but before processing it, the message is gone.
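To make the difference concrete, here is a toy simulation of the two ack modes. This is not the amqplib API, just the delivery semantics reduced to a plain array standing in for the queue:

```javascript
// Toy model of one delivery under each ack mode (not the amqplib API).
// Under auto-ack the broker deletes the message on delivery, so a crash
// in the handler loses it. Under manual ack, the "ack" is simply not
// requeueing: a crash puts the message back for the next consumer.
function deliver(queue, { autoAck }, handler) {
  const msg = queue.shift(); // broker hands the message to the consumer

  if (autoAck) {
    try { handler(msg); } catch { /* already deleted: message lost */ }
  } else {
    try {
      handler(msg); // success: message stays deleted (the ack)
    } catch {
      queue.unshift(msg); // crash: broker requeues for redelivery
    }
  }
}
```

Run a throwing handler against both modes and the queue lengths diverge: the auto-ack queue ends up empty with the event unprocessed, while the manual-ack queue gets the message back for redelivery.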

For webhook events that represent business transactions — a payment confirmed, an order placed, a subscription cancelled — at-most-once is not acceptable. Either the event was processed or it was not, and "we received it but dropped it on crash" is a real failure mode in production. This is the root cause of many silent webhook failures that are hard to diagnose.

At-least-once means you will occasionally process the same event twice (on redelivery after consumer crash). That is why idempotency keys matter: use the Stripe event ID, the Shopify event ID, or any stable provider-assigned identifier as a unique key in your processing logic. See the Stripe webhook documentation for how Stripe assigns stable event IDs.

// Idempotent insert — second attempt with same event ID does nothing
await db.query(`
  INSERT INTO processed_events (event_id, processed_at)
  VALUES ($1, NOW())
  ON CONFLICT (event_id) DO NOTHING
`, [payload.stripeEventId]);

At-least-once plus idempotency equals effective exactly-once processing for your application state — without requiring exactly-once guarantees from the broker itself.
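The same guard can be sketched in application code. Here an in-memory Set stands in for the processed_events table, and the event shape and handler are illustrative:

```javascript
// Sketch of an idempotent processing wrapper. The Set stands in for the
// processed_events table; in production the check must be the atomic
// ON CONFLICT insert shown above, not a separate read-then-write.
const seen = new Set();

function processOnce(event, handler) {
  if (seen.has(event.id)) return false; // redelivery: no-op
  handler(event);                       // at-least-once side effects
  seen.add(event.id);                   // mark done only after success
  return true;
}
```

Note the ordering: the event is marked done only after the handler succeeds, so a crash mid-handler leaves it eligible for the redelivery that RabbitMQ will send.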


RabbitMQ vs. AWS SQS

AWS SQS is the other dominant queue in this space, and the comparison is instructive. For context on SQS operational overhead, see our post on SQS DLQ redrive overhead. The CloudAMQP blog has deep dives on RabbitMQ production patterns.

| | RabbitMQ | AWS SQS Standard | AWS SQS FIFO |
|---|---|---|---|
| Delivery guarantee | At-least-once | At-least-once | Exactly-once processing |
| Ordering | Per-queue (not guaranteed) | No ordering | Per message group ID |
| Dead-letter | Dead-letter exchange | Dead-letter queue | Dead-letter queue |
| Hosting | Self-hosted (or CloudAMQP) | Fully managed | Fully managed |
| Exchange routing | Rich (direct, topic, fanout, headers) | None — publish to queue directly | None |
| Throughput | High | Very high | Lower (dedup overhead) |
| Lock-in | None (AMQP is a standard) | AWS-specific | AWS-specific |

SQS Standard is at-least-once like RabbitMQ, but without exchange routing. You publish to a queue directly. If you need one event to fan out to multiple queues — say, both an order processor and an analytics collector — SQS requires SNS in front of it. RabbitMQ handles fan-out natively with a fanout exchange: publish once, deliver to every bound queue.
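A toy in-memory model makes the fanout semantics concrete. This is not the amqplib API (there, bindQueue and publish do this work inside the broker), and the queue names are illustrative:

```javascript
// Toy model of a fanout exchange (not the amqplib API). Every bound
// queue receives its own reference to each published message.
class FanoutExchange {
  constructor() { this.queues = []; }
  bind(queue) { this.queues.push(queue); }                  // like bindQueue
  publish(msg) { this.queues.forEach((q) => q.push(msg)); } // one publish, all queues
}

const orders = [];    // stand-in for the order-processor queue
const analytics = []; // stand-in for the analytics-collector queue
const exchange = new FanoutExchange();
exchange.bind(orders);
exchange.bind(analytics);
exchange.publish({ type: 'order.created' });
// Both queues now hold the event; neither consumer blocks the other.
```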

SQS FIFO adds exactly-once processing semantics with a 5-minute deduplication window, and ordering within a message group ID. It costs more and has lower throughput limits, but within that window it removes the need for application-level deduplication of duplicate publishes.
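The dedup window behaves roughly like this sketch. It is a toy model, not the SQS API; in a real FIFO queue the dedup ID is the MessageDeduplicationId parameter or a content hash:

```javascript
// Toy model of SQS FIFO deduplication (not the SQS API). A second
// publish with the same dedup ID inside the 5-minute window is silently
// dropped; after the window expires it is accepted again.
const WINDOW_MS = 5 * 60 * 1000;
const recent = new Map(); // dedupId -> first-seen timestamp

function accept(dedupId, now = Date.now()) {
  const firstSeen = recent.get(dedupId);
  if (firstSeen !== undefined && now - firstSeen < WINDOW_MS) {
    return false; // duplicate within the window: dropped
  }
  recent.set(dedupId, now);
  return true;
}
```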

The practical decision: teams already on AWS who want zero ops overhead typically choose SQS. Teams who want full broker control, rich exchange routing, or who are not AWS-committed typically choose RabbitMQ. Both are serious, proven infrastructure. RabbitMQ has a larger operational footprint; SQS trades that for managed simplicity with some loss of routing expressiveness.


Where RabbitMQ's Ack Model Doesn't Help

RabbitMQ's ack model is excellent. It handles everything that happens after a message enters the queue. The webhook debugging guide covers the complementary tools that handle what happens at the HTTP layer.

It says nothing about what happens before that.

A Stripe payment_intent.succeeded event hits your webhook receiver at 14:03:07. Your receiver is an Express app running on a $10 Render instance that went to sleep after 15 minutes of inactivity. The cold start takes 4 seconds. By then, Stripe's socket has timed out.

The webhook is never acknowledged by your HTTP server. Stripe queues a retry for 15 minutes later.

Or: your receiver is healthy, but your RabbitMQ connection is down for a rolling restart. The webhook arrives. Your handler calls channel.sendToQueue(...). It fails. You return 200 anyway (because someone wrote a try/catch that swallows the error). The message was never enqueued. Stripe thinks it was delivered.

RabbitMQ's ack model is a guarantee that applies inside the broker. The boundary where webhooks enter your system is outside the broker entirely. It is a raw HTTP request hitting a server that may or may not be healthy, connected, or even running.


The Architecture That Combines Both

The cleanest approach treats the HTTP boundary and the queue as separate concerns handled by separate tools.

Stripe ──► HookTunnel (stable URL, captures payload) ──► webhook receiver ──► RabbitMQ queue
                                                                              │
                                                              Consumer pulls message
                                                                              │
                                                              Processes event (ack on success)
                                                                              │
                                                              DLX on repeated failure

HookTunnel sits at the HTTP layer. It receives the webhook from Stripe, stores the full payload with headers, and forwards it to your receiver. If your receiver is down, HookTunnel holds the request and you can replay it — via the dashboard or API — to any target URL once the receiver is healthy.

The ack model then handles everything inside the broker boundary, which is what it was designed to do.

// Your receiver — now just an enqueue operation
// (channel is an amqplib channel opened once at application startup)
app.post('/webhooks/stripe', express.raw({ type: 'application/json' }), async (req, res) => {
  let event;
  try {
    event = stripe.webhooks.constructEvent(
      req.body,
      req.headers['stripe-signature'],
      process.env.STRIPE_WEBHOOK_SECRET
    );
  } catch (err) {
    return res.status(400).send(`Signature verification failed: ${err.message}`);
  }

  // Acknowledge Stripe immediately
  res.status(200).send('ok');

  // Publish to RabbitMQ — application concern
  try {
    await channel.sendToQueue(
      'webhook.stripe',
      Buffer.from(JSON.stringify({
        id: event.id,
        type: event.type,
        data: event.data.object,
        _attempts: 0,
      })),
      { persistent: true } // survives broker restart
    );
  } catch (err) {
    // If enqueue fails, HookTunnel has the original payload.
    // You can replay from HookTunnel's dashboard after fixing the connection.
    logger.error({ err, eventId: event.id }, 'Failed to enqueue webhook event');
  }
});

The persistent: true flag on the message tells RabbitMQ to write the message to disk, not just memory. Combined with durable: true on the queue, messages survive a broker restart. This is the full durability stack: HTTP boundary capture + disk-persisted queue message + explicit ack before deletion.


The Correct Mental Model

RabbitMQ's ack model is one of the best reliability mechanisms in distributed systems. "The message stays in the queue until you explicitly release it" is an invariant you can build real systems on. Nearly two decades of production deployments confirm this.

But the ack model begins at message publication. Getting the message into the queue in the first place requires a healthy HTTP receiver, a live RabbitMQ connection, and a successful channel.sendToQueue call.

The reliability boundary of a webhook-driven system starts at HTTP — before the queue sees anything. That is a separate layer with its own failure modes: cold starts, network partitions, provider timeouts, connection pool exhaustion. RabbitMQ does not operate at that layer. Something else has to.

HookTunnel is not a message broker. It does not compete with RabbitMQ. It is the HTTP layer that sits in front: stable URL, payload capture, delivery history, Pro replay when your receiver was down. RabbitMQ takes over from there, and it is very good at what it does.

Two layers. Both necessary. Neither sufficient alone.


HookTunnel is free to start — stable URL, full HTTP capture, 24-hour request history. Pro at $19/month adds unlimited history and replay to any target URL. Start free.

Stop guessing. Start proving.

Generate a webhook URL in one click. No signup required.

Get started free →