How AWS EventBridge Makes Cloud-Native Event Routing Scalable
Getting a Stripe webhook into your AWS microservices stack used to require custom Lambda glue code for every integration. EventBridge changed that.
Here is the problem that every team with more than three microservices eventually runs into. A Stripe payment_intent.succeeded event needs to do five things: update the order status in your database, send a receipt email, record a revenue event in your analytics pipeline, notify your billing team's Slack channel, and trigger a fulfillment workflow. The first instinct is to put all five operations in one Lambda handler. That handler grows. Engineers stop touching it because they are afraid of breaking something. Eventually it is a thousand-line function that does everything and can be tested by nobody.
The second instinct is to write four additional Lambda handlers and add a fanout mechanism that calls them all. Now you have five handlers to maintain and a bespoke fan-out system that breaks silently when one handler times out.
EventBridge is AWS's purpose-built answer to this problem. It is worth understanding thoroughly — including where it wins decisively over alternatives and where its seams show. See the webhook vendor evaluation checklist for a broader framework.
What EventBridge Actually Does
EventBridge is a serverless event bus. Events arrive at a bus (you can have multiple). Rules on the bus evaluate against event content. Matching rules route events to one or more targets. Targets can be Lambda functions, SQS queues, SNS topics, Step Functions, API destinations (arbitrary HTTP endpoints), or a dozen other AWS services. See the AWS EventBridge documentation for the full target list.
The key property is content-based routing. Rules filter on event structure — you can route a Stripe payment_intent.succeeded to your billing Lambda and a customer.subscription.deleted to your churn analytics pipeline, and neither Lambda ever sees the other's events. The routing logic lives in the event bus, not in application code.
Fan-out is first-class. A single event can match multiple rules. Multiple targets on the same rule receive the event independently. Your Stripe payment event can simultaneously trigger your fulfillment Lambda, your analytics queue, and your receipt email Step Function — each with independent retry behavior and each with its own DLQ destination if processing fails.
{
  "source": ["stripe.webhook"],
  "detail-type": ["payment_intent.succeeded"],
  "detail": {
    "data": {
      "object": {
        "amount": [{ "numeric": [">=", 10000] }]
      }
    }
  }
}
That rule matches only payment_intent.succeeded events where the amount (in cents, per Stripe's convention) is at least $100. Your high-value order handling Lambda fires only on purchases that matter to it. Everything else passes by without invoking it.
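The matching semantics can be sketched in plain TypeScript. This is a simplified re-implementation for illustration — the real EventBridge engine supports many more operators (prefix, anything-but, exists, and so on) — but it shows what the rule above is computing:

```typescript
// Simplified sketch of how the rule above evaluates an event.
// Mimics exact-match-on-array-values plus the numeric range operator
// for this one pattern; not the actual EventBridge matching engine.
interface StripeBusEvent {
  source: string;
  "detail-type": string;
  detail: { data: { object: { amount: number } } };
}

function matchesHighValuePayment(event: StripeBusEvent): boolean {
  return (
    event.source === "stripe.webhook" &&
    event["detail-type"] === "payment_intent.succeeded" &&
    event.detail.data.object.amount >= 10000 // cents, i.e. >= $100
  );
}

const small: StripeBusEvent = {
  source: "stripe.webhook",
  "detail-type": "payment_intent.succeeded",
  detail: { data: { object: { amount: 500 } } },
};
const large: StripeBusEvent = {
  ...small,
  detail: { data: { object: { amount: 15000 } } },
};

console.log(matchesHighValuePayment(small), matchesHighValuePayment(large)); // false true
```

The point of doing this in the bus rather than in code: the filter runs before any Lambda is invoked, so the $5 coffee purchase never costs you an invocation.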
The Schema Registry: Underrated and Powerful
EventBridge's schema registry is the feature that separates it from a basic pub/sub system. When you enable schema discovery on a bus, EventBridge observes the events flowing through it and automatically infers their JSON Schema. Over time it builds a registry of every event shape your integrations produce.
That registry has two practical uses that matter for webhook integration work.
First: code binding generation. EventBridge can generate typed code bindings from discovered schemas in Java, Python, TypeScript, or Go. Instead of guessing at the structure of a Stripe webhook payload or copy-pasting JSON from the documentation, you generate a typed interface directly from events your bus has actually seen.
// Auto-generated from EventBridge schema registry
import { PaymentIntentSucceededEvent } from './schema/stripe.webhook@payment_intent.succeeded';

export const handler = async (event: PaymentIntentSucceededEvent): Promise<void> => {
  // event.detail.data.object is typed — amount, currency, customer, etc.
  const { amount, currency, customer } = event.detail.data.object;
  // ...
};
This is not theoretical convenience. Real Stripe webhook payloads have deeply nested structures with optional fields that vary by payment method. Getting those types wrong produces runtime errors that are painful to debug. The schema registry makes the types observable from production events rather than documentation.
Second: native SaaS integrations. EventBridge has built-in event sources for Shopify, Zendesk, Datadog, and a growing list of SaaS partners. These integrations deliver events directly to your EventBridge bus without you operating a receiver Lambda. For AWS-native teams using these providers, it eliminates an entire infrastructure layer.
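Partner events arrive on a source named with the `aws.partner/` prefix, followed by the partner domain and an account-specific path. A rule pattern for a hypothetical Shopify partner source might look like this — the path segment and detail-type are placeholders, not real values:

```typescript
// Event pattern for a SaaS partner event source. Partner sources follow the
// "aws.partner/<partner-domain>/<account-specific-path>" naming convention;
// the detail-type shown here is hypothetical.
const shopifyOrderPattern = {
  source: [{ prefix: "aws.partner/shopify.com/" }], // prefix matching on the partner source
  "detail-type": ["shopifyWebhook"],                // placeholder detail-type
};

console.log(shopifyOrderPattern.source[0].prefix); // aws.partner/shopify.com/
```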
Retry Policy and Dead Letter Queues
EventBridge's retry behavior is configurable per target and worth understanding precisely — it is substantively different from simple HTTP webhook delivery.
When a target Lambda returns an error or times out, EventBridge retries with exponential backoff — starting at 1 second, doubling, up to 24 hours of retry window. You configure the maximum retry attempts (default 185) and the maximum event age. When a target exhausts retries or the event ages out, EventBridge sends the event to a configurable DLQ destination — an SQS queue or SNS topic.
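As a rough sketch of how those two limits interact — this assumes a plain doubling schedule with an hourly cap, since AWS does not publish the exact delay curve or jitter it applies:

```typescript
// Approximate retry schedule: delays double from 1s, capped at maxDelaySeconds,
// and stop once the next attempt would exceed the event's maximum age.
function backoffSchedule(
  maxAttempts: number,
  maxEventAgeSeconds: number,
  maxDelaySeconds = 3600, // assumed cap; AWS does not document the exact value
): number[] {
  const delays: number[] = [];
  let delay = 1;
  let elapsed = 0;
  for (let i = 0; i < maxAttempts; i++) {
    if (elapsed + delay > maxEventAgeSeconds) break; // event ages out, goes to DLQ
    delays.push(delay);
    elapsed += delay;
    delay = Math.min(delay * 2, maxDelaySeconds);
  }
  return delays;
}

console.log(backoffSchedule(3, 7200)); // [ 1, 2, 4 ]
```

With `retryAttempts: 3` and a two-hour maximum event age — the configuration in the CDK example that follows — the event gets three quick retries and then lands in the DLQ.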
// CDK: EventBridge rule with retry policy and DLQ
import { Duration } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as sqs from 'aws-cdk-lib/aws-sqs';

// Assumes `stack`, `stripeBus`, and `fulfillmentLambda` are defined elsewhere.
const dlq = new sqs.Queue(stack, 'FulfillmentDlq');

const fulfillmentRule = new events.Rule(stack, 'FulfillmentRule', {
  eventBus: stripeBus,
  eventPattern: {
    source: ['stripe.webhook'],
    detailType: ['payment_intent.succeeded'],
  },
  targets: [
    new targets.LambdaFunction(fulfillmentLambda, {
      deadLetterQueue: dlq,
      retryAttempts: 3,
      maxEventAge: Duration.hours(2),
    }),
  ],
});
The contrast with plain HTTP webhook delivery is stark. A webhook delivered to your handler gets one shot (plus whatever retries the provider configures). An event routed through EventBridge gets up to 185 retry attempts over 24 hours, with automatic DLQ isolation on exhaustion. The retry infrastructure is not your problem to build.
EventBridge vs Kafka: What Each Actually Wins
This comparison comes up frequently and the answer depends entirely on your workload profile.
Kafka's strengths are real. Partition-based ordering guarantees that events in the same partition arrive in sequence. See Apache Kafka documentation for the full technical model. Consumer groups allow independent consumers to read at their own pace — your analytics team can process events at batch speed while your fulfillment service processes in real time, both reading from the same topic. Retention-based replay is Kafka's most powerful property: you can rewind a consumer to any point in the event log and replay from there, days or weeks after original delivery. For high-throughput streams — millions of events per second — Kafka scales horizontally in ways EventBridge does not.
EventBridge's strengths are also real. Serverless pricing: you pay per event, with zero base cost and zero infrastructure to operate. Rule-based content filtering at the bus level: Kafka consumers receive all events on a topic and filter in application code; EventBridge rules filter before delivery, which matters when you have Lambda targets that cost money per invocation. Native SaaS integrations. Schema registry with code generation. First-class AWS service integration — routing directly to Step Functions, Kinesis, SQS, API Gateway without intermediate Lambda glue.
The honest split: EventBridge wins for serverless fan-out on AWS-native stacks at moderate volumes. Kafka wins for high-throughput ordered streams where replay-from-offset is a first-class requirement and you are willing to operate the cluster. A team doing 10 million Stripe events per day with strict ordering requirements for payment reconciliation should be looking at Kafka. A team doing 100K events per day routing to five different microservices should be looking at EventBridge.
Neither replaces the other. They solve adjacent problems at different throughput and operational cost curves.
The Gap EventBridge Does Not Cover
EventBridge does not receive raw HTTP webhooks. There is no EventBridge endpoint you give to Stripe and say "send webhooks here." Events enter the EventBridge bus from producer code — a Lambda function, an SDK call, a native SaaS integration.
Which means: you still need something at the HTTP boundary. Stripe sends a POST to a URL. Something has to receive that POST, verify the signature, and put the event onto the bus. That something is typically an API Gateway endpoint backed by a Lambda function:
Stripe
│
▼
API Gateway + Lambda (verify sig, put event, return 200)
│
▼
EventBridge bus
│
├──► Fulfillment Lambda
├──► Analytics SQS Queue
└──► Email Step Function
That receiver Lambda is simple. But it is infrastructure you operate. It has IAM roles, CloudWatch logs, concurrency limits, cold starts. And critically: it is where signature verification happens, and where you decide what metadata to attach to the event before it enters the bus.
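One way to attach boundary metadata before the event enters the bus — note that the `meta`, `receivedAt`, and `deliveryId` fields are our own convention for illustration, not anything EventBridge requires:

```typescript
// Builds a PutEvents entry from an already-verified Stripe event, enriched
// with boundary metadata. The meta field names are illustrative conventions.
interface PutEventsEntry {
  Source: string;
  DetailType: string;
  Detail: string;
  EventBusName: string;
}

function toBusEntry(
  stripeEvent: { type: string; [key: string]: unknown },
  deliveryId: string,
  busName: string,
): PutEventsEntry {
  return {
    Source: "stripe.webhook",
    DetailType: stripeEvent.type,
    Detail: JSON.stringify({
      ...stripeEvent,
      meta: { receivedAt: new Date().toISOString(), deliveryId },
    }),
    EventBusName: busName,
  };
}

const entry = toBusEntry({ type: "payment_intent.succeeded" }, "del_123", "stripe-bus");
console.log(entry.DetailType); // payment_intent.succeeded
```

Deciding what goes into that metadata at the boundary is exactly the kind of choice that is easy to get right on day one and painful to retrofit later.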
HookTunnel is that boundary receiver. Your webhook provider sends to a stable HookTunnel URL. HookTunnel captures the raw HTTP request — every header, the exact body bytes, the delivery timestamp — and forwards to your actual handler. That handler can be your EventBridge producer Lambda.
The boundary capture pays dividends in two scenarios that EventBridge does not address.
Scenario 1: Debugging EventBridge rejections. You have an EventBridge rule that should match a Stripe payment_intent.succeeded event but is not firing. EventBridge received the event, the rule evaluated, and it did not match: the schema was not what you expected. What was the exact payload that arrived? Without capture at the HTTP boundary, you are looking at CloudTrail logs and guessing. With HookTunnel, you replay the exact HTTP request that the rule rejected and inspect it field by field until you find the mismatch.
Scenario 2: Replay after a bad deploy. You deployed a bug in your fulfillment Lambda. Events went into the EventBridge bus and were delivered to your broken Lambda, which failed them. EventBridge retried up to your configured maximum. Events are now in the DLQ. You can requeue from the DLQ — but if the event structure in the DLQ is the EventBridge envelope rather than the original Stripe payload, replaying involves unwrapping. If you want to replay the original HTTP request exactly as Stripe sent it, HookTunnel's replay is the right tool — it resends the original POST to your receiver endpoint, which puts the event onto the bus fresh.
// HookTunnel replay → EventBridge producer
// Your receiver Lambda — same code, replayed event triggers the same path
import Stripe from 'stripe';
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const eventbridge = new EventBridgeClient({});

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // Verify the signature against the raw body before trusting the payload
  const stripeEvent = stripe.webhooks.constructEvent(
    event.body!,
    event.headers['stripe-signature']!,
    process.env.STRIPE_WEBHOOK_SECRET!
  );

  await eventbridge.send(new PutEventsCommand({
    Entries: [{
      Source: 'stripe.webhook',
      DetailType: stripeEvent.type,
      Detail: JSON.stringify(stripeEvent),
      EventBusName: process.env.EVENT_BUS_NAME,
    }],
  }));

  return { statusCode: 200, body: JSON.stringify({ received: true }) };
};
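If you redrive from the DLQ instead, the SQS message body is the EventBridge envelope, and the original Stripe payload has to be unwrapped from its `detail` field. A minimal sketch, assuming the envelope follows EventBridge's standard event structure:

```typescript
// Unwraps an EventBridge envelope (as found in an SQS DLQ message body)
// back into the original Stripe event stored in its `detail` field.
interface EventBridgeEnvelope {
  source: string;
  "detail-type": string;
  detail: { type: string; [key: string]: unknown };
}

function unwrapDlqMessage(sqsBody: string): { type: string } {
  const envelope: EventBridgeEnvelope = JSON.parse(sqsBody);
  return envelope.detail; // the event exactly as the receiver Lambda published it
}

const body = JSON.stringify({
  source: "stripe.webhook",
  "detail-type": "payment_intent.succeeded",
  detail: { type: "payment_intent.succeeded", id: "evt_123" },
});

console.log(unwrapDlqMessage(body).type); // payment_intent.succeeded
```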
Pricing and Practical Considerations
EventBridge pricing is $1.00 per million custom events published. For teams doing under a million webhook events per month, this rounds to zero. For teams doing tens of millions, it is still competitive with the Lambda + SQS alternative. Schema discovery is billed separately, at roughly $0.10 per million events ingested for discovery (with a free tier), which is typically negligible at webhook volumes.
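Tying this back to the volume split in the Kafka comparison, the arithmetic at the $1.00-per-million rate is simple:

```typescript
// Monthly EventBridge publish cost in USD at $1.00 per million custom events.
function monthlyPublishCostUsd(eventsPerMonth: number, ratePerMillion = 1.0): number {
  return (eventsPerMonth / 1_000_000) * ratePerMillion;
}

console.log(monthlyPublishCostUsd(3_000_000));   // 3   — the 100K-events/day team
console.log(monthlyPublishCostUsd(300_000_000)); // 300 — the 10M-events/day team
```

Even at 10 million events per day, the publish cost is a few hundred dollars a month; the decision between EventBridge and Kafka at that scale hinges on ordering and replay requirements, not on the per-event fee.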
The operational cost of EventBridge is genuinely low. You are not operating a cluster. Cold starts on Lambda targets are the main operational concern, and those are addressed by provisioned concurrency if your latency budget is tight.
HookTunnel free tier gives you one webhook endpoint with 24-hour history at no cost. The flat $19/month Pro plan adds stable URL, full history, and replay. The two together — HookTunnel at the boundary, EventBridge for routing — is a reasonable architecture for a team that wants forensics and fan-out without operating Kafka or building a custom ingestion pipeline. If your team already uses Stripe webhooks, see our guide on Stripe duplicate webhook events for patterns that apply equally to EventBridge fan-out scenarios.
The Bottom Line
EventBridge is excellent for AWS-native event routing. The schema registry, content-based filtering, and native SaaS integrations are genuinely well-designed features that reduce operational complexity for teams living in the AWS ecosystem. It is not the right tool for high-throughput ordered streams — that is Kafka's territory — but for serverless fan-out at moderate volumes, it is hard to beat.
What EventBridge does not give you is visibility at the HTTP boundary. It does not store the raw webhook request. It does not give you payload-level forensics when a rule mismatch causes silent event loss. It does not give you replay of the original HTTP request.
There is a new tool that covers exactly that boundary, at $19/month, without requiring an AWS account. EventBridge handles what happens after the event enters your system. HookTunnel handles what arrives at the edge. They are not in competition. Make your own call — but it is worth knowing both exist before you assume the receiver Lambda is a solved problem.
Stop guessing. Start proving.
Generate a webhook URL in one click. No signup required.
Get started free →