# Messaging & events
Evolve uses event-driven messaging for asynchronous communication between services. When something noteworthy happens (a product is published, a payment succeeds, an order is confirmed), the responsible service publishes an event to a message queue. Other services subscribe to the events they care about and react accordingly: updating search indexes, sending emails, or transitioning order states.
This decouples services from each other: the checkout service doesn't need to know about the email service; it just publishes an event and moves on.
## Architecture
Events flow through the system in two ways:
- commercetools subscriptions deliver platform events (resource changes, message types) directly to a cloud queue
- Internal events are published by services to a shared event bus after processing commercetools events or handling webhooks
A service receives a raw commercetools event (e.g.,
ResourceCreated for a product), fetches the full resource, validates
it against a Zod schema, and publishes a normalized internal event
(e.g., product-created) to the internal event bus. Downstream
consumers only deal with the clean, validated internal events.
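The fetch–validate–publish flow above can be sketched as follows. This is an illustrative sketch, not the real service code: `fetchProduct`, `publishInternal`, and the event shapes are assumptions, and the inline type guard stands in for the Zod schema the services actually use.

```typescript
// Hypothetical shapes for a commercetools subscription payload and the
// normalized internal event it is turned into.
type CTResourceEvent = { type: string; resource: { typeId: string; id: string } };
type InternalEvent = { type: "product-created"; data: { id: string; name: string } };

// Stands in for the generated Zod schema: reject anything malformed before
// it reaches the internal event bus.
const isProduct = (value: unknown): value is { id: string; name: string } =>
  typeof value === "object" && value !== null &&
  typeof (value as any).id === "string" && typeof (value as any).name === "string";

async function handleResourceCreated(
  event: CTResourceEvent,
  fetchProduct: (id: string) => Promise<unknown>,
  publishInternal: (e: InternalEvent) => Promise<void>,
): Promise<void> {
  // 1. Fetch the full resource — the subscription payload only carries a reference.
  const product = await fetchProduct(event.resource.id);
  // 2. Validate before anything leaves this service.
  if (!isProduct(product)) throw new Error(`invalid product ${event.resource.id}`);
  // 3. Publish the normalized internal event.
  await publishInternal({ type: "product-created", data: product });
}
```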
## Publishing messages
The @evolve-packages/messaging package provides a publishMessage
factory that abstracts away the cloud provider. You give it a target
URL and it returns a publish function:
```typescript
import { publishMessage } from "@evolve-packages/messaging";

const publish = publishMessage(config.INTERNAL_EVENTS_TARGET);

await publish(event, {
  type: event.type,
  origin: "catalog-commercetools",
  messageGroupId: resource.id,
});
```
The factory inspects the target URL to determine which cloud provider to use. This means the same service code runs on any cloud without modification. The target is an environment variable that differs per deployment.
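The URL inspection could look roughly like this. This is a hypothetical sketch of the dispatch logic, not the actual implementation in @evolve-packages/messaging; the patterns follow the target formats listed in the provider table below.

```typescript
type Provider = "eventbridge" | "sqs" | "servicebus" | "eventgrid" | "pubsub" | "redis";

// Map a target string to a provider by its shape. A Service Bus connection
// string also contains the servicebus.windows.net hostname, so the
// substring check covers both forms.
function detectProvider(target: string): Provider {
  if (target.startsWith("arn:aws:events:")) return "eventbridge";
  if (/^https:\/\/sqs\.[^.]+\.amazonaws\.com\//.test(target)) return "sqs";
  if (target.includes(".servicebus.windows.net")) return "servicebus";
  if (target.includes(".eventgrid.azure.net")) return "eventgrid";
  if (/^projects\/[^/]+\/topics\/[^/]+$/.test(target)) return "pubsub";
  if (target.startsWith("redis://")) return "redis";
  throw new Error(`unrecognized messaging target: ${target}`);
}
```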
Every event is validated against a Zod schema before publishing. The
schemas are generated from JSON Schema definitions in the schemas/
directory and provide both compile-time types and runtime validation.
### Provider implementations
| Provider | Target format | Transport |
|---|---|---|
| AWS EventBridge | arn:aws:events:... | PutEventsCommand |
| AWS SQS | https://sqs.*.amazonaws.com/... | SendMessageCommand |
| Azure Service Bus | *.servicebus.windows.net or connection string | ServiceBusClient |
| Azure Event Grid | *.eventgrid.azure.net | CloudEvent via EventGridPublisherClient |
| GCP Pub/Sub | projects/*/topics/* | PubSub.topic().publishMessage() |
| Redis | redis://host/queue | lPush (development only) |
All providers pass the event type and origin as message attributes
(or equivalent metadata), and use messageGroupId for ordering
guarantees where the transport supports it.
Redis is used for local development only. The
startDevelopmentListener polls the queue with brPop and supports
a dead-letter queue target for failed messages.
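A development listener loop like this could be sketched as follows. This is an assumption-laden illustration, not the real startDevelopmentListener: `QueueClient` stands in for the redis client (brPop blocks until a message arrives or the timeout elapses), and the dead-letter queue handling mirrors what the cloud transports do.

```typescript
// Minimal interface over the redis commands the loop needs.
interface QueueClient {
  brPop(queue: string, timeoutSeconds: number): Promise<string | null>;
  lPush(queue: string, message: string): Promise<void>;
}

async function pollOnce(
  client: QueueClient,
  queue: string,
  deadLetterQueue: string,
  handle: (event: unknown) => Promise<void>,
): Promise<void> {
  const raw = await client.brPop(queue, 5);
  if (raw === null) return; // timed out, nothing queued
  try {
    await handle(JSON.parse(raw));
  } catch {
    // Mirror the cloud transports: park failed messages on a DLQ
    // instead of losing them.
    await client.lPush(deadLetterQueue, raw);
  }
}
```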
## Handling messages
Each cloud provider has a corresponding handler factory that wraps the cloud-specific trigger (Lambda, Azure Function, Cloud Run HTTP) into a provider-agnostic callback:
```typescript
// AWS Lambda consuming from SQS
const sqsHandler = createSQSHandler(handleInternalEvent);
export const handler = lambdaHandlerFactory("event-handler", () => sqsHandler);

// Azure Function consuming from Service Bus
const serviceBusHandler = createServiceBusHandler(handleEvent);
app.serviceBusQueue("event-queue", { handler: serviceBusHandler });

// GCP Cloud Run consuming from Pub/Sub push
const pubsubHandler = createPubSubHandler(handleEvent);
```
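What such a factory does can be sketched for the SQS case. This is a hypothetical simplification of what a factory like createSQSHandler might look like, with the event types pared down for illustration.

```typescript
type CloudEventsPayload = { type: string; data: unknown };
type SQSEvent = { Records: Array<{ body: string }> };

// Unwrap each SQS record, parse the CloudEvents JSON body, and hand it to
// the provider-agnostic callback. A thrown error leaves the message on the
// queue for the transport's retry machinery.
const createSQSHandler =
  (handle: (event: CloudEventsPayload) => Promise<void>) =>
  async (sqsEvent: SQSEvent): Promise<void> => {
    for (const record of sqsEvent.Records) {
      await handle(JSON.parse(record.body) as CloudEventsPayload);
    }
  };
```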
Inside the handler, a switch on event.type dispatches to the
appropriate business logic:
```typescript
const handleInternalEvent = async (event: CloudEventsPayload) => {
  switch (event.type) {
    case "com.commercetools.product.change.ResourceCreated":
      return handleProductCreate(event.data);
    case "com.commercetools.product.message.ProductPublished":
      return handleProductPublished(event.data);
    // ...
  }
};
```
Failed messages are retried by the cloud transport (SQS visibility timeout, Service Bus lock duration). After the maximum retry count (typically 4 attempts), messages move to a dead-letter queue for investigation.
## Event types
### Internal events
These normalized events are published to the internal event bus after processing commercetools events:
| Event | Published by | Consumed by |
|---|---|---|
| product-created | Catalog | Search sync |
| product-updated | Catalog | Search sync |
| product-published | Catalog | Search sync |
| product-unpublished | Catalog | Search sync |
| product-deleted | Catalog | Search sync |
| product-category-created | Catalog | CMS |
| product-category-updated | Catalog | CMS |
| product-category-deleted | Catalog | CMS |
| content-modified-event | CMS | CDN invalidation |
| account-created | Account | |
| order-confirmed | Checkout | |
| order-shipped | Checkout | |
| password-reset-request | Account | |
| password-reset-complete | Account | |
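The event names in the table above can be collected as a readonly tuple, so a runtime list and a compile-time union fall out of one definition. This is an illustrative sketch, not the generated schema types the services actually use.

```typescript
const internalEventTypes = [
  "product-created", "product-updated", "product-published",
  "product-unpublished", "product-deleted",
  "product-category-created", "product-category-updated", "product-category-deleted",
  "content-modified-event",
  "account-created", "order-confirmed", "order-shipped",
  "password-reset-request", "password-reset-complete",
] as const;

type InternalEventType = (typeof internalEventTypes)[number];

// Runtime guard derived from the same list, e.g. for rejecting unknown
// event types at a queue boundary.
const isInternalEventType = (t: string): t is InternalEventType =>
  (internalEventTypes as readonly string[]).includes(t);
```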
### commercetools subscription events
These are delivered directly from commercetools to service queues:
| Message type | Resource | Subscribed by |
|---|---|---|
| ResourceCreated | product, category | Catalog |
| ResourceUpdated | product, category | Catalog |
| ResourceDeleted | product, category | Catalog |
| ProductPublished | product | Catalog (search sync) |
| ProductUnpublished | product | Catalog (search sync) |
| ProductDeleted | product | Catalog (search sync) |
| ProductPriceDiscountsSet | product | Catalog (search sync) |
| PaymentStatusStateTransition | payment | Checkout |
| OrderStateChanged | order | Checkout, Monitoring |
| OrderShipmentStateChanged | order | Checkout |
## commercetools subscriptions
commercetools subscriptions deliver events to cloud-native queues in CloudEvents 1.0 format. Each service configures its subscriptions in Terraform:
```hcl
resource "commercetools_subscription" "internal" {
  format {
    type                 = "CloudEvents"
    cloud_events_version = "1.0"
  }

  destination {
    type      = "SQS"
    queue_url = aws_sqs_queue.events.id
  }

  changes {
    resource_type_ids = ["category", "product"]
  }
}
```
Subscriptions can listen for resource changes (any create, update, or
delete on a resource type) or specific message types (like
ProductPublished or OrderStateChanged). Each subscription targets
the appropriate cloud queue for its deployment: SQS on AWS, Service
Bus on Azure, Pub/Sub on GCP.
Every queue is configured with a dead-letter queue for messages that fail after the maximum number of delivery attempts.