
# Messaging & events

Evolve uses event-driven messaging for asynchronous communication between services. When something noteworthy happens (a product is published, a payment succeeds, an order is confirmed), the responsible service publishes an event to a message queue. Other services subscribe to the events they care about and react accordingly: updating search indexes, sending emails, or transitioning order states.

This decouples services from each other: the checkout service doesn't need to know about the email service; it just publishes an event and moves on.

## Architecture

Events flow through the system in two ways:

  1. commercetools subscriptions deliver platform events (resource changes, message types) directly to a cloud queue
  2. Internal events are published by services to a shared event bus after processing commercetools events or handling webhooks

A service receives a raw commercetools event (e.g., ResourceCreated for a product), fetches the full resource, validates it against a Zod schema, and publishes a normalized internal event (e.g., product-created) to the internal event bus. Downstream consumers only deal with the clean, validated internal events.
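
That normalization step can be sketched as follows. The payload shape, the mapping table, and the helper names here are illustrative assumptions for the sketch, not the services' actual code:

```typescript
// Hypothetical shapes standing in for the real commercetools payload and
// internal event types.
type RawCtEvent = { type: string; resource: { typeId: string; id: string } };
type InternalEvent = { type: string; data: { id: string } };

// Map a raw commercetools notification type to a normalized internal event
// type. (The real mapping also takes resource.typeId into account, e.g. to
// distinguish product-created from product-category-created.)
const toInternalType = (raw: RawCtEvent): string | undefined => {
  const map: Record<string, string> = {
    ResourceCreated: "product-created",
    ResourceUpdated: "product-updated",
    ResourceDeleted: "product-deleted",
    ProductPublished: "product-published",
  };
  return map[raw.type];
};

const normalize = (raw: RawCtEvent): InternalEvent => {
  const type = toInternalType(raw);
  if (!type) throw new Error(`Unhandled commercetools event: ${raw.type}`);
  // In the real service this is where the full resource is fetched and
  // validated against its Zod schema before the internal event is published.
  return { type, data: { id: raw.resource.id } };
};
```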

## Publishing messages

The `@evolve-packages/messaging` package provides a `publishMessage` factory that abstracts away the cloud provider. You give it a target URL and it returns a publish function:

```typescript
import { publishMessage } from "@evolve-packages/messaging";

const publish = publishMessage(config.INTERNAL_EVENTS_TARGET);

await publish(event, {
  type: event.type,
  origin: "catalog-commercetools",
  messageGroupId: resource.id,
});
```

The factory inspects the target URL to determine which cloud provider to use. This means the same service code runs on any cloud without modification. The target is an environment variable that differs per deployment.
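
A sketch of what that inspection might look like, based on the target formats listed under Provider implementations; the function name and exact matching rules are assumptions, not the package's real implementation:

```typescript
type Provider =
  | "eventbridge"
  | "sqs"
  | "servicebus"
  | "eventgrid"
  | "pubsub"
  | "redis";

// Pick a provider purely from the shape of the target string, so the same
// service code can run on any cloud with only an env var changing.
const detectProvider = (target: string): Provider => {
  if (target.startsWith("arn:aws:events:")) return "eventbridge";
  if (/^https:\/\/sqs\.[^/]+\.amazonaws\.com\//.test(target)) return "sqs";
  if (target.includes(".servicebus.windows.net")) return "servicebus";
  if (target.includes(".eventgrid.azure.net")) return "eventgrid";
  if (/^projects\/[^/]+\/topics\/[^/]+$/.test(target)) return "pubsub";
  if (target.startsWith("redis://")) return "redis";
  throw new Error(`Unrecognized messaging target: ${target}`);
};
```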

Every event is validated against a Zod schema before publishing. The schemas are generated from JSON Schema definitions in the schemas/ directory and provide both compile-time types and runtime validation.

## Provider implementations

| Provider | Target format | Transport |
| --- | --- | --- |
| AWS EventBridge | `arn:aws:events:...` | `PutEventsCommand` |
| AWS SQS | `https://sqs.*.amazonaws.com/...` | `SendMessageCommand` |
| Azure Service Bus | `*.servicebus.windows.net` or connection string | `ServiceBusClient` |
| Azure Event Grid | `*.eventgrid.azure.net` | `CloudEvent` via `EventGridPublisherClient` |
| GCP Pub/Sub | `projects/*/topics/*` | `PubSub.topic().publishMessage()` |
| Redis | `redis://host/queue` | `lPush` (development only) |

All providers pass the event type and origin as message attributes (or equivalent metadata), and use messageGroupId for ordering guarantees where the transport supports it.

Redis is used for local development only. The `startDevelopmentListener` function polls the queue with `brPop` and supports a dead-letter queue target for failed messages.
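
The shape of that loop can be sketched with an in-memory queue standing in for Redis; `drainQueue` and the array-based queue are illustrative stand-ins, not the real listener API:

```typescript
type Handler = (message: string) => Promise<void>;

// Pop messages one at a time (the brPop equivalent), hand each to the
// handler, and push failures to the dead-letter target instead of losing them.
const drainQueue = async (
  queue: string[],
  deadLetter: string[],
  handle: Handler,
): Promise<void> => {
  while (queue.length > 0) {
    const message = queue.shift()!;
    try {
      await handle(message);
    } catch {
      deadLetter.push(message);
    }
  }
};
```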

## Handling messages

Each cloud provider has a corresponding handler factory that wraps the cloud-specific trigger (Lambda, Azure Function, Cloud Run HTTP) into a provider-agnostic callback:

```typescript
// AWS Lambda consuming from SQS
const sqsHandler = createSQSHandler(handleInternalEvent);
export const handler = lambdaHandlerFactory("event-handler", () => sqsHandler);

// Azure Function consuming from Service Bus
const serviceBusHandler = createServiceBusHandler(handleEvent);
app.serviceBusQueue("event-queue", { handler: serviceBusHandler });

// GCP Cloud Run consuming from Pub/Sub push
const pubsubHandler = createPubSubHandler(handleEvent);
```

Inside the handler, a switch on event.type dispatches to the appropriate business logic:

```typescript
const handleInternalEvent = async (event: CloudEventsPayload) => {
  switch (event.type) {
    case "com.commercetools.product.change.ResourceCreated":
      return handleProductCreate(event.data);
    case "com.commercetools.product.message.ProductPublished":
      return handleProductPublished(event.data);
    // ...
  }
};
```

Failed messages are retried by the cloud transport (SQS visibility timeout, Service Bus lock duration). After the maximum retry count (typically 4 attempts), messages move to a dead-letter queue for investigation.
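
The redelivery semantics can be modeled in a few lines; `deliverWithRetries` is an illustrative model of the transport's behavior, not code that exists in the services:

```typescript
// The transport redelivers a failing message until the maximum receive count
// is reached (4 here, matching the typical configuration), then moves it to
// the dead-letter queue.
const deliverWithRetries = async (
  message: string,
  handle: (m: string) => Promise<void>,
  maxAttempts = 4,
): Promise<"processed" | "dead-lettered"> => {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await handle(message);
      return "processed";
    } catch {
      // On failure the message becomes visible again and is redelivered.
    }
  }
  return "dead-lettered";
};
```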

## Event types

### Internal events

These normalized events are published to the internal event bus after processing commercetools events:

| Event | Published by | Consumed by |
| --- | --- | --- |
| `product-created` | Catalog | Search sync |
| `product-updated` | Catalog | Search sync |
| `product-published` | Catalog | Search sync |
| `product-unpublished` | Catalog | Search sync |
| `product-deleted` | Catalog | Search sync |
| `product-category-created` | Catalog | CMS |
| `product-category-updated` | Catalog | CMS |
| `product-category-deleted` | Catalog | CMS |
| `content-modified-event` | CMS | CDN invalidation |
| `account-created` | Account | Email |
| `order-confirmed` | Checkout | Email |
| `order-shipped` | Checkout | Email |
| `password-reset-request` | Account | Email |
| `password-reset-complete` | Account | Email |

### commercetools subscription events

These are delivered directly from commercetools to service queues:

| Message type | Resource | Subscribed by |
| --- | --- | --- |
| `ResourceCreated` | product, category | Catalog |
| `ResourceUpdated` | product, category | Catalog |
| `ResourceDeleted` | product, category | Catalog |
| `ProductPublished` | product | Catalog (search sync) |
| `ProductUnpublished` | product | Catalog (search sync) |
| `ProductDeleted` | product | Catalog (search sync) |
| `ProductPriceDiscountsSet` | product | Catalog (search sync) |
| `PaymentStatusStateTransition` | payment | Checkout |
| `OrderStateChanged` | order | Checkout, Monitoring |
| `OrderShipmentStateChanged` | order | Checkout |

## commercetools subscriptions

commercetools subscriptions deliver events to cloud-native queues in CloudEvents 1.0 format. Each service configures its subscriptions in Terraform:

```hcl
resource "commercetools_subscription" "internal" {
  format {
    type                 = "CloudEvents"
    cloud_events_version = "1.0"
  }

  destination {
    type      = "SQS"
    queue_url = aws_sqs_queue.events.id
  }

  changes {
    resource_type_ids = ["category", "product"]
  }
}

Subscriptions can listen for resource changes (any create, update, or delete on a resource type) or specific message types (like ProductPublished or OrderStateChanged). Each subscription targets the appropriate cloud queue for its deployment: SQS on AWS, Service Bus on Azure, Pub/Sub on GCP.

Every queue is configured with a dead-letter queue for messages that fail after the maximum number of delivery attempts.