Creating a new domain service

Every backend service in Evolve follows the same structure: init.ts configures infrastructure, server.ts composes GraphQL modules into a DomainService and manages the process lifecycle, context.ts builds the per-request context, and index.ts is a thin entry point. This guide walks through creating a service from scratch.

1. Scaffold the service

Create a new directory under backend/services/:

backend/services/loyalty/
├── src/
│   ├── index.ts
│   ├── init.ts
│   ├── server.ts
│   ├── context.ts
│   └── modules/
│       └── loyalty.ts
├── run.ts
├── terraform/
│   └── main.tf
└── package.json

Add the framework dependencies to package.json:

{
  "dependencies": {
    "@evolve-framework/core": "workspace:*",
    "@evolve-framework/schemas": "workspace:*",
    "@evolve-framework/commercetools": "workspace:*",
    "@evolve-packages/observability": "workspace:*"
  }
}

2. Define the GraphQL module

Create your module by extending AbstractGraphQLModule:

// src/modules/loyalty.ts
import { AbstractGraphQLModule } from "@evolve-framework/core";
import { gql } from "graphql-tag";

// The resolver implementation lives alongside the module (path is illustrative).
import { loyaltyResolver } from "./resolvers/loyalty.ts";

export class LoyaltyGraphQLModule extends AbstractGraphQLModule {
  typedefs = gql`
    type LoyaltyAccount {
      points: Int!
      tier: String!
    }

    extend type Customer {
      loyalty: LoyaltyAccount
    }
  `;

  resolvers = {
    Customer: {
      loyalty: loyaltyResolver,
    },
  };
}
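The loyaltyResolver wired in above is an ordinary GraphQL field resolver. A minimal, self-contained sketch of what it might look like, assuming a per-request data loader is available on the context built in step 4 (the loader name and context shape here are illustrative, not the framework's real API):

```typescript
// Hypothetical sketch of the resolver referenced by LoyaltyGraphQLModule.
// The context and loader shapes below are assumptions for illustration.
interface LoyaltyAccount {
  points: number;
  tier: string;
}

interface Customer {
  id: string;
}

interface ResolverContext {
  loaders: {
    loyaltyAccountById: (id: string) => Promise<LoyaltyAccount | null>;
  };
}

export const loyaltyResolver = async (
  customer: Customer,
  _args: unknown,
  ctx: ResolverContext,
): Promise<LoyaltyAccount | null> => {
  // Delegate to a per-request data loader so lookups are batched and cached
  // within a single GraphQL operation.
  return ctx.loaders.loyaltyAccountById(customer.id);
};
```

Keeping the resolver free of transport concerns and routing all I/O through context loaders is what makes it easy to test with a mocked context.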

3. Wire the service files

The entry point is a thin wrapper that re-exports and calls startServer:

// src/index.ts
import { startServer } from "./server.ts";

export { startServer };

await startServer();

Local development entry initializes observability first:

// run.ts
import { initObservability } from "@evolve-packages/observability";

initObservability();

await import("./src/index.ts");

server.ts composes the modules into a DomainService and manages the process lifecycle via ProcessManager:

// src/server.ts
import { DomainService, ProcessManager } from "@evolve-framework/core";
// The import path for useClientContext is an assumption; adjust to your setup.
import {
  GraphQLCompositeModule,
  useClientContext,
} from "@evolve-framework/commercetools";
import { LoyaltyGraphQLModule } from "./modules/loyalty.ts";
import { initEnvironment } from "./init.ts";
import { newContext } from "./context.ts";
// The config module path is an assumption; config is populated during initEnvironment().
import { config } from "./config.ts";

const module = new GraphQLCompositeModule([new LoyaltyGraphQLModule()]);

const createApp = () =>
  new DomainService({
    name: config.COMPONENT_NAME,
    graphql: {
      typeDefs: module.getTypedefs(),
      resolvers: module.getResolvers(),
      context: newContext(module.getConfig()),
      plugins: [useClientContext()],
    },
    http: {
      address: { host: config.HTTP_HOST, port: config.HTTP_PORT },
    },
  });

export const startServer = async () => {
  let app: DomainService;

  const pm = new ProcessManager({
    start: async () => {
      await initEnvironment();
      app = createApp();
      await app.start();
    },
    stop: async () => {
      await app?.stop();
    },
  });

  await pm.start();
};

Initialization configures the client factory and optional cache:

// src/init.ts
import { configureClientFactory } from "@evolve-framework/commercetools";
import { cache } from "@evolve-framework/core/cache";
// The config module path is an assumption; adjust to your setup.
import { config, loadConfig } from "./config.ts";

export const initEnvironment = async () => {
  await loadConfig();
  await configureClientFactory(config);
  await cache.configure(config.REDIS_URL);
};
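This guide does not define loadConfig or the config object. A minimal sketch of what such a module might look like, reading values from the environment (the src/config.ts location, field names, and defaults are all assumptions for illustration):

```typescript
// Hypothetical config module (src/config.ts); the framework may provide its own.
export interface ServiceConfig {
  COMPONENT_NAME: string;
  HTTP_HOST: string;
  HTTP_PORT: number;
  REDIS_URL: string;
  ACCOUNT_SERVICE_ENDPOINT: string;
}

export let config: ServiceConfig;

export const loadConfig = async (): Promise<ServiceConfig> => {
  // Fail fast on missing required variables rather than on first use.
  const required = (name: string): string => {
    const value = process.env[name];
    if (value === undefined) throw new Error(`Missing env var: ${name}`);
    return value;
  };

  config = {
    COMPONENT_NAME: process.env.COMPONENT_NAME ?? "loyalty",
    HTTP_HOST: process.env.HTTP_HOST ?? "0.0.0.0",
    HTTP_PORT: Number(process.env.HTTP_PORT ?? 4000),
    REDIS_URL: required("REDIS_URL"),
    ACCOUNT_SERVICE_ENDPOINT: required("ACCOUNT_SERVICE_ENDPOINT"),
  };
  return config;
};
```

Loading config inside initEnvironment (rather than at module import time) keeps the entry point importable in tests without a fully populated environment.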

4. Build the request context

The context factory is a higher-order function: the outer function receives the module config, and the inner function runs once per request. Use federated authentication (the standard for non-account services):

// src/context.ts
import { readStoreContextFromRequest } from "@evolve-framework/core";
import {
  ClientContext,
  clientFactory,
  createDataLoaders,
  RemoteClientContextLoader,
  StoreContext,
} from "@evolve-framework/commercetools";
import { logger } from "@evolve-packages/observability/logging";
import type { YogaInitialContext } from "graphql-yoga";
// The config module path is an assumption; adjust to your setup.
import { config } from "./config.ts";

export const newContext =
  (moduleConfig: Record<string, unknown>) =>
  async ({ request }: YogaInitialContext) => {
    const commercetoolsClient = clientFactory.getSystemRequestBuilder();
    const storeContext = new StoreContext(
      readStoreContextFromRequest(request),
      commercetoolsClient,
    );
    const loader = new RemoteClientContextLoader(
      config.ACCOUNT_SERVICE_ENDPOINT,
      clientFactory,
    );
    const clientContext = new ClientContext(storeContext, loader);

    return {
      log: logger,
      loaders: createDataLoaders(commercetoolsClient, storeContext, clientContext),
      clientContext,
      storeContext,
      globalScopedClient: () => commercetoolsClient,
      config: moduleConfig,
    };
  };

5. Register in the gateway

Add your service as a subgraph in the gateway configuration so it is included in the federated supergraph. The schema registry (Hive) picks up the new subgraph automatically after deployment.

6. Add Terraform

Each service has a terraform/ directory with per-cloud subdirectories. All three clouds follow the same lifecycle (schema check, deploy, schema publish) but use different compute primitives.

Directory structure

terraform/
├── aws/
│   ├── main.tf                     # ECS service + Hive schema check/publish
│   ├── locals.tf                   # Service name, image, env_vars
│   ├── variables.tf                # Inputs from Mach Composer
│   ├── data.tf                     # SSM parameter lookups
│   ├── commercetools.tf            # CT client credentials (Secrets Manager)
│   ├── schema.generated.graphql    # Generated schema (used by Hive)
│   ├── outputs.tf
│   └── versions.tf
├── azure/
│   ├── main.tf                     # Container Apps module + Hive lifecycle
│   ├── locals.tf                   # Service name, image, env_vars
│   ├── variables.tf                # Inputs from Mach Composer
│   ├── data.tf                     # Resource group, ACR, Redis lookups
│   ├── secrets.tf                  # Key Vault + role assignments
│   ├── roles.tf                    # User Assigned Identity + RBAC
│   ├── commercetools.tf            # CT client credentials (Key Vault)
│   ├── schema.generated.graphql    # Generated schema (used by Hive)
│   ├── outputs.tf
│   └── versions.tf
└── gcp/
    ├── main.tf                     # Cloud Run module + Hive lifecycle
    ├── locals.tf                   # Service name, image, env_vars
    ├── variables.tf                # Inputs from Mach Composer
    ├── data.tf                     # google_client_config lookup
    ├── commercetools.tf            # CT client credentials (Secret Manager)
    ├── schema.generated.graphql    # Generated schema (used by Hive)
    ├── outputs.tf
    └── versions.tf

The Azure pattern (primary example)

main.tf: every service follows a three-phase lifecycle:

  1. Schema check: hive_schema_check validates the GraphQL schema against the registry before deployment
  2. Deploy: the compute module (Container Apps on Azure) deploys the container image
  3. Schema publish: hive_schema_publish registers the live endpoint after the service is up

resource "hive_schema_check" "graphql_schema_check" {
  service    = local.service_name
  commit     = var.component_version
  schema     = file("${path.module}/schema.generated.graphql")
  context_id = "${local.service_name}/${var.component_version}"
}

module "service" {
  source  = "evolve-platform/app-container/azurerm"
  version = "0.2.2"
  tags    = local.tags

  name         = "${var.azure.resource_prefix}-${local.service_name}"
  cpu          = var.variables.cpu
  memory       = var.variables.memory
  min_replicas = var.variables.min_replicas
  max_replicas = var.variables.max_replicas

  container_app_environment_id = data.azurerm_container_app_environment.primary.id
  resource_group_name          = data.azurerm_resource_group.primary.name
  image                        = local.image
  identity_id                  = azurerm_user_assigned_identity.app.id
  env_vars                     = local.env_vars

  secrets = [
    {
      secret_id   = module.commercetools_server_token.secret_id
      secret_name = module.commercetools_server_token.secret_name
      env_name    = "CTP_CLIENT_SECRET"
    }
  ]

  healthcheck = {
    path = "/healthcheck"
  }

  depends_on = [
    azurerm_role_assignment.app_acrpull,
    hive_schema_check.graphql_schema_check,
  ]
}

resource "hive_schema_publish" "graphql_schema_publish" {
  service = local.service_name
  commit  = var.component_version
  url     = local.service_graphql_endpoint
  schema  = file("${path.module}/schema.generated.graphql")

  depends_on = [module.service]
}

locals.tf: defines the service name, container image, and all environment variables. Secrets are never placed in env_vars; they go through Key Vault references in the secrets block instead:

locals {
  service_name = "loyalty"
  image        = "${local.container_registry_name}.azurecr.io/${local.service_name}:${var.component_version}"

  env_vars = {
    NODE_ENV     = "production"
    SERVICE_NAME = local.service_name
    SITE         = var.site
    # ... service-specific env vars
  }
}

secrets.tf: creates a per-service Key Vault and assigns access roles for the deploy identity and the app identity:

module "keyvault" {
  source  = "evolve-platform/key-vault/azurerm"
  version = "0.1.1"
  tags    = local.tags

  name                = "${var.azure.resource_prefix}-${local.service_name_short}"
  tenant_id           = data.azurerm_client_config.current.tenant_id
  resource_group_name = data.azurerm_resource_group.primary.name
  location            = data.azurerm_resource_group.primary.location
}

# The deploy identity gets Key Vault Administrator to manage secrets during CI.
# The app identity gets Key Vault Secrets User for read-only runtime access.
resource "azurerm_role_assignment" "keyvault_app" {
  scope                = module.keyvault.key_vault_id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = azurerm_user_assigned_identity.app.principal_id
}

roles.tf: creates a User Assigned Identity and grants it acrpull on the container registry plus any data-plane access (e.g. Redis Data Owner):

resource "azurerm_user_assigned_identity" "app" {
  name                = "${var.azure.resource_prefix}-${local.service_name}-uai"
  resource_group_name = data.azurerm_resource_group.primary.name
  location            = data.azurerm_resource_group.primary.location
  tags                = local.tags
}

resource "azurerm_role_assignment" "app_acrpull" {
  scope                = data.azurerm_container_registry.acr.id
  role_definition_name = "acrpull"
  principal_id         = azurerm_user_assigned_identity.app.principal_id
}

AWS and GCP alternatives

The AWS and GCP directories follow the same structure but use different compute modules:

Cloud   Module                                     Compute
Azure   evolve-platform/app-container/azurerm      Container Apps
AWS     evolve-platform/ecs-service/aws            ECS Fargate
GCP     evolve-platform/cloud-run-service/google   Cloud Run

Secret management also differs per cloud: Azure uses Key Vault, AWS uses Secrets Manager (referenced via CTP_CREDENTIALS_SECRET_NAME), and GCP uses Secret Manager with Cloud Run secret mounts.
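Whichever cloud injects it, the service itself only ever sees the secret as an environment variable (CTP_CLIENT_SECRET in the Azure secrets block above). A small sketch of a startup guard for that variable (the helper name is illustrative, not part of the framework):

```typescript
// Hypothetical startup guard: the cloud's secret store injects the value,
// and the service reads it like any other environment variable.
export const getCommercetoolsSecret = (): string => {
  const secret = process.env.CTP_CLIENT_SECRET;
  if (!secret) {
    // Fail at startup rather than on the first commercetools API call,
    // which makes broken secret wiring obvious in deploy logs.
    throw new Error("CTP_CLIENT_SECRET is not set; check the cloud secret wiring");
  }
  return secret;
};
```

Validating injected secrets during initEnvironment keeps misconfiguration failures close to the deploy, where the per-cloud wiring differences are easiest to diagnose.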

Reference an existing service's terraform directories for the full per-cloud boilerplate: backend/services/order-commercetools/terraform/.

Register in Mach Composer

Add the component to the site configuration YAML. This registers the Terraform module with Mach Composer and wires up variables and secrets:

First, register the component. Azure configs use a shared _components.yaml file (referenced via $ref), while AWS and GCP define components inline:

# config/azure/_components.yaml (or inline in aws/demo.yaml)
- name: loyalty
  source: ../../backend/services/loyalty/terraform/azure/
  version: "$LATEST"
  branch: "main"
  integrations:
    - azure
    - commercetools
    - sentry
    - hive

Then add the site-level config that passes variables and secrets:

# In the site's components list
- name: loyalty
  variables:
    account_service_endpoint: ${component.account-commercetools.endpoint}
  secrets:
    hive_token: ${var.secrets.hive.api_token}

The integrations list tells Mach Composer to inject the relevant provider configuration (e.g. commercetools injects ct_project_key and related variables). The ${component.*} syntax references outputs from other components. Note that output names differ per cloud: Azure uses .endpoint, AWS uses .service_endpoint, and GCP uses .url.

Further reading