Worker Service
List of all available properties for a 'Worker Service' manifest. To learn about Copilot services, see the Services concept page.
Sample worker service manifests
# Collect messages from multiple topics published from other services to a single SQS queue.
name: cost-analyzer
type: Worker Service

image:
  build: ./cost-analyzer/Dockerfile

subscribe:
  topics:
    - name: products
      service: orders
      filter_policy:
        event:
          - anything-but: order_cancelled
    - name: inventory
      service: warehouse
  queue:
    retention: 96h
    timeout: 30s
    dead_letter:
      tries: 10

cpu: 256
memory: 512
count: 3
exec: true

secrets:
  DB:
    secretsmanager: 'mysql'
# Burst to Fargate Spot tasks if capacity is available.
name: cost-analyzer
type: Worker Service

image:
  build: ./cost-analyzer/Dockerfile

subscribe:
  topics:
    - name: products
      service: orders
    - name: inventory
      service: warehouse

cpu: 256
memory: 512
count:
  range:
    min: 1
    max: 10
    spot_from: 2
  queue_delay: # Ensure messages are processed within 10 minutes, assuming a single message takes 250ms to process.
    acceptable_latency: 10m
    msg_processing_time: 250ms
exec: true
# Assign individual queues to each topic.
name: cost-analyzer
type: Worker Service

image:
  build: ./cost-analyzer/Dockerfile

subscribe:
  topics:
    - name: products
      service: orders
      queue:
        retention: 5d
        timeout: 1h
        dead_letter:
          tries: 3
    - name: inventory
      service: warehouse
      queue:
        retention: 1d
        timeout: 5m

count: 1
name
String
The name of your service.
type
String
The architecture type for your service. Worker Services are not reachable from the internet or elsewhere in the VPC. They are designed to pull messages from their associated SQS queues, which are populated by their subscriptions to SNS topics created by other Copilot services' publish fields.
subscribe
Map
The subscribe section allows worker services to create subscriptions to the SNS topics exposed by other Copilot services in the same application and environment. Each topic can define its own SQS queue, but by default all topics are subscribed to the worker service's default queue.
The URI of the default queue is injected into the container as an environment variable, COPILOT_QUEUE_URI.
subscribe:
  topics:
    - name: events
      service: api
      queue: # Define a topic-specific queue for the api-events topic.
        timeout: 20s
    - name: events
      service: fe
  queue: # By default, messages from all topics will go to a shared queue.
    timeout: 45s
    retention: 96h
    delay: 30s
subscribe.queue
Map
By default, a service-level queue is always created. queue allows customization of certain attributes of that default queue.
subscribe.queue.delay
Duration
The time in seconds for which the delivery of all messages in the queue is delayed. Default 0s. Range 0s-15m.
subscribe.queue.retention
Duration
Retention specifies the time a message will remain in the queue before being deleted. Default 4d. Range 60s-336h.
subscribe.queue.timeout
Duration
Timeout defines the length of time a message is unavailable after being delivered. Default 30s. Range 0s-12h.
subscribe.queue.fifo
Boolean or Map
Enable FIFO (first in, first out) ordering on your SQS queue to handle scenarios where the order of operations and events is critical, or where duplicates can't be tolerated.
subscribe:
  topics:
    - name: events
      service: api
    - name: events
      service: fe
  queue: # Messages from both FIFO SNS topics go to the shared FIFO SQS queue.
    fifo: true
Alternatively, you can also specify advanced SQS FIFO queue configurations:
subscribe:
  topics:
    - name: events
      service: api
      queue: # Define a topic-specific standard queue for the api-events topic.
        timeout: 20s
    - name: events
      service: fe
  queue: # By default, messages from all FIFO topics will go to a shared FIFO queue.
    fifo:
      content_based_deduplication: true
      high_throughput: true
subscribe.queue.fifo.content_based_deduplication
Boolean
If the message body is guaranteed to be unique for each published message, you can enable content-based deduplication for the SNS FIFO topic.
subscribe.queue.fifo.deduplication_scope
String
For high throughput for FIFO queues, specifies whether message deduplication occurs at the message group or queue level. Valid values are "messageGroup" and "queue".
subscribe.queue.fifo.throughput_limit
String
For high throughput for FIFO queues, specifies whether the FIFO queue throughput quota applies to the entire queue or per message group. Valid values are "perQueue" and "perMessageGroupId".
subscribe.queue.fifo.high_throughput
Boolean
If enabled, provides higher transactions per second (TPS) for messages in FIFO queues. Mutually exclusive with deduplication_scope and throughput_limit.
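If you need explicit control instead of high_throughput, a minimal sketch of what the explicit form might look like, assuming Copilot maps high_throughput to the two settings SQS uses for its high-throughput mode:

subscribe:
  queue:
    fifo:
      deduplication_scope: messageGroup
      throughput_limit: perMessageGroupId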
subscribe.queue.dead_letter.tries
Integer
If specified, creates a dead-letter queue (DLQ) and a redrive policy which routes messages to the DLQ after tries attempts. That is, if a worker service fails to process a message successfully tries times, the message is routed to the DLQ for examination instead of being redriven to the source queue.
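For example, a minimal sketch of a default queue whose failing messages are moved to a DLQ after 5 receive attempts (the topic and service names are illustrative):

subscribe:
  topics:
    - name: orders
      service: api
  queue:
    dead_letter:
      tries: 5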
subscribe.topics
Array of topics
Contains information about which SNS topics the worker service should subscribe to.
subscribe.topics.topic.name
String
Required. The name of the SNS topic to subscribe to.
subscribe.topics.topic.service
String
Required. The service that exposes this SNS topic. Together with the topic name, this uniquely identifies an SNS topic in the Copilot environment.
subscribe.topics.topic.filter_policy
Map
Optional. Specify an SNS subscription filter policy against which incoming message attributes are evaluated.
The filter policy can be specified in JSON, for example:
filter_policy: {"store":["example_corp"],"event":[{"anything-but":"order_cancelled"}],"customer_interests":["rugby","football","baseball"],"price_usd":[{"numeric":[">=",100]}]}
Alternatively, the filter policy can be specified in YAML:
filter_policy:
  store:
    - example_corp
  event:
    - anything-but: order_cancelled
  customer_interests:
    - rugby
    - football
    - baseball
  price_usd:
    - numeric:
        - ">="
        - 100
subscribe.topics.topic.queue
Boolean or Map
Optional. Specify SQS queue configuration for the topic. If specified as true, the queue will be created with default configuration. Specify this field as a map for customization of certain attributes for this topic-specific queue.
If you specify one or more topic-specific queues, you can access those queue URIs via the COPILOT_TOPIC_QUEUE_URIS variable. This variable is a JSON map from a unique identifier for each topic-specific queue to its URI.
For example, a worker service with a topic-specific queue for the orders topic from the merchant service and a FIFO topic transactions from the merchant service will have the following JSON structure:
// COPILOT_TOPIC_QUEUE_URIS
{
  "merchantOrdersEventsQueue": "https://sqs.eu-central-1.amazonaws.com/...",
  "merchantTransactionsfifoEventsQueue": "https://sqs.eu-central-1.amazonaws.com/..."
}
subscribe.topics.topic.queue.fifo
Boolean or Map
Optional. Specify SQS FIFO queue configuration for the topic. If specified as true, the FIFO queue will be created with the default FIFO configuration. Specify this field as a map for customization of certain attributes for this topic-specific queue.
image
Map
The image section contains parameters relating to the Docker build configuration or referring to an existing container image.
image.build
String or Map
Build a container from a Dockerfile with optional arguments. Mutually exclusive with image.location.
If you specify a string, Copilot interprets it as the path to your Dockerfile. It assumes that the dirname of the string you specify is the build context. The manifest:

image:
  build: path/to/dockerfile

will result in the following call to docker build:

$ docker build --file path/to/dockerfile path/to
You can also specify build as a map:

image:
  build:
    dockerfile: path/to/dockerfile
    context: context/dir
    target: build-stage
    cache_from:
      - image:tag
    args:
      key: value

which will be translated to:

$ docker build --file path/to/dockerfile --target build-stage --cache-from image:tag --build-arg key=value context/dir
You can omit fields and Copilot will do its best to understand what you mean. For example, if you specify context but not dockerfile, Copilot will run Docker in the context directory and assume that your Dockerfile is named "Dockerfile." If you specify dockerfile but no context, Copilot assumes you want to run Docker in the directory that contains dockerfile.
All paths are relative to your workspace root.
image.location
String
Instead of building a container from a Dockerfile, you can specify an existing image name. Mutually exclusive with image.build.
The location field follows the same definition as the image parameter in the Amazon ECS task definition.
Warning
If you are passing in a Windows image, you must add platform: windows/x86_64 to your manifest.
If you are passing in an ARM architecture-based image, you must add platform: linux/arm64 to your manifest.
image.credentials
String
An optional credentials ARN for a private repository. The credentials field follows the same definition as the credentialsParameter in the Amazon ECS task definition.
image.labels
Map
An optional key/value map of Docker labels to add to the container.
image.depends_on
Map
An optional key/value map of Container Dependencies to add to the container. The key of the map is a container name and the value is the condition to depend on. Valid conditions are: start, healthy, complete, and success. You cannot specify a complete or success dependency on an essential container.
For example:
image:
  build: ./Dockerfile
  depends_on:
    nginx: start
    startup: success

In the example above, the main container will start only after the nginx sidecar has started and the startup container has completed successfully.
image.healthcheck
Map
Optional configuration for container health checks.
image.healthcheck.command
Array of Strings
The command to run to determine if the container is healthy.
The string array can start with CMD to execute the command arguments directly, or CMD-SHELL to run the command with the container's default shell.
image.healthcheck.interval
Duration
Time period between health checks, in seconds. Default is 10s.
image.healthcheck.retries
Integer
Number of times to retry before the container is deemed unhealthy. Default is 2.
image.healthcheck.timeout
Duration
How long to wait before considering the health check failed, in seconds. Default is 5s.
image.healthcheck.start_period
Duration
Length of grace period for containers to bootstrap before failed health checks count towards the maximum number of retries. Default is 0s.
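Putting the health check fields together, a minimal sketch (the command and endpoint are illustrative):

image:
  build: ./Dockerfile
  healthcheck:
    command: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"]
    interval: 10s
    retries: 2
    timeout: 5s
    start_period: 0s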
cpu
Integer
Number of CPU units for the task. See the Amazon ECS docs for valid CPU values.
memory
Integer
Amount of memory in MiB used by the task. See the Amazon ECS docs for valid memory values.
platform
String or Map
Operating system and architecture (formatted as [os]/[arch]) to pass with docker build --platform. For example, linux/arm64 or windows/x86_64. The default is linux/x86_64.
Override the generated string to build with a different valid osfamily or architecture. For example, Windows users might replace the string

platform: windows/x86_64

which defaults to WINDOWS_SERVER_2019_CORE, with a map:

platform:
  osfamily: windows_server_2019_full
  architecture: x86_64

The following Windows OS families are also supported:

platform:
  osfamily: windows_server_2022_core
  architecture: x86_64

platform:
  osfamily: windows_server_2022_full
  architecture: x86_64
count
Integer or Map
The number of tasks that your service should maintain.
If you specify a number:

count: 5

the service will set the desired count to 5 and maintain 5 running tasks.
count.spot
Integer
If you want to use Fargate Spot capacity to run your services, you can specify a number under the spot subfield:

count:
  spot: 5
Info
Fargate Spot is not supported for containers running on ARM architecture.
Alternatively, you can specify a map for setting up autoscaling:
count:
  range: 1-10
  cpu_percentage: 70
  memory_percentage:
    value: 80
    cooldown:
      in: 80s
      out: 160s
  queue_delay:
    acceptable_latency: 10m
    msg_processing_time: 250ms
    cooldown:
      in: 30s
      out: 60s
count.range
String or Map
You can specify a minimum and maximum bound for the number of tasks your service should maintain, based on the values you specify for the metrics:

count:
  range: n-m

This will set up an Application Auto Scaling target with a MinCapacity of n and a MaxCapacity of m.
Alternatively, if you wish to scale your service onto Fargate Spot instances, specify min and max under range and then specify spot_from with the desired count at which you wish to start placing your services onto Spot capacity. For example:

count:
  range:
    min: 1
    max: 10
    spot_from: 3
This will set your range as 1-10 as above, but will place the first two copies of your service on dedicated Fargate capacity. If your service scales to 3 or higher, the third and any additional copies will be placed on Spot until the maximum is reached.
count.range.min
Integer
The minimum desired count for your service using autoscaling.
count.range.max
Integer
The maximum desired count for your service using autoscaling.
count.range.spot_from
Integer
The desired count at which you wish to start placing your service using Fargate Spot capacity providers.
count.cooldown
Map
Cooldown scaling fields that are used as the default cooldown for all autoscaling fields specified.
count.cooldown.in
Duration
The cooldown time for autoscaling fields to scale up the service.
count.cooldown.out
Duration
The cooldown time for autoscaling fields to scale down the service.
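For example, a sketch that sets a default cooldown shared by all of the autoscaling metrics below it:

count:
  range: 1-10
  cooldown:
    in: 30s
    out: 60s
  cpu_percentage: 70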
The following options, cpu_percentage and memory_percentage, are autoscaling fields for count which can be defined either as the value of the field, or as a map containing advanced information about the field's value and cooldown:

value: 50
cooldown:
  in: 30s
  out: 60s
count.cpu_percentage
Integer or Map
Scale up or down based on the average CPU your service should maintain.
count.memory_percentage
Integer or Map
Scale up or down based on the average memory your service should maintain.
count.queue_delay
Map
Scale up or down to maintain an acceptable queue latency by tracking against the acceptable backlog per task.
The acceptable backlog per task is calculated by dividing acceptable_latency by msg_processing_time. For example, if you can tolerate consuming a message within 10 minutes of its arrival and it takes your task on average 250 milliseconds to process a message, then acceptableBacklogPerTask = 10 * 60 / 0.25 = 2400; therefore, each task can hold up to 2,400 messages.
A target tracking policy is set up on your behalf to ensure your service scales up and down to maintain <= 2400 messages per task. To learn more, see the docs.
count.queue_delay.acceptable_latency
Duration
The acceptable amount of time that a message can sit in the queue. For example, "45s", "5m", "10h".
count.queue_delay.msg_processing_time
Duration
The average amount of time it takes to process an SQS message. For example, "250ms", "1s".
count.queue_delay.cooldown
Map
Scale up and down cooldown fields for queue delay autoscaling.
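A sketch combining the queue_delay fields with their own cooldown:

count:
  range: 1-10
  queue_delay:
    acceptable_latency: 10m
    msg_processing_time: 250ms
    cooldown:
      in: 30s
      out: 60s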
exec
Boolean
Enable running commands in your container. The default is false. Required for $ copilot svc exec.
deployment
Map
The deployment section contains parameters to control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deployment.rolling
String
Rolling deployment strategy. Valid values are:

"default": Creates as many new tasks as the desired count with the updated task definition, before stopping the old tasks. Under the hood, this translates to setting the minimumHealthyPercent to 100 and the maximumPercent to 200.
"recreate": Stops all running tasks and then spins up new tasks. Under the hood, this translates to setting the minimumHealthyPercent to 0 and the maximumPercent to 100.
deployment.rollback_alarms
Array of Strings or Map
Info
If an alarm is in "In alarm" state at the beginning of a deployment, Amazon ECS will NOT monitor alarms for the duration of that deployment. For more details, read the docs here.
As a list of strings, the names of existing CloudWatch alarms to associate with your service that may trigger a deployment rollback:

deployment:
  rollback_alarms: ["MyAlarm-ELB-4xx", "MyAlarm-ELB-5xx"]

As a map, the thresholds for alarms that Copilot creates on your behalf:

deployment:
  rollback_alarms:
    cpu_utilization: 70    # Percentage value at or above which the alarm is triggered.
    memory_utilization: 50 # Percentage value at or above which the alarm is triggered.
    messages_delayed: 5    # Number of delayed messages in the queue at or above which the alarm is triggered.
entrypoint
String or Array of Strings
Override the default entrypoint in the image.
# String version.
entrypoint: "/bin/entrypoint --p1 --p2"
# Alternatively, as an array of strings.
entrypoint: ["/bin/entrypoint", "--p1", "--p2"]
command
String or Array of Strings
Override the default command in the image.
# String version.
command: ps au
# Alternatively, as an array of strings.
command: ["ps", "au"]
network
Map
The network section contains parameters for connecting to AWS resources in a VPC.
network.connect
Bool or Map
Enable Service Connect for your service, which makes the traffic between services load balanced and more resilient. Defaults to false.
When using it as a map, you can specify which alias to use for this service. Note that the alias must be unique within the environment.
network.connect.alias
String
A custom DNS name for this service exposed to Service Connect. Defaults to the service name.
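For example, a minimal sketch (the alias value is illustrative):

network:
  connect:
    alias: cost-analyzer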
network.vpc
Map
Subnets and security groups attached to your tasks.
network.vpc.placement
String or Map
When using it as a string, the value must be one of 'public' or 'private'. Defaults to launching your tasks in public subnets.
Info
If you launch tasks in 'private' subnets and use a Copilot-generated VPC, Copilot will automatically add NAT Gateways to your environment for internet connectivity. (See pricing.) Alternatively, when running copilot env init, you can import an existing VPC with NAT Gateways, or one with VPC endpoints for isolated workloads. See our custom environment resources page for more.
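For example, the string form:

network:
  vpc:
    placement: private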
When using it as a map, you can specify in which subnets Copilot should launch ECS tasks. For example:
network:
  vpc:
    placement:
      subnets: ["SubnetID1", "SubnetID2"]
network.vpc.placement.subnets
Array of Strings or Map
As a list of strings, the subnet IDs where Copilot should launch ECS tasks.
As a map, the name-value pairs by which to filter your subnets. Note that the filters are joined with an AND, and the values for each filter are joined by an OR. For example, both subnets with tag set org: bi and type: public, and subnets with tag set org: bi and type: private, will be matched by:

network:
  vpc:
    placement:
      subnets:
        from_tags:
          org: bi
          type:
            - public
            - private
network.vpc.placement.subnets.from_tags
Map of String and String or Array of Strings
Tag sets by which to filter subnets where Copilot should launch ECS tasks.
network.vpc.security_groups
Array of Strings or Map
Additional security group IDs associated with your tasks.
network:
  vpc:
    security_groups: [sg-0001, sg-0002]

Map form:

network:
  vpc:
    security_groups:
      deny_default: true
      groups: [sg-0001, sg-0002]
network.vpc.security_groups.from_cfn
String
The name of a CloudFormation stack export.
network.vpc.security_groups.deny_default
Boolean
Disable the default security group that allows ingress from all services in your environment.
network.vpc.security_groups.groups
Array of Strings
Additional security group IDs associated with your tasks.
network.vpc.security_groups.groups.from_cfn
String
The name of a CloudFormation stack export.
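A sketch combining these fields (the export name is illustrative):

network:
  vpc:
    security_groups:
      deny_default: true
      groups:
        - sg-0001
        - from_cfn: MyClusterSecurityGroupExport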
variables
Map
Key-value pairs that represent environment variables that will be passed to your service. Copilot will include a number of environment variables by default for you.
variables.from_cfn
String
The name of a CloudFormation stack export.
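For example, a sketch mixing a literal value with a CloudFormation export (the names are illustrative):

variables:
  LOG_LEVEL: info
  DB_NAME:
    from_cfn: MyUserDbNameExport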
env_file
String
The path to a file from the root of your workspace containing the environment variables to pass to the main container. For more information about the environment variable file, see Considerations for specifying environment variable files.
secrets
Map
Key-value pairs that represent secret values from AWS Systems Manager Parameter Store or AWS Secrets Manager that will be securely passed to your service as environment variables.
secrets.from_cfn
String
The name of a CloudFormation stack export.
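For example, a sketch (the parameter and export names are illustrative):

secrets:
  GITHUB_TOKEN: GH_TOKEN_PARAM   # Name of an SSM parameter.
  DB:
    secretsmanager: 'mysql'      # Name of a Secrets Manager secret, as in the sample manifest above.
  API_KEY:
    from_cfn: MyApiKeyExport     # Name of a CloudFormation stack export.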
storage
Map
The Storage section lets you specify external EFS volumes for your containers and sidecars to mount. This allows you to access persistent storage across availability zones in a region for data processing or CMS workloads. For more detail, see the storage page. You can also specify extensible ephemeral storage at the task level.
storage.ephemeral
Int
Specify how much ephemeral task storage to provision in GiB. The default value and minimum is 20 GiB. The maximum size is 200 GiB. Sizes above 20 GiB incur additional charges.
To create a shared filesystem context between an essential container and a sidecar, you can use an empty volume:
storage:
  ephemeral: 100
  volumes:
    scratch:
      path: /var/data
      read_only: false

sidecars:
  mySidecar:
    image: public.ecr.aws/my-image:latest
    mount_points:
      - source_volume: scratch
        path: /var/data
        read_only: false
storage.readonly_fs
Boolean
Specify true to give your container read-only access to its root file system.
storage.volumes
Map
Specify the name and configuration of any EFS volumes you would like to attach. The volumes field is specified as a map of the form:

volumes:
  <volume name>:
    path: "/etc/mountpath"
    efs:
      ...
storage.volumes.<volume>
Map
Specify the configuration of a volume.
storage.volumes.<volume>.path
String
Required. Specify the location in the container where you would like your volume to be mounted. Must be fewer than 242 characters and must consist only of the characters a-zA-Z0-9.-_/.
storage.volumes.<volume>.read_only
Boolean
Optional. Defaults to true. Defines whether the volume is read-only or not. If false, the container is granted elasticfilesystem:ClientWrite permissions to the filesystem and the volume is writable.
storage.volumes.<volume>.efs
Boolean or Map
Specify more detailed EFS configuration. If specified as a boolean, or using only the uid and gid subfields, creates a managed EFS filesystem and dedicated Access Point for this workload.

# Simple managed EFS
efs: true

# Managed EFS with custom POSIX info
efs:
  uid: 10000
  gid: 110000
storage.volumes.<volume>.efs.id
String
Required. The ID of the filesystem you would like to mount.
storage.volumes.<volume>.efs.id.from_cfn
String. Added in v1.30.0.
The name of a CloudFormation stack export.
storage.volumes.<volume>.efs.root_dir
String
Optional. Defaults to /. Specify the location in the EFS filesystem you would like to use as the root of your volume. Must be fewer than 255 characters and must consist only of the characters a-zA-Z0-9.-_/. If using an access point, root_dir must be either empty or / and auth.iam must be true.
storage.volumes.<volume>.efs.uid
Uint32
Optional. Must be specified with gid. Mutually exclusive with root_dir, auth, and id. The POSIX UID to use for the dedicated access point created for the managed EFS filesystem.
storage.volumes.<volume>.efs.gid
Uint32
Optional. Must be specified with uid. Mutually exclusive with root_dir, auth, and id. The POSIX GID to use for the dedicated access point created for the managed EFS filesystem.
storage.volumes.<volume>.efs.auth
Map
Specify advanced authorization configuration for EFS.
storage.volumes.<volume>.efs.auth.iam
Boolean
Optional. Defaults to true. Whether or not to use IAM authorization to determine whether the volume is allowed to connect to EFS.
storage.volumes.<volume>.efs.auth.access_point_id
String
Optional. Defaults to "". The ID of the EFS access point to connect to. If using an access point, root_dir must be either empty or / and auth.iam must be true.
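Tying the efs fields together, a sketch that mounts an existing filesystem through an access point (the IDs are illustrative):

storage:
  volumes:
    myEFSVolume:
      path: /files
      read_only: false
      efs:
        id: fs-1234567
        auth:
          iam: true
          access_point_id: fsap-1234567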
publish
Map
The publish section allows services to publish messages to one or more SNS topics.

publish:
  topics:
    - name: orderEvents

In the example above, this manifest declares an SNS topic named orderEvents that other worker services deployed to the Copilot environment can subscribe to. An environment variable named COPILOT_SNS_TOPIC_ARNS is injected into your workload as a JSON string.
In JavaScript, you could write:

const {orderEvents} = JSON.parse(process.env.COPILOT_SNS_TOPIC_ARNS)
publish.topics
Array of topics
List of topic objects.
publish.topics.topic
Map
Holds configuration for a single SNS topic.
publish.topics.topic.name
String
Required. The name of the SNS topic. Must contain only upper and lowercase letters, numbers, hyphens, and underscores.
publish.topics.topic.fifo
Boolean or Map
FIFO (first in, first out) SNS topic configuration.
If you specify true, Copilot will create the topic with FIFO ordering:

publish:
  topics:
    - name: mytopic
      fifo: true
Alternatively, you can also configure advanced SNS FIFO topic settings:

publish:
  topics:
    - name: mytopic
      fifo:
        content_based_deduplication: true
publish.topics.topic.fifo.content_based_deduplication
Boolean
If the message body is guaranteed to be unique for each published message, you can enable content-based deduplication for the SNS FIFO topic.
logging
Map
The logging section contains log configuration. You can also configure parameters for your container's FireLens log driver in this section (see examples here).
logging.retention
Integer
Optional. The number of days to retain the log events. See this page for all accepted values. If omitted, the default is 30.
logging.image
Map
Optional. The Fluent Bit image to use. Defaults to public.ecr.aws/aws-observability/aws-for-fluent-bit:stable.
logging.destination
Map
Optional. The configuration options to send to the FireLens log driver.
logging.enableMetadata
Boolean
Optional. Whether to include ECS metadata in logs. Defaults to true.
logging.secretOptions
Map
Optional. The secrets to pass to the log configuration.
logging.configFilePath
String
Optional. The full config file path in your custom Fluent Bit image.
logging.env_file
String
The path to a file from the root of your workspace containing the environment variables to pass to the logging sidecar container. For more information about the environment variable file, see Considerations for specifying environment variable files.
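Putting these fields together, a sketch that ships logs through FireLens to CloudWatch (the destination options are illustrative and depend on your Fluent Bit output plugin):

logging:
  retention: 90
  destination:
    Name: cloudwatch
    region: us-west-2
    log_group_name: /copilot/sidecar-logs
    log_stream_prefix: copilot/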
observability
Map
The observability section lets you configure ways to measure your service's current state. Currently, only tracing configuration is supported.
For more details, see the observability page.
observability.tracing
String
The vendor to use for tracing. Currently, only awsxray is supported.
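For example:

observability:
  tracing: awsxray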
taskdef_overrides
Array of Rules
The taskdef_overrides section allows users to apply overriding rules to their ECS Task Definitions (see examples here).
taskdef_overrides.path
String
Required. Path to the Task Definition field to override.
taskdef_overrides.value
Any
Required. Value of the Task Definition field to override.
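For example, a sketch that raises the task's file-descriptor ulimit (the path and values are illustrative; consult the linked examples for validated rules):

taskdef_overrides:
  - path: ContainerDefinitions[0].Ulimits[-]
    value:
      Name: "nofile"
      SoftLimit: 1024
      HardLimit: 2048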
environments
Map
The environment section lets you override any value in your manifest based on the environment you're in. For example, in the sketch below (values illustrative), we override the count parameter to run 2 copies of our service in our 'prod' environment, and 2 copies using Fargate Spot capacity in our 'staging' environment:
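environments:
  prod:
    count: 2
  staging:
    count:
      spot: 2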