Worker Service

List of all available properties for a 'Worker Service' manifest. To learn about Copilot services, see the Services concept page.

Sample manifest for a worker service
# Your service name will be used in naming your resources like log groups, ECS services, etc.
name: orders-worker
type: Worker Service

image:
  build: ./orders/Dockerfile

subscribe:
  topics:
    - name: events
      service: api
    - name: events
      service: fe
  queue:
    retention: 96h
    timeout: 30s
    dead_letter:
      tries: 10

cpu: 256
memory: 512
count: 1

variables:
  LOG_LEVEL: info
secrets:
  GITHUB_TOKEN: GITHUB_TOKEN

# You can override any of the values defined above by environment.
environments:
  production:
    count:
      range:
        min: 1
        max: 50
        spot_from: 26
      queue_delay:
        acceptable_latency: 1m
        msg_processing_time: 250ms

name String The name of your service.

type String
The architecture type for your service. Worker Services are not reachable from the internet or elsewhere in the VPC. They are designed to pull messages from their associated SQS queues, which are populated by their subscriptions to SNS topics created by other Copilot services' publish fields.

subscribe Map The subscribe section allows worker services to create subscriptions to the SNS topics exposed by other Copilot services in the same application and environment. Each topic can define its own SQS queue, but by default all topics are subscribed to the worker service's default queue.

subscribe:
  topics:
    - name: events
      service: api
      queue: # Define a topic-specific queue for the api-events topic.
        timeout: 20s
    - name: events
      service: fe
  queue: # By default, messages from all topics will go to a shared queue.
    timeout: 45s
    retention: 96h
    delay: 30s

subscribe.queue Map By default, a service-level queue is always created. queue allows customization of certain attributes of that default queue.

subscribe.queue.delay Duration The time in seconds for which the delivery of all messages in the queue is delayed. Default 0s. Range 0s-15m.

subscribe.queue.retention Duration Retention specifies the time a message will remain in the queue before being deleted. Default 4d. Range 60s-336h.

subscribe.queue.timeout Duration Timeout defines the length of time a message is unavailable after being delivered. Default 30s. Range 0s-12h.

subscribe.queue.dead_letter.tries Integer If specified, creates a dead letter queue and a redrive policy which routes messages to the DLQ after tries attempts. That is, if a worker service fails to process a message successfully tries times, the message will be routed to the DLQ for examination instead of being redriven.
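For example, to route messages to a dead letter queue after five failed processing attempts (the number of tries shown is illustrative):

subscribe:
  queue:
    dead_letter:
      tries: 5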

subscribe.topics Array of topics Contains information about which SNS topics the worker service should subscribe to.

topic.name String Required. The name of the SNS topic to subscribe to.

topic.service String Required. The service this SNS topic is exposed by. Together with the topic name, this uniquely identifies an SNS topic in the Copilot environment.

topic.queue Boolean or Map Optional. Specify SQS queue configuration for the topic. If specified as true, the queue will be created with default configuration. Specify this field as a map for customization of certain attributes for this topic-specific queue.
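For example, the following gives the api events topic a dedicated queue with default configuration, while messages from the fe events topic continue to flow to the shared default queue:

subscribe:
  topics:
    - name: events
      service: api
      queue: true # Topic-specific queue with default configuration.
    - name: events
      service: fe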

image Map
The image section contains parameters relating to the Docker build configuration and exposed port.

image.build String or Map
If you specify a string, Copilot interprets it as the path to your Dockerfile. It will assume that the dirname of the string you specify should be the build context. The manifest:

image:
  build: path/to/dockerfile
will result in the following call to docker build: $ docker build --file path/to/dockerfile path/to

You can also specify build as a map:

image:
  build:
    dockerfile: path/to/dockerfile
    context: context/dir
    target: build-stage
    cache_from:
      - image:tag
    args:
      key: value
In this case, Copilot will use the context directory you specified and convert the key-value pairs under args to --build-arg overrides. The equivalent docker build call will be: $ docker build --file path/to/dockerfile --target build-stage --cache-from image:tag --build-arg key=value context/dir.

You can omit fields and Copilot will do its best to understand what you mean. For example, if you specify context but not dockerfile, Copilot will run Docker in the context directory and assume that your Dockerfile is named "Dockerfile." If you specify dockerfile but no context, Copilot assumes you want to run Docker in the directory that contains dockerfile.

All paths are relative to your workspace root.
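For instance, a hypothetical manifest that sets only the build context:

image:
  build:
    context: ./orders

Copilot would run Docker in the ./orders directory and assume the Dockerfile is named "Dockerfile" inside it, resulting in: $ docker build --file ./orders/Dockerfile ./orders.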

image.location String
Instead of building a container from a Dockerfile, you can specify an existing image name. Mutually exclusive with image.build. The location field follows the same definition as the image parameter in the Amazon ECS task definition.

image.credentials String An optional credentials ARN for a private repository. The credentials field follows the same definition as the credentialsParameter in the Amazon ECS task definition.
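For example, a sketch that pulls a prebuilt image from a private registry; the image name and secret ARN below are placeholders:

image:
  location: myorg/orders-worker:latest
  credentials: arn:aws:secretsmanager:us-west-2:111122223333:secret:my-registry-creds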

image.port Integer
The port exposed in your Dockerfile. Copilot should parse this value for you from your EXPOSE instruction.

image.labels Map
An optional key/value map of Docker labels to add to the container.

image.depends_on Map
An optional key/value map of Container Dependencies to add to the container. The key of the map is a container name and the value is the condition to depend on. Valid conditions are: start, healthy, complete, and success. You cannot specify a complete or success dependency on an essential container.

For example:

image:
  build: ./Dockerfile
  depends_on:
    nginx: start
    startup: success
In the above example, the task's main container will only start after the nginx sidecar has started and the startup container has completed successfully.

image.healthcheck Map
Optional configuration for container health checks.

image.healthcheck.command Array of Strings
The command to run to determine if the container is healthy. The string array can start with CMD to execute the command arguments directly, or CMD-SHELL to run the command with the container's default shell.

image.healthcheck.interval Duration
Time period between health checks, in seconds. Default is 10s.

image.healthcheck.retries Integer
Number of times to retry before container is deemed unhealthy. Default is 2.

image.healthcheck.timeout Duration
How long to wait before considering the health check failed, in seconds. Default is 5s.

image.healthcheck.start_period Duration Length of grace period for containers to bootstrap before failed health checks count towards the maximum number of retries. Default is 0s.
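Putting these fields together, a hypothetical health check might look like the following; the command itself is an assumption, so use whatever check fits your image:

image:
  build: ./orders/Dockerfile
  healthcheck:
    command: ["CMD-SHELL", "curl -f http://localhost:8080/healthz || exit 1"]
    interval: 10s
    retries: 2
    timeout: 5s
    start_period: 30s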

cpu Integer
Number of CPU units for the task. See the Amazon ECS docs for valid CPU values.

memory Integer
Amount of memory in MiB used by the task. See the Amazon ECS docs for valid memory values.

platform String
Operating system and architecture (formatted as [os]/[arch]) to pass with docker build --platform.
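For example, to target ARM-based tasks:

platform: linux/arm64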

count Integer or Map
If you specify a number:

count: 5
The service will set the desired count to 5 and maintain 5 tasks in your service.

count.spot Integer

If you want to use Fargate Spot capacity to run your services, you can specify a number under the spot subfield:

count:
  spot: 5

Alternatively, you can specify a map for setting up autoscaling:

count:
  range: 1-10
  cpu_percentage: 70
  memory_percentage: 80
  queue_delay:
    acceptable_latency: 10m
    msg_processing_time: 250ms

count.range String or Map
You can specify a minimum and maximum bound for the number of tasks your service should maintain, based on the values you specify for the metrics.

count:
  range: n-m
This will set up an Application Autoscaling Target with the MinCapacity of n and MaxCapacity of m.

Alternatively, if you wish to scale your service onto Fargate Spot instances, specify min and max under range and then specify spot_from with the desired count you wish to start placing your services onto Spot capacity. For example:

count:
  range:
    min: 1
    max: 10
    spot_from: 3

This will set your range as 1-10 as above, but will place the first two copies of your service on dedicated Fargate capacity. If your service scales to 3 or higher, the third and any additional copies will be placed on Spot until the maximum is reached.

range.min Integer
The minimum desired count for your service using autoscaling.

range.max Integer
The maximum desired count for your service using autoscaling.

range.spot_from Integer
The desired count at which you wish to start placing your service using Fargate Spot capacity providers.

count.cpu_percentage Integer
Scale up or down based on the average CPU your service should maintain.

count.memory_percentage Integer
Scale up or down based on the average memory your service should maintain.

count.queue_delay Map
Scale up or down to maintain an acceptable queue latency by tracking against the acceptable backlog per task.
The acceptable backlog per task is calculated by dividing acceptable_latency by msg_processing_time. For example, if you can tolerate consuming a message within 10 minutes of its arrival and it takes your task on average 250 milliseconds to process a message, then acceptableBacklogPerTask = 10 * 60 / 0.25 = 2400. Therefore, each task can hold up to 2,400 messages.
A target tracking policy is set up on your behalf to ensure your service scales up and down to maintain at most 2,400 messages per task. To learn more, see the docs.

count.queue_delay.acceptable_latency Duration
The acceptable amount of time that a message can sit in the queue. For example, "45s", "5m", "10h".

count.queue_delay.msg_processing_time Duration
The average amount of time it takes to process an SQS message. For example, "250ms", "1s".

exec Boolean
Enable running commands in your container. The default is false. Required for $ copilot svc exec.
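For example:

exec: true

With this enabled, you can open an interactive session in a running task with $ copilot svc exec.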

entrypoint String or Array of Strings
Override the default entrypoint in the image.

# String version.
entrypoint: "/bin/entrypoint --p1 --p2"
# Alternatively, as an array of strings.
entrypoint: ["/bin/entrypoint", "--p1", "--p2"]

command String or Array of Strings
Override the default command in the image.

# String version.
command: ps au
# Alternatively, as an array of strings.
command: ["ps", "au"]

network Map
The network section contains parameters for connecting to AWS resources in a VPC.

network.vpc Map
Subnets and security groups attached to your tasks.

network.vpc.placement String
Must be one of 'public' or 'private'. Defaults to launching your tasks in public subnets.

Info

If you launch tasks in 'private' subnets and use a Copilot-generated VPC, Copilot will automatically add NAT Gateways to your environment for internet connectivity. (See pricing.) Alternatively, when running copilot env init, you can import an existing VPC with NAT Gateways, or one with VPC endpoints for isolated workloads. See our custom environment resources page for more.

network.vpc.security_groups Array of Strings
Additional security group IDs associated with your tasks. Copilot always includes a security group so containers within your environment can communicate with each other.
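For example, a sketch that places tasks in private subnets and attaches an additional, pre-existing security group; the group ID is a placeholder:

network:
  vpc:
    placement: private
    security_groups: ['sg-0123456789abcdef0']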

variables Map
Key-value pairs that represent environment variables that will be passed to your service. Copilot will include a number of environment variables by default for you.

secrets Map
Key-value pairs that represent secret values from AWS Systems Manager Parameter Store that will be securely passed to your service as environment variables.
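For example, mirroring the sample manifest above and assuming you have stored a parameter named GITHUB_TOKEN in SSM Parameter Store:

variables:
  LOG_LEVEL: info
secrets:
  GITHUB_TOKEN: GITHUB_TOKEN # The key is the env var name; the value is the SSM parameter name.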

storage Map
The storage section lets you specify external EFS volumes for your containers and sidecars to mount. This allows you to access persistent storage across availability zones in a region for data processing or CMS workloads. For more detail, see the storage page. You can also specify extensible ephemeral storage at the task level.

storage.ephemeral Integer Specify how much ephemeral task storage to provision in GiB. The default value and minimum is 20 GiB. The maximum size is 200 GiB. Sizes above 20 GiB incur additional charges.

To create a shared filesystem context between an essential container and a sidecar, you can use an empty volume:

storage:
  ephemeral: 100
  volumes:
    scratch:
      path: /var/data
      read_only: false

sidecars:
  mySidecar:
    image: public.ecr.aws/my-image:latest
    mount_points:
      - source_volume: scratch
        path: /var/data
        read_only: false
This example will provision 100 GiB of storage to be shared between the sidecar and the task container. This can be useful for large datasets, or for using a sidecar to transfer data from EFS into task storage for workloads with high disk I/O requirements.

storage.volumes Map
Specify the name and configuration of any EFS volumes you would like to attach. The volumes field is specified as a map of the form:

volumes:
  <volume name>:
    path: "/etc/mountpath"
    efs:
      ...

storage.volumes.volume Map
Specify the configuration of a volume.

volume.path String
Required. Specify the location in the container where you would like your volume to be mounted. Must be fewer than 242 characters and must consist only of the characters a-zA-Z0-9.-_/.

volume.read_only Boolean
Optional. Defaults to true. Defines whether the volume is read-only or not. If false, the container is granted elasticfilesystem:ClientWrite permissions to the filesystem and the volume is writable.

volume.efs Boolean or Map
Specify more detailed EFS configuration. If specified as a boolean, or using only the uid and gid subfields, creates a managed EFS filesystem and dedicated Access Point for this workload.

# Simple managed EFS
efs: true

# Managed EFS with custom POSIX info
efs:
  uid: 10000
  gid: 110000

volume.efs.id String
Required. The ID of the filesystem you would like to mount.

volume.efs.root_dir String Optional. Defaults to /. Specify the location in the EFS filesystem you would like to use as the root of your volume. Must be fewer than 255 characters and must consist only of the characters a-zA-Z0-9.-_/. If using an access point, root_dir must be either empty or / and auth.iam must be true.

volume.efs.uid Uint32 Optional. Must be specified with gid. Mutually exclusive with root_dir, auth, and id. The POSIX UID to use for the dedicated access point created for the managed EFS filesystem.

volume.efs.gid Uint32 Optional. Must be specified with uid. Mutually exclusive with root_dir, auth, and id. The POSIX GID to use for the dedicated access point created for the managed EFS filesystem.

volume.efs.auth Map
Specify advanced authorization configuration for EFS.

volume.efs.auth.iam Boolean
Optional. Defaults to true. Whether or not to use IAM authorization to determine whether the volume is allowed to connect to EFS.

volume.efs.auth.access_point_id String
Optional. Defaults to "". The ID of the EFS access point to connect to. If using an access point, root_dir must be either empty or / and auth.iam must be true.
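For example, a sketch of mounting an existing external filesystem through an access point; the filesystem and access point IDs are placeholders:

storage:
  volumes:
    myEFSVolume:
      path: '/etc/mountpath'
      read_only: false
      efs:
        id: fs-1234567
        auth:
          iam: true
          access_point_id: fsap-1234567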

logging Map
The logging section contains log configuration parameters for your container's FireLens log driver (see examples here).

logging.retention Integer
Optional. The number of days to retain the log events. See this page for all accepted values. If omitted, the default is 30.

logging.image String
Optional. The Fluent Bit image to use. Defaults to amazon/aws-for-fluent-bit:latest.

logging.destination Map
Optional. The configuration options to send to the FireLens log driver.

logging.enableMetadata Boolean
Optional. Whether to include ECS metadata in logs. Defaults to true.

logging.secretOptions Map
Optional. The secrets to pass to the log configuration.

logging.configFilePath String
Optional. The full config file path in your custom Fluent Bit image.
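As an illustration, a hypothetical logging section that forwards container logs through Fluent Bit's cloudwatch output plugin; the region and log group name are placeholders:

logging:
  retention: 90
  destination:
    Name: cloudwatch
    region: us-west-2
    log_group_name: /copilot/logs/orders-worker
    log_stream_prefix: copilot/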

taskdef_overrides Array of Rules
The taskdef_overrides section allows users to apply overriding rules to their ECS Task Definitions (see examples here).

taskdef_overrides.path String Required. Path to the Task Definition field to override.

taskdef_overrides.value Any Required. Value of the Task Definition field to override.
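For example, a sketch that adds a ulimit to the main container; adjust the path and values to the Task Definition field you need:

taskdef_overrides:
  - path: "ContainerDefinitions[0].Ulimits[-]"
    value:
      Name: "cpu"
      SoftLimit: 1024
      HardLimit: 2048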

environments Map
The environments section lets you override any value in your manifest based on the environment you're in. In the sample manifest above, we override the count parameter for the 'production' environment so the service autoscales between 1 and 50 tasks based on queue delay, with the 26th and any additional copies placed on Fargate Spot capacity.
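As another hypothetical example, you could pin a fixed count in production and raise log verbosity in a 'test' environment:

environments:
  production:
    count: 2
  test:
    variables:
      LOG_LEVEL: debug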