## Uploading Local Artifacts (added in v1.21.0)
Copilot supports uploading local files referenced from your addon templates to S3, and replacing the relevant resource properties with the uploaded S3 location.
On `copilot svc deploy` or `copilot svc package --upload-assets`, certain fields on supported resources will be updated with an S3 location before the addon template is sent to CloudFormation. Your templates on disk will not be modified. To see the full list of supported resources, take a look at the AWS CLI documentation.
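To preview the transformed template without deploying it, you can print the packaged output. A minimal sketch, assuming a hypothetical service named `example-service` in a `test` environment:

```console
$ copilot svc package --name example-service --env test --upload-assets
```

In the printed addon template, local paths (such as the `Code` property in the example below) appear replaced with their uploaded S3 locations.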
This feature can be used to deploy local Lambda functions stored in the same repo as another Copilot service. For example, to deploy a JavaScript Lambda function alongside a Copilot service, you can add this resource to your addon template:
`copilot/example-service/addons/lambda.yml`

```yaml
ExampleFunction:
  Type: AWS::Lambda::Function
  Properties:
    Code: lambdas/example/
    Handler: "index.handler"
    Timeout: 900
    MemorySize: 512
    Role: !GetAtt "ExampleFunctionRole.Arn"
    Runtime: nodejs20.x
```
`lambdas/example/index.js`

```js
exports.handler = function (event, context) {
  console.log('example event:', event);
  context.succeed('success!');
};
```
On `copilot svc deploy`, the `lambdas/example` directory will be zipped and uploaded to S3, and the `Code` property will be updated to:

```yaml
Code:
  S3Bucket: copilotBucket
  S3Key: hashOfLambdasExampleZip
```
For resources that expect a zip file (e.g., `AWS::Serverless::Function`), a single file will be zipped before upload as well.
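As an illustration, a snippet like the following would have its single file zipped and uploaded, with `CodeUri` rewritten to the resulting S3 location. The resource name and paths here are hypothetical:

```yaml
ExampleServerlessFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: lambdas/example/index.js # a single file is zipped before upload
    Handler: index.handler
    Runtime: nodejs20.x
```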
File paths are considered relative to the parent of the `copilot/` directory in your repo. For the above example, the folder structure would look like:
```
.
├── copilot
│   └── example-service
│       ├── addons
│       │   └── lambda.yml
│       └── manifest.yml
└── lambdas
    └── example
        └── index.js
```
### Example: DynamoDB Stream Processing Lambda
This example walks through creating an Amazon DynamoDB table with a Lambda function connected to process events from the table's stream. This architecture can be useful if you have a service that needs to minimize the latency of storing data, but can kick off a separate, longer-running process to handle that data.
#### Prerequisites

- An existing Copilot service to attach the DynamoDB table addon to.

#### Steps
1. Generate a DynamoDB table addon for your service by running `copilot storage init`. (More info here!)

2. Add the `StreamSpecification` property to the generated `AWS::DynamoDB::Table` resource:

    `copilot/service-name/addons/ddb.yml`

    ```yaml
    StreamSpecification:
      StreamViewType: NEW_AND_OLD_IMAGES
    ```
3. Add a Lambda function, IAM role, and Lambda event stream mapping resource, making sure to give the IAM role access to the DynamoDB table's stream:

    `copilot/service-name/addons/ddb.yml`

    ```yaml
    recordProcessor:
      Type: AWS::Lambda::Function
      Properties:
        Code: lambdas/record-processor/ # local path to the record processor lambda
        Handler: "index.handler"
        Timeout: 60
        MemorySize: 512
        Role: !GetAtt "recordProcessorRole.Arn"
        Runtime: nodejs20.x

    recordProcessorRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action:
                - sts:AssumeRole
        Path: /
        ManagedPolicyArns:
          - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
        Policies:
          - PolicyName: "StreamProcessing" # hypothetical name; inline policies require a PolicyName
            PolicyDocument:
              Version: 2012-10-17
              Statement:
                - Effect: Allow
                  Action:
                    - dynamodb:DescribeStream
                    - dynamodb:GetRecords
                    - dynamodb:GetShardIterator
                    - dynamodb:ListStreams
                  # replace <table> with the generated table's resource name
                  Resource: !Sub ${<table>.Arn}/stream/*

    tableStreamMappingToRecordProcessor:
      Type: AWS::Lambda::EventSourceMapping
      Properties:
        FunctionName: !Ref recordProcessor
        EventSourceArn: !GetAtt <table>.StreamArn # replace <table> here too
        BatchSize: 1
        StartingPosition: LATEST
    ```
4. Write your Lambda function:

    `lambdas/record-processor/index.js`

    ```js
    "use strict";
    const { unmarshall } = require('@aws-sdk/util-dynamodb');

    exports.handler = async function (event, context) {
      for (const record of event?.Records ?? []) {
        // only process newly inserted records
        if (record?.eventName != "INSERT") {
          continue;
        }
        const item = unmarshall(record?.dynamodb?.NewImage);
        console.log("processing item", item);
      }
    };
    ```
5. Run `copilot svc deploy` to deploy your Lambda function! 🎉 As your service adds records to the table, the Lambda function will be triggered and can process new records. For one way to check this end to end, see the sketch after these steps.
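To verify the pipeline, you can insert an item into the table and tail the function's logs. A hypothetical sketch; replace the placeholders with the table and function names that CloudFormation generates for your stack, and use your table's actual partition key in the item:

```console
$ aws dynamodb put-item \
    --table-name <generated-table-name> \
    --item '{"id": {"S": "test-1"}}'
$ aws logs tail /aws/lambda/<generated-function-name> --follow
```

A "processing item" line should appear in the logs for each newly inserted record.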