Solution #1: A custom script to upload artifacts to storage
To prevent redundant deployments, we had to opt out of using class wrappers for automatic artifact processing. Instead, we created a Bash script and ran it with the Exec Maven Plugin.
The script uploads function.zip with the packaged Lambda function code to AWS S3 during the build, and CDK then uses that artifact for deployment:
- `mvn deploy` is initiated.
- The deployment algorithm calls the script.
- Once the artifact is built, the script uploads it to S3 under the object key pattern group_id/artifact_id/function.zip, where group_id is the service name and artifact_id is the Lambda function name.
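The upload step above can be sketched as follows. This is a minimal illustration, not our actual script: the bucket name and function names are hypothetical, and the real boto3 upload call is left commented out so the sketch stays self-contained.

```python
from pathlib import Path

def object_key(group_id: str, artifact_id: str) -> str:
    """Compose the S3 object key: group_id is the service name,
    artifact_id is the Lambda function name."""
    return f"{group_id}/{artifact_id}/function.zip"

def upload_artifact(group_id: str, artifact_id: str,
                    zip_path: Path, bucket: str) -> str:
    """Upload the built function.zip and return its S3 URI.
    The actual upload is commented out for illustration."""
    key = object_key(group_id, artifact_id)
    # import boto3
    # boto3.client("s3").upload_file(str(zip_path), bucket, key)
    return f"s3://{bucket}/{key}"

# Hypothetical service and function names:
uri = upload_artifact("payments-service", "process-payment",
                      Path("target/function.zip"), "artifact-bucket")
```

The returned URI follows the same key pattern, e.g. `s3://artifact-bucket/payments-service/process-payment/function.zip`.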
We later complemented this solution with the Build Helper Maven Plugin, a tool that uploads built artifacts to S3 and ECR.
Solution #2: Establishing artifact versioning and creating a custom registry of artifact versions
To ensure the algorithm deploys the same Lambda function versions to all three environments, we had to create our own registry of artifact versions.
For this, we used
We also created a custom script that automatically updates all Lambda functions when the version of the internal library they rely on changes.
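The core of that update script can be sketched as a simple diff against the registry. This is an illustrative sketch under an assumed registry shape (function name mapped to the internal-library version it was built with); all names and versions are hypothetical.

```python
def functions_to_update(registry: dict[str, str], library_version: str) -> list[str]:
    """Return the Lambda functions whose recorded internal-library
    version lags behind the newly released one."""
    return sorted(
        name for name, built_with in registry.items()
        if built_with != library_version
    )

# Hypothetical registry: function name -> library version it was built with
registry = {
    "process-payment": "1.4.0",
    "send-notification": "1.5.0",
    "generate-report": "1.4.0",
}

stale = functions_to_update(registry, "1.5.0")
# ['generate-report', 'process-payment'] — these need rebuilding and redeployment
```

In the real script, each stale function would then be rebuilt against the new library version and redeployed.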
Solution #3: A CDK module that sources required artifacts and pulls them directly from S3 and ECR
Next, we built a CDK module that scans the registry, checks whether the needed artifact version exists, and provides a link to it. Since such actions can’t be performed from within AWS CDK code at runtime, the module finds the needed artifact before the CDK code is triggered and uses the link at runtime.
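The lookup the module performs before the CDK code runs can be sketched like this. The registry shape (artifact id mapped to version-to-link entries) and all names are assumptions for illustration, not the module's actual interface.

```python
class ArtifactNotFound(Exception):
    """Raised when the requested artifact version is missing from the registry."""

def resolve_artifact(registry: dict[str, dict[str, str]],
                     artifact_id: str, version: str) -> str:
    """Scan the registry, verify the needed artifact version exists,
    and return its S3/ECR link. Intended to run before `cdk synth`,
    since CDK code itself can't perform this lookup."""
    versions = registry.get(artifact_id, {})
    if version not in versions:
        raise ArtifactNotFound(f"{artifact_id}@{version} is not in the registry")
    return versions[version]

# Hypothetical registry contents:
registry = {
    "process-payment": {
        "1.5.0": "s3://artifact-bucket/payments-service/process-payment/function.zip",
    },
}

link = resolve_artifact(registry, "process-payment", "1.5.0")
# The resolved link is then handed to the CDK app, e.g. via context:
#   cdk synth -c artifactLink=<link>
```

A missing version fails fast with `ArtifactNotFound` before any CDK code runs, rather than surfacing as a broken deployment later.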
We continue to improve the module. Our next step is to make it resolve links to local artifacts, not just remote ones, so that developers can easily use the artifacts they build on their own.