AWS Copilot CLI: Effortless Launchpad with a Powerful Escape Route

In July 2020, Amazon Web Services (AWS) introduced AWS Copilot CLI, a command line tool that helps developers deploy and manage their Docker container applications on AWS. This write-up recounts how I arrived at AWS Copilot CLI and my thoughts since adopting it in late 2022.

Operational Decisions and Principles

The way businesses host their applications on the web has evolved in stages: they began by hosting services in-house, then rented datacenter space, and eventually migrated to cloud computing services like AWS, Google Cloud, or Microsoft Azure. Cloud providers relieve software companies of the complex task of operating machines.

In 2022 and beyond, as a budding startup, we needed to maximize development efficiency: unnecessary tinkering with infrastructure would hinder progress on our product offering. Our constraints therefore dictated a minimal operational burden. Further, a good architecture had to place minimal restrictions on future decisions. Keeping this in mind, our principles were: 1) use Docker containers as the deployment unit, 2) deploy infrastructure as code, 3) keep the setup extensible as the company grows.

Modern services like AWS Lambda and Google Cloud Functions offer highly abstracted platforms that let smaller teams immerse themselves in code development. In my opinion, though, their proprietary setups restrict local development, reproducibility, and introspection. Docker containers, conversely, are simple, easy to inspect and run locally, and broadly supported as a deployment unit. They are, I believe, the deployment unit of choice today.

Building replica infrastructure for staging, preview, and production environments can get convoluted if you rely solely on sequences of commands. Infrastructure as code makes it possible to replicate an entire environment, including services, secrets, databases, and caches, in a minimal number of steps.
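
As a generic illustration of the idea (not our actual setup), the sketch below uses the AWS CDK in TypeScript: one stack definition is instantiated once per environment, so staging and production are replicated from the same code.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

// One stack definition describes everything an environment needs.
class EnvironmentStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // Hypothetical resource standing in for databases, caches, secrets, etc.
    new s3.Bucket(this, 'AssetsBucket', { versioned: true });
  }
}

const app = new cdk.App();
// Replicating an environment is one more instantiation, not a new list of manual commands.
new EnvironmentStack(app, 'Staging');
new EnvironmentStack(app, 'Production');
```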

Lastly, we needed our approach to stay flexible. As our customer base grows, integrating specialized services into our infrastructure should be easy to handle.

On a side note, I did not initially tackle the issue of service cost, as my main concern was development time.

My Journey to AWS Copilot

After much research, I was initially drawn to Fly.io, which uses Firecracker, an open-source project devised by AWS that lets you run micro VMs. Fly can pause your application when idle, meaning the majority of startups' applications would essentially run for free. A Docker image coupled with a manifest file allows any application to be rapidly deployed on Fly, satisfying two of my three primary criteria.

Despite Fly.io's appealing features, I realized that any functionality beyond compute would require another service provider or a self-hosted service—an extra operational burden I was aiming to avoid. In contrast, AWS offers a wide range of services and powers a significant portion of the internet.

During my research, I came across AWS Copilot, which neatly packages container services for AWS and manages the setup process entirely in the background.

Escape Hatch

While the default templates are sufficient for running most services, there may come a time when you need to fine-tune some aspects. Here, AWS Copilot CLI is a real game-changer: it lets you define overrides either as CloudFormation YAML patches or in TypeScript with the CDK.
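
To give a flavor of the CDK route, here is a minimal sketch of what an override could look like. It is not the exact scaffold Copilot generates; the template path, the TaskDefinition logical ID, and the property being patched are assumptions for illustration.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as cfninc from 'aws-cdk-lib/cloudformation-include';
import { Construct } from 'constructs';

// Sketch: load the CloudFormation template Copilot rendered and patch a single property.
class OverrideStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Assumption: the service template rendered by Copilot is available as a local file.
    const template = new cfninc.CfnInclude(this, 'ServiceTemplate', {
      templateFile: '.build/in.yml',
    });

    // Illustrative tweak: give the ECS task more ephemeral storage than the default.
    const taskDef = template.getResource('TaskDefinition') as ecs.CfnTaskDefinition;
    taskDef.ephemeralStorage = { sizeInGiB: 100 };
  }
}

// For illustration only; the project generated by Copilot wires the stack up itself.
const app = new cdk.App();
new OverrideStack(app, 'Override');
```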

A 10-Month Retrospective

From the beginning of our project, AWS Copilot CLI has been an integral part of our continuous deployment via GitHub Actions. Initially, determining the minimum permissions needed for the deployer role proved challenging: the tool favors AWS-hosted pipelines by default, so driving deployments from the CLI in our own workflow required manual setup.

Our journey began with a single backend service using the load-balanced service template. Gradually, we incorporated background workers, connected to the backend via Simple Queue Service (SQS), and the Static Site service. We also enhanced our default environment by integrating RDS for our databases, Redis for caching, and S3 for storage. Looking ahead, I plan to define all remaining resources, such as alarms, as overrides on the environment or service.
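
As a rough sketch of what that could look like in CDK terms, the alarm below is purely illustrative; the metric, thresholds, and resource names are assumptions rather than our real configuration.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import { Construct } from 'constructs';

// Sketch: an alarm defined next to the service instead of being clicked together in the console.
class ServiceAlarmsStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Hypothetical alarm on the backend service's CPU utilization.
    new cloudwatch.Alarm(this, 'BackendHighCpu', {
      metric: new cloudwatch.Metric({
        namespace: 'AWS/ECS',
        metricName: 'CPUUtilization',
        // Assumed names; Copilot derives the real cluster/service names from the app and env.
        dimensionsMap: { ClusterName: 'my-app-prod', ServiceName: 'backend' },
        statistic: 'Average',
        period: cdk.Duration.minutes(5),
      }),
      threshold: 80,
      evaluationPeriods: 3,
      comparisonOperator: cloudwatch.ComparisonOperator.GREATER_THAN_THRESHOLD,
    });
  }
}

const app = new cdk.App();
new ServiceAlarmsStack(app, 'ServiceAlarms');
```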

A rolling deployment for a service typically wraps up in about five minutes, although some services, such as the background worker, don't use a rolling strategy. To round out our operational resources, we also updated our runbooks, detailing how to inspect logs locally and how to perform rollbacks with tagged images on the Elastic Container Registry (ECR).

Overall, our current approach competently handles our needs, with ample room to cater to emerging demands and to customize for specific future tasks.

Recent Releases

The team recently introduced a Static Site template, which, despite needing a few tweaks, we were able to integrate smoothly. They have even added a single deployment command, copilot deploy.

Furthermore, the team has been very proactive in responding to issues on GitHub, revealing an earnest commitment to customer satisfaction.

Conclusion

AWS Copilot CLI is an efficient way to deploy containerized applications on AWS with best practices and sensible defaults. Not only does it simplify getting started, but it also allows for continuous adjustment and inspection as you grow. Try AWS Copilot CLI and share your experiences!