Building Services with ECS on Fargate
Using Docker images without extensive server management
Welcome back to another week of DevOps Insights!
This week, let's dive into using ECS to scale services without managing servers. For those new to it, Amazon Elastic Container Service (ECS) is AWS's managed service for deploying Docker containers. With ECS, you can easily configure your services to run behind a load balancer and scale out based on memory or CPU usage.
Recently, I've been using ECS for a client project, but now I want to explore its use for my own app, zivno.cz.
Why ECS?
I firmly believe in the power of Docker for running web APIs. Its deployment flexibility is unmatched. Docker allows running containers in various environments like VMs, Lambda, Kubernetes, and ECS, offering flexibility not only in infrastructure but also across cloud providers. See: Cloud Native.
Running the app directly on an EC2 instance was not an option I entertained. Setting up a server requires configuration scripts for provisioning and scaling, and maintaining those scripts for every deployment is a task I prefer to avoid. Hence, I leaned towards a serverless solution.
Lambda was an option, but the development experience felt cumbersome. Despite improving tooling like SST, it didn't fit this project's needs, especially since I had already started building the application with Nest.js. Changing the development paradigm mid-project wasn't viable.
So it boiled down to ECS or Kubernetes. Since I was aiming for a serverless approach, Kubernetes, though excellent for enterprise-grade software, seemed like overkill for my current needs.
Enter ECS. It lets me keep developing on Docker and scale out effortlessly. Using IAM authentication for the database (RDS) and running on Fargate eliminates server management. With Kubernetes too complex, EC2 too much manual work to scale, and Lambda requiring heavy refactoring, ECS stands out as the ideal middle ground.
How Does ECS Work?
ECS comprises four main components:
1. Capacity: The compute and memory resources your service runs on, available through EC2 instances, on-premises VMs (via ECS Anywhere), or Fargate (serverless).
2. Controller: This component deploys and manages your containers, similar to the Kubernetes control plane, determining where they are placed within the available capacity.
3. Provisioning: The interface for application provisioning, with various methods like AWS CLI, SDK, and CDK.
4. Task Definition: A JSON document that tells ECS how to run your containers: which image to pull, how much CPU and memory to allocate, which ports to expose, and so on.
The process involves using the provisioning layer to deploy or update a container based on a task definition. The controller then checks whether capacity is available, scales it up if necessary, and deploys the application on the newly provisioned capacity.
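As a rough illustration, a minimal Fargate task definition might look like the following. The names, account ID, region, and sizing are placeholders, not values from a real project:

```json
{
  "family": "zivno-api",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/zivno-api:latest",
      "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

Registering this document with ECS (step 3, provisioning) is what makes the controller aware of a new revision to roll out.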
Automating Build and Deployment
For CI/CD, I'll initially set up everything in Terraform, preferring its granularity and state management over CloudFormation. This setup includes the load balancer, ECS, task definition, ECR repository, IAM roles, and policies.
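A sketch of what the core Terraform resources could look like; resource names, counts, and the referenced target group, subnets, and security group are illustrative and assume the rest of the stack (ALB, VPC, task definition) is defined elsewhere:

```hcl
resource "aws_ecr_repository" "api" {
  name = "zivno-api"
}

resource "aws_ecs_cluster" "main" {
  name = "zivno-cluster"
}

resource "aws_ecs_service" "api" {
  name            = "zivno-api"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.api.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  # Fargate tasks with awsvpc networking need explicit network placement.
  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [aws_security_group.api.id]
  }

  # Register running containers with the load balancer's target group.
  load_balancer {
    target_group_arn = aws_lb_target_group.api.arn
    container_name   = "api"
    container_port   = 3000
  }
}
```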
The build pipeline is straightforward. After a Docker build, I push the image to ECR. Production images get a 'latest' tag, while dev and staging environments use their own respective tags, streamlining image deployment per environment.
Upon pushing the image, I trigger an ECS service update. A key advantage of ECS is its built-in health check mechanism, which ensures smooth deployments: new containers only replace the old ones once they come up healthy.
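Put together, the pipeline steps can be sketched as a dry run. The account ID, region, cluster, and service names are placeholders, and the commands are printed rather than executed so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Placeholder registry; the account ID and region are illustrative.
REPO="123456789012.dkr.ecr.eu-central-1.amazonaws.com/zivno-api"

# Production images get the 'latest' tag; dev and staging use their own names.
select_tag() {
  if [ "$1" = "production" ]; then echo "latest"; else echo "$1"; fi
}

TAG="$(select_tag "${DEPLOY_ENV:-dev}")"

# Build, authenticate against ECR, push, then trigger the service update.
echo "docker build -t $REPO:$TAG ."
echo "aws ecr get-login-password | docker login --username AWS --password-stdin $REPO"
echo "docker push $REPO:$TAG"
echo "aws ecs update-service --cluster zivno-cluster --service zivno-api --force-new-deployment"
```

The `--force-new-deployment` flag makes ECS pull the freshly pushed image even when the task definition still references the same tag.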
Major Unseen Benefits of ECS
ECS pleasantly surprised me with built-in monitoring and logging, easily integrated with CloudWatch. Attaching load balancers to ECS containers simplifies traffic distribution among running containers.
The no-downtime deployment feature of ECS, backed by robust health checks, ensures service availability even during updates. If a runtime health check fails, ECS quickly reprovisions a new container.
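The runtime checks mentioned above can be declared per container in the task definition. A hedged example, assuming a hypothetical /health endpoint and port 3000:

```json
"healthCheck": {
  "command": ["CMD-SHELL", "curl -f http://localhost:3000/health || exit 1"],
  "interval": 30,
  "timeout": 5,
  "retries": 3,
  "startPeriod": 60
}
```

If the command exits non-zero more than `retries` times in a row, ECS marks the task unhealthy and replaces it.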
Conclusion
ECS offers an accessible path to scalability and stability in applications. It's an ideal stepping stone for teams that have outgrown plain EC2 but don't yet need Kubernetes, providing many of the same benefits without the complexity of node group configuration.
One downside is the potential cost, especially when scaling horizontally. Fargate's compute costs can exceed on-demand EC2 pricing, but for extensive scaling, switching to EC2-backed capacity can mitigate this.
Thanks for reading another issue of this newsletter!
If you have a specific DevOps topic you're curious about, feel free to reach out and I will write about it!
For new readers, check out past issues here.