
Understanding Deployment Strategies

Learn about strategies for effectively deploying your SaaS

Welcome back!

Today we’re going to explore a few methods of deploying your application. We’ll go over the pros and cons of each strategy in detail, with the goal of giving you the information you need to make an informed decision about how to deploy your own software.

To preface, these strategies are not the be-all and end-all of deploying your software. That said, I’d recommend not reinventing the wheel!

Keep in mind that these strategies are geared more toward enterprise-level applications; however, you can still use them in your side projects as well.

Blue-Green Deployment

This strategy involves maintaining two separate environments: the "blue" environment, which represents the current live version of your infrastructure, and the "green" environment, which holds the new version you want to deploy. Once the new version is ready, you switch traffic from the blue environment to the green one. This approach provides a smooth transition without downtime, as users are redirected from the old version to the new one. If any issues arise, it's easy to revert to the blue environment. Blue-Green Deployment ensures minimal disruption to users and provides a reliable way to test and validate changes before they go live.

For example, you may have a Jenkins server that handles builds for your software. However, you need to maintain that Jenkins instance by performing updates to plugins, Jenkins core updates, and other maintenance items. To handle this, you would keep the instance you have now as your “blue” environment. Spin up a second environment that is a replica of blue, called “green”, and make your changes there. Once the changes are made and everything is ready, you simply route traffic to the green environment. The green environment moving forward is the new “blue”.

I find it easier not to keep swapping between a blue and a green environment; instead, always treat the environment currently in use as “blue”. That way you never need to wonder which one traffic is pointed at. When it comes time to upgrade again, provision a replica of “blue” and repeat.

Another recommendation of mine is to handle the cutover at your load balancer / nginx configuration rather than in DNS. DNS changes take time to propagate, while nginx configuration changes take effect immediately. If your nginx configuration lives in infrastructure as code, such as Ansible, the switch is a source code change that reflects which server is in use, and you can track those changes with git. This gives you a nice metadata trail of when each cutover happened.
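To make that concrete, here’s a minimal sketch of what the nginx-level switch could look like if you scripted it. The file path, upstream name, and hostnames are assumptions for illustration; in practice the config would come from your Ansible templates so the change shows up in git.

```python
# Minimal sketch: flip traffic at the nginx layer rather than via DNS.
# The path and hostnames below are assumptions for illustration only.
import subprocess

UPSTREAM_CONF = "/etc/nginx/conf.d/jenkins_upstream.conf"  # hypothetical path

def switch_upstream(new_host: str, port: int = 8080) -> None:
    """Point the nginx upstream at the new (green) host and reload nginx."""
    config = (
        "upstream jenkins {\n"
        f"    server {new_host}:{port};\n"
        "}\n"
    )
    with open(UPSTREAM_CONF, "w") as f:
        f.write(config)

    # Validate the configuration before reloading so a typo can't take the site down.
    subprocess.run(["nginx", "-t"], check=True)
    subprocess.run(["nginx", "-s", "reload"], check=True)

if __name__ == "__main__":
    # Cut traffic over to green; rolling back is the same call with the old
    # (blue) hostname.
    switch_upstream("jenkins-green.internal")
```

The same idea applies whether the switch is a script, an Ansible task, or a load balancer API call: the key is that the cutover is a single, reversible, version-controlled change.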

Pros:
  • Minimal Downtime: This strategy offers zero-downtime transitions. There’s no maintenance window required.

  • Easy Rollback: If issues arise with the new version, reverting to the previous version is straightforward. You simply swap back to the old.

  • Complete Validation: The green environment can be thoroughly tested and validated before switching traffic, reducing the chances of failures in production.

Cons:
  • Resource Duplication: Running two environments simultaneously can increase operational costs due to duplicated resources. However, you can generally get around this by turning off the old blue environment after a validation period.

  • Complex Configuration: Maintaining identical configurations for both environments requires careful management and synchronization. If you have configuration drift, things can get hairy.

Canary Deployment

Canary deployment is a progressive strategy that introduces changes gradually to a subset of users before rolling them out to the entire user base. With this method, a small percentage of users, the "canaries", experience the new version of the infrastructure, while the majority continue to use the existing version. This allows you to monitor the new version's performance and stability in a real-world scenario, gathering valuable feedback and identifying potential issues. If the canary users experience any problems, you can quickly react and make adjustments before a full rollout. This approach minimizes risk by providing an incremental and controlled way to release updates.

If you are using Kubernetes, I would strongly recommend reading through this guide by Kubernetes themselves on how to handle this deployment strategy.
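If you want to see the idea without any orchestration tooling, here’s a simplistic sketch of deterministic, percentage-based canary routing at the application layer. The user IDs, backend names, and 5% split are assumptions for illustration; in Kubernetes you’d typically express the split with labels and replica ratios or ingress / service-mesh weights rather than in application code.

```python
# Simplistic sketch: deterministic, percentage-based canary bucketing.
# Backend names and the 5% split are illustrative assumptions.
import hashlib

CANARY_PERCENT = 5  # send roughly 5% of users to the new version

def is_canary_user(user_id: str) -> bool:
    """Hash the user id so the same user always lands in the same bucket."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < CANARY_PERCENT

def backend_for(user_id: str) -> str:
    # Hypothetical backend names; swap in your real service endpoints.
    return "app-canary" if is_canary_user(user_id) else "app-stable"

if __name__ == "__main__":
    for uid in ("alice", "bob", "carol", "dave"):
        print(uid, "->", backend_for(uid))
```

Hashing the user ID keeps the assignment sticky: a canary user stays on the canary for the whole experiment, which makes their feedback and error reports much easier to interpret.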

Pros:
  • Gradual Rollout: Allows you to release changes incrementally, minimizing the impact of potential issues.

  • Real-World Validation: Testing changes with a subset of users provides valuable insights into performance, user experience, and possible issues.

  • Quick Feedback Loop: Early feedback from canary users allows for prompt adjustments and fixes before a full rollout.

Cons:
  • Complexity: Managing different versions of your infrastructure for canary users and the main user base can introduce complexity.

  • Configuration Challenges: Ensuring that canary users receive the correct version and configuration can be challenging.

Rolling Deployment

Rolling Deployment is a strategy that updates your infrastructure incrementally, typically node by node or server by server. It involves taking a subset of the existing infrastructure out of rotation, updating it to the new version, and then placing it back into the production environment. This process continues sequentially until all nodes have been updated. Rolling Deployment ensures that the application remains available throughout the deployment process, as only a portion of the infrastructure is updated at a time. This strategy allows for continuous monitoring and validation of the new version's performance, ensuring that any issues are detected and resolved before the complete rollout.
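Here’s a rough sketch of what that loop looks like in practice. The node names and helper functions (drain, update, health_check, restore) are hypothetical placeholders for whatever your load balancer and provisioning tooling actually expose.

```python
# Rough sketch of a rolling update loop; helpers are hypothetical placeholders.
import time

NODES = ["app-01", "app-02", "app-03", "app-04"]  # assumed node names

def drain(node: str) -> None:
    print(f"taking {node} out of rotation")      # e.g. mark it down in the LB

def update(node: str, version: str) -> None:
    print(f"deploying {version} to {node}")      # e.g. run your deploy playbook

def health_check(node: str) -> bool:
    print(f"health checking {node}")
    return True                                  # e.g. poll a health endpoint

def restore(node: str) -> None:
    print(f"putting {node} back into rotation")

def rolling_deploy(version: str) -> None:
    for node in NODES:
        drain(node)
        update(node, version)
        if not health_check(node):
            # Halt the rollout; the untouched nodes keep serving traffic.
            raise RuntimeError(f"{node} failed health checks, halting rollout")
        restore(node)
        time.sleep(5)  # short soak time between nodes to watch metrics

if __name__ == "__main__":
    rolling_deploy("v2.3.1")
```

The soak time between nodes is where the "continuous monitoring" happens: if error rates climb after the first node or two, you stop with most of the fleet still on the old version.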

Pros:
  • Continuous Availability: This strategy maintains application availability by updating only a subset of nodes at a time.

  • Gradual Rollout: Similar to canary deployment, rolling deployment introduces changes incrementally, reducing risks.

  • Real-Time Monitoring: Each node's performance can be monitored during the update process, allowing for immediate issue detection.

Cons:
  • Potential Disruption: Despite the incremental approach, there is a slight risk of disruption during the deployment process.

  • Resource Strain: While a node is out of rotation, the remaining nodes carry its load, which can strain resources and impact performance during the deployment.

There is no one-size-fits-all strategy for deploying an application. You should use the strategy that works best for the application being deployed. I am a big fan of blue/green for internal infrastructure: generally, you are making changes that are less customer-facing, so feedback from users is less important. When it comes to SaaS, canary is a great approach.

Keep in mind, this is not an exhaustive list. There are many different strategies for deploying software; these are the ones I’ve used and am most familiar with.

If you enjoyed this newsletter, please share it with someone else who would like to learn more about deployment strategies!