Crossing the Finish Line: Decoding Continuous Delivery and Deployment
We have been talking about CI/CD for the past few weeks.
Starting with breaking the box myth, we have looked into containers, set up Git as our tripwire, and used CI tools to automatically build and deposit a pristine, secure container image into an artifact registry (like AWS ECR).
The CI phase has answered one critical question: Is this code safe?
Now, the "CD" phase must answer the next one: How do we get this safe code to our users without breaking anything? This is where Continuous Delivery and Continuous Deployment take over. While they share an acronym, understanding the subtle difference between the two—and the tools required to execute them—is the final leap from traditional system administration to modern cloud engineering.
Delivery vs. Deployment: The Human Gatekeeper
First, let's clear up the biggest source of confusion for fresh DevOps engineers. The "D" in CI/CD stands for two distinct, yet related, concepts.
Continuous Delivery (The "Click to Ship" Model)
Continuous Delivery means your CI pipeline has packaged your application and prepared the environment so perfectly that the software is always in a deployable state. However, the final action to push the code live to production requires a human to press a button.
- Why use it? Many organizations require a final manual sign-off for business, compliance, or marketing reasons before a feature goes live to customers.
Continuous Deployment (The "Hands-Free" Model)
Continuous Deployment takes automation to its absolute limit. If the developer pushes code to the main branch and the CI pipeline glows green (all tests pass), the CD system automatically deploys that code straight to production. Zero human intervention.
- Why use it? It enables massive speed. High-performing teams use this to deploy hundreds of small, low-risk changes a day.
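The entire difference between the two models boils down to one gate. Here is a toy Python sketch of that gate; the `release` function and its return strings are illustrative, not any real tool's API:

```python
# Toy model of the one difference between the two "CD"s: a single
# human-approval gate. Names and return values are illustrative.

def release(pipeline_green: bool, mode: str, human_approved: bool = False) -> str:
    if not pipeline_green:
        return "blocked"                      # CI failed: nothing ships
    if mode == "delivery" and not human_approved:
        return "ready (awaiting approval)"    # deployable, but a human must click
    return "deployed"                         # deployment mode: straight to prod

print(release(True, "delivery"))    # ready (awaiting approval)
print(release(True, "deployment"))  # deployed
```

In Continuous Delivery the pipeline stops at "ready"; in Continuous Deployment a green pipeline is the only approval needed.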
Setting the Stage: Infrastructure as Code (IaC)
Before you can deploy an application, you need somewhere to deploy it to. In the old days, a SysAdmin would manually click through the AWS console or run a series of CLI commands to provision servers and load balancers.
In a modern CD pipeline, manual provisioning is a cardinal sin. You cannot automate software deployment if the underlying infrastructure is managed by hand.
This is where Infrastructure as Code (IaC) tools like OpenTofu or Terraform become the foundation of your CD phase.
Instead of clicking buttons in a console, you write declarative configuration files defining your VPCs, EC2 instances, or ECS clusters. Your CD pipeline uses these tools to provision or update the cloud environment right before deploying the new application code. The infrastructure itself becomes just another version-controlled artifact in Git.
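To make "declarative" concrete, here is a minimal, purely illustrative OpenTofu/Terraform file declaring a VPC; the resource name and CIDR block are assumptions for the example, not values from this series:

```hcl
# Declares the desired end state; running `tofu apply` (or
# `terraform apply`) reconciles the cloud to match it.
# The name "app" and the CIDR block are illustrative.
resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "app-vpc"
  }
}
```

Because this file lives in Git alongside the application code, an infrastructure change goes through the same review-and-pipeline flow as any other commit.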
Safe Rollout Strategies
When the infrastructure is ready, the CD tool orchestrates the rollout of your new container image. But we don't just shut off the old server and turn on the new one—that causes downtime. Modern CD utilizes advanced deployment strategies to ensure users never see a blip:
1. The Rolling Update
Instead of replacing everything at once, the CD system replaces instances incrementally. If you have 10 web servers running version 1.0, the pipeline will deploy version 1.1 to two servers, wait for them to become healthy, route traffic to them, and then tear down two old ones. It "rolls" across the cluster until everything is updated.
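The batch logic above can be sketched in a few lines of Python. Everything here (`rolling_update`, the `is_healthy` hook, the fleet of dicts) is a hypothetical stand-in for what a real orchestrator does:

```python
# Sketch of a rolling update: replace servers in batches, gating each
# batch on a health check before moving on. All names are illustrative.

def rolling_update(servers, new_version, batch_size=2, is_healthy=lambda s: True):
    """Roll `new_version` across the fleet, `batch_size` servers at a time."""
    for start in range(0, len(servers), batch_size):
        batch = servers[start:start + batch_size]
        for server in batch:
            server["version"] = new_version          # deploy the new image
        if not all(is_healthy(s) for s in batch):    # health gate per batch
            raise RuntimeError("batch failed health check; halting rollout")
    return servers

fleet = [{"name": f"web-{i}", "version": "1.0"} for i in range(10)]
rolling_update(fleet, "1.1")                         # two servers at a time
print(all(s["version"] == "1.1" for s in fleet))     # True
```

The key property is the health gate: a bad release halts partway through instead of taking down the whole fleet at once.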
2. Blue/Green Deployment
This requires duplicating your infrastructure. You have an active production environment (Blue) running version 1.0. Your CD pipeline deploys version 1.1 to an identical, idle environment (Green). Once Green is fully tested and verified, you simply flip the load balancer switch to route all user traffic to Green. If something goes wrong, flipping back to Blue takes milliseconds.
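Conceptually, that "switch" is a single pointer flip, which is why both cutover and rollback are near-instant. A Python sketch (the `LoadBalancer` class, environment names, and version strings are all illustrative):

```python
# Illustrative sketch of a Blue/Green switch: not a real cloud API.

class LoadBalancer:
    def __init__(self):
        self.environments = {"blue": "v1.0", "green": "v1.1"}
        self.live = "blue"                 # Blue is serving production

    def serving(self) -> str:
        return self.environments[self.live]

    def cutover(self, target: str) -> None:
        # The "flip" is one atomic reassignment, so switching (and
        # rolling back) takes effect almost instantly.
        assert target in self.environments
        self.live = target

lb = LoadBalancer()
lb.cutover("green")        # Green is verified, so it now takes all traffic
print(lb.serving())        # v1.1
```

Rollback is just `lb.cutover("blue")`: the same one-step flip in reverse, with the old environment still warm and waiting.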
3. Canary Releases
Named after the "canary in the coal mine," this strategy deploys the new version (1.1) alongside the old version (1.0), but only routes a tiny fraction of traffic (say, 5%) to the new code. The CD pipeline monitors the "canary" for errors or latency. If it remains healthy, the pipeline gradually increases the traffic until it reaches 100%.
These deployment strategies are most commonly associated with Kubernetes, and it's true that Kubernetes makes them easier with its vast ecosystem of deployment tooling. But you can implement the same strategies with plain Docker, too.
The Ultimate Handoff
Whether you choose Continuous Delivery with a manual button press or fully automated Continuous Deployment, the goal is the same: making releases boring.
By utilizing IaC tools like OpenTofu to manage the AWS environment, and advanced rollout strategies to protect the user experience, the CD pipeline completely removes the late-night deployment anxiety that used to plague traditional IT teams.