What Happens After Deploy? Why Your Pipeline Isn't Done Yet
You just watched your pipeline turn green. The artifact deployed to production without a single error message. Your team breathes a sigh of relief and moves on to the next task. But here is the uncomfortable question: is the application actually working for users?
I have seen teams celebrate a successful deployment only to discover an hour later that the new feature silently broke user login. The deployment logs showed no errors. The server was running. But nobody could sign in. The pipeline considered the job done, while the real problem went unnoticed until users started complaining.
This gap between "deployment succeeded" and "deployment works" is exactly why post-deploy verification exists. It is the step that separates a technically successful deployment from one that actually delivers value.
The Three Layers of Post-Deploy Verification
Post-deploy verification is the stage where your pipeline checks the newly deployed version directly in the target environment. These checks must be automated and programmatic, not something someone runs manually from their laptop. There are three common types, each serving a different purpose.
In practice, the three layers run in sequence: health check first, then smoke tests, then synthetic checks, with automatic rollback or alerting the moment any check fails.
Health Check: Is the Application Alive?
A health check is the most basic verification. It answers one question: is the service running and responding to requests? Typically this is a dedicated endpoint like /health or /status that returns an HTTP 200 status code when the application is alive.
But here is the trap: a health check only tells you the application is not completely dead. It does not tell you whether the application works correctly. A service can return 200 while serving corrupted data, returning slow responses, or silently failing on critical operations. Health checks are necessary, but they are the minimum bar, not the finish line.
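As a concrete sketch, a pipeline can poll the health endpoint right after deploy and give the service a short window to come up. The URL, attempt count, and sleep interval below are assumptions to tune for your service:
#!/bin/bash
set -euo pipefail
# Health check: poll the /health endpoint until it returns 200 or we give up.
# The URL, attempt count, and retry interval are placeholders.
URL="https://myapp.com/health"
for ATTEMPT in $(seq 1 10); do
  CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "$URL" || true)
  CODE=${CODE:-000}  # treat a request that produced no status code as 000
  if [ "$CODE" -eq 200 ]; then
    echo "Health check passed on attempt $ATTEMPT"
    exit 0
  fi
  echo "Attempt $ATTEMPT: $URL returned $CODE, retrying in 5 seconds"
  sleep 5
done
echo "Health check failed: $URL never returned 200"
exit 1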
Smoke Test: Does the Core Functionality Work?
Smoke tests go deeper. They run simple scenarios that cover the most critical functions of your application. For an e-commerce site, a smoke test might open the homepage, search for a product, and add an item to the cart. For a database, it might check that the main tables are accessible and that basic queries run. For infrastructure, it might verify that the load balancer responds and that SSL certificates are still valid.
The key word here is "simple." Smoke tests are not full regression suites. They test the happy path of your core features. If the smoke test passes, you have reasonable confidence that the application is not fundamentally broken. If it fails, you know something is wrong before users do.
Here is a minimal bash smoke test that checks a critical API endpoint and exits with a non-zero code if the response is not a 200:
#!/bin/bash
set -euo pipefail
# Smoke test: verify the login endpoint returns 200.
# --max-time keeps a hung endpoint from stalling the pipeline, and
# "|| true" prevents set -e from aborting before we can report the code.
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 https://myapp.com/api/login || true)
RESPONSE=${RESPONSE:-000}  # treat a request that produced no status code as 000
if [ "$RESPONSE" -ne 200 ]; then
  echo "Smoke test failed: login endpoint returned $RESPONSE"
  exit 1
fi
echo "Smoke test passed: login endpoint returned 200"
Synthetic Monitoring: Does It Meet Performance Standards?
Synthetic monitoring simulates real user behavior on a schedule. Unlike health checks and smoke tests, which run once after deployment, synthetic monitoring runs continuously. But right after a deployment, your pipeline can trigger an extra synthetic run to verify that the new version still meets your performance standards.
For example, you might have a synthetic check that measures response time for a critical API endpoint. If the response time jumps above 500ms after deployment, the pipeline should flag it even if the endpoint returns correct data. Synthetic monitoring catches the kind of degradation that health checks and smoke tests miss.
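Here is a minimal sketch of such a check in bash, assuming a hypothetical /api/search endpoint and a 500ms budget; curl's %{time_total} reports the elapsed time in seconds:
#!/bin/bash
set -euo pipefail
# Synthetic check: verify a key endpoint answers correctly AND fast enough.
# The URL and the 0.5-second budget are placeholders for your own standards.
URL="https://myapp.com/api/search"
METRICS=$(curl -s -o /dev/null -w "%{http_code} %{time_total}" --max-time 10 "$URL" || true)
read -r CODE TIME_TOTAL <<< "${METRICS:-000 0}"
if [ "$CODE" -ne 200 ]; then
  echo "Synthetic check failed: $URL returned HTTP $CODE"
  exit 1
fi
# %{time_total} is in seconds, so the floating-point comparison uses awk
if awk -v t="$TIME_TOTAL" 'BEGIN { exit !(t > 0.5) }'; then
  echo "Synthetic check failed: $URL took ${TIME_TOTAL}s (budget 0.5s)"
  exit 1
fi
echo "Synthetic check passed: $URL returned 200 in ${TIME_TOTAL}s"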
What Happens When Verification Fails
When post-deploy verification fails, your pipeline needs to act immediately. The most common response is a rollback: reverting the environment to the previous known-good version. But rollback is not the only option, and it is not always the best one.
If you are using canary deployment, the pipeline can stop routing traffic to the new version and redirect all users back to the old one. If you are using blue-green deployment, the pipeline can switch traffic back to the environment still running the old version. The specific action depends on your deployment strategy, but the principle is the same: stop the damage and restore service.
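In pipeline terms, the wiring can be a single conditional. Here is a blue-green sketch, assuming hypothetical smoke_test.sh and switch_traffic.sh helpers that wrap your actual tooling:
#!/bin/bash
set -euo pipefail
# Blue-green sketch: if verification fails, route traffic back to the
# environment still running the old version. switch_traffic.sh is a
# hypothetical helper that wraps your load balancer or router update.
if ! ./smoke_test.sh; then
  echo "Verification failed: switching traffic back to the previous environment"
  ./switch_traffic.sh previous
  exit 1
fi
echo "Verification passed: new environment keeps the traffic"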
Whatever action you take, the failure must be recorded as evidence. Your pipeline should store the logs from health checks, smoke tests, and synthetic monitoring, complete with timestamps and the artifact version being tested. This evidence becomes critical when you investigate what went wrong later. Without it, you are guessing.
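A lightweight way to capture that evidence, assuming a hypothetical log directory and check scripts named as in the earlier sketches:
#!/bin/bash
set -euo pipefail
# Record verification output as evidence, stamped with time and version.
# LOG_DIR and the check script names are placeholders for your own setup.
VERSION="${1:?usage: record_evidence.sh <artifact-version>}"
LOG_DIR="/var/log/deploy-verification"
mkdir -p "$LOG_DIR"
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
LOG_FILE="$LOG_DIR/verify-$VERSION-$TIMESTAMP.log"
{
  echo "timestamp=$TIMESTAMP version=$VERSION"
  ./health_check.sh && ./smoke_test.sh && ./synthetic_check.sh \
    && echo "result=pass" || echo "result=fail"
} 2>&1 | tee "$LOG_FILE"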
Why Teams Skip This Step
Post-deploy verification is often treated as optional or skipped entirely. The reasons vary. Some teams trust their pre-deploy tests too much. Others think health checks are enough. Many simply feel pressure to move fast and consider verification a slowdown.
But skipping verification creates a blind spot. Your pre-deploy tests run in a staging environment that never perfectly matches production. Configuration differences, data volume differences, and infrastructure differences mean that passing tests in staging does not guarantee passing tests in production. Post-deploy verification is your safety net for those gaps.
A Practical Checklist for Post-Deploy Verification
If you are setting up post-deploy verification for the first time, here is a minimal starting point:
- Add a /health endpoint that checks database connectivity, cache connectivity, and critical external dependencies
- Write three to five smoke tests covering your most critical user journeys
- Set up at least one synthetic check that measures response time for a key API or page
- Configure your pipeline to trigger rollback automatically if any verification step fails
- Store all verification results with timestamps and version numbers
Start with this minimal set and expand as you learn what breaks in your specific context.
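To tie the checklist together, a minimal runner can sequence the three layers and trigger rollback on the first failure. All the script names, including rollback.sh, are placeholders for your own tooling:
#!/bin/bash
set -euo pipefail
# Run the three verification layers in order; roll back on the first failure.
# Each script here stands in for the checks sketched earlier in this section.
for CHECK in ./health_check.sh ./smoke_test.sh ./synthetic_check.sh; do
  if ! "$CHECK"; then
    echo "Post-deploy verification failed at $CHECK, rolling back"
    ./rollback.sh
    exit 1
  fi
done
echo "All post-deploy verification layers passed"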
The Real Measure of a Successful Deployment
A deployment is not complete when the artifact is on the server. It is complete when you have evidence that the new version is running correctly and meeting your standards. Post-deploy verification is what gives you that evidence.
Without it, you are deploying blind. With it, you know exactly when your deployment truly succeeded and when it silently failed. That knowledge is the difference between reacting to user complaints and catching problems before anyone notices.