What Counts as a Healthy Deployment for Apps, Databases, and Infrastructure
When a deployment finishes, how do you know it actually worked? Not the pipeline status. Not the green checkmark. Not the "deploy successful" message in Slack. The real question is: does the thing you deployed actually do what it's supposed to do?
The answer depends on what you deployed. An application, a database change, and an infrastructure update each need different kinds of verification. Using the same health check for all three is like using the same test to check whether a car engine starts, whether the oil is clean, and whether the tires have enough air. They're all important, but you check them differently.
Verifying an Application Deployment
Start with the most basic question: is the application process running and accepting connections? This is the equivalent of checking whether the server is powered on. You can hit a simple health endpoint and look for a 200 response. If you get that, the application is alive.
The following flowchart shows the distinct verification paths for each type of deployment, ending with a healthy or rollback decision.
But alive doesn't mean working. An application that accepts connections might fail as soon as it tries to process a request. It might not be able to read its configuration file. It might fail to connect to the database. It might have a broken cache connection. These problems won't show up in a simple health check.
That's why application verification needs to go deeper. Run smoke tests that call several endpoints in sequence. Simulate a synthetic transaction that mirrors what a real user would do: log in, search for something, submit a form, log out. If that synthetic flow succeeds, you have stronger evidence that the application is actually functional, not just running.
The key here is to test the parts that matter most to your users. If your application's core feature is searching by date, your verification should include a search query that uses a date range. If users upload files, your verification should include an upload flow. Don't test everything on every deployment, but do test the critical paths.
Here is a practical example of a health check and a synthetic transaction script you could run after deployment:
# Health check: verify the app is alive
if curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health | grep -q "200"; then
  echo "Health check passed"
else
  echo "Health check failed"
  exit 1
fi

#!/bin/bash
# Smoke test: simulate a core user flow (login, search, submit)
set -euo pipefail

BASE_URL="http://localhost:8080"

# Step 1: Login and capture the auth token
LOGIN_RESPONSE=$(curl -s -X POST "$BASE_URL/login" \
  -H "Content-Type: application/json" \
  -d '{"username":"testuser","password":"testpass"}')
TOKEN=$(echo "$LOGIN_RESPONSE" | jq -r '.token')
if [ -z "$TOKEN" ] || [ "$TOKEN" = "null" ]; then
  echo "Login failed: no token in response"
  exit 1
fi

# Step 2: Search with a date range (a critical path for this app)
SEARCH_RESPONSE=$(curl -s "$BASE_URL/search?q=test&date_from=2024-01-01" \
  -H "Authorization: Bearer $TOKEN")
if ! echo "$SEARCH_RESPONSE" | jq -e '.results | length > 0' > /dev/null; then
  echo "Search returned no results"
  exit 1
fi

# Step 3: Submit a form and verify the server assigned an id
SUBMIT_RESPONSE=$(curl -s -X POST "$BASE_URL/submit" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"title":"test","content":"test content"}')
if ! echo "$SUBMIT_RESPONSE" | jq -e '.id' > /dev/null; then
  echo "Form submission failed"
  exit 1
fi

echo "Smoke test passed"
Verifying a Database Change
Databases don't speak HTTP. You can't ping a database endpoint and get a 200 response. Database verification is about schema changes, index modifications, stored procedures, and reference data updates. The question is: did the migration run without errors, and did it break anything that was working before?
Start by running the migration script in a staging environment first. Check the output for errors. If the migration succeeded, run test queries that represent the application's normal access patterns. If you changed an index, check whether the queries that rely on that index still run at acceptable speed. If you modified a stored procedure, execute it with test data and verify the results.
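As a concrete sketch of "run the migration, then run a test query," the following uses SQLite as a self-contained stand-in; the database file, table, and column names are all hypothetical, and with PostgreSQL or MySQL you would invoke psql or mysql instead:

```shell
#!/bin/bash
set -euo pipefail
DB=/tmp/migration_demo.db
rm -f "$DB"

# Simulate the "before" state that the migration will modify
sqlite3 "$DB" "CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT);"

# Run the migration and check its exit status
if sqlite3 "$DB" "ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'new';"; then
  echo "Migration applied"
else
  echo "Migration failed"
  exit 1
fi

# Run a test query that mirrors a normal application access pattern
sqlite3 "$DB" "INSERT INTO orders (created_at) VALUES ('2024-01-01');"
RESULT=$(sqlite3 "$DB" "SELECT status FROM orders WHERE id = 1;")
if [ "$RESULT" = "new" ]; then
  echo "Test query returned expected default"
else
  echo "Unexpected result: $RESULT"
  exit 1
fi
```

The important pattern is the two distinct checks: the migration's own exit status, and then a query that exercises the schema the way the application would.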
Database verification also needs to confirm that rollback is possible. A migration script that can't be reversed is a liability. Test the rollback in your staging environment. Make sure it restores the previous state cleanly, without data loss or corruption. If you can't roll back confidently, you haven't fully verified the change.
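A rollback test can follow the same shape: apply the forward change, apply the rollback, then assert that the previous state is actually back. This sketch again uses SQLite as a stand-in, with a hypothetical index migration:

```shell
#!/bin/bash
set -euo pipefail
DB=/tmp/rollback_demo.db
rm -f "$DB"
sqlite3 "$DB" "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);"
sqlite3 "$DB" "INSERT INTO users (email) VALUES ('a@example.com');"

# Forward migration: add an index the new release depends on
sqlite3 "$DB" "CREATE INDEX idx_users_email ON users(email);"

# Rollback: must restore the previous state cleanly
sqlite3 "$DB" "DROP INDEX idx_users_email;"

# Verify the rollback: the index is gone and the data is intact
if sqlite3 "$DB" "SELECT name FROM sqlite_master WHERE type='index' AND name='idx_users_email';" | grep -q idx_users_email; then
  echo "Rollback incomplete: index still exists"
  exit 1
fi
ROWS=$(sqlite3 "$DB" "SELECT count(*) FROM users;")
if [ "$ROWS" != "1" ]; then
  echo "Rollback lost data: expected 1 row, found $ROWS"
  exit 1
fi
echo "Rollback verified: previous schema restored, data intact"
```

Note that the verification checks both halves of "restores the previous state cleanly": the schema change is undone, and no rows were lost along the way.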
One common mistake is treating database verification as a one-time check. Database changes can have subtle effects that only show up under production load. A new index might speed up one query but slow down another. A schema change might work fine with test data but cause locking issues with real data volumes. That's why database verification should include both functional checks and performance checks, even if the performance checks are simple baseline comparisons.
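One simple, deterministic performance check is to ask the query planner whether the query you care about actually uses the index you added, rather than relying on wall-clock timing. This sketch shows the idea with SQLite's EXPLAIN QUERY PLAN (the table and index names are hypothetical; PostgreSQL's EXPLAIN serves the same role):

```shell
#!/bin/bash
set -euo pipefail
DB=/tmp/plan_demo.db
rm -f "$DB"
sqlite3 "$DB" "CREATE TABLE events (id INTEGER PRIMARY KEY, ts TEXT);
               CREATE INDEX idx_events_ts ON events(ts);"

# Ask the planner whether the date-range query actually uses the new index
PLAN=$(sqlite3 "$DB" "EXPLAIN QUERY PLAN SELECT * FROM events WHERE ts >= '2024-01-01';")
if echo "$PLAN" | grep -q "idx_events_ts"; then
  echo "Query uses idx_events_ts"
else
  echo "Query does NOT use the index: $PLAN"
  exit 1
fi
```

A plan check like this won't replace load testing, but it catches the common failure where a query silently falls back to a full table scan after a schema change.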
Verifying Infrastructure Changes
Infrastructure covers servers, load balancers, firewalls, DNS records, TLS certificates, and network routing. When you change infrastructure, you're changing the environment that everything else depends on. A misconfigured firewall can silently break database connections. A wrong load balancer rule can send traffic to the wrong servers. An expired TLS certificate can make your application unreachable over HTTPS.
Infrastructure verification is about connectivity and configuration. After changing a firewall rule, verify that the application can still reach the database. After updating a load balancer, verify that traffic reaches the correct backend servers. After renewing a TLS certificate, verify that HTTPS connections work without security warnings.
These checks often need to run from outside the infrastructure itself. A probe that runs inside the network can't tell you whether external clients can actually connect. Use scripts or tools that simulate connections from the outside: probe the public endpoints, check certificate expiration dates, and verify that DNS resolution returns the correct IP addresses.
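The certificate expiration check, for instance, can be scripted with openssl. To keep this sketch self-contained it generates a throwaway 45-day certificate; against real infrastructure you would fetch the certificate your endpoint actually serves, and the 30-day threshold is an arbitrary choice:

```shell
#!/bin/bash
set -euo pipefail
# For a live endpoint you would fetch the served certificate instead, e.g.:
#   openssl s_client -connect example.com:443 -servername example.com </dev/null \
#     | openssl x509 -outform PEM > /tmp/demo_cert.pem
# Here we generate a throwaway 45-day certificate so the example runs anywhere.
openssl req -x509 -newkey rsa:2048 -nodes -days 45 -subj "/CN=demo" \
  -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem 2>/dev/null

# Fail if the certificate expires within the next 30 days (2592000 seconds)
if openssl x509 -checkend 2592000 -noout -in /tmp/demo_cert.pem >/dev/null; then
  echo "Certificate valid for at least 30 more days"
else
  echo "Certificate expires within 30 days"
  exit 1
fi
```

Running this on a schedule, not just at deploy time, is what catches the certificate that was fine when deployed and expires three weeks later.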
Infrastructure changes also tend to have cascading effects. Changing one DNS record can affect multiple services. Updating one firewall rule can block traffic that was previously allowed. That's why infrastructure verification should include a connectivity map: a list of all the connections that need to work, and a test for each one.
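A connectivity map can be as simple as a list of named host:port pairs and a loop that probes each one. In this sketch the connection names, hosts, and ports are placeholders (deliberately pointing at closed local ports so the failure path is visible); you would substitute your real dependencies:

```shell
#!/bin/bash
set -euo pipefail
# Hypothetical connectivity map: one "name host port" entry per required connection.
# These entries are placeholders; list your real service dependencies here.
MAP="app-to-db 127.0.0.1 59997
app-to-cache 127.0.0.1 59998"

FAILED=0
while read -r name host port; do
  # /dev/tcp is a bash feature: opening it attempts a TCP connection
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "OK:   $name ($host:$port)"
  else
    echo "FAIL: $name ($host:$port)"
    FAILED=1
  fi
done <<< "$MAP"
# In a real pipeline you would exit nonzero when FAILED=1 to block the deploy
echo "Connectivity check complete (failures: $FAILED)"
```

The value of the map is completeness: every connection the change could have affected gets its own named, testable entry, so a cascading break shows up as a specific failed line rather than a vague outage.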
The Common Principle
Even though the verification methods differ, the principle is the same: every deployment must leave evidence that the deployed object is functioning correctly. Evidence can be a log entry showing a successful migration, an HTTP response showing the application can serve requests, or a ping result showing a server is reachable. Without evidence, you don't know whether the deployment succeeded. You only know that the pipeline finished.
A Practical Verification Checklist
Use this as a starting point, not a final list. Adjust it to match your specific systems.
- Application: health endpoint returns 200, synthetic transaction for core user flow succeeds, critical external dependencies (database, cache, API) are reachable.
- Database: migration script runs without errors, test queries return correct results, query performance is within acceptable range, rollback script works in staging.
- Infrastructure: connectivity between all dependent components is verified, TLS certificates are valid and not expiring soon, DNS resolution returns correct records, firewall rules allow required traffic.
When Is a Deployment Actually Done?
A deployment is done when you have verified that the change works correctly in its environment. Not when the pipeline turns green. Not when the ticket is closed. Not when the team lead says "looks good." The deployment is done when you have evidence that the application, database, or infrastructure is doing what it's supposed to do.
That evidence is what separates a deployment that happened from a deployment that succeeded.