Smoke Tests and Synthetic Transactions: Verifying Your Deployment Actually Works
You just watched your pipeline turn green. Unit tests passed, integration tests passed, even the end-to-end tests came back clean. You hit deploy, the progress bar fills up, and the pipeline reports success. But here is the uncomfortable truth: none of those tests ran on the actual production environment. They all ran somewhere else - a CI runner, a staging server, a local machine. The moment your application lands on real infrastructure, everything could still break.
This is why smoke tests and synthetic transactions exist. They are not replacements for your existing test suite. They are a separate layer that answers a different question: not "is the code correct?" but "did the deployment actually work?"
The Simplest Check: Smoke Tests
A smoke test is the most basic thing you can do after a deployment. It answers one question: is the application alive and responding?
Think of it like turning on a new server rack in a data center. Before you run any serious workload, you check that the power light is on, the fans are spinning, and you can ping the management interface. You do not run a full benchmark. You just confirm the thing is powered up.
For a web application, a smoke test might be a single HTTP request to the health check endpoint. For a backend service, it could be checking that the process is running and listening on the expected port. For a mobile app, it might mean verifying that the app launches without crashing on a real device.
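For the backend-service case, the check can be sketched as two small helpers. Everything here is an assumption for illustration: the `myapp` process name, port 8080, and the `/dev/tcp` redirection (a bash feature, not POSIX sh):

```shell
#!/bin/bash
# Sketch of a process-and-port smoke check. SERVICE and PORT are
# hypothetical defaults; adjust to your deployment.

# True if a process with exactly this name is running
process_running() {
  pgrep -x "$1" > /dev/null
}

# True if something is accepting TCP connections on this local port
# (uses bash's /dev/tcp pseudo-device in a subshell)
port_listening() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if process_running "${SERVICE:-myapp}" && port_listening "${PORT:-8080}"; then
  echo "Service smoke test passed"
else
  echo "Service smoke test FAILED" >&2
fi
```

Both helpers return plain exit codes, so they compose naturally with `&&` and `||` in a pipeline step.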
The key characteristics of a good smoke test are speed and simplicity. It should complete in seconds, not minutes. It should not depend on complex business logic or external systems that might be temporarily unavailable. If your smoke test needs a database connection, a cache server, and three third-party APIs, it is no longer a smoke test - it is something else entirely.
Here is what a practical smoke test and a synthetic transaction look like in bash:
# Smoke test: quick health check with a hard timeout
curl -f -s --max-time 10 -o /dev/null http://myapp.com/health || exit 1
echo "Smoke test passed"

#!/bin/bash
# Synthetic transaction: simulate a user login and search
set -e
BASE="http://myapp.com"
COOKIES=$(mktemp)
trap 'rm -f "$COOKIES"' EXIT
# Step 1: Load login page and expect exactly a 200
[ "$(curl -s -o /dev/null -w "%{http_code}" "$BASE/login")" = "200" ]
# Step 2: Submit login form, save the session cookie, expect a redirect
[ "$(curl -s -c "$COOKIES" \
  -d "username=testuser&password=testpass" \
  -o /dev/null -w "%{http_code}" "$BASE/login")" = "302" ]
# Step 3: Search for a product using the saved session cookie
curl -s -b "$COOKIES" "$BASE/search?q=laptop" | grep -q "results"
echo "Synthetic transaction passed"
Place your smoke test as the first step after deployment, before any traffic is routed to the new version. If the smoke test fails, the pipeline should stop immediately. Do not proceed to deeper checks. Do not route user traffic. The deployment failed at the most basic level, and nothing else matters until that is resolved.
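In practice the "stop immediately" step is just a nonzero exit code from a gate script. A sketch, assuming a `/health` endpoint on a hypothetical base URL; a couple of retries are included only because a freshly deployed instance may need a moment to start listening:

```shell
#!/bin/sh
# Hypothetical post-deploy gate: retry the smoke test briefly, and exit
# nonzero (halting the pipeline) if it never passes.

smoke_check() {
  # Assumed health endpoint; replace with your real URL
  curl -f -s --max-time 10 -o /dev/null "$1/health"
}

gate() {
  base=$1; attempts=${2:-3}; delay=${3:-5}; i=1
  while [ "$i" -le "$attempts" ]; do
    if smoke_check "$base"; then
      echo "Smoke test passed on attempt $i"
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "Smoke test failed after $attempts attempts; halting pipeline" >&2
  return 1
}

# First step after deployment, before any traffic routing:
# gate "http://myapp.com" || exit 1
```

Because `gate` returns a plain exit code, any CI system will treat its failure as a failed step and stop the pipeline.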
Going Deeper: Synthetic Transactions
A synthetic transaction takes verification one step further. Instead of just checking that the application is alive, it simulates real user behavior. It walks through the critical paths of your application the way a real user would, but it runs automatically, triggered by the pipeline.
Imagine an e-commerce application. A synthetic transaction might:
- Open the homepage
- Search for a product
- Add the product to the cart
- Proceed to checkout
- Complete the purchase flow
Each step checks that the response is correct, that pages load properly, that the cart actually contains the right item, and that the order confirmation appears. If any step fails, the synthetic transaction fails.
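The "cart actually contains the right item" check from the list above can be a one-line assertion on the response body. A minimal sketch, assuming a JSON cart payload with a `product_id` field (the endpoint, field name, and cookie jar path are all hypothetical):

```shell
# Succeeds only if the cart JSON mentions the expected product id.
# A text match like this is deliberately loose; if jq is available,
# a structural query would be more robust.
cart_contains() {
  # $1: cart response body, $2: expected product id
  printf '%s' "$1" | grep -q "\"product_id\"[[:space:]]*:[[:space:]]*\"$2\""
}

# Usage inside a synthetic transaction (assumed URL and cookie jar):
# cart_json=$(curl -s -b /tmp/cookies.txt "$BASE/cart")
# cart_contains "$cart_json" "sku-12345" || exit 1
```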
Synthetic transactions are more expensive than smoke tests. They take longer to run - often several minutes instead of seconds. They depend on more parts of the system working correctly. But they catch problems that smoke tests cannot.
A common scenario: the smoke test passes because the health check endpoint returns 200 OK. But the login page returns a 500 error because a configuration file was not copied correctly during deployment. The smoke test never checked the login page. The synthetic transaction would catch it immediately.
Where They Fit in Your Pipeline
Smoke tests and synthetic transactions serve different purposes and should appear at different stages of your deployment pipeline.
The smoke test goes first, immediately after the deployment completes. It is your gatekeeper. If the application is not even running, there is no point in running anything else. The pipeline should fail fast and stop.
The synthetic transaction runs after the smoke test passes. It is your deeper verification before user traffic arrives. If the synthetic transaction fails, you still have time to stop the deployment before users encounter the problem.
In practice, you do not need many synthetic transactions. Two to five scenarios covering your most critical user journeys are usually enough. The goal is not exhaustive testing - you already did that before deployment. The goal is to confirm that the deployment itself did not break anything.
What These Tests Catch That Others Miss
Pre-deployment tests run in controlled environments. They use test databases, mock services, and configuration files that may differ from production. Even with staging environments that mirror production closely, differences exist.
Smoke tests and synthetic transactions run in the actual production environment. They catch problems that only appear there:
- Environment configuration differences between staging and production
- Missing dependencies or incorrect versions in production
- Permission issues that only exist in the production account or cluster
- Network policies that block legitimate traffic
- SSL certificate problems
- Load balancer misconfigurations
- Database connection pool exhaustion under real conditions
These are not theoretical problems. They happen regularly in teams of every size. A configuration value that works perfectly in staging but has a typo in production. A secret that was rotated in staging but not in production. A firewall rule that blocks the new service's port. These issues will never be caught by pre-deployment tests because those tests do not run against the real environment.
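The SSL case in particular is cheap to check before cutover. A sketch: `days_left` is pure date arithmetic (it assumes GNU `date`'s `-d` flag), and `check_cert` pulls the live certificate's expiry with the standard `openssl` CLI; the 14-day threshold is an arbitrary choice:

```shell
# Flag certificates that are missing, broken, or about to expire.
days_left() {
  # $1: certificate notAfter date, e.g. "Jun  1 12:00:00 2030 GMT"
  expiry=$(date -d "$1" +%s)
  now=$(date +%s)
  echo $(( (expiry - now) / 86400 ))
}

check_cert() {
  host=$1
  not_after=$(echo | openssl s_client -servername "$host" -connect "$host:443" 2>/dev/null \
    | openssl x509 -noout -enddate | cut -d= -f2)
  [ "$(days_left "$not_after")" -ge 14 ]  # fail if expiring within two weeks
}

# check_cert myapp.com || echo "certificate check FAILED" >&2
```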
A Practical Checklist
When implementing smoke tests and synthetic transactions for your pipeline, keep these points in mind:
- Keep smoke tests under 10 seconds. If they take longer, simplify them.
- Place smoke tests immediately after deployment, before traffic routing.
- Fail the pipeline on smoke test failure. Do not proceed.
- Run synthetic transactions after smoke tests pass, before full traffic cutover.
- Limit synthetic transactions to 2-5 critical user journeys.
- Do not use synthetic transactions for exhaustive testing. That is what your pre-deployment tests are for.
- Monitor synthetic transaction results over time. A pattern of intermittent failures may indicate an underlying issue.
- Make sure synthetic transactions run against the actual production environment, not a staging replica.
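The "monitor results over time" point can start as simply as an append-only log plus a failure-rate check. A minimal sketch; the log path and the 20-run window are arbitrary choices:

```shell
# Record each synthetic-transaction run and compute the recent failure
# rate, so intermittent failures surface as a trend rather than noise.
LOG="${LOG:-/tmp/synthetic-results.log}"

record_result() {
  # $1: "pass" or "fail"
  echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) $1" >> "$LOG"
}

recent_failure_rate() {
  # Failure percentage over the last N runs (default 20)
  n=${1:-20}
  total=$(tail -n "$n" "$LOG" | wc -l)
  if [ "$total" -eq 0 ]; then echo 0; return; fi
  fails=$(tail -n "$n" "$LOG" | grep -c ' fail$')
  echo $(( fails * 100 / total ))
}

# After each pipeline run:
# record_result pass   # or: record_result fail
# [ "$(recent_failure_rate)" -lt 20 ] || echo "intermittent failures?" >&2
```

A single run that fails tells you about this deployment; a 20 percent failure rate across the last twenty runs tells you about your system.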
The Concrete Takeaway
Your pre-deployment tests verify that your code is correct. Your smoke tests and synthetic transactions verify that your deployment succeeded. They are not the same thing, and one cannot replace the other. A green pipeline means your code passed all checks in a controlled environment. A successful smoke test means your application is actually running in production. A passing synthetic transaction means your users can complete their most important tasks. Add both to your deployment pipeline, and you will catch the problems that only appear when code meets reality.