Why Your Container Tags Are Lying to You
You run docker pull myapp:latest and think you know exactly what you are getting. But you do not. That tag could point to a different image tomorrow, or it already points to a different image than it did an hour ago. The same tag, the same command, but completely different software running in production.
This is not a theoretical problem. Teams have spent hours debugging production issues only to discover that the image they thought was running was not the image actually running. The tag said one thing, but the content was something else entirely.
Where Images Live
Before we talk about tags and their problems, we need to talk about where container images actually live.
When you build a Docker image on your laptop, that image exists only on your machine. To run it on a server, in a staging environment, or in production, you need a place to store and share that image. That place is called a registry.
Think of a registry as a file server for container images, but with extra capabilities: it manages versions, checks integrity, and controls who can access what. When you run docker pull nginx:latest, you are pulling from Docker Hub, which is a public registry. Anyone can pull from it, and anyone can publish images to it under their own account, which is why you should not rely on unvetted public images for production.
Most companies run their own internal registry. Common options include Harbor, Nexus, GitLab Container Registry, or Amazon ECR. An internal registry gives you three things:
- Provenance: You know where every image came from. No random images from the internet.
- Speed: Transferring images over your internal network is much faster than pulling from the public internet.
- Control: You decide who can push images and who can pull them.
Without a registry, your deployment pipeline has nowhere to put the images it builds. With a registry, you have a single source of truth for every image your team produces.
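Getting an image from your laptop into a registry takes two steps: retag the image so its name includes the registry hostname, then push it. A minimal sketch, using a hypothetical internal registry at registry.internal.example.com (the docker commands are skipped gracefully if no daemon or local image is available):

```shell
# Hypothetical internal registry host - replace with your own.
REGISTRY="registry.internal.example.com"

# The full image reference embeds the registry hostname.
REF="$REGISTRY/myapp:1.0.0"
echo "pushing $REF"

# Retag the local image under the registry's name, then push it.
if command -v docker >/dev/null 2>&1; then
  docker tag myapp:1.0.0 "$REF" 2>/dev/null \
    && docker push "$REF" \
    || echo "no local myapp:1.0.0 image to push"
else
  echo "docker not available - skipping push"
fi
```

The retag step is easy to forget: a plain `myapp:1.0.0` has nowhere to go, because the registry hostname in the image name is what tells docker push where to send it.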
Tags Are Labels, Not Identifiers
Every image in a registry has one or more tags. A tag is a label you attach to an image to mark a specific version or variant. You see tags everywhere: myapp:1.0.0, myapp:staging, myapp:latest.
Tags are convenient. They give you a human-readable way to refer to images. Instead of remembering a long hash, you type myapp:1.0.0 and you get the image you want.
But tags have a fundamental flaw: they are mutable. You can change what image a tag points to at any time. Today, myapp:latest points to image hash abc123. Tomorrow, after a new build, it points to def456. The tag stays the same, but the image changes.
This mutability creates real problems. Consider a scenario where your staging environment runs myapp:staging. Your pipeline builds a new image, tags it as staging, and pushes it. Now staging is running the new code. But what if someone manually overwrites the staging tag with a different image? Or what if your pipeline accidentally tags the wrong build? You have no way to know which image is actually running in staging unless you check the digest.
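Checking the digest is a one-liner against the pulled image. A sketch, with the docker inspect output hard-coded as a hypothetical value:

```shell
# The RepoDigests entry for a pulled image looks like repo@sha256:<hash>.
# Hard-coded hypothetical value; in practice it comes from:
#   docker inspect --format='{{index .RepoDigests 0}}' myapp:staging
repo_digest="myapp@sha256:9f2a0c1e"

# Strip everything up to and including the '@' to isolate the digest.
digest="${repo_digest#*@}"
echo "staging is actually running $digest"
```

Comparing that value against the digest your pipeline recorded at build time tells you whether the staging tag still points where you think it does.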
The Immutable Tag Pattern
The solution is to use immutable tags - tags that never change after they are created. Once you assign a tag to an image, that tag stays with that image forever.
Immutable tags contain unique information that identifies a specific build. Common patterns include:
- Semantic version: myapp:1.2.3
- Git commit hash: myapp:a1b2c3d
- Build timestamp: myapp:20240515-1430
- Pipeline run ID: myapp:build-456
With immutable tags, you can trace exactly which code is running in any environment. If someone reports a bug in production, you look at the tag, find the commit hash, and know exactly what code produced that image. No guesswork, no "which version of latest is this?"
Some teams combine approaches. They use semantic versions for releases, commit hashes for development builds, and timestamps for automated deployments. The key is consistency: every build produces a unique tag that never gets reused.
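The patterns above are easy to generate in a pipeline. A sketch of deriving each kind of immutable tag in CI, where GIT_SHA and BUILD_ID would come from your CI system (hard-coded here for illustration):

```shell
# Values that a CI system would normally inject; hard-coded for the sketch.
GIT_SHA="a1b2c3d"
BUILD_ID="456"
STAMP="$(date -u +%Y%m%d-%H%M)"

RELEASE_TAG="myapp:1.2.3"             # cut manually for releases
DEV_TAG="myapp:$GIT_SHA"              # one per commit
NIGHTLY_TAG="myapp:$STAMP"            # automated deployments
PIPELINE_TAG="myapp:build-$BUILD_ID"  # one per pipeline run

echo "$DEV_TAG $PIPELINE_TAG"
```

Because each tag embeds something unique to the build, no two builds can ever produce the same tag, which is exactly the immutability guarantee you want.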
Digests: The Truth
Tags are convenient but unreliable. If you want absolute certainty about which image you are running, you need to use digests.
Every container image has a digest. A digest is a cryptographic hash of the image content. It looks something like sha256:abc123def456.... The digest is unique to that specific image. If the image content changes even by one byte, the digest changes completely.
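You can see the content-addressing idea with any bytes and a hash tool: identical input gives an identical digest, and flipping a single byte changes the digest beyond recognition. A quick illustration, assuming the standard sha256sum utility is available:

```shell
# Hash two strings that differ by exactly one byte (s vs S at the end).
a=$(printf 'image-layer-contents' | sha256sum | cut -d' ' -f1)
b=$(printf 'image-layer-contentS' | sha256sum | cut -d' ' -f1)

echo "a: sha256:$a"
echo "b: sha256:$b"

# The two digests share no resemblance despite the one-byte difference.
[ "$a" != "$b" ] && echo "digests differ completely"
```

Registries apply the same principle to image content, which is why a digest pins down one specific image and nothing else.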
Unlike tags, digests cannot be moved or reassigned. The digest sha256:abc123 will always refer to the exact same image, forever. You cannot point that digest to a different image no matter what you do.
When you pull an image by digest, you are guaranteed to get exactly the image you expect. There is no ambiguity, no chance of getting a different version.
# This is safe - you get exactly what you asked for
docker pull myapp@sha256:abc123def456
Using Tags and Digests Together in Pipelines
In practice, you use both tags and digests together. Tags make images human-readable and easy to reference. Digests provide the guarantee that you are running the right image.
Here is how a typical pipeline handles this:
- Build the image.
- Tag it with an immutable tag (commit hash or version).
- Push the image to the registry.
- Record the digest in your deployment metadata.
- When promoting from staging to production, verify that the digest matches exactly.
The verification step is critical. When you promote an image from staging to production, you should not just check the tag. You should check that the digest is identical. This prevents a situation where someone overwrites the staging tag with a different image, and that wrong image gets promoted to production.
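The verification step can be sketched as a small promotion gate. The digest values are hard-coded here; in a real pipeline, expected_digest would come from your build metadata and staging_digest from docker inspect against the staging tag:

```shell
# Digest recorded when the image was built (from deployment metadata).
expected_digest="sha256:abc123def456"

# Digest the staging tag currently resolves to, e.g. via:
#   docker inspect --format='{{index .RepoDigests 0}}' myapp:staging | cut -d'@' -f2
staging_digest="sha256:abc123def456"

if [ "$expected_digest" = "$staging_digest" ]; then
  echo "digests match - promoting to production"
else
  echo "digest mismatch - aborting promotion" >&2
  exit 1
fi
```

Failing the pipeline on a mismatch is the point: a retagged or wrongly tagged staging image should stop the promotion, not silently ride it into production.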
Here is a practical example of how to retrieve the digest after pushing an image and then use it to pin a deployment:
# Build and push the image with an immutable tag
docker build -t myapp:build-456 .
docker push myapp:build-456
# Retrieve the digest from the pushed image
digest=$(docker inspect --format='{{index .RepoDigests 0}}' myapp:build-456 | cut -d'@' -f2)
echo "Digest: $digest"
# Use the digest in a Kubernetes deployment manifest
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp@$digest
EOF
# Apply the deployment
kubectl apply -f deployment.yaml
This ensures every pod runs the exact same image, regardless of what happens to tags in the registry.
Many deployment tools support digest-based deployments. Kubernetes, for example, lets you specify an image by digest instead of tag:
spec:
  containers:
  - name: myapp
    image: myregistry.com/myapp@sha256:abc123def456
Even if someone later retags or deletes images in the registry, pods pinned by digest keep running the exact image you specified.
Practical Checklist
Before your next deployment, run through these checks:
- Every build produces a unique, immutable tag (commit hash, version, or timestamp)
- The latest tag is never used in production deployments
- Your pipeline records the digest of every image it builds
- Image promotion between environments verifies digest, not just tag
- Your registry has access controls to prevent unauthorized pushes
- Old images are cleaned up regularly to avoid storage bloat
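A couple of these checks can be automated as a guard at the top of a deploy script. A sketch, using a hypothetical image reference supplied by the pipeline:

```shell
# Hypothetical image reference that the pipeline hands to the deploy step.
IMAGE_REF="myregistry.com/myapp@sha256:abc123def456"

# Classify the reference: digest-pinned, mutable 'latest', or tag-only.
case "$IMAGE_REF" in
  *@sha256:*) verdict="pinned by digest - ok" ;;
  *:latest)   verdict="refusing mutable 'latest' tag" ;;
  *)          verdict="tag-only reference - consider pinning by digest" ;;
esac
echo "$verdict"
```

In a real pipeline the non-ok branches would exit non-zero so the deployment stops; here they just report, to keep the sketch self-contained.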
The Takeaway
Tags are for humans. Digests are for machines. Use tags to make your life easier, but use digests to make your deployments reliable. When something goes wrong in production, you want to know exactly what is running. A mutable tag will not tell you. A digest will, every single time.