What Actually Happens When You Deploy: Placing Artifacts Into Environments

You have built your application, run tests, and stored a verified artifact. Now comes the moment that everyone notices: deployment. This is when your new version starts running somewhere real -- whether that is a staging server for internal testing or production where actual users depend on it.

Deployment is the act of placing an artifact into a target environment and making it live. But if you think deployment is just copying files to a server, you are in for a surprise. The way you deploy depends entirely on what you are deploying: application code, database changes, or infrastructure configuration. Each has its own mechanics, risks, and strategies.

Deployment Is Not One-Size-Fits-All

When you deploy an application, you are replacing a running version with a new one. This could mean sending a fresh container image to Kubernetes, swapping binary files on a server, or restarting a service. The goal is straightforward: stop the old version and start the new one with minimal disruption.

Database deployment is a different beast. You are not replacing files; you are running migration scripts that alter schemas or transform data. A database holds state -- user records, orders, configurations -- that must remain consistent before, during, and after the change. You cannot simply "overwrite" a database the way you replace a JAR file. A migration that adds a column might be safe, but one that renames a table could break every running query. And unlike application code, database changes are often irreversible or require careful rollback scripts.
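To make the contrast concrete, a schema change typically ships as a pair: a forward migration script and a matching rollback script. A minimal sketch using SQLite -- the `users` table and `nickname` column are illustrative, not from any particular system:

```python
import sqlite3

def migrate_up(conn: sqlite3.Connection) -> None:
    """Forward migration: adding a nullable column is usually safe."""
    conn.execute("ALTER TABLE users ADD COLUMN nickname TEXT")

def migrate_down(conn: sqlite3.Connection) -> None:
    """Rollback migration: undo the schema change (needs SQLite 3.35+)."""
    conn.execute("ALTER TABLE users DROP COLUMN nickname")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
migrate_up(conn)
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name', 'nickname']
```

Note that the rollback is a separate script you must write and test in advance; unlike redeploying an old binary, there is no generic "undo" for a schema change.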

Infrastructure deployment adds another layer. Here you are applying configuration to cloud providers through provisioning tools like Terraform, Pulumi, or Ansible. The deployment creates, modifies, or destroys resources: virtual machines, load balancers, databases, networking rules. A mistake in infrastructure deployment can delete a production database or expose sensitive data to the internet. The stakes are high, and the feedback loop is slower than application deployment.
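The core mechanic of tools in this space is diffing desired state against actual state and planning create/modify/destroy actions. A simplified, hypothetical sketch of that reconciliation step (resource names and shapes are assumptions):

```python
# Hypothetical sketch: diff desired infrastructure state against actual state
# and plan the actions needed to converge, as provisioning tools do internally.
def plan(desired: dict, actual: dict) -> dict:
    """Return the create/destroy/modify actions that move `actual` toward `desired`."""
    return {
        "create": sorted(desired.keys() - actual.keys()),
        "destroy": sorted(actual.keys() - desired.keys()),
        "modify": sorted(k for k in desired.keys() & actual.keys()
                         if desired[k] != actual[k]),
    }

desired = {"vm-web": {"size": "large"}, "lb-main": {"port": 443}}
actual = {"vm-web": {"size": "small"}, "db-old": {"size": "medium"}}
actions = plan(desired, actual)
print(actions)
# {'create': ['lb-main'], 'destroy': ['db-old'], 'modify': ['vm-web']}
```

The `destroy` list is where the danger lives: a resource missing from the desired state gets deleted, which is exactly how a typo can take out a production database.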

The following flowchart contrasts the three deployment flows side by side:

flowchart TD
  subgraph App["Application Deployment"]
    A1["Replace instances"] --> A2["Rolling / Blue-Green / Canary"]
    A2 --> A3["Minimal disruption"]
  end
  subgraph DB["Database Deployment"]
    D1["Run migration scripts"] --> D2["Alter schema / transform data"]
    D2 --> D3["All-or-nothing; rollback needed"]
  end
  subgraph Infra["Infrastructure Deployment"]
    I1["Apply config to cloud"] --> I2["Create / modify / destroy resources"]
    I2 --> I3["Slow feedback; high stakes"]
  end

Different Strategies for Different Artifacts

Because each artifact type behaves differently, the deployment strategy that works for one may not work for another.

For applications, you have several well-known strategies:

  • Rolling update: Replace instances one by one. The old version gradually gives way to the new one. No downtime, but the old and new versions run side by side briefly.
  • Blue-green: Spin up a complete new environment (green) alongside the current one (blue). Once the green environment is ready and verified, switch all traffic to it. Instant cutover, but double the infrastructure cost during the switch.
  • Canary: Send the new version to a small subset of users first. Monitor for errors or performance degradation. If things look good, gradually increase the traffic. If not, roll back the canary without affecting most users.

For example, a Kubernetes Deployment manifest for a rolling update might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:v2.1.0
        ports:
        - containerPort: 8080

These strategies work because application instances are stateless or can be drained gracefully. You can run multiple versions simultaneously without corrupting data.
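The canary decision described above can be sketched as a small control function: widen the rollout while the error rate stays acceptable, and cut traffic to zero otherwise. The threshold and step size are illustrative assumptions, not values from the original text:

```python
# Illustrative canary rollout decision: raise the canary's share of traffic
# step by step, but only while its observed error rate stays under a threshold.
def next_canary_weight(current_weight: int, error_rate: float,
                       threshold: float = 0.01, step: int = 10) -> int:
    """Return the canary's new traffic percentage (0 means roll back)."""
    if error_rate > threshold:
        return 0  # too many errors: stop sending traffic to the canary
    return min(100, current_weight + step)  # healthy: widen the rollout

assert next_canary_weight(10, error_rate=0.001) == 20
assert next_canary_weight(90, error_rate=0.0) == 100
assert next_canary_weight(50, error_rate=0.05) == 0  # roll back
```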

Databases do not have that luxury. You cannot run two versions of a schema at the same time and expect consistent behavior. A rolling update for a database migration is rarely possible because the schema change is global -- every query sees the same structure. Canary deployments for databases are equally tricky. You could run the migration on a replica first, but the moment you promote it to primary, all users are affected. Database changes are typically all-or-nothing: you apply the migration, verify it, and if something goes wrong, you run a rollback migration.
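The all-or-nothing flow for a migration -- apply, verify, roll back on failure -- can be sketched like this. This is a simplified illustration using SQLite; the `orders` table and the verification check are assumptions:

```python
import sqlite3

def deploy_migration(conn, up, down, verify) -> bool:
    """Apply a migration, verify it, and run the rollback script on failure."""
    up(conn)
    if verify(conn):
        return True   # migration is live
    down(conn)        # all-or-nothing: undo the schema change
    return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")

up = lambda c: c.execute("ALTER TABLE orders ADD COLUMN status TEXT")
down = lambda c: c.execute("ALTER TABLE orders DROP COLUMN status")
has_status = lambda c: any(row[1] == "status"
                           for row in c.execute("PRAGMA table_info(orders)"))

ok = deploy_migration(conn, up, down, verify=has_status)
print(ok)  # True: the column exists after the forward migration
```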

Infrastructure deployments sit somewhere in between. You can use blue-green for infrastructure by provisioning a parallel set of resources and switching DNS or load balancer targets. But infrastructure changes often have dependencies: you cannot create a new database instance without also updating the application configuration that points to it. And infrastructure changes can be slow -- provisioning a new server cluster might take minutes, not seconds.

Two Non-Negotiable Principles

Regardless of what you are deploying or which strategy you choose, two principles apply to every deployment.

First: deployments must be repeatable. If you run the same pipeline twice with the same artifact, you should get the same result. This means you must deploy the artifact you already verified, not rebuild from source at deploy time. Rebuilding introduces uncertainty: maybe the build server had a different library version, maybe the network was slow, maybe the compiler optimized differently. Use the exact same binary, container image, or package that passed your tests.
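"Deploy the artifact you already verified" can be enforced mechanically by comparing content digests before deploying. A minimal sketch (the artifact bytes here stand in for a real binary or image):

```python
import hashlib

# Sketch: confirm the artifact about to be deployed is byte-for-byte the one
# that passed tests, by comparing SHA-256 digests (no rebuild at deploy time).
def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

tested_artifact = b"binary contents that passed CI"
verified_digest = sha256_of(tested_artifact)

def safe_to_deploy(artifact: bytes, expected_digest: str) -> bool:
    """Refuse to deploy anything that does not match the verified digest."""
    return sha256_of(artifact) == expected_digest

assert safe_to_deploy(tested_artifact, verified_digest)
assert not safe_to_deploy(b"a rebuilt, slightly different binary", verified_digest)
```

Container registries apply the same idea: pinning an image by digest rather than by tag guarantees the bytes you tested are the bytes you run.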

Repeatability also requires that your target environment is in a known state. If you cannot guarantee what is already running, you cannot predict what will happen when you deploy. Infrastructure-as-code helps here: it defines the desired state of your environment, so your deployment knows what to expect.

Second: every deployment must be recorded. Log what was deployed, which version, which commit, which environment, the exact timestamp, and who or what triggered it. This is not paperwork for compliance auditors. It is your first line of defense when something goes wrong. When users start reporting errors after a deployment, the deployment log tells you exactly what changed. Without it, you are guessing. Was it the code change? The config update? The database migration? The infrastructure change? A good deployment log narrows the search immediately.
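A deployment record needs no special tooling to start with; a structured log line carrying the fields above is enough. A minimal sketch (field names are illustrative):

```python
import json
from datetime import datetime, timezone

# Sketch of a minimal deployment record with the fields named in the text:
# what was deployed, which version and commit, where, when, and who triggered it.
def record_deployment(artifact: str, version: str, commit: str,
                      environment: str, triggered_by: str) -> str:
    entry = {
        "artifact": artifact,
        "version": version,
        "commit": commit,
        "environment": environment,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "triggered_by": triggered_by,
    }
    return json.dumps(entry)  # in practice, append this to a durable log store

line = record_deployment("my-app", "v2.1.0", "9f3c2ab",
                         "production", "pipeline")
print(json.loads(line)["version"])  # v2.1.0
```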

Deployment Is Not Done When the Artifact Is Placed

Here is a common mistake: the pipeline marks deployment as successful the moment the artifact lands in the environment. But placing the artifact is only half the job. You need to verify that the new version is actually running correctly.

Verification after deployment is a separate stage. It checks that the service responds to health checks, that the database migration completed without errors, that the infrastructure resources are in the desired state. Some teams run smoke tests -- a quick set of critical user journeys -- to confirm the deployment did not break anything obvious.
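A verification stage is usually a polling loop: probe the health check a bounded number of times, allowing the new version a moment to warm up, then succeed or fail decisively. A sketch with the probe injected as a function, so the same loop works for HTTP probes, database checks, or smoke tests:

```python
import time

# Sketch of a post-deployment verification loop: poll an injected health
# check a limited number of times before declaring success or failure.
def verify_deployment(check, attempts: int = 5, delay: float = 0.01) -> bool:
    for _ in range(attempts):
        if check():
            return True      # service is healthy: deployment is complete
        time.sleep(delay)    # give the new version time to warm up
    return False             # signal the pipeline to roll back or alert

# Simulated service that becomes healthy on the third probe.
responses = iter([False, False, True])
assert verify_deployment(lambda: next(responses)) is True
assert verify_deployment(lambda: False, attempts=2) is False
```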

Until verification passes, the deployment is not complete. If verification fails, the pipeline should trigger a rollback or alert the team immediately. Waiting for someone to notice a problem hours later defeats the purpose of automation.

Practical Checklist Before Your Next Deployment

Before you press that deploy button or let your pipeline run, run through this short checklist:

  • Is the artifact the same one that passed all tests? (No rebuild at deploy time.)
  • Is the target environment in a known state? (No manual changes that might conflict.)
  • Is the deployment strategy appropriate for the artifact type? (Rolling for apps, migration for databases, provision for infrastructure.)
  • Is there a rollback plan? (Can you revert the change quickly? For databases, do you have a rollback migration script ready?)
  • Will the deployment be logged automatically? (Version, commit, timestamp, trigger.)
  • Is there a verification step after deployment? (Health checks, smoke tests, or monitoring alerts.)
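The checklist above can itself become a pipeline gate rather than a mental exercise. A trivial sketch, with check names mirroring the items (how each check result is produced is left out):

```python
# Sketch: the pre-deployment checklist expressed as an automated gate.
# The boolean results would come from earlier pipeline stages.
def release_gate(checks: dict) -> bool:
    """Allow the deployment to proceed only if every check passed."""
    return all(checks.values())

checks = {
    "artifact_matches_tested_digest": True,
    "environment_state_known": True,
    "strategy_matches_artifact_type": True,
    "rollback_plan_ready": True,
    "deployment_will_be_logged": True,
    "verification_step_configured": True,
}
print(release_gate(checks))  # True
```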

The Takeaway

Deployment is the moment your work meets reality. It is not a file copy operation. It is a carefully planned handoff between your pipeline and your running system. The artifact type determines the strategy, the environment state determines the risk, and the deployment log determines how fast you can recover from failure. Treat deployment with the same rigor as writing code -- because a bad deployment can undo weeks of good work in seconds.