What a CI/CD Pipeline Must Be Able to Do (Beyond the Hype)

You push code, the pipeline turns green, and the deployment goes out. But when something breaks, you realize the pipeline was never designed to handle it. The database migration ran in the wrong order. The artifact from staging is different from what got deployed to production. And rolling back? Nobody planned for that.

This is the gap between having a pipeline and having a pipeline that actually works. Tools like Jenkins, GitHub Actions, GitLab CI, or ArgoCD all claim to solve delivery, but the tool itself is never the problem. The problem is missing capabilities. If your pipeline doesn't have the right building blocks, no tool can fix that.

Here are the six fundamental capabilities every CI/CD pipeline must have. Not nice-to-haves. Not features you add when you have time. These are the minimum requirements for getting changes from code to production safely.

Build: Turn Code Into Something Runnable

Every time a developer pushes a change, the pipeline must convert that code into something that can actually run. For compiled languages like Go, Rust, or Java, this means compiling the source into binaries. For interpreted languages like Python or JavaScript, build means checking syntax, bundling modules, resolving dependencies, and preparing the runtime environment.

Build is the first gate. If the code doesn't build, nothing else matters. The pipeline should fail fast here, not waste time running tests on code that can't even compile.

A common mistake is treating build as a simple step that always works. But build environments differ. A build that succeeds on a developer's laptop might fail in the pipeline because of missing system libraries, different tool versions, or environment variables. The pipeline's build step must be reproducible and isolated, so what works in CI works everywhere.
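One way to make that concrete is to verify the toolchain before compiling anything. The sketch below checks tool versions against pinned values and refuses to build on a mismatch; the tool names, versions, and `check_tool` helper are illustrative assumptions, not a specific CI feature.

```shell
#!/usr/bin/env bash
# Sketch: enforce a pinned toolchain before building, so CI and laptops agree.
# Tool names and versions here are illustrative assumptions.
set -euo pipefail

# Pinned versions the build expects (normally kept in a versioned file).
declare -A PINNED=( [go]="1.22.1" [docker]="25.0" )

check_tool() {
  local tool="$1" want="$2" have="$3"
  if [[ "$have" != "$want" ]]; then
    echo "FAIL: $tool is $have, pipeline pins $want" >&2
    return 1
  fi
  echo "OK: $tool $have matches pin"
}

# In a real pipeline, the third argument would come from `go version`,
# `docker --version`, and so on.
check_tool go     "${PINNED[go]}"     "1.22.1"
check_tool docker "${PINNED[docker]}" "25.0"
echo "toolchain verified, safe to build"
```

Running builds inside a pinned container image achieves the same goal with less scripting, but the principle is identical: the environment is declared, not assumed.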

Test: Catch Problems Before They Reach Users

After a successful build, the pipeline must run automated tests. This is not just about unit tests that run in milliseconds. A healthy pipeline runs multiple layers of testing:

  • Unit tests that verify individual behaviors
  • Integration tests that check how components work together
  • End-to-end tests that simulate real user scenarios

Each layer catches different kinds of problems. Unit tests catch logic errors. Integration tests catch mismatches between services. End-to-end tests catch workflow failures that span multiple systems.

The key is automation. Tests must run without human intervention. If someone has to manually trigger tests or interpret results, the pipeline loses its primary value: speed and consistency. Every test that runs automatically is one less thing a human has to remember to check.
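The layering above can be sketched as a gated runner: cheaper layers run first, and any failure stops the pipeline before the expensive layers start. The `run_*` functions below are stand-ins for real test commands such as `go test ./...`.

```shell
#!/usr/bin/env bash
# Sketch of a layered test runner: cheaper layers gate more expensive ones.
# The run_* functions are placeholders for real test commands.
set -euo pipefail

run_unit()        { echo "unit tests: pass"; }
run_integration() { echo "integration tests: pass"; }
run_e2e()         { echo "e2e tests: pass"; }

# Each layer only runs if the previous one succeeded; any failure
# stops the script immediately thanks to `set -e`.
run_unit
run_integration
run_e2e
echo "all test layers passed"
```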

Package: Create a Versioned, Deployable Artifact

Once code builds and tests pass, the pipeline must package the result into an artifact that can be deployed. The artifact format depends on what you're shipping:

  • A container image for microservices
  • A binary file for desktop applications
  • An APK or IPA for mobile apps
  • A zip archive for serverless functions
  • A Helm chart for Kubernetes deployments

Every artifact must have a unique version. Not just a timestamp or a build number, but a version that ties back to the exact commit, the pipeline run, and the test results. This traceability is what lets you know exactly what is running in production and what changed between versions.

The artifact must be stored in a central registry or repository that the deployment stage can access. If you rebuild the artifact at deploy time, you lose consistency. The artifact that passed tests must be the exact same artifact that gets deployed.
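A minimal sketch of that traceability: compute one immutable artifact reference from the commit and pipeline run, then reuse that exact reference at deploy time instead of rebuilding. The registry name and the CI variables are illustrative assumptions (they mirror GitLab's `CI_COMMIT_SHA` and `CI_PIPELINE_ID` naming).

```shell
#!/usr/bin/env bash
# Sketch: derive one immutable artifact reference and reuse it everywhere.
# Registry name and CI variables are illustrative assumptions.
set -euo pipefail

REGISTRY="registry.example.com/myapp"
CI_COMMIT_SHA="a1b2c3d4e5f6"   # provided by the CI system in a real run
CI_PIPELINE_ID="1042"          # ditto

# One reference, computed once; both package and deploy stages read it.
ARTIFACT="${REGISTRY}:${CI_COMMIT_SHA}-${CI_PIPELINE_ID}"
echo "$ARTIFACT"

# Package stage would run: docker build -t "$ARTIFACT" . && docker push "$ARTIFACT"
# Deploy stage would run:  kubectl set image deployment/myapp myapp="$ARTIFACT"
```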

Deploy: Place the Artifact in the Target Environment

Deployment is more than copying files. It's the process of placing a new version into an environment and making it serve traffic. For staging, deployment means installing the new version for testing. For production, it means replacing the running version without disrupting users.

Different deployment strategies exist for different risk levels:

  • Rolling update: replace instances one by one
  • Blue-green: switch traffic between two identical environments
  • Canary: send a small percentage of traffic to the new version first
  • Feature flags: deploy the code but keep it hidden behind a toggle

The pipeline must support the right strategy for each environment. Staging can use a simple replace. Production often needs gradual rollout with monitoring. The pipeline should automate the entire process, not just the file copy.
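As one example of gradual rollout, a canary loop can shift traffic in steps and bail out on the first failed health check. Both `set_traffic_weight` and `health_ok` below are hypothetical stand-ins for a load balancer API and a monitoring query.

```shell
#!/usr/bin/env bash
# Sketch of a canary rollout loop. set_traffic_weight and health_ok are
# hypothetical stand-ins for your load balancer API and monitoring check.
set -euo pipefail

set_traffic_weight() { echo "canary traffic: $1%"; }
health_ok()          { return 0; }   # real check: query error rate / latency

for weight in 5 25 50 100; do
  set_traffic_weight "$weight"
  if ! health_ok; then
    set_traffic_weight 0
    echo "canary failed, traffic restored to stable version"
    exit 1
  fi
done
echo "canary promoted to 100%"
```

The important property is that the abort path is automated too: a failed health check restores traffic without a human in the loop.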

Migrate: Handle Database Changes Safely

If your application uses a database, the pipeline must handle schema migrations. Adding a column, changing a data type, or creating a new table requires running migration scripts in a specific order. These migrations cannot be mixed randomly with application deployments.

The tricky part is ordering. Sometimes the migration must run before the new application code is deployed. For example, adding a nullable column that the new code will use. Other times, the migration must run after the new code is deployed. For example, removing an old column that the old code still references.

The pipeline must know this order and execute it correctly. A migration that runs at the wrong time can cause downtime, data loss, or both. This is one of the most overlooked capabilities in CI/CD pipelines, and one of the most dangerous to get wrong.
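The ordering problem is usually solved with sortable migration names plus a record of what has already been applied. This sketch uses numeric filename prefixes and a plain file as the applied-migrations record; the filenames are illustrative, and a real runner would execute SQL and keep its record in a schema_migrations table.

```shell
#!/usr/bin/env bash
# Sketch of an ordered migration runner: scripts carry sortable numeric
# prefixes and are applied exactly once, in order. Filenames are illustrative.
set -euo pipefail

workdir=$(mktemp -d)
touch "$workdir/001_create_users.sql" \
      "$workdir/002_add_email_column.sql" \
      "$workdir/003_create_orders.sql"
applied="$workdir/.applied"   # real runners keep this in a schema_migrations table
touch "$applied"

# Shell globs expand in lexical order, so the numeric prefixes define the order.
for m in "$workdir"/*.sql; do
  name=$(basename "$m")
  if ! grep -qx "$name" "$applied"; then
    echo "applying $name"          # real runner: execute the SQL here
    echo "$name" >> "$applied"
  fi
done
echo "migrations up to date"
```

Because the record persists, rerunning the loop applies nothing: migrations are idempotent from the pipeline's point of view.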

Rollback: Undo When Things Go Wrong

Not every deployment succeeds. When a new version causes errors, performance degradation, or data corruption, the pipeline must be able to revert to the previous version. Rollback is not just redeploying the old artifact. It involves:

  • Reverting the application to the previous version
  • Running reverse migrations on the database
  • Restoring infrastructure configuration
  • Verifying that the rollback actually worked

Rollback must be planned before the first deployment. If you design the pipeline without considering how to undo a change, you will find yourself scrambling to write rollback scripts while production is down. That is the worst time to figure it out.

For database migrations, rollback means having down migrations that reverse the up migrations. For infrastructure, it means keeping previous state files or using infrastructure-as-code tools that support state rollback. For applications, it means keeping the previous artifact available and having a deployment strategy that supports instant switching.
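The down-migration idea can be sketched as replaying the applied migrations' down scripts in reverse order, stopping at the target version. The migration names and the `rollback_to` helper are illustrative assumptions; a real runner would read the applied list from the database and execute the down SQL.

```shell
#!/usr/bin/env bash
# Sketch: every up migration ships with a matching down script, and rollback
# replays the downs newest-first. Names are illustrative.
set -euo pipefail

# Applied migrations, oldest first (a real runner reads these from the DB).
applied=(001_create_users 002_add_email_column 003_create_orders)

rollback_to() {
  local target="$1"   # undo everything applied after this migration
  for (( i=${#applied[@]}-1; i>=0; i-- )); do
    [[ "${applied[$i]}" == "$target" ]] && break
    echo "running ${applied[$i]}.down.sql"   # real runner: execute the down SQL
  done
}

rollback_to 001_create_users
```

Note the reversal: ups ran 002 then 003, so the downs must run 003 then 002, for the same dependency reasons the ups were ordered in the first place.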

Putting It All Together

These six capabilities (build, test, package, deploy, migrate, and rollback) form the foundation of any real CI/CD pipeline. Depending on what you're shipping, some capabilities may look different. Infrastructure pipelines might replace build and package with configuration validation and state preparation. Mobile pipelines might add code signing and app store submission. But the core functions remain the same.

Here is a minimal GitLab CI pipeline that maps each capability to a stage:

stages:
  - build
  - test
  - package
  - deploy
  - migrate
  - rollback

build:
  stage: build
  script:
    - go build -o app

test:
  stage: test
  script:
    - go test ./...

package:
  stage: package
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHA

migrate:
  stage: migrate
  script:
    - ./run_migrations up

rollback:
  stage: rollback
  script:
    - ./run_migrations down
    - kubectl rollout undo deployment/myapp
  when: manual

The following flowchart shows how these six capabilities connect in a typical pipeline:

flowchart TD
    A[Code Push] --> B[Build]
    B --> C[Test]
    C --> D[Package]
    D --> E[Deploy]
    E --> F[Migrate DB]
    F --> G{Health Check}
    G -- Pass --> H[Complete]
    G -- Fail --> I[Rollback]
    I --> J[Restore DB]
    J --> K[Redeploy Previous]
    K --> L[Verify]

Before you choose a CI/CD tool or redesign your pipeline, map out which of these capabilities you have and which are missing. A tool that promises everything but doesn't handle database migrations or rollback planning will leave you exposed.

Quick Capability Checklist

  • Build runs in an isolated, reproducible environment
  • Tests run automatically at multiple levels
  • Artifacts are versioned and stored in a central registry
  • Deployment supports the right strategy for each environment
  • Database migrations are ordered correctly relative to application deployments
  • Rollback is tested and works for application, database, and infrastructure

The Concrete Takeaway

A pipeline is not a collection of steps. It is a system that must handle the full lifecycle of a change: from code to running service, and back again if needed. If your pipeline cannot build, test, package, deploy, migrate, and roll back, it is incomplete. Start by filling in the missing capabilities, not by switching tools.