When Running Terraform from Your Laptop Stops Being Enough
You have been managing infrastructure with Terraform from your terminal. It works fine when you are the only person making changes. You run terraform plan, check the output, run terraform apply, and move on. The state file sits on your machine, and you know exactly what you changed and when.
Then a colleague joins the project. They clone the repository, run terraform init, and get a different plan than what you saw yesterday. The state file on their laptop is outdated. Someone applies a change without telling anyone. A week later, nobody remembers who modified the security group rules or why.
This is the moment when the laptop-based workflow breaks. The problems are not about Terraform itself. They are about coordination, visibility, and accountability. When infrastructure is shared, running commands from individual machines creates confusion, risk, and silent drift.
Moving the Workflow into a Pipeline
Instead of running terraform plan and terraform apply from each person's laptop, you can move that workflow into a CI/CD pipeline. Every infrastructure change then goes through the same path, gets recorded, and can be reviewed by the team before it takes effect.
The pipeline approach turns infrastructure changes into something similar to application code changes. You write the configuration, open a pull request, get feedback, and only after approval does the change get applied. The difference is that Terraform adds a planning step that shows exactly what will happen to your cloud resources before anything runs.
The following example shows a CI pipeline configuration (written in GitLab-style syntax, though the structure translates to other CI systems) that implements the write-plan-apply workflow:
stages:
  - validate
  - plan
  - apply

validate:
  stage: validate
  script:
    - terraform init -backend=false   # install providers and modules; no state access needed
    - terraform fmt -check
    - terraform validate
  only:
    - merge_requests

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan                   # the saved plan is handed to the apply stage
  only:
    - merge_requests
    - main                            # also plan on main so the apply job has an artifact
  
apply:
  stage: apply
  script:
    - terraform init
    - terraform apply plan.tfplan
  when: manual                        # a human must trigger the apply
  only:
    - main
The Three Stages: Write, Plan, Apply
The most common way to integrate Terraform into a pipeline is to split the workflow into three stages that match the natural flow of making a change.
The three stages run in sequence: a change is written and checked, a plan is generated and reviewed on the pull request, and the approved plan is applied after merge.
Write: Catch Problems Before Review
The write stage starts when someone modifies Terraform configuration files and opens a pull request. At this point, the pipeline can run basic checks automatically. terraform fmt -check verifies that the code follows standard formatting without rewriting any files. terraform validate checks that the configuration is syntactically correct and that all required arguments are present.
These checks are the infrastructure equivalent of running a linter on application code. They catch simple mistakes early, before a human reviewer spends time looking at the pull request. If the formatting is wrong or a required field is missing, the pipeline fails immediately, and the author fixes it before anyone else gets involved.
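The same checks can be run locally before a pull request is even opened. A minimal sketch using standard Terraform CLI flags:

# Run the same checks locally before pushing
terraform fmt -check -recursive    # exits non-zero if any file needs reformatting
terraform init -backend=false      # install providers and modules without touching state
terraform validate                 # catch syntax errors and missing required arguments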
Plan: Show What Will Change
Once the pull request is open, the pipeline runs terraform plan automatically and posts the output as a comment on the pull request. This is where the review becomes meaningful.
The plan output shows exactly what Terraform will do: which resources will be created, which will be modified, and which will be destroyed. A reviewer can see that a pull request changes the server instance type from small to medium, or adds a new firewall rule, or deletes an old database subnet group. If something looks wrong, the reviewer rejects the pull request before any change reaches production.
This stage is critical because it moves infrastructure review from "trust me, I ran the plan" to "here is the plan, check it yourself." The plan becomes part of the pull request conversation, and the decision to approve or reject is based on concrete output, not on faith.
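How the plan output reaches the pull request depends on the CI platform. As one illustration, the following sketch assumes GitLab: it renders the saved plan as text with terraform show and posts it through GitLab's merge request notes API. GITLAB_TOKEN is an assumed CI/CD variable you would define yourself; the CI_* variables are predefined by GitLab.

# Render the saved plan as plain text for reviewers
terraform show -no-color plan.tfplan > plan.txt

# Post the plan as a comment on the merge request
curl --request POST \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  --form "body=$(cat plan.txt)" \
  "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/merge_requests/${CI_MERGE_REQUEST_IID}/notes"

Very large plans can exceed comment size limits, so teams sometimes truncate the output or link to the job log instead.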
Apply: Execute Only After Approval
The apply stage runs only after the pull request is approved and merged into the main branch. The pipeline on the main branch then executes terraform apply using the plan that was already reviewed.
The safest way to do this is to save the plan output from the previous stage as a file and pass that file to terraform apply. This technique is called a saved plan. It guarantees that the apply runs the exact same changes that were reviewed, not a new plan that might differ because someone pushed another commit between review and apply.
Without a saved plan, the pipeline would run terraform plan again during the apply stage. If the configuration changed in the meantime, the new plan could be different from what was reviewed. That defeats the purpose of having a review in the first place.
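In command form, the saved-plan flow looks like this; plan.tfplan matches the artifact name used in the pipeline example above:

# Plan stage: record the proposed changes in a plan file
terraform plan -out=plan.tfplan

# Apply stage: execute exactly what was recorded, without re-planning.
# Terraform refuses to apply a stale plan if the state has changed since it was created.
terraform apply plan.tfplan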
Managing State in a Pipeline
State management becomes straightforward when Terraform runs in a pipeline. The state file must be stored in a shared location, such as a cloud storage bucket or a Terraform backend that the team configures once. Every time the pipeline runs plan or apply, it fetches the state from that shared location, not from a local copy that might be outdated.
The pipeline should also enable state locking. Terraform can lock the state file while a plan or apply is running, preventing two processes from modifying the state at the same time. Without locking, two pipelines could run apply simultaneously, causing corruption or unexpected results.
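As a concrete sketch, assuming AWS: the S3 backend stores state in a shared bucket and uses a DynamoDB table for locking. The bucket, key, region, and table names below are placeholders.

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"        # placeholder bucket name
    key            = "infrastructure/terraform.tfstate"
    region         = "eu-central-1"                   # placeholder region
    dynamodb_table = "terraform-locks"                # placeholder table; enables state locking
    encrypt        = true
  }
}

With this block in place, every pipeline run reads and writes the same state, and concurrent applies are blocked by the lock.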
What This Workflow Gives You
Once the write-plan-apply workflow runs in a pipeline, infrastructure changes follow the same discipline as application code changes. Every change goes through a pull request, gets reviewed by team members, gets tested with an automated plan, and only runs after approval. The entire history of changes is recorded in Git and in the pipeline logs.
You no longer wonder who changed a server or when it happened. You no longer worry about stale state files on someone's laptop. You no longer hear "it worked on my machine" about infrastructure.
Practical Checklist for Setting This Up
- Configure a remote backend for state storage before setting up the pipeline. The backend must be accessible to the pipeline runner.
- Set up state locking. Common backends support it: the S3 backend locks through a DynamoDB table, and the GCS backend supports locking natively.
- Add terraform fmt and terraform validate as early checks in the pull request pipeline.
- Run terraform plan on every pull request and post the output as a comment.
- Use saved plans. Store the plan file as a pipeline artifact and pass it to the apply stage.
- Restrict terraform apply to run only on the main branch after merge.
- Ensure the pipeline runner has the minimum required permissions to execute the plan and apply. Do not use admin credentials.
What Comes Next
After the pipeline handles the write-plan-apply workflow automatically, the next question is how to manage different environments. Staging and production rarely use the same configuration values. The pipeline needs to handle environment-specific variables, state files, and approval gates without duplicating the entire workflow for each environment.
But that is a problem for later. For now, the important step is to stop running Terraform from laptops and start treating infrastructure changes like code changes. The pipeline gives you a single source of truth for what changed, who approved it, and when it was applied. That alone eliminates most of the confusion that comes with shared infrastructure.