How Pipelines Access Secrets Without Storing Them
You have a pipeline that builds, tests, and deploys your application. Somewhere during that process, it needs a database password, an API key, or a certificate. The natural instinct is to put that secret in a pipeline variable or a configuration file, or even to hardcode it in a script. But that creates a problem: once a secret enters your pipeline, it can end up in places you never intended.
Secrets leak into build logs. They get baked into Docker images. They show up in artifact files. They persist in workspace caches. The moment a secret touches your pipeline code, you lose control over where it goes.
The solution is not to keep secrets out of the pipeline entirely; the pipeline needs them to do its job. The solution is to inject secrets so that they exist only in memory, only for the duration of the task, and never persist anywhere. There are three common approaches, and each comes with its own trade-offs.
Environment Variables: Simple but Leaky
The most common approach is to pull a secret from a vault or secret store and set it as an environment variable in the running process. When your pipeline runs npm test or dotnet run, the variable DB_PASSWORD is already available in process memory.
This approach is simple and fast. It works with almost every tool and framework. You do not need to change how your application reads configuration. Most languages and runtimes support environment variables natively.
For example, using the Vault CLI, you can fetch a secret and export it before running your build:
```shell
# Fetch the database password from Vault and export it as an environment variable
export DB_PASSWORD=$(vault kv get -field=password secret/db-prod)

# Run the build command that needs the secret
npm run build
```
The problem is that environment variables are easy to leak. Many applications print all environment variables during debugging. Logging frameworks often write environment variables to log files by default. Once a secret appears in a log, anyone with access to the logging system can read it. That includes developers, support staff, and potentially attackers if the logging system is compromised.
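The leak is easy to reproduce. In the sketch below, every child process inherits the exported secret, including a debug step that dumps the environment; running steps that do not need the secret under an explicit allowlist with env -i keeps it out of their environment entirely. The variable name and value are illustrative.

```shell
# The secret is exported for the build step...
export DB_PASSWORD='example-password'

# ...so ANY child process can read it, including a debug dump of the environment:
leaked=$(sh -c 'env' | grep -c '^DB_PASSWORD=' || true)

# Mitigation: run steps that do not need the secret under an explicit allowlist.
# env -i clears the environment and passes only the variables you name.
clean=$(env -i PATH="$PATH" sh -c 'env' | grep -c '^DB_PASSWORD=' || true)

echo "matches with export: $leaked, matches with allowlist: $clean"
```

Running this prints `matches with export: 1, matches with allowlist: 0` — the allowlisted child never sees the secret, so nothing it logs can contain it.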
There is another risk: build artifacts. When your pipeline creates a JAR file, a Docker image, or a compiled binary, environment variables can get captured if the build process reads them indiscriminately. A Docker build that passes secrets through ARG or ENV instructions embeds them in the image: ENV values are stored in the image configuration, and ARG values are recorded in the layer history, where docker history can expose them. Once the secret is in the image, it is there forever unless you rebuild and redeploy everything.
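One way to close the Docker gap is BuildKit's secret mount, which exposes the secret to a single RUN instruction as a temporary file that never lands in an image layer or in the history. The sketch below only writes an example Dockerfile; the file name, secret id, and base image are assumptions for illustration.

```shell
# Write a Dockerfile that consumes a secret via a BuildKit secret mount
# instead of ARG/ENV. The secret is visible only at /run/secrets/db_password
# during this one RUN step and is never stored in any image layer.
cat > Dockerfile.example <<'EOF'
# syntax=docker/dockerfile:1
FROM node:20
COPY . .
RUN --mount=type=secret,id=db_password \
    DB_PASSWORD=$(cat /run/secrets/db_password) npm run build
EOF

# Build command (shown, not run in this sketch): the secret is streamed from a
# local file to the builder, not baked into the build context or the image.
#   DOCKER_BUILDKIT=1 docker build --secret id=db_password,src=./db_password.txt .
```

The key property is that the mounted file exists only for the duration of that one RUN instruction, which is exactly the in-memory, task-scoped lifetime described above.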
Environment variables work well for short-lived pipelines where you control the logging and build process tightly. They are dangerous in pipelines that produce artifacts or have verbose logging.
Mount Files: More Control, More Cleanup
The second approach is to pull the secret from the vault and write it to a temporary file inside the container or workspace. The application reads that file during startup. After the task finishes, the file is deleted.
This approach gives you more control. You can set file permissions so only the application process can read the file. You can mount the file as read-only. You can delete it immediately after use. Many modern frameworks support reading configuration from files. Spring Boot reads from application.properties or YAML files. .NET reads from JSON configuration files. You can point these frameworks to a temporary file that contains only the secrets needed for that run.
The risk is that files can be left behind. If your pipeline does not clean up the workspace after finishing, the secret file remains on disk. In container environments, a mounted file can be read by other processes in the same container if permissions are not set correctly. If your pipeline uses caching, such as Docker layer caching, the secret file can get cached and appear in subsequent builds.
Mount files are safer than environment variables because you control the lifetime and permissions of the file. But they require discipline: you must clean up after every run, and you must ensure that caching mechanisms do not preserve the file.
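That discipline can be encoded directly in the task script: create the file with owner-only permissions, register the cleanup before the secret is ever written, and let the shell's exit trap remove it on success and failure alike. In this sketch the real Vault fetch is commented out and replaced with a stand-in value; the path and field name are assumptions.

```shell
# Create a private temp file for the secret (mktemp creates it with mode 600).
secret_file=$(mktemp)
chmod 600 "$secret_file"    # owner read/write only, re-applied defensively

# Register cleanup BEFORE the secret is written, so the file is removed
# even if the fetch or the build fails partway through.
trap 'rm -f "$secret_file"' EXIT

# Real fetch (assumed Vault CLI), replaced with a stand-in for this sketch:
# vault kv get -field=password secret/db-prod > "$secret_file"
printf 'example-password' > "$secret_file"

# The application receives only the PATH to the secret, never the value:
# DB_PASSWORD_FILE="$secret_file" npm run build
perms=$(stat -c '%a' "$secret_file")
echo "secret file permissions: $perms"
```

Registering the trap before writing the secret is the important ordering: a trap added after a failed write would never run, and the partially written file would survive the run.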
Direct API Calls: No Secret in the Pipeline
The third approach is to never give the secret to the pipeline at all. Instead, the application or script calls the vault API directly every time it needs a secret. The pipeline does not handle the secret. It does not set environment variables. It does not write files. The application itself reaches out to the vault and fetches what it needs.
For example, instead of passing a database password as an environment variable, the application calls GET /v1/secret/db-password when it needs to connect to the database. The vault authenticates the request, returns the secret, and the application uses it immediately.
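A minimal sketch of that call, assuming Vault's KV version 2 HTTP API: the application sends its token in the X-Vault-Token header and parses the password out of the JSON response, which KV v2 nests under .data.data. VAULT_ADDR, VAULT_TOKEN, the path, and the field name are assumptions; in a real deployment the token would come from a workload identity mechanism, not an environment variable.

```shell
# Fetch a secret from Vault's HTTP API at the moment it is needed.
# python3 stands in for jq here to keep the sketch dependency-free.
fetch_db_password() {
    curl -sf \
        --header "X-Vault-Token: $VAULT_TOKEN" \
        "$VAULT_ADDR/v1/secret/data/db-prod" |
    python3 -c 'import sys, json; print(json.load(sys.stdin)["data"]["data"]["password"])'
}

# Usage (not run in this sketch): the value lives only in this variable,
# for exactly as long as the connection code needs it.
#   DB_PASSWORD=$(fetch_db_password)
```

Because the function is called at the point of use, the secret never appears in the pipeline's environment, workspace, or logs — only in the application process that consumes it.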
This is the most secure approach. The secret never exists in pipeline memory. It is never written to a file. It never appears in logs. Even if someone gains access to the pipeline workspace, they cannot find the secret because it was never there.
The trade-off is availability. Your application now depends on the vault being reachable. If the vault goes down or becomes unreachable, your application cannot start. Every API call adds latency and leaves a trace in the vault's audit log. This approach works best for secrets that are used infrequently or for dynamic secrets that have short lifetimes, such as temporary database credentials that expire after a few minutes.
Direct API calls are ideal for production environments where security requirements are high and infrastructure teams can guarantee vault availability. They are overkill for development or testing pipelines where the risk of secret leakage is lower.
Choosing the Right Approach
There is no single best method. The choice depends on how often the secret is used, how strict your access control needs to be, and how reliable your vault infrastructure is.
For quick development pipelines where secrets change rarely, environment variables are fine as long as you control logging and artifact creation. For production deployments where secrets must not leak, direct API calls or mount files with strict cleanup are better. For containerized environments, mount files with read-only permissions and automatic cleanup after the container exits work well.
The important thing is to understand the weak points of each approach and design your pipeline to close those gaps. Do not just pick the easiest method. Pick the method that matches your risk tolerance and operational capability.
Practical Checklist
Before you decide how your pipeline will access secrets, check these points:
- Does your pipeline produce artifacts (Docker images, JAR files, compiled binaries) that could capture environment variables?
- Do your logging frameworks print environment variables by default?
- Can you set file permissions on mounted secrets in your container environment?
- Does your pipeline clean up workspace files after each run?
- Is your vault infrastructure reliable enough for direct API calls from applications?
- Do you need audit trails for every secret access?
The Real Challenge
Getting secrets into the pipeline without storing them is only half the problem. The harder part is making sure they do not leak into unexpected places: logs, artifacts, version control, or cached build layers. Each approach gives you a way to inject secrets, but none of them automatically prevents leakage. That requires ongoing discipline, automated checks, and a clear understanding of where your secrets can end up.
The goal is not to find a perfect method. The goal is to choose a method that matches your environment and then build safeguards around its weak points.