Integration Tests: Catching Problems When Components Talk to Each Other

You wrote a function that looks correct. The unit test passes. The logic is clean. Then you deploy it, and the application starts returning errors. The database column you assumed existed was renamed last week. The external API you call changed its response format. The service you depend on now expects a different header.

This is the gap that unit tests cannot fill. A function can be perfectly correct in isolation and still fail when it tries to talk to another component. Integration tests exist to catch exactly these kinds of problems.

What Integration Tests Actually Check

Integration tests verify that two or more components work together correctly. The components might be your application and a database, your service and an external API, or two internal services within the same system.

The bugs they catch are rarely about wrong logic. They are about mismatched assumptions:

  • Your code sends a date as a string, but the database column expects a timestamp.
  • Your service calls an API with a query parameter, but the API moved that parameter into the request body.
  • Your application assumes a field is always present, but the upstream service only includes it under certain conditions.

The following sequence diagram illustrates the first mismatch: the service sends a date as a string, the database expects a timestamp, and the insert fails.

    sequenceDiagram
        participant Service
        participant Database
        Service->>Database: INSERT INTO orders (date) VALUES ('2025-04-01')
        Database-->>Service: ERROR: column "date" is of type timestamp but expression is of type text
        Note over Service,Database: Mismatched assumption: string vs. timestamp

These are not bugs you can find by staring at code. They only appear when components actually exchange data.

The Fragility Trap

Integration tests have a reputation for being slow and brittle. This reputation is earned. The more real components you involve, the more likely your tests are to fail for reasons outside your code. A network timeout. A dependent service that is down. Test data that got corrupted by a previous run.

When this happens repeatedly, teams stop trusting the test results. They start skipping integration tests or ignoring failures. The tests become noise instead of signal.

The solution is not to avoid integration tests. The solution is to be selective about what you test with real dependencies.

Choosing What to Test with Real Dependencies

Not every dependency needs to be real in your integration tests. The rule of thumb is simple: test with a real dependency only for things that are hard to simulate or that frequently cause problems in production.

Databases are usually worth testing with a real instance. Query behavior, constraints, transactions, and locking are difficult to mock accurately. A mock might tell you your query is syntactically correct, but it will not tell you that your query causes a deadlock under concurrent access, or that your migration changed a column type that your code still treats as a string.
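
To make this concrete, here is a minimal sketch of a database integration test. It assumes pytest, SQLAlchemy, a psycopg2 driver, Docker, and the testcontainers package are available; the orders table and its columns are made up for illustration.

    import datetime

    import pytest
    import sqlalchemy
    from sqlalchemy.exc import IntegrityError
    from testcontainers.postgres import PostgresContainer


    @pytest.fixture(scope="session")
    def pg_engine():
        # Start a throwaway PostgreSQL instance for the test session.
        with PostgresContainer("postgres:16") as pg:
            engine = sqlalchemy.create_engine(pg.get_connection_url())
            with engine.begin() as conn:
                conn.execute(sqlalchemy.text(
                    "CREATE TABLE orders (id serial PRIMARY KEY, placed_at timestamp NOT NULL)"
                ))
            yield engine


    def test_insert_and_read_back_uses_real_column_types(pg_engine):
        # A real database hands back a datetime, not whatever a mock was told to return.
        with pg_engine.begin() as conn:
            conn.execute(
                sqlalchemy.text("INSERT INTO orders (placed_at) VALUES (:ts)"),
                {"ts": datetime.datetime(2025, 4, 1)},
            )
            row = conn.execute(sqlalchemy.text("SELECT placed_at FROM orders")).first()
        assert isinstance(row.placed_at, datetime.datetime)


    def test_not_null_constraint_is_enforced(pg_engine):
        # Constraints only bite against a real engine; a mock would accept the NULL.
        with pytest.raises(IntegrityError):
            with pg_engine.begin() as conn:
                conn.execute(sqlalchemy.text("INSERT INTO orders (placed_at) VALUES (NULL)"))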

External third-party APIs are generally not worth testing against real endpoints in your pipeline. Use test doubles that replay recorded, representative responses. The risk of flaky tests from network issues or API rate limits outweighs the benefit. Save the real integration for staging or production verification.
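
As a sketch of the test-double approach, the example below stubs an HTTP endpoint with the responses package and exercises the client code without touching the network. The URL, payload, and client function are hypothetical.

    import requests
    import responses


    def fetch_exchange_rate(currency: str) -> float:
        # The client code under test; the endpoint is a made-up example.
        resp = requests.get(
            "https://api.example.com/rates", params={"currency": currency}, timeout=5
        )
        resp.raise_for_status()
        return resp.json()["rate"]


    @responses.activate
    def test_fetch_exchange_rate_parses_recorded_response():
        # Replay a representative response instead of calling the real API.
        responses.add(
            responses.GET,
            "https://api.example.com/rates",
            json={"currency": "EUR", "rate": 1.08},
            status=200,
        )
        assert fetch_exchange_rate("EUR") == 1.08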

Internal services within your organization fall in the middle. You can test with real instances if the interface changes frequently and the cost of a mismatch is high. Otherwise, contract tests often provide better signal with less fragility.
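
The sketch below is a deliberately simplified illustration of the contract idea, not a full consumer-driven contract tool such as Pact; the field names are hypothetical. The point is that both sides test against the same agreed shape, so a mismatch shows up in a fast local test rather than in production.

    # Shared contract: the minimal shape both services agree on.
    ORDER_STATUS_CONTRACT = {"order_id": str, "status": str, "updated_at": str}


    def matches_contract(payload: dict) -> bool:
        return all(
            key in payload and isinstance(payload[key], expected_type)
            for key, expected_type in ORDER_STATUS_CONTRACT.items()
        )


    def test_consumer_stub_respects_contract():
        # The consumer's test double must stay within the agreed shape;
        # the provider runs the same check against its real response in its own suite.
        stub = {"order_id": "A-17", "status": "shipped", "updated_at": "2025-04-01T10:00:00Z"}
        assert matches_contract(stub)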

A practical way to decide: ask yourself, "If this dependency has a problem, would I know from the application logic, or only from how it connects?" If the answer is "from how it connects" -- response format, header structure, parameter order -- then it is a candidate for an integration test with a real dependency. If the answer is "from the application logic," unit tests or contract tests are sufficient.

Keeping Integration Tests Fast and Reliable

Once you have decided what to test with real dependencies, follow these practices to keep your integration tests useful:

Test only the connection, not the business logic. If you already have unit tests covering your business rules, do not repeat them in integration tests. An integration test for a database query should verify that the query runs successfully against a real database and returns the expected structure. It should not verify every edge case of the business logic that uses that query.
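
A minimal, self-contained sketch of that focus, using SQLite from the standard library as the stand-in database; the table and query are made up. The test checks only that the query runs and returns the expected columns, leaving pricing rules and other business logic to unit tests.

    import sqlite3


    def test_open_orders_query_returns_expected_columns():
        # Verify the query executes and yields the structure the code expects.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
        conn.execute("INSERT INTO orders (status, total) VALUES ('open', 19.99)")

        cursor = conn.execute("SELECT id, status, total FROM orders WHERE status = 'open'")
        columns = [desc[0] for desc in cursor.description]

        assert columns == ["id", "status", "total"]
        assert len(cursor.fetchall()) == 1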

Reset the environment before each test. If you use a database, create isolated test data and clean it up after the test finishes. Tests that depend on state left by previous tests are fragile and hard to debug. Use database transactions that roll back after each test, or spin up a fresh test container for each test run.
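
One way to implement the rollback approach is with pytest fixtures. The sketch below uses a temporary SQLite file so it runs anywhere; with a real server you would keep the same fixture shape and change only the connection URL.

    import pytest
    import sqlalchemy


    @pytest.fixture(scope="session")
    def engine(tmp_path_factory):
        # A fresh database file per test run; swap the URL for your real server.
        db_file = tmp_path_factory.mktemp("itest") / "orders.sqlite"
        eng = sqlalchemy.create_engine(f"sqlite:///{db_file}")
        with eng.begin() as conn:
            conn.execute(sqlalchemy.text(
                "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)"
            ))
        return eng


    @pytest.fixture
    def db(engine):
        # Each test runs inside a transaction that is rolled back afterwards,
        # so no test ever sees data left behind by another.
        conn = engine.connect()
        trans = conn.begin()
        yield conn
        trans.rollback()
        conn.close()


    def test_inserted_rows_do_not_leak_between_tests(db):
        db.execute(sqlalchemy.text("INSERT INTO orders (total) VALUES (12.5)"))
        count = db.execute(sqlalchemy.text("SELECT count(*) FROM orders")).scalar_one()
        assert count == 1  # visible here, gone after the rollback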

Limit the number of integration tests. You do not need to test every combination of parameters. Test one happy path and a few realistic failure scenarios. The goal is confidence that the connection works, not coverage of every possible input.

Where Integration Tests Fit in Your Pipeline

Integration tests sit between unit tests and end-to-end tests. They are more expensive than unit tests but faster and more focused than end-to-end tests.

A typical pipeline runs unit tests first. If they pass, the pipeline runs integration tests. If integration tests pass, the pipeline proceeds to staging or production deployment. End-to-end tests, if you have them, run later or in a separate environment.
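
One common way to wire this ordering into a pipeline is to tag integration tests with a pytest marker and run the stages as separate commands. The marker name below is a project convention, not a pytest built-in, and the example tests are hypothetical.

    import sqlite3

    import pytest


    def test_discount_rounding():
        # Unit stage: pure logic, no external dependencies.
        assert round(19.994, 2) == 19.99


    @pytest.mark.integration
    def test_database_is_reachable():
        # Integration stage: touches a real dependency (a local SQLite database
        # stands in here for whatever your service actually connects to).
        conn = sqlite3.connect(":memory:")
        assert conn.execute("SELECT 1").fetchone() == (1,)

The pipeline can then run pytest -m "not integration" as the first stage and pytest -m integration as the second; register the marker under markers in your pytest configuration so the tag does not trigger warnings.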

The purpose of integration tests is not to achieve a coverage percentage. The purpose is to give you confidence that when your code changes, the connections between components still work.

Practical Checklist

  • For each external dependency, decide: test against a real instance, test with a test double, or rely on contract tests.
  • Run integration tests in an isolated environment that can be reset to a known state.
  • Keep integration tests focused on connection behavior, not business logic.
  • Limit integration tests to one happy path and a few realistic failure scenarios.
  • Monitor test execution time. If the integration suite takes dramatically longer than the unit suite, you probably have too many integration tests or the wrong kind.

The Takeaway

Integration tests answer a question that unit tests cannot: "Do these components actually work together?" The answer is worth having before you deploy. But integration tests are a tool, not a goal. Be selective about what you test with real dependencies, keep the tests fast and isolated, and use them to build confidence in your deployments, not to chase coverage numbers.