When Two App Versions Share One Database: The Dual-Write and Dual-Read Transition
Picture this: your team has just added a new column to a production database table. The schema change went smoothly. Now you need to deploy the new application version that writes to that column. But the old application version is still running, handling requests, and it knows nothing about this new column.
If the new app starts writing only to the new column, the old app won't see that data. Users whose requests land on the old app will get incomplete or inconsistent results. You cannot stop the old app instantly. You cannot switch everything to the new version in one shot. You need a transition period where both versions coexist and share the same database.
This is where dual-write and dual-read patterns become necessary. They are not elegant. They are not simple. But they are the practical way to migrate data structures in a live system without downtime.
The Core Problem: Two Versions, One Database
When you expand a schema by adding a new column or table, the database can hold both old and new structures. But the applications reading and writing that data are not so flexible. The old application version only understands the old structure. The new application version understands both, but it needs to keep the old version working until every instance has been upgraded.
The naive approach would be to have the new app write only to the new column. That breaks the old app immediately. The old app reads from the old column, finds nothing, and fails. The correct approach is to make the new app write to both places until the old app is gone.
Dual-Write: Writing to Two Places at Once
Dual-write means the new application version writes data to both the old structure and the new structure on every write operation. When a user creates a record, the new app fills in the old column as it always did, then also writes the same data to the new column.
This sounds straightforward, but two details matter a lot.
Here is a JavaScript example that implements dual-write and dual-read for a user profile update:
```javascript
async function updateUserProfile(userId, name, email) {
  // Dual-write: write to the old columns first, then the new column
  await db.query(
    'UPDATE users SET name = ?, email = ? WHERE id = ?',
    [name, email, userId]
  );
  await db.query(
    'UPDATE users SET profile_data = ? WHERE id = ?',
    [JSON.stringify({ name, email }), userId]
  );
}

async function getUserProfile(userId) {
  // Dual-read: prefer the new column, fall back to the old columns
  const rows = await db.query(
    'SELECT profile_data, name, email FROM users WHERE id = ?',
    [userId]
  );
  const row = rows[0]; // assumes a driver that returns an array of rows
  if (row.profile_data) {
    return JSON.parse(row.profile_data);
  }
  return { name: row.name, email: row.email };
}
```
First, the write order must be consistent. Write to the old column first, then the new column. If the process fails after writing to the old column but before writing to the new column, the old app can still read the data. The new column will be missing that record, but you can fix that later with a backfill. If you wrote to the new column first and the process failed, the old app would see incomplete data immediately. That is a production incident waiting to happen.
Second, the two writes must encode the same information. The new column may store the data in a different format or structure, but both writes must be derived from the same input values, with no extra transformation or business logic on either path. Any divergence between the two representations creates data inconsistency that becomes very hard to debug later.
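The ordering rule can be sketched with a toy in-memory data layer (the `db` map, the `writeNewColumn` helper, and `backfillQueue` are illustrative names, not a real driver): write the old columns first, and treat a failed new-column write as a backfill task rather than a request failure.

```javascript
const db = new Map();      // toy in-memory users table keyed by id
const backfillQueue = [];  // ids whose new-column write must be retried

// Stand-in for the second UPDATE; in production this is a real query that
// can fail independently of the first one.
function writeNewColumn(row, name, email) {
  row.profile_data = JSON.stringify({ name, email });
}

async function updateUserProfile(userId, name, email) {
  const row = db.get(userId) || {};
  db.set(userId, row);
  // 1. Old columns first: if we crash now, the old app still reads valid data.
  row.name = name;
  row.email = email;
  // 2. New column second: a failure here is recoverable via backfill.
  try {
    writeNewColumn(row, name, email);
  } catch (err) {
    backfillQueue.push(userId); // repair later instead of failing the request
  }
}
```

The key design choice is in the catch block: the request still succeeds, and the missing new-column value is repaired by the backfill process described later.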
Dual-Read: Reading from Two Places, Falling Back to the Old
Once dual-write is running, the new app can write data that both versions can read. But what about reading? The new app could start reading from the new column immediately, but that creates a problem. The old app is still writing data only to the old column. If the new app reads only from the new column, it will miss data written by the old app.
The solution is dual-read. The new app reads the new column but falls back to the old column whenever the new column is empty. Because the old column is still written on every update, data written by the old app remains visible to the new app throughout the transition. Over time, as you verify that data is flowing correctly into the new column, you can shift reads fully onto it.
This gradual shift is where feature flags become useful. You can configure the new app to read from the new column for a small percentage of requests. If nothing breaks, increase the percentage. If an error appears, flip the flag back and all reads return to the old column. No redeployment needed.
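A sketch of that flag-driven shift (the `readFlagPercent` variable stands in for a value served by your feature-flag system; bucketing by `userId % 100` is one illustrative choice, not a requirement):

```javascript
let readFlagPercent = 10; // start small: 10% of reads use the new column

function shouldReadNewColumn(userId) {
  // Bucket by user id so a given user consistently takes the same path,
  // which makes errors easier to trace than a random coin flip.
  return userId % 100 < readFlagPercent;
}

function getUserProfile(row, userId) {
  if (shouldReadNewColumn(userId) && row.profile_data) {
    return JSON.parse(row.profile_data);        // new column
  }
  return { name: row.name, email: row.email };  // old columns (fallback)
}
```

Raising `readFlagPercent` to 100 moves all reads to the new column; dropping it to 0 reverts every read to the old columns without a redeploy.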
What About Data Written by the Old App?
During this transition, the old app is still running and writing only to the old column. The new app must handle this. When the new app reads a record that was written by the old app, it finds data only in the old column. The new app should be able to read that data from the old column, use it, and optionally copy it to the new column as part of a background process.
This is not the same as dual-write. This is a backfill process that runs separately, filling the new column with data that was written before the new app started writing to both places. Backfill is a batch operation that runs after the dual-write and dual-read patterns are stable.
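One pass of such a backfill might look like this sketch, where the `rows` array stands in for one page of a SELECT over the table and the field names follow the earlier example:

```javascript
function backfillBatch(rows) {
  let copied = 0;
  for (const row of rows) {
    // Skip rows the new app already wrote; only repair old-app data.
    if (!row.profile_data) {
      row.profile_data = JSON.stringify({ name: row.name, email: row.email });
      copied++;
    }
  }
  return copied; // report progress so the batch job can be monitored
}
```

Running it in pages, rather than one giant UPDATE, keeps the load on the live database bounded.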
The Real Complexity: Coordinating the Transition
The hardest part of this phase is not the code. It is the coordination. You need to know which instances are running which version. You need to know when the last old instance has been decommissioned. You need to monitor for data inconsistencies between the old and new columns.
During dual-write, every write operation becomes two writes. That means more database load, more transaction time, and more potential points of failure. Monitor your database metrics during this phase. If write latency increases, you may need to batch the writes or perform the second write asynchronously and rely on the backfill to repair failures.
During dual-read, every read operation may need to check two locations. That adds complexity to your query logic and can slow down read paths. Use caching carefully. Do not cache data from the old column if the new column is supposed to become the source of truth.
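A monitoring job for those inconsistencies can be sketched as a straight comparison of the two representations (the `rows` input stands in for a sampled page of the table; field names follow the earlier example):

```javascript
function findInconsistentRows(rows) {
  const inconsistent = [];
  for (const row of rows) {
    if (!row.profile_data) continue; // not yet backfilled: expected, not an error
    const parsed = JSON.parse(row.profile_data);
    if (parsed.name !== row.name || parsed.email !== row.email) {
      inconsistent.push(row.id);
    }
  }
  return inconsistent;
}
```

Any id this returns means the two write paths diverged somewhere and needs investigation before reads shift to the new column.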
Practical Checklist for the Transition
- Confirm that the schema expansion (new column or table) is deployed and verified.
- Deploy the new app version with dual-write logic: write to old column first, then new column.
- Verify that data written by the new app is visible to the old app.
- Enable dual-read in the new app: read the new column, falling back to the old column when it is empty.
- Use a feature flag to gradually shift reads from old column to new column.
- Monitor for data inconsistencies between old and new columns.
- Start the backfill process to copy old data into the new column.
- After backfill completes and all reads use the new column, remove dual-write logic.
- Decommission the old column or table after confirming no application reads from it.
The Takeaway
Dual-write and dual-read are not permanent patterns. They are temporary bridges that let you migrate data structures while keeping the system running for all users. The goal is to reach a state where only the new structure matters, and the old structure can be removed. Until then, every write goes to two places, every read checks two sources, and your team stays alert for inconsistencies. This phase is uncomfortable, but it is the only way to change a live database schema without stopping the world.