Renaming Columns, Splitting Tables, and Changing Constraints Without Downtime
You have a users table with a column called full_name. The team decides it should be display_name instead. If you rename it directly, every application still reading full_name will break the moment the change hits production. The column is gone, the queries fail, and users see errors.
This is not a hypothetical problem. Teams rename columns, split tables into two, and change constraints every sprint. The naive approach, altering the schema first and fixing the code later, causes production incidents that could have been avoided. The solution is a pattern called expand-contract, and it works for all three scenarios.
The Core Idea: Add First, Switch Gradually, Remove Last
The expand-contract pattern has three phases. First, you expand the schema by adding new structures alongside the old ones. Then you migrate applications and data to use the new structures. Finally, you contract by removing the old structures once nothing depends on them anymore.
The diagram below illustrates the three-phase expand-contract pattern and how it applies to renaming a column, splitting a table, and changing a constraint.
The key insight is that you never make a breaking change in one step. You always keep the old path working until the new path is fully adopted. This means zero downtime during schema changes, as long as you follow the sequence correctly.
Renaming a Column Without Breaking Anything
Let's walk through the full_name to display_name rename. In the expand phase, you add a new column display_name to the users table. You do not drop full_name. The new version of your application starts writing to both columns. Every insert or update writes the same value to full_name for old consumers and to display_name for new consumers.
Here are the SQL commands for each phase of the rename:
-- Phase 1: Expand - add the new column
ALTER TABLE users ADD COLUMN display_name VARCHAR(255);
-- Dual-write example (application logic, not SQL alone)
-- When inserting or updating a user, write to both columns:
INSERT INTO users (full_name, display_name) VALUES ('Alice', 'Alice');
UPDATE users SET full_name = 'Bob', display_name = 'Bob' WHERE id = 42;
-- Phase 2: Migrate - backfill existing data (run in batches on large tables)
UPDATE users SET display_name = full_name WHERE display_name IS NULL;
-- Phase 3: Contract - drop the old column
ALTER TABLE users DROP COLUMN full_name;
This sequence ensures that at no point does the database reject a query or lose data.
After the column exists and the application writes to both, you run a backfill. This is a batch process that copies all existing values from full_name into display_name for every row. You verify the counts match and spot-check random records to ensure nothing was lost.
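The verification step can be automated. The sketch below uses an in-memory SQLite database as a stand-in for production; the table and column names follow the rename example, and the two checks mirror the ones described above: no row left without a display_name, and both columns agreeing on every row.

```python
import sqlite3

# Stand-in for the production users table, with a few legacy rows.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, full_name TEXT, display_name TEXT)"
)
conn.executemany(
    "INSERT INTO users (full_name) VALUES (?)",
    [("Alice",), ("Bob",), ("Carol",)],
)

# Backfill: copy old values into the new column.
conn.execute("UPDATE users SET display_name = full_name WHERE display_name IS NULL")
conn.commit()

# Check 1: no row with a full_name should be missing a display_name.
missing = conn.execute(
    "SELECT COUNT(*) FROM users WHERE display_name IS NULL AND full_name IS NOT NULL"
).fetchone()[0]

# Check 2: both columns should hold the same value on every row
# (IS NOT is SQLite's null-safe inequality).
mismatched = conn.execute(
    "SELECT COUNT(*) FROM users WHERE display_name IS NOT full_name"
).fetchone()[0]

print(missing, mismatched)  # 0 0
```

If either count is nonzero, the backfill is incomplete and the switch to reading display_name should wait.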
Now comes the switch. All applications that read user names must be updated to read from display_name instead of full_name. This can happen gradually. Some services switch first, others follow. During this period, both columns remain populated, so any service still reading full_name still works.
Once every application and every query has been migrated, you enter the contract phase. You drop the full_name column. The whole process takes days or weeks depending on how many services need updating, but there is never a moment where users cannot access the application.
Splitting One Table Into Two
This scenario is more complex. Imagine your orders table stores order details and payment information in the same row. The team wants to separate payment data into a dedicated payments table. You cannot just create the new table and stop writing to the old one, because existing applications still read from orders.
The expand phase creates the payments table. The new version of the application starts writing payment data to both places. Every time an order is created or updated, the application writes the payment details to the orders table for old consumers and to the payments table for the new structure. This is called dual-write, and it is the hardest part to get right. Both writes must succeed or both must roll back. Partial writes will corrupt your data.
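One way to keep the dual-write atomic is to wrap both inserts in a single transaction, so a failure on either write rolls both back. A minimal sketch, again using SQLite as a stand-in; the table layouts and the create_order function are illustrative, not taken from any particular codebase.

```python
import sqlite3

# Old table still carries payment columns; new table is the target structure.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT,
                         payment_method TEXT, payment_amount REAL);
    CREATE TABLE payments (order_id INTEGER PRIMARY KEY,
                           method TEXT, amount REAL);
""")

def create_order(item, method, amount):
    # "with conn" opens one transaction: both inserts commit together,
    # or an exception rolls both back. No partial write can survive.
    with conn:
        cur = conn.execute(
            "INSERT INTO orders (item, payment_method, payment_amount) "
            "VALUES (?, ?, ?)",
            (item, method, amount),
        )
        conn.execute(
            "INSERT INTO payments (order_id, method, amount) VALUES (?, ?, ?)",
            (cur.lastrowid, method, amount),
        )

create_order("book", "card", 19.99)
```

In a setup where the two tables live in different databases, a single local transaction is no longer enough and you need an outbox or two-phase approach; within one database, a transaction like this is the simplest correct option.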
Backfill is critical here. You need to copy all existing payment data from orders into payments. Run this in batches to avoid locking the table for too long. After each batch, verify that the record count and total payment amounts match between the two tables. If they do not match, stop and investigate before continuing.
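The batching and per-batch verification can be sketched as follows, with a deliberately tiny batch size so the loop is visible. The schema and batch size are illustrative; in production the batch would be hundreds or thousands of rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         payment_method TEXT, payment_amount REAL);
    CREATE TABLE payments (order_id INTEGER PRIMARY KEY,
                           method TEXT, amount REAL);
""")
conn.executemany(
    "INSERT INTO orders (payment_method, payment_amount) VALUES (?, ?)",
    [("card", 10.0), ("cash", 5.5), ("card", 7.25)],
)

BATCH = 2  # tiny for the example; much larger in practice
while True:
    # Copy the next batch of rows that have not been migrated yet.
    with conn:
        copied = conn.execute("""
            INSERT INTO payments (order_id, method, amount)
            SELECT id, payment_method, payment_amount FROM orders
            WHERE id NOT IN (SELECT order_id FROM payments)
            ORDER BY id LIMIT ?
        """, (BATCH,)).rowcount
    if copied == 0:
        break  # nothing left to migrate
    # Verify after each batch: counts and totals must agree for the
    # rows copied so far. On mismatch, stop and investigate.
    src = conn.execute("""
        SELECT COUNT(*), COALESCE(SUM(payment_amount), 0) FROM orders
        WHERE id IN (SELECT order_id FROM payments)
    """).fetchone()
    dst = conn.execute(
        "SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM payments"
    ).fetchone()
    assert src == dst, "backfill mismatch: stop and investigate"
```

Keyset pagination on the primary key (rather than OFFSET) keeps each batch cheap, and committing per batch keeps lock times short.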
Once the backfill is verified and all applications have been updated to read payment data from payments instead of orders, you enter the contract phase. You remove the payment columns from orders. This step requires confidence that no query, report, or legacy service still accesses those columns. Check your database logs, application logs, and query monitoring before dropping anything.
Changing a Nullable Constraint to Not Null
This scenario looks simple but often catches teams off guard. Your users table has an email column that allows null values. The business now requires every user to have an email. If you alter the column to NOT NULL directly, the database will reject the change because existing rows with null emails violate the constraint.
The expand phase here looks slightly different, because the goal is a constraint rather than a new piece of data. A common approach is to add a new column email_not_null that mirrors email but carries a not-null constraint. The new application version writes to both columns. For inserts, both columns get the same value. For updates, both get updated.
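A sketch of this expand step, using SQLite as a stand-in. Note that adding a NOT NULL column to a table with existing rows requires a default value, since the existing rows have nothing to put in it; the empty-string default here is an assumption, and a sentinel like this must be cleaned up by the backfill.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (NULL)")  # legacy row, no email

# Expand: the shadow column carries the constraint the old one cannot
# yet have. The DEFAULT is required so existing rows satisfy NOT NULL.
conn.execute(
    "ALTER TABLE users ADD COLUMN email_not_null TEXT NOT NULL DEFAULT ''"
)

# Dual-write: new application code populates both columns.
conn.execute(
    "INSERT INTO users (email, email_not_null) VALUES (?, ?)",
    ("dana@example.com", "dana@example.com"),
)

# The shadow column rejects nulls immediately for all new writes.
try:
    conn.execute(
        "INSERT INTO users (email, email_not_null) VALUES (?, ?)", (None, None)
    )
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

From this point on, no new row can enter the system without an email, which stops the null population from growing while the backfill catches up.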
Backfill is the make-or-break step. Every row with a null email must be fixed. You need to either provide a default value, coordinate with users to fill in their email, or work with other teams to supply the missing data. This is not a technical problem alone. It is a data quality and organizational problem. The backfill script should log every row it cannot fix and alert the team to handle those cases manually.
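Such a backfill might look like the sketch below: rows with a usable email are copied across, and rows with nothing to copy are logged for manual follow-up rather than filled with a guessed value. The logger name and schema are illustrative.

```python
import logging
import sqlite3

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("email_backfill")

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT,
                                    email_not_null TEXT NOT NULL DEFAULT '')""")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [("a@example.com",), (None,), ("c@example.com",)],
)

# Snapshot the unmigrated rows first, then update, so we never mutate
# a result set while iterating over it.
rows = conn.execute(
    "SELECT id, email FROM users WHERE email_not_null = ''"
).fetchall()

unfixable = []
for row_id, email in rows:
    if email:
        # Usable value: copy it into the constrained column.
        conn.execute(
            "UPDATE users SET email_not_null = ? WHERE id = ?", (email, row_id)
        )
    else:
        # No value to copy: record it, do not invent one.
        unfixable.append(row_id)
        log.warning("user %s has no email; needs manual follow-up", row_id)
conn.commit()
print(unfixable)  # [2]
```

The list of unfixable rows is the hand-off point to the organizational side of the problem: those users need a real email before the contract phase can run.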
After all rows have valid emails and all applications read from email_not_null, you contract by dropping the old email column. If you want to keep the original column name, you can rename email_not_null to email after the old column is gone.
Practical Checklist for Schema Changes
Before you run any schema migration in production, go through this checklist:
- Can the old schema still serve requests after the change?
- Is there a dual-write path for all new data?
- Is the backfill script tested on a copy of production data?
- Have you verified that all consumers have migrated before dropping anything?
- Do you have a rollback plan if the contract phase reveals a missed dependency?
The Takeaway
Every schema change you make in production should follow the same sequence: add the new structure, migrate data and applications gradually, and remove the old structure only when nothing depends on it. Whether you are renaming a column, splitting a table, or tightening a constraint, the pattern is the same. The cost is patience and careful coordination. The reward is zero downtime and zero broken queries.