TypeORM Migrations Are Dangerous (Here's How to Check)

TypeORM's migration generator doesn't understand Postgres lock modes. Here's what that means for your production database and how to catch problems before they ship.

TypeORM is widely used in the TypeScript ecosystem. It’s decent at what it does. But its migration generator has a blind spot: it has no concept of PostgreSQL lock modes.

When you run typeorm migration:generate, it diffs your entity definitions against the current database schema and produces SQL. The SQL is correct: it will get your schema to the desired state. But the tool never asks “will this lock the table?” or “will this block reads?”

It doesn’t need to ask. It doesn’t know those questions exist.

What TypeORM generates vs. what you should run

Here’s a real example. You add a publishedAt field to a User entity:

@Column({ type: 'timestamptz', default: () => 'now()' })
publishedAt: Date;

TypeORM generates:

ALTER TABLE "user" ADD "publishedAt" TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now();

The dangerous part here is the non-constant default. On PostgreSQL 10 and earlier, any ADD COLUMN with a non-null default rewrites every existing row under ACCESS EXCLUSIVE. On 11+, stable expressions like now() get the metadata-only fast path, but volatile ones (clock_timestamp(), random()) still force a rewrite, and the ACCESS EXCLUSIVE lock is taken either way. On a small table, you might not notice. On a table with a few million rows, a rewrite can take out reads and writes for its full duration.

What you should actually run:

ALTER TABLE "user" ADD "publishedAt" TIMESTAMP WITH TIME ZONE;
ALTER TABLE "user" ALTER COLUMN "publishedAt" SET DEFAULT now();
-- Backfill existing rows in batches (outside this migration)
ALTER TABLE "user" ADD CONSTRAINT chk_published_at
  CHECK ("publishedAt" IS NOT NULL) NOT VALID;
ALTER TABLE "user" VALIDATE CONSTRAINT chk_published_at;
ALTER TABLE "user" ALTER COLUMN "publishedAt" SET NOT NULL;
ALTER TABLE "user" DROP CONSTRAINT chk_published_at;

More steps, yes. But the staged version replaces one long rewrite lock with brief metadata changes plus a validation step that only takes SHARE UPDATE EXCLUSIVE, so normal traffic keeps moving. Note the SET DEFAULT comes before the backfill, so rows inserted mid-backfill can never be NULL. And on PostgreSQL 12+, the final SET NOT NULL sees the validated CHECK constraint and skips its own full-table scan, which is the whole point of the dance.
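The batching itself is worth doing carefully too. Here's a minimal sketch of a batch-until-done driver; the exec callback, the ctid-based batch query, the batch size, and the pause are all illustrative assumptions, not pgfence or TypeORM APIs:

```typescript
// Runs small UPDATE batches until no NULL rows remain, pausing between
// batches so vacuum, replication, and regular traffic can keep up.

type Exec = (sql: string) => Promise<number>; // returns affected row count

const BATCH_SQL = `
  UPDATE "user" SET "publishedAt" = now()
  WHERE ctid IN (
    SELECT ctid FROM "user"
    WHERE "publishedAt" IS NULL
    LIMIT 5000
  )`;

async function backfillInBatches(exec: Exec, pauseMs = 100): Promise<number> {
  let total = 0;
  for (;;) {
    const updated = await exec(BATCH_SQL);
    total += updated;
    if (updated === 0) return total; // nothing left to backfill
    await new Promise((r) => setTimeout(r, pauseMs)); // breathing room
  }
}
```

Each batch holds row locks only for its own short UPDATE, so there is never one long transaction pinning millions of rows.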

The second footgun: CREATE INDEX

TypeORM also generates indexes. If your entity has:

@Index()
@Column()
email: string;

The generated migration includes:

CREATE INDEX "IDX_e12875dfb3b1d92d7d7c5377e2" ON "user" ("email");

That’s a non-concurrent index creation. It takes a SHARE lock, which blocks all writes to the table for the duration of the index build. On a 10M-row table, that could be minutes.

The fix:

CREATE INDEX CONCURRENTLY "IDX_e12875dfb3b1d92d7d7c5377e2" ON "user" ("email");

One word. CONCURRENTLY. It uses SHARE UPDATE EXCLUSIVE instead of SHARE, which means reads and writes continue normally while the index is being built. The trade-off: it takes longer and can’t run inside a transaction block.

TypeORM doesn’t add CONCURRENTLY. It doesn’t know it should.

The third footgun: ALTER COLUMN TYPE

You change a column from varchar(50) to varchar(100):

@Column({ type: 'varchar', length: 100 }) // was 50
name: string;

TypeORM generates:

ALTER TABLE "user" ALTER COLUMN "name" TYPE character varying(100);

This looks safe. Widening a varchar should be a metadata-only operation, right?

It depends. For varchar(50) → varchar(100), Postgres handles this as a metadata-only change, and has since 9.2. But many other type changes, like integer to bigint, rewrite the entire table under ACCESS EXCLUSIVE; timestamp to timestamptz does too, unless you're on PG 12+ with the session timezone set to UTC.

The point isn’t that every type change is dangerous. It’s that TypeORM doesn’t distinguish between them. It generates the same ALTER COLUMN TYPE statement regardless of whether it’ll be instant or take down your table.
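To make the distinction concrete, here's a toy classifier covering a handful of known cases. This is a simplified sketch of the judgment a lock-aware tool has to make, not pgfence's actual implementation; the real rules depend on pg_cast and the server version:

```typescript
// Toy classifier: does ALTER COLUMN TYPE from -> to force a table rewrite?
// Covers only a few well-known cases; defaults to pessimism otherwise.
function rewritesTable(from: string, to: string): boolean {
  const vc = /^varchar\((\d+)\)$/;
  const f = from.match(vc);
  const t = to.match(vc);
  // Widening varchar(n) has been metadata-only since PG 9.2; narrowing rewrites.
  if (f && t) return Number(t[1]) < Number(f[1]);
  // varchar -> text is binary-coercible: metadata-only.
  if (f && to === "text") return false;
  // Anything else is assumed to change the on-disk representation: full rewrite.
  return true;
}
```

Even this crude version separates the instant changes from the dangerous ones, which is exactly the distinction TypeORM's generator never draws.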

The transaction problem

By default, TypeORM wraps migrations in a transaction. This is usually good: if a migration fails halfway, it rolls back cleanly. But it creates a conflict with CREATE INDEX CONCURRENTLY, which explicitly cannot run inside a transaction.

If you need concurrent index creation, you have to set transaction = false on your migration class:

import { MigrationInterface, QueryRunner } from "typeorm";

export class AddEmailIndex1708000000000 implements MigrationInterface {
  transaction = false;  // Required for CONCURRENTLY

  async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`SET lock_timeout = '2s'`);
    await queryRunner.query(
      `CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_user_email ON "user" ("email")`
    );
  }

  async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DROP INDEX CONCURRENTLY IF EXISTS idx_user_email`);
  }
}

This is another thing TypeORM doesn’t handle for you. The generated migration will always use a transaction and always use non-concurrent index creation.
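There's one more wrinkle worth handling by hand: if a concurrent build fails partway (a lock_timeout firing, a deadlock), Postgres leaves the half-built index behind marked INVALID, and IF NOT EXISTS will happily skip it on the next attempt because the name exists. A sketch of a cleanup-and-retry helper follows; the Query type and function name are illustrative, not a TypeORM API:

```typescript
// CREATE INDEX CONCURRENTLY can fail and leave an INVALID index behind.
// Drop the leftover (also concurrently) before retrying the build.

type Query = (sql: string) => Promise<void>;

async function createIndexConcurrently(
  query: Query,
  index: string,
  table: string,
  column: string,
  retries = 2
): Promise<void> {
  const create = `CREATE INDEX CONCURRENTLY IF NOT EXISTS "${index}" ON "${table}" ("${column}")`;
  for (let attempt = 0; ; attempt++) {
    try {
      await query(create);
      return;
    } catch (err) {
      if (attempt >= retries) throw err;
      // Clear the INVALID leftover so the retry starts clean.
      await query(`DROP INDEX CONCURRENTLY IF EXISTS "${index}"`);
    }
  }
}
```

Without the cleanup, the retry "succeeds" instantly but leaves you with an index the planner will never use.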

How to catch this before it ships

I built pgfence specifically for this problem. It parses TypeORM migration files, extracts the SQL from queryRunner.query() calls, and runs it through PostgreSQL’s actual parser.

npx @flvmnt/pgfence analyze --format typeorm src/migrations/*.ts

It handles the format detection automatically: if it sees MigrationInterface or queryRunner.query, it knows it’s TypeORM. It extracts SQL from the up() method (ignoring down()), flags dynamic SQL it can’t analyze, and reports the lock mode and risk level for every statement.

For the migration above, you’d get:

Statement: ALTER TABLE "user" ADD "publishedAt" timestamptz NOT NULL DEFAULT now()
Lock Mode: ACCESS EXCLUSIVE
Risk:      HIGH
Rule:      add-column-non-constant-default

For this pattern, the output also includes the safer staged sequence.

The setup I’d recommend for TypeORM projects

Add pgfence to your CI pipeline. Two minutes of setup:

npm install -D @flvmnt/pgfence

In your CI config:

npx @flvmnt/pgfence analyze --ci --max-risk medium --format typeorm src/migrations/*.ts

This fails the build when a statement exceeds MEDIUM risk: HIGH and CRITICAL findings block the merge, while MEDIUM ones stay advisory. You can adjust the threshold.

For extra safety, add SET lock_timeout = '2s' to every migration. pgfence will remind you if you forget; it’s one of the default policy checks.

TypeORM is fine for generating migrations. Just don’t trust those migrations blindly. Run them through something that actually understands what Postgres will do when it executes them.
