PGLT + pgfence: Catch SQL Errors and Lock Dangers in One CI Pipeline

Postgres Language Server validates SQL correctness. pgfence adds lock and migration-safety analysis. Here's how to run both in CI for broader migration coverage.

There are two ways a migration can go wrong.

The first is that the SQL is invalid. A missing comma, a wrong column name, a reference to a table that doesn’t exist. Postgres Language Server (PGLT) is built for that class of problem: it catches syntax, semantic, and type errors before you deploy.

The second is that the SQL is valid but dangerous. ALTER TABLE users ADD COLUMN processed_at timestamptz NOT NULL DEFAULT now() is perfectly valid SQL. It also takes an ACCESS EXCLUSIVE lock on your users table, and depending on your Postgres version and the default expression, it can rewrite every existing row while production traffic keeps arriving.

PGLT doesn’t catch the second kind. It’s not supposed to: lock modes are a runtime concern, not a syntax concern. That’s what pgfence does.

What each tool checks

| Concern | PGLT | pgfence |
| --- | --- | --- |
| SQL syntax errors | Yes | No |
| Type mismatches | Yes | No |
| Missing tables/columns | Yes | No |
| Lock mode analysis | No | Yes |
| Risk level scoring | No | Yes |
| Safe rewrite recipes | No | Yes |
| Missing lock_timeout | No | Yes |
| CONCURRENTLY-in-transaction | No | Yes |

The two tools complement each other cleanly. PGLT tells you “this SQL is correct.” pgfence tells you “this SQL is safe to run on a busy table.”

A combined CI pipeline

Here’s a GitHub Actions workflow that runs both:

name: Migration Safety
on:
  pull_request:
    paths:
      - 'migrations/**'

jobs:
  check-migrations:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # 1. Validate SQL correctness with PGLT
      - name: Install Postgres Language Server
        run: |
          curl -L https://github.com/supabase-community/postgres-language-server/releases/latest/download/pglt-x86_64-unknown-linux-gnu.tar.gz | tar xz
          chmod +x pglt
          sudo mv pglt /usr/local/bin/

      - name: Check SQL syntax
        run: pglt check migrations/

      # 2. Check lock safety with pgfence
      - name: Check migration safety
        run: npx --yes @flvmnt/pgfence@latest analyze --ci migrations/*.sql

Step 1 catches bad SQL. Step 2 catches dangerous SQL. If either fails, the PR is blocked.

What this actually catches

Say someone submits a migration with:

ALTER TABLE orders ADD COLUMN processed_at timestamptz NOT NULL DEFAULT now();
CREATE INDEX idx_orders_processed_at ON orders (processed_at);

PGLT validates the syntax and finds nothing wrong. pgfence flags two problems:

  1. ADD COLUMN ... DEFAULT now() uses a non-constant default; on older Postgres versions this rewrites the table under ACCESS EXCLUSIVE, and the ALTER takes that lock regardless. HIGH risk.
  2. CREATE INDEX without CONCURRENTLY takes a SHARE lock and blocks writes. MEDIUM risk.

Plus a policy warning: no SET lock_timeout found.

The safe version:

SET lock_timeout = '2s';

-- Step 1: Add nullable column (instant, no lock contention)
ALTER TABLE orders ADD COLUMN processed_at timestamptz;

-- Step 2: Backfill in batches (run out-of-band, outside the migration;
-- note that Postgres UPDATE has no LIMIT clause, hence the subquery)
-- UPDATE orders SET processed_at = now()
--   WHERE id IN (SELECT id FROM orders
--                WHERE processed_at IS NULL LIMIT 1000);

-- Step 3: Set the default for new rows
ALTER TABLE orders ALTER COLUMN processed_at SET DEFAULT now();

-- Step 4: Add NOT NULL safely
ALTER TABLE orders ADD CONSTRAINT chk_processed_at_not_null
  CHECK (processed_at IS NOT NULL) NOT VALID;
ALTER TABLE orders VALIDATE CONSTRAINT chk_processed_at_not_null;
ALTER TABLE orders ALTER COLUMN processed_at SET NOT NULL;
ALTER TABLE orders DROP CONSTRAINT chk_processed_at_not_null;
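The out-of-band backfill in step 2 is typically a small loop in application code. Here is a minimal Python sketch, assuming a psycopg-style connection object and an id primary key on orders; the SQL mirrors the batched UPDATE above:

```python
# Sketch of the step-2 backfill loop, assuming a psycopg-style connection.
# Committing between batches keeps row locks short-lived, so production
# writes are never blocked for long.
BATCH_SQL = """
UPDATE orders SET processed_at = now()
WHERE id IN (SELECT id FROM orders
             WHERE processed_at IS NULL LIMIT %(batch)s)
"""

def backfill(conn, batch_size=1000):
    """Repeat batched UPDATEs until a batch comes back short."""
    total = 0
    while True:
        with conn.cursor() as cur:
            cur.execute(BATCH_SQL, {"batch": batch_size})
            updated = cur.rowcount
        conn.commit()  # release this batch's row locks before the next one
        total += updated
        if updated < batch_size:
            return total
```

Sleeping briefly between batches is a common addition when the table is under heavy write load.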

And the index, created outside a transaction (CREATE INDEX CONCURRENTLY cannot run inside one):

SET lock_timeout = '2s';
CREATE INDEX CONCURRENTLY idx_orders_processed_at ON orders (processed_at);

For patterns with rewrite guidance, pgfence prints the safer sequence directly in the report.

Adjusting the risk threshold

--ci fails the build when a finding exceeds the configured ceiling. With the default --max-risk high, only CRITICAL findings and policy errors fail the build. To fail on HIGH findings as well, lower the ceiling:

# Block HIGH and CRITICAL
npx --yes @flvmnt/pgfence@latest analyze --ci --max-risk medium migrations/*.sql

# Block MEDIUM, HIGH, and CRITICAL
npx --yes @flvmnt/pgfence@latest analyze --ci --max-risk low migrations/*.sql

For many teams, --max-risk medium is the practical starting point because it blocks the strongest findings without turning every advisory into a hard failure.
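Conceptually, the gate is a comparison against an ordered risk scale: the build passes only if every finding sits at or below the ceiling. A few lines of Python sketch the semantics (this is an illustration, not pgfence's actual code):

```python
# Conceptual sketch of the --max-risk gate, not pgfence internals:
# a finding strictly above the ceiling fails the build.
LEVELS = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def build_passes(finding_levels, max_risk="HIGH"):
    """Return True if no finding exceeds the configured ceiling."""
    ceiling = LEVELS.index(max_risk)
    return all(LEVELS.index(level) <= ceiling for level in finding_levels)
```

Under the default ceiling of HIGH, a HIGH finding passes and a CRITICAL one fails; lowering the ceiling to MEDIUM makes the HIGH finding fail too.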

Adding DB-size-aware scoring

If you want pgfence to adjust risk levels based on actual table sizes, the current OSS CLI supports either a read-only database URL or a prebuilt stats JSON file. The simplest CI setup is a direct read-only connection:

      - name: Check migration safety (size-aware)
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
        run: npx --yes @flvmnt/pgfence@latest analyze --ci --db-url "$DATABASE_URL" migrations/*.sql

With table stats, a MEDIUM-risk CREATE INDEX on a 10M-row table gets bumped to CRITICAL. Without stats, pgfence still catches the lock-mode issue; it just cannot tell you how bad it will be for your specific table.
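The escalation idea fits in a few lines. The thresholds below are invented for illustration and are not pgfence's real scoring:

```python
# Illustrative size-aware escalation (thresholds are made up, not
# pgfence's real ones): each size threshold crossed bumps the base
# risk one level, capped at CRITICAL.
LEVELS = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def escalate(base_risk, row_count,
             thresholds=(100_000, 1_000_000, 10_000_000)):
    """Bump base_risk one level per threshold the table size crosses."""
    bump = sum(1 for t in thresholds if row_count >= t)
    return LEVELS[min(LEVELS.index(base_risk) + bump, len(LEVELS) - 1)]
```

A MEDIUM finding on a 10M-row table crosses all three thresholds here and lands on CRITICAL, matching the example above; the same finding on a 50k-row table stays MEDIUM.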

The bottom line

PGLT and pgfence solve different problems. One validates SQL correctness. The other validates operational safety. Running both in the same pipeline catches the two main categories of migration failures: “this SQL is wrong” and “this SQL will take down my table.”

# Install pgfence (PGLT is installed separately, as in the workflow above)
npm install -D @flvmnt/pgfence

# Run both in CI
pglt check migrations/
npx @flvmnt/pgfence analyze --ci migrations/*.sql

Two commands, broader coverage. You still need judgment for dynamic SQL, rollout sequencing, and application-level backfills.
