Schema hygiene

That deprecated feature left its data behind. Here is the cleanup.

Datapace cross-references your schema with the code that uses it and with the live query workload that touches it. Tables and columns that no longer serve anything get flagged, and a safe cleanup migration is proposed.

Orphan detection

tables, columns, indexes

Code scan

full repo AST

Cleanup pattern

archive, then drop

The problem

A feature is deprecated. The code deletion ships. The tables and columns stay, forever. Six months later nobody remembers what experiments_v1 does, so nobody drops it. Schema clutter accumulates. Migrations get slower. Backups cost more. Compliance audits cover more surface than they need to.

The fix is obvious but never gets prioritized: audit the schema and drop what nobody uses. Nobody has time. Nobody has a safe way to check that nothing still uses it.

How Datapace solves this

The fix, automated.

01

Cross-references schema against repo code

Datapace parses the AST of your repo (ORM models, raw SQL files, migration history, feature flag configs) and matches every identifier against pg_class, pg_attribute, and pg_index. Tables, columns, and indexes that appear in the schema but nowhere in any code path are candidates for cleanup.

cross-reference · code × schema · 17 orphans · 3.8 GB reclaimable

tables · 47 scanned · 2 orphan
columns · 324 scanned · 8 orphan
indexes · 89 scanned · 7 orphan

top orphans

experiments_v1 · table · 2.1 GB

user_flags_legacy · table · 380 MB

users.beta_opt_in · column · 48 MB

idx_sessions_legacy · index · 1.2 GB

02

Confirms with live query workload

A column might be absent from code but still receive reads from legacy clients, one-off scripts, or external consumers. Datapace correlates the static analysis with pg_stat_statements over a window you configure. A column must be both absent from code and absent from recent queries before it is proposed for removal.
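The gating rule is a conjunction: an object is proposed only when both signals agree. A minimal sketch of that predicate, assuming illustrative names (`safe_to_archive` and its parameters are not Datapace's API):

```python
from datetime import datetime, timedelta

def safe_to_archive(code_refs, last_query_at, window_days=90, now=None):
    """Propose removal only when BOTH signals agree: zero references
    in code AND zero queries inside the configured workload window.

    last_query_at: datetime of the most recent query observed touching
    the object (from workload stats), or None if never observed.
    """
    now = now or datetime.utcnow()
    quiet = last_query_at is None or now - last_query_at > timedelta(days=window_days)
    return code_refs == 0 and quiet
```

Either signal alone produces false positives: a legacy client keeps a code-orphaned column hot, and dead code keeps a never-queried table referenced. Requiring both is what makes the proposal safe.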

orphan inventory · workload + code · 0 recent queries, 0 code refs

experiments_v1 · table · last query 94d ago · 2.1 GB · 0 refs
user_flags_legacy · table · last query 120d ago · 380 MB · 0 refs
users.beta_opt_in · column · last query 61d ago · 48 MB · 0 refs
idx_sessions_legacy · index · never queried · 1.2 GB · 0 refs

safe to archive · total reclaim · 3.8 GB
03

Proposes archive, then drop

Datapace does not just propose DROP. It proposes the two-step pattern: create an archive_* table that copies the data into a cold-storage schema, then drop the original in a follow-up migration gated behind a review window you configure. You keep an escape hatch. The proposal includes row count, size reclaimed, FK dependencies that must be removed first, and estimated lock time.

cleanup plan · experiments_v1 · reversible for 30 days

step 1 · PR #412 · archive · opens today

CREATE TABLE archive.experiments_v1
AS SELECT * FROM experiments_v1;

-- plus one ALTER TABLE … DROP CONSTRAINT per FK reference

7-day review window

step 2 · PR #419 · drop · opens in 7d

DROP TABLE experiments_v1;

archive retained 30d, rollback available

code refs · 0
queries 90d · 0
fk deps · 0 blocking
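The two migrations above follow a fixed template, so the plan can be sketched as generated SQL. `cleanup_plan` is a hypothetical helper for illustration; the real proposal also carries row counts, FK dependencies, and lock estimates.

```python
def cleanup_plan(table, archive_schema="archive", review_days=7):
    """Emit the two migrations of the archive-then-drop pattern.

    Step 1 copies the data into a cold-storage schema; step 2 drops
    the original and should only merge after the review window.
    """
    archive_sql = (
        f"CREATE TABLE {archive_schema}.{table} AS\n"
        f"SELECT * FROM {table};"
    )
    drop_sql = f"DROP TABLE {table};"
    return {
        "step_1_archive": archive_sql,
        "step_2_drop": drop_sql,
        "review_window_days": review_days,
    }
```

Splitting the copy and the drop into two PRs is the escape hatch: if anything still needed the table, the archive copy is intact and the drop PR never merges.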

Who this is for

Engineering teams with long-lived production schemas
Platform teams doing periodic schema audits
Compliance-sensitive orgs reducing data surface area
Anyone whose database has grown 3x faster than their feature set

Example workflow

See it in action.

Stop regressions before they ship.

2-minute setup. Read-only Postgres connection. Results delivered in your repo and Slack.