Unused indexes, duplicates, bloated tables. Cleaned up every month.
Your index inventory rots. Indexes get added; nothing removes them. Duplicates accumulate. Bloat grows. Datapace audits on your schedule and opens cleanup PRs you can merge when ready.
Audit cadence
Monthly by default
Bloat detection
pgstattuple + pg_stats
Output
DROP or REINDEX PRs
The problem
Every index costs writes. Every bloated table costs reads. Nobody audits. The cost compounds until queries get slow and the database is 3x the size it needs to be, and everyone blames the data, not the indexes.
REINDEX CONCURRENTLY exists. DROP INDEX exists. They require someone to know which indexes are unused, which are duplicates, which tables have bloat beyond the threshold. That someone does not exist on your team.
How Datapace solves this
The fix, automated.
Continuous index usage tracking
Datapace reads pg_stat_user_indexes on a rolling window and identifies indexes that have not been used in 30, 60, or 90 days. It also detects duplicate indexes, where two distinct indexes cover the same columns, or where one is a strict prefix of another. Every flagged index includes its size, last-used timestamp, and scan count so you can decide with full context.
Drop candidates
2 · 2.6 GB
Duplicates
1 · 890 MB
Indexes scanned
89 total
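A sketch of the kind of query such an audit could start from, assuming a read-only connection. Note that `idx_scan` counts scans since the last statistics reset, which is why a rolling window matters; a one-shot query only sees the current counters:

```sql
-- Unused-index candidates: zero scans since stats reset,
-- excluding unique indexes (they enforce constraints even if never scanned).
SELECT s.schemaname,
       s.relname                                      AS table_name,
       s.indexrelname                                 AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
       s.idx_scan
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

Datapace adds the rolling window and last-used timestamps on top; the query above is the raw signal, not the full pipeline.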
Bloat analysis per table
Using pgstattuple where available and pg_stats otherwise, Datapace estimates dead tuple ratio and physical bloat per table. Tables that cross your configured threshold are flagged with a reclaim estimate, a proposed remediation (REINDEX, VACUUM FULL with guard, or pg_repack), and an off-peak window suggestion based on your traffic profile.
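Where the `pgstattuple` extension is installed, per-table bloat can be measured directly rather than estimated. A minimal check (the table name `sessions` is illustrative):

```sql
-- Requires: CREATE EXTENSION pgstattuple;
-- Exact dead-tuple and free-space figures for one table.
SELECT pg_size_pretty(table_len) AS table_size,
       dead_tuple_percent,
       free_percent
FROM pgstattuple('public.sessions');
```

`pgstattuple` reads the whole table, so it is run off-peak; on instances without the extension, the fallback estimate comes from `pg_stats` column statistics instead.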
Automated cleanup PRs
Each audit produces a small set of PRs: one to drop unused indexes (with size reclaimed and last-used dates), one to consolidate duplicates, one to REINDEX CONCURRENTLY the worst bloated indexes. Each PR is small, reviewable, and safe to merge or skip at your own cadence. Every proposed DROP is revalidated against the live workload right before it lands, so no index that just started serving a new query gets removed.
DROP INDEX CONCURRENTLY idx_users_email_lower;
DROP INDEX CONCURRENTLY idx_sessions_created_at;
4 indexes dropped · 3.8 GB reclaimed
DROP INDEX CONCURRENTLY idx_orders_user;
-- covered by idx_orders_user_id_status (user_id, status)
2 pairs consolidated · 890 MB reclaimed
REINDEX TABLE CONCURRENTLY sessions;
REINDEX TABLE CONCURRENTLY events;
3 tables · ~18 GB reclaimed
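For reference, exact duplicates (same table, same columns, same operator classes, same expression and predicate) can be surfaced from the catalog alone. This is a sketch of the exact-match case; prefix detection, as described above, needs additional logic on top:

```sql
-- Groups of indexes whose definitions are identical.
SELECT indrelid::regclass                   AS table_name,
       array_agg(indexrelid::regclass)      AS duplicate_indexes
FROM pg_index
GROUP BY indrelid,
         indkey::text,
         indclass::text,
         coalesce(indexprs::text, ''),
         coalesce(indpred::text,  '')
HAVING count(*) > 1;
```

In each duplicate group, only one index needs to survive; the PR keeps the one with the most scans and drops the rest.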
Who this is for
Example workflow
See it in action.
Stop regressions before they ship.
2-minute setup. Read-only Postgres connection. Results delivered in your repo and Slack.