10 Ways to Make SQLite Blazing Fast (With Real Examples)
SQLite powers more applications than any other database engine on Earth. Most of them are leaving performance on the table. Here's how to fix that.
Default SQLite is deliberately slow for safety -- five PRAGMAs and a few indexes can make it 50x faster without sacrificing reliability.
| # | Optimization | Impact |
|---|---|---|
| 1 | Enable WAL mode | Critical |
| 2 | Set synchronous = NORMAL | Critical |
| 3 | Add proper indexes | Critical |
| 4 | Batch writes in transactions | Critical |
| 5 | Increase page cache | High |
| 6 | Enable memory-mapped I/O | High |
| 7 | Stop using SELECT * | High |
| 8 | Keep temp tables in memory | Medium |
| 9 | Run ANALYZE after schema changes | Medium |
| 10 | VACUUM to defragment | Medium |
The first four alone account for 90% of the speedup. MyD1's AI Agent can analyze your tables and suggest missing indexes, rewrite slow queries, and recommend the right PRAGMAs for your workload.
SQLite gets a bad reputation for being "slow" or "not production-ready." That's wrong. SQLite can handle millions of rows, process thousands of queries per second, and outperform PostgreSQL and MySQL on read-heavy workloads -- if you know how to configure it.
The default settings are optimized for safety and compatibility, not speed. That's a deliberate choice by the SQLite team. But it means out-of-the-box SQLite is leaving 5x-50x performance on the table.
Here are 10 optimizations that will transform your SQLite performance. Each one includes real SQL you can run right now.
1. Enable WAL mode
This is the single most impactful change you can make. By default, SQLite uses journal mode DELETE, which locks the entire database on every write. WAL (Write-Ahead Logging) mode allows readers and writers to work simultaneously.
PRAGMA journal_mode = WAL;
What changes:
- Readers no longer block writers (and vice versa)
- Write performance improves 5x-50x depending on workload
- Multiple readers can operate concurrently
- Crash recovery is faster
You only need to run this once -- it persists across connections. There's almost no reason not to use WAL mode for any application.
If you're on Cloudflare D1, WAL mode is already enabled. This optimization applies to local SQLite databases -- development databases, mobile apps, desktop apps, or self-hosted SQLite.
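As a quick sanity check, here's a minimal sketch using Python's standard-library sqlite3 module (the file path is a throwaway) that enables WAL and verifies the mode actually took effect:

```python
import os
import sqlite3
import tempfile

# WAL needs a real file: in-memory databases report "memory" instead
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)

# The PRAGMA returns the journal mode now in effect, so you can verify it
mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]
print(mode)  # -> wal
conn.close()
```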
2. Set synchronous = NORMAL
By default, SQLite uses synchronous = FULL, which forces a full filesystem sync (fsync) after every transaction. This is the safest option -- but it's also painfully slow, especially on spinning disks or networked filesystems.
PRAGMA synchronous = NORMAL;
NORMAL is safe for WAL mode -- you might lose the last few milliseconds of data in a power failure, but the database will never corrupt. For the vast majority of applications, this is the right tradeoff.
When to use FULL: Only if you're running SQLite on unreliable hardware (power failures are common) and losing even a single committed transaction is unacceptable (financial systems, medical records).
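A small Python sketch to confirm the setting stuck -- note that PRAGMA synchronous reads back as an integer, not a keyword:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA synchronous = NORMAL")

# Reads back numerically: 0 = OFF, 1 = NORMAL, 2 = FULL, 3 = EXTRA
level = conn.execute("PRAGMA synchronous").fetchone()[0]
print(level)  # -> 1
conn.close()
```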
3. Add proper indexes
This sounds obvious, but it's the #1 cause of slow SQLite queries. Without an index, every query does a full table scan -- reading every single row to find matches. With an index, it jumps directly to the right rows.
-- If you query users by email:
SELECT * FROM users WHERE email = 'john@example.com';
-- Add an index:
CREATE INDEX idx_users_email ON users(email);
-- For queries that filter on multiple columns:
SELECT * FROM orders WHERE user_id = 42 AND status = 'pending';
-- Use a composite index (column order matters!):
CREATE INDEX idx_orders_user_status ON orders(user_id, status);
A query on a 1-million-row table goes from 200ms (full scan) to 0.1ms (index lookup). That's a 2000x improvement.
How to find missing indexes: Use EXPLAIN QUERY PLAN to see if SQLite is scanning the whole table:
EXPLAIN QUERY PLAN
SELECT * FROM orders WHERE user_id = 42;
-- Bad: "SCAN orders" (full table scan)
-- Good: "SEARCH orders USING INDEX idx_orders_user_id"
Use MyD1 to run EXPLAIN QUERY PLAN visually and spot missing indexes without digging through terminal output.
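The same check scripted in Python (table and index names are illustrative): the plan detail flips from a full scan to an index search once the index exists. Exact wording varies slightly by SQLite version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, status TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable "detail" column
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

query = "SELECT * FROM orders WHERE user_id = 42"
before = plan(query)  # e.g. "SCAN orders" -- full table scan
conn.execute("CREATE INDEX idx_orders_user_id ON orders(user_id)")
after = plan(query)   # e.g. "SEARCH orders USING INDEX idx_orders_user_id ..."
print(before)
print(after)
```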
4. Batch writes in transactions
By default, every individual INSERT runs in its own implicit transaction, and each transaction involves a disk sync. Inserting 10,000 rows means 10,000 disk syncs -- which can take the better part of a minute.
Wrap them in a single transaction and it takes milliseconds.
-- 10,000 individual transactions
-- Each one = fsync to disk
INSERT INTO logs VALUES (1, 'event_a');
INSERT INTO logs VALUES (2, 'event_b');
-- ... 9,998 more
-- Takes 30-60 seconds
BEGIN TRANSACTION;
INSERT INTO logs VALUES (1, 'event_a');
INSERT INTO logs VALUES (2, 'event_b');
-- ... 9,998 more
COMMIT;
-- Takes 50-200 milliseconds
In application code (Node.js, Python, etc.), always batch your writes:
// Node.js with better-sqlite3
const insert = db.prepare('INSERT INTO logs (id, event) VALUES (?, ?)');
const insertMany = db.transaction((rows) => {
for (const row of rows) insert.run(row.id, row.event);
});
insertMany(myData); // 10,000 rows in ~100ms
On Cloudflare D1, use the batch API for the same effect:
// Cloudflare D1 batch insert
const stmts = rows.map(row =>
env.DB.prepare('INSERT INTO logs (id, event) VALUES (?, ?)')
.bind(row.id, row.event)
);
await env.DB.batch(stmts); // Single round-trip
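In Python, the equivalent pattern is executemany inside one explicit transaction (a sketch using the standard-library sqlite3 module; the logs schema is the example one from above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, event TEXT)")

rows = [(i, f"event_{i}") for i in range(10_000)]

# One transaction around all 10,000 inserts: one commit, one sync
with conn:  # commits on success, rolls back on error
    conn.executemany("INSERT INTO logs (id, event) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(count)  # -> 10000
```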
5. Increase the page cache
SQLite's default page cache is only about 2MB. For any serious workload, that's tiny. Increasing it means more data stays in memory and fewer disk reads.
-- Set cache to 64MB (negative = KB)
PRAGMA cache_size = -65536;
-- Or 256MB for larger datasets
PRAGMA cache_size = -262144;
If your database is smaller than your available RAM, set the cache large enough to hold the entire thing. SQLite will load frequently accessed pages into memory and keep them there.
Rule of thumb: For a server or desktop app, use 10-25% of available RAM. For a mobile app, keep it under 32MB.
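Setting and verifying the cache size from Python (a minimal sketch; -65536 is the 64MB value from above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Negative values are in KiB, so -65536 means a 64MB page cache
conn.execute("PRAGMA cache_size = -65536")
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # -> -65536
conn.close()
```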
6. Enable memory-mapped I/O
Memory-mapped I/O lets SQLite access the database file directly through the OS virtual memory system, bypassing the userspace read/write calls. This can significantly speed up read-heavy workloads.
-- Enable mmap for up to 256MB
PRAGMA mmap_size = 268435456;
-- Or match your database size
-- (check with: SELECT page_count * page_size FROM pragma_page_count(), pragma_page_size();)
This is especially effective when your database fits in RAM. The OS handles caching transparently -- no double-buffering between SQLite's page cache and the OS page cache.
Caveat: Don't use mmap on networked filesystems (NFS, SMB) or if your database can be modified by another process simultaneously. On local disk? Turn it on.
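A Python sketch for setting it and reading back what was actually granted -- note that mmap only applies to on-disk databases, and some SQLite builds compile it out entirely (in which case the PRAGMA reads back 0):

```python
import os
import sqlite3
import tempfile

# mmap has no effect on in-memory databases, so use a real file
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA mmap_size = 268435456")

# Read back the limit actually granted (0 means mmap is unavailable)
granted = conn.execute("PRAGMA mmap_size").fetchone()[0]
print(granted)
conn.close()
```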
7. Stop using SELECT *
SELECT * reads every column of every matching row. If your table has a 10KB text column and you only need the id and name, you're transferring 100x more data than necessary.
-- Reads ALL columns including
-- large text/blob fields
SELECT * FROM articles
WHERE published = 1
ORDER BY date DESC
LIMIT 20;
-- Only reads what you display
SELECT id, title, date, summary
FROM articles
WHERE published = 1
ORDER BY date DESC
LIMIT 20;
This matters even more with covering indexes. If an index contains all the columns you need, SQLite can answer the query entirely from the index -- without touching the table at all:
-- This index "covers" the query above
CREATE INDEX idx_articles_published ON articles(published, date DESC, id, title, summary);
-- Now the query never reads the articles table -- pure index scan
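You can confirm the covering behavior with EXPLAIN QUERY PLAN. In this Python sketch the index stores date ascending (SQLite can walk an index backwards to satisfy the DESC ordering), and the plan detail should mention COVERING INDEX:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE articles
    (id INTEGER PRIMARY KEY, title TEXT, date TEXT, summary TEXT, published INTEGER)""")
conn.execute("""CREATE INDEX idx_articles_published
    ON articles(published, date, id, title, summary)""")

# Every selected column lives in the index, so the table is never touched
detail = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id, title, date, summary FROM articles "
    "WHERE published = 1 ORDER BY date DESC"
).fetchone()[-1]
print(detail)
```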
8. Keep temp tables in memory
When SQLite needs temporary storage (for sorting, grouping, or creating temp tables), it writes to disk by default. Force it into memory:
PRAGMA temp_store = MEMORY;
This speeds up queries with ORDER BY, GROUP BY, DISTINCT, and subqueries -- anything that requires intermediate results. If you have enough RAM (and you usually do), there's no reason to let SQLite spill temp data to disk.
9. Run ANALYZE after schema changes
SQLite's query planner decides which index to use based on statistics. Without stats, it guesses -- and sometimes guesses wrong.
-- Collect statistics about all tables and indexes
ANALYZE;
-- Or for a specific table
ANALYZE orders;
Run ANALYZE after:
- Creating new indexes
- Bulk inserting or deleting large amounts of data
- Any schema change
The stats are stored in a hidden sqlite_stat1 table and persist across connections. This is a one-time cost that helps the planner pick better indexes for subsequent queries.
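You can watch the stats appear from Python (the schema and index are illustrative). sqlite_stat1 holds one row per analyzed index, with the table's row count and average rows per key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)")
conn.execute("CREATE INDEX idx_orders_user_id ON orders(user_id)")
conn.executemany("INSERT INTO orders (user_id) VALUES (?)",
                 [(i % 50,) for i in range(1000)])

conn.execute("ANALYZE")

# One row per analyzed index: (table, index, "rowcount rows_per_key ...")
stats = conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)
```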
10. VACUUM to defragment
When you delete rows, SQLite doesn't shrink the file -- it leaves empty pages. Over time, the database gets fragmented and bloated. VACUUM rewrites the entire database into a clean, compact file.
-- Rebuild the entire database (locks during operation)
VACUUM;
-- Or enable auto-vacuum for new databases
PRAGMA auto_vacuum = INCREMENTAL;
For ongoing maintenance without locking, use incremental vacuum:
-- Free up to 100 pages of unused space
PRAGMA incremental_vacuum(100);
When to VACUUM: After deleting a large amount of data (>20% of the database), or periodically as maintenance (weekly/monthly). Don't run it after every delete -- it rewrites the entire file.
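Here's a Python sketch demonstrating the effect: after a bulk delete, PRAGMA freelist_count shows the orphaned pages, and VACUUM reclaims them (the schema and row counts are arbitrary):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, payload TEXT)")
with conn:
    conn.executemany("INSERT INTO logs (payload) VALUES (?)",
                     [("x" * 500,) for _ in range(5000)])

# Deleting rows frees pages internally but does not shrink the file
with conn:
    conn.execute("DELETE FROM logs")
free_before = conn.execute("PRAGMA freelist_count").fetchone()[0]

conn.execute("VACUUM")  # rewrites the file without the free pages
free_after = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(free_before, free_after)  # free_before > 0, free_after == 0
conn.close()
```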
The Copy-Paste Setup
Run these PRAGMAs at the start of every connection for optimal performance:
-- Run once (persists across connections)
PRAGMA journal_mode = WAL;
-- Run on every new connection
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -65536; -- 64MB cache
PRAGMA temp_store = MEMORY;
PRAGMA mmap_size = 268435456; -- 256MB mmap
PRAGMA foreign_keys = ON; -- Not performance, but you should always enable this
In Node.js with better-sqlite3:
import Database from 'better-sqlite3';
const db = new Database('app.db');
// One-time setup
db.pragma('journal_mode = WAL');
// Per-connection optimizations
db.pragma('synchronous = NORMAL');
db.pragma('cache_size = -65536');
db.pragma('temp_store = MEMORY');
db.pragma('mmap_size = 268435456');
db.pragma('foreign_keys = ON');
In Python:
import sqlite3
conn = sqlite3.connect('app.db')
conn.execute('PRAGMA journal_mode = WAL')
conn.execute('PRAGMA synchronous = NORMAL')
conn.execute('PRAGMA cache_size = -65536')
conn.execute('PRAGMA temp_store = MEMORY')
conn.execute('PRAGMA mmap_size = 268435456')
conn.execute('PRAGMA foreign_keys = ON')
In Swift (macOS / iOS):
import SQLite3
var db: OpaquePointer?
sqlite3_open("app.db", &db)
sqlite3_exec(db, "PRAGMA journal_mode = WAL", nil, nil, nil)
sqlite3_exec(db, "PRAGMA synchronous = NORMAL", nil, nil, nil)
sqlite3_exec(db, "PRAGMA cache_size = -65536", nil, nil, nil)
sqlite3_exec(db, "PRAGMA temp_store = MEMORY", nil, nil, nil)
sqlite3_exec(db, "PRAGMA mmap_size = 268435456", nil, nil, nil)
sqlite3_exec(db, "PRAGMA foreign_keys = ON", nil, nil, nil)
Benchmark: Before vs After
Here's what these optimizations look like on a real 500,000-row table:
| Operation | Default SQLite | Optimized SQLite | Speedup |
|---|---|---|---|
| Single row lookup (by ID) | 0.5ms | 0.02ms | 25x |
| Full table scan (no index) | 180ms | 0.1ms (with index) | 1800x |
| Insert 10,000 rows | 32,000ms | 85ms | 376x |
| Complex JOIN query | 450ms | 12ms | 37x |
| COUNT(*) on large table | 95ms | 28ms | 3.4x |
| Concurrent reads (10 threads) | Blocked (serialized) | Parallel (WAL) | 10x |
The biggest wins come from WAL mode + synchronous NORMAL + proper indexes + batched transactions. Those four changes alone account for 90% of the improvement.
Common Mistakes That Kill Performance
- Opening a new connection for every query. SQLite connections are cheap but not free. Reuse connections. Use connection pooling or a singleton.
- Not closing prepared statements. Leaked statements hold locks and waste memory. Always finalize them.
- Indexing every column. Indexes speed up reads but slow down writes. Only index columns you actually filter, sort, or join on.
- Using TEXT for dates, then sorting. ORDER BY date_column on text dates works only if your format is ISO 8601 (YYYY-MM-DD). Use any other format and your sort order breaks.
- Storing large blobs in the main table. Put files in a separate table or on disk. Large blobs bloat the page cache and slow down every query that touches that table.
SQLite Is Not Slow. Your Configuration Is.
SQLite is embedded in every smartphone, every browser, every macOS and Windows machine. Apple, Google, Mozilla, and Airbnb all rely on it in production. It likely handles more queries per day than all other database engines combined.
The difference between "SQLite is too slow for my project" and "SQLite handles everything I throw at it" is usually five lines of PRAGMA statements and a few well-placed indexes.
Stop blaming the database. Start configuring it.
D1 is SQLite at the edge -- most PRAGMA optimizations are handled for you automatically. But you still need to think about indexes, query structure, and batch operations. Use MyD1 to visually browse your D1 tables, run EXPLAIN QUERY PLAN, and spot missing indexes -- all from a native desktop app. Download MyD1 free.
Related: D1 vs MySQL vs PostgreSQL · Build a Full-Stack App on Cloudflare for Free · Managing D1 Without the Terminal · Why AWS Is So Slow