8 Swift Performance Tricks That Made Our macOS App Instantly Faster
Real optimizations we applied to a production SwiftUI database client. Every technique includes copy-paste-ready code and a clear explanation of why it works.
Your SwiftUI app is re-rendering everything, 10 times per second, because of one timer. Here is how to fix that, and seven other performance killers.
TL;DR
Decouple timers from @Observable stored properties, debounce onChange recomputations with DispatchWorkItem, cache NSView hierarchy lookups, hoist formatters to static let, mark classes final, use @inline(__always) on hot-path properties, replace regex with hasPrefix, and push sort/pagination to the database. These eight changes eliminated thousands of redundant view evaluations in our production macOS app.
We build a native macOS database client in pure Swift and SwiftUI. No Electron, no web views. When you're rendering data grids with thousands of cells and running queries against remote databases, every millisecond matters. Here are 8 optimizations we shipped — each one made a measurable difference.
Decouple Timers from @Observable Stored Properties

The problem: We had a query execution timer that called updateElapsed() every 100ms, mutating a property on an @Observable object. Since SwiftUI tracks every property read, this triggered a full re-evaluation of every view that read any property from that object — 10 times per second. Sidebar, toolbar, data grid, footer — all re-rendered for a timer that only one small overlay needed.
The fix: Make the elapsed time computed on-read from startTime instead of storing it as a mutating property. Then use a TimelineView in the one view that displays the counter — the polling cost stays local to that single view.
```swift
// BEFORE
@Observable final class QueryExecution {
    var isRunning = false
    var startTime: Date?
    var elapsedSeconds: Double = 0   // mutated 10x/sec!

    func updateElapsed() {
        elapsedSeconds = Date().timeIntervalSince(startTime!)
    }
}

// In AppState.executeQuery():
let timer = Task { @MainActor in
    while !Task.isCancelled {
        queryExecution.updateElapsed()   // triggers ALL views
        try? await Task.sleep(nanoseconds: 100_000_000)
    }
}
```
```swift
// AFTER
@Observable final class QueryExecution {
    var isRunning = false
    var startTime: Date?

    // Computed on-read — no mutation, no notification
    var elapsedSeconds: Double {
        guard let s = startTime else { return 0 }
        return Date().timeIntervalSince(s)
    }
}

// In the progress overlay ONLY:
TimelineView(.periodic(from: .now, by: 0.2)) { ctx in
    Text(execution.formattedElapsed)
        .monospacedDigit()
}
```
Key insight: @Observable tracks stored property writes. A computed property that reads startTime only triggers views that call it — and only when startTime itself changes (twice: on start and on finish). The TimelineView is the only thing polling, and its cost is scoped to its own view body.
Debounce onChange Recomputations

The problem: Our data grid tracked 7 properties via .onChange modifiers: column names, row count, search text, sort column, sort direction, current page, and text length. Each one triggered a full row recomputation — O(rows × columns). If two changed in the same runloop cycle (e.g. sortColumn and sortAscending during a sort toggle), the grid rebuilt twice.
The fix: Instead of calling recomputeRows() directly, schedule it via a DispatchWorkItem on the main queue. If a second onChange fires before the first executes, the first is cancelled. Multiple rapid changes collapse into a single recomputation.
```swift
@State private var pendingRecompute: DispatchWorkItem? = nil

/// Coalesce rapid-fire onChange calls into a single recompute.
/// If N properties change in one runloop tick, only 1 recompute fires.
private func scheduleRecompute() {
    pendingRecompute?.cancel()
    let work = DispatchWorkItem { recomputeRows() }
    pendingRecompute = work
    DispatchQueue.main.async(execute: work)
}

// In your onChange observers:
.onChange(of: sortColumn) { scheduleRecompute() }
.onChange(of: sortAscending) { scheduleRecompute() }
.onChange(of: searchText) { scheduleRecompute() }
.onChange(of: currentPage) { scheduleRecompute() }
```
This is the same pattern high-performance renderers use: coalesce display updates into the next frame. A terminal emulator doing 60 FPS uses the exact same technique — queue one render per frame, skip duplicates.
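The coalescing idea, stripped to its core (all names here are illustrative, not from our codebase): inputs may invalidate state any number of times, but the render runs at most once per tick.

```swift
/// Minimal sketch of update coalescing: many invalidations, one render per tick.
final class RenderCoalescer {
    private(set) var renderCount = 0
    private var dirty = false

    /// Called whenever any input changes -- cheap, just sets a flag.
    func invalidate() { dirty = true }

    /// Called once per frame (e.g. from a display link); renders at most once.
    func tick() {
        guard dirty else { return }
        dirty = false
        renderCount += 1   // stand-in for the actual recompute/render
    }
}

let coalescer = RenderCoalescer()
// Seven properties change in the same "frame"...
for _ in 0..<7 { coalescer.invalidate() }
coalescer.tick()   // ...but only one render happens
coalescer.tick()   // nothing dirty: no extra render
```

The DispatchWorkItem version above is the same state machine, with the main queue's next runloop pass playing the role of tick().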
Cache NSView Hierarchy Lookups

The problem: SwiftUI doesn't expose its backing NSScrollView. To implement keyboard scrolling (arrow keys, Cmd+arrows), we had to walk the entire NSView tree to find it. This recursive walk happened on every single key press.
The fix: Cache the result in a @State property. The lookup runs once; subsequent calls use the cached reference. Invalidate if the window changes.
```swift
@State private var cachedScrollView: NSScrollView? = nil

private func scrollGrid(dx: CGFloat, dy: CGFloat) {
    guard let window = NSApp.keyWindow else { return }

    // Fast path: use cached reference if still valid
    if let cached = cachedScrollView,
       cached.window != nil,
       cached.documentView != nil {
        performScroll(cached, dx: dx, dy: dy)
        return
    }

    // Slow path: walk view hierarchy once, cache result
    guard let sv = findLargestScrollView(in: window.contentView!) else { return }
    cachedScrollView = sv
    performScroll(sv, dx: dx, dy: dy)
}
```
This is a general principle: never search for something repeatedly if the answer doesn't change. View hierarchy walks, superview lookups, trait collection queries — cache them all.
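The pattern generalizes to any expensive lookup keyed by a context that rarely changes. A minimal sketch, with a hypothetical CachedLookup type standing in for the scroll-view cache (the closure plays the role of the NSView-tree walk):

```swift
/// Generic sketch of "search once, cache, invalidate on context change".
final class CachedLookup<Key: Equatable, Value> {
    private var cachedKey: Key?
    private var cachedValue: Value?
    private(set) var searchCount = 0
    private let search: (Key) -> Value

    init(search: @escaping (Key) -> Value) { self.search = search }

    func value(for key: Key) -> Value {
        // Fast path: same context, reuse the cached answer
        if key == cachedKey, let v = cachedValue { return v }
        // Slow path: run the expensive search and remember the result
        searchCount += 1
        let v = search(key)
        cachedKey = key
        cachedValue = v
        return v
    }
}

// Pretend the closure is an expensive view-tree walk keyed by window ID.
let lookup = CachedLookup<Int, String> { windowID in "scrollView-\(windowID)" }
_ = lookup.value(for: 1)   // walks the hierarchy
_ = lookup.value(for: 1)   // cached: no walk
_ = lookup.value(for: 2)   // window changed: walks again
```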
Hoist Formatters to static let

The problem: NumberFormatter(), DateFormatter(), and ISO8601DateFormatter() are surprisingly expensive to create — each one allocates locale data, calendar info, and Unicode tables. We found three places where formatters were created inside computed properties or per-cell render functions.
In a data grid with 500 rows and 10 columns, that's 5,000 formatter allocations per table load. Each one takes ~0.1-0.2ms. Total: up to 1 second of pure formatter overhead.
```swift
// BAD: allocates a new formatter per call
func formatNumber(_ n: Int) -> String {
    let f = NumberFormatter()   // NEW alloc!
    f.numberStyle = .decimal
    f.groupingSeparator = ","
    return f.string(from: NSNumber(value: n)) ?? "\(n)"
}

// Also bad: inside enum computed properties
case .date(let value):
    let f = ISO8601DateFormatter()   // per cell!
    return f.string(from: value)
```
```swift
// Allocated once at program startup
private static let decimalFmt: NumberFormatter = {
    let f = NumberFormatter()
    f.numberStyle = .decimal
    f.groupingSeparator = ","
    return f
}()

func formatNumber(_ n: Int) -> String {
    Self.decimalFmt.string(from: NSNumber(value: n)) ?? "\(n)"
}

// Same for DateFormatter, ISO8601DateFormatter, etc.
private static let isoFmt = ISO8601DateFormatter()
```
Rule of thumb: If a formatter is used more than once, make it a static let. This applies to NumberFormatter, DateFormatter, ISO8601DateFormatter, ByteCountFormatter, MeasurementFormatter — all of them.
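A quick way to check the overhead on your own machine — a hypothetical micro-benchmark, not production code (absolute numbers vary wildly by hardware; the ratio is what matters, and the locale is pinned so output is deterministic):

```swift
import Foundation

// Shared formatter, allocated once (locale pinned for deterministic output)
let sharedFmt: NumberFormatter = {
    let f = NumberFormatter()
    f.numberStyle = .decimal
    f.groupingSeparator = ","
    f.locale = Locale(identifier: "en_US")
    return f
}()

func timeIt(_ body: () -> Void) -> TimeInterval {
    let start = Date()
    body()
    return Date().timeIntervalSince(start)
}

let n = NSNumber(value: 1_234_567)

// Per-"cell" allocation, as in the bad example above
let allocating = timeIt {
    for _ in 0..<1_000 {
        let f = NumberFormatter()
        f.numberStyle = .decimal
        _ = f.string(from: n)
    }
}

// Reusing the shared instance
let reusing = timeIt {
    for _ in 0..<1_000 { _ = sharedFmt.string(from: n) }
}

print("allocating: \(allocating)s, reusing: \(reusing)s")
```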
Mark Classes final
Unless your class is explicitly designed for subclassing, mark it final. This lets the Swift compiler use static dispatch instead of virtual dispatch for method calls — no vtable lookup, direct function call.
The difference per call is ~2 nanoseconds. But in a data grid rendering 5,000 cells, each calling .stringValue, .isNull, .horizontalPadding, it adds up.
```swift
// Every class in our codebase is final:
final class AppState { ... }
final class MySQLAdapter: DatabaseConnection { ... }
final class ToastManager { ... }
final class QueryHistoryManager { ... }
final class SSHTunnelManager { ... }
final class AppDelegate: NSObject, NSApplicationDelegate { ... }
```
This is a zero-effort optimization. grep -r "^class " src/ to find every class, add final. Done.
@inline(__always) for Hot-Path Properties
Small computed properties that are called thousands of times per render — like isNull, stringValue, or verticalPadding — benefit from forced inlining. The optimizer usually inlines these on its own, but @inline(__always) removes the guesswork.
```swift
enum DatabaseValue: Sendable, Hashable {
    case null
    case string(String)
    case int(Int64)
    // ...

    @inline(__always)
    var isNull: Bool {
        if case .null = self { return true }
        return false
    }

    @inline(__always)
    var stringValue: String { description }
}

enum DataGridDensity {
    @inline(__always)
    var verticalPadding: CGFloat {
        switch self {
        case .condensed: return 4
        case .normal: return 10
        case .large: return 16
        }
    }
}
```
Use this on properties that are: (a) trivially small, (b) called in tight loops or per-cell renders, and (c) always return quickly. Don't use it on anything that allocates or does I/O.
Replace Regex with hasPrefix

The problem: Our smart search bar uses a SQL-like syntax: > 198, <= 50, != 0. The original implementation used NSRegularExpression to parse the operator and value — compiled a regex pattern, created match objects, extracted ranges.
The fix: The set of operators is fixed and small (>=, <=, !=, <>, >, <, =). A simple hasPrefix loop is faster and allocates nothing.
```swift
// BEFORE: compiled regex per parse
let pattern = #"^(>=|<=|!=|<>|>|<|=)\s*(.+)$"#
let regex = try NSRegularExpression(pattern: pattern)
let match = regex.firstMatch(in: input, ...)
// extract range, create substrings...
```
```swift
// AFTER: ordered longest-first so ">=" matches before ">"
let operators = [">=", "<=", "!=", "<>", ">", "<", "="]
for op in operators {
    if input.hasPrefix(op) {
        let val = input.dropFirst(op.count)
            .trimmingCharacters(in: .whitespaces)
        if Double(val) != nil {
            return .comparison(op: op, value: val)
        }
    }
}
```
This is a micro-optimization for a function called once per keystroke — but the principle matters: don't use regex for fixed pattern matching. hasPrefix, hasSuffix, contains, and split are almost always faster for structured inputs.
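For reference, here is the prefix parser packaged as a self-contained function you can drop into a playground. The ComparisonFilter result type is illustrative, not the exact one from our codebase:

```swift
import Foundation

/// Illustrative result type -- a stand-in for the app's real filter model.
enum ComparisonFilter: Equatable {
    case comparison(op: String, value: String)
}

func parseComparison(_ input: String) -> ComparisonFilter? {
    // Ordered longest-first so ">=" matches before ">"
    let operators = [">=", "<=", "!=", "<>", ">", "<", "="]
    for op in operators where input.hasPrefix(op) {
        let val = input.dropFirst(op.count)
            .trimmingCharacters(in: .whitespaces)
        // Only accept numeric right-hand sides
        if Double(val) != nil {
            return .comparison(op: op, value: val)
        }
    }
    return nil
}

let a = parseComparison(">= 198")   // matches ">=" before ">"
let b = parseComparison("> abc")    // nil: value is not numeric
```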
Push Sort and Pagination to the Database

The problem: Our data grid was sorting rows client-side — sorting the 50 rows on the current page, then fetching the next page independently. Page 2 showed the "second page of unsorted data, re-sorted locally." The results were completely wrong across page boundaries.
The fix: When the user clicks a column header, inject ORDER BY into the SQL query. When they change pages, use LIMIT/OFFSET. The database does the work, and every page shows the correct slice of globally-sorted data.
```swift
/// Append ORDER BY + LIMIT/OFFSET to any SQL query
private func appendOrderAndLimit(to sql: String) -> String {
    var result = sql

    // Server-side sort
    if let col = sortColumn {
        result += " ORDER BY `\(col)` \(sortAscending ? "ASC" : "DESC")"
    }

    // Server-side pagination
    let offset = (currentPage - 1) * limit
    result += " LIMIT \(limit)"
    if offset > 0 { result += " OFFSET \(offset)" }
    return result
}

// On column header click:
private func handleSort(column: String) {
    if sortColumn == column {
        sortAscending.toggle()
    } else {
        sortColumn = column
        sortAscending = true
    }
    currentPage = 1   // reset to first page
    Task { await executeSQL(currentSQL) }
}
```
This is both a correctness fix and a performance win. The database has indexes — it can sort millions of rows in milliseconds. Client-side sort on a 50-row page is always wrong for paginated data.
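A nice side effect of keeping the builder a pure string transformation: it can be unit-tested without a database connection. A sketch with the view-model state lifted into parameters (the parameter names are illustrative):

```swift
/// The ORDER BY / LIMIT / OFFSET builder as a pure, testable function.
func appendOrderAndLimit(to sql: String,
                         sortColumn: String?,
                         sortAscending: Bool,
                         currentPage: Int,
                         limit: Int) -> String {
    var result = sql
    // Server-side sort
    if let col = sortColumn {
        result += " ORDER BY `\(col)` \(sortAscending ? "ASC" : "DESC")"
    }
    // Server-side pagination
    let offset = (currentPage - 1) * limit
    result += " LIMIT \(limit)"
    if offset > 0 { result += " OFFSET \(offset)" }
    return result
}

let page1 = appendOrderAndLimit(to: "SELECT * FROM users",
                                sortColumn: "age", sortAscending: false,
                                currentPage: 1, limit: 50)
// SELECT * FROM users ORDER BY `age` DESC LIMIT 50

let page3 = appendOrderAndLimit(to: "SELECT * FROM users",
                                sortColumn: nil, sortAscending: true,
                                currentPage: 3, limit: 50)
// SELECT * FROM users LIMIT 50 OFFSET 100
```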
The Underlying Principles
Every optimization above follows one of three rules:
1. Minimize observation scope
Only notify views that actually need updating. Computed properties, TimelineView, and split observable objects all serve this goal.
2. Never compute the same thing twice
Cache formatter instances, view hierarchy lookups, column widths, and computed indices. Invalidate only when the underlying data actually changes.
3. Push work to the right layer
Databases sort faster than Swift arrays. hasPrefix matches faster than NSRegularExpression. Static dispatch is faster than virtual dispatch. Always ask: "is there a cheaper way to get the same answer?"
What Each Fix Actually Changed
| Fix | What it eliminated | Impact |
|---|---|---|
| Decoupled timer | 10 stored property writes/sec | Critical |
| Debounced recompute | N redundant onChange callbacks → 1 | Critical |
| Server-side sort | Wrong results across page boundaries | Critical |
| Cached scroll view | Repeated NSView tree walks per keystroke | High |
| Static formatters | Thousands of per-cell allocations | High |
| final / @inline | Virtual dispatch overhead on hot paths | Free |
| Prefix check | NSRegularExpression allocation per search | Free |
A Common Misconception About @Observable
You might read advice suggesting you should split a large @Observable class into smaller objects so that "views only subscribe to the properties they need." This was necessary with the old ObservableObject + @Published pattern, which used a single objectWillChange publisher — any property change notified all subscribers.
Swift 5.9's @Observable macro already does property-level tracking. When a view reads appState.isConnected, SwiftUI tracks only that specific property. Changes to appState.selectedTable or appState.isLoadingData will not re-evaluate that view.
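You can verify this property-level tracking outside SwiftUI with the Observation framework's withObservationTracking, which is the same mechanism SwiftUI uses under the hood. The DemoState class here is a stand-in for an app-level observable (requires Swift 5.9+):

```swift
import Observation

@Observable final class DemoState {
    var isConnected = false
    var selectedTable: String? = nil
}

// Mutable flag box so the onChange closure can be Sendable
final class Flag: @unchecked Sendable { var value = false }

let state = DemoState()
let notified = Flag()

// Only properties actually *read* inside the closure are tracked --
// this mimics a view body that reads isConnected and nothing else.
withObservationTracking {
    _ = state.isConnected
} onChange: {
    notified.value = true   // fires when a tracked property will change
}

state.selectedTable = "users"        // untracked property: no notification
let afterUntracked = notified.value  // still false

state.isConnected = true             // tracked property: notification fires
let afterTracked = notified.value    // true
```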
The real danger isn't reading from a large observable — it's writing to stored properties unnecessarily. That's exactly what the timer fix addressed: a Task was mutating elapsedSeconds (a stored property) 10 times per second. Every write triggers observation. By making it computed, we eliminated the writes entirely. The TimelineView polls the computed value locally — no stored property is touched, no notification is fired.
So don't split your observable for performance — split it for organizational clarity if the class gets unwieldy. Focus instead on eliminating unnecessary writes to stored properties: replace timers with computed values, debounce rapid-fire mutations, and push work to the view layer (via TimelineView or local @State) when only one view needs the result.
MyD1 uses these same optimizations internally — and its AI Agent applies similar performance thinking to your database queries. If you work with SQLite, MySQL, PostgreSQL, or Cloudflare D1 on macOS, give it a try.
Built with Swift and SwiftUI. No Electron. No compromises.
myd1.app