PowerSync has supported OPFS as a SQLite VFS since early 2025: https://github.com/powersync-ja/powersync-js/pull/418


This is great to see and I like the simplicity of the approach. You can also look at PowerSync (which I work on). It's in a similar space to ElectricSQL: it syncs to SQLite on the client side and provides built-in reactivity. On the web, it uses wa-sqlite with either OPFS or IndexedDB. It also takes care of things like multi-tab support on the web, and queueing and uploading client-side mutations to the backend.
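
To make that concrete, here's a minimal sketch of what this looks like with @powersync/web. The schema (a single "lists" table), the dbFilename, and the render() function are placeholders, and the exact API surface may differ between versions:

  import { PowerSyncDatabase, Schema, Table, column } from '@powersync/web';

  // Placeholder schema: one "lists" table with a single text column.
  const AppSchema = new Schema({
    lists: new Table({ name: column.text })
  });

  const db = new PowerSyncDatabase({
    schema: AppSchema,
    database: { dbFilename: 'app.sqlite' } // persisted via OPFS or IndexedDB
  });

  // db.connect(connector) would attach your backend connector for
  // syncing and uploading queued mutations (not shown here).

  // Placeholder for whatever updates your UI.
  const render = (rows: unknown[]) => console.log(rows);

  // Reactive query: re-emits whenever the underlying tables change,
  // whether from local writes, another tab, or data synced from the server.
  for await (const result of db.watch('SELECT * FROM lists')) {
    render(result.rows?._array ?? []);
  }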


This site also has a directory of devtools: https://lofi.so/


You can look at PowerSync: https://www.powersync.com/


Realm's sync functionality (Atlas Device Sync) has been deprecated by MongoDB: https://www.mongodb.com/docs/atlas/app-services/sync/device-...


I really enjoyed the talk at Local-First Conf today — well done. I thought it was very well explained and made a compelling case for the event-sourcing-materialized-into-SQLite architecture.

Thank you for championing SQLite and especially OPFS Wasm SQLite on the web — we (PowerSync) are clearly also big proponents of it, so we love to see other projects having success with it too.


I am not sure about Turso, but I've seen a few different approaches to this in other sync engine architectures:

1. At the database level: using something like row-level security (RLS) in Postgres.

2. At the backend level: the sync engine processes write operations via the backend API, where custom validation and authorization logic can be applied (see the sketch after this list).

3. At the sync engine level: if the sync engine processes the write operations, there can be some kind of authorization layer, similar to RLS, enforced by the sync engine on the backend.
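
As a rough sketch of option 2 — all endpoint paths, table names, and helper functions here are hypothetical, not any specific engine's API:

  import express from 'express';

  // Stubs for your own auth and persistence logic.
  declare function authenticate(req: express.Request): Promise<string>;
  declare function applyToDatabase(op: WriteOp): Promise<void>;

  type WriteOp = { type: 'insert' | 'update' | 'delete'; table: string; data: any };

  const app = express();
  app.use(express.json());

  // The client (or sync engine) posts queued client-side mutations here.
  app.post('/sync/upload', async (req, res) => {
    const userId = await authenticate(req);
    for (const op of req.body.operations as WriteOp[]) {
      // Authorization: only allow writes to rows this user owns.
      if (op.table === 'documents' && op.data.owner_id !== userId) {
        return res.status(403).json({ error: 'not allowed' });
      }
      // Validation: reject malformed operations instead of syncing them onward.
      if (op.type === 'insert' && op.table === 'documents' && !op.data.title) {
        return res.status(400).json({ error: 'title is required' });
      }
      await applyToDatabase(op); // e.g. an INSERT/UPDATE against Postgres
    }
    res.json({ ok: true });
  });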


It's definitely quite a hard engineering problem to solve if you try to cover a wide range of use cases, and then layer things like permissions/authorization and scalability on top of that.


There are niche use cases where the former (working offline for days to weeks) is useful and even critical - like certain field service use cases. Surviving glitches in network connectivity is useful for mainstream/consumer applications and for users in general, especially those on mobile.

In my experience, it can affect the architecture and performance in a significant way. If a client can go offline for an arbitrary period of time, doing a delta sync when it comes back online is trickier, since we need to sync a specific range of operation history (and this needs to be adjusted for the specific scope/permissions that the client has access to). If you scale a system up to thousands or millions of clients, having them all run arbitrary range queries doesn't scale well. For this reason I've seen sync engines simply force a client to do a complete re-sync if it "falls behind" on deltas for too long (e.g. more than a day or so). Maintaining an operation log that is structured and indexed for querying arbitrary ranges of operations (for a specific scope of data) works well.
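
A rough sketch of what that checkpoint-based delta sync can look like on the server — the table, column, and helper names are made up for illustration:

  type Op = { opId: number; table: string; rowId: string; data: unknown };

  // Stubs for the storage layer.
  declare function oldestRetainedOpId(scope: string): Promise<number>;
  declare function snapshotFor(scope: string): Promise<unknown>;
  declare function query(sql: string, params: unknown[]): Promise<Op[]>;

  async function deltaSync(clientCheckpoint: number, scope: string) {
    // History older than the compaction horizon is no longer queryable.
    if (clientCheckpoint < await oldestRetainedOpId(scope)) {
      // Client fell too far behind: force a complete re-sync.
      return { fullResync: true, snapshot: await snapshotFor(scope) };
    }
    // Range query over the log; a composite index on (scope, op_id)
    // keeps this cheap even with many clients at different checkpoints.
    const ops = await query(
      'SELECT * FROM op_log WHERE scope = ? AND op_id > ? ORDER BY op_id',
      [scope, clientCheckpoint]
    );
    return { fullResync: false, ops, checkpoint: ops.at(-1)?.opId ?? clientCheckpoint };
  }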


I would say this is why server-authoritative systems that allow for custom conflict-resolution logic in the backend work well in practice (like Replicache, PowerSync and Zero - custom mutators coming in beta for the latter). Predefined deterministic distributed conflict resolution, such as CRDT data structures, works well for certain use cases like text editing, but many other use cases require deeper customizability based on various factors, like you said.
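
To sketch the custom-mutator pattern (names and the transaction interface are illustrative, not any specific product's API): the client applies a named mutation optimistically, the server later re-runs the same mutation against authoritative state, and the server's outcome is what syncs back.

  interface Tx {
    get(table: string, id: string): Promise<any>;
    put(table: string, row: any): Promise<void>;
  }

  type Mutation = { id: number; name: string; args: any };

  const mutators: Record<string, (tx: Tx, args: any) => Promise<void>> = {
    // Re-run on the server with current authoritative state. If the client's
    // optimistic prediction conflicts, the server result wins and the client
    // rebases/rolls back when it receives the authoritative data.
    async reserveSeat(tx, { seatId, userId }) {
      const seat = await tx.get('seats', seatId);
      if (seat.reservedBy && seat.reservedBy !== userId) {
        return; // someone else won the seat; the client's write is dropped
      }
      await tx.put('seats', { ...seat, reservedBy: userId });
    }
  };

  async function push(mutations: Mutation[], tx: Tx) {
    for (const m of mutations) {
      await mutators[m.name]?.(tx, m.args); // applied in order, server-side
    }
  }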


CRDTs fall flat for rich text editing, though. So many nasty edge cases, and nobody has solved them all, despite their claims.


Have you tried Loro?



Server-authoritative conflict resolution kind of mirrors my thinking as well: having resolution work like multiplayer netcode, where the client and server may or may not attempt to resolve recent conflicts, but the server has the final say on state. Just not sure how this plays out when a client starts dropping conflicting data because the server says so...


