I'm always glad to see people experimenting with different approaches to building and deploying apps. That said, the general idea of this Serverfree approach doesn't appeal to me. On any given day, I'll use 4 different devices. I need my data to synchronize seamlessly. I wouldn't use a program (in-browser or traditionally installed application) unless I can synchronize that data either by storing files in a Dropbox-like program or in the cloud. I don't want to have to remember which computer/browser combination I was working on.
For me the sweet spot is applications that store data on the local file system, but where the data is synchronized using something like NextCloud, Dropbox (optionally encrypted with something like Cryptomator), or iCloud, and where the applications are built to support that synchronization by merging changes from different devices and detecting and resolving conflicts.
Meaning, the only cloud component should be “dumb” data storage, and it should remain entirely optional, only needed for use across multiple devices.
I often think we should have a separation of concerns between storing/syncing data and application code: syncing app files should be an OS-level feature, with many different available data stores, that syncs well across different platforms.
It should be OS-level, but in 2024 NO vendor (Apple, Microsoft, Google) is going to make a new open protocol for cross-platform syncing without binding it tightly to their authentication platform/device security system/etc.
Essentially there's an OS mechanism for getting the different versions of the file; your app can detect conflicts and choose how to resolve them, including by displaying arbitrary UI to the user. IIRC this UI can even be integrated with the file open dialog. As a last-resort fallback, if the app doesn't resolve the conflict, Finder/Files will let you keep either version or both (as separate files).
> your app can detect conflicts and choose how to resolve
Like GP said - app specific. If I have two different versions of a SQLite database, that started from a common revision but were both updated independently, and want the updates that were applied in each of them to be preserved - there is quite a lot of work left to do after Apple throws their hands up.
You wouldn't (and don't) use this kind of mechanism at the file-level on an entire SQLite database: just because someone suggests something be an OS feature doesn't mean that existing software would work without changes.
> You wouldn't (and don't) use this kind of mechanism at the file-level on an entire SQLite database:
But the OFA is about databases, and this thread is about syncing databases. So you agree with me: the Apple feature is worthless for this use case. GGP was right that it's a lot of work to build sync with conflict resolution into an application, and it's application-specific.
Depending on the nature of the data, you can design the app to have a write-only journal with independent entries that can always be merged in a consistent way at a later time.
This obviously works better for very simple things like a "play history" and not for complex things like collaborative document writing.
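To make that concrete, here's a minimal sketch of what such a journal merge could look like (the entry shape and field names are just illustrative, not from any particular library):

```
// Illustrative entry shape: each device only appends immutable, uniquely-identified entries.
interface JournalEntry {
  id: string;        // globally unique (e.g. a UUID) assigned when the entry is written
  timestamp: number; // ms since epoch, used only for ordering on display
  payload: unknown;  // e.g. { track: "...", device: "..." } for a play history
}

// Because entries are never edited or deleted, merging two independently-updated
// journals is just a union keyed on entry id; there is nothing to conflict on.
function mergeJournals(a: JournalEntry[], b: JournalEntry[]): JournalEntry[] {
  const byId = new Map<string, JournalEntry>();
  for (const entry of [...a, ...b]) byId.set(entry.id, entry);
  return [...byId.values()].sort((x, y) => x.timestamp - y.timestamp);
}
```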
I agree, and the other downside is that with the server-free approach described in the article, there wouldn't be a backup of your data off your device.
The author does mention privacy concerns — hence the appeal of storing the data locally on your device.
I work on PowerSync https://www.powersync.com/ — using embedded SQLite for local-first/offline-first which syncs with Postgres in the background.
I think using an architecture like that where an encrypted version of the data is synced to Postgres, and decrypted for access on the client, would balance the trade-offs well.
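As a rough sketch of what the client side of that could look like (using the standard Web Crypto API; key derivation and the actual sync call are out of scope and just assumed here):

```
// Sketch: encrypt a change set with AES-GCM before handing it to whatever layer
// uploads it, and decrypt after download. `key` would come from the user's
// passphrase or device keychain; that derivation is omitted here.
async function encryptForSync(key: CryptoKey, plaintext: Uint8Array) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
  return { iv, ciphertext: new Uint8Array(ciphertext) };
}

async function decryptFromSync(key: CryptoKey, iv: Uint8Array, ciphertext: Uint8Array) {
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
  return new Uint8Array(plaintext);
}
```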
There are private, distributed, synchronization protocols people are working on, such as Willow Protocol.
They are still working things out. But synchronization across devices controlled by a principal is doable with the primitives they have already come up with.
It was only an example, and in both platforms I use an installed application.
A more interesting example: I use pCloud instead of MS OneDrive or Google Drive, which don't have supported Linux clients.
I rejected Dropbox for other reasons, like the limitation of having only one synchronized folder, but technically it has an Ubuntu service and an Android app.
Icedrive seems to be a suitable service, but I found pCloud first.
And believe me: I will not use a browser based interface when I could simply save the files into several folders of my preference, and the sync happens in the background. Every single file storage service I mentioned has a browser based interface. I consider all of them unusable.
Exactly. Initially we try to ditch data storage, let's say by encrypting the db and syncing it to a cloud service, then expect it to have synced to other devices by the time we're about to use the app there. That would just be CRUD; background processes would be a whole different problem to tackle. I still can't envision a self-hosted decentralized backend on trivial devices.
> Veilid is a peer-to-peer network and application framework released by the Cult of the Dead Cow on August 11, 2023, at DEF CON 31. Described by its authors as "like Tor, but for apps", it is written in Rust, and runs on Linux, macOS, Windows, Android, iOS, and in-browser WASM. VeilidChat is a secure messaging application built on Veilid.
This can be done through a shared database file that syncs however you want (e.g. stored in a shared cloud). The app itself then needs to have very robust conflict resolution code, but it can be done. I think some of the more security-focused open source password manager apps already use an approach like this.
I was just made aware TursoDB has a version of their client [1] with the same underlying technology so it seems it might be possible to have the best of both worlds, interacting locally with the db in the browser while it's being synced to the remote instance (not 100% sure though).
Is there anything stopping you from also syncing an encrypted copy of your data to a central server to be synced down to your other devices? It doesn’t appear so.
A websocket between all your devices and a server (or even webrtc between devices) could achieve this in parallel.
I was going to mention WebRTC! It seems designed for video calling, but there are lots of cool use cases - I recently ran across https://github.com/dmotz/trystero , a dead simple WebRTC library for peer-to-peer multiplayer browser games.
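From what I remember of trystero's README, wiring up a peer-to-peer data channel is roughly this (exact option names may differ, and applyRemoteChange is just a placeholder for your own merge logic):

```
import { joinRoom } from "trystero";

// Placeholder for your own merge/conflict-resolution logic.
function applyRemoteChange(change: unknown, peerId: string) {
  console.log("change from", peerId, change);
}

// Peers that join the same app/room discover each other and connect over WebRTC.
const room = joinRoom({ appId: "my-local-first-app" }, "sync-room");

// Each named action gives you a sender and a receiver for arbitrary payloads.
const [sendChange, onChange] = room.makeAction("change");

room.onPeerJoin((peerId) => console.log(`peer ${peerId} joined`));
onChange((change, peerId) => applyRemoteChange(change, peerId));
sendChange({ table: "todos", op: "insert", row: { id: "abc", text: "hi" } });
```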
There are a lot of exciting sync technologies being developed for this use case. I work on one of them at ElectricSQL (mentioned at the end by the OP), but we maintain a list of alternatives here: https://electric-sql.com/docs/reference/alternatives
Looking at this with an open mind, I'm curious what benefits running SQLite in WebAssembly with a proxied web worker API layer gives compared to using localStorage or something similar.
* Using SQL has clear benefits for writing an application. You can use existing stable tools for performing migrations.
* Using SQLite in a filesystem offers many advantages w.r.t performance and reliability. Do these advantages translate over when using WebAssembly SQLite over OPFS?
* How does SQLite / OPFS performance compare to reading / writing to localstorage?
* From what I know about web workers, the browser thinks it is making http requests to communicate with subzero, while the web worker proxies these requests to a local subzero server. What is the overhead cost with doing this, and what benefits does this give over having the browser communicate directly with SQLite?
* I remember seeing a demo of using [SQLite over HTTP](https://hn.algolia.com/?q=sqlite+http) a while back. I wonder if that can be implemented with web workers as an even simpler interface between the web and SQLite and how that affects bundle size...
> Using SQLite in a filesystem offers many advantages w.r.t performance and reliability. Do these advantages translate over when using WebAssembly SQLite over OPFS?
I would say generally yes. SQLite is known for its performance, and with Wasm SQLite, performance is strongly related to how the file system operations are implemented. There have been some good advances in this area in the last couple of years. My co-founder wrote this blog post which talks about the current state of SQLite on the web and goes into performance optimizations:
* localStorage is small, volatile, OPFS is big/durable
* main thread <-> db vs main thread <-> worker <-> db:
- firstly, SQLite with OPFS has to run in a web worker
- even if it were possible to run it in the main thread, this approach allows for a code structure similar to a traditional architecture (frontend/backend split): it's easy to route some requests to the web worker while letting other requests fall through to a backend server, without the "frontend code" needing to worry about that (see the sketch below)
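A rough sketch of that routing idea (this is not SubZero's actual code, just the general shape of it):

```
// "Frontend" code calls apiFetch() everywhere; requests the local Wasm SQLite worker
// can answer are posted to it, everything else falls through to a normal fetch().
const dbWorker = new Worker(new URL("./db-worker.js", import.meta.url), { type: "module" });

function callWorker(req: { path: string; method: string; body?: unknown }): Promise<Response> {
  return new Promise((resolve) => {
    const channel = new MessageChannel();
    channel.port1.onmessage = (e) => resolve(new Response(JSON.stringify(e.data)));
    dbWorker.postMessage(req, [channel.port2]);
  });
}

async function apiFetch(path: string, init?: RequestInit): Promise<Response> {
  if (path.startsWith("/api/")) {
    // Answered locally by the worker running SQLite over OPFS.
    return callWorker({ path, method: init?.method ?? "GET", body: init?.body });
  }
  // Anything else still reaches a real backend server, if the app has one.
  return fetch(path, init);
}
```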
I am thinking the Willow Protocol would make a good base for local-first. There would be no privileged “backend”, but some peers can provide automated services.
I believe what is being proposed is a static site where user data is persisted locally using the WASM sqlite + OPFS. I guess it is also organized like a typical web app, but the app logic and database logic run locally.
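For reference, opening a persistent database with the official SQLite Wasm build looks roughly like this (it has to run in a worker, and the page needs the COOP/COEP headers that OPFS requires; details omitted):

```
// Rough sketch using the official @sqlite.org/sqlite-wasm bindings, run inside a worker.
import sqlite3InitModule from "@sqlite.org/sqlite-wasm";

const sqlite3 = await sqlite3InitModule();
// OpfsDb persists the database file in the origin-private file system.
const db = new sqlite3.oo1.OpfsDb("/app.sqlite3");

db.exec("CREATE TABLE IF NOT EXISTS todos (id TEXT PRIMARY KEY, text TEXT NOT NULL)");
db.exec({
  sql: "INSERT OR REPLACE INTO todos (id, text) VALUES (?, ?)",
  bind: ["abc", "buy milk"],
});
```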
I was expecting something different because it started with phrases like "no servers at all" and "entirely without any servers", but there's a regular web server serving static files.
I'm not a fan of the term "serverfree", though, since there is a web server. Also, the app and database servers from classic web apps continue to exist, albeit in a logical, local form. If this term somehow catches on for this style of app it will just cause endless confusion. I suppose it isn't a lot worse than some existing terms we've gotten used to (like "serverless"), but I'm always going to advocate to not repeat the mistakes of the past.
Speaking of WASM: is there a way to run code in the browser that calls endpoints secured with CORS? I tried looking it up recently but no luck. I feel it's a pretty big limiter when trying to call out to third parties directly and letting people bring their own API key.
I've been thinking about making a web GUI over gmailctl for easy editing but want to make it very easy to use without hosting and without people sending me their keys.
If I understand correctly, you're talking about making HTTP requests from code running in the browser to a third-party API.
Whether you're doing WASM or Javascript, you use fetch() and need to have your CORS ducks in a row. How exactly you call fetch() depends on your toolchain, but anything trying to be general-purpose will expose it somehow.
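For a bring-your-own-key setup, that boils down to something like this; it only works if the third-party API actually sends permissive Access-Control-Allow-Origin headers (the URL here is a placeholder):

```
// Plain fetch() from the browser to a third-party API with a user-supplied key.
// If the API doesn't allow your origin via CORS, the browser blocks the response
// and you're back to needing a proxy.
async function listLabels(apiKey: string) {
  const res = await fetch("https://api.example.com/v1/labels", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
}
```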
Thanks for clarifying. I hope that one day there is a way to securely make those kinds of requests from the client side. Until then I'll be using a proxy to deal with the CORS headers.
As I like to put it - Local-first is the real serverless. Your user's device is the real edge.
I think the future of the web needs to be one where the server is optional; we need our data (whether personal or our company's) to be on our own devices.
We are all carrying around these massively powerful devices in our pockets; let's use that capability rather than offload everything to the cloud.
One of the things I find most exciting about local-first (and I'm very fortunate to be working on it full time) is the sync tech that's being developed for it. I think 2024 is going to be the year local-first goes mainstream.
Hopefully the sync tech being developed for it is solidly open, but it's good to have access to your data regardless.
The thing that always bothered me about that article is:
> Notably, the object server is open source and self-hostable, which reduces the risk of being locked in to a service that might one day disappear.
It appears that the object server is neither open source nor self-hostable. The repository that they link is mostly empty. It has a rich version history of "releases" that only change the changelog file.
I assume the article was accurate when written, and have always wondered what happened. So I suspect MongoDB rewrote the git history to remove the code when they bought Realm. Was it ever open source? Did they intimidate people into taking down forks, or did nobody bother?
I do see an edit to the README around that time adding that a license is required to run the self-hosted server. It is dated about two months before the linked article, but they may not have noticed or it may be back-dated:
There are many projects developing sync tech for local-first, I work on a fully open source one - ElectricSQL - and we are fortunate to have a couple of the CRDT co-inventors on the team. We maintain a list of any local-first sync project we know of here: https://electric-sql.com/docs/reference/alternatives
ElectricSQL looks cool, but on a mobile browser the "local first" instant reactivity example is waaaay slower than the cloud version: latencies of 600+ms vs 120ms.
The comparison list is very useful as a collection, thank you.
Clarification: I'm not the author of the linked post.
I read your post some time back and feel it's been an organizing force for developers in this space — great job and thanks for the work you put into it.
I often wonder about terminology. What was the reason you chose "local-first" over "offline-first" (or even "serverfree" as in this case)?
For me (not the author) "local first" makes it clear that "on device" is a first-class citizen, and the server is an afterthought. "Offline-first" or "server free" sounds much more limiting, like it will be able to limp when offline, but really wants to connect to a server eventually.
I also (personally) don't like "serverfree" because servers are good - they're not the problem! It's the "servers you don't and can't control" cloud dependencies that are the issue.
What we need is updates that only include security patches.... FEATURE CHANGES SHOULD BE OPTIONAL. Because all software tends to decrease in quality over time.
While a desirable state for users, this could quickly balloon into a nest of support issues for the maintainers, since having many different versions to patch when a security issue or other significant bug becomes apparent increases the project's response time to anything important like that. You could try to mitigate this by maintaining only a couple of versions (perhaps preview, current, and LTS, similar to Debian's sid/testing/stable), which would work for small projects. But for large ones, or those that see fairly rapid development, you run the risk of stable gaining a reputation for being out of date, unless you bump it forward regularly, which puts you back at square one with giving people feature updates they may not need.
If you have an idea for a process that could deal with the potential significant extra load caused by many optional updates on the general case, feel free to share.
It can work well and easily when the features are relatively distinct, but as they start to interact it can become a huge burden.
If you're in the Microsoft world, try out the LTSC flavors (of Windows and Office). It's basically just that -- security updates and patches, no new features. Since switching, my environments have been much more stable: no Windows Updates that ruin my day, just the stuff I need to stay secure, none of the new crap they're trying to push...
RHEL and many other companies offer that. It's called long term support, extended long term support and so on. And costs a fortune. Most people wouldn't want to buy it.
> One morning, as I was contemplating ways to procrastinate on doing marketing for SubZero without feeling guilty, an idea struck me. "I know, I'll engage in some content marketing... but what do I need for that? Oh, a cool demo project!" And just like that, I found a way to spend a month doing content marketing writing code.
I absolutely don't understand the point of this. Just reading the intro, it reads like technology for its own sake, just because you can. But what is the value, and what are the downsides?
If you store it on your phone, then it's not showing up on your other devices. If you lose or break your phone, then your data is gone. There are very few applications for which that's acceptable - basically just your calculator app.
If you don't store it on your phone, then it's stored on some kind of server, somewhere. Do you own and control that server, or does someone else? How does the application consume and update the data?
The article clearly states that data is stored on the device. That's the exact use case described - where people do not want any of their data stored on a server they don't control.
If that's you, and if privacy is important enough to you to want a local-only app, I think you'll find a solution to back up your data.
I don't think it's unreasonable to continue producing apps that do not require an internet connection to function and be useful.
Fair point. To quibble, though, I guess you are only thinking of iPhones and iPads. There are other mobile devices with built-in support for SD cards, and even USB drives, that easily allow you to save your data on them or use them to create backups of your data. These features allow the user to have better control of their data.
On one hand, if I were to have the goal to make a bunch of money, and software just happens to be the means to an end, making a gated software portal where I control everything would suit me very well. You get nothing until I get the money, and I only maintain what I want to maintain. (pretty much the model every SaaS has)
On the other hand, if I know I have a very small customer base, and everyone is making a lot of money because of my program, and I don't really care that much about the money above a certain number, I might as well distribute it as a static/stale build. You get a binary, or a virtual machine, or something like that, and it just does everything. Maybe if piracy were a concern I would add some sort of hardware dongle, but I would also be aware that it's going to get cracked anyway, and the only people annoyed/limited by it would be my actual paying customers.
On the other more different hand (third hand?): if my program has requirements about robustness, locality or longevity, I would make sure it depends on as few things as possible, make sure that it's documented well enough for future users and administrators to run it on future environments, and perhaps not sell the software in itself as much as I'd sell support. The risk and downside is that specialised and unique software tends to be quite annoying and costly to create while there isn't a lot of telemetry or feedback to figure out what's working well and what isn't, so that would drive up the price significantly. I'd say you're looking at two orders of magnitude vs. a SaaS thing.
Mh, a small note: in the early days of IT, DocUIs were the norm. Web UIs are actually a limited and limiting form of DocUI: limited by the fact that the user can't easily bend them to his/her own needs and desires, limiting because the WebVM underneath, the whole stack, is NOT made for end-user programming the way classic systems were.
Just try looking at a modern Emacs with org-mode and elisp: links. It's a DocUI, with some limits, but far simpler than a WebApp in a local WebVM. And it's pretty local, with full filesystem access and so on.
I'm curious how many more DECADES will be needed to reinvent the wheel before we rediscover that classic local DocUIs are far superior and can, technically, be networked just as well. My own local fast GMail is notmuch-emacs, and it's not a standalone thing: it's fully integrated with, and extensible from, my desktop with a few SLoC. Where it falls short in some aspects, it's not because of the model but because of the small development base. If we invested a fraction of the effort put into the modern web, the classic desktop would outshine any other tech.
I want this too; there isn't a great FOSS way to do this currently besides Supabase or rolling your own, unfortunately. For a PWA, "save/write locally, try a network update, refresh local state upon success" is the gold standard for data integrity. And for network reads, falling back to the local cache when offline is great for UX. I haven't found good tooling for this yet, and I've been looking.
Yes, but it's a lot of code to maintain data integrity between your local store, app state, and networked store. SQLite is ideal, but IndexedDB in the browser could work fine too.
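The core pattern itself is small; it's all the edge cases around it that add up. A minimal sketch (localStore here is just a stand-in for whatever local persistence you use):

```
// Stand-in for SQLite-Wasm, IndexedDB, etc.
const localStore = {
  items: new Map<string, { id: string; text: string }>(),
  async put(item: { id: string; text: string }) { this.items.set(item.id, item); },
};

// Local write first, then try the network, refresh the local copy on success.
async function saveItem(item: { id: string; text: string }) {
  await localStore.put(item); // 1. persist locally; this is the source of truth
  try {
    const res = await fetch(`/api/items/${item.id}`, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(item),
    });
    if (res.ok) await localStore.put(await res.json()); // 2. refresh local from the server copy
  } catch {
    // 3. offline: keep the local copy and retry later (reconnect event, Background Sync, ...)
  }
}
```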
This is very close to an inventory control app I built at work (with the exception that, eventually, when the client is online, it will sync data to the server).
I've often thought, if I had the time and capability ... take it a step further. No server sync at all. Clients form a peer-to-peer network and sync data between themselves (perhaps over Bluetooth or something like Apple's Bonjour, etc.).
Actually, something like that plus an optional server sync when a server is available is even better. I'm thinking specifically of a use case in large warehouses that often have no internet connectivity, where multiple users performing inventory duplicate work because they don't know a peer has already inventoried a specific area, and neither of them can sync to the server because there's no Wi-Fi.
Dang, even better. Something like a BitTorrent swarm, with something like an admin certificate for releasing code patches and user-level certs for syncing app data.
The trouble with this is discoverability and reducing friction.
It all sounds nice in theory until the device suddenly doesn't want to talk over Bluetooth or your Bonjour shares time out. In 2024 they aren't foolproof enough to handle constant data interchange, particularly if we are talking about a lot of devices.
Not to mention with BT it’s easy to hit radio interference unexpectedly
So is this article advocating storing everything in a browser SQLite database, which then syncs with another server somewhere else? But to do that, doesn't it need to call a server somewhere? I'm trying to understand, as it seems there are still servers here??
The server-free architecture described at the end of the article does not seem to involve syncing data to somewhere else — it seems the data would only reside on the user's local device.
I like the idea of a hybrid of client-side software that optionally uses third-party cloud storage for cross-device sync. NetNewsWire, ByWord (markdown editor), and Scrivener are some examples. Older versions of 1Password also had this, but I don't know if it still does anymore.
Some of these programs managed this better than others. NetNewsWire and 1Password seemed to just work. Byword and Scrivener had occasional sync conflicts that had to be resolved. In general, though, this seems like a nice system: if you (the user) are subscribed to cloud storage, then you get syncing without paying for an extra service. If not, you can still use the software without syncing.
Imho, good integration with existing file-clouds is a good approach. Then it's just serverless but with helpers to say "Store the config on Google Drive", and you get free sync of your config between devices.
I like the concept, and I'm building a similar framework, but I think there's some confusion in the implementation?
Your server.ts (which uses Express) runs in Vite or in a worker (which would require a lot of adaptations / might not even be possible)? If it runs using Vite, that's a server. Then the distribution of your app is compromised: either people run it locally, having to start the server, or they need to spin up a server somewhere. How is that "serverfree"?
I go to fairly great lengths to do everything in the browser to avoid having to support any backends (for the tools, etc. I make, and hobby-project gamedev). It would be a great thing if something (perhaps legislation) could break the app-store model; then fully-fledged apps could be distributed as web sites (with their own localStorage, etc.)
I wonder if there's some legal way of saying, "the web is critical communication infrastructure and all core comms devices need to support X standards"
I've done user support with users in the mountains who don't have a reliable internet connection.
Being able to say 'don't worry, the app works offline, you can (optionally) sync when you're next in the city' is extremely rewarding, and vital for software to work for these people.
----
Offline-only is not enough either. Ideally users should be able to sync between devices when offline, and have the option to sync to the cloud when online
I'm lucky enough to have had tons of time to build something with these values.
A huge blocker I didn't grok at the beginning is API keys. Unless the app interacts with 0 services, at all, you need edge functions that essentially just add an API key header to a request from the client.
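For the curious, the whole edge function really can be about this small (shown in the generic fetch-handler style used by Cloudflare Workers / Deno Deploy; the upstream host and env var name are placeholders):

```
// The only job of this function is to attach the secret key server-side so it
// never ships to the client; everything else is passed through untouched.
export default {
  async fetch(request: Request, env: { THIRD_PARTY_KEY: string }): Promise<Response> {
    const upstream = new URL(request.url);
    upstream.host = "api.thirdparty.example"; // placeholder upstream host
    const proxied = new Request(upstream, request);
    proxied.headers.set("Authorization", `Bearer ${env.THIRD_PARTY_KEY}`);
    return fetch(proxied);
  },
};
```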
It offends me because I don't want people to have to trust me, but...there isn't anyone who will recommend otherwise. :/
What about installation and updating/patching? Or is the intention here to still serve this package of web code, db, etc over the net (via cdn maybe), and then execute locally?
Well if the OS annoys me with "update x available, install and restart in 5 mins?" every 10 mins, I'm more likely to do it and still be using it in 10 years (true story).
This goes together with the solution to "a way to export/import the database".
Export/import is a usability dead end. What we need is syncing. There should be a serverless way to sync data. On a LAN there are several ways to do it: databases, files. With possibly untrusted devices (phones) on different networks (PC on home LAN, phone on the operator network) the solution is... I don't know.
I was actually thinking of this the other day, but taking it a step further by actually distributing computation such as worker queues in a peer-to-peer fashion.
At least we’ve got Damien Katz & co, who kept the idea alive and well by giving us the Erlang/OTP-based CouchDB; its master-master replication kicking in always felt magical.
So is this basically a fully peer-to-peer application, like BitTorrent clients?
Or something like Bisq (https://bisq.network), where the program runs locally, peer to peer, and hosts all user data locally, but still pings oracle servers for outside market price data?
I believe what the author is describing is called A Desktop Application. They were these crazy forms of web apps, that ran on a local computer, stored data locally, and didn't use a server. Or a web browser. Legend has it that they used little memory, were fast and snappy, and enabled native integration with all of an operating system's capabilities, widgets, etc, while still allowing a user to completely control what happens with the data the program used.
Porting this type of application could take a lot of work! So at some point, somebody invented a programming language called Java, which used a thing called a Virtual Machine, so that one application could run the same way on any computer that could run the Java Virtual Machine. They called this A Portable Desktop Application.
Unfortunately, this obscure language was somewhat difficult to use, so this paradigm was abandoned for simpler languages, and with it, so went the idea of A Portable Desktop Application.
Decades later, somebody reinvented the idea. But it required a much more limited application platform, running inside a kind of hacked-together document viewer, with a simplistic programming language designed to control how documents were displayed. It took about 1000x as much memory and CPU, but, with addition of about 50 different frameworks and 30 different new custom APIs, you could finally get close to the same functionality as the first solution.
...and if this "desktop application" sometimes communicates with the server, it should not be called a "fat client" and its architecture should not be called "client-server", because those words aren't fancy anymore...
This is really the fault of the literally zero cooperation between OS vendors to make it possible to run non-native apps (i.e. apps that are the same everywhere, not an app that's Windows-like on Windows and Mac-like on Mac) that can look like anything a designer can draw. So the instant someone assembled the bits, of course it took off like wildfire, even if it's otherwise suboptimal in every way. Someone finally made the thing devs actually want out of a cross-platform SDK instead of yet another rigid widget toolkit.
"fuck your platform" or less crass "your platform is just one of my delivery vectors" could be the 2015 on mantra for appdev.
Game engines mostly get this right too. You get a rectangle to render everything from scratch and enough abstractions where interacting with the OS directly is a rare occurrence.
If only mobile phones hadn't been so slow when running the JVM and Flash when they first came out, we might be writing Java and Flash web apps in NodeJF by today.
That's all well and good but how am I supposed to track users, serve them 100 ads, waste their time/resources, and ruin their experience all at the same time?? These desktop applications sound terrible, think of the marketing loss! /s
Once upon a time, at a town hall at Google about 7 or 8 years ago where the SVP over GCP was present, a man wiser than I asked the question, "In computing, the pendulum has already swung a couple of times between client-centric and server-centric. What are we doing to prepare ourselves for the next swing back into client-centric?"
The SVP responded as if the guy asking that question had just stepped out of an alien spacecraft from Alpha Centauri. At the time, in that room, it seemed incomprehensible to most present how anybody could possibly bask in the glory of the Google infrastructure and then want anything other than that.
I guess this is the dilemma of all programming languages: porting stuff cross-platform with very low to no effort. The content composition method, whether it is a markup language or Class.Resource.References, matters less.
Right. This whole thing reminds me of that one "men will do literally anything rather than go to therapy" meme, except it's "developers will do literally anything rather than make a desktop app". And yes, I read the author's disclaimer about how this isn't just a desktop app, and no, I'm not persuaded that this is filling a need desktop apps didn't already fill.
In principle, yes, I agree, but there's one aspect where desktop apps lose: the distribution model. It's way easier to just say "go to this link" and the app is running (no install step, no central store, no "code signing").
I looked at the example app. It was basically a spreadsheet. To me, what makes the internet special is that not only can I share my shit, but people can share their shit with me. Sharing is what makes it special.
Edit: forgot some words