Hacker News | billywhizz's comments

SQLite has a wal hook which calls you back every time a transaction is committed to the WAL. https://www.sqlite.org/c3ref/wal_hook.html

That only catches changes made by the database connection being "hooked."

This has a thread running in the background trying to catch changes made by other connections, potentially (I'm not sure here, but I suspect as much) in different processes that are modifying the same database.


good point. but ime, and as seems to be widely understood, writing from multiple connections is a bit of a minefield in SQLite. and afaik it would still be possible to install a hook on every connection you expect to be writing?

That wouldn't work across processes. And if you only care about in-process queuing then you might find it easier/faster to use another kind of storage or roll your own WAL.

i did a quick benchmark on this with a single db connection updating user_version in a tight loop with the wal_hook callback enabled.

on my crappy old i5 with the db file on /dev/shm it can do ~150k writes a second with the wal_hook callback called on every write. and this is using JS bindings to C++, so it has some unnecessary overhead.


how can you say "it ended up being a surprisingly good measure of the quality of the model for other tasks" and also "It should not be treated as a serious benchmark" in the same comment?

if it is indeed a good measure of the quality of the model (hint: it's not) then, logically, it should be taken seriously.

this is, sadly, a great example of the kind of doublethink the "AI" hypesters (yes - whether you like it or not simon - that is what you are now) are all too capable of.


I genuinely don't see how those two statements conflict with each other.

Despite not being a serious benchmark (how could it be serious? It's a pelican riding a bicycle!) it still turned out to have some value. You can see that just by scrolling through the archives and watching it improve as the models improved.

If your definition of doublethink is "holding two conflicting ideas in your head at once" then I would say doublethink is a necessary skill for navigating the weird AI era we find ourselves inhabiting.


"some value" is not the same as "a surprisingly good measure of the quality of the model for other tasks".

doublethink does not mean holding two conflicting ideas in your head at once. it means holding two logically inconsistent positions/beliefs at the same time.


you probably could have written the low stakes productivity app in a fraction of the time you wasted on this.


Or learnt to use an existing one.

I vibed a low stakes budgeting app before realising what I actually needed was Actual Budget and to change a little bit how I budget my money.


what's even more amazing is it took them two weeks to fix what must have been a pretty obvious bug, especially given who they are and what they are selling.


why do you find it surprising? these models have no actual understanding of anything, never mind the physical properties and capabilities of a bicycle.


Sad to see this downvoted. So many people think that LLMs have understanding?


SQL Server was very good and used in a lot of enterprises. ime the decision between Oracle and SQL Server tended to be down to whether the IT department or company was a "Microsoft Shop" or not. There were a lot of things that came free with SQL Server licenses and it had really nice integrations with other Microsoft enterprise systems software and desktop software.

Oracle was definitely seen as the more mature and resilient (and expensive!) RDBMS in all the years I worked in that space. It also ran on Unix/Linux whereas SQL Server was Windows-only. Many enterprises didn't like running Microsoft servers, for lots of (usually good) reasons.


the chinese can


that sounds way off. there is a big perf hit to async in relative terms, but it appears to be roughly 100 nanoseconds of overhead per call. when benchmarking you have to make sure your function is not going to be optimized away if it doesn't do anything or its inputs/outputs never change.

you can run this to see the overhead for Node.js, Bun and Deno: https://gist.github.com/billywhizz/e8275a3a90504b0549de3c075...


i played around with this a while back. you can see a demo here. it also lets you pull new WAL segments in and apply them to the current database. never got much time to go any further with it than this.

https://just.billywhizz.io/sqlite/demo/#https://raw.githubus...


> We are just dumber than them.

you are, for sure.

