Do you want to connect to your existing HA instance, or are you okay with a new Docker instance?
I was planning to have both but would like to know which one makes better sense.
Right now the merging happens on the fly and is then cached. In the future I imagine the finished merge will be saved as JSON to the database, depending on which is more expensive: the merging or a database call.
Merging on the fly kinda works for the future too, for when the data changes or when the merging process itself changes.
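For context, the merge-then-cache flow described above is roughly this shape. This is an illustrative sketch only; `merge_records` and `merged_book` are made-up names standing in for the real service code, which isn't shown here:

```python
import json
from functools import lru_cache


def merge_records(records: tuple[str, ...]) -> dict:
    # Stand-in for the real merging logic: later records win on key conflicts.
    merged: dict = {}
    for raw in records:
        merged.update(json.loads(raw))
    return merged


@lru_cache(maxsize=1024)
def merged_book(book_id: str, records: tuple[str, ...]) -> str:
    # Merge on the fly, memoize the result. The JSON string returned here
    # is the same artifact that could later be persisted to the database
    # instead of being recomputed on every request.
    return json.dumps(merge_records(records), sort_keys=True)
```

Keeping the merge behind a cache like this means a future schema change only invalidates the cache, while persisting the JSON would require a backfill.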
No idea what the future will hold. The idea is to pre-warm the database after the schema has been refactored, and once we have thousands of books from that, I’ll know for sure what to do next.
TLDR, there is a lot of “think and learn” as I go here, haha.
No, I decided pretty early on to make it database-specific instead of more generic, so we do use some PostgreSQL features right now, like its UUIDv7 generation.
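For anyone curious what UUIDv7 buys you over the older random UUIDs: the first 48 bits are a Unix millisecond timestamp, so the IDs sort by creation time, which is friendlier to B-tree indexes. A minimal client-side sketch following the RFC 9562 layout (this is not the service's code, just an illustration of the format):

```python
import os
import time
import uuid


def uuidv7() -> uuid.UUID:
    """Build a UUIDv7 per RFC 9562: 48-bit unix-ms timestamp,
    version nibble, 12 random bits, variant bits, 62 random bits."""
    ts = time.time_ns() // 1_000_000              # milliseconds since epoch
    rand = int.from_bytes(os.urandom(10), "big")  # 80 random bits; we use 74
    value = (ts & 0xFFFFFFFFFFFF) << 80           # timestamp in top 48 bits
    value |= 0x7 << 76                            # version = 7
    value |= ((rand >> 62) & 0xFFF) << 64         # rand_a: 12 bits
    value |= 0b10 << 62                           # RFC 4122 variant bits
    value |= rand & 0x3FFFFFFFFFFFFFFF            # rand_b: 62 bits
    return uuid.UUID(int=value)
```

Because the timestamp leads, two IDs generated a moment apart compare in creation order, unlike UUIDv4.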
But once the database refactor is done, I wouldn’t say no to a patch that made the service database agnostic.
No hug of death, the server is sitting at 3% CPU usage under current load; it seems someone found a bug that triggered a panic, and systemd failed to restart the service because the PID file wasn't removed. Fixed now, should be back online :)