I don’t buy that LLMs won’t make off-by-one or memory safety errors, or that they won’t introduce undefined behavior. Not only can they not reason about such issues, but imagine how much buggy code they’re trained on!
That says nothing about this particular situation. Written language has been a thing for 5,000 years, and it's used for this bid, so nothing remarkable here ...
I doubt the OP is just making an unrelated comment, because it sounds like he thinks a hostile bid is evidence that the US has Ukraine-levels of corruption. Leaving aside the odd time period (Ukraine was much less corrupt pre-war than it was pre-Maidan, not to speak of its other, more corrupt neighbor), the fact that hostile bids have been around for a long time in the US is good evidence that they don't indicate the level of corruption the OP implies. If the OP made the same comment under a post about verb conjugation, wouldn't that seem odd to you too?
Or maybe they just happened to make an off-topic comment that had nothing to do with the hostile takeover.
IOPS numbers speak to throughput, not latency. You still need to saturate the drive's internal parallelism to get good throughput out of an SSD, and that requires batching. And even a double-digit-microsecond write latency per commit caps serialized synchronous commits at tens of thousands of TPS. It's just not feasible to issue an individual synchronous write for every transaction commit, even on NVMe.
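To make the arithmetic concrete, here's a back-of-the-envelope sketch (the 50 µs latency and the batch size are illustrative assumptions, not measurements of any particular drive):

```python
def serialized_tps(commit_latency_s: float) -> float:
    """TPS ceiling when every commit issues its own synchronous write,
    one after another: you can't commit faster than the write returns."""
    return 1.0 / commit_latency_s

def batched_tps(commit_latency_s: float, batch_size: int) -> float:
    """TPS ceiling when `batch_size` commits share one synchronous write
    (group commit): the same latency is amortized across the batch."""
    return batch_size / commit_latency_s

# Assuming an illustrative 50 us NVMe write latency:
print(serialized_tps(50e-6))     # ~20,000 TPS, one fsync per commit
print(batched_tps(50e-6, 100))   # ~2,000,000 TPS with 100-way group commit
```

The same latency figure supports two very different throughput ceilings, which is the whole argument for batching.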
tl;dr "multi-transaction group-commit fsync" is alive and well
In practice, there must be a delay (from batching) if you fsync every transaction before acknowledging commit. The database would be unusably slow otherwise.
Right, I think the lazy thing implies the fsync would happen after "commit" is returned to the client, but it doesn't need to. The commit just needs to wait for "an" fsync call, not its own.
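That "wait for an fsync, not your own" idea can be sketched in a few lines (class and method names are made up, and an in-memory list stands in for the log and the actual fsync):

```python
import threading

class GroupCommitLog:
    """Commits block until *some* fsync covers them, not one of their own."""

    def __init__(self):
        self._lock = threading.Lock()
        self._synced = threading.Condition(self._lock)
        self._pending = []   # records written but not yet durable
        self._durable = []   # records covered by a completed fsync
        self._epoch = 0      # counts completed batch fsyncs

    def commit(self, record):
        with self._lock:
            self._pending.append(record)
            target = self._epoch + 1      # any future fsync covers us
            while self._epoch < target:
                self._synced.wait()       # sleep until a batch fsync lands

    def fsync_batch(self):
        """Run by a single background flusher: one write syncs many commits."""
        with self._lock:
            self._durable.extend(self._pending)
            self._pending.clear()
            self._epoch += 1
            self._synced.notify_all()     # wake every waiting committer
```

Each committer only pays the latency of the next batch fsync, so no commit is acknowledged before it is durable, yet the fsync cost is shared.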
You can push the safety envelope a bit further and wait for your data to only be in memory in N separate fault domains. Yes, your favorite ultra-reliable cloud service may be doing this.
Pretty much, given that any decent pthreads implementation will offer an adaptive mutex. Unless you really need a mutex the size of a single bit or byte (which likely implies false sharing), there's little reason to ever use a pure spinlock, since a mutex with adaptive spinning (up to context switch latency) gives you the same performance for short critical sections without the disastrous worst-case behavior.
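The spin-then-block idea can be sketched roughly like this (a toy, not how pthreads implements it; the spin count here is an arbitrary assumption, whereas a real adaptive mutex tunes it against the cost of a context switch):

```python
import threading

class AdaptiveLock:
    """Spin briefly on a contended lock, then fall back to a blocking wait."""

    def __init__(self, spin_limit=100):
        self._lock = threading.Lock()
        self._spin_limit = spin_limit

    def acquire(self):
        # Optimistic phase: short critical sections usually release soon,
        # so a few non-blocking tries can avoid a context switch entirely.
        for _ in range(self._spin_limit):
            if self._lock.acquire(blocking=False):
                return
        # Pessimistic phase: park the thread instead of burning CPU.
        self._lock.acquire()

    def release(self):
        self._lock.release()

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()
```

Uncontended or briefly-held locks are taken in the spin phase; only long holds pay for a sleep, which is the worst-case behavior a pure spinlock can't cap.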
Some people don't want to block for a microsecond when their lock goes 1ns over your adaptive mutex's spin deadline. That kind of jitter is unacceptable.
I wish the article had made it clearer that quadtree positions are encoded as strings over the alphabet on 2 bits (similarly, octrees use the alphabet over 3 bits). This makes storing keys and lexicographically comparing them very simple.
> positions are encoded as strings over the alphabet on 2 bits
This is the most pedantic way of saying "binary 2-tuples" I've ever seen. Also for quadtrees this is inferior to base 4 because you can assume clockwise (or counter) ordering.
I don't think that's what they meant. The point is that you can use literal strings of bits to encode a (2^n)-tree node, so you can use actual bitstring comparisons and operations to manipulate them. A right shift gives you the parent, and things like that.
I don't think this is something the article cares about, though.
Uh, base 4 is exactly what I meant. I guess I wasn't very clear that I mean positions are encoded as bitstrings, with one pair of bits for each level (and triples of bits for octrees). Is that clear enough for you?
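For what it's worth, the encoding both of you are describing fits in a few lines (helper names are made up; quadrant indices 0-3 are one base-4 digit, i.e. two bits, per level):

```python
def quadkey(path):
    """Encode a root-to-node path of quadrant indices (0-3) as a bitstring,
    two bits per level, root-most digit first."""
    return "".join(format(q, "02b") for q in path)

def parent(key):
    """Dropping the last two bits (one base-4 digit) yields the parent."""
    return key[:-2]

def is_ancestor(a, b):
    """Ancestry is just a prefix test on the bitstrings."""
    return b.startswith(a)

k = quadkey([1, 2, 3])   # "011011": digits 1, 2, 3 as 2-bit groups
```

Since ancestors are prefixes, sorting keys lexicographically groups every subtree into a contiguous run, which is what makes these keys convenient to store in ordered indexes.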