My favourite PostgreSQL optimization was running a SELECT on a multi-billion row table before running a DELETE, so that shared_buffers would be filled with the rows that would need to be deleted.
Postgres runs DELETEs in a single process, and that includes the row-selection part of the DELETE. By running a completely separate SELECT first, Postgres can parallelize the scan and populate the cache quickly. The single-process DELETE can then operate on in-memory data instead of endlessly blocking on loading data from disk.
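The trick can be sketched roughly like this (the `events` table and `created_at` column are hypothetical, and the effect depends on shared_buffers being large enough to hold the target rows and on the planner choosing a parallel scan for the SELECT):

```sql
-- Step 1: a separate SELECT that touches every row we intend to delete.
-- Plain SELECTs are eligible for parallel workers, so this warms the
-- cache much faster than the DELETE's own single-process scan would.
SELECT count(*)
FROM events
WHERE created_at < now() - interval '1 year';

-- Step 2: the DELETE still runs in a single backend process, but now it
-- mostly hits pages already sitting in shared_buffers instead of disk.
DELETE FROM events
WHERE created_at < now() - interval '1 year';
```

One caveat on this sketch: if the predicate can be satisfied by an index-only scan, the SELECT may not touch the heap pages the DELETE needs, so selecting a non-indexed column instead of count(*) is a safer way to force the heap reads.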
It's more accurate to say that leftist policies polled well, which they always do. I think the main issue is that people don't really trust Democrats to do anything they say they want to do.
Yes. But it's no different from the question of how a non-tech person can make sure that whatever their tech person tells them actually makes sense: you hire another tech person to have a look.
This seems like a glib one-liner, but I do think it is profoundly insightful as to how some people approach thinking about LLMs.
It is almost as if there is hardwiring in our brains that makes us instinctively correlate language generation with intelligence, and people cannot separate the two.
It would be as if the first calculators ever produced, instead of responding with 8 to the input 4 + 4 =, printed out "Great question! The answer to your question is 7.98", and that resulted in a slew of people proclaiming the arrival of AGI (or, more seriously, the ELIZA effect is a thing).
Here's my old benchmark question and my new variant:
"When was the last time England beat Scotland at rugby union?"
New variant:
"Without using search, when was the last time England beat Scotland at rugby union?"
It is amazing how bad ChatGPT is at this question, and has been for years now across multiple models. It's not that it gets it wrong - no shade, I've told it not to search the web, so this is _hard_ for it - but how badly it reports the answer. Starting with the small stuff: it almost always reports the wrong year, wrong location and wrong score - that's the boring factual stuff I would expect it to stumble on. It often invents details of matches that never happened - your standard hallucinations. But even within the text it generates itself, it cannot stay consistent with how reality works. It often reports draws as wins for England. It frequently states that the team it just said scored the most points lost the match, etc.
It is my ur-example for when people challenge my assertion that LLMs are stochastic parrots or fancy Markov chains on steroids.
> For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.
That is not novel; every declarative language precisely embodies it.
I think most existing declarative languages still require the programmer to specify too many details to get something usable. For instance, Prolog often requires the use of 'cut' to get reasonable performance for some problems.