Young singers brought up listening to autotuned vocals can unknowingly learn and emulate the sonic signature of the tuning algorithm (and the telltale lilt when it's used as an effect, but the subtle tuning case is more surprising).
If you read too much sloppy LLM prose, it's going to influence how you write and structure your own.
The USA didn't New Deal hard enough to keep up with the rest of the "Western" world's more humane form of capitalism, which you can see in how consistently the USA lags behind other countries in measures of actual human progress and comfort.
While most "Western" countries suffered through similar technocratic, neoliberal turns in the 80s, they were built on a stronger social democratic base than the USA.
And don't get fooled by "horseshoe theory" ignorance: Fascist states were/are authoritarian by definition. Communist states were/are authoritarian by culture.
Before OLED (and similar), most displays were lit with LEDs (behind or around the screen, through a diffuser, then through liquid crystals) which was indeed the dominant power draw... like 90% or so!
But the article is about an OLED display, so the pixels themselves are emitting light.
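The distinction matters for power: a backlight runs at constant power no matter what's on screen, while OLED power tracks average pixel brightness. A toy sketch (round numbers are assumptions, not measurements):

```python
# Illustrative sketch, assumed numbers: why power scales differently
# for a backlit LCD vs an OLED panel.

def lcd_power_w(backlight_w: float, avg_pixel_level: float) -> float:
    """Backlight LEDs run at full power regardless of content;
    the liquid crystals merely block light per pixel."""
    return backlight_w  # content-independent

def oled_power_w(max_panel_w: float, avg_pixel_level: float) -> float:
    """Each OLED pixel emits its own light, so panel power scales
    (roughly linearly) with average pixel brightness."""
    return max_panel_w * avg_pixel_level

# Dark UI (10% average brightness) vs bright document (90%),
# assuming a 4 W panel either way:
print(lcd_power_w(4.0, 0.1), oled_power_w(4.0, 0.1))
print(lcd_power_w(4.0, 0.9), oled_power_w(4.0, 0.9))
```

Real panels aren't perfectly linear (drivers, color efficiency differences), but that's the gist of why dark mode saves battery on OLED and does roughly nothing on a backlit LCD.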
> At Reco, we have a policy engine that evaluates JSONata expressions against every message in our data pipeline - billions of events, on thousands of distinct expressions.
The original architecture choice and price almost gave me a brain aneurysm, but the "build it with AI" solution is also under-considered.
This looks like a perfect candidate for existing, high-quality, high-performance, production-grade solutions such as quamina (an independent successor to aws/event-ruler, and ancestor of quamina-rs).
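The core idea behind event-ruler/quamina (this is a hypothetical Python sketch, not quamina's actual Go API): compile the patterns once into an index, then match each event with cheap lookups instead of re-evaluating thousands of JSONata expressions per message.

```python
# Hypothetical sketch of compiled event-pattern matching in the
# event-ruler/quamina style: patterns are flat field -> allowed-values
# maps, compiled into an index keyed by (field, value).
import json
from collections import defaultdict

class Matcher:
    def __init__(self):
        # (field, value) -> names of patterns requiring that match
        self.index = defaultdict(set)
        # pattern name -> number of fields it constrains
        self.required = {}

    def add_pattern(self, name: str, pattern: dict) -> None:
        """pattern maps each field to a list of acceptable exact values."""
        for field, values in pattern.items():
            for v in values:
                self.index[(field, v)].add(name)
        self.required[name] = len(pattern)

    def matches_for_event(self, event_json: str) -> list[str]:
        """A pattern matches when every one of its fields is satisfied."""
        event = json.loads(event_json)
        hits = defaultdict(int)
        for field, value in event.items():
            for name in self.index.get((field, value), ()):
                hits[name] += 1
        return [n for n, c in hits.items() if c == self.required[n]]

m = Matcher()
m.add_pattern("prod-errors", {"env": ["prod"], "level": ["error", "fatal"]})
print(m.matches_for_event('{"env": "prod", "level": "error", "id": 7}'))
```

Per-event cost here depends on the event's size, not on how many thousands of patterns are registered, which is the property that makes the "billions of events" workload cheap. (The real libraries handle nesting, wildcards, anything-but, etc.)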
There's going to be a lot of "we were doing something stupid and we solved it by doing something stupid with AI [LLM code]" in our near future. :-|
As far as I see it, AI is the reason they were unnecessarily paying $300k/year in the first place. A human engineer was the one who identified the problem with this JS dependency, and then the human made the AI fix its original mistake.
Digital neural networks and "neurons" were already vastly simpler than biological neural networks and neurons... and getting to transformers involved optimisations that took us even further away from biomimicry.
This is a very optimistic, pro-technology-cleverness point of view.
I recommend reading the linked persona selection model document. It's Anthropic through and through - enthusiastic while embracing uncertainty - but ultimately lots of rationalisation for (what others believe is) dangerous obfuscation.