I think this is an interesting attempt at a taxonomy, but it's a bit on the magical-thinking end (and I say this as somebody who does a good amount of what's described as the incanter role). It's a combination of the author's previous witchy aesthetic (see his excellent "<x>ing the technical interview" series) and progressive labor politics (which are asymptotically doomed in the current automation push).
The biggest failure of imagination, I think, is the assumption we'd use humans for most (or *any*) of these jobs--for example, the work of the haruspex is better left to an LLM that can process the myriad internal states (this is the mechanistic interpretability field).
Yes, I had the same impression. I'm sympathetic to the author's perspective but I can't muster even the minimal optimism they've shown here. The "process engineers" as described would themselves quickly be replaced by an automated system. The "statistical engineers", I think, would never be able to keep up with the rate of change of the AI models, which would likely have different statistical behavior and biases in each language/context/etc with each update, and so it's unlikely anyone would pay them to develop that required deep expertise in the first place. More likely, that work would be done at an AI foundation model company -- but it would be done just once, and then incorporated into the training process.
I take certain medications--nothing interesting, nothing controlled, nothing abusable. I have to deal with a whole rigmarole just to get refills, because my PCP forces me to come in every time--and even that is now just an annoying telehealth call.
In Mexico, for meds like mine, you can just buy them at the pharmacy. There's no reason for all this nonsense.
(Edit: the same PCP refused to prescribe GLP-1s early, without any scientific or medical reason not to. That delayed my weight loss by months until I found a place that would.)
Sora was just a bad product, and the pivot in Sora 2 to be some kind of weird social tool with leaderboards and whatnot was a mistake that probably seemed like a good idea to somebody who wasn't good at their job.
Product management at OpenAI appears listless and flaccid and out-of-touch.
More serious tooling and improvements to the original storyboard interface, better ways of doing clip management, better interfaces period would've probably helped adoption...but instead, they settled for some weird pastiche of social stuff with "characters" and other crap.
"Smart" is something you do, not something you are. People with very large amounts of raw intelligence fall down some very dumb intellectual rabbit holes--it's practically a meme: https://www.smbc-comics.com/comic/2012-03-21
Having raw intelligence doesn't help if you don't apply rigor to your thinking. I suspect that very successful people end up falling into habitual mental shortcuts that lead them to promote stupid things later on.