Hacker News | captainclam's comments

Author really should have figured out a better word than "vanity."


Feels like a pretty tidy parallel to luxury beliefs. Luxury activities would fit, especially since some of these are the activity equivalent to the belief.


Why?


It looks to me like OpenAI's image pipeline takes an image as input, derives the semantic details, and then essentially regenerates an entirely new image based on the "description" obtained from the input image.

Even Sam Altman's "Ghiblified" twitter avatar looks nothing like him (at least to me).

Other models seem much more able to operate directly on the input image.


You can see this in the images of the Newton: in GPT's versions, the text and icons are corrupted.
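The lossy round trip described in this thread can be sketched in toy form. This is purely an illustration of the hypothesis, not OpenAI's actual pipeline; every function and field here is made up:

```python
# Toy sketch of a "describe, then regenerate" image pipeline.
# The caption step keeps coarse semantics but drops pixel-level
# detail (exact text, icons, a specific face), so the regenerated
# image matches the scene but corrupts the fine detail.

def caption(image: dict) -> str:
    # Lossy semantic summary: only the scene description survives.
    return image["scene"]

def regenerate(description: str) -> dict:
    # A brand-new image consistent with the description alone;
    # everything the caption didn't capture gets made up.
    return {
        "scene": description,
        "text": "<hallucinated>",
        "icons": "<hallucinated>",
    }

original = {
    "scene": "Apple Newton on a desk",
    "text": "Assist",
    "icons": "Notes, Dates, Extras",
}
roundtrip = regenerate(caption(original))

print(roundtrip["scene"])  # preserved: "Apple Newton on a desk"
print(roundtrip["text"])   # lost: the original said "Assist"
```

A model that edits the input image directly, rather than round-tripping through a description, would have no reason to corrupt the text and icons.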


Isn't this from the model working on really low-res images, which are then being upscaled afterwards?


You must not end up reading much scientific literature then.


lol


The two dogs I know that share this behavior are border collies.


I have a border collie from a non-working line, and he has had zero interest in chasing a ball his whole life. All he ever wanted to do was run or chase birds. He failed all his training, and didn't even get through puppy preschool, as he's not that food motivated either.

Every cattle dog I have known has been ball-obsessed.


The seahorse emoji is one of the canonical "Mandela effects". These are things that a large group of people collectively (mis)remember, but that turn out to have never existed. Classic examples include the cornucopia in the Fruit of the Loom label (never there), and the wording on car mirrors, "objects in the mirror may be closer than they appear" (there's no record of 'may be closer', just 'are closer').

Unfortunately, the discussion around Mandela effects gets tainted by lots of people being so sure of their memory that the only explanation must be fantastical (the timeline has shifted!), giving the topic a valence of crazy that discourages engagement. I find these mass mis-rememberings fascinating from a psychological perspective, and lacking satisfying explanation (there probably isn't one).

So here we're seeing LLMs "experiencing" the same Mandela effect that afflicts so many people, and I sincerely wonder why. The obvious answer is that the training data contains lots of discussion of this particular Mandela effect, i.e. people posting online, "where is the seahorse emoji?" But those discussions are almost necessarily coupled with language establishing that no, the seahorse emoji does not exist; that's why the discussion is there in the first place. So why does the model take on the persona of someone who is sure it exists? And why does this steer the model into such a weird feedback loop?


I've always been surprised by the official homeless population count, but it turns out there's a lot more to it.

HUD (the Department of Housing and Urban Development) generates this ~771K figure from a "point-in-time" estimate: a single count from a single night, performed in January. They literally have volunteers go out, count the number of homeless people they observe, and report their findings.

It's not hard to imagine why this is probably a significant undercount. There is likely a long tail of people who happened to be somewhere they couldn't be counted that night (e.g. somewhere secluded, or sleeping in a friend's private residence).

Even if these numbers are correct, to my mind a "crisis" is characterized more by the trend than by the absolute numbers. From the first link you provided, we saw a 39% increase in "people in families" experiencing homelessness, and 9% in individuals. A resource from HUD itself suggests a 33% increase in homelessness from 2020-2024, and an 18% increase from 2023-2024. That far outpaces general population growth.

https://www.huduser.gov/portal/sites/default/files/pdf/2024-...

And even then, I would say many people would suggest that the change in visible homelessness they've experienced in the last 10 years would amount to "crisis" levels, at least relative to the past.

It's completely fair to argue that it is not in fact a crisis, but the claim that it is one is certainly not "baseless."


It's kind of wild that they pick maybe the coldest month of the year to do this. You'd think that would be when people are most likely to find some sort of way of avoiding direct exposure to the open air, even if it's extremely short term.


Wow, there really is an xkcd for everything.


Those are original cartoons drawn in the style of xkcd. But strangely enough, in the second cartoon, the Megan clone seems to change from a bare stick figure to suddenly wearing clothes?

I'm not sure if the comic was AI-assisted or not. AI-generated images do not usually contain identical pixel data when a panel repeats.


The script is uncanny as well. My guess is the author used AI to generate the panels/dialogue, then stitched together cutouts from real xkcd comics over the top of the AI-generated panels here and there. That exact shape of the heads is too close to not be a copy-paste job, but other variations suggest AI involvement. The desk gets rotated in the third panel of the first comic, the female character in the second comic gets clothes out of nowhere, etc.

Regardless of how the author made the comics, they're very weird.


My read is that the author is saying it would have been really nice if there had been a really good protocol for storing data in a rich semantically structured way and everyone had been really really good at adhering to that standard.

Is that the main thrust of it?


It's very easy to imagine a world where all these things are solved, but it is a worse world to live in overall.

I don't think it is "bad" to be sincerely worried that the current trajectory of AI progress represents this trade.

