Hacker News | pants2's comments

Also Discord - tons of people use Discord as a social network and keep up with friends. I must have 5 friend groups that have their own Discords with some overlap.

So did you disclose this responsibly? Posting about it publicly first is asking for that sensitive data to be leaked. Might as well hack and repost that PII yourself.

This is not a data leak. They deliberately included 999 of their customers' email addresses in publicly accessible JavaScript code in order to test certain features on them.

Surely they didn't intend to broadcast that to the public? Sounds like a textbook data leak.

> A data leak is the unauthorized, often unintentional exposure of sensitive, confidential, or personal information to an external party, usually resulting from weak infrastructure, human error, or system errors.


Consider medical device software. It's often embedded C code that needs to be rigorously documented and tested, has longer development cycles, and certainly no attitude of "bugs are fine, ship it and we'll patch later."


Doesn't give much information about how they were generated


Is anyone here actually using pro models through the API? I'd be very curious what the use-case is.

Yes. High-value work where cost (mostly) doesn't matter. For example, if I need to look over a legal doc for possible mistakes (part of a workflow I have), it doesn't matter (in my case) whether it costs $0.01 or $10.00, since it's a somewhat infrequent event. So I'll pay $9.99 more, even if the model is only slightly better.

I'm surprised I never hear people talking about using the -Pro variants, even though their rates ($125-175/M?) aren't drastically higher than the old Opus ($75/M), which people seemed happy to use.

Indeed, even just Terms of Service and Privacy Policy work. Infrequent enough that cost isn't an issue, but model quality absolutely is

Yes? The same reason you would use it via the tooling.

And "valet" is supposed to rhyme with "ballot," not "ballet," but you'll still sound like an idiot if you say "take your car to the val-it."


Your Merriam-Webster source has "val-it" as the first pronunciation (though I think in this case both are correct, and "val-it" is less common).

It does... and I've never heard anyone say it that way (and I appreciate that you chose the only dictionary that gave anything close to your argument)... but that's still nothing like "ballot."

Drink some clarit with the valit over a good filit.

Jeeves (the gentleman's personal gentleman) is a valet that would be pronounced VAL-et.

Labs still aren't publishing ARC-AGI-3 scores, even though it's been out for some time. Is it because the numbers are too embarrassing?

Honest answer is that it isn't done running yet. It takes some human bandwidth and time to run, so results weren't ready by this morning. We don't know what the score will be, but will probably go up on the leaderboard sometime soon. I personally don't put a lot of stock in the ARC-AGI evals, as it's not relevant to most work that people do, but should still be interesting to see as a measure of reasoning ability.

(I work at OpenAI.)


GPT-5.5 was just released and OpenAI didn't mention ARC-AGI-3 at all; their score probably sucks.

To be fair, there's not much to report. Isn't it pretty much at 0?

Opus-4.6 with 0.5% currently leads GPT-5.4 with 0.2%[1].

Seems meaningful even if the absolute numbers are very low. That's sort of the excitement of it.

1. https://arcprize.org/leaderboard


Especially these days, you can SSH into a bare-metal server and just tell Claude to set up Postgres. Job done. You don't need autoscaling because you can afford a server that's 5x faster from the start.

You just use Docker.

It's like 4 lines of config for Postgres; the only line you need to change is the path where Postgres should store its data.
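As a minimal sketch of the above (the image tag, password, and host path are illustrative assumptions, not anything from the thread):

```shell
# Run Postgres in Docker, persisting data on a host path of your choosing.
# The only line you really need to change is the -v volume mapping.
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=change-me \
  -v /srv/pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16
```

Point the `-v` line at whatever disk the data should actually live on.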


You also probably want the Postgres storage on a different disk (or set of disks).

Maybe change the filesystem?


For closed-source, I'd expect defenders to have a greater advantage because they can run Mythos on the source code, while attackers only get an opaque API/protocol to try messing with.

There is definitely a closed-source defender advantage where an attacker doesn't have access to the code, binary, or environment that can be instrumented (so basically, anything running in the cloud). But there have been several very effective technical demonstrations of LLM-guided or agentic approaches to assessing the security of closed-source tools, and I have had some successes personally using LLMs with tool use to drive binary analysis tools for reverse engineering closed-source packages.

For many attack scenarios, the real boundary is whether you can establish an effective canary or oracle for determining if a change in input results in a change in output. Once you have that, it's simply a matter of scaling your testing or attack (fuzzing, blind injection, or any number of other attacks that depend on getting signal from a service).
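The canary/oracle idea above can be sketched in a few lines. Here `opaque_service` is a purely hypothetical stand-in for a closed-source endpoint you can only observe as input -> output:

```python
def oracle(service, input_a: str, input_b: str) -> bool:
    """True if varying the input produces an observably different output.

    This is the 'signal' that makes blind testing scalable: once you
    have it, you can drive fuzzing or injection probes in a loop.
    """
    return service(input_a) != service(input_b)


def opaque_service(query: str) -> str:
    # Hypothetical closed-source service: we can't see inside it,
    # only compare its responses to different inputs.
    return "error" if "'" in query else "ok"


# A single quote flips the response: a classic injection canary.
oracle(opaque_service, "id=1", "id=1'")  # True
```

With a real target you'd replace `opaque_service` with an HTTP call and compare responses (status, length, timing), but the oracle itself stays this simple.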


To some extent, yes, but models are good enough at reverse engineering that it isn't as great an advantage as you might think.

The second 4K image definitely has a raccoon on the left there! Nice.
