habinero's comments

Hahahahaha, I wish, but no. That's not how it works.

I take Adderall and literally fall asleep because it lets my brain shut down. It _decreases_ my anxiety.

I don't get high or hyper off of it, it literally just lets me function enough to do my laundry. It's honestly like wearing glasses.


No? "DEI" has nothing to do with the ADA.

My "I do not have an actual coke problem" shirt is generating a lot of questions answered by my shirt.

I love these kinds of sites, since they're indistinguishable from honeypots. Sure, have my license plate and the information that I'm worried about being watched.

With no other identifying info, though, what can they do with a license plate number in isolation?

> With no other identifying info, though, what can they do with a license plate number in isolation?

For typical users not taking extra precautions, visiting a page in a browser provides additional identifying info. That fact is what monetization of the free-as-in-beer web relies heavily upon, but it can be leveraged in other ways, e.g., by a site that draws you in with privacy fears as a technique to get you to submit additional information that can be correlated with it.
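A minimal sketch of the point above: even a plain page visit hands over correlatable signals. This uses standard WSGI/CGI environment keys; the request values shown are hypothetical.

```python
# Sketch: the identifying signals a lookup site sees from an ordinary
# browser visit, pulled from a WSGI request environment. The field
# names are standard WSGI/CGI keys; the sample values are made up.
def identifying_info(environ):
    """Collect request fields a site could correlate with a submitted plate."""
    return {
        "ip": environ.get("REMOTE_ADDR"),
        "user_agent": environ.get("HTTP_USER_AGENT"),
        "referer": environ.get("HTTP_REFERER"),
        "cookies": environ.get("HTTP_COOKIE"),
    }

# Hypothetical request environment for illustration:
info = identifying_info({
    "REMOTE_ADDR": "203.0.113.7",
    "HTTP_USER_AGENT": "Mozilla/5.0 (X11; Linux x86_64)",
})
```

Any one of these fields alone is weak, but together with the submitted plate they make the "license plate number in isolation" assumption false for most visitors.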


In some states, like Michigan, you can request owner information (including address) with an in-person SOS visit and $15 a plate. I've always thought this should be PII and shouldn't be allowed on reddit, for example, where PII is banned. Post a driver with their plate visible in Michigan and you may have doxxed them.

> Some states, like Michigan, you can request owner information (including address)

If the car is leased, wouldn’t this just give leasing company details?


Most people park at their home and many drive to work. If you have both of those data points, you can identify people.
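A toy illustration of why those two data points are so identifying: reduce each person to a coarse (home area, work area) pair and count how many people share each pair. The names and sample data are hypothetical.

```python
# Toy sketch (made-up data): how identifying a (home, work) location
# pair can be. Each person is reduced to a coarse pair, and we count
# how many people share each one; a count of 1 means fully identified.
from collections import Counter

def anonymity_sets(people):
    """Map each (home, work) pair to the number of people sharing it."""
    return Counter((p["home"], p["work"]) for p in people)

# Hypothetical sample: even with coarse areas, most pairs are unique.
people = [
    {"home": "Maple St", "work": "Downtown"},
    {"home": "Maple St", "work": "Airport"},
    {"home": "Oak Ave",  "work": "Downtown"},
    {"home": "Oak Ave",  "work": "Hospital"},
    {"home": "Elm Rd",   "work": "Downtown"},
]

pair_counts = anonymity_sets(people)
unique = sum(1 for n in pair_counts.values() if n == 1)
print(f"{unique} of {len(people)} people have a unique home/work pair")
```

In this tiny sample every pair is unique; with real populations the intersection of a home block and a work block still tends to be very small.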

That's not very useful?

For homeowners, real estate transactions are public, and a majority of white-collar people have LinkedIn accounts.


You're starting with the plate, getting the home, and then you can get the real estate info.

Most people don't expect their identity to be discoverable from their driving.


Wait, really? I feel like this was happening in the 90s. Now every car has a full GPS spy system integrated, to the point I barely trust that my conversation is private in a modern vehicle. But I guess if you think it's just your car company, Android, Apple, roadside assistance, the local police, and probably the music you're playing that can pin your location, you're probably ok.

Isn't that the whole idea of licence plates? So you're identifiable?

So, from home and work, you identify me. Then you figure out which church I attend, and which strip club I attend.

"Wait, user compliance scan identified location traces associated with participitation in community groups prohibited by EasyLife Health™ policy update 2025-12-06b. Recommend to annul contract."

> majority of white collar people have LinkedIn accounts.

What a time to live in!


LinkedIn has always struck me as a kind of contemporary slave management marketplace, only one in which pick-mes try to be the best alpha slave they can be.

The fact that you are linked in, as in a chain, sure does not help with dispelling my impression.


Exactly - you can collect license plate numbers far more easily than this. The best data they can really get is a connection to an IP address.

Checksum?

Sell it to the cops and/or ICE as belonging to "self-identified persons of interest."

Surely this implies that the easiest route to pedophilia is to join ICE

They list their sources; if you care but don't trust them, you could replicate it on your own.

Lmao I got honeypotted in h.s. by one of those 'does your crush like you' astrology sites

I totally understand your sentiment, but you could just check a random assortment of license plate numbers you collected while driving around, which also includes yours. At the very least that would effectively obfuscate your license plate sufficiently that it could not be attributed beyond other methods that likely already have done so.
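The decoy idea in the comment above can be sketched in a few lines: mix your own plate into a shuffled batch of plates you collected, so any single lookup is indistinguishable from the others. The function name and plate values here are hypothetical placeholders, not any site's API.

```python
# Sketch of query obfuscation via decoys: submit a shuffled batch of
# collected plates that happens to include your own, so the service
# can't tell which query is the "real" one. Plates here are made up.
import random

def build_query_batch(my_plate, collected_plates, rng=random):
    """Mix your plate into a pool of decoys and shuffle the order."""
    batch = list(collected_plates)
    if my_plate not in batch:
        batch.append(my_plate)
    rng.shuffle(batch)
    return batch

batch = build_query_batch("ABC1234", ["XYZ9876", "JKL4567", "QRS1122"])
# Each plate in the batch would then be submitted as its own lookup;
# yours is hidden among the decoys.
```

Note this only obscures *which* plate is yours, not the fact that all the queries came from the same IP and browser, so it's a partial measure at best.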

Who isn't worried about being watched? I am certainly not confident the government can tell their ass from their face, so anyone could be suspect.

Sounds like social media ;-)

Lol I actually tried it with my plate, i hope i don't get SWATed

"They're free to do whatever they want with their own service" != "You can't criticize them for doing dumb things"

Yeah. You're not a coder, so you don't have the expertise to see the pitfalls and problems with the approach.

If you want to use concrete to anchor some poles in the ground, great. Build that gazebo. If it falls down, oh well.

If you want to use concrete to make a building that needs to be safe and maintained, it's critical that you use the right concrete mix, use rebar in the right way, and seal it properly.

Civil engineers aren't "threatened" by hobbyists building gazebos. Software engineers aren't "threatened" by AI. We're pointing out that the building's gonna fall over if you do it this way, which is what we're actually paid to do.


Sorry, carefully read the comments on this thread and you will quickly realize "real" coders are very much threatened by this technology, especially junior coders. They are frightened their jobs are at stake because of a new tool and take a very anti-AI view of the entire domain, probably more so for those who live in areas where wages are not high to begin with. People who come from a different perspective truly see the value of what these tools can help you do. To say all AI output is slop or garbage is just wrong.

The flip of this is to understand and appreciate what the new tooling can help you do and adopt. Sure, junior coders will face significant headwinds, but I guarantee you there are opportunities waiting to get uncovered. Just give it a couple of years...


No. You're misreading the reactions because you've made some incorrect assumptions and you do a fundamentally different job than those people.

I legit don't know any professional SWE who feels "threatened" by AI. We don't get hired to write the kind of code you're writing.


> if it was true, the system wouldn't be able to produce coherent sentences. Because that's actually the same problem as producing true sentences

It is...not at all the same? Like they said, you can create perfectly coherent statements that are just wrong. Just look at Elon's ridiculously hamfisted attempts around editing Grok system prompts.

Also, a lot of information on the web is just wrong or out of date, and coding tools only get you so far.


I should've said they're equally hard problems and they're equally emergent.

Why are you just taking it for granted it can write coherent text, which is a miracle, and not believing any other miracles?


"Paris is the capital of France" is a coherent sentence, just like "Paris dates back to Gaelic settlements in 1200 BC", or "France had a population of about 97,24 million in 2024". The coherence of sentences generated by LLMs is "emergent" from the unbelievable amount of data and training, just like the correct factoids ("Paris is the capital of France"). It shows that Artificial Neural Networks using this architecture and training process can learn to fluently use language, which was the goal? Because language is tied to the real world, being able to make true statements about the world is to some degree part of being fluent in a language, which is never just syntax, also semantics.

I get what you mean by "miracle", but your argument revolving around this doesn't seem logical to me, apart from the question: what is the the "other miracle" supposed to be?

Zooming out, this seems to be part of the issue: semantics (concepts and words) neatly map the world, and have emergent properties that help to not just describe, but also sometimes predict or understand the world.

But logic seems to exist outside of language to a degree, being described by it. Just like the physical world.

Humans are able to reason logically, not always correctly, but language allows for peer review and refinement. Humans can observe the physical world. And then put all of this together using language.

But applying logic or being able to observe the physical world doesn't emerge from language. Language seems like an artifact of doing these things and a tool to do them in collaboration, but it only carries logic and knowledge because humans left these traces in "correct language".


> But applying logic or being able to observe the physical world doesn't emerge from language. Language seems like an artifact of doing these things and a tool to do them in collaboration, but it only carries logic and knowledge because humans left these traces in "correct language".

That's not the only element that went into producing the models. There's also the anthropic principle - they test them with benchmarks (that involve knowledge and truthful statements) and then don't release the ones that fail the benchmarks.


And there is Reinforcement Learning, which is essential to make models act "conversational" and coherent, right?

But I wanted to stay abstract and not go into too much detail outside my knowledge and experience.

With the GPT-2 and GPT-3 base models, you were easily able to produce "conversations" by writing fitting preludes (e.g. Interview style), but these went off the rails quickly, in often comedic ways.

Part of that surely is also due to model size.

But RLHF seems more important.

I enjoyed the rambling and even that was impressive at the time.

I guess the "anthropic principle" you are referring to works in a similar direction, although in a different way (selection, not training).

The only context in which I've heard details about selection processes post-training so far was this article about OpenAIs model updates from GPT-4o onwards, discussed earlier here:

https://news.ycombinator.com/item?id=46030799

(there's a gift link in the comments)

The parts about A/B-Testing are pretty interesting.

The focus is ChatGPT as an enticing consumer product and maximizing engagement, not so much the benchmarks and usefulness of models. It briefly addresses the friction between usefulness and sycophancy though.

Anyway, it's pretty clever to use the wording "anthropic principle" here, I only knew the metaphysical usage (why do humans exist).


I can type a query into Google and out pops text. Miracle?

At that speed? Yes. They spent a lot of money making that work.

Because it's not a miracle? I'm not being difficult here, it's just true. It's neat and fun to play with, and I use it, but in order to use anything well, you have to look critically at the results and not get blinded by the glitter.

Saying "Why can't you be amazed that a horse can do math?" [0] means you'll miss a lot of interesting phenomena.

[0] https://en.wikipedia.org/wiki/Clever_Hans


Huh. What I get out of this is you can do corporate espionage for like $20.

In this case, the corporate espionage is all useless culty nonsense, but imagine you could get something that moved stock prices.


That's movies. Ask anyone in the military what "military grade" means.

Some of the time it's there to scare the suits into investing, and other times it's nerds scaring each other around the nerd campfire with the nerd equivalent of slasher stories. It's often unclear which, or if it's both.
