Hacker News | kurthr's comments

See, there's your problem, introspection.

https://youtube.com/shorts/b6Zw50f5jJk


Wow. I thought you were being snarky, but he actually says to skip all kinds of introspection, reflection, and even therapy.

I don't mean to derail (and thank you for the - horrifying - link), but why the hell is that a "short"? It would have been much better as a normal video (with time controls).

Because YouTube wants to be TikTok nowadays.

No way... that's the most retarded thing I've heard in a while, and I do read international news.

Introspection is basically THE core mechanism for learning. That's HOW one learns on any topic. It's not a "wishy-washy hippie feeling" (being provocative here); rather, introspection is (and to be fair I checked https://plato.stanford.edu/entries/introspection/ just to make sure I wasn't talking out of my own ass) precisely looking at your inner workings: how you function, IN ORDER to do better. You notice flaws, inefficient behaviors, things you enjoy, etc., and THEN you act on it.

Having no introspection is like doing math without verifying. It's like coding without compiling or linting, or executing it without ever looking at the output.

So dumb it hurts.


Jesus. Yeah, that explains a lot.

By the way, his notion that introspection is an "invention of the 1920s" is historical bullshit. I think he's taking potshots at psychotherapy? Whatever, man, but then do that. It's not like a Freudian concept of the self is beyond criticism - far, far from it - but using that to interdict "introspection" is just sloppy thinking.

Anyway, leaving aside anything else to be said on the topic, the idea that "great men of history don't introspect" is utter bullshit. I'll see your Abraham Lincoln, and raise you Marcus fucking Aurelius.

So, if what you really want to say is that "most 'great men of history' were sociopaths", then, well, yeah: you're probably onto something. If your next thought is "and I want to be like them", then that's 1) a pretty damning confession, and 2) also evidence that you, sir, aren't actually a sociopathic "great man" at all, just an insecure nerd who got lucky a few times and is now getting high on his own farts.


What is wrong with his upper respiratory system?

Can LLMs convince a human who has power over each and every one of those things to use them for an (unstated) prompt's goal?

Yeah, probably over 50% of the population already, and if not, many of the rest soon.


It's fairly hilarious, in a dangerous way, how confident people are that neither they nor their boomer parents could be fooled by a persistent LLM with access to their mail, texts, and voice, and to those of their co-workers and supervisors. Social engineering attacks have always been a weak point, and now they can be combined with other information to target individuals and fake voice/SMS tone.

Look at what happened on r/changemyview. That was over a year ago, using only text, and it not only went undetected but was highly effective at changing opinions.

https://www.reddit.com/r/changemyview/comments/1k8b2hj/meta_...


> probably over 50% of the population

On which end of this split do you place yourself? Most people believe they're smarter than average [0].

And have some more respect for your fellow human, please.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC6029792/


It would be shocking to me if the large model trainers didn't have tools like this to analyze their outputs, but this is interesting work!

You can see who likely (post)trained/distilled their models or borrowed parameters from each other. I do wonder if the 32 dimensions were chosen/named from principal components or pre-selected and designed, but the tool seems like an effective discriminator in any case.

Were the prompts similarly selected for orthogonality? I've also wondered how the different LLMs would respond to an iterative zero-shot loop: summarize response_n to produce prompt_n+1, then use that (zero-shot) to generate response_n+1, and so on. Would it statistically converge on a prompt that's more distinguishable for that particular LLM?
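
Something like this toy loop is what I have in mind (purely a sketch: query_model, summarize, and embed are hypothetical stand-ins for whatever model API, summarizer, and embedding you'd actually plug in):

    import numpy as np

    def cosine(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def iterate_prompts(query_model, summarize, embed, seed_prompt,
                        n_rounds=10, tol=0.99):
        """prompt_n -> response_n -> summary -> prompt_n+1, zero-shot each round."""
        prompt, history = seed_prompt, []
        for _ in range(n_rounds):
            response = query_model(prompt)      # zero-shot: no chat history carried over
            next_prompt = summarize(response)   # summary of response_n becomes prompt_n+1
            history.append((prompt, response))
            # crude convergence check: successive prompts nearly identical in embedding space
            if cosine(embed(prompt), embed(next_prompt)) > tol:
                break
            prompt = next_prompt
        return history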


Actually, you just described most of the telehealth and compounding pharmacies that carry GLP-1s!

Where do you think Hims, Ro, Brello, or the rest get the APIs they sell to their customers? They get them from grey-market suppliers in China. They don't go to Eli Lilly or Novo Nordisk and say, "please, sir, may I skirt around your IP and sell your drugs for 10x what they cost instead of 10,000x what they cost?" Hopefully they test them, filter them, and use sterile/pharma processes for what they sell to their customers. Well, except for the med spas; those are just wild-west snake-oil farms.


This actually isn't true. Hims compounded the GLP-1s themselves. They broke/are breaking the law. There are lawsuits.


We need to change the law so that the crime is selling mislabeled or contaminated drugs. If you can cook it well enough in your kitchen, you should be allowed to share it with a neighbor in need.


They compounded them, which just means buying the API, reconstituting it, adding another compound (vitamin B6), and putting it in sterile vials.

They did not make the peptides. They sourced them from China.


Things have changed a little, but during the time that compounding was explicitly allowed, the licensed pharmacies were buying from FDA-approved manufacturers, sometimes in China, and sometimes the same manufacturers who also do contract manufacturing for Lilly.

Today... who knows? It might just be the same grey-market stuff we plebes can get.


Actually, at this moment, the top 3 parent posts are all about how people aren't responding positively enough to this event. I think it's really cool, and more people would be more excited if there weren't so much else going on. To be fair, I already had the conversation this weekend that the late '60s and '70s were also quite fraught.

Maybe we really have just been jaded by hours of YouTube and TikTok shorts? I watched it on a 9" B/W CRT and I was amazed! Of course, I hadn't seen 2001, Star Wars, Contact, or The Expanse.


The difference seems to be that Nixon may have been crooked, but he was largely competent. He operated on experience, expertise, and causal reality. Our current political situation is largely free of facts, knowledge, or causality. Much of the corruption that happens today is in plain sight and basically ignored. The goal is governance through depoliticization and post-truth infotainment.

Note that Nixon was on track to be impeached with the support of his own party, and he would have been removed for something that would now be a single day of news cycle, covered by only a few networks/papers and completely ignored by a major political party.


[flagged]


Nixon was on track to be impeached, convicted, and thrown in jail. The people were demanding it. His resignation was basically a "you can't fire me, I quit!" moment. Ford's pardon of Nixon was and remains controversial.


So... he wasn't impeached. Thanks for interrupting.


Yep, nothing by even a subset of those authors. The closest paper from that conference:

"Rethinking Attention-Model Explainability through Faithfulness Violation Test" by Yibing Liu, Haoliang Li, Yangyang Guo, Chenqi Kong, Jing Li, Shiqi Wang

https://proceedings.mlr.press/v162/liu22i.html

https://icml.cc/virtual/2022/spotlight/18082


The losing move is using missile interceptors.

Whether it's high-altitude drone swarms, terminally guided artillery munitions, hypersonic railguns, or high-energy laser defenses, all are orders of magnitude cheaper than the interceptors and could cost less than the (nuclear?) missile itself. It's true that generically defending against nukes is basically a fool's errand, but if they're (also stupidly) limited to putting them on ICBMs with non-detonating fail-safes, then it's probably economically doable and cheaper than the $10T forever war.

I'm sorry, the whole framing of this (OP) question/answer seems artificial and fundamentally silly.


It looks like a lot of them are missing something big. I'd think the two big ones are the evaporative cooling as you pour into the cup, and heating up the cup itself (by convection). The convective cooling to the air is tertiary but still important (and conduction from the mug to the table probably isn't completely negligible). If there's only one exponential, they're definitely doing something wrong.

I'd like to see a sensitivity study to see how much those terms would need to be changed to match within a few %. Exponentials are really tweaky!


Is that what that first drop is? The cold cup stealing heat from the coffee?


It's a mix, of course, but I think it should be mainly that and evaporative cooling. Evap is _very_ effective but falls off rapidly as you get away from boiling. The conduction into the mug will depend a lot on the mug material, but it will slow down a lot as the mug approaches the water temperature.

I'd be very interested in seeing separate graphs for each major component and how they add up to the total. Even asking the LLMs to separate it out might improve some of their results; that would be interesting to try too.
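
Something like this back-of-the-envelope sketch is what I'm picturing. Every coefficient below is a made-up placeholder (the mass, heat capacities, and k values are all assumptions), and the crude exp() term just stands in for evaporation falling off fast below boiling; the point is the structure (several coupled loss terms rather than a single exponential), not the numbers:

    import numpy as np

    T_air = 22.0        # ambient temperature, deg C (assumed)
    T_c   = 88.0        # coffee temperature after pouring, deg C (assumed)
    T_m   = 22.0        # mug starts at room temperature
    C_c   = 1250.0      # heat capacity of the coffee, J/K (~0.3 kg of water, assumed)
    C_m   = 300.0       # heat capacity of the mug, J/K (assumed)

    k_air  = 0.6        # convective/radiative loss to the room, W/K (assumed)
    k_mug  = 3.0        # coffee-to-mug transfer, W/K (assumed)
    q_boil = 30.0       # evaporative loss at boiling, W (assumed)

    dt = 1.0                                   # 1 s steps
    times = np.arange(0.0, 1800.0, dt)         # 30 minutes
    temps, losses = [], {"air": [], "mug": [], "evap": []}

    for t in times:
        q_air  = k_air * (T_c - T_air)                  # cooling to the air
        q_mug  = k_mug * (T_c - T_m)                    # heating the mug; dies off as it warms
        q_evap = q_boil * np.exp((T_c - 100.0) / 15.0)  # crude vapor-pressure-like fall-off
        T_c -= dt * (q_air + q_mug + q_evap) / C_c
        T_m += dt * q_mug / C_m
        temps.append(T_c)
        for name, q in zip(losses, (q_air, q_mug, q_evap)):
            losses[name].append(q)

    # temps and the per-channel losses[...] arrays can then be plotted separately
    # to see how the components stack up, and the k values perturbed for a
    # sensitivity check.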


Yes, since they didn't explicitly list the evaporative cooling when the coffee was poured into the cup, I suspect it was not included (as if the coffee started in the cup). That means the starting temperature is off, which screws up all the other calculations.

The evaporative cooling as you pour into the cup happens when the coffee is at its highest temperature and has the most surface area, even though it only takes a few seconds. One could test this either by including it explicitly in the requested calculation, or by putting the fill spout directly at the bottom of the cup when filling.


I agree and disagree with different parts of this article. I've read/seen a lot of the source documentation, so I think there's plenty of hyperbole, even if there's a nugget of truth.

   Because the “The AI Bubble Is 17 Times the Size of the Dot-Com Frenzy — and
   Four Times the Subprime Bubble” (oh, and also, there is also a new subprime
   bubble—and it’s already collapsing, which will make all of this worse).
It's not 17 times the size of the Dot-Com bubble, certainly not scaled to total market valuation. Much of the money that has been "promised" has not been delivered, and there is a LOT of circularity to the funding, but it's not $trillions yet. Many of the companies are still private, haven't gone up much, or are only fractionally floated, so the numbers look big. But it doesn't look like there's a huge moat, and it's going to be expensive to pay for those training servers with inference. The depreciation is all wrong, they're not in the right places, and power consumption isn't optimized. TSMC is probably gonna sell more chips to make it so.

At the same time, it's great to be a user of a "free" product. LLMs work as well as Google search used to! It's great! You can't believe everything you read on the internet, but if you know enough to verify it, it's incredibly useful. If you build an OpenClaw footgun, you deserve the consequences, even if your other victims probably don't. Will the "AI" companies end up paying for it all by exfiltrating their customers' data? Facebook and Google did.

