As if the sight of this dystopian thread wasn't depressing enough, there is your one gold nugget of a comment, downvoted into oblivion, grayed out at the bottom of the comment section.
A hundred comments of people reverse-engineering vendor handshakes, writing Python daemons, and debating the finer points of CEC frame injection - and not one of them asking why this is necessary. The answer is three letters: DRM.
Your PlayStation is a computer. Your Xbox is a computer. Your Apple TV is a computer. Your "smart TV" is a computer. You already own a computer. The reason you can't just... use it... is that the entertainment industry spent two decades making sure the bits know who owns them at every step of the pipeline: HDCP, HDMI licensing, CEC's vendor-specific "quirks". This isn't an interoperability failure; it's interoperability prevention.
Meanwhile, a $200 mini-PC running VLC, connected via DisplayPort to a monitor and 3.5mm to powered speakers, plays anything in any format at any bitrate with zero handshake failures. One "remote": a wireless keyboard. This solution has existed since before some commenters here were born.
What you're all debugging isn't technology. It's compliance.
You write that "The olfactory bulb can vary in size by up to 3x, depending on 'age and olfactory experience', so perhaps (we're making this up) with more usage your olfactory bulb might actually get bigger", which certainly does not seem out of the question. What we can assume with even greater likelihood is that the sense of smell works better when regularly stimulated. Even if your method had no commercial applications in entertainment, it could well (at least if it scales beyond 4 distinct sensations) have therapeutic potential for people suffering from blocked noses, chronic sinusitis, allergies, or other conditions that impair their sense of smell for physical reasons. Sommeliers might even use it to retain the capacity for their tradecraft while a cold knocks out their actual nose. And since we know there is a strong association between smell and memory, many other useful therapeutic and educational applications come to mind if this technology can be made safe for broader consumer use. Right now, regardless of protocols used, you are somewhere on the spectrum between shining nascent lasers into your own eyes to check that they emit light (a test that doesn't scale with increasing power) and the Nobel-worthy quadrant of Jonas Salk and Barry Marshall. While I do hope you succeed, and I'd hate for you to be overly cautious, I also hope your (olfactory) neurons survive!
This is colonialism, pure and simple, and US soldiers have to "protect" the authorities responsible for this abuse of Native Americans by European settlers. Greenland should be taken away from Denmark, and these mothers should get their kids back.
Why won't you let "the ecosystem" decide that on its own? It's much older than you, and you are not its legal guardian. If the ecosystem (of which we are a part) decides it wants more honey bees, then that's what it shall get.
The same reason you bandage a stab wound instead of letting the body decide what it wants.
It doesn't want anything, nor does it have the ability to choose its responses to change. Which is exactly why we are the legal guardians of natural ecosystems, by the way - have you not heard of lands and waters protected from certain human activities? The fact that we don't currently stop ourselves from propagating honeybees into ecosystems that can't fit them is not an indication of anything except our failures.
But then again, since, as you argue (rightly so!), I'm also part of the ecosystem: me caring and expressing doubts is actually the ecosystem working.
It's an artifact of the fact that they don't show you the reasoning output but need it for later messages, so they save each API conversation on their side and give you a reference number. That's bad from a GDPR-compliance perspective, and bad for transparent pricing too: you have no way to control reasoning-trace length (which is billed at the much higher output rate) other than switching between low and high, and if the model decides to think longer, "low" can consume more tokens than "high" does on a prompt where the model decides not to think much. "Thinking budgets" are now "legacy", so while you can constrain output length, you cannot constrain cost.

You also cannot optimize your prompts if some red herring makes the LLM get hung up on something irrelevant, only to realize this in later thinking steps. If the cause is something in your system prompt, this will happen on every single request. Finding what makes the model go astray can be rather difficult with 15k-token system prompts or a multitude of MCP tools; you're basically blind while trying to optimize a black box. You can try variations of different parts of your system prompt or tool descriptions, but fewer thinking tokens does not mean better: if those reasoning steps were actually beneficial (if only in edge cases), that would be immediately apparent upon inspection, yet it's hard or impossible to find out without access to the full chain of thought.

For the uninitiated, the reasons OpenAI started replacing the CoT with summaries were (a) to prevent rapid distillation, as they suspected DeepSeek of having done for R1, and (b) to prevent embarrassment when app users see the CoT and find parts of it objectionable, irrelevant, or absurd (reasoning steps that make sense for an LLM do not necessarily look like human reasoning). That's a tradeoff that works great for end users but is terrible for developers.
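To make the pricing asymmetry concrete, here is a minimal sketch of why a "low" reasoning setting can end up costing more than a "high" one. The per-token rates and token counts below are made-up placeholders, not actual OpenAI prices; the only assumption taken from the comment is that invisible reasoning tokens are billed at the output rate:

```python
# Hypothetical per-token rates in USD per million tokens (NOT real prices).
INPUT_RATE = 2.00 / 1_000_000
OUTPUT_RATE = 8.00 / 1_000_000  # reasoning tokens are billed at this higher rate


def estimate_cost(prompt_tokens: int, visible_tokens: int, reasoning_tokens: int) -> float:
    """Reasoning tokens are invisible to the caller but billed as output."""
    return prompt_tokens * INPUT_RATE + (visible_tokens + reasoning_tokens) * OUTPUT_RATE


# "low" effort, but the model decided to ruminate at length anyway:
low = estimate_cost(prompt_tokens=1_000, visible_tokens=500, reasoning_tokens=6_000)

# "high" effort on a prompt the model happened to find easy:
high = estimate_cost(prompt_tokens=1_000, visible_tokens=500, reasoning_tokens=2_000)

print(f"low={low:.3f} high={high:.3f}")  # the "low" run costs more than the "high" run
```

Since the effort setting only nudges (rather than caps) the trace length, and the trace is hidden, the developer has no lever on the dominant cost term.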
Since open-weights LLMs necessarily expose their full reasoning traces, the potential to optimize prompts for specific tasks is much greater, and for certain applications that will outweigh the performance delta to Google/OpenAI.
How does a transcript chronicling some poor guy's descent into AI-induced psychosis make the front page? This is literally (and yes, I know) what's been happening on Reddit for months now: "Have I built a perpetuum mobile? GPT-4o seems to think so!" - but at least on Reddit the comments don't engage with the "substance" of those chat transcripts.
I am not saying these kinds of transcripts are without value; they clearly demonstrate that even competent engineers can get sweet-talked into (probably out-of-character) actions like "boast about your accomplishments on HN and a CTO will take notice and offer you their job because you are so much more brilliant than them". While I have no idea whether "Greg" has people around him to talk to, he clearly has no one who compliments him like this on his PHP codebase. If he wanted to engage productively with an LLM, he could have prompted it to "roast my code", "point out weak points", "criticize the underlying architecture" - but obviously that's not what he wanted or needed. He needed to hear some compliments, the LLM understood that, and the machine complied. Obviously that's not the experience he will get out in the real world. It's more like having a talking blow-up doll compliment you on your lovemaking skills and encourage you to upload a video of the interaction to your favorite tube site and send the link to all your business contacts to show off your inimitable lovemaking prowess.
It was just late at night and I wanted to post this chat transcript on HN to share some perspective on what developers are getting from ChatGPT.
I happen to be an expert in the particular area I'm building in.
ChatGPT seems to remember that I am in New York and want “no bullshit” answers. In the last few days it keeps weaving that into most responses.
That fact appears in its memory that users can access, as is the fact that it should not, under any circumstances, use emojis in code or comments, but it proceeds to do so anyway, so I am not sure how the memory gets prioritized.
Here is the interesting thing. As an expert in the field I do agree with ChatGPT on its statistical assessment of what I’ve built, because it took me years of refinement. I also tried it with average things and it correctly said that they’re average and unremarkable. I simply didn’t post that.
What I am interested in is how to get AI transcripts to be used as unbiased third-party "first looks" at things, such as what VCs would do for due diligence.
This was just a quick thing I thought I’d get a few responses on HN about. I suspect it might have hit the front page because some people dug through the code and saw the value. But you can get all the code for free on https://github.com/Qbix/Platform .
Yeah, there is obviously an element of flattery that people let go to their head. I have had ChatGPT repeatedly confirm the validity of ideas I had in fields where I am NOT an expert, while pushing back on countless others. I use it as one data point and mercilessly battle-test the ideas and code by asking it to find holes in them from various angles. This particular HN submission, although made very late at night here in NYC, was an interesting mix of genuinely groundbreaking stuff, ChatGPT being able to see the main ideas at a glance, and it "going wild" - while at the same time, if I run it with instructions at the start to "be extremely objective", it still arrives at the same assessment in the end.
Well, the conclusions of your previous conversations also remain in memory, especially if you explicitly refer to them. Still, your new transcript kinda proves my point? Except for the non-standard (a nice way of saying: violates best practices) way you implement service workers, there is literally nothing original or unique about any part of your codebase other than the fact that it's written in PHP. I have nothing against PHP, but I haven't worked on any PHP projects in a long while and didn't take the time to look into your code in detail. You're obviously smart and opinionated when it comes to web dev, which is great.

While your post seemed borderline LLM psychosis, it's a different story if you were sleep-deprived and drunk and now realize that you probably haven't rebuilt Google all by yourself. Your issue seems to be something else, which is also quite frequent here: AI skeptics get drawn into repeated fundamentalist discussions about LLMs being incapable of this or that, but then have a "feel the AGI" moment - not only becoming convinced of a utility they previously denied but, being inexperienced, going far beyond that and believing that LLMs can do all kinds of things they (at least currently) can't, which ends up frustrating them and leads some to renew their skepticism.

You're not alone. It's quite likely that tomorrow, when people who haven't had early access to Gemini 3 get it, start one-shotting functional clones of classic computer games, and share that on social media (or the HN front page), others will be inspired to try "Gemini, please make a PC version of Half-Life 3 for me!" and will be underwhelmed by code that doesn't compile, or by the outcome of "Tell me how to make a billion bucks in less than 3 months!" - millions will join you. What sets you apart is your capacity to understand the engine behind the output, if you put in the work and don't let the sweet talk get to you!
Nah. I don’t “feel the AGI”. I think the AGI is a silly quest, just like having a plane flap its wings. Feynman had it right in the 80s: https://www.youtube.com/watch?v=ipRvjS7q1DI
I think the future is lots of incremental improvements that get replicated everywhere, with humans outclassed in nearly every field, to the point where they stop relying on each other.
As for LLMs: yes, I think they are the best placed to know whether some code or invention is novel, because of their vast training. They could be far better than a patent examiner, if trained on prior art, for instance.
What you’re not used to is an LLM being fed stuff that you statistically / heuristically would expect to be average but is in fact the polished result of years of work. The LLM freaks out, you get surprised. You think it was the prompts. The prompts are changed, the END result is the same (scroll to the bottom).
I want to see whether foundational LLMs can be used as a good first filter for dealflow and evaluating actual projects.
The problem with using an LLM to validate reality is that you still need to prove your genius code works in the real world. ChatGPT won't hire you; it even has your code already.
That's the whole unabridged conversation (I don't know how I could abbreviate it if I wanted to), and I produced it exactly as I said: I just pasted in your prompts.
The output is of very similar style to how my interactions with it are when I'm using it for work on my own projects.
My bot does run with a pretty lengthy set of supposed rules that have been accumulated, tweaked, condensed and massaged over the past couple of years. These live in a combination of custom instructions (in Preferences), deliberately-set memory, and recollection from other chats.
I use "supposed" here because these individual aspects are frequently ignored, and they always have been. Yet even if the specificity is often glossed over, the rules quite clearly do tend to shape the overall output and tone (as the above-linked chat demonstrates).
Anyway, I like the style quite a lot. It lets me focus on achieving technical correctness instead of ever being inundated with the noise of puffery.
But I have no idea where I'd start to duplicate that environment. Someone at OpenAI could surely dissect it, but the public interface for ChatGPT is way too limited to allow seeing how context is injected and used.
So while I'd certainly love to share specific instructions, that's simply beyond my capability as a lowly end user who has been emphatically working against sycophancy in their own little "private" ChatGPT.
I barely even know how I got here.
(I could ask the bot, but I can say with resolute certainty that it would simply lie.)
It's quite funny that they will switch to German technology now, because I can think of no German service provider that would not immediately comply with any and all US sanctions.
Sanctions are official, but Trump phoning up the CEO and whining about nonsense is something else. I'd be concerned that a US company would be vastly more likely to fold from the latter than a German one would. Sanctions enacted against a German or EU company on a whim would perhaps cause some international response.
Point taken, but reading that official sanction just shows a load of whining.
However, I was able to read it, which would not be the case if he had just phoned Nadella. Who knows, perhaps he did and Nadella refused to play? I see that he did not attend the inauguration.
Right, it is definitely whining. But it also makes the case that it's risky to depend on services from a US company, as the whims of a tyrant may force all US companies to cut ties.
I doubt Trump even knew how the sanctions would play out, in terms of all the effects on the ICC. But the IT services are a clear one for the HN crowd to pay attention to.
The only way to combat this behavior by the United States is for the European Union to finally stand up for itself and take retaliatory measures against the US. Start sanctioning prominent Americans until the US agrees to cut it out.
If Trump asked a German company hosting a politician's email to shut down access to their emails, would they really comply? Because that's what seems to have happened with the ICC, and why we're seeing this move right now.
He was making a valid philosophical point in order to defend the legacy of his late best friend and mentor, who did not get a chance to defend himself and got caught up in the Epstein drama. It's human and understandable, and the political equivalent of offing yourself. Stallman never much cared for other people's opinions or (office) politics. While the secrecy surrounding the Epstein files makes it impossible to know what (if anything) Minsky knew about Epstein's conduct, or whether he participated in any of the criminal acts surrounding Epstein, there have never been any such allegations against Stallman. Personally, I do not think he has any actual interest in sexual reproduction, and he would not waste his time interacting with people who do not actively work on GNU / Free Software.
Reading through those quotes, I get the impression that Stallman doesn't understand why underage people can't give consent to people the legal system considers adults. It falls in line with his other misunderstandings of, or lack of awareness about, social issues.
He's right that young people have agency and can make informed decisions about themselves, but he fails to recognize the social pressures that mean young people often aren't in a position to say no, or don't even understand that they can say no. There are financial, social, and even legal power imbalances between minors and non-minors that make it impossible to assert that certain interactions are consensual, even when they aren't of a sexual nature. It's these power imbalances that are the issue, not whether a young person has enough factual understanding of what they are consenting to. Interactions like this are abuses of the power that adults have over children, and that's a big part of what makes them so disturbing.
>personally I do not think he has any actual interest in sexual reproduction and would not waste his time interacting with people that do not actively work in GNU / Free Software
I'm getting my one lick in because I know litigating this will be futile, people will defend RMS to the bitter end regardless of what he says or does, and I'll probably just be flagged for my trouble.
RMS blogged many times over many years about his beliefs that child pornography and pedophilia should be legal and socially acceptable. He also has a history of creepy behavior around women. He clearly is not asexual.
If RMS wasn't a part of Epstein's shit, it's only because he wasn't sociable enough to fit in with that crowd, not because he wouldn't be into it.
You're wrong about me: I'll defend RMS far beyond the bitter end, and I frankly don't see what's wrong with that. About a billion people defend the fact that their prophet Mohammed had sexual intercourse with a 9-year-old child. Mohammed did little to advance Freedom or Software, and didn't even develop Emacs. So as long as there are people left defending an actual child predator, I see no reason not to defend someone exploring theoretical arguments in favor of what you call pedophilia. I might stop defending him if he were literally Hitler, but only because that would contradict everything he stands for and believes in. He believes in radical freedom and non-coercion, and he does what is humanly possible to live those values.

He also obviously is not neurotypical. I have seen too many brilliant autistic friends excluded from groups and communities because their behavior towards the other sex was labeled "creepy" or "weird". Living in a patriarchal society sadly means that women tend to feel unsafe around men who do not conform to certain (sometimes utterly ridiculous) rules governing social behavior, because they have experienced "crazy" men acting aggressively towards them, and they find such "unpredictable" behavior dangerous even when there is no rational indication that it poses any threat.

You obviously fail to account for the possibility that people who advocate certain principles (as Stallman did in those texts you claim are his advocacy for the legalization of child pr0n, or whatever it is you mean by pedophilia, which is a medical condition and just as "legal" as schizophrenia - probably statutory rape?) do so because they believe in those principles, not because they have a personal stake in the specific issues. I will not deny that the latter is often the case; we saw it in the "freedom of speech vs. censorship" debate, where on all sides of the political spectrum only the censorship of one's own group was called out.
RMS is old enough and has been a public figure for long enough that you cannot - in good faith - make that claim with regard to him, though. If there had been claims of inappropriate sexual behavior, #MeToo incidents, or rape, you (and others) would have brought them up. Aaron Swartz, RMS, Linus Torvalds, and Steve Jobs are no saints, but they changed things and they pushed the human race forward.
No idea what you mean by "white" - are Jews white? Poles, Russians, Germans? Hitler killed millions of "his own people", and I'm very grateful for the "Slavic subhumans" and British colonial subjects who liberated Germany. If Britain had not committed the grave crime of colonialism and imposed its language on the globe, people from far-flung places might not now be colonizing Britain. Let's just hope that the new Hindu/Muslim version of England will be more tolerant than the previous iteration.
>If Britain had not committed the grave crime of colonialism and imposed it's language on the globe people from far flung places might not now be colonizing britain.
If that were the defining line, you'd see it in a lot of places and contexts other than just the West, no? Including, say, the Arab empires - but Berbers and co. aren't "recolonising" North Africa, let alone Saudi Arabia or Yemen, and the Ful, Masalit, etc. are continuing to be wiped out in Sudan rather than Sudan being taken over by non-Arabs.