Hacker News | Taek's comments

I used the $200/mo OpenAI subscription for a while, but cancelled when Gemini 3 came out. It was useful for the deep research credits until the web search GPT got sufficiently good on its own.

Oh for sure. Why are movies scattered all over oblivion? Because there's no simple marketplace for licensing movies, it's a closed market that requires doing lots of behind-the-scenes deals. Healthcare? Only specific providers can make medical equipment, tons of red tape, opaque billing structures, insurance locked out in weird ways, etc.

To understand how healthy a market is, ask 'how easily could a brand new startup innovate in this area'. If the answer is 'not easy at all' - then that thing is going to be expensive, rent seeking, and actively distorting incentives to make itself more money.


No, ads are not the same thing as free speech at all. "Free speech" is the right to say anything to anyone *who is willing to listen*. You don't have a right to come into my home and tell me your ideas about immigration policy - though you do have a right to talk about immigration policy in other places!

The government has to guarantee that there are places for people to say things. But the government does not have to guarantee that there are places for people to say things *in my own home*. And similarly, I think most public spaces should be free from ads and other 'attention pollution'. If a company wants to write about their own product, that's fine, but they must do so in a place where other people are free to seek them out, as opposed to doing so in a way that forces the writing upon others without consent.


I'm not sure how many people would recognize 524,288 as a power of 2, but probably many fewer than the number of people who would recognize 512 as a power of 2.

I recommend having instant recognition of all the powers up to 2^24; this has proven very useful over the years, e.g. when skimming quickly through a log searching for anomalies, flag patterns, etc. If you recite them in sequence twice a day for a couple of weeks, then they'll stick in your mind for decades. I can say from experience this method also works for the NATO phonetic alphabet, Hamlet's soliloquies, ASCII, and mum's lemon drizzle cake recipe. It fails however for the periodic table, ruined forever by Tom Lehrer.
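For the cases where recognition fails, there's also the classic bit trick for spotting a power of two mechanically. A minimal sketch (the names here are my own, not from any particular library):

```python
# Powers of two up to 2^24, plus the single-set-bit check.
POWERS = [2 ** n for n in range(25)]

def is_power_of_two(x: int) -> bool:
    # A positive integer is a power of two iff exactly one bit is set,
    # i.e. clearing the lowest set bit leaves zero.
    return x > 0 and (x & (x - 1)) == 0

print(POWERS[19])                                    # 524288
print(is_power_of_two(524288), is_power_of_two(524287))  # True False
```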

can confirm it is very useful, same for common constants in crypto algorithms

Ever the quandary: satisfy some people completely, or a larger number but incompletely.

I concur with the suggestion of 2^19, because even though fewer people would recognize it immediately, many of them would question the significance, and their eventual realization would be more satisfying.


> and their eventual realization would be more satisfying.

I think you might be overestimating the curiosity of the average person.

I'm regularly baffled / saddened by how many people care so little about learning anything new, no matter how small.

Is it a woe of modern times? Or has it always been this way?


> I think you might be overestimating the curiosity of the average person.

Oh absolutely. But I like to optimize for the others. :)

Also, the audience for consideration here is pretty ... rarefied. 0.0% of people in the world, to a first approximation, have heard of Zig. Those that have are probably pretty aware of powers-of-two math, and maybe curious enough to wonder about a value that seems odd but is obviously deliberately chosen.

> Is it a woe of modern times? Or has it always been this way?

I suspect it's always been this way. People are busy, math is hard, and little games with numbers are way less engaging than games with physical athleticism or bright lights.


> and little games with numbers are way less engaging than games with physical athleticism or bright lights.

In a different place, at a different time, I would have used the same exact wording.

I think we would be very good friends IRL :D


I don't really like the name. When you say 'Hacklore' I think of the hackers at MIT and such. That stuff is really cool and shouldn't be stopped or suppressed!

But the message, absolutely on board with it.


It does? I have never once seen this in my life.


Might be a mobile app / EU-based account only thing, but I've seen it numerous times and I'm almost certain I've seen it on the web version of Gmail too.


American here. Seen it a number of times on both mobile and web.


There's a pretty long waiting period before it does this. I'm not sure what the time required is, but I think it's at least 3 months. And it only does this if you don't interact with the newsletter at all.

Just speculation, but it's possible if you also use a non-web/non-gmail-app client it might suppress these notifications.


Yes, but do the books make more money and get more distribution? Quality is not the critical factor here.


Good or bad seems like it's about quality?


Yes, of the publishing method at giving authors ROI. This should be pretty clear from the context.


Giving authors ROI?

Why would anyone, excluding authors, even want low quality books to have any ROI?


I would consider that a major clarification


No, and also the other page was pure HTML and CSS. This clock is using React and JavaScript, so it's not a fair comparison.


One benchmark I would really like to see: instruction adherence.

For example, the frontier models of early-to-mid 2024 could reliably follow what seemed to be 20-30 instructions. As you gave more instructions than that in your prompt, the LLMs started missing some and your outputs became inconsistent and difficult to control.

The latest set of models (2.5 Pro, GPT-5, etc) seem to top out somewhere in the 100 range? They are clearly much better at following a laundry list of instructions, but they also clearly have a limit and once your prompt is too large and too specific you lose coherence again.

If I had to guess, Gemini 3 Pro has once again pushed the bar, and maybe we're up near 250 (haven't used it, I'm just blindly projecting / hoping). And that's a huge deal! I actually think it would be more helpful to have a model that could consistently follow 1000 custom instructions than it would be to have a model that had 20 more IQ points.

I have to imagine you could make some fairly objective benchmarks around this idea, and it would be very helpful from an engineering perspective to see how each model stacked up against the others in this regard.


20 more IQ points would be nuts: 110 is roughly the top 25%, 130 the top 2%, 150 the top 0.05%.

If you've ever played a competitive game, the difference between these tiers is insane.
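Those percentile figures follow from IQ's conventional scoring as a normal distribution with mean 100 and standard deviation 15; a quick check of the tail probabilities:

```python
from statistics import NormalDist

# IQ is conventionally normed to mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

for score in (110, 130, 150):
    top = 1 - iq.cdf(score)  # probability mass above this score
    print(f"IQ {score}: top {top:.2%}")
```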


Even more nuts would be a model that could follow a large, dense set of highly detailed instructions related to a series of complex tasks. Intelligence is nice, but it's far more useful and programmable if it can tightly follow a lot of custom instructions.

