Hacker News | joaogui1's comments

During pre-training the model is learning next-token prediction, which is naturally additive. Even if you added DEL as a token, it would still be quite hard to change the data so that it can be used in a next-token prediction task. Hope that helps.
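
A rough sketch of what I mean (toy example, made-up tokenization): pre-training data is just (context, next token) pairs made by shifting the text by one, so there's no natural place where DEL would ever be the right target:

    # Minimal sketch of next-token prediction targets (hypothetical toy tokens).
    # Each position's target is simply the token that follows it, which is why
    # training is "additive": the model only ever learns to append.
    tokens = ["The", " cat", " sat", " on", " the", " mat", "."]

    # Build (context, next-token) pairs by shifting the sequence by one.
    pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

    for context, target in pairs:
        print(f"context={''.join(context)!r:>30} -> next token {target!r}")

    # Even with a DEL token in the vocabulary, ordinary text contains no
    # positions where DEL is the correct next token, so there's nothing for
    # the model to learn unless the data itself is rewritten.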


HN has been used to train LLMs for a while now; I think it was even in the Pile.


It has also fetched the current page in the background, because the Jepsen post was recently on the front page.


I may die but my quips shall live forever


Probably figured out the exact cause of the bug but not how to solve it


It says Gemini App, not AI Overviews, AI Mode, etc.


They claim AI Overviews has "2 billion users" in the sentences prior. They are clearly trying as hard as possible to show the "best" numbers.


> They are clearly trying as hard as possible to show the "best" numbers.

This isn't a hot take at all. Marketing (iPhone keynotes, product launches) is about showing impressive numbers. It isn't the gotcha you think it is.


Sure, but the extent to which you bend the truth to get those impressive numbers is absolutely gotcha-able.

Showing a new screen by default to everyone who is using your main product flow and then claiming that everyone who is seeing it is a priori a "user" is absurd. And that is the only way they can get to 2 billion a month, by my estimation.

They could put a new yellow rectangle at the top of all Google search results and claim that the product launch has reached 2 billion monthly users and is one of the fastest-growing products of all time. Clearly absurd, and the same math as what they are saying here. I'm claiming my hot-take gotcha :)


Also bizarre that it got to the front page of HN while being so low quality :/


Well, I think that is why it got there: people really love hating :)


Anthropic has amazing scientists and engineers, but when it comes to results that align with the narrative of LLMs being conscious, intelligent, or having similar properties, they tend to blow the results out of proportion.

Edit: In my opinion at least. Maybe they would say that if models exhibit that behavior 20% of the time nowadays, then we're a few years away from it reaching >50%, or make some other argument that I would probably disagree with.


It's their lab notes, so it's exploring a general idea, but they're also referencing previous software they've built (like crosscut)


Ads on ChatGPT as a way to extract more money from users


And I'm betting they won't be shown as a clearly marked box that says "Ad". They'll be woven directly into the response like normal content.


They are next. Perplexity's Comet browser tracks the ** out of you already because ads.


Not necessarily meaningless, but maybe relative, i.e. a person who generally replaces non-Apple laptops every X years would replace MacBooks every Y years, with Y > X


Mixture of Experts isn't using multiple models with different specialties, it's more like a sparsity technique, where you massively increase the number of parameters and use only a subset of the weights in each forward pass.
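
A toy sketch of the idea (illustrative shapes and names, not any particular model's implementation): the layer holds several expert FFNs, so the parameter count grows with the number of experts, but a router sends each token through only a couple of them:

    # Toy Mixture-of-Experts layer: many experts, but each token only uses top_k.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_hidden = 8, 16
    num_experts, top_k = 4, 2

    # Each expert is its own feed-forward block; total parameters scale with
    # num_experts, but only top_k of them are evaluated per token.
    experts = [
        (rng.normal(size=(d_model, d_hidden)), rng.normal(size=(d_hidden, d_model)))
        for _ in range(num_experts)
    ]
    router = rng.normal(size=(d_model, num_experts))  # scores experts per token

    def moe_forward(x):
        """x: (d_model,) activation for one token."""
        logits = x @ router
        top = np.argsort(logits)[-top_k:]                          # chosen experts
        weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over them
        out = np.zeros(d_model)
        for w, idx in zip(weights, top):
            w_in, w_out = experts[idx]
            out += w * (np.maximum(x @ w_in, 0.0) @ w_out)         # ReLU FFN for that expert
        return out

    print(moe_forward(rng.normal(size=d_model)).shape)  # (8,) -- only 2 of 4 experts ran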

