rokobobo's comments

I don’t think Nasdaq is free float based.

Also, I would be a lot more pessimistic about the index-tracking fund managers' ability or willingness to find extra shares: their goal is to match the index, not beat it. If the index includes the new firm at a blown-up price because everyone sent their buy orders to the same closing auction, then all the index-tracking funds still track their underlying index. They do not care that after that closing auction, the price of the new firm, and likely the index itself, is going to drop.


>I don’t think Nasdaq is free float based.

I recommend reading the NDX proposal from February, which this whole discussion is based on:

"To balance index integrity and investability, Nasdaq proposes a new approach for including and weighting low-float securities (those below 20% free float). Each low-float security’s weight will be adjusted to five times its free float percentage, capped at 100%. Securities with more than 20% free float will continue to be weighted at full, eligible listed market capitalization, while those below 20% free float will be weighted proportionally to preserve investability."

The document includes a scenario with the rules applied to SpaceX. "Company C" in the table is SpaceX (with some estimated numbers).

https://indexes.nasdaqomx.com/docs/NDX_Consultation-February...
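
To make the rule concrete, a minimal Python sketch (the 12% free-float input is a hypothetical figure, not taken from the document):

    def weight_factor(free_float: float) -> float:
        # Fraction of full eligible market cap used for index weighting.
        if free_float >= 0.20:
            return 1.0                     # full market-cap weight
        return min(5.0 * free_float, 1.0)  # 5x free float, capped at 100%

    # E.g. at 12% free float: min(5 * 0.12, 1.0) = 0.60, so the security
    # is weighted at 60% of its full listed market capitalization.
    print(round(weight_factor(0.12), 2))   # 0.6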


Do you have an example of an algorithm that learns, rather than is trained/trains itself? I don’t really see the boundary between the two concepts.


If we make some massive physics breakthrough tomorrow, is an LLM going to be able to fully integrate that into its current data set?

Or will we need to produce a host of documents and (re)train a new one in order for the concept to be deeply integrated?

This distinction is subtle but lost on many who think that our current path will get us to AGI...

That isn't to say we haven't created a meaningful tool, but the sooner we get candid and realistic about what it is and how it works, the sooner we can get down to the business of building practical applications with it. (And, as an aside, scaling it, something we aren't doing well with now.)


Why is retraining not allowed in this scenario? Yes, the model will know the breakthrough if you retrain. If you force the weights to stay static by fiat, then sure, it's harder for them to learn, and they will need to learn in-context or whatever. But that's true for you as well: if your brain is not allowed to update any connections, I'm not sure how much you can learn either.

The reason the models don't learn continuously is that it's currently prohibitively expensive. Imagine OpenAI retraining a model each time one of its 800m users sends a message. That'd make it instantly aware of every new development in the world or in your life, without any context engineering. There's a research gap here too, but that'll be fixed with time and money.

But it's not a fundamental limitation of transformers, as you make it out to be. To me it's just that things take time. The exact same architecture will be learning continuously in 2-3 years, and all the "this is the wrong path" people will need to shift goalposts. Note that I didn't argue for AGI, just that this isn't a fundamental limitation.


What is the subtle distinction? I'm one of the "many" and it's not clear at all here. If we had some massive physics breakthrough, the LLM would need to be taught about it, but so would people. Teaching the LLM would involve producing a host of documents in some format, but that's also true of teaching people. Training and learning here seem to be opposite ends of the same verb no matter the medium, but I'm open to being enlightened.


Not sure exactly what the parent comment intended, but it does seem to me that it's harder for an LLM to undergo a paradigm shift than for humans. If some new scientific result disproves something that's been stated in a whole bunch of papers, how does the model know that all those old papers are wrong? Do we withhold all those old papers from the next training run, apply a super heavy weight somehow to the new one, or just throw them all in the hopper and hope for the best?


You approach it from a data-science perspective and ensure more signal in the direction of the new discovery, e.g. saturating / fine-tuning with data biased in the new direction, as sketched below.
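
A minimal sketch of that oversampling idea (the corpora and the 500x factor are made up for illustration):

    import random

    # Hypothetical corpora, for illustration only.
    old_docs = [f"pre-breakthrough paper {i}" for i in range(10_000)]
    new_docs = [f"post-breakthrough result {i}" for i in range(10)]

    # Oversample the new result so its signal isn't drowned out by the
    # much larger stale corpus during fine-tuning.
    OVERSAMPLE = 500
    stream = old_docs + new_docs * OVERSAMPLE
    random.shuffle(stream)
    # ...feed `stream` to the fine-tuning loop from here.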

The "thinking" paradigm might also be a way of combatting this issue, ensuring the model is primed to say "wait a minute" - but this to me is cheating in a way, it's likely that it works because real thought is full of backtracking and recalling or "gut feelings" that something isn't entirely correct.

The models don't "know". They're just more likely to say one thing over another, which is closer to recall of information.

These "databases" that talk back are an interesting illusion, but the inconsistency is what you seem to be trying to nail down here.

They have all the information encoded inside, but they don't layer that information logically; instead they surface it based on "vibes".


Humans, and many other creatures, learn. While they are performing a task, they improve at the task.

LLMs are trained. While they are training, they are not doing anything useful. Once they are trained, they do not learn.

That's the distinction.
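
To make that concrete, one classic example of an algorithm that learns while performing its task is the online perceptron; a minimal sketch with hypothetical data:

    def online_perceptron(stream, lr=0.1):
        # Learning-while-doing: weights update on every mistake made
        # during the task itself, not in a separate training phase.
        w, b = [0.0, 0.0], 0.0
        for x, label in stream:                  # performing the task...
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1
            yield pred
            if pred != label:                    # ...and improving at it
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
                b += lr * label

    # e.g. list(online_perceptron([((1.0, 2.0), 1), ((-1.0, -2.0), -1)]))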


I think people were asking you to explain what kinds of strategies people run at sharpe 4.


From people I know personally:

"Arbs" on stuff that big desks don't touch because of capacity (small mergers for example, you lever up on 2-3 small merger arbs per year and you are almost there);

DEX-to-liquidity-pool latency arbs on shitcoins, if you want a crypto example;

Pure arbs (one of my friends, who admittedly is not satisfied with 1 mio USD comp, did this trade: https://notion.moontowermeta.com/financial-hacking-etf-vs-ne... ).

Edit: The other option is that if you are a trader in "special" markets (the best example is biotech/med stocks) where domain knowledge really matters, being 4 sharpe is basically one good trade a year, and at 5 mio USD AUM you are always at capacity.
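
For anyone unfamiliar with the jargon: "4 sharpe" refers to an annualized Sharpe ratio of about 4. A minimal sketch of the standard computation (the numbers in the final comment are illustrative):

    import statistics

    def annualized_sharpe(daily_returns, rf_daily=0.0, periods=252):
        # Mean daily excess return over its standard deviation,
        # scaled by sqrt(trading periods per year).
        excess = [r - rf_daily for r in daily_returns]
        return statistics.mean(excess) / statistics.stdev(excess) * periods ** 0.5

    # e.g. ~10 bps/day mean at ~40 bps/day vol: 0.25 * sqrt(252) ≈ 4.0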


I wonder why people always assume that the strategy would be algorithmic or systematic. How about global macro, long/short equity, or even plain long-only done well? Actually studying markets and assets fundamentally, and finding asymmetric bets? There are plenty of people who have done that successfully over really long periods of time; I doubt markets are perfectly efficient just because some academics claim so, especially for bets with strong convexity.


One thing is fiduciary duty to the shareholders; another is "pleasing the shareholders," as you describe it. Pleasing the shareholders is necessary only when displeasing them means they will sell the stock when there is no buyer. If there is a buyer, the current shareholders are less relevant, as long as management cannot be accused of not fulfilling their fiduciary duty to them.


>sell the stock when there is no buyer

It's hard to imagine what you mean here: a holder of shares cannot sell them unless they find someone willing to buy.


If a friend doesn’t give you 4% of their net worth, how can you be certain you are one of their 25 closest friends?


I believe theirs was to deliver internet.


I may be wrong, but I believe spending time in a deeper gravitational well means you observe everything outside the well happening much faster; at the singularity, the entire future of the parent universe will appear to you as happening all at once. There is no notion of "matter that falls in later": once you reach the singularity, you have traveled to the end of time in the parent universe. And the passage of time in our universe isn't a continuation of time in the parent universe; it's not even the same dimension, since the latter is collapsed.
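
A back-of-the-envelope illustration of the exterior part of this, using the Schwarzschild static-observer factor (note this formula only applies to clocks hovering outside the horizon, so it merely gestures at the singularity claim):

    import math

    def static_dilation(r_over_rs: float) -> float:
        # sqrt(1 - r_s/r): proper time per unit distant-observer time
        # for a static clock at radius r outside the horizon.
        return math.sqrt(1.0 - 1.0 / r_over_rs)

    # The deeper you hover, the faster the outside universe appears to run:
    for r in (10.0, 2.0, 1.001):
        print(f"r = {r} r_s: outside runs {1 / static_dilation(r):.1f}x faster")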


Thank you! That answered my question of whether there would be a "white fountain" in our universe, spitting out the matter coming from the black hole in the parent universe. In your hypothesis, where we lose one dimension relative to the parent universe, all the events of the parent universe, like the mass arriving, happen in our universe all at once at the beginning (the big bang).


Right: on a log scale, the gap stayed roughly the same until around 1980.


How much do we know about military AI’s capabilities? As in, is there any evidence that the government/military was ahead of big tech on the AI research front?


Seconded. Sometimes when someone says XYZ was likely used, it's because they've read something from a credible source, or maybe they are a subject matter expert, or they have grasped some other similarly solid chain of evidence.

But sometimes they mean "likely" in the more colloquial sense of a guesstimate, which can range anywhere from an informed guess to low-effort fan fiction. I default to the latter unless otherwise specified.


"Please summarize the maintenance procedure for a tomahawk missile"

boom


Presumably, the energy output from a fusion plant (if we ever get there) should be self-sustaining. For starting up, I’m guessing the plant can draw power from the grid itself, no?


This depends on the approach. The person you replied to mentioned inertial confinement. If that is used, we'll get discrete pulses, each of which needs to be triggered (hopefully with net energy gain). Other approaches (e.g. conventional tokamaks) aim to produce a continuous stream of energy.
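
A rough sketch of the bookkeeping behind "self-sustaining" (all numbers made up for illustration): the plant must produce more electricity than it recirculates to its own drivers.

    fusion_power_mw = 1000.0   # thermal output from fusion (assumed)
    thermal_efficiency = 0.35  # heat -> electricity (assumed)
    driver_power_mw = 150.0    # lasers / magnets / heating draw (assumed)

    gross_electric = fusion_power_mw * thermal_efficiency  # 350 MW
    net_electric = gross_electric - driver_power_mw        # 200 MW to grid
    q_eng = gross_electric / driver_power_mw               # ~2.3; > 1 means
    print(net_electric, q_eng)                             # self-sustaining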

