
I once got an email about the funeral arrangements for somebody's mother. I know this person very well, because he uses my email address for everything. I know what internet subscription he has. I know where he bought his e-bike. Where he goes on holiday. Etc.

I was expecting this person to be you.


A larger rocket mitigates the effects of the rocket equation.

The wet (loaded with propellant) to dry (empty of propellant) mass ratio is determined via the rocket equation to be the exponential of delta V divided by exhaust velocity.
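To put rough numbers on it (a back-of-the-envelope sketch; the ~9.4 km/s delta V to LEO including losses and the ~3.3 km/s exhaust velocity are assumed ballpark figures, not from any particular vehicle):

    import math

    dv = 9400.0   # delta V to reach LEO including gravity/drag losses, m/s (assumed)
    ve = 3300.0   # effective exhaust velocity of a kerolox-class engine, m/s (assumed)

    mass_ratio = math.exp(dv / ve)   # Tsiolkovsky: wet/dry = exp(dv / ve)
    print(round(mass_ratio, 1))      # ~17.3, so everything that isn't propellant
                                     # can be at most ~6% of the liftoff mass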

Certain parts of the rocket, such as the external tank structure, scale sub-cubically with the rocket's dimension, as do aerodynamic forces; whereas payload and propellant mass scale cubically.

Hence if the rocket is smaller than a critical threshold size, the requisite vehicle structures are too large relative to its propellant capacity to permit the required wet:dry mass ratio to achieve the delta V for orbit.

At exactly this size, the rocket can reach orbit with zero payload.

As the rocket increases in size beyond this threshold, it is able to carry a payload which is increasingly large relative to the rocket's total mass.
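Here's a toy model of that threshold effect, with made-up coefficients rather than real vehicle data: propellant scales with the cube of the linear size s, structure with the square, and the rocket equation fixes the required wet:dry ratio.

    import math

    DV, VE = 9400.0, 3300.0   # assumed delta V and exhaust velocity, m/s
    R = math.exp(DV / VE)     # required wet:dry mass ratio (~17)
    A, B = 1.0, 0.2           # arbitrary units: propellant = A*s^3, structure = B*s^2

    def payload(s):
        prop, struct = A * s**3, B * s**2
        # wet/dry = R  =>  prop = (R - 1) * (struct + payload)
        return prop / (R - 1) - struct

    for s in (2, 4, 8, 16, 32):
        p = payload(s)
        wet = A * s**3 + B * s**2 + max(p, 0)
        print(s, round(p, 2), round(max(p, 0) / wet, 3))   # payload, payload fraction

Below the critical size (here around s ≈ 3.3) the payload goes negative, i.e. the rocket can't reach orbit at all; above it, the payload fraction keeps climbing with size.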


This is also why no hobby rockets get to orbit. Even a 1 gram payload to low Earth orbit is beyond what a human-sized rocket can manage, because rockets scale down poorly.


The smallest orbital launcher so far is 31 ft (9.54 m) long and about 20 in (52 cm) wide, masses roughly 2.6 t (some sources say 2.9 t), and puts 9 lb (4 kg) into a random-ish LEO: https://en.wikipedia.org/wiki/SS-520


How does this compare to the cube-square law scaling effects applied to propeller- and wing-lifted vehicles like quadcopters/helicopters and RC aircraft/jumbo jets? Or even the squat shape of a housefly that zigs and zags through the air like an acrobat compared to the ponderous lift-off of a large goose?

I understand vaguely that those operate and scale based on the area (a square function of their length) of their lifting surfaces, and are pulled down by their mass (a cube function of their length).

A little Estes toy rocket lifts off the pad much more aggressively (in the blink of an eye!) than a full size rocket...


>A little Estes toy rocket lifts off the pad much more aggressively (in the blink of an eye!) than a full size rocket...

If you really want to, you can reach Mach 10 (~3,300 m/s) with an 8-meter-long, 3,500 kg missile in 5 seconds, an average acceleration of roughly 67 g:

https://en.wikipedia.org/wiki/Sprint_(missile)

All of that in the lower atmosphere with the missile heat shield glowing white hot. :)


They are almost entirely unrelated. When trying to leave the gravity well of a planet, the atmosphere only adds drag that works against your thrust. That drag is roughly proportional to the rocket's frontal cross-sectional area (the area of the "nose" you'd see head-on), not its total surface area. But what's certain is that it's strictly a force that hinders you - in a rocket, all of your thrust comes from the engines; you don't get any boost from the air the way a wing or propeller does.

However, even if you're taking off of a planet with no atmosphere, you still have a huge force to deal with - you need to maintain an acceleration to exit the gravity well of the planet, and you need to burn fuel for that. But you also have to carry the fuel you'll burn with you, so the more fuel you have, the more fuel you'll need - this is what the rocket equation codifies.


> But you also have to carry the fuel you'll burn with you, so the more fuel you have, the more fuel you'll need

Isn't this the entire point of using methane as fuel: they can build a gas station once they get there, so the return fuel doesn't have to be carried along and counted in this equation?


I'm not talking about fuel that you need to get back, we're still at the "leaving Earth" case. The point is that you need, say, 1000 tons of fuel to leave the Earth. Your rocket then will weigh [weight of empty rocket] + [weight of payload] + 1000 tons. And it is this mass that the engines will have to push while ascending. Of course, the fuel gets spent as you ascend - by the time you reach orbit, your rocket is now 1000 tons lighter.


ahh, I misread the part I quoted. doh!


The refueling idea is so that, for example, you don't need to carry the fuel needed to get to the Moon or Mars all in one rocket. You just need to carry enough to get to the refueling orbit, which is much less.


The toys have to be aggressive. You have less than three feet of launch rail--by the time the rocket clears the rail it must be going fast enough that the fins make it stable. Meanwhile, it's light, so overengineering the body to take a high-g load is trivial.

An orbital-class rocket, though, would break under that kind of g load (just look at the payload specs for the Falcon Heavy--its maximum permitted payload is well below its performance to low orbit. If you load it up to what the engines can do, it breaks. The only use case for the full performance is when it's going farther than low orbit). And an orbital-class rocket has active steering rather than fins, so it doesn't need to be booking it to be stable.


Most of a toy rocket's apparent aggressiveness comes from its much smaller length. Orbital-class rockets are literally the size of skyscrapers.


I'd posit that Mark Rober just used a hobby rocket to put that selfie satellite into orbit. Perhaps he's the first?


He used a commercial rideshare program.

> Our satellite launched on a SpaceX Falcon 9 rocket from Vandenberg Space Force Base in California (USA) on Jan 14, 2025. The rocket mission is a Transporter, and SAT GUS was dropped off in low-Earth orbit at about 375 miles above the surface of our pale blue dot.

https://space.crunchlabs.com/


Can you identify the rocket he used? Because from what I saw, I'm pretty sure it was an F9.


Added to that, full-flow staged combustion engines are bigger, heavier, and more expensive, but way more efficient. So a bigger rocket is the only option for getting one of those onboard, and it helps with taking more mass to orbit because they are more efficient than other options.


I don't believe there's any performance advantage for full-flow, which SpaceX alone is attempting. The only point is to lower the combustion temperature inside the turbines, at the expense of (much) higher flow rates through those turbines, in order to increase their lifespan.

(There's a large difference between staged combustion generally and gas-generator engines, which throw away performance by dumping fuel out of the turbine exhaust).


Since the temperature limit of available materials is the fundamental limitation (even after making custom high-temp alloys), this allows them to maximize mechanical power from the turbopumps, which raises performance.

We might imagine a conservative FFSC design which accepts very low temperatures in exchange for making it easy (low R&D cost) to reach high longevity. Raptor is not a conservative design, so it requires more R&D to achieve that longevity.

https://www.youtube.com/watch?v=twnZYPdFgbU


But you also have a limit on the other side: going extreme to make the point, we haven't managed to build a mile-tall building yet, and a rocket that size would be a nightmare to engineer (while perhaps technically possible -- you might have to scale up another 10x or 20x to make it physically impossible).

So there's some sort of curve, zero at both ends, between overall rocket size and the payload to orbit. The question is where Starship sits on that curve, and to your point it seems likely that it's looking good on that metric alone.

But then you have another curve that I think starts small and increases near-monotonically, which is the complexity/likelihood-to-fail factor to the size of the rocket. It's (relatively) easy to launch a toy rocket, (fairly) simple to build a missile-sized sub-orbital rocket, difficult to build a small-to-medium orbital rocket, and apparently very difficult to build a Saturn/N-1/Starship-sized rocket. More props to the crazy '60s team that pulled it off.


> So there's some sort of curve, zero at both ends, between overall rocket size and the payload to orbit.

This doesn't follow. Engineering complexity is not a limit on payload to orbit, it is a fundamentally different parameter. Yeah building a mile tall rocket would be hard, but it would get a shit ton of payload to orbit. There is no maximum beyond which making a bigger rocket starts to reduce your payload to orbit.

> But then you have another curve that I think starts small and increases near-monotonically, which is the complexity/likelihood-to-fail factor to the size of the rocket. It's (relatively) easy to launch a toy rocket, (fairly) simple to build a missile-sized sub-orbital rocket, difficult to build a small-to-medium orbital rocket, and apparently very difficult to build a Saturn/N-1/Starship-sized rocket.

Complexity does not increase with size, people just become more risk-averse with size. Toy rockets fail all the time, just nobody really cares. No one would bet the lives of multiple people and hundreds of millions of dollars on a successful toy rocket launch. If complexity increases, it is with capability. If you want to land on the moon, you need something a bit more advanced than a hobby rocket. There is no reason to believe a flotilla of physically smaller rockets capable of achieving any given mission will be less complex in aggregate than a single physically larger rocket.


>> So there's some sort of curve, zero at both ends, between overall rocket size and the payload to orbit.

> This doesn't follow. Engineering complexity is not a limit on payload to orbit

At this point I'm merely talking about size (which I think is clear from the words I use). I don't think "building a mile tall rocket would be hard" adequately describes the difficulty when we haven't even built a mile-tall building.

Sea Dragon[1] was only envisioned as 490 feet tall, and as near as I can tell even the Super Orion[2] would only have been 400-600 meters tall. And of course, neither of those was even close to implementation. Therefore I stand by my statement that a mile tall rocket is, for all practical purposes, impossible, and thus has a payload to orbit of zero. If you disagree then add a zero -- surely you agree we can't build a ten-mile-tall rocket?

As far as complexity, I'm not sure what to say. Toy rockets might fail all the time, but the point was complexity, and a toy rocket can be constructed from under a dozen parts. Even larger model rockets have at most a few dozen to a few hundred parts. The part count of the Falcon 9 has to number in the thousands, if not tens of thousands (9 merlin engines with at least several hundred parts each?).

To be clear, I agree with you that complexity increases with capability.

But also, to push back a bit, I don't think complexity aggregates the way you're saying it does. A box of hammers is not more complex than a nailgun, even if it has more parts in total.

   1. https://en.wikipedia.org/wiki/Sea_Dragon_(rocket)
   2. https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion)


> At this point I'm merely talking about size (which I think is clear from the words I use). I don't think "building a mile tall rocket would be hard" adequately describes the difficulty when we haven't even built a mile-tall building.

I was assuming you were using a comical example to illustrate a "nightmare to engineer." The comparison to a building doesn't actually work at all. The practical limitation on how high we can build buildings is how fast we can make elevators. Just making something tall is not a problem.

> Sea Dragon[1] was only envisioned as 490 feet tall, and as near as I can tell even the Super Orion[2] would only have been 400-600 meters tall. And of course, neither of those was even close to implementation. Therefore I stand by my statement that a mile tall rocket is, for all practical purposes, impossible

First, the optimal design for a rocket is not to just keep making it taller, and second, size was not the reason either of these projects went unbuilt. That does not at all prove that it is impossible. What kind of world would we be living in if we presumed anything that hadn't already been actively pursued was impossible?

> and thus has a payload to orbit of zero.

My point was that this does not equate to a payload of zero. Surely you wouldn't argue that the weight of this mile high rocket is zero, and therefore that there is some curve for the weight of rockets where making the rockets larger starts to make them lighter. Just as we can calculate the weight for something without actually building it, so too can we calculate the payload, and it can increase far beyond anything we can actually implement.

> If you disagree then add a zero -- surely you agree we can't build a ten-mile-tall rocket?

I agree it would be impractical, but not that it would be so non-physical that we couldn't calculate what its payload capacity would be were it to be built.

> Toy rockets might fail all the time, but the point was complexity, and a toy rocket can be constructed from under a dozen parts. Even larger model rockets have at most a few dozen to a few hundred parts. The part count of the Falcon 9 has to number in the thousands, if not tens of thousands (9 merlin engines with at least several hundred parts each?).

Falcon 9 is a liquid rocket designed to take people into space. That is the source of its part count. You could scale up a solid rocket motor to an arbitrarily large size while keeping the parts count exactly the same. It's probably not the optimal way to make a solid rocket of that size, and you'd be missing out on a lot of capabilities that are important for a real rocket, but if you just wanted a toy no more capable than what you buy in a hobby store it would be no more complicated. Conversely, try to make a fully functional falcon 9 complete with 9 working liquid rocket engines small enough to hoverslam on your desk and you have an immense engineering challenge on your hands.

> But also, to push back a bit, I don't think complexity aggregates the way you're saying it does. A box of hammers is not more complex than a nailgun, even if it has more parts in total.

I concur that part count is not the same as complexity, but that point is in my favor. Making something bigger is like adding hammers to a box of hammers. The quantity goes up, and at some point you're going to need to make some improvements to the box if you want to keep adding more hammers, but conceptually it is simple. Making something more capable, like a nail-gun, is much harder.


The failure modes of a mile-tall rocket would be spectacular. The sort of spectacular you want to be several hundred miles away from.


Some attempt to visually represent molybdenum disulfide and tungsten diselenide with the keys of a QWERTY keyboard.


Which, if it were done properly, would have WSe2 and MoS2 rather than seemingly random keys.


It shows just the symbols of the elements (W, Se, Mo) and the number 2, not the compounds. The "W", "S", "M", and "2" characters are in the correct place on a QWERTY keyboard, and they appended the necessary additional characters to complete the symbols as needed, even if the "e" in Se and "o" in Mo aren't in the correct spot on the layout.


Here’s a concise explanation:

- High sparsity means you need a very large batch size (number of requests being processed concurrently) so that each matrix multiplication is of sufficient arithmetic intensity to get good utilization.

- At such a large batch size, you’ll need a decent number of GPUs — 8-16 or so depending on the type — just to fit the weights and MLA/KV cache in HBM. But with only 8-16 GPUs your aggregate throughput is going to be so low that each of the many individual user requests will be served unacceptably slowly for most applications. Thus you need more like 256 GPUs for a good user experience.
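To make the first point concrete, here's a rough roofline-style sketch. The layer dimensions are DeepSeek-V3-ish and the ~300 FLOPs-per-byte figure for an H100 at FP8 is an assumed ballpark, not a measured number:

    def arithmetic_intensity(batch, d_in=7168, d_out=2048, bytes_per_weight=1):
        flops = 2 * batch * d_in * d_out                 # multiply-accumulate count
        bytes_moved = (d_in * d_out * bytes_per_weight   # weights (fp8-ish)
                       + 2 * batch * (d_in + d_out))     # activations in/out (fp16-ish)
        return flops / bytes_moved

    # An H100 needs very roughly ~300 FLOPs per byte of HBM traffic to be
    # compute-bound at FP8, so each matmul wants a batch in the hundreds:
    for b in (1, 8, 64, 256, 1024):
        print(b, round(arithmetic_intensity(b), 1))

At batch 1 the intensity is about 2 and the matmul is hopelessly memory-bound; only with a few hundred tokens per matmul do the tensor cores stop starving.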


I’m serving it on 16 H100s (2 nodes). I get 50-80 tok/s per request, and in aggregate I’ve seen several thousand. TTFT is pretty stable. Is faster than any cloud service we can use.


H200s are pretty easy to get now. If you switched I'm guessing you'd get a nice bump, because the NCCL all-reduce on the big MLPs wouldn't have to cross InfiniBand.


You're presumably using a very small batch size compared to what I described, thus getting very low model FLOP utilization (MFU) and high dollar cost per token.


Yes, very tiny batch size on average. Have not optimized for MFU. This is optimized for a varying (~1-60ish) number of active requests while minimizing latency (time to first token, and time to last token measured from the final prompt token), given short-to-medium known "prompts" and short structured responses, with very little in the way of shared prefixes in concurrent prompts.


You could do it on one node of 8xMI300x and cut your costs down.


Using vllm?


Oh, SGLang. Had to make a couple modifications, I forget what they were, nothing crazy. Lots of extra firmware, driver and system config too.


> High sparsity means you need a very large batch size

I don't understand what connection you're positing here? Do you think sparse matmul is actually a matmul with zeros lol


It's sparse as in only a small fraction of tokens are multiplied by a given expert's weight matrices (this is standard terminology in the MoE literature). So to properly utilize the tensor cores (hence serve DeepSeek cheaply, as the OP asks about) you need to serve enough tokens concurrently such that the per-matmul batch dimension is large.
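Put differently, here's a sketch with DeepSeek-V3-ish routing numbers (assumed: 256 routed experts, 8 active per token) of how few tokens each expert's matmul actually sees:

    def tokens_per_expert(tokens_in_flight, n_experts=256, top_k=8):
        # each token is routed to top_k experts, so on average an expert's
        # matmul sees only tokens_in_flight * top_k / n_experts tokens
        return tokens_in_flight * top_k / n_experts

    for n in (64, 512, 4096, 16384):
        print(n, tokens_per_expert(n))   # 2.0, 16.0, 128.0, 512.0

So to give each expert matmul a batch in the hundreds, you need thousands of tokens in flight, which is why the large deployments are the cheap way to serve it.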


I still don't understand what you're saying - you're just repeating that a sparse matmul is a sparse matmul ("only a small fraction of tokens are multiplied by a given expert's weight matrices"). And so I'm asking you: do you believe that a sparse matmul has low/bad arithmetic intensity?


An MoE's matmuls have the same arithmetic intensity as a dense model's matmuls, provided they're being multiplied by a batch of activation vectors of equal size.


In SI, the kilogram is the base unit, and the gram is effectively derived from it (a decimal submultiple).


There's an etymological reason for the word gram. It derives from a Greek word γράμμα, which roughly translates as "small weight", and made its way via the Latin gramma into the French gramme and the English gram. And 1 kg is just very chunky. It wouldn't be right to refer to that as small.

As the name kilogram implies, the gram is actually the unit here. But the base was defined by the mass of a standard 1 kg chunk of metal that lives at the International Bureau of Weights and Measures near Paris. This is the literal base unit of mass (at least historically; it has since been redefined in terms of the Planck constant). A 1 gram chunk would have been tiny and tedious to work with when doing e.g. experiments with gravity.

They also have the original prototype meter in the form of a platinum-iridium alloy bar. And because the reference object for mass weighs 1 kg instead of 1 g, the kilogram is the base unit in SI.

But quite obviously, within the system of measurements the gram is the logical unit that you augment with prefixes, and people commonly handle mass quantities that are on the order of grams rather than kilograms.

Derivations are simple: just apply powers of ten and their commonly used prefixes (kilo, milli, mega, micro, etc.). The base unit is something physical that you can point at. Or at least historically that was the intention.

There's also convenience. A liter of water is about 1 kg and occupies 10x10x10 cm, or 1 dm^3. That's not accidental but intentional. It makes it easy for people to work with volumes and masses. Never mind that a liter of water isn't exactly a kg (because of water purity, temperature, and a few other things).


The kilogram is indeed the base SI unit, not the gram. It's an exception, the only base unit whose name carries a prefix.

Every formula using SI units expects mass in kg, and you will be off by a factor of 1,000 if you use the gram as the base unit. The same goes for derived units like the newton, which are defined in terms of kilograms.
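A trivial illustration of the factor-of-1000 trap, with assumed numbers (F = m * a expects kilograms):

    mass_kg = 0.5        # a 500 g object, expressed in the base unit (kg)
    a = 9.81             # acceleration, m/s^2
    print(mass_kg * a)   # 4.905 -> newtons, correct
    print(500 * a)       # 4905  -> off by 1000x if you plug in grams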


It’s an historical artifact, as it was easier to manufacture a reference kilogram than a reference gram.

Considering that today we set the kilogram by fixing the Planck constant and deriving the unit from there, we could just divide each side of the definition by 1000 and use that as the base unit. Using the kg as the base unit is completely arbitrary, since the unit of mass is now derived from the second and the meter (through the fixed Planck constant), not from a reference artifact.


Why not call the thing that weighs ~2.2 pounds a 'gram'?


For the same reason it was not renamed "Wug".


It's not the same reason. Gram is already part of the nomenclature, wug is not. The change I asked about would shift the relation of the prefixes to the masses: kilogram would represent a mass 1,000 times larger than it does now.


It's exactly the same reason: gram referenced a known quantity. Changing it by a few insignificant digits because of the kilogram update wouldn't force people to realign their perception of it.

Changing it to ~1,000 times what it used to be, or giving it a new name, would force people to realign.

There's a reason many people still prefer customary and imperial units, and it's not just bigotry and nationalism (even if they play a part in that preference).


I agree with you that AI has much larger theoretical benefits than cryptocurrency, but I don’t think it’s fair to say cryptocurrency is wasteful by design. Bitcoin’s proof of work serves a vital function: securing the network from double-spending attacks. At the time of bitcoin’s invention, it was the only known solution to that problem, so it’s no more “wasteful by design” than a bank hiring a security guard. There are admittedly alternatives to proof of work today; I’m unsure how well they work by comparison, but even if they suffice for security, that only means that bitcoin is wasteful due to being more primitive technology, not by design.



MSFT is up 18x during that time and the S&P 500 is up 5x. His investments are some mixture of MSFT and other things, so we might say he would have been up around 10x if he'd given no money away.

Since his net worth is only up 3x, that means he gave away about 70% of his wealth.


He won't have given away 70% of his wealth. If he gave away a dollar at the beginning, that's 10 dollars that dollar never turned into, etc.


AFAIK he didn't give it all away in one lump sum at the start.


First giving away X% and then getting a Y% return on investment has exactly the same effect on his wealth as first getting a Y% return and then giving away X%.

So to determine X, we can just ask how much money he’d have now had he not given any away, and it looks like he has about 70% less than that.
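A toy check of that commutativity, with arbitrary numbers:

    start = 100.0          # arbitrary starting wealth
    r, g = 10.0, 0.7       # 10x market growth, 70% given away

    print(round(start * (1 - g) * r, 2))   # give first, then grow: 300.0
    print(round(start * r * (1 - g), 2))   # grow first, then give: 300.0

Either way he ends up with 3x his starting wealth instead of the 10x he'd otherwise have had, which is where the ~70% figure comes from.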


He said he would do it over 5 years. MSFT was up roughly 2x over that period, not 18x.


However long he took, he has ~70% less wealth than he would have had if he didn't do it. If it took him longer, that only means he gave the wealth after it had more time to appreciate.


How? Why do you take him at face value?

You confidently gave an explanation earlier that was hogwash.


> every drug targeting amyloid plaques has failed to even slow Alzheimer's

Lecanemab and donanemab succeeded in slowing Alzheimer’s.

As did gantenerumab in a recent prevention trial: https://www.alzforum.org/news/research-news/plaque-removal-d...


> The way Alzheimer's is diagnosed is by ruling out other forms of dementia, or other diseases. There is not a direct test for Alzheimer's, which makes sense, because we don't really know what it is,

Correction here: while other tests are sometimes given to rule out additional factors, there is an authoritative, direct test for Alzheimer's: clinically detectable cognitive impairment in combination with amyloid and tau pathology (as seen in cerebrospinal fluid or PET scan). This amyloid-tau copathology is basically definitional to the disease, even if there are other hypotheses as to its cause.


The article says:

> Yet despite decades of research, no treatment has been created that arrests Alzheimer's cognitive deterioration, let alone reverses it.

Nowhere in the article does it mention that anti-amyloid therapies such as donanemab and lecanemab have so far successfully slowed decline by about 30%. They may not yet be "arresting" (fully stopping) the disease, but it's pretty misleading for the article to completely omit reference to this huge success.

We are currently in the midst of a misguided popular uprising against the amyloid hypothesis. There were several fraudulent studies on amyloid, and those responsible should be handled severely by the scientific community. But these fraudulent studies do not constitute the foundational evidence for the amyloid hypothesis, which remains very solid.


From what I've read, those drugs are very good at removing amyloid, but despite that, they don't seem to make much of a noticeable (clinically meaningful) difference in the people treated with them. I personally would not call that a "huge success".

If they are so good at cleaning up the amyloid, why don't people have more of an improvement? I think everyone agrees amyloid is associated with Alzheimer's, the question is how much of a causative role does it play.


> From what I've read, those drugs are very good at removing amyloid, but despite that, they don't seem to make much of a noticeable (clinically meaningful) difference in the people treated with them. I personally would not call that a "huge success".

After many decades of research, we've gone in the last few years from no ability whatsoever to affect the underlying disease, to 30% slowdown. To be clear, that's a 30% slowdown in clinical, cognitive endpoints. Whether you call that "meaningful" is a bit subjective (I think most patients would consider another couple years of coherent thinking to be meaningful), and it has to be weighed against the costs and risks, and there's certainly much work to be done. But it's a huge start.

> If they are so good at cleaning up the amyloid, why don't people have more of an improvement?

No one is expected to improve after neurodegeneration has occurred. The best we hope for is to prevent further damage. Amyloid is an initiating causal agent in the disease process, but the disease process includes other pathologies besides amyloid. So far, the amyloid therapies which very successfully engage their target have not yet been tested in the preclinical phase before the amyloid pathology initiates further, downstream disease processes. This is the most likely reason we've seen only ~30% clinical efficacy so far. I expect much more efficacy in the years to come as amyloid therapies are refined and tested at earlier phases. (I also think other targets are promising therapeutic targets; this isn't an argument against testing them.)

> I think everyone agrees amyloid is associated with Alzheimer's, the question is how much of a causative role does it play.

To be clear, the evidence for the amyloid hypothesis is causal. The association between amyloid and Alzheimer's has been known since Alois Alzheimer discovered the disease in 1906. The causal evidence came in the 1990's, which is why the scientific community waited so long to adopt that hypothesis.


Reading between the lines, if we gave people those drugs before they show any symptoms, we should be able to do even better. Has this been tested? How safe are those drugs? What should the average person be doing to avoid accumulating amyloids in the first place?


> Reading between the lines, if we gave people those drugs before they show any symptoms, we should be able to do even better. Has this been tested?

I do expect early enough anti-amyloid treatment to essentially prevent the disease.

Prevention trials of lecanemab and donanemab (the two antibodies with the clearest proof of efficacy and FDA approval) are ongoing: https://clinicaltrials.gov/study/NCT06384573, https://clinicaltrials.gov/study/NCT04468659, https://clinicaltrials.gov/study/NCT05026866

They have not yet completed.

There were some earlier prevention failures with solanezumab and crenezumab, but these antibodies worked differently and never showed much success at any stage.

> How safe are those drugs?

There are some real safety risks from brain bleeding and swelling, seemingly because the antibodies struggle to cross the blood-brain barrier, accumulating in blood vessels and inducing the immune system to attack amyloid deposits in those locations rather than the more harmful plaques in brain tissue. A new generation of antibodies including trontinemab appears likely to be both more effective and much safer, by crossing the BBB more easily.

> What should the average person be doing to avoid accumulating amyloids in the first place?

There's not much proven here, and it probably depends on your individualized risk factors. There's some evidence that avoiding/properly treating microbial infection (particularly herpes viruses and P. gingivalis) can help, since amyloid beta seems to be an antimicrobial peptide which accumulates in response to infection. There may also be some benefit from managing cholesterol levels, as lipid processing dysfunction may contribute to increased difficulty of amyloid clearance. Getting good sleep, especially slow wave sleep, can also help reduce amyloid buildup.


What about supplementation with curcumin?


Would it be fair to say that it's causal in terms of process, but perhaps not in terms of initiation?

That is, there's a feedback loop involved (or, likely, a complex web of feedback processes), and if a drug can effectively suppress one of the steps, it will slow the whole juggernaut down to some extent?

Am reminded a little of the processes that happen during/after TBI - initial injury leads to brain swelling leads to more damage in a vicious cycle. In some patients, suppressing the swelling results in a much better outcome, but in others, the initial injury, visible or not, has done too much damage and initiated a failure cascade in which treating the swelling alone won't make any difference to the end result.


I’m not sure I understand the process vs. initiation distinction you’re asking about, but yes I do believe there are other targets besides amyloid itself which make sense even if the amyloid hypothesis is true. Anything in the causal chain before or after amyloid but prior to neurodegeneration is a sensible target.


Sure, I was just talking about a step in a feedback loop or degenerative spiral rather than whatever initiates the feedback loop in the first place.


>If they are so good at cleaning up the amyloid, why don't people have more of an improvement?

I have zero knowledge in this field, but there's a very plausible explanation that I think is best demonstrated by analogy:

If you shoot a bunch of bullets into a computer, and then remove the bullets, will the computer be good as new?


Have you seen the price of ammunition lately? I think we'll need a huge NIH grant to run that experiment.


... Had to wipe the screen.

THANK YOU

> a huge NIH grant

One sentence like a Simo Häyhä round.

NIH & grants are a result -- of what cause? Urgently I encourage curious minds to rigorously & objectively discover "what cause"


Does your computer exhibit any plasticity? After how long are we taking the post-sample?


Those quoting the 30% figure may want to research where that figure comes from and what it actually means:

“Derek Lowe has worked on drug discovery for over three decades, including on candidate treatments for Alzheimer’s. He writes Science’s In The Pipeline blog covering the pharmaceutical industry.

“Amyloid is going to be — has to be — a part of the Alzheimer’s story, but it is not, cannot be a simple ‘Amyloid causes Alzheimer’s, stop the amyloid and stop the disease,'” he told Big Think.

“Although the effect of the drug will be described as being about a third, it consists, on average, of a difference of about 3 points on a 144-point combined scale of thinking and daily activities,” Professor Paresh Malhotra, Head of the Division of Neurology at Imperial College London, said of donanemab.

What’s more, lecanemab only improved scores by 0.45 points on an 18-point scale assessing patients’ abilities to think, remember, and perform daily tasks.

“That’s a minimal difference, and people are unlikely to perceive any real alteration in cognitive functioning,” Alberto Espay, a professor of neurology at the University of Cincinnati College of Medicine, told KFF Health News.

At the same time, these potentially invisible benefits come with the risk of visible side effects. Both drugs caused users’ brains to shrink slightly. Moreover, as many as a quarter of participants suffered inflammation and brain bleeds, some severe. Three people in the donanemab trial actually died due to treatment-related side effects.”

https://bigthink.com/health/alzheimers-treatments-lecanemab-...

And here’s a Lowe follow-up on hard data released later:

https://www.science.org/content/blog-post/lilly-s-alzheimer-...


“Amyloid is going to be — has to be — a part of the Alzheimer’s story, but it is not, cannot be a simple ‘Amyloid causes Alzheimer’s, stop the amyloid and stop the disease,'”

It's not quite that simple, and the amyloid hypothesis doesn't claim it to be. It does, however, claim that it's the upstream cause of the disease, and if you stop it early enough, you stop the disease. But once you're already experiencing symptoms, there are other problems which clearing out the amyloid alone won't stop.

> What's more, lecanemab only improved scores by 0.45 points on an 18-point scale assessing patients' abilities to think, remember, and perform daily tasks.

As I point out in another comment, the decline (from a baseline of ~3 points worse than a perfect score) during those 18 months was only 1.66 points in the placebo group. It's therefore very misleading to frame this as "only 0.45 points on an 18-point scale" and conclude the benefit isn't clinically meaningful: a miracle drug with 100% efficacy would only have achieved a 1.66-point slowdown.


“But once you're already experiencing symptoms, there are other problem which clearing out the amyloid alone won't stop.”

Ok, maybe we’re just arguing different points here. I’ll grant that amyloids have something to do with all of this. I’m having a more difficult time understanding why one would suggest these drugs to a diagnosed Alzheimer’s patient at a point where it can no longer help.

Or is the long term thought that drugs like these will eventually be used a lot earlier as a prophylactic to those at high risk?


> I'm having a more difficult time understanding why one would suggest these drugs to a diagnosed Alzheimer's patient at a point where it can no longer help.

My central claim is that the drugs help quite a lot, by slowing down the disease progression by 30%, and that it's highly misleading to say "only 0.45 points benefit on an 18 point scale", since literally 100% halting of the disease could only have achieved 1.66 points of benefit in the 18-month clinical trial.

This is like having a 100-point measure of cardiovascular health, where patients start at 90 points and are expected to worsen by 10 points per year, eventually dying after 9 years. If patients given some treatment only worsen by 7 points per year instead of 10, would you say "only 3 points benefit on a 100 point scale"?

> Or is the long term thought that drugs like these will eventually be used a lot earlier as a prophylactic to those at high risk?

I do believe that they will be more (close to 100%) efficacious when used in this way, yes.


And that is the core problem with what happened. There may actually be a grain of truth, but now there is a backlash. I'd argue, though, that the mounds of alternative explanations that weren't followed up on should likely get some priority right now: since we know so little about them, there is a lot to learn, and we are likely to find a lot of surprises there.

I see this as the same problem with UCT (upper confidence bounds applied to trees) based algorithms. If you get a few initial random rolls that look positive, you end up dumping a lot of wasted resources into that path, because the act of looking optimizes the tree of possibilities you are exploring (it was definitely easier to study amyloid lines of research than other ideas because of the efforts put into it). Meanwhile the other possibilities you have been barely exploring slowly become more interesting as you add a few resources to them. Eventually you realize that one of them is actually a lot more promising and ditch the bad rut you were stuck on, but only after a lot of wasted resources. To switch fields, I think something similar happened to AlphaGo when it had a game that ended in a draw because it was very confident in a bad move.

Basically, UCT-type algorithms prioritize the idea that every roll should optimize the long-run return, so they only balance exploration with exploitation. When it comes to research, though, the value signal is wrong: you need to search the solution space, because your goal is not to make every trial find the most effective treatment; it is to eventually find the actual answer and then use that going forward. The individual trial values do not matter. This means you should balance exploration, exploitation AND surprise. If a trial gives you very different results than you expected, then you have shown that you don't know much there and maybe it is worth digging into; even if it returned less value than some other path, its potential value could be much higher. (Yes, I did build this algorithm. Yes, it does crush UCT-based algorithms. Just use variance as your surprise metric, then beat AlphaGo.)

People intrinsically understand these two algorithms. In our day to day lives we pretty exclusively optimize exploration and exploitation because we have to put food on the table while still improving, but when we get to school we often take classes that 'surprise' us because we know that the goal at the end is to have gained -some- skill that will help us. Research priorities need to take into account surprise to avoid the UCT rut pitfalls. If they had for the amyloid hypothesis maybe we would have hopped over to other avenues of research faster. 'The last 8 studies showed roughly the same effect, but this other path has varied wildly. Let's look over there a bit more.'
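For anyone curious what that might look like in code, here is a minimal sketch of the idea as I read it (my reconstruction, not the poster's actual algorithm): ordinary UCB1 child selection plus an explicit variance-based "surprise" bonus.

    import math

    def select_child(children, c_explore=1.4, c_surprise=1.0):
        # children: dicts with "visits", "value_sum", "value_sq_sum"
        total_visits = sum(ch["visits"] for ch in children)

        def score(ch):
            n = ch["visits"]
            if n == 0:
                return float("inf")   # always try unvisited children first
            mean = ch["value_sum"] / n
            var = max(ch["value_sq_sum"] / n - mean ** 2, 0.0)   # sample variance of returns
            explore = c_explore * math.sqrt(math.log(total_visits) / n)
            surprise = c_surprise * math.sqrt(var / n)   # extra bonus for "surprising" paths
            return mean + explore + surprise

        return max(children, key=score)

(Published variants like UCB1-Tuned also fold a variance estimate into the exploration term, so the general idea has some precedent.)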


yeeeess...but when you look at the slope of the decline on the NEJM papers describing the clinical trials of lecanemab and donanemab...are you really slowing the decline?


To be clear, I think you're asking whether maybe the drugs just provide a temporary "lift" but then the disease continues on the same basic trajectory, just offset a bit?

The studies aren't statistically powered to know for sure, but on lecanemab figure 2, the between-group difference on CDR-SB, ADAS-Cog14, ADCOMS, and ADCS-MCI-ADL (the four cognitive endpoints) widens on each successive visit. Furthermore, while not a true RCT, the lecanemab-control gap also widens up to 3 years in an observational study: https://www.alzforum.org/news/conference-coverage/leqembi-ca...

On donanemab figure 2, there is generally the same pattern although also some tightening towards the end on some endpoints. This could be due to the development of antidrug antibodies, which occurs in 90% of those treated with donanemab; or it could be statistical noise; or it could be due to your hypothesis.


What kind of soured me on whether to recommend lecanemab in the clinic or not - the effect size and the slope, vs. the risk of hemorrhages/"ARIAs".

I mean, if you're looking at a steady 0.8 pt difference in CDR-SB, but the entire scale is 18 points, yes, it's "statistically significant" w/ good p-values and all, but how much improvement is there really in real life given that effect size?

Plus, if one is really going to hawk something as disease-modifying, I'd want to see a clearer plateauing of the downward slope of progression, but it's pretty much parallel to the control group after a while.

There is some chatter in the Parkinson's world - the issue and maybe the main effort isn't so much clearing out the bad stuff (abnormal amyloid clumps/synuclein clumps) in the cells, it's trying to figure out what biological process converts the normal, functioning form of the protein into the abnormal/insoluble/nonfunctional protein.....at least assuming amyloid or synuclein is the root problem to begin with...


> What kind of soured me on whether to recommend lecanemab in the clinic or not - the effect size and the slope, vs. the risk of hemorrhages/"ARIAs".

I don't claim that it's obviously the right move for every Alzheimer patient at the moment. It would be great to increase the effect size and reduce ARIA rates. My central claim, again, is that the amyloid hypothesis is correct, not that we have a cure.

> the issue and maybe the main effort isn't so much clearing out the bad stuff (abnormal amyloid clumps/synuclein clumps) in the cells, it's trying to figure out what biological process converts the normal, functioning form of the protein into the abnormal/insoluble/nonfunctional protein

Yes, but it appears that these are one and the same thing. That is, amyloid and tau (mis)conformation seems to be self-replicating via a prion-like mechanism in locally-connected regions. This has been established by cryo-electron microscopy of human proteins, as well as controlled introduction of misfolded proteins into mouse brains.


Downvoters, are you sure you have a rational basis for downvoting this informative post? Do we HNers really know enough to discredit the amyloid hypothesis when 99.9% of us know nothing other than it's gotten some bad press in recent years?

I googled lecanemab and it does have the clinical support claimed. I don't see anyone questioning the data. I'm as surprised as anyone else, even a little suspicious, but I have to accept this as true, at least provisionally.

For anyone who wants to start grappling with the true complexity of this issue, I found a scholarly review [1] from October 2024.

[1] The controversy around anti-amyloid antibodies for treating Alzheimer’s disease. https://pmc.ncbi.nlm.nih.gov/articles/PMC11624191


https://www.reddit.com/r/medicine/comments/1057sjo/fda_oks_lecanemab_for_alzheimers_disease/

"Lecanemab resulted in infusion-related reactions in 26.4% of the participants and amyloid-related imaging abnormalities *with edema or effusions in 12.6%*."

https://en.wikipedia.org/wiki/Cerebral_edema

"After 18 months of treatment, lecanemab slowed cognitive decline by 27% compared with placebo, as measured by the Clinical Dementia Rating–Sum of Boxes (CDR-SB). This was an absolute difference of 0.45 points (change from baseline, 1.21 for lecanemab vs 1.66 with placebo; P < .001)"

https://www.understandingalzheimersdisease.com/-/media/Files...

Sum of boxes is a 19 point scale. So, for those keeping track at home, this is an incredibly expensive treatment that requires premedication with other drugs to control side effects as well as continuous MRIs, for a ~2.3% absolute reduction in the progression of dementia symptoms compared to placebo, with a 12% risk of cerebral edema.

Now, I'm no neurologist, but I'd call that pretty uninspiring for an FDA-approved treatment.


"This was an absolute difference of 0.45 points (change from baseline, 1.21 for lecanemab vs 1.66 with placebo; P < .001)"

> Sum of boxes is a 19 point scale.

It's an 18 point scale, but more to the point: the decline in the placebo group was only 1.66 points over those 18 months, and the mean score at baseline was just over 3 points. So even 100% efficacy could only possibly have slowed decline by 1.66 out of 18 points (what you would call a 9.2% absolute reduction) in the 18 months of that experiment. And full reversal (probably unattainable) would have only slowed decline by about 3 points.
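For concreteness, the arithmetic behind both framings, using the trial numbers quoted above (the ~3.2 baseline is approximate):

    placebo_decline = 1.66   # CDR-SB points lost over 18 months, placebo arm
    treated_decline = 1.21   # CDR-SB points lost over 18 months, lecanemab arm
    baseline = 3.2           # approximate mean CDR-SB at entry (scale runs 0-18)

    absolute_benefit = placebo_decline - treated_decline   # 0.45 points
    relative_slowing = absolute_benefit / placebo_decline  # ~0.27, i.e. ~27%
    best_possible = placebo_decline                        # a drug that fully halted decline: 1.66, not 18
    print(round(absolute_benefit, 2), round(relative_slowing, 2), best_possible)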

I agree that the side effects of anti-amyloid therapies are a serious concern. The reasons for this are being understood and corrected in the next generation of such therapies. For example, I expect trontinemab to achieve better efficacy with much greater safety, and there is already preliminary evidence of that. Furthermore, there are improved dosing regimens of donanemab which improve side effects significantly.

Note that my claim is not that the existing drugs are stellar, and certainly not that they're panaceas. Simply that the amyloid hypothesis is true and there has been tremendous progress based on that hypothesis as of late.


As much as you're chiding people for being a part of a "misguided popular uprising", you're not really making a good case for anti-amyloid therapies. It started at "wow, 30%!" in this comment chain, and now it's at "barely having an effect over a placebo" being tremendous progress?


It seems like you didn’t understand my comment if you think I’ve changed my position from 30% efficacy.


I don't think you've changed your position. Reading the thread, your mention of 30% is super misleading, and you should've led with how little progress has been made instead of chastising people who are correctly upset with the lack of progress.


You have to understand that CDR-SB is a very sensitive measurement. Yes, it's an 18-point scale, but from 4.5 to 18 it's just measuring how bad the dementia has gotten. The vast, vast majority of healthy people will score 0. Going from 0 to 0.5 is a massive difference in cognitive ability.


To emphasize your point, I don't think anyone will notice if someone's Alzheimer's is 2.3% better.

These rating scales like CDR-SB (invented by drug companies or researchers who are funded by drug companies) are very good at making the tiniest improvement sound significant.


> Downvoters, are you sure you have a rational basis for downvoting this informative post?

Citing relative improvement (30%) instead of absolute improvement (2%) and not explicitly designating it as such.


“Slowed decline by 30%” is explicitly designating it as such.


I disagree. "Slowed decline by 30%" to me means an absolute reduction of 30% in some rate expressed as unit X over unit time, and that's what I thought you meant until another commenter pointed out that it was a relative reduction. IMHO it's not an explicit callout unless you are using the words 'relative' and/or 'absolute'.


I did not downvote, but OP failed to provide a link to back up his claim, or to make explicit what "slowing decline by about 30%" even means.

In light of the fraudulent and scandalous approval of aducanumab [0] (which also targeted amyloid), such claims must be thoroughly referenced.

[0] https://en.wikipedia.org/wiki/Aducanumab#Efficacy


If it helps, here’s info from Dr. Derek Lowe, a 30+ year pharma chemist and author of In The Pipeline. For further research on the topic, he has many other posts on the topic, some of which are linked in the links below.

https://www.science.org/content/blog-post/aduhelm-again

https://www.science.org/content/blog-post/goodbye-aduhelm

https://www.science.org/content/blog-post/alzheimer-s-and-in...


How do you know what the downvote status is?


There is anecdotal evidence and perhaps even some small studies showing that a keto diet can halt and even reverse Alzheimer's symptoms.

Compared to that, reducing the speed of decline isn't terribly impressive. It's better than nothing to be sure! But what people want is BIG progress, and understandably so. Billions have been spent.


Billions have been spent because it's a challenging disease to understand and treat. I want big progress too. But we shouldn't let our desire for big progress cause us to lose our ability to objectively evaluate evidence.

I have no opposition to a properly controlled randomized controlled trial of the keto diet, or other proposed therapies (many of which have been conducted, and are for targets other than amyloid which are completely compatible with the amyloid hypothesis). Until a proper RCT of keto is conducted, anecdotal claims are worth very little compared to the evidence I referred to.


I'm far, far more interested in anecdotes about completely halting or reversing decline than I am in rock solid data about a 30% reduction in decline speed.

Antibiotics started out as an anecdote about something whose effect was so stark it couldn't be missed. Chasing promising anecdotes is far more valuable (in my opinion) than attempting to take a 30% effect to a 100% effect.

Others are free to feel differently of course. I'm open to hearing about 100 different times that a tiny effect got grown and magnified into a huge effect that totally changed medicine. I'm just not aware of many at this point.


You can be interested in what you want. But the interest in anti-amyloid therapy came from the basic science indicating amyloid pathology as the critical but-for cause of the disease. It wasn't just a blind shot in the dark.

To my knowledge, there's no such basic science behind a keto diet for Alzheimer's.


Turns out there are enough studies for a meta analysis. Is that basic science? I'm not sure what counts.

https://www.sciencedirect.com/science/article/pii/S127977072...

If amyloid is truly the critical "but for" cause then how on earth is it possible that reducing amyloid burden doesn't really make a difference?

https://scopeblog.stanford.edu/2024/03/13/why-alzheimers-pla...


> Turns out there are enough studies for a meta analysis. Is that basic science?

Basic science in this context means research investigating the underlying disease process to develop knowledge of how it works mechanistically, as distinguished from (and as a precursor to) developing or testing treatments for the disease. This helps us direct resources in plausibly useful directions rather than merely taking shots in the dark, and it also helps us to interpret later clinical findings: e.g. if we see some cognitive benefit in a three-month trial, is that because the underlying disease process was affected (and hence the benefit might persist or even increase over time), or might it be because there was some symptomatic benefit via a completely separate mechanism but no expectation of a change in trajectory? For example, cholinergic drugs are known to provide symptomatic benefit in Alzheimer disease but not slow the underlying biological processes, so that worsening still continues at the same pace. Or if we see results that are statistically borderline, is it still worth pursuing or was the very slight benefit likely a fluke?

So a meta-analysis of ketogenic diets in Alzheimer disease is not basic science, though that doesn't mean it's useless. But what I'm saying is it's really helpful to have a prior that the treatment you're developing is actually targeting a plausible disease pathway, and the amyloid hypothesis gives us that prior for amyloid antibodies in a way that, to my knowledge, we don't have for ketogenic diets.

> https://www.sciencedirect.com/science/article/pii/S127977072...

Thanks, I just took a look at this meta-analysis. The studies with the strongest benefits on the standard cognitive endpoints of MMSE and ADAS-Cog — Taylor 2018, Qing 2019, and Sakiko 2020 — all lasted only three months, which makes me suspect (especially given the context of no theoretical reason to expect this to work that I'm aware of) this is some temporary symptomatic benefit as with the cholinergic drugs I mentioned above.

But it's enough of a hint that I'd support funding a long-term trial just to see what happens.

> If amyloid is truly the critical "but for" cause then how on earth is it possible that reducing amyloid burden doesn't really make a difference?

I've argued elsewhere in the thread that it does make quite a difference, but there's still a lot of work to do, and I've said what I think that work is (mainly: improving BBB crossing and administering the drugs earlier).


There was absolutely no theoretical reason that some moldy cheese would kill bacteria but thankfully Fleming noticed what happened and we got antibiotics.

There was no theoretical reason that washing your hands would do anything to combat the spread of disease, and all the smart doctors knew otherwise. Some kooky doctor named Semmelweis proposed in 1847 that doctors should wash their hands between childbirths, 14 years before Pasteur published his findings on germ theory in 1861. When some doctors listened to him, maternal mortality dropped from 18% to 2%.

I'm all for basic science when the statistical significance becomes so great it really starts to look like causality and then you start figuring stuff out.

It doesn't seem like the statistical significance of the amyloid theory is strong enough that the direction of the arrow of causality can be determined. That's too bad.

The strength of the effect of keto diet interventions in Alzheimer's is pretty strong to my understanding. Which should be aggressively hinting that there's likely some as-yet unknown causality that's worth investigating. We don't have to spend billions to do that. But we do need more funding for it which is hard to get while all the amyloid hypothesis folks are really invested and clamoring.


> There was absolutely no theoretical reason that some moldy cheese would kill bacteria but thankfully Fleming noticed what happened and we got antibiotics.

Again, I'm in favor of people investigating all sorts of random shit.

I agree that sometimes unexpected things pan out. If you want to run a carefully conducted, large long-term trial on ketogenic diets in Alzheimer's, I support you. I'm just skeptical it'll pan out, and on priors I'll put greater expectation on the approach with a scientifically demonstrated mechanistic theory behind it.

> I'm all for basic science when the statistical significance becomes so great it really starts to look like causality and then you start figuring stuff out.

> It doesn't seem like the statistical significance of the amyloid theory is strong enough that the direction of the arrow of causality can be determined.

What are you basing this on? The p-value on lecanemab's single phase 3 trial was below 0.0001. And the causal role (not mere association) of amyloid in the disease had been demonstrated for years before significant efforts were invested in developing therapies to target amyloid in the first place; most convincingly in the genetic mutations in APP, PSEN1, and PSEN2.


I agree more science is certainly better than less. But patentable therapies will always get a disproportionate amount of funding for big science (versus a basically-free dietary change).

For ketones and cognition, also look for studies on MCT oil. Such as https://pmc.ncbi.nlm.nih.gov/articles/PMC10357178/


There are certainly theoretical reasons why it might help. There is definitely a link between AD and blood sugar. Having diabetes doubles your risk of AD. The brain regions hit first and worst in AD have the highest levels of aerobic glycolysis (in which cells process glucose only through glycolysis and not oxidative phosphorylation, despite the presence of adequate oxygen).

To the extent a keto diet can reduce resting blood sugar levels and improve insulin sensitivity, there is good reason to think it is a candidate to slow AD.


It's possible you might adopt a different attitude if one day you're diagnosed with rapid-onset Alzheimer's. At that stage you'd be forgiven for muttering 'basic science be blowed'. Keto (or whatever) offered some relief for my friend Bill; I'll give it a try given it's my survival at stake.

Plate tectonics was suggested in 1913 and not supported (to put it politely) at that point by 'basic science'. It took until 1960 to be accepted. A paradigm shift was needed as Kuhn explained.

Meanwhile, this paper (2024) https://www.sciencedirect.com/science/article/pii/S127977072... 'Effects of ketogenic diet on cognitive function of patients with Alzheimer's disease: a systematic review and meta-analysis'

concludes "Research conducted has indicated that the KD can enhance the mental state and cognitive function of those with AD, albeit potentially leading to an elevation in blood lipid levels. In summary, the good intervention effect and safety of KD are worthy of promotion and application in clinical treatment of AD."


There are also studies showing a plant-based diet can reverse Alzheimer's symptoms as well. It has to do with atherosclerosis.


Can you provide a source for this?

I'm not aware of any RCT showing long-term improvement of Alzheimer's symptoms from any treatment. I am aware of 1) long-term slowing of worsening (not improvement) from anti-amyloid therapy, 2) short-term benefits but no change in long-term trajectory from other therapies, and 3) sensational claims without an RCT behind them.

