Hacker News | viktour19's comments

US healthcare is expensive because prices aren’t real and complexity is profitable. Hospitals and pharma have monopoly power, insurers add massive administrative drag, and nobody sets hard price limits. Roughly 25–30% of spending goes to billing and paperwork alone. Hospitals consolidate into regional monopolies. Drug companies charge whatever they want. Insurance acts as a middleman for routine care instead of providing risk protection. Outcomes don’t justify the cost. The system is optimized to extract revenue, not to deliver care efficiently.

Fixing it doesn’t require utopia: cap prices for common services and drugs, enforce antitrust to break hospital monopolies, standardize billing nationally, and let Medicare (and others) negotiate drug prices. Shift insurance back to catastrophic coverage while covering primary care publicly, and cut hospital admin while paying clinicians more. These are boring, proven levers that haven’t been pulled because every inefficiency has a powerful lobby defending it.


The world’s first hydrogen exploration company was founded in Mali. No one knows for a fact what the development pathway is. Solar might turn out to be inconsequential.

https://news.mit.edu/2024/iwnetim-abate-aims-extract-hydroge...

> In 1987, well-diggers drilling for water in Mali in Western Africa uncovered a natural hydrogen deposit, causing an explosion. Decades later, Malian entrepreneur Aliou Diallo and his Canadian oil and gas company tapped the well and used an engine to burn hydrogen and power electricity in the nearby village.

> Ditching oil and gas, Diallo launched Hydroma, the world’s first hydrogen exploration enterprise. The company is drilling wells near the original site that have yielded high concentrations of the gas.


Promising, but how do pipelines deal with hydrogen?


They leak: hydrogen is smaller than any other atom, so it is hard to contain. Local production and immediate use is the only option that makes sense; pipelines are nonsensical.


This would limit electricity generation to sites near local hydrogen sources, which may be challenging compared to natural gas. Probably similar to nuclear?


> LoRA and full fine-tuning, with equal performance on the fine-tuning task, can have solutions with very different generalization behaviors outside the fine-tuning task distribution.

The ability of neural nets to generalize is inherently tied to their trainable parameter count, via mechanisms we don't fully understand, but we know parameter count is key. When you fine-tune with LoRA, you're updating maybe 5% of the parameters, so I really don't think there is an illusion of equivalence in the field.


> When you fine-tune with LoRA, you're updating maybe 5% of the parameters

I'm not sure I understand this comment. The LoRA paper[1] specifically says that all of the pretrained weights remain frozen.

> keeping the pre-trained weights frozen

Specifically, the LoRA paper differentiates itself from approaches that update only some of the parameters, stating:

> Many sought to mitigate this by adapting only some parameters or learning external modules for new tasks.

1. https://arxiv.org/pdf/2106.09685


The effective parameters of the model are the original model's parameters plus the LoRA parameters; i.e. LoRA updates only the LoRA adapter parameters, while full fine-tuning updates the original model's parameters.
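To make that concrete, here's a minimal PyTorch sketch of the low-rank decomposition the paper describes; the class name, rank, and scaling values are illustrative, not taken from any particular implementation:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen pretrained linear layer plus a trainable low-rank update:
        W_eff = W + (alpha / rank) * B @ A
        """
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # pretrained weights stay frozen
            # Only A and B receive gradients
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

At inference you can fold B @ A back into W, which is why the adapted model behaves like one set of effective parameters even though only A and B were ever trained, and why the per-layer trainable count is rank * (in + out) rather than in * out.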


More magnitude than count [1] I think, but I haven't kept up in a while.

[1] https://proceedings.neurips.cc/paper_files/paper/1996/file/f...


Well, I think it depends on who you talk to. I suspect quite a few practitioners (as opposed to researchers) regard LoRA as a valid shortcut without full consideration of the difference.


Diffusion models are already being evaluated using pretrained SSL models à la CLIP Score [1]. So it makes sense that one would incorporate that directly into training the model from scratch.

[1] https://huggingface.co/docs/diffusers/en/conceptual/evaluati...
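For reference, the linked diffusers guide measures this with torchmetrics' CLIPScore; a rough sketch, with placeholder images and prompts of my own, looks something like:

    import torch
    from torchmetrics.multimodal.clip_score import CLIPScore

    # Higher score = closer agreement between image content and its prompt.
    clip_score = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")

    # Placeholder batch of "generated" images as uint8 tensors (N, C, H, W).
    images = torch.randint(0, 255, (4, 3, 224, 224), dtype=torch.uint8)
    prompts = ["a photo of an astronaut riding a horse"] * 4

    print(float(clip_score(images, prompts)))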


What are the details of the technique?


It's great how we went from "wait.. this model is too powerful to open source" to everyone trying to shove their 1%-improved model down the throats of developers.


I feel quite the opposite. Improvements, even tiny ones, are great. But what's more important is that more companies release under an open license.

Training models isn't cheap. Individuals can't easily do this, unlike software development. So we need companies to do this for the foreseeable future.


People are building and releasing models. There's active research in the space. I think that's great! The attitude I've seen in open models is "use this if it works for you" vs any attempt to coerce usage of a particular model.

To me that's what closed source companies (MSFT, Google) are doing as they try to force AI assistants into every corner of their product. (If LinkedIn tries one more time to push their crappy AI upgrade, I'm going to scream...)


I'm 90% certain that OpenAI has some much beefier model they are not releasing - remember the Q* rumour?


Got to justify pitch deck or stonk price. Publish or perish without a yacht.


If hallucination is inevitable, what should developers do?

Design user experiences that align users' expectations with this behaviour!

Relatedly, I built a game to demonstrate how one might calibrate users to the responses of LLMs:

https://news.ycombinator.com/item?id=39255583


Combine this with something like Copilot and you get the 10x programmer :)


Yes please!


I love this.

