Hacker News | tsenturk's comments

When we set out to implement this within an n8n automation, we ran into some challenges. The issues stemmed from the self-training process (specifically the interest and stress score tree) reaching information saturation, which led to a decay in curiosity. However, keeping the threshold constant (not updating it) fundamentally resolved the issue. To be clear, the experiment has only just begun; this is merely an introductory post outlining the basic architecture.
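
For what it's worth, the behaviour boils down to something like the sketch below (Python, purely for illustration; the names and the running-mean rule are assumptions, not the actual n8n workflow logic):

    # Illustrative sketch of the two threshold behaviours described above.
    # CURIOSITY_THRESHOLD, interest_score and the running-mean rule are
    # hypothetical placeholders, not the real workflow code.

    CURIOSITY_THRESHOLD = 0.6  # fixed cut-off (the variant that worked)

    def is_interesting_fixed(interest_score: float) -> bool:
        # Fixed threshold: saturation of the score tree cannot starve it.
        return interest_score > CURIOSITY_THRESHOLD

    def is_interesting_adaptive(interest_score: float, history: list[float]) -> bool:
        # Problematic variant: the threshold tracks the running mean of past
        # scores, so once the score tree saturates almost nothing clears it
        # any more and "curiosity" decays.
        threshold = sum(history) / len(history) if history else CURIOSITY_THRESHOLD
        history.append(interest_score)
        return interest_score > threshold

The point is only that a threshold which tracks saturating scores eventually lets almost nothing through, while a fixed cut-off does not.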

In that case I would start by studying the literature. The first two uncertainty estimation & out-of-distribution (OOD) detection approaches you mention, "Embedding Distance" and "Self-Interrogation", are sometimes called feature-space density estimation and consistency-based uncertainty quantification. Practical algorithms include Semantic Entropy, Self-Consistency / Verbalized Confidence, and Embedding-based Density (Mahalanobis Distance).

References:

A Survey of Uncertainty Estimation Methods on Large Language Models (https://aclanthology.org/2025.findings-acl.1101/)

A Survey of Uncertainty Estimation in LLMs: Theory Meets Practice (https://arxiv.org/abs/2410.15326v1)
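
To make the "Embedding Distance" / feature-space density idea concrete, here is a minimal sketch of Mahalanobis-distance scoring over embeddings. The embed() helper and the threshold are hypothetical placeholders, not something taken from the surveys above:

    import numpy as np

    def fit_gaussian(embeddings: np.ndarray):
        # Fit the mean and (regularised) inverse covariance of
        # in-distribution embeddings.
        mu = embeddings.mean(axis=0)
        cov = np.cov(embeddings, rowvar=False) + 1e-6 * np.eye(embeddings.shape[1])
        return mu, np.linalg.inv(cov)

    def mahalanobis_score(x: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray) -> float:
        # Larger distance = farther from the reference distribution,
        # i.e. the query looks more out-of-distribution.
        d = x - mu
        return float(np.sqrt(d @ cov_inv @ d))

    # Usage sketch: embed() and THRESHOLD are hypothetical.
    # train_emb = embed(in_distribution_texts)
    # mu, cov_inv = fit_gaussian(train_emb)
    # is_ood = mahalanobis_score(embed(query), mu, cov_inv) > THRESHOLD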


This article wasn’t written by ChatGPT — I wrote it myself. It’s meant to be just a brief explanation. I’m currently preparing a detailed piece about Workers & Pages, but it’s not that easy. The documentation isn’t very clear, so you end up figuring out many things empirically.


You’re absolutely right. There are two different ways to build: 1. Workers and 2. Pages, and they can be quite confusing. I’ve started writing a new article to explain the differences in detail.


Discover why Elixir is becoming essential for building scalable, fault-tolerant systems. From its unique concurrency model to its self-healing architecture, explore what sets this language apart.


Thanks, mate. If you’ve got other theories like that, I don’t want to hear them either


I am equally surprised and entertained by how offended some people are regarding this article.

FYI: The above sentence contains 7 words that come from Old French/Latin, and 7 that come from Old English/German.


As a person with ears who is mildly versed in English and enough Spanish to get some landscaping done, it's extremely obvious that French is the badly pronounced English, and not vice versa.


it's tongue in cheek. the book is an act of mild trolling.


Developers chase hype technologies out of fear of becoming outdated, but they abandon these trendy programming languages just as quickly as they adopt them.


It might become the Tulip Mania of our time.


Tulips didn't largely replace the most popular search engine. They didn't replace employees. Tulips were a meme stock; AI is a breakthrough.

Could be overvalued, but it is not vaporware.


Tulips can almost replace employees, just like AI.


Researchers have for the first time turned germanium—a widely used semiconductor—into a superconducting material by embedding gallium atoms in its crystal structure. This breakthrough could usher in a new era of quantum devices and ultra-efficient electronics.


> ...allows it to carry current with zero resistance at 3.5 Kelvin (about -453 degrees Fahrenheit)

Seems to me this is a problem.


It's an interesting result, but yeah, not a room temperature superconductor.


For that matter, we've had superconductors for decades that work at much higher temperatures than this one.


It seems the breakthrough is that you could use familiar semiconductor manufacturing processes. However, the temperature is still going to be a major issue. I don't want a computer that requires liquid helium cooling.


> I don't want a computer that requires liquid helium cooling.

True, but I /can/ see someone, such as Sandia National Labs, very much willing to install a liquid helium cooled computer if it provides a significant performance increase above their existing supercomputer installations.


> you could use familiar semiconductor manufacturing processes.

Unclear to me why that's helpful. Materials that superconduct at a higher temperature than this one aren't hard to come by, or obscure:

> In 1913, lead was found to superconduct at 7 K,


Probably because they don’t play well with normal lithography techniques? The high-temperature superconductors I know of are weird metamaterials, and good luck getting them to exist in chip form at all.


Isn’t that very close to the practical limit for cooling in a lab?


Not that hard. A dilution fridge, used for instance for cooling quantum computers, can go much lower:

https://en.wikipedia.org/wiki/Dilution_refrigerator


Quantum devices are already cooled to that temperature (at least for some technologies), so it's not a problem in that use case.


Thanks!

Was gonna be lazy and say… temp or it doesn't matter.


The 1953 Ford X-100 was basically a "car from the future" that Ford built just to show off. It was never sold, but it's famous for packing features that were decades ahead of their time. We're talking stuff like heated seats, a rain sensor that automatically closed the roof, and even built-in jacks. It was wild for the 50s, but it pretty much previewed a ton of tech we see in our cars today.


Hugging Face is not just an AI information-sharing website; it’s also a great learning platform for all AI learners. This documentation is one of the most impressive hands-on resources I’ve ever read.


What others would you recommend that are comparable in quality?


Been reading a book by u/fpham, "The Cranky Man's Guide to LoRA and QLoRA", and it's pretty great. The writing quality isn't all there, but the content is valuable for learning to make good finetunes.


The documentation for common AI packages is pretty good too: for example, the PyTorch, PEFT, and timm docs.
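
To give a sense of what the PEFT docs walk you through, the core LoRA setup is only a few lines. The base model and hyperparameters below are placeholder choices, not recommendations:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Placeholder base model; any causal LM works the same way.
    base = AutoModelForCausalLM.from_pretrained("gpt2")

    lora_cfg = LoraConfig(
        r=8,                        # rank of the low-rank update matrices
        lora_alpha=16,              # scaling factor for the update
        lora_dropout=0.05,
        target_modules=["c_attn"],  # attention projection in GPT-2; model-specific
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()  # shows how few parameters LoRA actually trains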

