Hacker News | wolttam's comments

Lol, tell that to Valve.

I'm not sure what 'most apps' constituted in their case, but I've been playing all my games on Linux for the last 3 years and couldn't be happier.

I wonder, what prevents better support for 'regular' apps? Are they using some Windows API that is hard to implement on Linux?


More resources were put into gaming APIs by Valve, that is all.

Games are usually created from the beginning with cross-platform support in mind, so they're not as tightly coupled to Windows.

Let people make and use what they want, you don’t have to use it.

I'm consistently amazed at how much some individuals spend on LLMs.

I get a good amount of non-agentic use out of them, and pay literally less than $1/month for GLM-4.7 on deepinfra.

I can imagine my costs might rise to $20-ish/month if I used that model for agentic tasks... still a very far cry from the $1000-$1500 some spend.
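For a rough sense of where a $20-ish figure could come from, here's a back-of-envelope sketch (the per-token price and usage numbers are assumptions for illustration, not real quotes):

```python
# Hypothetical numbers: a cheap model at a blended $0.40 per 1M tokens,
# with fairly heavy agentic usage of 1.5M tokens/day.
price_per_mtok = 0.40
tokens_per_day = 1_500_000

monthly = tokens_per_day * 30 * price_per_mtok / 1_000_000
print(f"${monthly:.2f}/month")  # → $18.00/month
```

Swap in real per-token pricing and your own usage to see where you'd land.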


Doesn't this depend a lot on private vs. company usage? There's no way I could spend more than a few hundred on my own, but when you run prompts over 1M entities in some corporate use case, the costs add up no matter how cheap the model is.

Is the property of an answer being ordered in the order that resolutions were performed to construct it /that/ fragile?

Randomization within the final answer RRSet is fine (and maybe even preferred in a lot of cases)


Well, Cisco had their switches get into a boot loop; that sounds very broken...

Yes, it's a well-known behaviour of these Cisco switches, and not just reliant on name ordering. If SBS fails, they reboot.

We thought it was just the default NTP servers, but some rebooted during this event because www.cisco.com was unavailable.


My take on this is quite cynical: the post reads to me like a post-hoc justification of some strange, newly introduced behaviour.

Please order the answer in the order the resolutions were performed to arrive at the final answer (regardless of cache timings). Anything else makes little sense, especially in the name of a micro-optimization (which could likely be achieved in other ways that don't alter behaviour).


The DNS specification should be updated to say CNAMEs _must_ be ordered at the top rather than "possibly". Cloudflare was complying with the specification; Cisco was relying on unspecified behaviour that happened to be common.

The only reasonable interpretation of "possibly prefaced" is that the CNAMEs either come first or not at all (hence "possibly"). Nowhere does the RFC suggest that they may come in the middle.

Something has been broken in Cloudflare for a couple of years now. It takes a very specific engineering culture to run the internet, and it's just not there anymore.


Except that "first or not at all" doesn't prevent this bug from triggering.

Nowhere does the RFC suggest that multiple CNAMEs need to be in a specific order.
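To illustrate why ordering shouldn't matter to a robust client, here's a sketch of following a CNAME chain from an answer section whose records may arrive in any order (the names and records below are made up, and real resolvers parse wire-format messages rather than tuples):

```python
def follow_chain(qname, answers):
    """answers: list of (name, rtype, rdata) tuples from the answer section.
    Walks the CNAME chain by name, ignoring the order records arrived in."""
    cnames = {name: rdata for name, rtype, rdata in answers if rtype == "CNAME"}
    a_records = {}
    for name, rtype, rdata in answers:
        if rtype == "A":
            a_records.setdefault(name, []).append(rdata)

    seen = set()
    name = qname
    while name in cnames:          # follow aliases, whatever the wire order
        if name in seen:
            raise ValueError("CNAME loop")
        seen.add(name)
        name = cnames[name]
    return a_records.get(name, [])

# Records deliberately shuffled: the A record comes before the CNAMEs.
answers = [
    ("edge.example.net.", "A", "192.0.2.10"),
    ("www.example.com.", "CNAME", "cdn.example.net."),
    ("cdn.example.net.", "CNAME", "edge.example.net."),
]
print(follow_chain("www.example.com.", answers))  # ['192.0.2.10']
```

A client written this way never notices whether the server put the CNAMEs first, last, or in the middle.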


Cloudflare broke clients all over the world. What a 40-year-old RFC says is not the de facto "specification" at this point.

Cloudflare broke 'Cisco' clients all over the world. It's not CF's problem that the biggest router vendor in the world programmed their routers wrongly.

I'm no fan of the centralised internet Cloudflare heralds, but blaming anyone but Cisco for this reboot behaviour is wrong.

Browsers consider ‘localhost’ a secure context without needing https

For local /network/ development, maybe, but you'd probably be doing awkward hairpin NATting at your router.


It's nice to be able to use HTTPS locally if you're doing things with HTTP/2 specifically.

User attention to get user data?

I feel like the data to drive the really interesting capabilities (biological, chemical, material, etc.) is not, in large part, going to come from end users.


It's the other way around. You gather user data so that you can better capture the user's attention. Attention is the valuable resource here: with attention you can shift opinions, alter behaviors, establish norms. Attention is influence.

Yeah, I understood that, but I don't think we need influence over the masses to train better models with novel data.

I'm not sure how similar they are internally (I suspect: quite), but I use Django-Q2's database broker to similar effect. Simpler = better!
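For reference, a minimal sketch of what that looks like in settings.py — Django-Q2's ORM broker just points at one of your configured databases, so no Redis or RabbitMQ is needed (the name and numbers here are illustrative):

```python
# settings.py fragment — Django-Q2 cluster backed by the Django ORM.
Q_CLUSTER = {
    "name": "myproject",   # hypothetical cluster name
    "workers": 2,
    "timeout": 60,         # seconds a task may run
    "retry": 120,          # should exceed timeout
    "orm": "default",      # use the 'default' database as the broker
}
```

Tasks are then enqueued with `django_q.tasks.async_task(...)` as usual; only the broker changes.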

Okay, something just tweaked in my brain. Do higher temperatures essentially unlock additional paths for a model to go down when solving a particular problem? Therefore, for some particularly tricky problems, you could perform many evaluations at a high temperature in hopes that the model happens to take the correct approach in one of those evaluations.

Edit: The catch is that a high temperature /continuously/ acts to make the model's output less stable. It seems like it could be useful to use a high temperature until it's evident the model has started a new approach, and then start sampling at a lower temperature from there.


Decaying temperature might be a good approach: generate the first token at a high temperature (like 20), then for each subsequent token multiply the temperature by 0.9 (or some other scaling factor) until you reach your steady-state target temperature.


I think yes. I was recently experimenting with NEAT and HyperNEAT solutions and found this site. At the bottom it explains how novelty search yields far better solutions. I would assume that a reasonably high temperature may also yield more interesting solutions from an LLM:

https://blog.lunatech.com/posts/2024-02-29-the-neat-algorith...


It depends on the business for sure. Kube is overkill until you have someone on your team whose specialization is infra. Then that person will probably be spearheading kube anyway :)


