Hacker News | SlavikCA's comments

They told us that with AI you can vibe-code anything now...

So, no need to make the old program work. Just write a new one.

/sarcasm


Or you could have AI figure out how to crack it.

The HuggingFace link is published, but not working yet: https://huggingface.co/MiniMaxAI/MiniMax-M2.1

Looks like this is 10 billion activated parameters / 230 billion in total.

So this is the biggest open model that can be run on your own host / own hardware at somewhat decent speed. I'm getting 16 t/s on my Intel Xeon W5-3425 / DDR5-4800 / RTX4090D-48GB.

And looking at the benchmark scores - it's not that far from SOTA (matches or exceeds the performance of Claude Sonnet 4.5)


That screenshot / video on the README page is mostly unreadable. I can't get anything out of it.


This app is clearly a demonstration of GTK4's light/dark transition animation. Looks like it works perfectly to me!


Same for me.

What info does it show beyond a:

"netstat -tulpn"

Years ago I wrote myself a script that basically runs netstat -tulpn in a watch-like loop for the same purpose - just wondering if your tool shows me more than that.
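
For what it's worth, such a loop can be sketched like this (a guess at the kind of script described, not the original; it falls back to `ss`, the modern netstat replacement, where `netstat` isn't installed, and the iteration cap is just to keep the example finite):

```shell
# Refresh the listening-socket table every $INTERVAL seconds, $COUNT times.
INTERVAL=${INTERVAL:-1}
COUNT=${COUNT:-2}
i=0
while [ "$i" -lt "$COUNT" ]; do
  if command -v netstat >/dev/null 2>&1; then
    netstat -tulpn 2>/dev/null
  else
    ss -tulpn 2>/dev/null   # ss ships with iproute2 on modern distros
  fi
  sleep "$INTERVAL"
  i=$((i + 1))
done
```

Dropping the counter and wrapping the body in `while true` gives the endless watch-like behavior.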


modern graphical interface, for a start


I was asking which information it shows, not what output it uses to display that information...


Come on, now. You can see that it supports today’s most critical feature: it has dark mode and light mode.

/s


If you live in the terminal it's all dark mode*

* unless you are one of those weirdos who has a black-on-white terminal, in which case you should be on a watch list (/s in case that wasn't immediately obvious).


I've been there since the DOS days when it was all dark mode, green phosphor characters on a black CRT. I was there when amber monitors were the new thing. (I still love sunglasses with brown lenses.) And I watched the early Apple computers with graphics and black-characters-on-white display style that has been the rage ever since... well since the recent new thing being dark mode.

It reminds me of fashion trends, miniskirts then maxis, up and down past the knee like tides.

Fads, that's the word.


I am exactly that kind of weirdo, but then again I’ve been reading black on white books for my entire life and I never thought to complain about it.


Looks like Incus has no GUI?

Proxmox has a nice web GUI.


It has one[1] (optional). Proxmox has a shittier, but more featureful, web UI.

[1]: https://blog.simos.info/how-to-install-and-setup-the-incus-w...


I like the Proxmox web UI.

Also, looking at the link you posted, it looks like Incus can only do a fraction of what Proxmox can do. Is that the case, or is that web UI the limiting factor?


Reading a few blogs and forums about it today, people are talking about switching to Gateway API (from "legacy" Ingress).

And I do not understand it:

1. Ingress still works; it's not deprecated.

2. There are a lot of controllers that support both Gateway API and Ingress (for example, Traefik).

So how is the Ingress Nginx retirement related to / how does it affect the switch to Gateway API?


1) ingress still works but is on the path to deprecation. It's a super popular API, so this process will take a lot of time. That's why service meshes have been moving to Gateway API. Retiring ingress-nginx, the most popular ingress controller, is a very loud warning shot.

2) see (1).


It doesn't, but the Kubernetes team was kind of like "Hey, while you are switching, maybe switch away from the Ingress API?"


Ingress as defined by Kubernetes is really restricted if you need to do rewriting, redirecting and basically all the stuff we've been doing in pre-Kubernetes times. Nginx Ingress Controller worked around that by supporting a ton of annotations which basically were ingested into nginx.conf, to the point that any time you had a need everyone just assumed you were using nginx-ingress and recommended an annotation or two.

In a way, it was a necessity, since Ingress was all you'd get and without stuff like rewriting, doing gradual Kubernetes migrations would have been much more difficult to impossible. For that reason, every ingress controller tried to go a similar, but distinctly different way, with vastly incompatible elements, failing to gain traction. In a way I'm thankful they didn't try to reimplement nginx annotations (apart from one attempt I think), since we would have been stuck with those for foreseeable future.

Gateway API is the next-gen standardized thing to do ingress, pluggable and upgradable without being bound to a Kubernetes version. It delivers _some_ of the most requested features for Ingress, extending on the ingress concept quite a bit. While there is also quite a bit of mental overhead and concepts only really needed by a handful of people, just getting everyone to use one concept is a big big win for the community.

Ingress might not be deprecated, but in a way it was late to the party back in the day (OpenShift still has Route objects from that era because ingress was missing) and has somewhat overstayed its welcome. You can redefine Ingress in terms of Gateway API and this is probably what all the implementers will do.
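
To make the mapping concrete, here is a hypothetical Ingress and a roughly equivalent Gateway API HTTPRoute (the names and the referenced Gateway are made up; exact behavior depends on your controller):

```yaml
# Classic Ingress (hypothetical example)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
---
# Roughly equivalent HTTPRoute (assumes a Gateway named "shared-gw" already exists)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
    - name: shared-gw
  hostnames:
    - example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web
          port: 80
```

The key structural difference: the listener (the Gateway) is a separate, shareable object, so things like rewrites and header modifications get first-class `rules` fields instead of controller-specific annotations.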


I think it's that Gateway is new (relatively speaking) so there's a lot of places it's a good fit that haven't adopted it yet.


And there is no profit, too.


It's very depressing that we're almost quite literally converting money into BTUs via GPUs and not even making a buck, meanwhile the ocean gets hotter.


We finally reached critical mass on seeing money as an arbitrary construct, so now we're converting it into real, physics-based heat. Entropy at its finest.


I've been saying roughly the same thing about cryptocurrencies (just a good way to waste fuckhuge amounts of resources on digital tulips to enable crime) but it never seems to stop anyone from plowing ahead on being stupid.


This is precisely what soured me on crypto. It's a ridiculous waste, a circle jerk.


Google released the MedGemma model: "optimized for medical text and image comprehension".

I use it and have found it helpful.


I'm running k3s at home on a single node with local storage. A few blogs, a forum, MinIO.

Very easy, reliable.

Without k3s I would have used Docker, but k3s really adds important features: easier network management, more declarative configuration, bundled Traefik...

So, I'm convinced that quite a few people can happily and efficiently use k8s.

In the past I used another k8s distro (Harvester) which was much more complicated to use and fragile to maintain.
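
For reference, a single-node setup like this is about two commands (the install one-liner is k3s's documented bootstrap; it needs root and network access, so treat this as a sketch):

```shell
# Install k3s as a single-node server (bundles containerd, flannel, Traefik)
curl -sfL https://get.k3s.io | sh -

# Verify the node is up using the bundled kubectl
k3s kubectl get nodes
```

Server options can also be kept declaratively in /etc/rancher/k3s/config.yaml instead of CLI flags.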


Check out Talos Linux if you haven't already, it's pretty cool (if you want k8s).


I tried Talos a few months ago. Found it unstable and complicated; reported a few bugs.

And because it's "immutable", I found it significantly more complicated to use, with no tangible benefits. I do not want to learn and deal with declarative machine configs, or learn how to create custom images with GPU drivers...

Quite a few things which I can get done on Ubuntu / Debian in under 60 seconds take me half an hour to figure out with Talos.


Learning new things takes time.

It sounds like an immutable kubernetes distro doesn't solve any problems for you.


How do you manage node settings k8s does not yet handle with Talos?


Talos has its own API that you interact with primarily through the talosctl command line. You apply a declarative machineconfig.yaml, with which custom settings can be set per-node if you wish.
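
A small per-node patch might look something like this (the specific sysctl and kubelet flag are made-up examples; field names follow Talos's machineconfig schema):

```yaml
# patch.yaml - applied with something like:
#   talosctl patch machineconfig --nodes 10.0.0.2 --patch @patch.yaml
machine:
  sysctls:
    vm.max_map_count: "262144"   # example node-level kernel setting
  kubelet:
    extraArgs:
      max-pods: "250"            # example kubelet override
```

Anything not expressible through the machineconfig (or a Talos extension) generally has to wait for upstream support, which is the trade-off of the immutable design.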


Proxmox has built-in support for Ceph, which is promoted as a VMFS equivalent.

I don't have much experience with them, so I can't tell if it's really on the same level.


Proxmox with Ceph can do failover when a node fails. You can configure a VM as high-availability to automatically make it boot on a leftover node after a crash: https://pve.proxmox.com/wiki/High_Availability . When you add ProxLB, you can also automatically load-balance those VMs.

One advantage Ceph has over VMware is that you don't need specially approved hardware to run it. Just use any old disks/SSDs/controllers. No special extra expensive vSAN hardware.

But I cannot give you a full comparison, because I don't know all of VMware that well.
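
The HA setup described above boils down to a couple of commands on a Proxmox node (a sketch from memory; check the ha-manager man page for exact flags before relying on it):

```shell
# Register VM 100 as an HA resource so it is restarted on a surviving node
ha-manager add vm:100 --state started

# Show HA resource state across the cluster
ha-manager status
```

With Ceph as shared storage there is nothing to migrate on disk, so failover is essentially a restart of the VM elsewhere.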


That is the problem. LLMs can't be trusted.

I was searching on HuggingFace for a model that would fit in my system RAM + VRAM. And the way HuggingFace shows the models - a bunch of files, with the size of each file, but no total. I copy-pasted that page to an LLM and asked it to count the total. Some of the LLMs counted correctly, and some confidently gave me a totally wrong number.

And that's not that complicated a question.
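
That kind of summing is of course trivially scriptable; a throwaway sketch (the file listing here is made up - paste the real "name size unit" lines from the page instead):

```shell
# Hypothetical HuggingFace-style listing: one "filename size unit" entry per line
listing='model-00001-of-00002.safetensors 4.97 GB
model-00002-of-00002.safetensors 1.50 GB'

# Sum the sizes, normalizing units to bytes, and report the total in GB
total_gb=$(printf '%s\n' "$listing" | awk '
  { size = $2; unit = $3
    mult = (unit == "GB") ? 1e9 : (unit == "MB") ? 1e6 : (unit == "kB") ? 1e3 : 1
    sum += size * mult }
  END { printf "%.2f", sum / 1e9 }')
echo "$total_gb GB"   # -> 6.47 GB
```

Exactly the sort of deterministic arithmetic where a ten-line script beats asking an LLM.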

