
Honestly, the person should spend their time fixing their shit instead of writing blog posts.

I find IntelliJ a great IDE, modern frameworks are fun to use, and AI helps me do things I don't want to do or things I just need (like generating a good README.md for other people).

Containers are great and mine build fast.

My startup has an HA setup with self-healing thanks to k8s, and everything is in code, so I don't need to worry about backing up some random config files.
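The self-healing part mostly comes from declaring the desired state and letting k8s reconcile. A minimal sketch of what that looks like (all names and the image here are made up for illustration):

```yaml
# Hypothetical Deployment: k8s keeps 3 replicas alive and restarts
# any pod whose liveness probe starts failing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels: { app: api }
  template:
    metadata:
      labels: { app: api }
    spec:
      containers:
        - name: api
          image: ghcr.io/example/api:1.2.3   # hypothetical image
          livenessProbe:                     # failing probe -> container restart
            httpGet: { path: /healthz, port: 8080 }
```

Because this lives in git, there are no "random config files" left to back up; the cluster state can be recreated from the repo.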

Hardware has never been this cheap: NVMe drives, RAM, and compute. A modern laptop today has a brilliant display, is quiet, can run everything, and has long battery life.

Traffic? No brainer.

WebSphere was a monster with shitty features. I remember when finally all the JEE servers had a startup time of just a few seconds instead of minutes, and RAM finally got cheap enough that you could run Eclipse, the web server, etc. locally.

Java was verbose, a lot more verbose than today.

jQuery was everywhere.




How did you do that?

I followed the same steps the Verge reporter did: downloaded GameHub and connected my Steam account. The initial boot was lengthy, but it worked.

You completely ignore the Foxconn problem?

Google makes money with ads and at least takes this seriously.

Apple just exploits.


Uncloud is so far away from k8s, it's not k8s-like.

A normal person wouldn't think 'hey, let's use k8s for the low-stakes deployment over here'.


>A normal person wouldn't think 'hey, let's use k8s for the low-stakes deployment over here'.

I'm afraid I have to disappoint you


Which is fine, because it absolutely matches the result.

You would not be able to operate hundreds or thousands of nodes without operational complexity, and k8s helps you here a lot.


So you built an insecure version of Nomad/Kubernetes and co?

If you do anything professional, you'd better choose proven software like Kubernetes, or managed Kubernetes, or whatever else the hyperscalers provide.

And the complexity you are solving now, or have to solve, k8s has already solved: IaC for example, cloud provider support for provisioning an LB out of the box, cert-manager, all the Helm charts for observability and logging, an ecosystem to fall back on (operators), ArgoCD <3, storage provisioning, proper high availability, kind for e2e testing on CI/CD, etc.
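The kind-on-CI part, for instance, is only a few lines. A sketch as a GitHub Actions job (the manifest path, deployment name, and test script are assumptions, not from any real project):

```yaml
# Hypothetical CI job: spin up a throwaway kind cluster, deploy, run e2e tests.
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: helm/kind-action@v1           # creates a local kind cluster
      - run: kubectl apply -f deploy/       # hypothetical manifest directory
      - run: kubectl rollout status deployment/my-app --timeout=120s
      - run: ./run-e2e-tests.sh             # hypothetical test script
```

The same manifests you ship to production get exercised on every pull request, which is exactly the ecosystem payoff being described.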

I'm also always lost as to why people think k8s is so hard to operate. Just take a managed k8s. There are so many options out there, and they are all compatible with the whole k8s ecosystem.

Look, if you don't get Kubernetes, its use cases, advantages, etc., fine, absolutely fine, but your solution is not an alternative to k8s. It's another container orchestrator, like Nomad and k8s and co., with its own advantages and disadvantages.


It's not a k8s replacement. It's for the small dev team with no k8s experience. For people that might not use Docker Swarm because they see it's a pretty dead project. For people who think "everyone uses k8s", so we should, too.

I need to run on-prem, so managed k8s is not an option. Experts tell me I should have 2 FTE to run k8s, which I don't have. k8s has so many components, how should I debug that in case of issues without k8s experience? k8s APIs change continuously, how should I manage that without k8s experience?

It's not a k8s replacement. But I do see a sweet spot for such a solution. We still run Docker Swarm on 5 servers, no hyperscalers, no API changes expected ;-)
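At that scale a Swarm stack stays pleasantly small. A sketch of what a whole deployment can look like (the service name, image, and ports are made up):

```yaml
# docker-stack.yml — deployed with: docker stack deploy -c docker-stack.yml web
version: "3.8"
services:
  app:
    image: registry.example.com/app:1.0   # hypothetical image
    deploy:
      replicas: 3                         # spread across the servers
      restart_policy:
        condition: on-failure             # Swarm reschedules failed tasks
    ports:
      - "80:8080"                         # routing mesh on every node
```

That one file covers scheduling, restarts, and ingress, with no extra components to debug.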


I still run Docker Swarm on 3 servers. I haven't needed to update it much over the past 5 years.

How has your Swarm experience been so far? It's so disappointing that Docker seems to be slowly but steadily abandoning it. There are only a couple dozen mostly maintenance commits in the swarmkit repo for all of 2025 :sigh:

Those are all sub-par cloud technologies which perform very badly and do not scale at all.

Some people would rather build their own solutions to do these things, with fine-grained control and the ability to handle workloads more complex than a shopping-cart website.


I've tried to refrain from commenting, but your comment pushed me over the edge. I want to dismiss it as ignorant, either of the fact that Amazon is more than just a shopping cart, or of the fact that you don't even need cloud technologies until you have thousands of customers. But I must concede there's a chance you fall in that middle area and I'm wrong. It's < 5 percent. But yeah, sure: we have a scale problem, and you're right, you've identified the nonsense cloud technologies that won't fix it. I'm glad you chimed in to convince us to build our own for 5,000 customers.

Even kubectl slows down to a crawl with a thousand deployments on the same cluster.

The protocols are bad, as is the tech supporting them.


Nvidia has everything they need to build the most advanced GPU chip in the world and mass produce it.

Everything.

They can easily just do this for more optimized chips.

"Easily" in the sense that it wouldn't require that much investment. Nvidia knows how to invest and has done so for a long time. Their Omniverse or robotics platform Isaac are all expensive. Nvidia has 10x more software engineers than AMD.


They still go to TSMC for fab, and so does everyone else.

For sure. But they also have high volume and know how to do everything.

Also, certain companies normally don't like to do things themselves if they don't have to.

Nonetheless, Nvidia is where it is because it has CUDA and an ecosystem. Everyone uses this ecosystem, and then you just run that stuff on the bigger version of the same ecosystem.


Sorry to say, but the fact that you argue LLMs will never become AGI shows you are not up to date.

People don't assume LLMs will be AGI; people assume that world models will lead us to AGI.

I personally never assumed LLMs would become AGI. I always assumed that LLMs broke the dam for investment and research into massive-scale compute ML, and LLMs are very, very good at showing where the future is going, because they are already so crazy good that people can now imagine a future where AGI exists.

And that was very clear as soon as GPT-3 came out.

The next big thing will probably be either a LOT more RL or self-propelling AI architecture discovery. Both need massive compute to work well, but then they will potentially provide even faster progress as soon as humans are out of the loop.


> People don't assume LLMs will be AGI,

I wish that were true.

> people assume that world models will lead us to AGI.

Who are these people? There is no consensus around this that I have seen. You have anything to review regarding this?

> as soon as GPT-3 came out.

I don't think that was true at all. It was impressive when it came out, but people in the field clearly saw the limitations and what it was.

RL isn't magical either. Google's AlphaGo, as an example, often required human intervention to get the RL to work correctly.


AlphaGo Zero doesn't need much human intervention at all.

Regarding world models: all the big names. LeCun, Demis Hassabis, Fei-Fei Li too. And they are all working on it.

LLMs will definitely play some type of role in AGI. After all, you can already ask an LLM a lot of basic things, like 'what are the common steps to make tea'. A type of guide, long-term fact memory, or whatever this can be called.


> AlphaGo Zero doesn't need much human intervention at all

You should research it and not just read news articles. RL did not work and required human intervention numerous times before it got close to what it is now.


Are OpenAI or Anthropic et al seriously building towards “world models”? I haven’t seen any real evidence of that. It seems more like they are all in on milking LLMs for all they are worth.

I mentioned it in my other comment, but people like LeCun, Demis Hassabis, and Fei-Fei Li do.

There are indications that OpenAI is doing this, but nothing official as far as I know, and I have not heard anything from Anthropic.


Given that they invented Deep Blue, they are really struggling with AI.

Their Granite family of models is actually pretty good! They just aren't working on the mainstream large LLMs that capture all the attention.

IBM is always very conscious of what their clients need (and the large consultancy business provides a very comprehensive view). It just turns out their clients don’t need IBM to invest in large frontier models.

IBM developed SSM/Mamba models and is also releasing training datasets, I think; quantum computing is a strategic option too.

For sure, but do you see them on any relevant leaderboards? Any news about how good they are?

I don't.

I know their models, but not because I constantly read about them.



That's not the point.

DeepMind is not a UK company; it's Google, aka US.

Mistral is a real EU-based company.


Using US VC dollars. Where their desks are isn’t really important.

Increasingly, where the desks and servers are is critical.

The CLOUD Act and the current US administration doing things like sanctioning the ICC demonstrate why the location of those desks is important.


That's such a silly argument. X, OpenAI, and others have large Saudi investments. In the grand scheme of things, the US is largely indebted to China and Japan.

Currency is interchangeable. Location might not be.

An EU company pays taxes in the EU, has an EU mindset (worker laws, etc.), and focuses more on the EU than on other countries.

And an EU company can't be forced by the US government to hand over data.

