r-bar's comments

The committee wanted Cosmos, Azure, and Postgres all in the name and wouldn't compromise.


Lima (1) is a project that packages Linux distros for macOS and runs them via QEMU under the hood. Maybe you could solve your problem by launching one of their VMs and inspecting the command line it generates; you might find an option you were missing.

(1) https://github.com/lima-vm/lima


I'll check this out. There are many different systems out there like UTM and such, but I want the most basic / minimal set of dependencies, something that will work basically anywhere: just QEMU. Not the usual nonsense of UTM or maybe Parallels (sometimes Lima) for Mac, VirtualBox for Windows, and QEMU for Linux. QEMU alone should suffice everywhere, and it's much more secure that way.


K3S includes some extras that make it nice for working in small local clusters, but are not part of the standard k8s codebase.

* Traefik daemonset as a load balancer

* Helm controller that lets you apply helm manifests without the helm command line

* Upgrade controller

* Sqlite as the default backing store for the k8s API

* Their own local storage provisioner

K0S has a lot of the same goals: be lightweight and self-contained in a single binary. But K0S tries to be as vanilla as possible.

Choosing between the two comes down to your use case. Do you want lightweight and compatible (k0s), or lightweight and convenient (k3s)?

Edit: formatting


Most of what you've listed for k3s is also included in k0s. I wouldn't go so far as to say k0s isn't convenient.

* A helm controller is included in k0s

* Etcd is bundled and bootstrapped automatically, which I prefer because I don't want the overhead of the translation that Kine does. That said, Kine is available if a non-etcd datastore is preferred.

* Upgrade controller is included (autopilot).

* They have a local storage provider based on OpenEBS.

* Ingress is missing, but thanks to the built-in helm controller it can be bootstrapped upon cluster initialisation.

Overall, together with k0sctl and its declarative configuration, it is easier to deploy k0s than it was to deploy k3s.


Can you please elaborate on the "kine" overhead?


Kine (https://github.com/k3s-io/kine) is a shim (embedded in k3s, run as an external process otherwise) that translates the etcd API to enable compatibility with a database or alternative datastore. Kubernetes natively speaks etcd, so this translation is what enables using SQLite or another database as the backing store, but it incurs an overhead.
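To make the translation concrete, here is a toy kine-style shim: it accepts etcd-flavored get/put operations but stores the data in SQLite. The class name and schema are illustrative only, not Kine's actual implementation, which is far richer (revisions, watches, leases, etc.):

```python
import sqlite3

class EtcdShim:
    """Toy kine-style shim: etcd-like get/put backed by SQLite.
    (Illustrative sketch only; the real Kine schema and API are richer.)"""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value BLOB)"
        )

    def put(self, key, value):
        # An etcd PUT becomes an SQL upsert; this extra round trip through
        # SQL is the kind of translation overhead being discussed.
        self.db.execute(
            "INSERT INTO kv (key, value) VALUES (?, ?) "
            "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
            (key, value),
        )
        self.db.commit()

    def get(self, key):
        row = self.db.execute(
            "SELECT value FROM kv WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None
```

Every apiserver read or write pays for that extra hop and SQL parsing, which is consistent with the higher latencies mentioned below.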

I don't have specific numbers, unfortunately, since it was years ago that I benchmarked Kine against etcd, but I got better results with etcd both in a cluster and on a single node.

I happened to stumble upon this paper that echoes my experience: https://programming-group.com/assets/pdf/papers/2023_Lightwe... Particularly the high controller CPU usage (even for an empty cluster) and the higher latencies.


thanks!

my problem with etcd was very high and constant I/O and CPU usage. I don't mind the latency.


Thank you! Great details. Definitely want convenient.


You are getting a lot of troll replies, but it is actually interesting to think about.

The waste is uncaptured value. It is some part of your software business's domain where encoding the process is just too hard, too expensive, or requires physical intervention. So the business never chooses to build that feature, leaving some part of the problem domain unsolved. Potentially someone else, smaller, could come in, try to solve that problem, and capitalize on that wasted value.


I think human programming is not untapped at all. This is describing every line of business application in existence. A great example of both the power and limitations of this is phone trees for customer support.

A product team and dev team encode business knowledge and flows into code and leverage a human to make judgement calls when necessary. The outcome is a program that can either be used by skilled workers to multiply their output or allow unskilled workers to perform tasks that would have formerly required a skilled worker to accomplish.

There are already (arguably) optimized flows and design patterns for application UX. Companies have already spent years trying to build and optimize this "human programming". Dev teams have developed many DSLs to make it easier to encode business logic into their applications more quickly.

I am not saying line of business applications are good or near some optimal final form, but to call "human programming" untapped is taking a very narrow view of the definition.


How does supabase not qualify as open source?

Their stack is primarily composed of other independent open source projects. The one component that isn't an independent project is their "realtime" server, which serves updates from Postgres' WAL over websockets, but that is open sourced[0] under Apache 2.0. From my understanding, the primary part that has not been open sourced is their database browser / web UI, and there are plenty of alternative management tools for Postgres. Since you can export your database, what else would you need to ensure your portability and independence?

Granted, they make their docs fairly opaque when it comes to self-hosting, presumably to encourage you to just use their hosted service. Hosting open source projects seems like a very ecosystem-friendly way of monetizing.

[0] https://github.com/supabase/realtime


> From my understanding the primary part that has not been open sourced is their database browser / web UI.

FYI, this is also open source: https://github.com/supabase/supabase/tree/master/studio


I have been dying for this. Currently I use privacy.com and their spending limits to get the most basic functionality. I would love for my bank to provide an API and make this type of control and other information available. If I could really dream, it would be standardized across banks.


We're building retail accounts with exposed APIs. I encourage you to send me an email (check profile) if that's something you're interested in.


Wise lets you access your account and money with APIs. You can create a personal token (and webhooks if you want) at https://wise.com/settings and you're off to the races. API docs are at https://api-docs.wise.com
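As a rough sketch of what that looks like, the personal token goes in a Bearer Authorization header on requests like `GET /v1/profiles`. This uses only the standard library; the base URL and endpoint are taken from the public docs, so check api-docs.wise.com for current details:

```python
import urllib.request

# Production base URL per the public docs; a sandbox environment also exists.
API_BASE = "https://api.wise.com"

def build_profiles_request(token: str) -> urllib.request.Request:
    """Build an authenticated GET /v1/profiles request.

    The personal API token from the settings page goes in a
    Bearer Authorization header.
    """
    req = urllib.request.Request(f"{API_BASE}/v1/profiles")
    req.add_header("Authorization", f"Bearer {token}")
    return req

# To actually send it:
#   with urllib.request.urlopen(build_profiles_request(token)) as resp:
#       profiles = resp.read()
```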

Disclaimer: I work on Wise. (We're hiring: https://wise.jobs)


I don't think true Python "one-liners" are a thing, but the awkward thing about awk is that it sits in this place where what you are doing is complicated enough that you need awk, yet simple enough that it fits in a one-liner. Those cases have been exceedingly few and far between for me, enough so that every time I want to reach for awk I have to go look up how to do anything more complex than printing fields. That completely defeats the point of the quick one-liner.

May as well open up vim, write my 7 lines of Python, and run it. Because I use it every day and didn't have to look anything up, it ends up far faster. Then when I am done I either delete it, throw it in a scripts directory, or make it part of some existing infrastructure repo. And if I keep it, because I used Python it is much more readable than the awk one-liner would have been.

I have tried in earnest to memorize awk's idiosyncrasies multiple times now. By the time I go to use what I learned, it is months later and I have forgotten enough that I need to go look stuff up again.

So in a way, here I am: The guy that writes "one liners" in python.
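For the record, those 7-line scripts really aren't much longer than the awk they replace. The classic field-sum `awk '{s += $2} END {print s}'` might come out as something like this (field index and function name are just illustrative):

```python
import sys

def sum_field(lines, field=1, sep=None):
    """Sum one whitespace-separated field, awk-style.

    `field` is 0-indexed here, so field=1 corresponds to awk's $2.
    Blank lines are skipped, as awk's pattern matching would.
    """
    return sum(float(line.split(sep)[field]) for line in lines if line.strip())

if __name__ == "__main__":
    print(sum_field(sys.stdin))
```

Run as `python sumfield.py < data.txt`; longer to type than the awk, but nothing needed looking up.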


I think that is a good point: writing a short Python script is often the best solution.

I use awk (and python) daily at work. I work with a lot of flat files, and I use awk when I am doing data quality checks. One of the "sweet spots" it hits for me is when I need to group data by value, or other relatively simple aggregations.
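That group-by-value sweet spot is awk's associative-array idiom, `{a[$1] += $2} END {for (k in a) print k, a[k]}`, and it maps naturally onto a defaultdict. A small sketch (function name is just illustrative):

```python
from collections import defaultdict

def group_sum(lines):
    """Group rows by their first field and sum the second,
    like an awk associative array over $1 and $2."""
    totals = defaultdict(float)
    for line in lines:
        if not line.strip():
            continue  # skip blank lines
        key, value = line.split()[:2]
        totals[key] += float(value)
    return dict(totals)
```

This is the kind of aggregation where either tool is a handful of lines; which one wins mostly depends on which syntax you still remember.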


Yeah, it's a different world from when I learned Awk. You might enjoy the (very short) book by the creators just because it's a great focused expression of the Unix way. But nobody needs to learn it.


Perl is sometimes "better awk".


They also seem to be the most willing to open up their GPU sharding API, GVT-g, based on their work with their existing Xe GPUs. The performance of the first-generation implementation was a bit underwhelming, but it seems like the intention is there.

If Intel is able to put out something reasonably competitive and that supports GPU sharding it could be a game changer. It could change the direction of the ecosystem and force Nvidia and AMD to bring sharding to their consumer tier cards. I am stoked to see where this new release takes us.

Level1Linux has a (reasonably) up-to-date overview of the state of the GPU ecosystem that does a much better job outlining the potential of this tech:

https://www.youtube.com/watch?v=IXUS1W7Ifys


The best solution I have found for developing locally on k8s is k3d [0]. It quickly deploys k3s clusters inside Docker, and it comes with a few extras like adding a local Docker registry and configuring the cluster(s) to use it. It makes it super easy to set up and tear down clusters.

I usually only reach for it when I am building out a Helm chart for a project and want to test it. Otherwise docker-compose is usually enough, and it is less boilerplate to just get an app and a few supporting resources up and running.

One thing I have been wanting to experiment with more is using something like Tilt [1] for local development. I just have not had an app that required it yet.

[0] https://k3d.io/ [1] https://tilt.dev/


