> Open Source

Where? Let's take a random example: https://hub.docker.com/hardened-images/catalog/dhi/traefik

Ok, where is the source? Open source means I can build it myself, maybe because I'm working in an offline/airgapped/high compliance environment.

I found a "catalogue" https://github.com/docker-hardened-images/catalog/blob/main/... but this isn't a build file, it's some... specialized DHI tool to build? Nothing https://github.com/docker-hardened-images shows me docs where I can build it myself or any sort of "dhi" tool.


Hi. Yes, we fully intend to open up access to the build tool here. The build file you see is a new format that we've built to be able to do reproducible builds. It's a new frontend on top of buildkit so you can use it with docker build. The team is currently working hard to provide access to this tooling which will enable you to create, build and modify the images in your environment. We just need a couple more days for this to be available.


You do not need a custom buildkit frontend to do reproducible builds with any modern container build system, including docker.

Vanilla docker/buildkit works just fine: we use it in Stagex with plain makefiles and Containerfiles, which makes it super easy for anyone to reproduce our images with identical digests and audit the process. The only non-default thing we do with docker is have it use the containerd backend that ships with docker distributions, since that allows deterministic digests without pushing to a registry. This lets us have the same digests across all registries.
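
For anyone who wants to try reproducing something similar locally, here is a rough sketch, assuming "containerd backend" means the containerd image store; the image name and flags are illustrative of recent docker/buildkit versions, not Stagex's exact setup:

    # /etc/docker/daemon.json -- switch the daemon to the containerd image store
    { "features": { "containerd-snapshotter": true } }

    # pin SOURCE_DATE_EPOCH so image and layer timestamps are deterministic;
    # rebuilding should then yield an identical image digest locally, no push needed
    docker build \
      --build-arg SOURCE_DATE_EPOCH=0 \
      --output type=image,name=example/app,rewrite-timestamp=true \
      .

The rewrite-timestamp option needs a reasonably recent buildkit, so if digests still differ between runs, check your docker/buildx version first.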

Additionally, our images are actually container native, meaning they are "from scratch" all the way down, avoiding any trust in upstream build systems like Debian or Alpine, their non-deterministic package management schemes, or their single-point-of-failure trust in individual maintainers.

We will also be moving to LLVM-native builds shortly, removing a lot of the complexity of multi-arch images for build systems. Easily cross-compile all the things from one image.

Honestly, we would not be mad at all if Docker just white-labeled these as official images; our goal is to move the internet away from risky and difficult-to-audit supply chains, as opposed to the "last mile" supply-chain integrity that is the norm in all other solutions today.

https://stagex.tools


More importantly, no bandwidth-charge penalty, as leaving AWS isn't inexpensive.


My friends owned it (I was never allowed to have a NES myself). Not once did ANY of us ever manage to land the plane. We tried MANY times. This blog makes it seem so easy I want to be angry at it :-)


Too much of anything sucks. Too big of a monolith? Sucks. Too many microservices? Sucks. Getting the right balance is HARD.

Plus, it's ALWAYS easier/better to run v2 of something when it's a complete from-scratch rewrite of v1. The article could have just as easily been "Why Segment moved from 100 microservices to 5" or "Why Segment rewrote every microservice". The benefits of hindsight and real-world data shouldn't be undersold.

At the end of the day, write something, get it out there. Make decisions, accept some of them will be wrong. Be willing to correct for those mistakes or at least accept they will be a pain for a while.

In short: No matter what you do the first time around... it's wrong.


Probably a lot of overlap in the Venn diagram of people who would like the two things. Mostly the "Early Adopter" circle.

Also, a lot of cars have a lot of limitations with comma.ai. Yes, you can install it on all sorts of cars, but with limitations like: only works above 32 mph, cannot resume from a stop, cannot take tight corners, cannot do stop-light detection, requires additional car upgrades/features, only known to support model year 2021, etc.

Rivian supports everything, and it has a customer base who LOVE technology, are willing to try new things, and ... have disposable income for a $1k extra gadget.


I’ve seen videos of massive touch-screen stuttering; is it still a thing on Rivian?


A lot of the Gen 1 users will likely swap over to it, though. They have basically dropped autonomy improvements for Gen 1, which is rug-pullish :(


I would wager that's because there isn't a lot of existing silicon that fits the bill. What COTS equipment is there that has all the CPU/tensor horsepower these systems need... AND is reasonably power efficient, AND is rated for a vehicle (wild temp extremes like -20F to 150F+, constant vibration, slams and impacts), AND will keep working for 15 years?

Yea, Tesla has some. But they aren't sharing their secret sauce. You can't just throw a desktop computer in a car and expect it to survive for the duration. Ford et al. aren't anywhere close to having "premium silicon".

So your only option right now is to build your own. And hope that maybe you can sell/license your designs to others later and make bucks.


NVIDIA's Orin series is the big one for tensor horsepower. Horizon Robotics and Qualcomm also have competitive automotive packages.

They are all expensive, but less than the risk-adjusted cost of developing a chip.


Having to work with Qualcomm is enough reason to not buy Qualcomm


Isn't that risk balanced by a healthy reward of controlling their verticals and possible secret sauce?

And their chips give "1600 sparse INT8 TOPS" vs the Orin's "more than 1,000 INT8 TOPS" -- so comparable enough? And going forward they can tailor it to exactly what they want?


Orin is Nvidia's last generation. Current gen is Thor at 1k TOPS. Rivian's announcement specifies TOPS at the module level. The actual chip is more like 800 and probably doubled. Throw two Thors on a similar board and you're looking at 2000 sparse int8 TOPS.

I've been involved with similar efforts on both sides before. Making your own hardware is not a clear cut win even if you hit timelines and performance. I wish them luck (not least because I might need a new job someday), but this is an incredibly difficult space.


Mostly it costs hundreds of millions to develop a chip; it relies on volume to recover the cost.

NVIDIA also tailor their chips to customers. It's a more scalable platform than their marketing hints at... Not to mention that they also iterate fairly quickly.

So far anyway, being on a specialised architecture is a disadvantage; it's much easier to use the advances that come from research and competitors. Unless you really think that you are ahead of the competition, and can sell some fairly inflexible solution for a while.


Their "launch trailer" shows the Steam Machine running Windows.


Do you mean this (~3m04s)? https://www.youtube.com/watch?v=OmKrKTwtukE&t=184s

That was the desktop mode, showing KDE Plasma (a Linux desktop environment).

Also, Blender on the left screen and Godot on the right screen!


Wasn't that the desktop mode of SteamOS?


Temperature-compensated crystal oscillators (TCXOs) are what they should be looking for. And to be clear, you can get SX1262 variants with one, e.g.: https://wiki.seeedstudio.com/wio_sx1262/

For the detailed rundown, see https://cdn.sparkfun.com/assets/f/f/b/4/2/SX1262_AN-Recommen... (page 14):

> In the case of an SX1262 operating at +22 dBm in the US 902 – 928 MHz band, the frequency drift measured during the maximum LoRAWAN™ packet duration stays below the maximum limit, provided thermal insulation is implemented around the crystal during PCB design.

> At extreme temperatures (below -20 °C and above 70 °C), it is recommended to use a TCXO.

> For any other frequency bands corresponding to longer RF packet transmissions at +22 dBm, it is recommended to use a TCXO.


Theory and reality are different here.

As used in Meshtastic devices, this chip does actually fail during normal LoRa transmission under reasonable conditions.

I know because I've seen the exact failure.


You've seen the failures in variants with a TCXO?


I have to agree that there were a lot of good options, but uv's speed is what sets it apart.

Also, the ability to have a single script declare its deps using TOML in the header, super easily.

Also also, the ability to use a random Python tool in effectively seconds with no faffing about.
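
For the curious, the single-file deps bit is the inline script metadata format (PEP 723); a tiny sketch, with the script name and dependency chosen purely as examples:

    # example.py
    # /// script
    # requires-python = ">=3.12"
    # dependencies = ["requests"]
    # ///
    import requests

    # "uv run example.py" resolves and installs requests into a throwaway env, then runs this
    print(requests.get("https://example.com").status_code)

And the "random Python tool in seconds" part is uvx (an alias for uv tool run), e.g. "uvx ruff check ." fetches ruff into an ephemeral environment and runs it.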


People are still using Perl for large projects in 2025?

Look, I don't hate Perl. It was my first real language beyond BASIC, and I used it for a long, long time. But Perl's popularity peaked in the late '90s? Early 2000s? The failed Perl 6 adventure was about the time that people started fleeing elsewhere, like to PHP.


Personally I don't use it, but I admire Perl from a distance. I know Craigslist and eBay use it? I'm not sure if it's used as much for systems stuff as it used to be.

Maybe Perl 6 was not even really needed and Perl is perfect ;)


For some measures of failure. Raku (aka Perl 6) does exist, after all: https://raku.org


I know some large financial institutions that still use it. They were building big systems using the stuff in the 90s and early 00s. It still works and nobody has the appetite to rewrite it as it's a massive undertaking that would be very expensive and high risk. Better to just keep updating it to support the occasional new requirement.

They'll rarely advertise it in a job listing of course. They're looking for people with Java/C#/C++/Python experience, and there's certainly plenty of that, but also thousands of little Perl scripts doing ETL workflows.


What about maintaining the codebases that got written 25 years ago? Those still exist and need care to stay operational. Sometimes there’s no point rewriting in the next trendy language, although it can become obligatory if it’s impossible for the company to find skilled workers because everybody moved to a different language ecosystem.


Perl is #10 on the TIOBE index this year.


I agree, I thought everyone had moved on to Python or other languages.


Perl can be a huge hassle because of lib versioning. It killed off my internal-monitoring project at Amazon. Python has the same problem...


A problem that's largely solved in tooling for both languages, other than the plenty of novice library maintainers in the ecosystem. Which is hardly a fault of the language, and more a matter of choosing those vendors for your projects.
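
On the Python side, for instance, the usual tooling answer is pinning exact versions so every environment resolves identically; a minimal sketch (file contents and versions are just placeholders):

    # requirements.txt
    requests==2.32.3
    urllib3==2.2.3

    # install exactly those versions
    pip install -r requirements.txt

Lockfile-based tools (uv or Poetry for Python, Carton on the Perl side) take the same idea further by pinning the whole transitive tree.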


OK, but I didn't get to pick the tooling; I just inherited the project.


Yeah I don't think anyone really uses it. Perl 5 is dead and Perl 6/Raku was never alive.

Weird donation if you ask me. There are many many many more interesting languages that I would rather see succeed. Koka, Hylo, Vale, Whiley, Lobster, etc.

