Hacker News | woodruffw's comments

I think OP's point is that a "99 year lease" isn't worth very much without a firm guarantee that the lease in fact lasts that long. I don't really have an opinion on land leases in the PRC, but it doesn't seem facially unreasonable to suspect that a foreign lease holder's land value wouldn't be a priority for China's leadership during an economic crisis.

This is on full display with the US's Venezuela problem: no one believes the US will hold it, so oil companies don't want to invest because last time exactly this happened - they had everything seized.

Imagine if you'd invested in lithium mining in Afghanistan 15 years ago: you'd likely have paid a lot, made little money, lost employees and then lost it to the Taliban.


It’s satire.

So is the comment you replied to...

Clearly I’m not on the top of my game today!

What does attestation mean in this context? The point of the Web PKI is to provide consistent cryptographic identity for online resources, not necessarily trustworthy ones.

(The classic problem with self-signed certs being that TOFU doesn’t scale to millions of users, particularly ones who don’t know what a certificate fingerprint is or what it means when it changes.)
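
(For a sense of what that asks of users: pinning a self-signed cert means obtaining its fingerprint out of band and comparing it against output like the sketch below, with the host as a placeholder. That isn't a workflow millions of non-experts can be expected to follow, let alone re-verify when the fingerprint changes.)

```
# Print the SHA-256 fingerprint of the certificate a server presents.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256
```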


One of the ideas behind short-lived certificates is to put certificate lifetimes within the envelope of CRL efficacy, since CRLs themselves don’t scale well and are a significant source of operational challenges for CAs.

This makes sense from a security perspective, insofar as you agree with the baseline position that revocations should always be honored in a timely manner.
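
(Both halves of that tradeoff are visible on the certificate itself. Assuming a PEM cert at cert.pem and OpenSSL 1.1.1 or newer, you can see the validity window and where revocation data would have to come from with something like:)

```
# Show the validity window and the CRL distribution points, if any.
openssl x509 -in cert.pem -noout -startdate -enddate -ext crlDistributionPoints
```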


This is Astro, not Astral. uv is Astral :-)

Edit: OP clarified what they meant, I'm sorry for the misunderstanding on my part!


They know, hence why they used "e.g.", i.e. exempli gratia.

I don't think that's really clear. I think we could both defer to the OP clarifying.

For pedantry's sake: neither i.e. nor e.g. would be correct here. You want cf. ("conferatur") to invite a comparison; e.g. is when an example pertains to an instance. In this case uv would not pertain to the instance, because Astro is not Astral.


cf. would invite a fair bit of confusion on an article about cloudflare

I agree! That's why I think it's probably just a confusion between entities. It doesn't make sense either as example or as a comparison (although IMO it makes more sense as the latter).

(For the OP: I'm sorry if I misinterpreted you.)


It's all good. Hardly matters. It was just becoming too big a discussion for something far too minor. Any frustration I had from being misunderstood (primarily self-directed) was alleviated from satvikpendem guessing correctly what I intended.

"e.g." IS correct because uv is an example or instance of a dev tool.

"e.g." isn't used correctly here. It's intended use is as a connector linking a clause to examples supporting that clause. You can't simply substitute "for example" with "e.g." anywhere in a sentence and expect it to function correctly.

Regardless, these Latin abbreviations are best avoided entirely due to the surprising number of readers who don't understand them.


For the perplex:

e.g. is Latin for "exempli gratia" = for example
i.e. is Latin for "id est" = that is


As someone who was perplexed: I've only heard "perplex" used in the past tense ("I was perplexed"), so seeing "For the perplex" just made me confused as to what "perplex" meant, and I had to do a further search to decipher this tree of comments haha

Ha, ha, sorry. English is not my native language.

A good way to remember it is to use a backronym:

e.g. - example given

i.e. - in effect


i.e. - in eother (words)

in explanation

I'm not sure. I wouldn't generally call Astro a "dev tool". It's more of a framework.

It's possible you are right, but it isn't clear from the content of the comment.


Frameworks are a category of development tool. Things that developers use to be productive.

IMO saying a framework is a dev tool is like saying a cake mix is a cooking tool, because it allows you to be more productive when making a cake. Sure, if you look at it a certain way, it is correct. But that isn't the way the term is usually used.

Like coffee?

It confused the hell out of me, as my first thought was "oh great, astral.sh got bought by a large company, now we've eliminated the last obstacle to using uv in an enterprise context", only to realize that it's a different company with a similar name.

I mean good for them, but it would be nice if the same happened to e.g. Astral (cf.).


I think this part is really worth engaging with:

> Later, moving public key parsing to our own Rust code made end-to-end X.509 path validation 60% faster — just improving key loading led to a 60% end-to-end improvement, that’s how extreme the overhead of key parsing in OpenSSL was.

> The fact that we are able to achieve better performance doing our own parsing makes clear that doing better is practical. And indeed, our performance is not a result of clever SIMD micro-optimizations, it’s the result of doing simple things that work: we avoid copies, allocations, hash tables, indirect calls, and locks — none of which should be required for parsing basic DER structures.

I was involved in the design/implementation of the X.509 path validation library that PyCA cryptography now uses, and it was nuts to see how much performance was left on the ground by OpenSSL. We went into the design prioritizing ergonomics and safety, and left with a path validation implementation that's both faster and more conformant[1] than what PyCA would have gotten had it bound to OpenSSL's APIs instead.

[1]: https://x509-limbo.com


It is extremely common that a correct implementation also has excellent performance.

Also, even if somebody else can go faster by not being correct, what use is the wrong answer? https://nitter.net/magdraws/status/1551612747569299458


> It is extremely common that a correct implementation also has excellent performance.

I think that's true in general, but in the case of X.509 path validation it's not a given: the path construction algorithm is non-trivial, and requires quadratic searches (e.g. of name constraints against subjects/SANs). An incorrect implementation could be faster by just not doing those things, which is often fine (for example, nothing really explodes if an EE doesn't have a SAN[1]). I think one of the things that's interesting in the PyCA case is that it commits to doing a lot of cross-checking/policy work that is "extra" on paper but stills comes out on top of OpenSSL.

[1]: https://x509-limbo.com/testcases/webpki/#webpkisanno-san


I’d say correct on the common path. OpenSSL, with a fair amount of hand-waving, deals with a lot of edge cases that the strictly correct path doesn’t handle. Even libraries like libnss suffer from this.

Are these edge cases correct to the spec, or not?

There are multiple overlapping specifications for things like X.509. There are the RFCs (3280 and 5280 are the "main" ones) which OpenSSL generally targets, while the Web PKI generally tries to conform to the CABF BRs (which are almost a perfect superset of RFC 5280).

RFC 5280 isn't huge, but it isn't small either. The CABF BRs are massive, and contain a lot of "policy" requirements that CAs can be dinged for violating at issuance time, but that validators (e.g. browsers) don't typically validate. So there's a lot of flexibility around what a validator should or shouldn't do.


Yes.

The spec is often such a confused mess that even the people who wrote it are surprised by what it requires. One example was when someone on the PKIX list spent some time explaining to X.509 standards people what it was that their own standard required, which they had been unaware of until then.


Got any links to that conversation? Sounds fun.

I'm drawing a blank on it sorry, it's somewhere in an archive of messages but I can't find the appropriate search term to locate it. However it did turn up a reference to something else, namely this, https://www.cs.auckland.ac.nz/~pgut001/pubs/x509guide.txt. It hasn't been updated for a long time but it does document some of the crazy that's in those standards. The various Lovecraft references I think are quite appropriate.

Technically yes because I saved the messages, which I saw as a fine illustration of the state of the PKI standards mess. However I'd have to figure out which search term to use to locate them again ("X.509" probably won't cut it). I'll see what I can do.

Remember LibreSSL? That was born of Heartbleed, IIRC, and I remember presentation slides saying there was stuff in OpenSSL to support things like VAX, Amiga(?), and other ancient architectures. So I wonder if some of those things are there because of that.

Most of the performance regressions are due to lots of dynamic reconfigurability at runtime, which isn’t needed for portability to ancient systems. (Although OpenSSL is written in C it has a severe case of dynamic language envy, so it’s ironic that the pyca team want a less dynamic crypto library.)

The Amigans really like their system. So they kept using them long after mainstream users didn't care. By now there probably aren't many left, but certainly when LibreSSL began there were still enough Amigans, actually using an Amiga to do stuff like browse web pages at least sometimes, that OpenSSL for Amiga kinda made sense.

I mean, it still doesn't make sense, the Amigans should sort out their own thing, but if you're as into stamp collecting as OpenSSL is I can see why you'd be attracted to Amiga support.

Twenty years ago, there are Amigans with this weird "AmigaOne" PowerPC board that they've been told will some day hook to their legitimate 20th century Commodore A1200 Amiga. Obviously a few hundred megahertz of PowerPC is enough to attempt modern TLS 1.0 (TLS 1.1 won't be out for a while yet) and in this era although some web sites won't work without some fancy PC web browser many look fine on the various rather elderly options for Amigans and OpenSSL means that includes many login pages, banking, etc.

By ten years ago which is about peak LibreSSL, the Amigans are buying the (by their standards) cheaper AmigaOne 500, and the (even by their standards) expensive AmigaOne X5000. I'd guess there are maybe a thousand of them? So not loads, but that's an actual audience. The X5000 has decent perf by the standards of the day, although of course that's not actually available to an Amiga user, you've bought a dual-core 64-bit CPU but you can only use 32 bit addressing and one core because that's Amiga.


Now I wonder how much performance is being left on the table elsewhere in the OpenSSL codebase...

Given the massive regression with 3.x alone, you'll probably be happier if you don't know :/

haproxy has an article on the subject

https://www.haproxy.com/blog/state-of-ssl-stacks

TLDR - on the TLS parts, quite a lot, up to 2x slower on certain paths. Amusingly, openssl 1.1 was much faster.

libcrypto tends to be quite solid, though over the years other libraries have collected weird SIMD optimizations that enable them to beat OpenSSL by healthy margins.


Being on both sides of the open source value relationship, I feel somewhat skeptical of mechanisms that use dependency cardinality/"popularity" to allocate funding: at its best it's a proxy for core functionality (which is sometimes, but not always, the actually hard/maintenance-intensive stuff) and at its worst it incentivizes dependency proliferation (since two small core packages would be equally as popular as one medium-sized one).

I think this post accurately isolates the single main issue with GitHub Actions, i.e. the lack of a tight feedback loop. Pushing and waiting for completion on what's often a very simple failure mode is frustrating.

Others have pointed out that there are architectural steps you can take to minimize this pain, like keeping all CI operations isolated within scripts that can be run locally (and treating GitHub Actions features purely as progressive enhancements, e.g. only using `GITHUB_STEP_SUMMARY` if actually present).
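
(Concretely, that kind of progressive enhancement might look like the guard below in a shared CI script; this is only a sketch, and the heading text is illustrative:)

```
# Only write a step summary when GitHub Actions actually provides the file.
if [ -n "${GITHUB_STEP_SUMMARY:-}" ]; then
  echo "### Test results: all green" >> "$GITHUB_STEP_SUMMARY"
fi
```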

Another thing that works pretty well to address the feedback loop pain is `workflow_dispatch` + `gh workflow run`: you still need to go through a push cycle, but `gh workflow run` lets you stay in development flow until you actually need to go look at the logs.

(One frustrating limitation with that is that `gh workflow run` doesn't actually spit out the URL of the workflow run it triggers. GitHub claims this is because it's an async dispatch, but I don't see how there can possibly be no context for GitHub to provide here, given that they clearly obtain it later in the web UI.)
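
(The least-bad workaround I know of is to dispatch and then immediately ask for the newest run on that workflow and branch. This is only a sketch: the workflow file and branch names are placeholders, and it can race if several dispatches land at once.)

```
# Dispatch, then find (and watch) the run it most likely created.
gh workflow run ci.yml --ref my-branch
sleep 5  # give GitHub a moment to register the new run
run_id="$(gh run list --workflow ci.yml --branch my-branch --limit 1 \
  --json databaseId --jq '.[0].databaseId')"
gh run watch "$run_id" --exit-status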


I've standardized on getting github actions to create/pull a docker image and run build/test inside that. So if something goes wrong I have a decent live debug environment that's very similar to what github actions is running. For what it's worth.
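
(Roughly this pattern, with the image name and script path as placeholders: CI and the local machine run the same two commands, and the third gives an interactive shell in the same environment when something breaks.)

```
# Build the CI image and run the tests inside it (same commands in CI and locally).
docker build -t ci-env -f ci/Dockerfile .
docker run --rm -v "$PWD":/src -w /src ci-env ./ci/test.sh

# When a step fails, poke around interactively in the same environment.
docker run --rm -it -v "$PWD":/src -w /src ci-env bash
```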

I do the same with Nix as it works for macOS builds as well

It has the massive benefit of solving the lock-in problem. Your workflow is generally very short so it is easy to move to an alternative CI if (for example) Github were to jack up their prices for self hosted runners...

That said, when using it in this way I personally love Github actions


Nix is so nice that you can put almost your entire workflow into a check or package. Like your code-coverage report step(s) become a package that you build (I'm not brave enough to do this)

I run my own jenkins for personal stuff on top of nixos, all jobs run inside a devenv shell, devenv handles whatever background services are required (e.g. a database), /nix/store is shared between workers + an attic cache on the local network.

Oh, and there is also a nixosModule that is tested in a VM, which also smoke tests the service.

First build might take some time, but all future jobs run fast. The same can be done on GHA, but on github-hosted runners you can't get shared /nix/store.


I'm scared by all these references to nix in the replies here. Sounds like I'm going to have to learn nix. Sounds hard.

Gemini/ChatGPT help (a lot) when getting going. They make up for the poor documentation

LLMs are awful at nix in my experience. Just learn the fundamentals of the language and build something with it.

Calling nix documentation poor is an insult to actually poor documentation.

Remember the old meme image about Vim and Emacs learning curves? Nix is both of those combined.

It's like custom made for me on the idea level, declarative everything? Sign me up!

But holy crap I have wasted so much time and messed up a few laptops completely trying to make sense of it :D


What's the killer benefit of nix over, like, a Dockerfile or a package.lock or whatever?

package.lock is JSON only, Nix is for the entire system, similar to a Dockerfile

Nix specifies dependencies declaratively, and more precisely than Docker does by default, so the resulting environment is reproducibly the same. It caches really well and doubles as a package manager.

Despite the initial learning curve, I now personally prefer Nix's declarative style to a Dockerfile


Same here, though I think Bazel is better for DAGs. I wish I could use it for my personal project (in conjunction with, and bootstrapped by, Nix), but that's a pretty serious tooling investment that I feel is just going to be a rabbit hole.

I tend to have most of my workflows set up as scripts that can run locally in a _scripts directory. I've also started to lean on Deno if I need anything more complex than I'm comfortable with in bash (even bash on Windows) or powershell, since it executes .ts directly and can refer directly to modules/repos without a separate install step.

This may also leverage docker (compose) to build/run different services depending on the stage of the action. Sometimes that means creating "builder" containers with mount points for src and output, to build the project for different OSes, etc. Docker + QEMU allows for some nice cross-compile options.

The less I rely on the GitHub Actions environment, the happier I am... the main points of use are checkout, the deno runtime, release-please, and uploading assets in a release.

It sucks that the process is less connected and slow, but ensuring as much as reasonable can run locally goes a very long way.


I just use the fact that any action run can trigger a webhook.

The action does nothing other than trigger the hook.

Then my server catches the hook and can do whatever I want.
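
(So the only step the workflow runs is essentially a notification; something like the following, where the hook URL, token, and payload shape are illustrative rather than anyone's real setup:)

```
# Entire CI job: tell the build server what to build and let it take over.
curl -fsS -X POST "$BUILD_HOOK_URL" \
  -H "Authorization: Bearer $BUILD_HOOK_TOKEN" \
  -d "repo=$GITHUB_REPOSITORY" \
  -d "sha=$GITHUB_SHA" \
  -d "ref=$GITHUB_REF"
```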


I wish I had the courage to run my own CI server. But yes, I think your approach is the best for serious teams that can manage more infrastructure.

I am embarrassed that I didn't think to do this. Thank you :)

I was doing something similar when moving from Earthly, but I have since moved to Nix to manage the environment. It is a lot better of a developer experience and faster! I would check out an environment manager like Nix/Mise etc. so you can have the same tools locally and on CI.

Yeah, images seem to work very well as an abstraction layer for most CI/CD users. It's kind of unfortunate that they don't (can't) fully generalize across Windows and macOS runners as well, though, since in practice that's where a lot of people start to get snagged by needing to do things in GitHub Actions versus using GitHub Actions as an execution layer.

I’ve VNCed into CI to debug selenium tests failing because of platform font and scrollbar rendering. I never really thought about doing that locally in a docker container, but it definitely wouldn’t be convenient to always run those tests that way. I guess having the option would sort of simplify debugging, but I’d still have to VNC into the docker container, I think.

Me, too, though getting the trusted publisher NPM settings working didn't help with this. But it does help with most other CI issues.

Most of the npm modules I've built are fortunately pretty much just feature complete... I haven't had to deal with that in a while...

I do have plans to create a couple libraries in the near future so will have to work through the pain(s)... also wanting to publish to jsr for it.


So you've implemented GitLab CI in GitHub... We used to do this in Jenkins like 7 years ago.

https://github.com/nektos/act

Lets you run your actions locally. I've had significant success with it for fast local feedback.
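
(Typical usage is something like the line below; the job name is a placeholder, and the catthehacker image is the commonly recommended "medium" runner image, since act's default image is very minimal.)

```
# Run the "test" job as it would run on a push, with a fuller runner image.
act push -j test -P ubuntu-latest=catthehacker/ubuntu:act-latest
```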


I tried this five years ago back when I was an engineer on the PyTorch project, and it didn't work well enough to be worth it. Has it improved since then?

It works well enough that I didn’t realize this wasn’t first party till right now.

It works, but there are a fair amount of caveats, especially for someone working on things like PyTorch: the runtime is close but not the same, and its support for certain architectures etc. can create annoying bugs.

For me, no. I spent days trying to get it to recreate a production environment workflow. It is too different from production.

It has. It's improved to work with ~75% of steps. Fast enough to be worth trying before a push.

I tried this recently and it seems like you have to make a lot of decisions to support Act. It in no way "just works", but instead requires writing actions knowing that they'll run on Act.

I tried act, on the surface it seems like a godsend. Not until you try to use it do you realize it's almost impossible to recreate any moderately complex workflow.

You HAVE to run it against a container, so if you're using self hosted runners your environment may not match at all.


It's insane to me that being able to run CI steps locally is not the first priority of every CI system. It ought to be a basic requirement.

I've often thought about this. There are times I would rather have CI run locally, and use my PGP signature to add a git note to the commit. Something like:

``` echo "CI passed" | gpg2 --clearsign --output=- | git notes add -F- ```

Then CI could check git notes and check the dev signature, and skip the workflow/pipeline if correctly signed. With more local CI, the incentive may shift to buying devs fancier machines instead of spending that money on cloud CI. I bet most devs have extra cores to spare and would not mind having a beefier dev machine.
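
(The verifying side could be a sketch like this; it only checks that the note carries a good signature from something in the keyring, so in practice you'd also want to pin the expected key fingerprint:)

```
# Skip the remote pipeline if the commit carries a valid, signed "CI passed" note.
if git notes show HEAD 2>/dev/null | gpg2 --verify 2>/dev/null; then
  echo "Commit already CI-verified locally; skipping pipeline."
  exit 0
fi
```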


I think this is a sound approach, but I do see one legitimate reason to keep using a third-party CI service: reducing the chance of a software supply chain attack by building in a hardened environment that has (presumably) had attention from security people. I'd say the importance of this is increasing.

"Works on my machine!"

This goes against every incentive for the CI service provider

Not necessarily. For example, Buildkite lets you host your own runners.


> i.e. the lack of a tight feedback loop.

Lefthook helps a lot https://anttiharju.dev/a/1#pre-commit-hooks-are-useful

Thing is that people are not willing to invest in it due to bad experiences with various git hooks, but there are ways to have it be excellent


Yeah, I'm one of those people who seems to consistently have middling-to-bad experiences with Git hooks (and Git hook managers). I think the bigger issue is that even with consistent developer tooling between both developer machines and CI, you still have the issue where CI needs to do a lot more stuff that local machines just can't/won't do (like matrix builds).

Those things are fundamentally remote and kind of annoying to debug, but GitHub could invest a lot more in reducing the frustration involved in getting a fast remote cycle set up.


GitHub could invest a lot more in actions for sure. Even just in basic stuff like actions/checkout@v6 being broken for self-hosted runners.

But very often the CI operations _are_ the problem. It's just YAML files with unlimited configuration options that have very limited documentation, without any type of LSP.

We need SSH access to the failed instances so we can poke around and iterate from any step in the workflow.

Production runs should be immutable, but we should be able to get in to diagnose, edit, and retry. It'd lead to faster diagnosis, resolution, and fixing.

The logs and everything should be there for us.

And speaking of the logs situation, the GHA logs are really buggy sometimes. They don't load about half of the time I need them to.


I wrote something recently with webrtc to get terminal on failure: https://blog.gripdev.xyz/2026/01/10/actions-terminal-on-fail...


This is one of the big problems we solved with the RWX CI platform (RWX.com). You can use ‘rwx run’ and it automatically syncs your local changes, so no need to push — and with our automated caching, steps like setting up the environment cache hit so you don’t have to execute the same stuff over and over again while writing your workflow. Plus with our API or MCP server you can get the results directly in your terminal so no need to open the UI at all unless you want to do some in-depth spelunking.

I’ve contemplated building my own CI tool (with a local runner), and the thing is, if you assume “write a pipeline that runs locally but also on push”, then the feature depth is mostly about queuing, analyzing output, and (often left off, but IMO important) charting telemetry about the build history.

Most of these are off the shelf, at least in some programming languages. It’s the integrations and the overmanagement where a lot of the weight is.


I think you described Jenkins, which is infinitely better than GitHub runners.

Jenkins has its own set of issues. The theory behind GHA is you *should* be able to keep everything in git and not need another service with its own abstractions.

But Actions just screws this up. I consider them both to be equally bad.

I'm almost considering n8n to replace both but that will just lead to more problems.


All my Jenkins builds are between 1 and 3 lines depending on readability:

Clone

Cd $clonedFolder

./build.sh

Add an archive step, all set.

I will say even building and running unit tests in Jenkins isn’t bad, but if you aren’t careful it can get messy.

Jenkins can be a super simple build tool. Strong-arming it into doing all the things starts to get messy quickly. Build, run unit tests, archive. I’m not even big on like, building on push. Nothing wrong with it, I’m just big on keeping things as simple as possible. GHA is just a ball of complexity.


I've never used gh workflow run, but I have used the GitHub API to run workflows and wanted to show the URL. I had to have it make another call to get the workflow runs and assume the last run is the one with the correct URL. This would obviously not work correctly if there were multiple run requests at the same time. Maybe some more checking could detect that, but it works for my purposes so far.

Does the metadata in the further call not identify the branch/start time/some other useful info that could help disambiguate this? (honest question)

I wonder what prevents a GH action from connecting to your VPN (Wireguard is fine) and posting tons of diagnostics right onto your screen, and then, when something goes badly wrong or a certain point is reached, just polling an HTTP endpoint for shell commands to execute.

I mean, I understand it would time out eventually. But it may be enough time to interactively check a few things right inside the running task's process.

Of course this should only happen if the PR contains a file that says where to connect and when to stop for interactive input. You would only push such a file when an action is misbehaving, and you want to debug it.

I understand that it's a band-aid, but a band-aid is better than the nothing which is available right now.


I’ve never used Nix and frankly am a sceptic, but can it solve this problem by efficiently caching steps?

I think it's possible to both think GitHub Actions is an incredible piece of technology (and an incredible de facto public resource), while also thinking it has significant architectural and experiential flaws. The latter can be fixed; the former is difficult for competitors to replicate.

(In general, I think a lot of criticisms of GitHub Actions don't consider the fully loaded cost of an alternative -- there are lots of great alternative CI/CD services out there, but very few of them will give you the OS/architecture matrix and resource caps that GitHub Actions gives every single OSS project for free.)


MIT was already using Python by 2009[1]; I think it's been one of the "standard" teaching languages for well over a decade at this point.

(By most metrics, Python became "big" in the mid-late 2000s, which is why the Python 3 transition was so painful.)

[1]: https://www.wisdomandwonder.com/link/2110/why-mit-switched-f...

