We use a combination of AWS autoscaling and Nix to make our CI pipeline bearable.
For autoscaling we use terraform-aws-github-runner, which brings up ephemeral AWS instances when CI jobs are queued on GitHub. Machines are destroyed after 15 minutes of inactivity, so they are always fresh and clean.
For defining build pipelines we use Nix. It is used both for building the various components (C++, Go, JS, etc.) and for running tests. This ensures that any developer on the team can do exactly what the CI is doing. It also caches build outputs in an S3 bucket, so components that don't change between PRs don't get rebuilt and re-tested.
It was a bit of a pain to set up (and occasionally a pain to maintain), but overall it's worth it.
For copy-pasting text to/from the terminal, I prefer the Mac shortcut Cmd+C. It doesn't work out of the box on Linux (you have to use Ctrl+Shift+C, because Ctrl+C sends SIGINT). But there's a simple way to make Cmd+C work universally across all apps: rebind Cmd+C to send Ctrl+Insert and Cmd+V to send Shift+Insert. It turns out these alternative keybindings work everywhere (browsers, GUI apps, the terminal, etc). I use keyd to do that in software, but some QMK keyboards can do the rebinding on the keyboard itself.
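From memory, a keyd config for this would look roughly like the sketch below (placed in `/etc/keyd/default.conf`; keyd exposes a layer per modifier, and `C-`/`S-` prefixes mean Ctrl/Shift). The exact syntax should be checked against keyd's man page:

```
[ids]
*

[meta]
# Cmd+C -> Ctrl+Insert (copy), Cmd+V -> Shift+Insert (paste)
c = C-insert
v = S-insert
```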
I don't remember doing any special config in iTerm or installing a nonstandard package (I don't believe I did), but on macOS I often find it easier to just pipe the output I want into the `pbcopy` command to capture it in the clipboard, rather than trying to highlight what may be a long output.
I'm not involved in the project in any way, but I can probably give an explanation.
This is a RISC-V virtual machine that supports the rv32im instruction set (the bare minimum plus multiplication). You can compile and run programs on it as you would on a typical microcontroller.
The "ZK" thing means that you can pass program code + data to this virtual machine, and as a result of execution get some output and a short sequence of bytes that allow the "other side" to verify that the result of program execution is correct without having to re-execute the program. This verification is computationally cheap. In order to do the verification the "other side" only needs that sequence of bytes and a hash of original code+data.
Blockchains use this in the context of achieving "byzantine consensus", especially in cases where multiple systems that lack mutual trust are involved. Think, for example, of relaying information that's been computed on one blockchain to another. If both blockchains can prove their state transitions with such a virtual machine, it is possible to build a sort of trusted "event queue" between them. There are caveats, of course (rollbacks can happen), so it's not a silver bullet.
Not sure how this applies to day-to-day software, but what comes to mind is that it could serve some of the cases where TPMs (hardware trust modules) are traditionally used. A TPM assumes you have no means to break the hardware, and so it can attest to a certain computation by signing the result with a baked-in key.
> pass program code + data to this virtual machine
If you have to pass the data, then how can this possibly help solve, as others have said, scenarios that prove something about data without revealing the data?
To verify the proof, you only need a "commitment" to the fact that you passed specific input data. This can take the form of a cryptographic hash.
Imagine that you represent the program and data as a flat byte array (the typical case for RISC-V). The program contains a prologue at the very start (0x0) that calculates the hash of the rest of the memory, checks that it equals a value hardcoded right after the prologue, and panics if it doesn't match. Then, if you can prove that the VM followed every step exactly as the hardware architecture prescribes, it's impossible for the result to come out any other way. Now you only need to send the other side this prologue together with the hash. You don't have to reveal the rest of the memory.
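A toy sketch of that commitment check in Python (this models only the hash comparison the prologue performs; the byte strings and names are made up, and it is of course not a real ZK proof):

```python
import hashlib

# prover side: the full memory image (program body + secret inputs),
# which is never sent to the verifier
private_memory = b"program body and secret inputs"

# the hash hardcoded right after the prologue in the memory image
hardcoded_hash = hashlib.sha256(private_memory).digest()

# what the prover actually sends: the prologue code and the hash
sent = {"prologue": b"hash-check code", "hash": hardcoded_hash}

# the check the prologue performs inside the (proven) VM execution
def prologue_check(memory: bytes, expected: bytes) -> None:
    if hashlib.sha256(memory).digest() != expected:
        raise RuntimeError("panic: memory does not match the commitment")

prologue_check(private_memory, sent["hash"])  # passes; tampered memory would panic
```

The execution proof then convinces the verifier that this check ran and passed, without revealing `private_memory`.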
Of course I'm simplifying a bit, but I hope the idea is clear.
You don't _have_ to; I think the parent poster is just giving an example of how this can be used.
The "Zero Knowledge" part is that you can tell me "for this particular program code, I know an input that gives an output of 'foobar'" and I can be convinced that you're telling me the truth without seeing what that input actually is.
Let's put it this way: if you can break it and are willing to commit a crime, you can earn a lot of money. I'm personally not equipped to judge the level of security, because I'm not a cryptography researcher. What I understand is that the overall ZK cryptography space has been around for a long time, and the basic properties are well researched. I tried to read the PLONK paper for entertainment purposes, and it's quite easy to understand.
If you want to prove a program, you need to convert it to what is called an "arithmetic circuit". This is a clever way of saying "a system of polynomial equations". It's as if you were converting the code to logic gates, except that here you use arithmetic: addition, multiplication, raising to a certain power, etc. This process is called "arithmetization".
The proof calculation involves folding this system of equations in various ways and collecting a "witness" (the values of all the intermediate variables). Not sure if I'm explaining this correctly, but it's probably in the right direction at least.
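As a toy illustration (my own sketch, not any particular proof system): the function y = x^3 + x + 5 can be flattened into one-operation-per-line constraints, and the "witness" is the set of intermediate values that satisfies all of them:

```python
# flatten y = x^3 + x + 5 into single-multiplication constraints
def make_witness(x):
    s1 = x * x          # constraint 1: s1 = x * x
    s2 = s1 * x         # constraint 2: s2 = s1 * x
    y = s2 + x + 5      # constraint 3: y = s2 + x + 5
    return {"x": x, "s1": s1, "s2": s2, "y": y}

# a naive (non-ZK, non-succinct) "verifier": re-check every constraint
def satisfies(w):
    return (w["s1"] == w["x"] * w["x"]
            and w["s2"] == w["s1"] * w["x"]
            and w["y"] == w["s2"] + w["x"] + 5)

w = make_witness(3)
print(w["y"], satisfies(w))  # 35 True
```

Real systems check these constraints over a finite field via polynomial identities rather than re-evaluating them one by one; that is where the succinctness comes from.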
The problem with all ZK proofs is that the last step, turning the witness into a proof, is very computationally expensive. What takes milliseconds to run on the CPU can take many hours to prove. A lot of research is focused on clever mathematical tricks that speed up the proofs while keeping the risk of circuit compromise low enough. As you may guess, the more bleeding-edge the research, the less peer-reviewed it is.
So I'd say that "it depends".
As for DRM, probably not. At least not in the way the companies installing the DRM would want it to work. They likely want you to be unable to decrypt the content at all outside of a particular chip, and to watermark the video on top of that to be sure you're not screen-capturing it.
UPD: there's a decent free intro course on modern ZK cryptography: https://zkiap.com/
Proving performance is improving at a rapid pace, though, something like 10X/year. And converting to circuits is automatic now; there are compilers that can do it for arbitrary Rust code.
This is the software the OP is talking about: https://www.startallback.com/. If you check the download URL, the files are actually hosted on a CDN, not on the site itself.
So it looks like as long as the site refers to the file in any way, it is still flagged. I think there was a recommendation to compartmentalize at some point, but maybe that doesn't help anymore.
I write a lot of bash, and can kinda agree that it's full of footguns. Most of them are not even in bash itself but in the Unix tools, which you have to use due to the lack of a standard library.
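One classic footgun as a sketch (word splitting of unquoted expansions, assuming the default IFS):

```shell
#!/usr/bin/env bash
f='my file.txt'

set -- $f            # unquoted: split on whitespace into two words
unquoted_count=$#

set -- "$f"          # quoted: stays a single argument
quoted_count=$#

echo "unquoted: $unquoted_count args, quoted: $quoted_count arg"
```

This is exactly why `rm $file` on a filename with spaces can delete the wrong things.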
I have used yabai, but unfortunately you need to disable SIP so it can inject some custom functionality into the Dock.app process. I forget the specifics, but last I checked that hadn't changed, so it was a no-go for me.
I use Yabai without disabling SIP. It basically means some of the functionality won't work, but the main tiling stuff still works great. For me, the main feature that doesn't work without disabling SIP is the commands for manipulating spaces and moving windows between them, but I've gotten used to just using one workspace per monitor and hiding/showing windows as needed anyway.
There is an alternative: you can use a MIME envelope to store notes. This way you'll have a plaintext container and a portable way to attach files.
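A minimal sketch of that idea using Python's standard email library (the note content and filename here are made up):

```python
from email import message_from_bytes
from email.message import EmailMessage

# build a "note" as a MIME envelope: plaintext body + one attachment
msg = EmailMessage()
msg["Subject"] = "Trip notes"
msg.set_content("Flight at 9:40.\nHotel: see attached booking.\n")
msg.add_attachment(b"fake pdf bytes", maintype="application",
                   subtype="pdf", filename="booking.pdf")

raw = bytes(msg)  # the portable plaintext container you store on disk

# reading the note back is just MIME parsing
parsed = message_from_bytes(raw)
print(parsed["Subject"])
for part in parsed.walk():
    if part.get_content_type() == "text/plain":
        print(part.get_payload())
```

Any mail client, `mutt`, or a few lines of code in most languages can open such a file, which is what makes the format portable.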