> 10. The FOSS axiom "More Eyes On The Code" works, but only if the "eyes" are educated.
One thing that could help with this is pointing an LLM at all these foundational repositories with a prompt like "does this code change introduce any security issues?".
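Roughly what I have in mind, as a sketch: grab each new commit's diff and hand it to whatever model you have access to. The ask_llm() function below is a stub, since the actual call depends entirely on the model/provider you use.

    import subprocess

    QUESTION = "Does this code change introduce any security issues?"

    def ask_llm(prompt: str) -> str:
        # Stub: wire this up to whichever model/API you actually use.
        raise NotImplementedError

    def review_commit(repo: str, commit: str) -> str:
        # Grab the full patch for one commit and put it after the question.
        diff = subprocess.run(
            ["git", "-C", repo, "show", "--patch", commit],
            capture_output=True, text=True, check=True,
        ).stdout
        return ask_llm(f"{QUESTION}\n\n{diff}")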
AM/PM is bad, but I have a GE microwave which requires you to also set _the date_ when you're setting the clock. How could somebody think that was a good idea? :)
Packages had an incident two days ago as well: https://www.githubstatus.com/incidents/sn4m3hkqr4vz. I noticed it when a Terraform provider download was failing, citing a 404 from objects.githubusercontent.com.
> The easiest practice to implement for peak Git efficiency is to stick to a subset of commands
This has been my experience as well.
Great article, thanks! I've been using essentially this same subset of commands for many years, and it's worked extremely well for me: does everything I need/my team needs, and avoids complication. I'm glad to have this as a reference I can point people to when they ask for git advice.
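For reference, the kind of subset I mean day to day (my own list, not necessarily exactly the article's):

    git status
    git add -p
    git commit
    git pull --rebase
    git push
    git switch -c some-topic-branch   # branch name is just an example
    git log --oneline --graph
    git diff
    git stash
    git rebase -i origin/main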
Yes, there are threshold cryptography schemes with "distributed key generation" [1] in which the parties end up holding shares but the full secret is never known to any party. Then, to your point about "the only time the key was known was when the parties reached quorum after the fact": in these schemes, some threshold of the parties can cooperate to compute a function of the secret (e.g. a signature, or a ciphertext) without any of them ever knowing the secret.
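As a toy illustration of that last point, here is a sketch of threshold evaluation over Shamir shares. The parameters are deliberately tiny and a trusted dealer stands in for the DKG, so it only shows the "quorum computes g^secret without reconstructing the secret" step, not a real protocol.

    import random

    # Toy group: g = 2 generates the order-q subgroup of Z_p* with p = 2q + 1.
    # Illustration-sized numbers, nowhere near real cryptographic parameters.
    P, Q, G = 23, 11, 2
    T, N = 3, 5                      # any 3 of 5 parties form a quorum

    def deal_shares(secret):
        # Shamir-share `secret` (mod Q) among parties 1..N with threshold T.
        coeffs = [secret] + [random.randrange(Q) for _ in range(T - 1)]
        return {i: sum(c * pow(i, k, Q) for k, c in enumerate(coeffs)) % Q
                for i in range(1, N + 1)}

    def lagrange_at_zero(ids):
        # Lagrange coefficients (mod Q) for interpolating f(0) from `ids`.
        out = {}
        for i in ids:
            num = den = 1
            for j in ids:
                if j != i:
                    num = num * (-j) % Q
                    den = den * (i - j) % Q
            out[i] = num * pow(den, Q - 2, Q) % Q   # modular inverse via Fermat
        return out

    secret = 7                       # with a real DKG, no party ever holds this
    shares = deal_shares(secret)

    # Each quorum member reveals only g^(its share); combining those partials
    # with Lagrange exponents yields g^secret, yet the secret itself is never
    # reconstructed anywhere.
    quorum = [1, 3, 5]
    lam = lagrange_at_zero(quorum)
    combined = 1
    for i in quorum:
        partial = pow(G, shares[i], P)
        combined = combined * pow(partial, lam[i], P) % P

    assert combined == pow(G, secret, P)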
If I've already reviewed a PR and the author makes further changes, I definitely prefer to review an add-on commit. If the history is rewritten/rebased, then IME the entire PR needs to be re-reviewed from scratch. If we're talking about a <10 line change, then, by all means, rebase to your heart's content. With anything more complicated than that, rebasing a branch that's already been looked at can be disruptive and I'd strongly recommend against it (though squash-and-merge after review is fantastic).
I didn't actually read the linked article but I see it is from GitLab. GitLab makes it easy to view the diff between versions of an MR even if it includes rebases.
> then IME the entire PR needs to be re-reviewed from scratch
Why? What's the difference? You can still diff the previous version of the PR with the current version and end up with the same thing that an add-on commit would give you, but ready to merge as-is.
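For example, assuming the pre-rebase tip is still reachable (a saved ref, the reflog, or whatever the forge recorded); old-tip and new-tip below are placeholders for the two branch heads:

    # compare the two versions of the series commit by commit
    git range-diff origin/main old-tip new-tip

    # or just diff the two end states directly
    git diff old-tip new-tip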
Does your project have a policy of compiling and running (and e.g. tests passing) at EVERY commit?
I can't imagine being able to easily enforce that without asking people to edit the right commit in their series. It's maybe more difficult with the GitLab/GitHub interfaces, where changing the middle of a sequence of commits doesn't render very well, but in email-based workflows it works fine.
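For what it's worth, git can check that during the rewrite itself; a sketch, assuming a make-based project:

    # replay the branch on top of trunk, running the build/tests after every
    # commit; the rebase stops at the first commit that breaks
    git rebase -i --exec "make && make test" origin/main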
On the other hand, being able to bisect the project without worrying that some unrelated breakage will send you down the wrong branch of the bisect is an enormous advantage, and it costs only the minimal effort of keeping track of a modified (rebased) commit in the middle of a series under review.
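Concretely, with a history where every commit builds and passes tests, bisect can drive itself (the known-good tag below is a placeholder):

    git bisect start
    git bisect bad HEAD
    git bisect good v1.2.0        # placeholder for a known-good revision
    git bisect run make test      # exit status marks each step good or bad
    git bisect reset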
I don't see how this solves the problem of patches that fix up previous patches. This workflow doesn't require using merges, but it still introduces situations (whether or not you use --first-parent) where patches fix previous patches, leaving stretches of history where the code doesn't compile, doesn't run, or doesn't pass tests.
Changes during review should be added as additional commits. When the review is complete, the branch should be rebased and merged (git merge --ff-only my-reviewed-branch). This leads to a clean commit history and an easy review process.
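A sketch of that flow, assuming the trunk branch is called main:

    git checkout my-reviewed-branch
    git rebase main                         # replay the reviewed commits onto trunk
    git checkout main
    git merge --ff-only my-reviewed-branch  # fast-forward only, so history stays linear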
In your view if a PR is not rebased but trunk is merged into it, does that warrant a full re-review? The end result is functionally the same as a rebase.
More or less, though LXC (and LXD using the LXC backend) are focused on system containers rather than single services: basically lightweight VMs with a full init and the usual set of system daemons.
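For example, with LXD (the image alias and container name here are just for illustration):

    # launch a Debian system container and see that it runs a full init
    lxc launch images:debian/12 c1
    lxc exec c1 -- systemctl list-units --type=service --state=running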
I recently got an external monitor for my work Macbook. I plugged it in and soon found out that closing the laptop doesn't put it to sleep anymore. I can kind of see why somebody would want this behavior in some situations. I can't at all see why this would be the default, or why there would be no way to toggle the behavior.
> I continue to be disappointed with Apple's desktop experience.
> and soon found out that closing the laptop doesn't put it to sleep anymore
I'm fairly sure this has never been the case, or at least not for a very long time. As a long-time MacBook user with an external display, the expected behavior when closing the lid is for the laptop to keep running, with the external monitor as the main display.
Why? Maybe if there's an external keyboard and mouse, but even then it seems bad. As the default it seems counterproductive -- closing a laptop should do the same thing regardless of peripherals, unless you specify otherwise.
I'm a fan of consistent behavior, but actually I'm on the side of not sleeping with a display plugged in. Going to sleep when there are no peripherals plugged in is only a sane default because there's no way to actually use the device with no access to the keyboard/mouse/screen.
OTOH, one can still reasonably use a closed laptop if it has a display plugged in, so closing the lid no longer implies an intent to stop using it. Because of this alone, going to sleep in this context might not be the most sane default.
Some examples of when I've personally closed a laptop with the screen plugged in without wanting it to sleep:
* Working at a desk where the laptop doesn't fit with the lid open
* Starting a video when connected to a TV in a dark room, where the laptop's screen is a distracting source of light
Then why is anyone complaining about it? I mean, it seems like a silly default to me, but if that's the expected behavior then apparently I'm the one with the wrong expectations.
Great work on the book, and thank you for writing about the self-publishing process [1]. I'm in the process of writing a book, and your writing has addressed several things that have been on my mind.