Hacker News | new | past | comments | ask | show | jobs | submit | jbergknoff's comments

> 10. The FOSS axiom "More Eyes On The Code" works, but only if the "eyes" are educated.

One thing that could help with this is if somebody points an LLM at all these foundational repositories, prompted with "does this code change introduce any security issues?".


Not sure why an LLM would be better than existing static analysis tools. Many projects I have worked on run static vulnerability analysis on PRs.


I found the black hat!


Yeah, I found it very interesting how McCullough (Masters of Rome) idolizes Caesar and holds Cicero in contempt, and Harris is exactly the opposite.


It’s funny how political figures from over two thousand years ago can remain as divisive now as they were at the time.


AM/PM is bad, but I have a GE microwave which requires you to also set _the date_ when you're setting the clock. How could somebody think that was a good idea? :)


Packages had an incident two days ago, also: https://www.githubstatus.com/incidents/sn4m3hkqr4vz. I noticed it when a Terraform provider download was failing, citing a 404 from objects.githubusercontent.com.


> The easiest practice to implement for peak Git efficiency is to stick to a subset of commands

This has been my experience as well.

Great article, thanks! I've been using essentially this same subset of commands for many years, and it's worked extremely well for me: does everything I need/my team needs, and avoids complication. I'm glad to have this as a reference I can point people to when they ask for git advice.


Yes, there are threshold cryptography schemes with "distributed key generation" [1] in which the parties end up holding shares but the full secret is never known to any party. Then, to your point about "the only time the key was known was when the parties reached quorum after the fact": in these schemes, some threshold of the parties can cooperate to compute a function of the secret (e.g. a signature, or a ciphertext) without any of them ever knowing the secret.

FROST is one example of such a threshold scheme, for computing Schnorr signatures: https://eprint.iacr.org/2020/852.pdf

[1] https://en.wikipedia.org/wiki/Distributed_key_generation
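To make the "shares but no full secret" idea concrete, here's a minimal sketch of Shamir secret sharing over a prime field, the building block underneath these threshold schemes. Note the big caveat: this toy version has a dealer who knows the secret, which is exactly what a real DKG (as in FROST) avoids; it only illustrates the share/reconstruct mechanics. All names and parameters are invented for illustration.

```python
# Toy Shamir secret sharing -- illustrative only, not production crypto.
import random

P = 2**127 - 1  # a Mersenne prime; real schemes use a curve group order

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the point (x_i, f(x_i)) on a random degree-(k-1) polynomial
    # whose constant term is the secret.
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 123456789
shares = make_shares(secret, k=3, n=5)
assert reconstruct(shares[:3]) == secret   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == secret
```

In a DKG, each party runs something like `make_shares` on its own random contribution and distributes those shares, so the combined secret (the sum of contributions) exists only implicitly; no dealer ever holds it.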


A similar scheme we use in drand is this: https://www.researchgate.net/publication/225722958_Secure_Di...


If I've already reviewed a PR and the author makes further changes, I definitely prefer to review an add-on commit. If the history is rewritten/rebased, then IME the entire PR needs to be re-reviewed from scratch. If we're talking about a <10 line change, then, by all means, rebase to your heart's content. With anything more complicated than that, rebasing a branch that's already been looked at can be disruptive and I'd strongly recommend against it (though squash-and-merge after review is fantastic).


I didn't actually read the linked article but I see it is from GitLab. GitLab makes it easy to view the diff between versions of an MR even if it includes rebases.


How does that feature work in GitLab? I also use it along with a rebase policy at work and sometimes have that issue.



GitHub does the same. There is a compare button that appears after a force (with-lease) push of a rebased branch.


Some code review systems, like gerrit, actually encourage this workflow: you can easily view diffs between commit versions ('patchsets').


Are there CR tools that don't display the diffs between rebases? Seems like a tooling issue more than anything.


> then IME the entire PR needs to be re-reviewed from scratch

Why? What's the difference? You can still diff the previous version of the PR with the current version and end up with the same thing that an add-on commit would give you, but ready to merge as-is.


Does your project have a policy of compiling and running (and e.g. tests passing) at EVERY commit?

I can't imagine being able to easily enforce that without asking people to edit the correct part of their commit. It's maybe more difficult with gitlab/github interfaces where changing the middle of a sequence of commits will not render very well, but in email based workflows it works fine.

On the other hand, being able to bisect a project without having to worry about whether an unrelated issue is causing you to traverse the wrong branch of the bisect is an enormous advantage compared to the minimal effort required of keeping track of a modified (rebased) commit in the middle of a set of commits under review.


You can now use --first-parent when you bisect to ensure bisect doesn't go into branches but stays on the main branch.

https://git-scm.com/docs/git-bisect#Documentation/git-bisect...
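A quick sketch of what that looks like in practice, using a throwaway repo (all file and commit names here are made up). On this linear toy history --first-parent changes nothing; its value shows up when the history contains merges, where it keeps bisect on the main line instead of descending into feature branches. Requires a reasonably recent git (--first-parent for bisect landed in 2.29).

```shell
set -eu
tmp=$(mktemp -d); cd "$tmp"
git init -q
git checkout -q -b main
git config user.email demo@example.com
git config user.name Demo

# Four commits; the "bug" file appears at commit 3.
for i in 1 2 3 4; do
  echo "$i" > "file$i"
  git add "file$i"
  if [ "$i" -eq 3 ]; then echo broken > bug; git add bug; fi
  git commit -qm "commit $i"
done

# HEAD~3 (commit 1) is known good, HEAD (commit 4) is known bad.
git bisect start --first-parent HEAD HEAD~3 >/dev/null
# Let bisect drive itself: the script exits 0 (good) when "bug" is absent.
git bisect run sh -c '! test -f bug' | grep -q "is the first bad commit"
git bisect reset >/dev/null
```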


I don't see how this solves the problem of patches which fix up previous patches. This workflow doesn't require using merges, but it will introduce situations (irrespective of whether you use --first-parent or not) where patches fix previous patches potentially leaving a gap where code doesn't compile, doesn't run, or doesn't pass tests.


> I can't imagine being able to easily enforce that without asking people to edit the correct part of their commit.

I imagine something something githooks. However it might be enforced, it seems like a miserable way to develop.


You want to ask the author to change the commit you are interested in and then review the diff to that commit.

This is of course mostly valuable if you don't squash commits on merge. Otherwise, the extra rebase work isn't that valuable.


Changes should be added with additional commits. When the review is complete, the code should be rebased and merged (git merge --ff-only my-reviewed-branch). This leads to a clean git commit history and an easy review process.
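The workflow above can be sketched end to end on a throwaway repo (branch and file names here are invented for the demo): the branch is rebased onto trunk after approval, so the --ff-only merge fast-forwards and the resulting history is linear with no merge commit.

```shell
set -eu
tmp=$(mktemp -d); cd "$tmp"
git init -q
git checkout -q -b main
git config user.email demo@example.com
git config user.name Demo

echo base > base.txt; git add base.txt; git commit -qm "base"

# Feature branch goes up for review.
git checkout -q -b my-reviewed-branch
echo feature > feature.txt; git add feature.txt; git commit -qm "feature"

# Meanwhile, trunk moves on.
git checkout -q main
echo trunk > trunk.txt; git add trunk.txt; git commit -qm "trunk change"

# After approval: rebase the branch onto trunk, then fast-forward merge.
git checkout -q my-reviewed-branch
git rebase -q main
git checkout -q main
git merge -q --ff-only my-reviewed-branch
git log --oneline   # three commits, linear, no merge commit
```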


In your view if a PR is not rebased but trunk is merged into it, does that warrant a full re-review? The end result is functionally the same as a rebase.


I'm a big fan of using containers to distribute and run tools. It's an underappreciated use case. I wrote about its benefits (and drawbacks) a few months ago: https://jonathan.bergknoff.com/journal/run-more-stuff-in-doc....

Subuser looks interesting, nice work! I love to see progress in this space.


How does lxd fit into this space - similar to docker it's also based on cgroups and namespaces?


More or less, though LXC (and LXD using the LXC backend) are focused on system containers rather than single services. Basically lightweight VMs with a full init and the usual set of system daemons.


I recently got an external monitor for my work Macbook. I plugged it in and soon found out that closing the laptop doesn't put it to sleep anymore. I can kind of see why somebody would want this behavior in some situations. I can't at all see why this would be the default, or why there would be no way to toggle the behavior.

> I continue to be disappointed with Apple's desktop experience.

Same here.


> and soon found out that closing the laptop doesn't put it to sleep anymore

I'm fairly sure this has never been the case, or at least not for a very long time. As a long time Macbook user with external display, the expected behavior of closing the lid is to keep the laptop running as if the external monitor is the main display.


Isn’t that called clamshell mode? If so it’s been that way for years. Easy to change IIRC also.


At least going back to the original MacBook. The one with the 10GB hard drive, around 2001-ish, did this.

It's a feature, not a bug.


For myself, I expect to be able to close the laptop and continue using the external display.


Why? Maybe if there is an external keyboard and mouse, but even then it seems bad. As the default it seems counterproductive -- closing a laptop should do the same thing regardless of peripherals, unless you specify otherwise.


I'm a fan of consistent behavior, but actually I'm on the side of not sleeping with a display plugged in. Going to sleep when there are no peripherals plugged in is only a sane default because there's no way to actually use the device with no access to the keyboard/mouse/screen.

OTOH, one can still reasonably use a closed laptop if it has a display plugged in, so closing the lid no longer implies an intent to stop using it. Because of this alone, going to sleep in this context might not be the most sane default.

Some examples of when I've personally closed a laptop with the screen plugged in without wanting it to sleep:

* Working at a desk where the laptop doesn't fit with the lid open

* Starting a video when connected to a TV in a dark room, where the laptop's screen is a distracting source of light


I'm not saying it shouldn't be possible, just that it shouldn't be the default.


Mac laptops have done this for at least 18 years, probably longer. That's the way they're designed, and how they are used by many people.


Then why is anyone complaining about it? I mean, it seems like a silly default to me, but if it's expected then I'm on the wrong side of what to expect it seems.


For thermal reasons this used to not be possible, and probably still isn’t on older hardware.


Power + mouse + display = docking station.


You can turn this off:

System Settings -> Power Adapter -> first option

I've run into https://forums.macrumors.com/threads/16-is-hot-noisy-with-an... so I need to use clamshell mode.


Great work on the book, and thank you for writing about the self-publishing process [1]. I'm in the process of writing a book, and your writing has addressed several things that have been on my mind.

[1] Especially https://www.jeffgeerling.com/blog/2016/self-publish-dont-wri...

