Hacker News | ethomson's comments

Thanks for everything, Cliff. I discovered _The Cuckoo’s Egg_ as a child, and was taken in. I wrote a book report on it... and then I wrote a book report on it the next year... and the next year...

At some point, I stopped trying to drag my pre-teen schoolmates along with me, but I still have my original hardback and re-read it regularly.

Your book taught me many things - perhaps most importantly, that one can educate about complex topics in engaging and understandable ways. And now I've landed in a job that focuses on security.

One of the things that I’ve been doing in this job is, well, trying to educate about complex topics in engaging and understandable ways. I’ve thought about your book in every blog post I’ve written lately.

Thanks, again.


This is delightful and I can't wait to try it out. Right now, the libgit2 project (https://github.com/libgit2/libgit2) has a custom HTTP git server wrapper that will throttle the responses down to a very slow rate. It's fun watching a `git clone` run at 2400 baud modem speeds, but it's also been incredibly helpful for testing timeouts, odd buffering problems, and other things that crop up in weird network environments.

I'd love to jettison our hacky custom code and use something off-the-shelf instead.
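(For the curious: the core of a throttling wrapper like this is tiny. This is not libgit2's actual code, just an illustrative sketch - chunk the response body and sleep between chunks so the output approximates a target byte rate.)

```python
import time

def throttled_chunks(data: bytes, bytes_per_sec: int, chunk_size: int = 64):
    """Yield chunks of `data`, sleeping between them so the overall
    rate approximates `bytes_per_sec` (2400 baud is roughly 300 bytes/sec)."""
    delay = chunk_size / bytes_per_sec
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]
        time.sleep(delay)
```

A server would write each chunk to the socket as it's yielded; the delay, not the chunking, is what makes the client-side timeout and buffering bugs shake out.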


Yes, that’s definitely the Copper Kettle.

More interesting than the food, though, is the background. For those who aren't familiar with Cambridge: that's King's College in the background, across the street, which many may know from the BBC's Carols from King's.


There’s no confusion about it referring to a dwelling. The confusion is about the _type_ of dwelling.

To quote Wikipedia:

> In American English, "cottage" is one term for such holiday homes, although they may also be called a "cabin", "chalet", or even "camp".

In other words, calling a multi-million pound property a “cottage” would rankle an American ear.


>> In American English, "cottage" is one term for such holiday homes, although they may also be called a "cabin", "chalet", or even "camp".

> In other words, calling a multi-million pound property a “cottage” would rankle an American ear.

It might rankle an American ear but this isn't America, it's Cornwall (Kernow as one of my 11th great grandmas, off of Padstow, would have called it). Cottage hereabouts does not mean a holiday home - they are called holiday homes.

I'll also note that here in en_GB land, the word camp also has multiple meanings and cottaging (the verb) does too. Be careful what you search for. Also please note that Kernow has its own language, which predates English, which is seeing a resurgence. It's one of the old Brythonic languages and Cornwall was once known as West Wales, but I digress.

I spend a great deal of time trying to keep up with the various en_* vagaries. The split of en_US from en_GB (very simplistic depiction) is still quite young and you probably speak a closer variety and with a more "authentic" accent of English than I do, when compared to say that which was spoken in C18 when it started to brachiolate.


In other words, calling a multi-million pound property a “cottage” would rankle an American ear.

I don't think so. There are tons of multi-million-dollar lakefront properties all over North America. People generally refer to these as cottages since they're:

1. seasonal

2. not the primary residence

3. often located near a body of water and/or away from big cities

4. intended for vacations with families and friends (or short-term rental for the same purpose)


Indeed... This usage reaches across the pond - check out how the Vanderbilts used the term here:

https://www.architecturaldigest.com/story/guide-to-vanderbil...

Le Carré's cottage is a splendid place, but it's not the Breakers.


5000 square ft... That's a mansion.


> In other words, calling a multi-million pound property a “cottage” would rankle an American ear

To this British ear.....

Plenty of cottages sell for multi-millions (including many in my village, unfortunately). Please take your semantics elsewhere.


I don't know why "American ear" was brought into this, even the OED has definitions that agree a cottage is small and this is... very not small.


To my "American ear" it just sounds like your typical limey understatement. It's the upper class twit equivalent of the American "humblebrag."


Product Manager for Vercel's storage products here. Today we announced three new storage products in beta; they're based on infrastructure that is provided by partners. There's a lot that goes into pricing products -- and our products are distinct from our partners' products. We have different roadmaps and will introduce different features as we continue development. So I don't want to make too many apples-to-oranges comparisons.

But I suspect that you might be comparing Upstash's per-command pricing for _regional_ requests ($0.20 per 100k) to Vercel KV's? In fact, Vercel KV is multi-region, so the more apt comparison is Upstash's pricing for _global_ requests ($0.40 per 100k).


I was fortunate enough to go in 2019, pre-covid, while I had a day off in Seattle. It was a great museum, and I’m disappointed that I won’t be able to go back.

I tweeted a bunch of photos: https://twitter.com/ethomson/status/1109880552360951810


I'm saddened to learn that it is no longer open to the public, but wanted to say thank you for sharing these photos. It makes me want to learn more about our industry's history.

What a truly handsome set of machines and I hope they find their way to another venue or become accessible to the public once more.


Neat stuff - certainly this problem crops up quite a lot where an internal server needs to get GitHub webhook data.

In the past, I've had good luck using a webhook proxy. I've mostly just used https://smee.io/ which is simple and lightweight, although it seems to be mostly abandonware at this point. I dockerized it so that it could be used in a Kubernetes cluster, which was very useful for my GitHub Actions build cluster: https://github.com/ethomson/smee-client

There's also Hookdeck, which I haven't used in production, but have played around with, and it seems conceptually the same, but can be made more Enterprisey. Whether that's a bug or a feature is probably up to you.


The proxy idea is interesting too. Does a webhook proxy entail a polling model for events? That is, does the private server have to poll the proxy to receive the webhook? I wanted the GitHub event to push to trigger actions on the private server.


No - your local server will still listen for webhooks, but they'll come from the proxy's client software.

Basically, you set up your GitHub webhook URL as the proxy server (for example, smee.io). Then you run a client on your local machine that connects to the proxy server. When a webhook is fired, it will be sent to the proxy, then delivered to the connected client, which will then pass it along as a webhook to whatever machine you've configured.
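With smee-client, for example, the setup looks roughly like this (the channel URL and local target are placeholders for your own):

```shell
# Point the GitHub webhook at a smee.io channel URL, then run the
# client locally; it relays each delivery to the target URL.
npx smee-client --url https://smee.io/your-channel --target http://localhost:3000/events
```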

There are disadvantages to having all this stuff running, of course, so I think that handling this at the networking layer, instead of putting a proxy in place just for webhooks, is an interesting strategy. Certainly, it sounds like the right solution if you're already using OpenZiti.


Thanks for the feedback! I'm one of the PMs for GitHub Actions, and I appreciate this. Thinking about Actions as a set of primitives that you can compose is very much how I think about the product (and I think the other PMs as well) so I'm glad that resonates.

We always welcome feedback, and we're continuing to invest in and improve the product, so I'm hopeful that we can address the features that you're missing.


Here's my ask:

* Setting up GHA is still a lot of "commit and hope for the best". I've resorted to having a sandbox repo just for experimentation/testing so that I don't overly pollute repos that I actually care about. It would be great to get more instrumentation to see what is going on.

* I have a monorepo for Dockerfiles. It's quite annoying that I have to have separate entries for different Dockerfiles in dependabot.yml; I should be able to specify /Dockerfile or /Dockerfile* as patterns for detection. The Dependabot configuration for GitHub Actions is a single entry, and it would be great to have the same for Dockerfiles.

* I quite like Step Security's Harden Runner but it does require more work/invocations to get this set up. Maybe GH can work with them to more closely incorporate said functionality?

* Make the cache bigger? I build a fair number of multi-arch containers and starting all of them at once tends to blow out the cache.

* Given the interest around sigstore and SBOMs, maybe incorporate native capabilities to sign artifacts and generate SBOMS?
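On the Dockerfile monorepo point above, the per-directory shape being described looks something like this (directory paths are illustrative) - one entry covers all workflows, but each Dockerfile location needs its own block:

```yaml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"              # one entry covers all workflow files
    schedule:
      interval: "weekly"
  - package-ecosystem: "docker"
    directory: "/images/app"    # but each Dockerfile directory...
    schedule:
      interval: "weekly"
  - package-ecosystem: "docker"
    directory: "/images/db"     # ...needs its own entry
    schedule:
      interval: "weekly"
```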


Thanks. The "commit and hope for the best" problem really resonates with me. There are two great projects that might provide some pain relief - nektos/act or rhysd/actionlint. But I agree that commit-to-validate is probably the best strategy at the moment, which is deeply unfortunate. This is an area that I intend to improve in the future.

As for the cache, we doubled it at the end of last year to 10GB. https://github.blog/changelog/2021-11-23-github-actions-cach..., but I can see how multi-arch images would be very large. Have you considered putting images into GitHub Container Registry instead of putting the layers into the cache? I'd love to understand if that is appropriate for your workflow, and if not, what the limitation there is.

Appreciate the rest of the feedback, I'll pass it along to the appropriate teams.


> Setting up GHA is still a lot of "commit and hope for the best". I've resorted to having a sandbox repo just for experimentation/testing so that I don't overly pollute repos that I actually care about. It would be great to get more instrumentation to see what is going on.

There is act[0], which aims to let you run GitHub Actions locally via Docker. It isn't perfect, but it does a decent job, and for the most part your pipeline can be run locally.

After MS bought GH, I had hopes that they would build a tool to run actions locally, but nothing yet.

[0] https://github.com/nektos/act


I've had no luck reproducing Actions problems with act, and the rest of the time I hit problems in act that I don't see in Actions.

I like the idea and also would like something first-party, but I imagine it's hard and GitHub would want it to be less buggy than act is, and maybe they're trying but it's not there.

Tbh, even if it ran remotely in actual Actions but just didn't show up in the repo UI and logged locally, that would be fine?


From my perspective, GHA is missing two things compared to CircleCI: a way to pause an action for approval, and a way to pull artifacts from other workflows. Both of these are _possible_ with an external service but painful to set up. I want to: create a terraform plan, approve it, and then deploy the specifically approved plan. That's not so difficult in CircleCI but is _painful_ in GHA.


fwiw, you can gate jobs on approvals using environments: https://docs.github.com/en/actions/deployment/targeting-diff...
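The shape is roughly: configure an environment with required reviewers in the repo settings, then reference it from the job (names here are illustrative):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # "production" must be configured with
                              # required reviewers in the repo settings
    steps:
      - run: terraform apply tfplan   # illustrative deploy step
```

The job pauses until a listed reviewer approves it.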


Sure, but that's actually worse than useless for my use-case. Imagine this: you have an action that publishes your plan to your PR (#1 - it's a biggish feature). It gets merged and goes to approval. Then people happen. PR #2 is addressing a customer-facing bug, so it gets fast-tracked and rammed through before PR #1. Suddenly PR #1 is silently invalid. It _should_ be rejected at this point, but the whole point of CI/CD is to save time and reduce the surface area for human mistakes.


Specifically for your terraform example, wouldn't it make more sense to have the PR merged only when the apply was successful?

I'm not sure how well that can be represented in GH Actions, but that would surely be the better option?

You'll always risk some kind of race condition there; e.g., Atlantis locks the project while something is planned but not applied, to avoid such things from happening. This of course prevents having multiple PRs "ready" at the same time - you'd have to release the active PR's lock to be able to work on another one.


This still can't use GHA to enforce any sort of integrity, so it's kinda moot. I have some of my projects set up to deploy with CircleCI, which can give me the build, approve, apply (specifically the thing you approved) chain that I'm looking for (so there's no race condition). "Why not use CircleCI?" Well, I do, but if my company decides to cut costs, it may not survive the chopping block... so I'm looking at other options.


Just looked at CircleCI recently. Great product, but the price difference between them and GHA was absolutely jaw dropping.


That's very much how I think about it too, which is why it frustrates me that I can't create canned workflows that apply to all my repos of a certain type (language-specific linting and releasing, say).

I know I can create user/organisation templates, but all that does is put it in the UI chooser to create a commit to put it in the repo from the web. I want to do something like `include: OJFord/workflows/terraform-provider.yml` or `include: OJFord/workflows/rust.yml`

Perhaps even better would be I don't even have to specify that in the repo, they just apply automatically to any which match a given pattern - named `terraform-provider-*` or having a file `Cargo.toml` say - but I realise that's probably too big a deviation from the way Actions works at this point.
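If GitHub ever ships reusable workflows, I'd imagine the calling side could look something like this (the shared repo and path here are hypothetical):

```yaml
jobs:
  lint:
    uses: OJFord/workflows/.github/workflows/rust.yml@main  # hypothetical shared repo
```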


This might be what you're looking for: https://github.com/github/roadmap/issues/74


Could you stuff your actions definition into a submodule pointed at trunk?


Interesting idea! Even if Actions would follow a submodule, though (I doubt it tbh - it happens before any actions run, of course, so we have no control over that), you can't point it 'at trunk' as far as I know; they're always at a specific commit.

(E.g. if you git submodule foreach git checkout master, your diff if you have one will be updating commit hashes, not -whatever +master. This is good for a lot of other reasons but doesn't help here.)


That's probably not what you want. Checking out an ID would put you in a detached HEAD state, and the CLI will give you the page of warning messages that go along with it. I think that this is an excellent reminder that yes, the git CLI is not simple or obvious.
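The pinning behavior above is easy to see directly: a superproject records a submodule as a fixed commit (gitlink mode 160000), never as a branch name. A throwaway demo (all paths are temp dirs):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/lib"
git -C "$tmp/lib" -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git init -q "$tmp/app"
cd "$tmp/app"
# file:// submodules need this opt-in on recent git versions
git -c protocol.file.allow=always submodule --quiet add "$tmp/lib" lib
git -c user.email=a@b -c user.name=a commit -q -m "add submodule"
git ls-tree HEAD lib   # mode 160000: a specific commit, not a branch ref
```

Because only a commit hash is stored, "tracking trunk" would require an explicit update step (e.g. `git submodule update --remote`) plus a new commit in the superproject each time.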


GitHub turns off rename detection and turns off recursive base-building when creating merge commits in pull requests. Both of these selections would cause differences in when a merge produces conflicts. This is for backward compatibility with historical mechanisms for merging pull requests, but it seems like something that we might want to revisit.

