Hacker News | mr_ndrsn's comments

This looks very cool!

Please consider adding a user agent string with a link to the repo or some Google-able name to your curl call; it can help site operators get in touch with you if it starts to misbehave somehow.
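A minimal sketch of what that could look like (the bot name, repo URL, and target URL below are all placeholders, not a real project):

```shell
# Identify the scraper via User-Agent; the "(+URL)" convention points site
# operators at a page where they can reach you. Everything here is a
# hypothetical example -- substitute your own project name and repo.
UA="examplebot/1.0 (+https://github.com/example/examplebot)"
curl -fsS -A "$UA" --max-time 10 "https://example.com/" -o /dev/null \
  || echo "fetch failed (this sketch tolerates offline environments)"
```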


It's tough when there's a cat-and-mouse game where you spoof your UA so you don't get blocked. I wish webmasters had better relationships with scrapers and could accept the reality that your data will be scraped no matter how much you try to stop it.


IMO, we should really just get rid of the User-Agent header altogether.


Yeah, that's a good idea - I need to add that to my suggestions for how to implement this.


If you're scraping any significant amount of data (>500K), and depending on the frequency, you might also want to send ETag/Cache-Control (conditional request) headers as well as Accept-Encoding, to save server bandwidth.

Collecting 1 kB every minute might not be a big deal, but collecting 1 MB every minute works out to roughly 500 GB/year, which would cost an AWS-hosted service >$40/year in additional data transfer costs at typical egress pricing.
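With curl, a bandwidth-friendly fetch along those lines can be sketched like this (the feed URL is a placeholder, and the `--etag-*` options need a reasonably recent curl; they appeared in 7.68):

```shell
# The first run stores the server's ETag; subsequent runs send If-None-Match,
# so an unchanged resource comes back as a tiny 304 instead of the full body.
# --compressed asks for gzip/deflate via Accept-Encoding. URL is hypothetical.
touch feed.etag   # older curl versions want the compare file to exist
curl -sS --compressed \
     --etag-save feed.etag --etag-compare feed.etag \
     --max-time 10 "https://example.com/feed.json" -o feed.json \
  || echo "fetch failed (this sketch tolerates offline environments)"
```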


It should definitely be optional. I can only imagine some busybody PM insisting they block harmless scrapes.


Transcript of call: https://gist.github.com/christianselig/fda7e8bc5a25aec9824f9...

> Me: No, no, I'm sorry. Yeah one more time. I was just saying if the opportunity cost of Apollo is currently $20 million a year. And that's a yearly, apparently ongoing cost to you folks. If you want to rip that band-aid off once. And have Apollo quiet down, you know, six months. Beautiful deal. Again this is mostly a joke, I'm just saying if the opportunity cost is that high, and if that is something that could make it easier on you guys, that could happen too. As is, it's quite difficult.

> Reddit: Yeah, yeah, yeah, I hear you. I think it's… I don't know what you mean by quiet down. I find that to be-

> Me: No, no, sorry. I didn't mean that to-

> Reddit: I'm going to very straightforward to you too, it sounds like a threat. And I'm just like "Oh interesting". Because one of the things we're trying to do is say "You have been using our API free of cost for many, many years and we have absolutely sanctioned - you have not broken any rules." And now we're changing our perspective for what we're telling you - and I know you disagree with it. That hey, we want to operate on a thing that is financially, you know, footing. And so hopefully you mean something completely different from what I said when you say like "go quietly", I just want to make sure.

> Me: How did you take that, sorry? Could you elaborate?

> Reddit: Oh, like, because you were like, "Hey, if you want this to go away".

> Me: I said "If you want Apollo to go quiet". Like in terms of- I would say it's quite loud in terms of its API usage.

> Reddit: Oh, go quiet as in that. Okay, got it. Got it. Sorry.

> Me: Like it's a very-

> Reddit: Yeah, that's a complete misinterpretation on my end.

> Me: Yeah. No, no, it's all good.

> Reddit: I apologize. I apologize immediately.

> Me: No, no, no, it's all good.

> Reddit: Because what we're hearing in some conversations is folks are, you know, like in other- making threats, and we're like "Hey, that's not a conversation that we want to have". So I immediately apologize.

> Me: Oh, no, no, it's all good. I'm sorry if it sounded like that.

Link to audio: http://christianselig.com/apollo-end/reddit-third-call-may-3...


I am more confused after reading that than I was before. Why is Reddit apologizing? What does “go quiet” mean here and why aren’t they speaking more plainly?


Yup, use an accessory for that. I used it to set up a Minecraft server for our last company meetup.

I agree, there should be a way to have the primary app be an image that doesn’t get built/pushed.


Zero, which is why we're not using k8s on-prem. Our team is already handling the on-prem hardware/software environment, and this will consolidate our apps on a single platform methodology, allowing us to keep the same team size. Using mrsk allows us to reduce the complexity of our servers, moving that into the Dockerfile.

If we had gone down the k8s on-prem rabbit-hole, I suspect we would have required more folks to manage those components and complexity.


I don't understand how having k8s means you need significantly more people.

It's just concepts put into a strict system. Now you're just shimming the same concepts with less-supported hacks. Now you have to train your team on less-used technology that isn't transferable to other roles. Sounds like technical debt to me.


We're arguing about generic approaches and the 37Signals folks are making specific decisions about their very specific situation (their app, their staff having time or not, their budget, etc).

To be fair, they don't seem to be saying their strategy is for everybody, but the audience thinks so? I think we're talking past each other, tbh.


For me it's still the 'not getting what issue they had with k8s'.

And I would love to spend a few days with their team to understand it.


Yes, they did. This is not a debatable fact. IIRC, it was 30%+ of the company.


I am sure you will supply proof for your claims.


I was at the company when it happened. I'm currently at the company. I'm in ops and work on all of the mrsk/de-clouding efforts.


Has the political change led to a better or worse work environment?


Ha, right on! Must've been real awkward for the people who didn't quit in the hottest tech job market of all time :D



> at least 20 people — more than one-third of Basecamp’s 57 employees — had announced their intention to accept buyouts from the company.

Thanks for subjecting me to this crap article (which I presume you didn't bother to read).


The article seems to provide evidence for the claim that a dispute within the company over the messaging from leadership led to 1/3 of the staff leaving. I provided it without comment.

Do you believe that a significant proportion of the staff did not quit? Do you have an alternative source that provides evidence for that version of events?


Intention to leave = staff leaving?

Then Scarlett Johansson is my wife because I intend to marry her.

> Do you have an alternative source that provides evidence for that version of events?

Yes, because people go around documenting evidence for things that did not happen.


announced their intention to leave... to the company... in response to the company openly offering terms for people to leave.

That seems like a slightly different prior, in terms of our Bayesian assessment of the probability that those people remained employed at the company afterwards, than your hypothetical engagement to Ms Johansson.


> to the company

Where did you get this though?

> had announced their intention to accept buyouts from the company.

Is it just people clicking a 'yes' reaction to an internal Slack message? That didn't sound like they were making any commitment 'to the company'.

Also, do you have any comment about the title of the article that you linked? Does that seem honest to you?


So strange to white-knight a company and attempt to deny something that happened pretty publicly...

> As a result of the recent changes at Basecamp, today is my last day at the company. I joined over 15 years ago as a junior programmer and I’ve been involved with nearly every product launch there since 2006.

https://web.archive.org/web/20210430155528/https://twitter.c...

https://web.archive.org/web/20210430140035/https://twitter.c...

https://twitter.com/zachwaugh/status/1388190748189802501

> I’m leaving my position at Basecamp, where I’ve worked for 4 years, due to the recent changes and new policies.

https://twitter.com/lexicola/status/1388189598367559688

https://twitter.com/dylanginsburg/status/1388199059983413257

https://twitter.com/jonasdowney/status/1388205182916440070

> Given the recent changes at Basecamp, I’ve decided to leave my job as Head of Design.

https://twitter.com/mackesque/status/1388206605506842627

https://twitter.com/kaspth/status/1380616358266871810

https://twitter.com/wcmoline/status/1388208323908968449

> I have left Basecamp due to the recent changes & policies.

https://twitter.com/conormuirhead/status/1388207801646780416

https://twitter.com/Rahsfan/status/1388209146487623681

https://twitter.com/AdamStddrd/status/1388223100823642112


> So strange to white-knight a company and attempt to deny something that happened pretty publicly...

It was just skepticism from seeing these sorts of claims over the years. Half of Hollywood would be in Canada if people had really followed up on those. At some point it became acceptable to make these sorts of claims with no intention of following through.

I guess quitting your job in the hottest tech market of all time is a little different than moving to a different country.


> Last week was terrible. We started with policy changes that felt simple, reasonable, and principled, and it blew things up internally in ways we never anticipated. David and I completely own the consequences, and we're sorry. We have a lot to learn and reflect on, and we will. The new policies stand, but we have some refining and clarifying to do.

https://world.hey.com/jason/an-update-303f2f99


We did! And it did work. And there are def some great things that I (we) love about k8s. Personally, the declarative aspect of it was chef's kiss. "I want 2 of these and 3 of these, please", and it just happens.

Which is the primary reason why we did investigate k8s on-prem. We had already done the work to k8s-ify the apps, let's not throw that away. But running k8s on-prem is different than running your own k8s in the cloud is different than running on managed k8s in the cloud.

Providing all of the bits k8s needs to really work was going to really stretch our team, but we figured with the right support from a vendor, we could make it work. We worked up a spike of harvester + rancher + longhorn and had something that we could use as if it were a cloud. It was pretty slick.

Then we got the pricing on support for all of that, and decided to spend that half million elsewhere.

We own our hardware, we rent cabs and pay for power & network. We've got a pretty simple pxeboot setup to provision hardware with a bare OS that we can use with chef to provide the common bits needed.

It's not 'ultimately flexible in every way', but it's 'flexible enough to meet the needs of our workloads'.


What is your position at 37Signals and how do you like it? I'm really impressed by the innovation that comes out of you guys and the workplace culture you folks have.


I'm a Lead SRE on the Ops team. We've got a fantastic bunch of folks, they're amazing to work with!


Negative. No external tool/company has ssh access. GHA is strictly for CI, which is decoupled from the actual deploy.

If we do decide to tie it in, it will be using the GH Deployment API to inform the local tool on CI status or something.


What do you do then, if you don't mind me asking? I see this problem time and time again for self-hosting and using CI/CD - and every time it seems to either come down to exposing SSH, polling for new versions, or running the GitHub Actions runner on the same machine as the app or service.


mrsk doesn't require rails. It makes no assumptions about what you're running, we deploy a golang service with it.

mrsk does require a ruby install on your machine, tho.


I would be very interested in that, do you have a writeup?


I don't think there's a writeup out there, but mrsk just uses docker under the hood. So, if you have a CMD in your Dockerfile, it will use that.

If you have an image that can run multiple things, like a rails app that can run the app process for web traffic by default, but it can also run job workers with the right command, you can provide the cmd in the mrsk config. You can see this in the jobs role in the example: https://github.com/mrsked/mrsk#using-different-roles-for-ser....
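Sketched as a hypothetical config/deploy.yml fragment (the service name, image, and hosts are made up; the linked README is the authoritative reference for the format):

```yaml
service: myapp
image: myuser/myapp

servers:
  web:
    - 192.168.0.1        # gets the image's default CMD (the web process)
  job:
    hosts:
      - 192.168.0.2
    cmd: bin/jobs        # same image, different command: runs the job workers
```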


Thanks!


IP addresses are easy to use in the configurations that inspired mrsk. Small apps that are fairly static. There are two main problems that mrsk is trying to solve.

1. Moving our stuff out of the cloud without going back to static hosts.

2. Giving new Rails devs a tool where they can deploy their application easily, in a modern fashion.

Both of these are not so large or complex that you must use hostnames in configs instead of IP addresses. I will note that most of our internal configs do, in fact, use hostnames rather than IPs. But judging a tool because an example used an IP address seems shortsighted.

There are plenty of things in mrsk to discuss without fixating on that.


>> 1. Moving our stuff out of the cloud without going back to static hosts.

I move companies between clouds and from on-prem to cloud and cloud to on-prem (even ones a bit bigger than 37signals), and I could use tools like Ansible and Terraform. In my experience, when a smaller company starts to write tools that solve the imaginary problems of the CTO, they are not focusing on solving customer problems, and this usually ends badly.

>> 2. Giving new rails devs a tool where they can deploy their application easily, in a modern fashion

Could you share the comparison chart of tools that you considered? What thought process led you to believe that this is an unsolved problem and requires a new tool? Genuinely interested.

>> But judging a tool because an example used an IP address seems shortsighted.

How should I judge it, then? Reading the code? Figuring out how to hold it right myself?


> I move companies between clouds and on-prem to cloud, cloud to on-prem (even though a bit bigger one than 37singnals) and I could use tools like Ansible and Terraform.

Where did anyone say we don't use other tools? Chef + Terraform are wonderful and still in use for us.

> Could you share the comparison chart of tools that you considered? What thought process led you to believe that this is an unsolved problem and requires a new tool? Genuinely interested.

Your phrasing here and in your previous quoted reply leads me to believe you're not genuinely interested. Our environment is not your environment, our experiences are not your experiences. No, I can't share a chart of other tools that were considered, but off the top of my head, Capistrano, various CI integrations, Github Actions, etc.

We're a rails shop. We're going to look at tools in that area. We're not going to go dig into dagger or garden.io or something that causes us to have to conform our dev/deploy environment to a mental model that adds more friction for our developers.

> How should I judge it by than? Reading the code? Figuring out how to hold it right myself?

It's not that complicated a codebase, only about 2k lines total, IIRC? I mean, you could read it. I'm baffled that you're choosing to compare a structural design flaw like the iPhone antenna issue to the fact that a configuration example in a new tool used an IP address. Go off, king.


> In my experience when a smaller company starts to writes tools that solve the imaginary problems of the CTO they are not focusing on solving customer problems and this usually ends badly.

Like writing their own web framework in obscure programming language?


> Like writing their own web framework in obscure programming language?

Which caused the biggest spike in CO2 production in recorded human history while making operations teams rage-quit their jobs over 'undefined method [] for nil' messages?

Disclaimer I love Ruby and Rails to death.


Yes, we're actively de-k8s/de-clouding all workloads, including Hey, with mrsk as the pattern. We've started with simple apps and moved up our complexity tree, updating mrsk as we go. My co-worker, Farah Schüller, wrote up a good summary of things so far: https://dev.37signals.com/bringing-our-apps-back-home/


Can you expand on the limitations of AWS ECS? It seems like... you built a worse version of it.

Toss https://aws.amazon.com/ecs/anywhere on your own hardware and let someone else wake up at odd hours to worry about the container images making it on to the host.


I mean, mrsk has a different feature set than ECS, sure. But we don't want to pay AWS for ECS Anywhere. We're trying to get off big tech where we can. I'm sure you've heard David talk about that.

This tool covers our use cases so far, and is easy to reason about. Ergonomically, it's very similar to Capistrano, which we're all familiar with.


Makes sense if you're dropping all AWS dependencies. For us, ECS Anywhere with its logging to CloudWatch and automatic IAM credentials management allowed us to still use the rest of the AWS ecosystem on cheaper and more specialized hardware.

It wasn't too bad to set up GitHub Actions -> AWS ECR -> AWS CodeDeploy -> AWS ECS -> on-prem hosts via ECS Anywhere, and that's worked well.

