
Re: 6. ... Github Actions

GitHub Actions left a bad taste in my mouth after it randomly removed authenticated workers from the pool once they'd been offline for ~5 days.

This was after setting up a relatively complex PR workflow (an always-on cheap server starts up a very expensive build server with specific hardware), only to have it break randomly after no PR came in for a few days. And no indication that this happens, and no workaround from GitHub.
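
Roughly the shape of it, for anyone curious (the label names and start script below are made up for illustration, not my actual setup):

    # .github/workflows/pr-build.yml -- hypothetical sketch
    name: pr-build
    on:
      pull_request:

    jobs:
      boot-builder:
        # small always-on self-hosted runner whose only job is to power on the big machine
        runs-on: [self-hosted, controller]
        steps:
          - name: Start the expensive build host
            run: ./scripts/start-build-host.sh   # e.g. wake-on-LAN or a cloud API call

      build:
        needs: boot-builder
        # the expensive machine, registered with a matching label; this is the runner
        # GitHub drops from the pool if it sits offline too long
        runs-on: [self-hosted, big-builder]
        steps:
          - uses: actions/checkout@v4
          - run: make build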

There are better solutions for CI; GitHub's is half-baked.



This is documented now (runners are supposed to be removed after 14 days offline). [1]

That said, I have found runners to be unnecessarily difficult.

But Jenkins has its own quirks, and when I used GitLab, it used ancient docker-machine and outdated AMIs by default.

I think Buildkite has been the only one to make this easy and scalable, but it is meant for self-hosted runners.
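
Targeting a specific self-hosted pool there is just a queue tag in the pipeline (the queue name below is made up):

    # .buildkite/pipeline.yml -- illustrative only
    steps:
      - label: ":hammer: build"
        command: "make build"
        agents:
          queue: "gpu-builders"   # routes to agents started with --tags "queue=gpu-builders"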

[1] https://docs.github.com/en/enterprise-cloud@latest/actions/h...


Buildkite also has hosted runners (which they call agents): https://buildkite.com/docs/pipelines/hosted-agents


It does, but that came second.

It was originally (and still usually) used by those who wanted to self-host runners.


bugs happen to all of us. what's your better solution - gitlab?


Roll 2d6, sum result. Your CI migration target is:

  2. migrate secret manager. Roll again
  3. cloud build
  4. gocd
  5. jenkins
  6. gitlab
  7. github actions
  8. bamboo
  9. codepipeline
  10. buildbot
  11. team foundation server
  12. migrate version control. Roll again


somehow i am really liking the kind of people that comment in the comment sections of sysadmin posts. i wonder what personality type this is


Sysadmin.


SysEng


Bump up to 2d10 and add:

    - Travis
    - CircleCI
    - Drone/Woodpecker
    - Tekton Pipelines
    - TeamCity
    - Zuul
    - Buildkite
    - Agola


IBM ClearCase anyone? No one? I AM old


GitLab pipelines are really good.


Not in love with its insistence on recreating the container from scratch every step of the pipeline, among a bundle of other irksome quirks. There are certainly worse choices, though.


Opposite of Jenkins, where you have shared workspaces and have to manually ensure the workspace is clean or suffer reproducibility issues from tainted workspaces.


It's up to you whether you have a shared workspace or not. My machines/pods are destroyed and recreated after each job, so I never had this issue.


You don't actually have to.

If you use the built-in container registry and build artifacts, you can pass data between steps.
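
A minimal sketch of what I mean (job and path names are just placeholders):

    # .gitlab-ci.yml
    stages: [build, test]

    build:
      stage: build
      script:
        - make build
      artifacts:
        paths:
          - dist/        # uploaded once after this job finishes

    test:
      stage: test
      script:
        - make test      # dist/ from the build job is downloaded and unpacked here automatically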


I'm aware, but thank you. Unfortunately, given sufficiently large artifacts, the overhead of packaging, uploading, downloading and unpacking them at every step becomes prohibitive.


honestly jenkins really isn't that bad


Hudson/Jenkins is just not architected for large, multi-project deployments, isolated environments and specialized nodes. It can work if you do not need these features, but otherwise it's a fight against the environment.

You need a beefy master and it is your single point of failure. Untimely triggers of heavy jobs overwhelm the controller? All projects are down. Jobs need to be carefully crafted to be resumable at all.

Heavy reliance on the master means that even sending out webhooks on stage status changes is extremely error-prone.

When your jobs require certain tools to be available, you are expected to package those as part of the agent deployment, since Jenkins relies on host tools. In reality you end up rolling your own tool management system that every job has to call in some canonical manner.

There is no built-in way to isolate environments. You can harden the system a bit with various ACLs, but in the end you either have to trust projects or build and maintain infrastructure for different projects isolated at the host level.

In cases where significant processing time happens externally, you still have to block an executor.


Yeah, I was thinking of using it for us actually. Connects to everything, lots of plugins, etc. I wonder where the hate comes from; they are all pretty bad, aren't they?

Will test forgejo's CI first as we'll use the repo anyway, but if it ain't for me, it's going to be jenkins I assume.


Cons:

  - DSL is harder to get into.
  - Hard to reproduce a setup unless builds are in DSL and Jenkins itself is in a fixed version container with everything stored in easily transferable bind volumes; config export/import isn't straightforward.
  - Builds tend to break in a really weird way when something (even external things like Gitea) updates.
  - I've had my setup broken once after updating Jenkins and not being able to update the plugins to match the newer Jenkins version.
  - Reliance on system packages instead of containerized build environment out of the box.
  - Heavier on resources than some of the alternatives.

Pros:

  - GUI is getting prettier lately for some reason.
  - Great extendability via plugins.
  - A known tool for many.
  - Can mostly be configured via GUI, including build jobs, which helps to get around things at first (but leads into the reproducibility trap later on).

Wouldn't say there is a lot of hate, but there are some pain points compared to managed GitLab. Using managed GitLab/GitHub is simply the easiest option.

Setting up your own GitLab instance + runners with rootless containers is not without its quirks either.


CASC plugin + seed jobs keep all your jobs/configurations in files and update them as needed, and k8s + Helm charts can keep the rest of config (plugins, script approvals, nodes, ...) in a manageable file-based state as well.

We have our main node in a state that we can move it anywhere in a couple of minutes with almost no downtime.

I'll add another point to "Pros": Jenkins is FOSS and it costs $0 per developer per month.
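
For anyone who hasn't seen it, the CasC side is plain YAML checked into a repo (values below are illustrative, not our actual config):

    # jenkins.yaml (Configuration as Code plugin)
    jenkins:
      systemMessage: "Managed by JCasC; manual changes will be overwritten"
      numExecutors: 0              # keep the controller build-free
      securityRealm:
        local:
          allowsSignup: false
    unclassified:
      location:
        url: "https://jenkins.example.com/"
    jobs:
      - script: |
          // seed job via the Job DSL plugin
          pipelineJob('example-seed') {
            definition {
              cps {
                script("echo 'defined from code'")
                sandbox()
              }
            }
          }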


I have previous experience with it. I agree with most points. Jobs can be downloaded as XML config and thus kept/versioned, but the rest is valid. I just don't want to manage GitLab; we already have it at corp level, we just can't use it right now in preprod/prod, and I need something that will be either throwaway or kept just for very specific tasks that shouldn't move much in the long run.


For a throwaway, I don't think Jenkins will be much of a problem. Or any other tool, for that matter. My only suggestion would be to still put some extra effort into building your own Jenkins container on top of the official one [0]. Add all the packages and plugins you might need to your image, so you can easily move and modify the installation, as well as see at a glance what all the dependencies are. I did a throwaway, non-containerized Jenkins installation once which ended up not being a throwaway; couldn't move it into containers (or anywhere, for that matter) without really digging in.
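
If it helps, the "pinned version + bind volume" part mentioned in the cons above is only a few lines of compose; the packages and plugins then go into your own image built on top of [0] (image name and tag below are just examples):

    # docker-compose.yml -- sketch only; pin whatever LTS tag you actually use
    services:
      jenkins:
        image: my-org/jenkins:2.462.3   # your own image, FROM the official one, with plugins baked in
        ports:
          - "8080:8080"
        volumes:
          - ./jenkins_home:/var/jenkins_home   # all state lives here; easy to move or back up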

Haven't spent a lot of time with it myself, but if Jenkins isn't of much appeal, Drone [1] seems to be another popular (and lightweight) alternative.

[0] https://hub.docker.com/_/jenkins/

[1] https://www.drone.io



