To be honest, some of your points are real hindrances, but as a GitLab user, others are solvable without massive effort.
- env vars can be scripted, either in YAML or through dotenv files. Dotenv files would also be portable to dev machines
- how is security a joke? Do you mean secrets management? Otherwise, I don't see a big issue when using private runners with containers
- jobs can pass artifacts to each other. When multiple jobs are closely intertwined, one could merge them?
- what dependency installation do you mean? You can use prebuilt images with dependencies for one. And ideally, you build once in a pipeline and use the binary as an artifact in other jobs?
- in my experience, starting containers is not that slow with a moderately sized runner (4-8 cpus). If anything, network latency plays a role
- not being able to modify pipelines and check runners must be annoying, I agree
- everything from on-prem to SaaS licenses keeps costing more. The expenses have to land somewhere, but they can be optimized if you are in a position to have a say?
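As a sketch of the dotenv route (stage, job, and variable names here are made up for illustration): one job writes a dotenv file and exposes it as a report artifact, and later jobs then see its values as ordinary environment variables.

```yaml
# .gitlab-ci.yml sketch -- job names and variables are illustrative
stages: [build, deploy]

build:
  stage: build
  script:
    - echo "APP_VERSION=$(git describe --tags --always)" >> build.env
  artifacts:
    reports:
      dotenv: build.env   # later jobs see APP_VERSION as a normal env var

deploy:
  stage: deploy
  script:
    - echo "Deploying version $APP_VERSION"
```

The same `build.env` file can be sourced on a dev machine, which is what makes the dotenv approach portable.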
By comparing dev machines to runners, you miss important aspects: portability, automation, and testing in different environments. Unless you have a full container engine on your dev machine with flexible network configs, issues can be missed.
Also, you need to train every dev to run the CI manually or work with hooks, and then you can get funny, machine-specific problems. So this already points to a central CI system, which makes builds repeatable in the same from-scratch environment.
As for deployments, those shouldn't be made from dev machines, so automated pipelines are the go-to here.
Automated test reporting also goes out the window on dev machines.
TLDR: True, most things can be fixed if configured and set up properly. It's just that the way they are often used, and the provided examples, encourage many of the problems.
Env vars can be scripted, but many companies use a tree of instance/group/project-scoped vars, so a change higher up can easily break some projects. Solvable for sure, but company guidelines make it a pain. There are other settings, like allowed branch names, that can break things too.
With security, yes, I mostly mean secrets management. Essentially everyone who can push to any branch has access to every token. Or a simple typo or mixed-up variable leads to stuff being pushed to production. Running things in the public cloud is another issue.
Passing artifacts between jobs is a possibility, but it still means data being pushed between machines. Merging jobs is also possible; it just defeats the purpose of having multiple jobs and stages. The examples often show a separation between things like linting, testing, building, uploading, etc., so people split it up.
With dependencies I mean everything you need to execute jobs: OS, libraries, tools like curl, npm, poetry, jfrog-cli, whatever. Prebuilt images work, but they are another thing you have to do yourself: building more containers, storing them, downloading them. Also, containers are not composable, so each project or job has its own. The curse of being stateless and the way Docker works.
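For the prebuilt-image route, a minimal sketch (base image, tool list, and registry path are assumptions, not a recommendation): bake the job's tools into one image once, then point the job at it instead of installing on every run.

```dockerfile
# Dockerfile sketch: bake job dependencies into one reusable CI image
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl git ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# push to e.g. registry.example.com/ci/base:latest (hypothetical path)
# and reference it via `image:` in the job definition
```

This trades install time in every job for the maintenance burden of one more image, which is exactly the "another thing you have to do yourself" point above.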
Starting containers is not slow on a good runner. But I have noticed significant delays on many Kubernetes clusters, even when the nodes sit at <1% CPU; startup times of >30s are common. And even if it were faster, it is still a delay that quickly adds up if you have many jobs in a pipeline.
I agree that dev machines and runners have different behavior and properties. What I mean is local-first development. For most tasks it is totally fine to run a different version of Postgres, Redis, and Go, for example. Docker containers bring it even closer to a realistic setup. What I want is quick feedback and being able to see the state of something when there are bugs, not print debugging via git push and waiting for pipelines. Pipelines that set up a fresh environment and tear it down afterwards are nice for reproducibility, but they prevent me from inspecting the system beyond logs and other artifacts. Certainly this doesn't mean you shouldn't have a CI/CD environment at all, especially for releases/production deployments.
I agree with wrapping things like build scripts to test locally.
Still, some actions or CI steps are also not meant to be run locally. Like when it publishes to a repo or needs any credentials that are used by more than one person.
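As a sketch of that wrapping approach (the script name and steps are hypothetical): keep the logic in a plain shell script that both the CI job and a dev shell invoke, so the CI YAML stays a thin shim and the credential-requiring publish steps can live in a separate, CI-only script.

```shell
#!/bin/sh
# ci/build.sh -- hypothetical wrapper that runs identically in CI and locally
set -eu

echo "linting..."
# real lint commands would go here (e.g. shellcheck, eslint)

echo "building..."
# real build commands would go here

echo "done"
```

The CI job then just runs `./ci/build.sh`, and a dev can run the same line before pushing.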
Btw, GitHub Actions and its corresponding YAML are derived from Azure DevOps and are just as cursed.
The whole concept of GitHub CI is just pure misuse of containers when you need huge VM images - container is technically correct, but a far-fetched word for this - that have all kinds of preinstalled garbage to run TypeScript-wrapped code that calls shell scripts.
At this point, just pause with GitHub Actions and compare it to how GitLab handles CI.
Much more intuitive: it takes shell scripts and other script commands natively instead of devolving into a mess of obfuscated, TypeScript-wrapped actions that need a shit ton of dependencies.
The problem with Gitlab CI is that now you need to use Gitlab.
I’m not even sure when I started feeling like that was a bad thing. Probably when they started glueing a bunch of badly executed security crud onto the main product.
The earliest warning sign I had for GitLab was when they eliminated any pricing tier below their equivalent of GitHub's Enterprise tier.
That day, they very effectively communicated that they had decided they were only interested in serving Enterprises, and everything about their product has predictably degraded ever since, to the point where they're now branding themselves "the most comprehensive AI-powered DevSecOps Platform" with a straight face.
GitLab can't even show you more than a few lines of context without requiring you to manually click a bunch of times. Forget the CI functionality, for pull requests it's absolutely awful.
I decided it was a bad thing when they sent password reset emails to addresses given by unauthenticated users. Not that I ever used them. But now it is a hard no, permanently.
They have since had other, similarly severe CVEs, which has made me feel pretty confident in my decision.
There was a pretty bad bug (though I think it was a Rails footgun) that allowed you to append an arbitrary email address to the reset request.
The only difficult part for the attacker was finding an email address used by the target, though that's usually the same one used for git commits; and GitLab "handily" has an email address assigned to each user ID, incrementing from 1.
Usually the low numbers are admins, so combined, that's a pretty big attack vector.
But you can do the same with GitHub, right? Although most docs and articles focus on 3rd-party actions, nothing stops you from just running everything in your own shell script.
Yes, you can, and we do at my current job. Much of the time it's not even really the harder approach compared to using someone else's action, it's just that the existence of third party actions makes people feel obliged to use them because they wouldn't want to be accused of Not Invented Here Syndrome.
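As a sketch of that style (the workflow and script names are made up), such a workflow is mostly just the official checkout action plus plain `run:` steps:

```yaml
# .github/workflows/ci.yml sketch: only the official checkout action,
# everything else is our own shell
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/build.sh   # hypothetical script kept in the repo
```

Keeping the logic in the script also means the pipeline can be reproduced locally without GitHub.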
The spirit of Forgejo is great, but the whole CI component of Gitea, Forgejo, and GitHub alike is absolute garbage.
Just compare it to GitLab and it becomes clear.
Why the hell are Actions and 20GB "containers" used?
It all makes simple command line installs in a container so hard to do.
Actions overcomplicate simple stuff like git clone by burying it in mountains of TypeScript to the point where it is no longer recognizable or transparent.
And you take on an external dependency on someone else's elaborately wrapped shell script.
A single-purpose container does not exist in that world, which utterly defeats the whole point of using containers.
Is there a similar CI implementation to GitLab out there?
I still use Drone with Gitea and it works great. I wrote a little adapter [1] so it can run VMs with Qemu, in case I need to build untrusted code (or on a different OS).
I don't know why they picked the GitHub thing for their CI.
Yeah, that part reminds me of the flatteringly named "Call to Action" buttons. There was a nice blog article named something like "Button Presses You" which I can't seem to find. Essentially, it describes the trajectory of applications telling you what to do and how, instead of you, the user, deciding what the program should do.
Indeed. CTA buttons and everything. Instead of simple, readable text-based menus, icons everywhere because they are "more intuitive". Navigation aids because, otherwise, "users get lost". Useless animated pictures together with apologetic error messages whenever the system (frequently) crashes. All of it facilitated by tools like Figma, which let designers be creative by pointing and clicking while leaving the multiple implementation and maintenance details as an exercise to developers, who now have to "specialize" in front-end. What hell. Yesterday I had to use a government web app and got lost. They even bothered to create some video tutorials, but unfortunately the software recently went through a redesign to "improve UX" and now the icons no longer match the ones in the tutorial. The amount of damage they create with their cutesy interfaces and their condescending attitude is astonishing.
> Instead of simple, readable text-based menus, icons everywhere because they are "more intuitive". Navigation aids because, otherwise, "users get lost".
I'm not especially dumb, but after they introduced the "hamburger button" I think it took me about 10 years to ever think of looking under it for basic functionality, especially on non-touch devices.
What horrors lie beneath the hamburger? What void of regular function could there be?
First of all, LLVM has Clang, which means that LLVM as a whole is equipped to understand C (and C++ and Objective-C) both at a high level (abstract syntax tree, all types as declared by the programmer) and at a low level (SSA form, only the types that are meaningful for sound analysis and optimization).
I think that CIL was a really big deal before LLVM and Clang. Back then, it was a more approachable way to fiddle with C than using GCC, since GCC has a steep learning curve. But in the last 15 years or so, most of the research that would previously have been done in CIL is now done in LLVM. That’s because LLVM is much more complete and is designed for ergonomics, specifically for the case where you just want to mess around, even if you’re a newcomer to the compiler. The docs are great and the APIs are top notch.
I think that LLVM’s SSA form is especially good for doing sophisticated analysis and instrumentation of C. I’ve used that a lot for my C experiments. Clang’s AST is really great, too - and it’s amazing for doing higher-level stuff where you want to see the original C types and declarations before lowering.
I suspect that there is very little CIL can do for you that can’t be done more straightforwardly in LLVM. And LLVM+Clang support all of C, plus the adjacent languages (C++ and others).
So, it’s cool that CIL is still around (having alternatives is good, generally), but in my opinion as someone who does experimental work in C compilers, C language extensions, and static/dynamic analysis of C, LLVM completely subsumes CIL.
LLVM is a backend: it takes LLVM IR (intermediate representation) and generates machine code.
This is a frontend: it takes C and generates its own IR (a simplified version of C).
You could glue these together with an adapter from CIL to LLVM IR to get a complete C compiler.
Clang is both a frontend and a complete compiler in this respect: the Clang frontend compiles C to LLVM IR, and these are bundled together to produce the Clang compiler.
(Note: I'm simplifying things here. Clang and LLVM are more intertwined than this, and there are several nuances I'm not covering; I'm going for a high-level perspective.)
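The split is visible on the command line (assuming a clang/LLVM toolchain is installed, and with a made-up file name): the frontend lowers C to textual LLVM IR, and a backend tool like `llc` turns that IR into target assembly.

```shell
# frontend step: C -> LLVM IR (textual .ll form)
clang -S -emit-llvm add.c -o add.ll

# backend step: LLVM IR -> target assembly
llc add.ll -o add.s
```

Running only the first command and reading `add.ll` is a quick way to see what the frontend/backend boundary actually looks like.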
HTML is made to be flexible by using CSS; LaTeX is a typesetting program. It lays out the whole document to look nice in one specific view, the final PDF. Or, more precisely, LaTeX works best for printed media.
There are efforts for web maths, which would close the gap considerably for rendering science articles.
Most of the other LaTeX features can be achieved with HTML, CSS and maybe JS. Or even Markdown + preprocessor.
It can be achieved, but at what cost? You need to waste time with CSS and JS to get something LaTeX gives you out of the box.
The question was: is there something like LaTeX for the frontend, and is the "monospace web" it?
"html does the thing" can technically be a true answer, if you completely ignore any contextual understanding.
I'm looking for a simpler solution, not one that would waste more time.
You are asking for a responsive web layout, which excludes any format that LaTeX outputs.
LaTeX is a typesetting engine that does its job before anything is presented, not while it is being presented. Its layouts are not responsive by design.
So your question of whether there is something like LaTeX for the frontend is simply misplaced, because the way LaTeX works does not fit a responsive web frontend.
What LaTeX features do you want in a frontend? How do you want to write code and how do you want the layout to happen?
What you describe sounds more like static site generation with math handling and perhaps references. There are markdown extensions that can handle such things. For rendering there is e.g. https://www.mathjax.org/
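As a minimal sketch of that rendering piece (the page content is made up; the script URL is the CDN path from MathJax's own documentation): a static page loads MathJax and TeX snippets are rendered client-side.

```html
<!-- minimal MathJax v3 page sketch -->
<!DOCTYPE html>
<html>
<head>
  <script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
</head>
<body>
  <p>Inline math like \(E = mc^2\) is typeset in the browser.</p>
</body>
</html>
```

A static site generator would emit pages like this from Markdown, which covers the "math handling and references" part without a full LaTeX toolchain.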
This is a bit like insisting on using latex without ever touching packages, and then asking for the functionality of packages anyway. At least for the CSS part.
Firefox with some extras might be nice, but the structure of that web page raises the question:
Who is the target audience? The website has so many oversimplified marketing claims about security and customization. It seems wholly undecided whether the target audience is people who fall for buzzwords or someone actually interested in quantitative improvements over Firefox.
And yet the comparison is just checkboxes, and it doesn't even include base Firefox. How about bar graphs for comparison and some actual pictures of the advertised customization, layout, and workspaces?
To me this still feels a little shady, even though the features seem nice.