Hacker News | mkornaukhov's comments

It's open-sourced some of its code; you can see some here: https://github.com/serenedb/serenedb/


> I’m happy with how Rust turned out.

I agree, with the possible exception of the perplexing async stuff.


I was really hoping that there'd be movement on a comment without-boats made in https://without.boats/blog/why-async-rust/ to bring a pollster-like API into the standard library.

Rust has very good reasons for not wanting to bless an executor by bringing it into the standard library. But most of those would be moot if pollster was brought in. It wouldn't stifle experimentation and refinement of other approaches because it's so limited in scope and useless to all but the simplest of use cases.

But it does in practice solve what many mislabel as the function coloring problem. Powerful Rust libraries tend to be async because that's maximally useful. Many provide an alternate synchronous interface, but they all do it differently, and it forces the selection of an executor even if the library wouldn't otherwise force such a selection. (Although, to be clear, such libraries do often depend on I/O in a manner that also forces a specific executor selection.)

Pollster or something similar in the standard library would allow external crates to be async with essentially no impact on synchronous users.
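
To make it concrete, here's a sketch of the pattern using the actual `pollster` crate (the async function is just a placeholder):

    // a synchronous caller drives an executor-agnostic future to
    // completion without ever selecting a runtime
    async fn compute() -> u32 {
        21 * 2 // stand-in for executor-agnostic async work
    }

    fn main() {
        // block_on parks the current thread between polls until the
        // future resolves; no runtime setup, no executor selection
        let answer = pollster::block_on(compute());
        assert_eq!(answer, 42);
    }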


`pollster` in the stdlib would probably make sense. But of course there's nothing stopping anyone from using the `pollster` crate today.


Pollster in the standard library would provide several major benefits beyond just using it yourself.

- it provides an incentive for libraries to be pollster compatible, rather than requiring tokio. And pollster compatible means executor agnostic.

- libraries would document their APIs with pollster usage


Not quite yet. Crates like reqwest and hyper tend to use tokio's I/O types internally to set up the sockets correctly and send/receive data at the right time. Those might have different APIs than the thread-pausing sync APIs.

Sans-I/O crates exist but are kind of annoying to schedule correctly on an I/O runtime of choice. Maybe lending iterators could help, idk


I feel async is in a very good place now (apart from async traits :[ ). As a regular user who isn't developing libraries, async is super simple to use. Your function is async = it must be .await-ed and must run in an async runtime. Probably as simple and straightforward as possible. There are no super annoying anti-patterns to deal with.
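
To illustrate the whole mental model (assuming tokio as the runtime, since it's the de facto default):

    async fn fetch_value() -> u32 {
        42 // placeholder for real async work
    }

    #[tokio::main] // provides the async runtime
    async fn main() {
        // an async fn must be .await-ed, inside an async context
        let v = fetch_value().await;
        println!("{v}");
    }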

The ecosystem being tokio-centric is a little strange though


I love Rust and async Rust, but it's not true that there aren't annoying things to deal with. Anyone who's written async Rust enough has run into cancel-safety issues, the lack of async Drop and the interaction of async and traits. It's still very good, but there are some issues that don't feel very rust-y.


I've been writing async Rust for as long as it has existed, and I've never run into any cancel-safety issue. However, I also never used tokio's select macro.


I don't really appreciate the superlative here, as I too have not run into cancel-safety issues in practice.


I write and use mostly async code, and I cannot for the life of me understand the async hate.

What do you want Rust to do differently?

What language does async right?

How did Rust not reach its async goals?

Rust even lets you choose the runtime you want. And most big libraries work with several runtimes.


I do write mostly async code, too.

There are several ~~problems~~ subtleties that hinder the use of async Rust, IMHO.

- BoxFuture. It's used almost everywhere. It means there's no chance for the heap allocation to be optimized away.

- Verbosity. Look at this BoxFuture definition: `type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;`. It's awful. I do understand what Pin is, what the Future trait is, what Send is, lifetimes, and dynamic dispatch. I *have to* know all these non-obvious things just to work with coroutines in my (possibly single-threaded!) program =( (see the sketch just below this list)

- No async Drop, and no async traits in the stdlib (the latter was fixed not so long ago)
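
To spell out the verbosity point, a sketch (the alias is the one the `futures` crate defines; the function is hypothetical):

    use std::future::Future;
    use std::pin::Pin;

    type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

    // every call pays a heap allocation plus dynamic dispatch, even
    // in a single-threaded program that never needed Send
    fn make_task() -> BoxFuture<'static, u32> {
        Box::pin(async { 42 })
    }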

I am *not* a hater of Rust's async system. It's a little simpler and less tunable than C++'s, but more complex than Go's. I just cannot call Rust's async approach a good enough trade-off, when a plethora of the other decisions made in the design of the language come as close as anything to a silver bullet.


> What do you want Rust to do differently?

Lean into being synchronous. Why should I have to manually schedule my context switches as a programmer?


Because async and sync programming are two fundamentally different registers. There are things you can do in one that you can’t with the other, or which have dramatically different tradeoffs.

As an example: call N functions to see which one finishes first. With async this is trivial and cheap; without it, it's extremely expensive and error-prone.
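
For instance, a sketch using the `futures` crate (tokio's timer sleeps stand in for the N functions):

    use std::time::Duration;
    use futures::future::select_all;

    async fn first_finisher() -> usize {
        let tasks: Vec<_> = [300u64, 100, 200]
            .into_iter()
            .map(|ms| Box::pin(tokio::time::sleep(Duration::from_millis(ms))))
            .collect();
        // resolves with the first completed future's output, its
        // index, and the still-pending futures
        let (_out, index, _rest) = select_all(tasks).await;
        index // 1: the 100 ms task finishes first
    }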


The actor model proves that it isn't as fundamental a difference as you make it out to be. Write synchronously, execute asynchronously; that's the best of both worlds. Having the asynchronous implementation details exhibit themselves at the language level is just a terribly leaky abstraction. And I feel that if it weren't a fashionable thing, or an attempt to be more like JavaScript, it would never have been implemented the way it was in the first place.

Async makes everything so much harder to reason about and introduces so many warts in the languages that use it that I think it should probably be considered an anti-pattern. And I was writing asynchronous code in C in the 90's, so it's not like I haven't done it, but it is just plain ugly, no matter what syntactic sugar you add to make the pill easier to swallow.


The actor model isn’t possible to enforce in a systems programming language, in my opinion.


What do you base that opinion on?

The fact that it hasn't been done?

Or that you can't do it in a systems programming language whose main intent is to replace 'C'?

I don't want to start off with a strawman but in the interest of efficiency:

Because C is far from the only systems programming language, and I don't see any prerequisites in the actor model itself that would stop you from using it in a systems programming language at all. On the contrary, I think it is eminently suitable for systems programming tasks. Message passing is just another core construct, and once you have that you can build on top of it without restrictions in terms of what you might be able to achieve.

Even Erlang - not your typical first choice for low level work - is used for bare metal systems programming ('GRiSP').

Maybe this should start with a definition of what you consider to be a systems programming language? Something that can work entirely without a runtime?


Sure, you can absolutely build systems with the actor model, even some embedded or bare metal cases. But Erlang isn't written in Erlang. I'm talking about the languages that you implement Erlang in.

Yes, I think "entirely without a runtime" in the colloquial sense is what I mean. Or "replace C" if you want.


Ok, interesting, because if 'should be written in itself' is a must, then lots of languages that I would not consider systems languages would qualify. And I can definitely see an Erlang 'native', with hardware-access primitives, as a possibility.

'replace C' is a much narrower brief and effectively forces you to accept a lot of the warts that C exposes to the world. This results in friction between what you wanted to do and what you end up doing, as well as being stuck with some decisions made in the 1970's. It revisits a subset of those decisions whilst keeping the remainder. And Rust's ambitions now seem to have grown beyond 'replace C'; it is trying very hard to be everything to everybody, and includes a package manager and language features that a systems language does not need. In that sense it is becoming more like C++ than like C. C is small. Rust is now large.

Async/await is a mental model that makes code (much) harder to reason about than synchronous code, in spite of all the claims to the contrary (and I'm not even sure all of the people making those claims really believe them; it may be hard to admit that reasoning about code you wrote yourself can be difficult). It obfuscates the thread of execution as well as the state, and those are important supports to hold on to while attempting to understand what a chunk of code does. It effectively turns all of your code into a soft equivalent of interrupt-driven code, and that is probably the most difficult kind of code you could try to write.

The actor model recognizes this fact and creates an abstraction that - for once - is not leaky: the code is extremely easy to reason about, whilst under the hood the complexity of the implementation is hidden from the application programmer. This means that relative novices (which probably describes the bulk of all programmers alive today) can safely and predictably implement complex systems with multiple moving parts, because it does not require them to have a mental model akin to a scheduler with multiple processes in flight, all of which are at different stages of their execution. With async, reasoning about the state of a program suddenly becomes a global exercise rather than a local one, and locality of state is an important tool if you want to write code that is predictable; the smaller the scope, the better you will understand what you are doing.

It is funny, because this would suggest that the likes of Erlang and other languages that implement the actor model are beginners' languages, because most experienced programmers would balk at the barrier to entry. But that barrier is mostly about a lot of the superstructure built on top of Erlang, and probably about the fact that Erlang has its roots in Prolog, which was already an odd duck.

But you've made me wonder: could you write Erlang in Erlang entirely without a runtime other than a language bootstrap (which even C needs), and if not, to what degree would you have to extend Erlang to be able to do so? And I think here you mean 'the parts of the Erlang virtual machine that are not written in Erlang', because Erlang the language is written in Erlang, as is the vast bulk of the runtime.

The fact that the BEAM is written in another language is because it is effectively a HAL, an idealized (or not so idealized, see https://www.erlang.org/blog/beam-compiler-history/) machine to run Erlang on, not because you could not write the BEAM itself entirely in Erlang. That's mostly an optimization issue, which to me is, in evaluations like this, in principle a matter of degree rather than a qualitative difference - though if the inefficiency is large enough it could easily become one, as early versions of Erlang proved.

Maybe it is the use of a VM that should disqualify a language from being a 'systems language' by your definition?

But personally I don't care about that enough to sacrifice code readability to the point that you add entirely new footguns to a language that aims for safety, because for code with long-term staying power, readability and the ability to reason about the code are very important properties. Just as I would rather have memory safety than not (but there are many ways to achieve that particular goal).

What is amusing is that the Async/Await anti-pattern is now prevalent and just about the only 'systems languages' (using your definition) that have not adopted it are C and Go.


Honestly, this is why I find "systems language" kind of an annoying term, because you're not wrong, but it's also true that we're talking about two different things. I just don't think we have good language terminology for the different sorts of languages here.

> could you write Erlang in Erlang entirely

I think this sort of question is where theory and practice diverge: sure, due to Turing completeness. But theory in this sense doesn't care about things like runtime performance, or maintainability.

> But personally I don't care about that enough

Some people and some domains do need to care about implementing the low-level details of a system. The VMs and runtimes and operating systems. And that's what I meant by my original post.


So, as the author of not one but two operating systems (one of which I've recently published, another will likely never see daylight): I've never felt the need for 'async/await' at the OS kernel level. And above that it is essentially all applications, and there almost everything has a runtime, usually in the form of a standard library.

I agree with you that writing Erlang in Erlang today is not feasible for runtime-performance reasons, less so for maintainability reasons (which I've found to be excellent for anything I ever did in Erlang, probably better than any other language I've used).

And effectively it is maintainability that we are talking about here because that is where this particular pattern makes life considerably harder. It is hard enough to reason about async code 20 minutes after you wrote it, much harder still if you have to get into a code base that you did not write or if you have to dig in six months (or a decade) later to solve some problem.

I get your gripe about the term systems language, but we can just delineate it in a descriptive way so we are not constrained by terminology that ill fits the various use cases. Low-level language or runtime-free language would be fine as well (the 'no true Scotsman' of systems languages ;) ).

But in the end this is about the actor model, not about Erlang per se; that is just one particular example, and I don't see any reason why the actor model could not be a first-class citizen in a systems-oriented language. You could choose to use it or not, and if you did, that would have certain consequences, just like async/await has all kinds of consequences - and most likely, when writing low-level OS code, you would not be using it anyway.


I mean, I'm also not saying async/await is critical for kernels. I'm only saying that "everything is an actor" isn't really possible at the language level.

Async/await is used for a lot of RTOS-like things in Rust. At Oxide, we deliberately did not do that, and did something much closer to actors, actually. Both patterns are absolutely viable, for sure. But as patterns, and not as language primitives, at least on the actor side.


Do you mean RPCs, or dispatching to threads?

If not, your async code is a deterministic state machine. They're going to complete in the same order. Async is just a way of manually scheduling task switches.


The system Rust has is a lot better than that of Python or JavaScript. Cleanly separating construction from running/polling makes it a lot more predictable, makes it easier to understand what's happening, and makes it convenient to compose things together.
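
A sketch of the difference (driving the future with the `pollster` crate for brevity): a Rust future does nothing until it is polled, whereas a JS promise starts running eagerly on construction.

    async fn side_effect() {
        println!("running");
    }

    fn main() {
        let fut = side_effect(); // nothing printed yet, just a value
        pollster::block_on(fut); // only now does it print "running"
    }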


That's putting the bar pretty damn low.


Better tell me how to make the compiler not fool me!


IMHO, the main advantage of GitHub is that it is an ecosystem. It is a well-thought-out Swiss Army knife: a pioneering (but no longer new) PR system, convenient issues, and a well-formed CI system with many ready-made actions and free runners. On top of that, code navigation works right in the web browser. You write code, and almost everything works effortlessly. Having a sponsorship system is also great: you don't have to search for external donation platforms and post weird links in your profile/repository.

All in one; that's why developers like it so much. The obsession with AI makes me nervous, but for me, an average developer, the advantages still outweigh the drawbacks. For now.


I don't agree with this at all. I think the reason GitHub is so prominent is the social network aspects it has built around Git, which created strong network effects that most developers are unwilling to part with. Maintainers don't want to lose their stars, and users don't want to lose the collective "audit" by the GitHub users.

Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality, and like it or not are now part of modern software engineering. Developers are more likely to use a repo that has more stars than its alternatives.

I know that the code should speak for itself, and one should audit their dependencies and not depend on GitHub stars, but in practice this is not what happens; we rely on the community.


These are the only reasons I use GitHub. The familiarity to students and non-developers is also a plus.

I have no idea what the parent comment is talking about with a "well-formed CI system." GitHub Actions is easily the worst CI tool I've ever used. There are no core features of GitHub that haven't been replicated by GitLab at this point, and in my estimation GitLab did all of it better. But if I put something on GitLab, nobody sees it.


I am surprised by the comments about GH CI. I first started using CI on GL, then moved to GH and found GH's to let me get things done more easily.

It's been years though, and the ease of doing simple things is not always indicative of difficult things. Often quite the contrary...


From what I gather, GH Actions is good for easy scenarios: single-line builds, unit tests, etc. When your CI pipeline starts getting complicated or has a bunch of moving parts, not only do you need to rearchitect parts of it, but you lose a lot of stability.


Bingo. GH Actions is great if you're deploying vanilla web stuff to a vanilla web server. I write firmware. GH Actions is hell.


Easy and good are radically different things.


And this is the core problem with the modern platform internet. One victor (or a handful) takes the lead in a given niche, and it becomes impossible to get away from them without great personal cost - literal, moral, or labor, and usually a combo of all three. And then that company has absolutely no motivation at all to prioritize the quality of the product, merely to extract as much value from the user base as possible.

Facebook has been on that path for well over a decade, and it shows. The service itself is absolute garbage. Users stay because everyone they know is already there and the groups they love are there, and they just tolerate being force-fed AI slop and being monitored. But Facebook is not GROWING as a result; it's slowly dying, much like its aging userbase. But Facebook doesn't care, because no one in charge of any company these days can see further than next quarter's earnings call.


This is a socio-economic problem; it can happen with non-internet platforms too. It's why people end up living in cities, for example. Any system that has addresses, accounts, or any form of identity has the potential for strong network effects.


I would say that your comment is an addition to mine - I think so too. This is another reason for the popularity of GitHub.

For me, this does not negate the convenient things that I originally wrote about.


Github became successful long before those 'social media features' were added, simply because it provided free hosting for open source projects (and free hosting services were still a rare thing back in the noughties).

The previous popular free code hoster was SourceForge, which eventually entered what's now called its "enshittification phase". Github was simply in the right place at the right time to replace SourceForge, and the rest is history.


There's definitely a few phases of Github, feature and popularity wise.

   1. Free hosting with decent UX
   2. Social features
   3. Lifecycle automation features
In this vein, it doing new stuff with AI isn't out of keeping with its development path, but I do think they need to pick a lane and decide if they want to boost professional developer productivity or be a platform for vibe coding.

And probably, if the latter, fork that off into a different platform with a new name. (Microsoft loves naming things! Call it 'Codespaces 365 Live!')


Technically so was BitBucket, but it chose Mercurial over Git initially. If you are old enough, you will remember articles comparing the two, with Mercurial getting slightly more favorable reviews.

And for those who don’t remember SourceForge, it had two major problems in DevEx: first, you couldn’t just get your open source project published - it had to be approved. And once it was, you had an ugly URL. GitHub had pretty URLs.

I remember putting up my very first open source project back before GitHub and going through this huge checklist of what a good open source project must have. Then I saw that people just tossed code onto GitHub as-is - no man pages, little or no documentation, build instructions that resulted in errors, no curated changelog - and realized that things were changing.


Github was faster than BitBucket, and it worked well whether or not JavaScript was enabled, though this does seem to be regressing as of late. I have tried a variety of alternatives; they have all been slower.


> Technically so was BitBucket

The big reason I recall was that GitHub provided free public repos and limited private, while BitBucket was the opposite.

So if you primarily worked with open-source, GitHub was the better choice in that regard.


Mercurial was/is nice and imho smooths off a lot of the unnecessarily rough git edges.

But VCS has always been a standard-preferring space, because its primary point is collaboration, so using something different creates a lot of pain.

And the good ship SS Linux Kernel was a lot of mass for any non-git solution to compete with.


And GitHub got free hosting and support from Engine Yard when they were starting out. I remember it being a big deal when we had to move them from shared hosting to something like 3 dedicated supermicro servers.


> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality, and like it or not are now part of modern software engineering.

I hate that this is perceived as generally true. Stars can be farmed and gamed; and the value of a star does not decay over time. Issues can be automatically closed, or answered with a non-response and closed. Numbers of followers is a networking/platform thing (flag your significance by following people with significant follower numbers).

> Developers are more likely to use a repo that has more stars than its alternatives.

If anything, star numbers reflect first mover advantage rather than code quality. People choosing which one of a number of competing packages to use in their product should consider a lot more than just the star number. Sadly, time pressures on decision makers (and their assumptions) means that detailed consideration rarely happens and star count remains the major factor in choosing whether to include a repo in a project.


Stars, issues closed, PRs, commits, all are pointless metrics.

The metrics you want are mostly ones they don't and can't have. Number of dependent projects for instance.

The metrics they keep are, just as people have said, a way to gamify and keep people interested.


So number of daily/weekly downloads on PyPI/npm/etc?

All these things are a proxy for popularity, and that is a valuable metric. I have seen projects with amazing code quality, but if they are not maintained, eventually they stop working due to updates to dependencies, external APIs, runtime environment, etc. And I have seen projects with meh code quality but so popular that every quirk and weird issue had a known workaround. Take ffmpeg for example: its code is... arcane. But would you choose a random video transcoder written in JavaScript, last updated in 2012, just because of its beautiful code?


It is fine if a dependency hasn't been updated in years, as long as the number of dependent projects hasn't gone down - especially if no issues are getting created, and particularly with cargo- or npm-type package managers, where a dependency may do one small thing that never needs to change. Time since last update can be a good thing; it doesn't always mean abandoned.


I agree with you. I believe it speaks to the power of social proof as well as the time pressures most developers find themselves with.

In non-coding social circles, social proof is even more accepted. So, I think that for a large portion of codebases, social proof is enough.


> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality

They're NOT! Lots of trashy AI projects have 50k+ stars.


You don't need to develop on Github to get this, just mirror your repo.


that's not enough, i still have to engage with contributors on github. on issues and pull requests at a minimum.


Unfortunately the social network aspect is still hugely valuable though. It will take a big change for anything to happen on that front.


> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality

Hahahahahahahahahahahaha...


OK, indicators of interest. Would you bet on a project nobody cares about?


I guess if I viewed software engineering merely as a placing of bets, I would not, but that's the center of the disagreement here. I'm not trying to be a dick (okay, maybe a little, sue me); the grandparent comment mentioned "software engineering."

I can refer you to some github repositories with a low number of stars that are of extraordinarily high quality, and similarly, some shitty software with lots of stars. But I'm sure you get the point.


You are placing a bet that the project will continue to be maintained; you do not know what the future holds. If the project is of any complexity, and you presumably have other responsibilities, you can't do everything yourself; you need the community.


There are projects, or repositories, with a very narrow target audience, sometimes you can count them on one hand. Important repositories for those few who need them, and there aren't any alternatives. Things like decoders for obscure and undocumented backup formats and the like.


Most people would be fine with Forgejo on Codeberg (or self hosted).


> Maintainers don't want to loose their stars

??? Seriously?

> All these things are powerful indicators of quality

Not in my experience....


Why are you so surprised?

People don't just share their stargazing plots "for fun", but because it has meaning for them.


In my 17 years of having a GitHub account I don’t think I’ve ever seen a “stargazing plot”. Have you got an example of one?


> People don't just share their stargazing plots "for fun", but because it has meaning for them.

What's the difference?


> a pioneering (but no longer new) PR system

having used gerrit 10 years ago there's nothing about github's PRs that I like more, today.

> code navigation simply in a web browser

this is nice indeed, true.

> You write code, and almost everything works effortlessly.

if only. GHA are a hot mess because somehow we've landed in a local minimum of pretend-YAML-but-actually-shell-js-jinja-python and they have a smaller or bigger outage every other week, for years now.

> why developers like it so much

most everything else is much worse in at least one area, and the most important thing is that it's what everyone uses. no one got fired for using github.


The main thing I like about Github's PRs is that it's a system I'm already familiar with and have a login/account for. It's tedious going to contribute to a project to find I have to sign up for and learn another system.

I've used Gerrit years ago, so I wasn't totally unfamiliar, but it was still awkward to use when Go was using it for PRs. Notably, that project ended up giving up on it because of the friction for users - and they were probably one of the most likely cases to stick to their guns and use something unusual.


> Notably [go] ended up giving up on [gerrit]

That's not accurate. They more or less still use only Gerrit. They started accepting Github PRs, but not really; see https://go.dev/doc/contribute#sending_a_change_github

> You will need a Gerrit account to respond to your reviewers, including to mark feedback as 'Done' if implemented as suggested

The comments are still in gerrit; you really shouldn't use Github.

The Go reviewers are also more likely than usual to assume you're incompetent if your PR comes from Github, and the review will accordingly be slower and more likely to be rejected, and none of the go core contributors use the weird github PR flow.


> The Go reviewers are also more likely than usual to assume you're incompetent if your PR comes from Github

I've always done it that way, and never got that feeling.


there's certainly a higher rejection rate for github PRs


That seems unsurprising given that it’s the easiest way for most people to do it. Almost any kind of obstacle will filter out the bottom X% of low effort sludge.


correlation, not causation.

The lowest-common-denominator way will always get the worst quality


sure it's correlation, but the signal-to-noise ratio is low enough that if you send it in via github PR, there's a solid chance of it being ignored for months / years before someone decides to take a look.


Oh right. Thanks for the correction - I thought they had moved more to GitHub. Guess not as much as I thought!


Many people confuse competence and dedication.

A competent developer would be more likely to send a PR using the tool with zero friction than to dedicate a few additional hours of his life to creating an account and figuring out how to use some obscure tool.


You are making the same mistake of conflating competence and (lack of) dedication.

Most likely, dedication says little about competence, and vice versa. If you do not want to use the tools available to get something done, and would rather not do the task instead, what does that say about your competence?

I'm not in a position to know or judge this, but I could see how dedication could be a useful proxy for the expected quality of a PR and the interaction that will go with it, which could be useful for popular open source projects. I'm not saying that's necessarily true, just that it's worth considering that some maintainers might have anecdotal experiences along that line.


A competent developer wouldn't call gerrit an obscure tool.


This attitude sucks and is pretty close to just being flame bait. There are all kinds of developer who would have no reason to ever have come across it.


A competent developer should be aware of the tools of the trade.

I'm not saying a competent developer should be proficient in using Gerrit, but they should know that it isn't an obscure tool - it's a Google-sponsored project handling millions of lines of code inside Google and outside it. It's like calling golang an obscure language when all you ever did is Java or TypeScript.


It’s silly to assume that someone isn’t competent just because you know about a tool that they don’t know about. The inverse is almost certainly also true.

Is there some kind of Google-centrism at work here? Most devs don’t work at Google or contribute to Google projects, so there is no reason for them to know anything about Gerrit.


> Most devs don’t work at Google or contribute to Google projects, so there is no reason for them to know anything about Gerrit.

Most devs have never worked on Solaris, but if I ask you about solaris and you don't even know what it is, that's a bad sign for how competent a developer you are.

Most devs have never used prolog or haskell or smalltalk seriously, but if they don't know what they are, that means they don't have curiosity about programming language paradigms, and that's a bad sign.

Most competent professional developers do code review and will run into issues with their code review tooling, and so they'll have some curiosity and look into what's out there.

There's no reason for most developers to know random trivia outside of their area of expertise "what compression format does png use by default", but text editors and code review software are fundamental developer tools, so fundamental that every competent developer I know has enough curiosity to know what's out there. Same for programming languages, shells, and operating systems.


These are all ridiculous shibboleths. I know what Solaris is because I’m an old fart. I’ve never used it nor needed to know anything about it. I’d be just as (in)competent if I’d never heard of it.


> The main thing I like about Github's PRs is that it's a system I'm already familiar with and have a login/account for. It's tedious going to contribute to a project to find I have to sign up for and learn another system.

codeberg supports logging in with GitHub accounts, and the PR interface is exactly the same

you have nothing new to learn!


Yeah and this slavish devotion to keeping the existing (broken imho) PR structure from GH is the one thing I most dislike about Forgejo, but oh well. I still moved my project over to Codeberg.

GH's PR system is semi-tolerable for open source projects. It's downright broken for commercial software teams of any scale.

Like the other commenter: I miss Gerrit and proper comment<->change tracking.


agreed, the github "innovation", i.e. the pull request interface is terrible for anything other than small changes

hopefully codeberg can build on it, and have an "advanced" option


> having used gerrit 10 years ago there's nothing about github's PRs that I like more, today.

I love patch-stack review systems. I understand why they're not more popular - they can be a bit harder to understand and more work to craft - but it's just a wonderful experience once you get them. Making my reviews work in Phabricator made my patchsets in general so much better, and making my patchsets better has improved my communication skills.


I used gerrit a bit at work, but any time I want to contribute to an OSS project that requires it, I just send a message with the bugfix patch attached and leave; it's so much extra effort for drive-by contributions that I don't care.

It's fine for code review in a team, but not really good in the GH-like "a user found a bug, fixed it, and wants to send it" contribution scheme


> a well-formed CI system

Man :| no. I genuinely understand the convenience of using Actions, but it's a horrible product.


Maybe I have low standards, given I've never touched what GitLab or CircleCI have to offer, but compared to my past experiences with Buildbot, Jenkins, and Travis, it's miles ahead of those in my opinion.

Am I missing a truly better alternative, or are CI systems simply all kind of a PITA?


I don't have enough experience w/ Buildbot or Travis to comment on those, but Jenkins?

I get that it got the job done and was standard at one point, but every single Jenkins instance I've seen in the wild is a steaming pile of ... unpatched, unloved, liability. I've come to understand that it isn't necessarily Jenkins at fault, it's teams 'running' their own infrastructure as an afterthought, coupled with the risk of borking the setup at the 'wrong time', which is always. From my experience this pattern seems nearly universal.

Github actions definitely has its warts and missing features, but I'll take managed build services over Jenkins every time.


Jenkins was just built in a pre-container way, so a lot of stuff (unless you specifically make your jobs use containers) depends on the setup of the machine running Jenkins. But that does make some things easier - just harder to make repeatable, as you pretty much need a configuration-management solution to keep the Jenkins machine's config reproducible.

And yes, "we can't be arsed to patch it till it's a problem" is pretty much standard for any on-site infrastructure that doesn't have ops people yelling at devs to keep it up to date, but that's more a SaaS-vs-on-site benefit than a Jenkins failing.


My issue with Github CI is that it doesn't run your code in a container. You just have a github-runner-1 user, and you need to manually check out the repository, do your build, and clean up after you're done. Very dirty and unpredictable. That's for a self-hosted runner.


> My issue with Github CI is that it doesn't run your code in a container.

Is this not what you want?

https://docs.github.com/en/actions/how-tos/write-workflows/c...
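
That is, something like this (image and build command illustrative):

    jobs:
      build:
        runs-on: ubuntu-latest
        container:
          image: rust:1.80   # the job's steps run inside this container
        steps:
          - uses: actions/checkout@v4
          - run: cargo build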

> You just have github-runner-1 user and you need to manually check out repository, do your build and clean up after you're done with it. Very dirty and unpredictable. That's for self-hosted runner.

Yeah, checking out every time is a slight papercut I guess, but it gives you control, as sometimes you don't need to check out anything, or you want a shallow/full clone. If it checked out for you, then there would be other papercuts.

I use their runners, so I never need to do any cleanup and get a fresh slate every time.


Gitlab is much better


Curious what some better options are. I feel it is competing with Jenkins and CircleCI, and it's not that bad.


In what way? I've never had an issue other than outages.


> it’s horrible, i use it every day

> the alternatives are great, i never use them

Every time.


What do you consider a good product in this space?


I'd rather solve advent of code in brainfuck than have to debug their CI workflows ever again.


Surely you just need the workflow to not have embedded logic, but call out to a task runner, so you can do the same locally?


Well then why does 99% of GH Actions functionality even exist?


It is fairly common practice - almost an engineering best practice - to not put logic in CI. Just have it call out to a task runner, so you can run the same command locally for debugging etc. Think of CI more as shell-as-a-service: you're just paying someone to enter some shell commands for you, and you should be able to do exactly the same locally. See the sketch below.

You can take this a step further and use an environment manager to remove the installation of tools from CI as well, for local/remote consistency and more benefits.
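
A sketch of the idea (job and target names illustrative): the workflow only checks out the code and invokes the same task-runner target you'd run locally.

    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make test   # identical to the local command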


To lock you in.


Ergo, I'd rather use brainfuck to program CI.


The big issue with GitHub is that they never denied feeding AI with private repositories (GitLab, for example, denied it when asked). This fact alone makes many users bitter, even at organizations not using private repos per se.


>a well-formed CI system with many developed actions and free runners.

It feels to me like people have become way too reliant on this (in particular, forcing things into CI that could easily be done locally) and too trusting of those runners (ISTR some reports of malware).

>In addition, it is best to use code navigation simply in a web browser.

I've always found their navigation quite clunky and glitchy.


An underrated feature is the code search. Everyone starts out thinking they’ll just slap Elasticsearch or similar in front of the code, but it’s more nuanced than that. GitHub built a bespoke code search engine and published a detailed blog post about it afterwards.


Github's PR and CI systems are some of the worst.


> In addition, it is best to use code navigation simply in a web browser.

IMHO the vanilla Github UI sucks for code browsing, since it's incredibly slow, and the search is also useless (the integrated web VSCode works much better - e.g. press '.' inside a Github project).

> as well as a well-formed CI system with many developed actions and free runners

The only good thing about the Github CI system are the free runners (including free Mac runners), for everything else it's objectively worse than the alternatives (like Gitlab CI).


Well, I guess. It's not a surprise LinkedIn and GitHub are owned by the same entity. Both are degrading down to the same Zuckernet-style engagement hacking, and pseudo-resume self-boosting portfolio-ware. If the value of open source has become "it gets me hired", then ... fine. But that's not why many of us do free software development.

GitHub's evolution as a good open source hosting platform stalled many years ago. Its advantages are its social network effects, not as technical infrastructure.

But from a technology and UX POV it's got growing issues because of this emphasis, and that's why the Zig people have moved, from what I can see.

I moved my projects (https://codeberg.org/timbran/) recently and have so far been impressed enough. Beyond ideological alignment (free software, distaste for Microsoft, wanting to get my stuff off US infrastructure [elbows up], etc.), the two chief advantages are that I could create my own "organization" without shelling out cash, and run my own actions on my own machines.

And I haven't noticed any drop in engagement or in new people noticing the project since moving. GitHub "stars" are a shite way of measuring project success.

Forgejo that's behind Codeberg is similar enough to GitHub that most people will barely notice anyways.

I'm personally not a fan of the code review tools in any of them (GitLab, Forgejo, or GitHub) because they don't support proper tracking of review commits like e.g. Gerrit does, but oh well. At least Forgejo / Codeberg are open to community contribution.


> In addition, it is best to use code navigation simply in a web browser

How do you define "code navigation"? It might've gotten a bit easier with automatic highlighting of selected symbols, but in return the source code viewer got way too laggy, and for a couple of years now it has had this weird bug with misplaced cursors if the code is scrolled horizontally. I actually find myself using the "raw" button more and more often, or cloning the repo even for some quick ad-hoc lookups.

Edit: not to mention the blame view, which actively fights with the browser's built-in search functionality.


Hint: Type the '.' key on any code page or PR.


And now it opens... some VSCode-esque editor in the browser that asks me to sign-in? Why would I want something even more resource-hungry and convoluted just to look up a random thing once in a while?


If you're familiar with VSCode it's quite handy. If you hate VSCode for some reason then just don't use it.


> a pioneering (but no longer new) PR system

Having used Forgejo with AGit now, IMO the PR experience on GitHub is not great when trying to contribute to a new project. It's just unnecessarily convoluted.


What do you like most about agit?


It's just how straightforward it is. With GitHub's fork-then-PR approach I would have to clone, fork, add my fork as a remote to my local clone, push to that remote, and open the PR.

With the AGit flow I just have to clone the repository I want to contribute to, make my changes, and push (to a special ref, but still just a push to the target repo).
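
Concretely, it looks something like this (repo URL hypothetical, target branch assumed to be main):

    git clone https://codeberg.org/example/project.git
    cd project
    # ...edit, commit...
    git push origin HEAD:refs/for/main -o topic=my-fix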

I made some small contributions to Guix when they were still using email for patches, and that (i.e. sending patches directly to upstream) already felt more natural than what GitHub propagates. And AGit feels like the git-native interpretation of this email workflow.


> Having a sponsorship system is also great

They have zero fees for individuals too, which is amazing. Thanks to that, I gained my first sponsor when one of my projects was posted here. It made me wish sponsorships could pay the bills.


I don't get what people are complaining about. I haven't run into these AI issues except for Copilot appearing AS AN OPTION in views. Otherwise it seems to be working the same as it always has.

Is there more?


Would you say Github has any significant advantages over Gitlab in this regard? I always found them to be on par, with incremental advantages on either side.


One of my favourite GitHub features is the ability to do a code search over the whole of GitHub; I'm not sure GitLab had the same when I used to use it?


Code search over all of Gitlab (even if available) wouldn't help much when many of the interesting repos might be on Github. To be truly useful, it would need to index repos across many different forges. But there's a tension in presenting that to users if you're afraid that they might exit your ecosystem to go to another forge.


Embrace, extend, extinguish.

That's not a Victorinox you're looking at; it's a cheap, poorly made, enshittified clone using a decades-old playbook (E-E-E).

The focus on "sponsorship buttons" and features instead of fixes is just a waste of my time.


It is strange that road deaths have been compared in the past, but protection from air pollution has only been discussed since 2026. It is noteworthy that, according to IQAir, the air in the United States is less polluted than in most EU countries.


Yes, but that is due to the vastly different population density.

The USA has 34 people per square km while Germany has 234. So pollution per capita would be a better metric.


The air you breathe is the same regardless of how many people stand next to you also breathing it.


Actually if you're standing next to people the air you breathe in also has some of their exhaust gases in it, in this case slightly elevated CO2. If there's a dozen people in a small meeting room with the windows closed and no AC the air quality is significantly worse in that room than it would be say, stood on the roof... unless you're in the middle of a major city where maybe the air on the roof is full of exhaust from motor vehicles, hence legislation to restrict vehicle exhaust.


But only if they stand.

If they start driving, the situation changes dramatically!


Air in populated cities or air in general? Air quality seems a bit harder to compare across countries than road deaths, considering the US has so much sparsely populated land.


The average air quality in all of the US is not as bad as in some European countries?


[flagged]


You compare a continent to countries? Without any corrections for density or anything? And you ask me about my comprehension skills?


I've recently understood how RSA works and thought it was a cool achievement. But this article with "basic" math... Not so enjoyable for just a dev =)


Sorry about that. I tried to introduce the necessary concepts starting from zero, or I thought I did.

We devs (take back that "just" :) ) deal with much harder stuff when we build complex APIs, so the problem must be at the syntactic level. To us devs, math may look like an antipattern, with all the short names and operator overloading.

But that's unavoidable, unfortunately. It's normal to spend hours or more on a single concept until it clicks. I'd say don't give up, but I understand one's time is valuable, and the return might not be high enough to justify the cost.


That's good news. I hope this will encourage the industry to use the Zig language (and its creators to release version 1.0).


Billie is lucky to have such a dexterous owner!


Similarly, the CEO couldn't resist the outstanding optimization of memory and execution speed!


[flagged]


I am sad you don't believe this story. The CEO was very technical and this is exactly the sort of thing he would spot.


People don't realize that in the era of dinosaurs where MASM ruled and assembly walked the earth, there basically WEREN'T CEOs who didn't know the details, because all the companies doing this stuff were pretty small at the time (and the CEO may have been writing it himself a few years before).


There was a time when Bill Gates wrote code for Microsoft, and he was actually quite good at it.


Not sure why this was voted down. He was very technical, especially for the time: https://www.thecrimson.com/article/2025/6/7/bill-gates-reuni...


He also wrote and published a paper on pancake sorting.


In the era of dinosaurs, neither MASM nor Windows existed, but we still did assembly, or micro-coding (machine coding), or flipped switches.


Pre-MASM Dinos probably weren't doing xor ax, ax.


My first part-time dev job as a student featured me walking in on our CEO, who showed me he was recompiling his kernel to enable some features. I'm quite sure he was just doing that to impress the students, but at least he knew how to!


Perhaps his secretary showed him?


Similarly, if you told people in the 80's that it would be the opposite in the future no one would believe it either.

Not even the developers are very technical in the future!

Woah, really? And they still manage to write good software?

Of course not, if good software would be standing next to their bed at 4 am they would scream who are you what are you doing here? help! help! Someone, make it go away!


CEO doesn't need to mean some big boss. If you have a three person startup, the CEO might just be your co-founding buddy.


> Allocate all necessary memory during startup and avoid dynamic memory allocation after initialization.

Absolutely not advice to take as an ultimatum - it's even harmful in most cases. Just measure your memory usage, and don't rely on (and avoid) the external OOM handler.

