phodge's comments | Hacker News

This article conflates the Monolith|Microservices and Monorepo|Polyrepo dichotomies. Although it is typical to choose Microservices and Polyrepo together or Monolith with Monorepo, it's not strictly necessary and the two architectural decisions come with different tradeoffs.

For example you may be forced to split out some components into separate services because they require a different technology stack to the monolith, but that doesn't strictly require a separate source code repository.


That could be an interesting site when it's done, but I couldn't see where you factor in the price of electricity for running bare metal in a 24/7 climate-controlled environment, which I would expect is the biggest expense by far.


The first FAQ question addresses exactly that: colocation costs are added to every bare metal item (even storage drives).

Note that this isn't intended to be used for accounting, but for estimating, and it's good at that. If anything, it's more favorable to the cloud (e.g., no egress costs).

If you're on the cloud right now and BMS shows you can save a lot of money, that's a good indicator to carefully research the subject.


I ran a survey and some one-on-one interviews with 30 or so engineers at my workplace to get some real data about the impact of slow tools. IIRC, the results were something like this:

* Most engineers can maintain focus on a task for up to 10 seconds while waiting for a slow tool. However, a couple of engineers (myself included) get derailed after only about 3-5 seconds.

* A tool that runs for more than a minute will cause just about every engineer to switch tasks while it completes. (This means they stop working on their highest-impact work and likely spend time on lower-impact things.)

* A tool that runs for more than 15 minutes forces every engineer into deliberate multi-tasking, trying to make progress on two major work items at once.

Also, I wouldn't have read the article or posted this comment if it weren't for the slow CI pipelines at my workplace.


One of the worst unspoken costs is the cognitive load associated with these long-running things, e.g. compiles or CI. Ideally they succeed, but that's not always the case, so you're doing something else while - consciously or not - anxiously waiting for the other thing to complete.

And when - when! - that job fails, you can be pretty sure it racks up some cortisol: you have to interrupt† your second task, pop back to the original context, curse at yourself for forgetting that comma or whatever simple thing, rerun, rinse, and repeat.

† The alternative is to finish your other task (or reach a checkpoint in it) first, but that adds cumulative latency to the original task, which is supposed to be your main one.


If everyone on the team is easily distracted, then why would you focus on speeding up slow tooling rather than on making your team resilient to distractions?


Ah, yes, just change human nature. That sounds much easier than speeding up a program.


You can train yourself to focus better and to resist distraction, although it definitely is hard to do. The benefit is that it works for everything, from running a script to ignoring Slack to being able to work in an open office. Improving how long your code takes to run only fixes that specific distraction.


First of all, this sounds like an amazing project. My workplace runs isort+flake8 in pre-commit hooks, so making these even 10x faster would be a huge quality of life improvement for us.

Personally, I'm interested to hear from you the specific reasons you can't achieve this kind of performance with CPython.

Usually the major factors are: A) python's generalised data structures (int, list, etc); B) extra overhead of common operations like reading a variable value, calling a function or iterating over a collection; C) no real multithreading (i.e. the GIL); D) lack of control over memory management.

I'd love to know if there's anything else that makes Python that much slower.
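
To make point B concrete, here's a tiny, unscientific sketch (my own, not from any real benchmark) of how much the interpreter's generic machinery costs on plain arithmetic; the absolute numbers will vary by machine:

    import timeit

    n = 1_000_000
    # every += here goes through generic object machinery (point B)
    py_loop = timeit.timeit(
        "total = 0\nfor i in range(n):\n    total += i",
        globals={"n": n}, number=10)
    # the builtin does the same additions in C
    builtin = timeit.timeit("sum(range(n))", globals={"n": n}, number=10)
    print(f"pure-Python loop: {py_loop:.3f}s   builtin sum: {builtin:.3f}s")

On CPython the pure-Python loop is typically several times slower, which is the kind of per-operation overhead a compiled tool never pays.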


It's mostly the reasons you've hit on but I'll try to add some color to them based on my experience with Ruff.

1. The "fearless concurrency" that you get with Rust is a big one though. Ruff has a really simple parallelism model right now (each file is a separate task), but even that goes a long way. I always found Python's multi-processing to be really challenging -- hard to get right, but also, the performance characteristics are often confusing and unintuitive to me.

2. Ruff performs very few allocations, and Rust gives us the level of control to make that possible. (I'd like to perform even fewer...) We tokenize each file once, run some checks over that stream, parse it into an AST, run some checks over that AST, and with a few exceptions, the only allocations outside of that process are for the Violation structs themselves.

3. Related to the above (and this would be possible with CPython too), by shipping an integrated tool, we can consolidate a lot of work that would otherwise be duplicated in a more traditional setup. If you're using a bunch of disparate tools, and they all need a tokenized representation, or they all need the AST, then they're all going to repeat that work. With Ruff, we tokenize and parse once, and share that representation across the linter.

4. Again possible with CPython, but in Ruff, we take a lot of care to only do the "necessary" work on a given invocation. So if you have the isort rules enabled, we'll do the work necessary to sort your imports; but if you don't, we skip that step entirely. It sounds obvious, but we try to extend this "all the way down": so if you have a subset of the pycodestyle rules enabled, we'll avoid running any of the expensive regexes that would be required to power the ignored rules.
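
To make point 1 concrete, here's a rough sketch (my own illustration, not Ruff's actual code) of the "each file is a separate task" model, expressed with a Python process pool; the body of lint_file is just a placeholder for the real per-file work:

    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    def lint_file(path):
        # stand-in for the real work: read, tokenize, parse, run checks
        text = Path(path).read_text()
        return path, text.count("TODO")

    if __name__ == "__main__":
        files = [str(p) for p in Path(".").rglob("*.py")]
        with ProcessPoolExecutor() as pool:
            for path, count in pool.map(lint_file, files):
                print(f"{path}: {count} TODOs")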
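
And a minimal sketch of points 3 and 4 together (again my own illustration, with made-up rule names): the source is parsed exactly once, every check shares that AST, and rules that aren't enabled are never run at all:

    import ast

    def check_bare_except(tree):
        return [f"{node.lineno}: bare except" for node in ast.walk(tree)
                if isinstance(node, ast.ExceptHandler) and node.type is None]

    def check_mutable_default(tree):
        return [f"{node.lineno}: mutable default argument"
                for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)
                for default in node.args.defaults
                if isinstance(default, (ast.List, ast.Dict, ast.Set))]

    CHECKS = {"bare-except": check_bare_except,
              "mutable-default": check_mutable_default}

    def lint(source, enabled):
        tree = ast.parse(source)          # tokenized/parsed once...
        findings = []
        for name, check in CHECKS.items():
            if name in enabled:           # ...and disabled rules cost nothing
                findings.extend(check(tree))
        return findings

    code = "def f(x=[]):\n    try:\n        pass\n    except:\n        pass\n"
    print(lint(code, enabled={"bare-except"}))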


There actually _is_ a good argument in there, but the article is really poorly written and all the preamble about SourceForge ends up being a distraction from what you really need to stop and think about:

FOSS projects like the Linux kernel use the GPL license because the developers want their code to be free not just for themselves, but for everyone, everywhere, for all time. The license doesn't permit you to take their work and use it to build an alternate operating system that you aren't going to share. If this weren't important to them, they could have just published their code under MIT/BSD licenses.

If you were to build an AI that used the Linux source code to generate a "new" closed-source operating system, in a very real sense all you've done is invent a new way to plagiarize the Linux community's work so that you can weasel your way out of their license terms. Even if you got away with this in the courts, it's obviously very unethical.

What Copilot does is enable the mass plagiarizing of open source code from everyone all at once, mixed up together so that it's hard to know who the original authors were, and then pretend that somehow this makes it ethical.


I've never worked in virus research, but my understanding is that any researcher would be keeping meticulous records of every virus they're studying, as well as detailed information about genetic differences with any variants they have produced. So if the Chinese govt simply seized access to all research projects at the lab at Wuhan, they would have been able to compare all viruses within the lab with SARS-CoV-2 within a matter of weeks and have an extremely confident Yes or No as to whether it came from their lab.

I'd love to be refuted on the above by someone with actual viral research experience, because the alternative conclusion is that the Chinese govt has known the true origin of SARS-CoV-2 since early 2020 and simply won't tell anyone.


The lab did do that (...or just claimed they did, if you're inclined to believe there's been a cover-up):

>Shi instructed her group to repeat the tests and, at the same time, sent the samples to another facility to sequence the full viral genomes. Meanwhile she frantically went through her own lab’s records from the past few years to check for any mishandling of experimental materials, especially during disposal. Shi breathed a sigh of relief when the results came back: none of the sequences matched those of the viruses her team had sampled from bat caves. “That really took a load off my mind,” she says. “I had not slept a wink for days.”

https://www.scientificamerican.com/article/how-chinas-bat-wo...


If she really did do this, why didn't she publish all the records for public perusal? Even if the records can't be proven to not have been altered, it seems this would be a show of good faith.


Now how would anyone else ever get access to that evidence, if the people who physically control it don't want it to be widely known?

If indeed it ever existed, such would almost certainly have been destroyed by now.

Ultimately the origin story only matters to those who would push political narratives of good/evil, guilty/innocent. We have to live in the world that exists today regardless of whether it was chance or carelessness that caused it.


In Australia and most other developed countries now, this ideology is heavily promoted to children, encouraging them to believe they are trans. They are connected with websites that promote the ideology, and then connected with a trans specialist who helps prescribe puberty blockers without parental knowledge.

If you are a parent who believes that children should be taught to love their own bodies as they grow rather than have surgeons pretend to fix them by removing essential organs, then this represents a massive assault on your offspring.

> The numbers I can find for US citizens is: 0.6%

And there's a huge number of "trans" people who later realise they were sold a lie and have to undergo further surgery to try and restore their original sex. Selling this ideology to children is going to dramatically increase that 0.6%. How many of the new cases are going to actually be trans, vs children who thought they were trans and started puberty blockers at school, but actually were just never taught to love their body?


This post has a very noticeable lack of citations and vague terms like "huge" which make it very likely that you are speaking from personal bias rather than any kind of expertise.


Australian parent here. While recognising I'm a sample of one, I've never seen trans ideology being promoted, nor heard of it from my many parent friends.

If you have some examples please share. I suspect you've come across an article pushing an edge case and trying to make it out as normal. This type of media is common at the more extreme ends of any viewpoint.


> And there's a huge number of "trans" who later realise they were sold a lie and have to undergo further surgery to try and restore their original sex.

There is no way this is a huge number. I'd be shocked if you could find a credible source on this. This sounds like conservative agitprop.


Studies have consistently shown that most trans children revert to their original gender identity post-puberty [0]:

>The exact rate of desistance varied by study, but overall, they concluded that about 80 percent trans kids eventually identified as their sex at birth. Some trans activists and academics, however, argue that these studies are flawed, the patients surveyed weren't really transgender, and that mass desistance doesn't exist.

>Indeed, some of the studies cited by Cantor had sample sizes as low as 16 people and were more than 40 years old, and one was an unpublished doctoral dissertation. But the most recent study, published in 2013 in the Journal of the American Academy of Child and Adolescent Psychiatry, followed up with 127 adolescent patients at a gender identity clinic in Amsterdam and found that two-thirds ultimately identified as the gender they were assigned at birth.

This says nothing about what percentage of patients underwent surgery to "restore" their original sex. Orchiectomies, mastectomies, and hysterectomies are irreversible in any case, not to mention bottom surgery.

[0] https://www.thestranger.com/features/2017/06/28/25252342/the...


From your article

> By all accounts, detransitioners make up a tiny percentage of that already small population: A 50-year study out of Sweden found that only 2.2 percent of people who medically transitioned later experienced "transition regret."

I was speaking specifically to GP's point about gender affirming surgery being reversed. If only 2.2% of people who have had this surgery experience regret, the proportion which reverses the surgery must be even smaller.

"Huge number who [...] have to undergo further surgery to try and restore their original sex" is not supported by the data.

It is not easy for trans people to get gender affirming surgery, so I think the conservative "concern" that "children are going to get surgery and regret it" is vastly overblown: children receiving gender affirming surgery is so rare today, and involves jumping through so many hoops, that a tiny proportion of a tiny proportion of a tiny proportion of people seems like a silly subject for policy debate.

Teenagers are dumb and fickle (I know I was!), so I think there should be a non-zero number of hoops for teens to jump through to filter out the "just a phase" cases for any surgery (or even just tattoos) that will have a permanent impact on their bodies. But 'concerns' about surgical detransitioning are primarily just scaremongering.


The study they cited included both gender non-conforming and gender dysphoric children. Non-conforming doesn't mean trans.


They cited a wide body of studies, including some which covered only trans kids. All of the studies produced consistent results. For those who didn't read the full article, here's the preamble to the 80% figure quoted above:

>There have, however, been almost a dozen studies of looking at the rate of "desistance," among trans-identified kids—which, in this context, refers to cases in which trans kids eventually identify as their sex at birth. Canadian sex researcher James Cantor summarized those studies' findings in a blog post: "Despite the differences in country, culture, decade, and follow-up length and method, all the studies have come to a remarkably similar conclusion: Only very few trans-kids still want to transition by the time they are adults. Instead, they generally turn out to be regular gay or lesbian folks." The exact rate of desistance varied by study, but overall, they concluded that about 80 percent trans kids eventually identified as their sex at birth.


They cited various problems with the other studies.

All of Cantor's sources included non-conforming behavior or "sub-threshold" gender identity disorder.[1] He just ignored the numbers lost to follow-up. And several studies found predictors of persistence, like meeting the criteria for gender identity disorder.[2]

[1] http://www.sexologytoday.org/2016/01/do-trans-kids-stay-tran...

[2] https://pubmed.ncbi.nlm.nih.gov/18981931/


>In Australia and most other developed countries now, this ideology is heavily promoted to children, encouraging them to believe they are trans.

Proof? As an Australian with multiple family members in the education industry, I've never heard of this.


Seems a bit overly dramatic. Lots of aggressive adjectives.


I don't see how your version is easier than the sequence of commands in the first few steps of the article, which is basically `pip install flit; flit init; flit publish`. Flit is just as easy to install as twine, but you save yourself the hassle of having to write a setup.py.


Maybe I'm too old-fashioned then. But I like that you don't have any dependencies when using distutils/setuptools with a `setup.py` file, so if you don't distribute your code you're already done. I'm also not a fan of tools that are just wrappers around other tools.


Flit isn't (mostly) a wrapper around other tools - it has its own code to create and upload packages. This was one of the motivating cases for the PEPs (517, 518) defining a standard interface for build tools, so it's practical to make tools like this without wrapping setuptools.

(flit install does wrap pip, however)


How does setuptools not count as a dependency?

If you've never run into setuptools compatibility problems, you've either been much luckier than me, or you haven't done much with Python packages.

Vanilla Ubuntu used to come with a version of setuptools which didn't work for installing many recent packages.


"Monolith or Microservices" and "Monorepo vs Polyrepo" are best treated as two entirely separate engineering questions. You can have a Monolith built from code in many repos, or many microservices maintained in one big repo.

There are various pros and cons for whichever combination you choose; there's no "best" answer that applies to every single situation.


Seconded.

At a previous job we were mostly Serverless/Microservices, but tested and deployed as a Monolith. We got many benefits from the Monolith (simplified management, testing, and code wrangling), and some benefits of Microservices (theoretically independent functions, scaling and cost down to zero).


`set confirm` is another great option to turn on all the time. It will save you having to retype a command with the `!` added.

