
> Ford CEO says he has 5,000 open mechanic jobs with up to 6-figure salaries from the shortage of manually skilled workers: https://fortune.com/2025/11/12/ford-ceo-manufacturing-jobs-t...

"Up to" is doing a lot of work in that sentence.

Honestly, it seems to me that it's "undoing" a lot of work.

Labor rates at dealerships around me are over $200/h. Granted, the mechanic doesn't get 100% of that, but $200 × 8 hours × 5 days × 52 weeks is over $400k a year. It seems like you could go somewhere else and get the same amount of money as Ford (or more) and not need to worry about future salary increases failing to occur.


The problem is that the mechanics are paid fixed hours for a given type of job (according to the dealership's standard for how long a given job should take). They are not truly being paid per hour. While it's supposed to encourage efficiency, you can imagine how this negatively affects the mechanics as well as the work quality outcomes.

I replied to your comment on my website, but for posterity here: yes, I do think you did a good job on the part about exiting when bestScore > 100. There are nitpicks, but this is fine! It makes sense, and nice use of a select over a send.

I did expect that this exercise would come after the first one though, and doing this on top of a solution to the first exercise is a bit harder. That said, I also don't mean to claim either is impossible. It's just tough to reason about.


Seriously! This caused such a ruckus when I posted this 9 years ago. I lost some professional acquaintanceships over it! Definitely a different reception.


> the complaints about channels have largely been received and agreed upon by experienced Go developers. Channels are still useful but they were way more prominent in the early days of Go as a solution to lots of problems, and nowadays are instead understood as a sharp tool only useful for specific problems.

As the author of the post, it's really gratifying to hear that this is your assessment nowadays. I agree, and while I'm not sure I had much to do with this turn of events (it probably would have happened with or without me), curbing the use of channels is precisely why I wrote the post. I felt like Go could be much better if everyone stopped messing around with them. So, hooray!


Author of the post here, I really like Go! It's my favorite language! It has absolutely nailed high concurrency programming in a way that other languages' solutions make me cringe to think through (await/async are so gross and unnecessary!)

If you are intending to do something that has multiple concurrent tasks ongoing at the same time, I would definitely reach for Go (and maybe be very careful with channels, or skip them entirely). I also would reach for Go if you intend to work with a large group of other software engineers. Go is rigid; when I first started programming I thought I wanted maximum flexibility, but Go brings uniformity to a group of engineers' output in a way that makes the overall team much more productive IMO.

Basically, I think Go is the best choice for server-side or backend programming, with an even stronger case when you're working with a team.


Thanks for the tip! Will definitely take into account your insights on channels if I decide to dive into it.


I have written channel code in the last week. It's part of the deal (especially with the context package). I'm just happy to see them restrained.


Hi! No, I think you've misunderstood the assignment. The example posits that you have a "game" running, which should end when the last player leaves. While only using channels as a synchronization primitive (a la CSP), at what point do you decide the last player has left, and where and when do you call close on the channel?


I don't think there's much trouble at all fixing the toy example by extending the message type to allow communication of the additional conditions, and I think my changes are better than the alternative of using a mutex. Have I overlooked something?

This assumes the number of players is set up front, and that players can only play or leave, not join. If the expectation is that players can come and go freely and the game ends some time after all players have left, I believe this pattern can still be used with minor adjustments

(please overlook the pseudo code adjustments, I'm writing on my phone - I believe this translates reasonably into compilable Go code):

  type Message struct {
    exit  bool
    score int
    reply chan bool
  }

  type Game struct {
    bestScore int
    players   int // > 0
    messages  chan Message
  }

  func (g *Game) run() {
    for message := range g.messages {
      if message.exit {
        g.players--
        if g.players == 0 {
          return
        }
        continue
      }

      if g.bestScore < 100 && g.bestScore < message.score {
        g.bestScore = message.score
      }

      acceptingScores := g.bestScore < 100
      message.reply <- acceptingScores
    }
  }

  func (g *Game) HandlePlayer(p Player) error {
    reply := make(chan bool)
    for {
      score, err := p.NextScore()
      if err != nil {
        g.messages <- Message{exit: true}
        return err
      }
      g.messages <- Message{score: score, reply: reply}
      if !<-reply {
        g.messages <- Message{exit: true}
        return nil
      }
    }
  }


I don't think channels should be used for everything. In some cases I think it's possible to end up with very lean code. But yes, if you have a stop channel for the other stop channel it probably means you should build your code around other mechanisms.

Since CSP is mentioned, how much would this apply to most applications anyway? If I write a small server program, I probably won't want to write it on paper first. With one possible exception I never heard of anyone writing programs based on CSP (calculations?)


> Since CSP is mentioned, how much would this apply to most applications anyway? If I write a small server program, I probably won't want to write it on paper first. With one possible exception I never heard of anyone writing programs based on CSP (calculations?)

CSP is really in the realm of formal methods. No, you wouldn't formulate your server program as CSP, but if you were writing software for a medical device, perhaps you would.

This is the FDR4 model checker for CSP; its input is a functional language that implements CSP semantics, and it may be used to assert (by exhaustive state-space checking, IIRC) the correctness of your CSP model.

https://cocotec.io/fdr/

I believe I'm in the minority of Go developers who have studied CSP. I fell into Go by accident and only took a CSP course at university because it was interesting; however, I do give credit to studying CSP for my successes with Go.


Naive question, can't you just have a player count alongside the best score and leave when that reaches 0?


Adding an atomic counter is absolutely a great solution in the real world; compare-and-swap, a mutex, or similar is exactly what you want to do. In fact, that's my point in that part of the post: you want an atomic variable or a mutex or something there. Other synchronization primitives are more useful than sticking with the CSP idea of only using channels for synchronization.


Haven't read the article but it sounds like a waitgroup would suffice.


I assume the basics are similar. The Miyoo Mini Plus has WiFi, which is great, but no GPU, so I assume you end up with different drivers. But the CPU is the same so everything else should be equivalent.


To clarify why I said the CPU is the same and the other poster says the A30 has a better CPU, both are true! Same architecture (Cortex A7), but the Mini Plus has 2 cores and the A30 has 4 cores.


How can this be such a top-voted answer? What?

Without even going into the unsubstantiated assertion with #1, your comment on number 3 shows a dramatic misunderstanding of how compounding effects work. You can't use the last 20 years to linearly project like this. It is true that most scientists agree that humanity will likely not go completely extinct, but it is also true that most scientists agree that many, many individual humans will be impacted. It is tough to say just exactly how humans will be impacted, but think famine, war, major societal upheaval.

Here's a citation if it helps: https://academic.oup.com/bioscience/article/71/9/894/6325731


He said 20 years of doom predictions that haven't come true. Compounding effects don't apply to the accuracy of academic predictions. It's not like academic accuracy automatically gets exponentially better over time. Linear projection when attempting to guesstimate the accuracy of future predictions by a group of people who have also made predictions in the past is a fairly reasonable thing to do, unless you have some specific evidence that they significantly improved their methodology.

Your citation is merely an advocacy piece, not science. For example the first diagram contains charts of fertility rates, institutional assets divested and world GDP whilst claiming they are "climate related human activities". Presenting a nearly random collection of metrics as evidence for your argument isn't a sign of robust thinking or argumentation.


When someone says "20 years of doom predictions haven't come true", I charitably assumed that claim was about scientific consensus predictions, but perhaps I can't assume that everyone shares knowledge of what that is.

So far, all data says that the climate scientists are dead on and have been very accurate: https://eps.harvard.edu/files/eps/files/hausfather_2020_eval...

What doom predictions from the last 20 years haven't come true? If someone says that doom hasn't happened yet, I guess what I want to say is that they haven't waited long enough.

I think the climate scientists are frustrated and giving up. https://www.nytimes.com/2022/03/01/climate/ipcc-climate-scie.... My initial link was an attempt to show where the Overton window is regarding the experts in this field, more than anything else. This comment is probably not the right place to bring someone up to speed with the climate science field when they can Google it themselves.


Although it's not well known, you unfortunately can't use temperature data to judge whether climatological predictions are correct. That's because the databases of temperature data that are presented as "global temperature" (a fundamentally statistical product) are themselves maintained by climatologists. It's a bit like asking a CEO whether his products are good, and he cites his own private data on customer happiness to prove that it is. Lots of people wouldn't accept this as evidence because it's not independent. The data might be correct, but his salary is at stake and so there's the risk of shenanigans. You'd want a truly independent assessment.

Climatologists like to claim that they are of course far better than that and would never abuse their monopoly position on such data, but they also regularly change those databases in ways that retroactively make failing predictions correct. Like here [1] where they declared a new record-breaking temperature that was lower than their previously announced record. They didn't mention that anywhere but the previous press release was still on their website and somebody noticed.

Anyway, you're right, let's Google things. Here are a few failed predictions from 20 years ago that can be judged without using temperature databases:

• Dr David Viner, climatologist, March 2000. "Children just aren't going to know what snow is". David Parker, climatologist, same article. "British children could have only virtual experience of snow." [2]

• "Australia faces permanent drought due to climate change", 2003 [3]. Dr James Risbey, Center for Dynamical Meteorology and Oceanography at Melbourne's Monash University, says "the situation is probably not being confronted as full-on as it should". Current data shows no drought [4]

• Pentagon report, 2004 [5]. By 2020 the weather in Britain "will begin to resemble Siberia", by 2007 violent storms have rendered parts of the Netherlands uninhabitable. "A ‘significant drop’ in the planet’s ability to sustain its present population will become apparent over the next 20 years". "Immigrants from Scandinavia seek warmer climes to the south." None of that is even close. "Senior climatologists, however, believe that [the author's] verdicts could prove the catalyst in forcing Bush to accept climate change as a real and happening phenomenon."

There are hundreds more like this. It's inevitable that people take this history into account, and kinda unfair to demand that people don't. If there had been rigorous investigations of what went wrong in these cases, and clear evidence of learning or regulation of the field in the same way as happens in other areas of life after big failures, then people's confidence might be higher.

----

[1] https://retractionwatch.com/2021/08/16/will-the-real-hottest...

[2] https://web.archive.org/web/20150114205355/https://www.indep...

[3] https://web.archive.org/web/20200825073015/https://www.wired...

[4] http://www.bom.gov.au/climate/maps/rainfall/?variable=rainfa...

[5] https://www.theguardian.com/environment/2004/feb/22/usnews.t...


Why? What compliance reasons make Ubuntu LTS work and RHEL not work?


Stupid auditors/pentesters, really. Explained a bit in another comment, but essentially we had to explain the concept of backporting CVE fixes to the same 'version' of random libs to the auditors, and to get certified we would have had to demonstrate, with actual source, that each of ~200 or so CVEs was fixed in various system parts (individually).

In the end, we just went with ubuntu for those nodes, and they all passed the certification. Shrug.

Since then, we don't even need the OS to be certified, since we are using confidential computing, and we stuck with ubuntu for our k8s nodes etc -- but we are forbidden from using rhel anywhere by our legal / compliance people now.


The issue here is with your auditors. I mean, if RH tells you a CVE has been fixed with a backport, sure, you can challenge that claim, but by the same standard your auditor would also have to check the actual source of your patched Ubuntu packages to make sure the new versions fixed the security bugs.

The bottom line really is that plenty of auditors I've seen don't know how to check for vulnerabilities other than by checking a version. That's it. Their tools or reporting only know that a package must have a version greater than x.y.z.


I made a bunch of maps to answer this question! https://www.jtolio.com/2022/07/anthropocene-calamity-part-8-...


Here's my wizard if you want to tweak the maps yourself: https://climatedash.fly.dev/?selection=tmean_avg_2050+&filte...


Wonderful read!

But a few things I'm missing are:

- Second (higher) order availability: even if the hospital you need to go to is within biking distance, is the hospital itself sourcing its materials (needles, plasters, etc.) locally? Otherwise you might end up at a hospital that can't treat you.

- In your modelling you only accounted for weather effects, and elsewhere assumed that, hopefully, immigration will somehow be managed well. Suppose it will not: how will you manage that and ensure you will not lose everything, when everyone suddenly moves to where you are and all wealth is taken from people who have some? Is Traverse City perhaps sufficiently insulated so that people won't easily reach it? Does it have a strong police force?

Lastly, I'm recommending the "nodes of persisting complexity" article, if you're interested. They carry out an analysis like you do, but at country level. Sadly, the US doesn't score too high.

