
The key is to work around the text input. If you want to say "kill myself", you input "kill my", then complete the "self" portion by pressing delete (remove the space), then s-e-l-f. I feel like most of my typing time is spent making these corrections; it's very quick to swipe, but corrections are almost always necessary, and they are an order of magnitude slower. Yesterday, for example, I tried to swipe "succession" but it really wanted to output "secession", so I changed my strategy to "success" (it really liked this word), then delete (remove the space), then i-o-n.

I think every time I swipe I need to do at least one correction like this, where I type one similarly spelled word with as small an edit distance as I can think of in the moment, then do a manual correction.


it's kinda bleak realizing I've been running the same cursed workflow for way too long. brb gonna disable that autocomplete


Except sometimes the autocorrection will “helpfully” replace the prior word to jibe with its model of the universe. Incredibly frustrating.


Horrifying.


The most eye-opening experience in my personal development was attending HR conferences (we sold an HR product, but I am an engineer), where speakers were openly saying this out loud. I know you won’t believe me given your statement, but, using codewords, they said they were trying to hire “diverse candidates”, retain “diverse candidates”, explicitly mark departing “non-diverse candidates” as non-regrettable churn, filter and search for diverse employees within the company to fast-track for promotion, etc. I was in shock at how brazenly they were saying the quiet part out loud, and breaking the law. This was 10 years ago; there were no repercussions for it. In fact, they were all lauded.


It wasn’t even coded in many cases. I’ve had pitch meetings where I had to explain how I was brown as part of an express consideration of the business decision. White people talked about my race to my face more in 2020-2021 than during seven years in the south starting right after 9/11.

Some “DEI” was high level measures like recruiting at a broader set of universities. But in the last 5 years it routinely got down to discussing the race of specific individuals in the context of whether to hire them or enter into business relationships.


It's funny how everyone brings up all these anecdotes, when the reality is that there are plenty of studies showing that if your name is associated with being black, you have a much lower chance of being invited to an interview.

So it seems like all this talk by HR people didn't really change any hiring practices. It's also funny how everyone is outraged by the DEI programs instead of by the real discrimination that is happening in hiring.


Hint: if everyone has such anecdotes, they are no longer anecdotes.


It's enough to show that something isn't ultra rare, but it's not enough to show whether it's happening at 0.1% of companies or 90% of companies or where in between.

If someone is racist in a manner that's outweighed 10:1 by opposite racist practices, that's something we do want to stop, but it shouldn't be top priority and definitely shouldn't be treated as the example of what racism looks like these days.


There is very little evidence of those “opposite racist practices” that are supposedly 10 times more common, at least in large corporations and universities. Microsoft was out there promising to double the percentage of black executives. Where are the big corporations promising to double the number of white executives?


What do you think happens when one level of leadership sets a metric as a goal, and likely ties someone's bonus to that goal?

The metric-goal gets pushed down to lower hierarchy levels, and from then on, all it takes is turning a blind eye and you get the results we've seen in the court case I cited above. The smart ones just don't put it in writing.


As mentioned a couple of comments up, something like this: https://bfi.uchicago.edu/wp-content/uploads/2024/04/A-Discri... is way more impactful than some CEO choices.

I can't find the Microsoft thing, but apparently among Fortune 500 companies only 1.6% of CEOs are black. Even double that would still be an extremely low number. So unless you think some truly cosmic random odds happened here, that 1.6% is evidence of lots of racism.


Why is that a low number? What is the correct percentage of Fortune 500 CEOs to be black, or of any other specific ethnic background?


These studies are misleading, because they try to create race signals by using names that are also class signals: https://newsroom.ucla.edu/releases/ucla-study-suggests-resea....

Also, the study suggests that, even with this flawed methodology, a bulk of industries are in the least discrimination category with only a 3% lower callback rate for “black sounding names.”


Do you have any numbers for trying to correct for that factor?

And the bulk are not at 3%; the bulk are between 5% and 10%. 3% was the absolute lowest.

Also, you didn't mention the CEO thing; does that mean my numbers were sufficient to address that worry?


As I read it, the industries were grouped into three categories. “Least discriminatory” was at 3%. Those are all the industries in green. These are small differences in a study that’s not well designed to begin with.

The explicit discrimination in universities against whites and asians is huge in comparison: https://nypost.com/2023/06/29/supreme-court-affirmative-acti.... A black applicant to Harvard with an academic index in the 5th decile had an 800-900% higher chance of admission than a white or asian candidate with the same qualifications. This isn’t just CEOs. The pattern was similar at UNC, a state school.


> As I read it, the industries were grouped into three categories. “Least discriminatory” was at 3%.

The lowest single industry was 3%. And each single industry is a very noisy data point, based on a couple of companies and in need of more data. By the time you aggregate into more solid data, like those bigger categories, it's more than 3%.

But the whole thing could use better methods and more data for sure.

> The explicit discrimination in universities against whites and asians is huge in comparison

In comparison to this specific resume effect it's pretty big, but that was just a basic example, not an attempt to list the biggest issue.

In comparison to the fortune 500 CEOs the overall effect here is smaller (no I'm not going to look at 5th decile in isolation).

Also, even after this bias was applied, they're admitting a below-population-average share of black students and a far above-population-average share of asian students. So there's a bunch of other data necessary to properly analyze what's going on and how bad it is. Should there be a super tight correlation to academic decile? There are huge differences in school quality that muddy the signals, and those differences often correlate with race.

I'm not saying they did nothing wrong, but I'm saying it's unclear what the numbers should have been.


No, they're still anecdotes.

   anecdote   /'ænɪk,doʊt/
   noun
   short account of an incident (especially a biographical one)


A lot of the contemporary formal scientific process is done incredibly badly, for a variety of reasons, including overt political bias on the part of individual scientists working in the academic system, pressure to publish any results including poor ones, and outright laziness and fraud. In general, we shouldn't assume that if a bunch of public scientific studies purport to show that some phenomenon is happening, the phenomenon is actually happening. It takes substantial time, effort, and experience to evaluate whether a claimed scientific result is valid, all the more so when that result has immediate political policy implications.


Localstack makes that pretty easy. Before Localstack, I had a pre-staging (dev) environment target I would deploy to. Their free/community offering includes a Lambda environment; you deploy your dev "Lambda" locally to Docker, using the same Terraform/deploy flow you'd normally use, but pointed at a Localstack server that mimics the AWS API instead. Some of their other offerings (CloudFront, ECS, others) require you to pay, though, and I don't use those, yet at least.
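
For illustration, a rough sketch (Node.js, AWS SDK v3) of pointing a client at Localstack instead of AWS; the endpoint is Localstack's default edge port, and the function name and payload are hypothetical:

    // invoke a Lambda that was deployed to Localstack, not AWS
    import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

    const lambda = new LambdaClient({
      region: "us-east-1",
      endpoint: "http://localhost:4566", // Localstack's default edge endpoint
      credentials: { accessKeyId: "test", secretAccessKey: "test" }, // dummy creds
    });

    const res = await lambda.send(new InvokeCommand({
      FunctionName: "my-dev-function", // hypothetical function name
      Payload: JSON.stringify({ ping: true }),
    }));
    console.log(new TextDecoder().decode(res.Payload));

The nice part is the application code stays identical; only the endpoint (and dummy credentials) differ between dev and prod.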


I dealt with a microservice-style serverless backend that was spread out across roughly 50 Go-based Lambdas, one per API endpoint, with a few more for SQS and Cognito triggers. It was deployed via CloudFormation. Testing locally was an absolute nightmare, even with Localstack.

Made me wish for a simple VPS, an Ansible script to set up nginx and fail2ban and all that shit, and a deployment pipeline that scp'd a single binary into the right place.


I had a horrible time with Localstack. It's very similar to AWS but not exactly the same, so you hit all kinds of annoying edges and subtle bugs or differences and you're basically just doing multi-cloud at that point. The same kinds of problems you encounter with slightly inaccurate mocks in automated testing.

The better solution in my experience is to create an entirely separate account from your production environments and just deploy things there to test. At least that way you're dealing with the real SQS, S3, etc., and if you have integration problems there, they're actually real things you need to fix for it to work in prod - not just weird bandaids to make it work with Localstack too.


How do you keep the accounts in sync? Isn't deploying to remote a fairly slow process?


Developer-specific sandbox environments with hot code reload are the gold standard here, but Localstack is great if you can't do that due to (usually IT-department-related, not technical) reasons.


> The trouble is that "fast" doesn't mean anything without a point of comparison.

This is what people are missing. Even those "slow" apps are faster than their alternatives. People demand and seek out "fast", and I think the OP article misses this.

Even the "slow" applications are faster than their alternatives or have an edge in terms of speed for why people use them. In other words, people here say "well wait a second, I see people using slow apps all the time! People don't care about speed!", without realizing that the user has already optimized for speed for their use case. Maybe they use app A which is 50% as fast as app B, but app A is available on their toolbar right now, and to even know that app B exists and to install it and learn how to use it would require numerous hours of ramp up time. If the user was presented with app A and app B side by side, all things equal, they will choose B every time. There's proficiency and familiarity; if B is only 5% faster than A, but switching to B has an upfront cost in days to able to utilize that speed, well that is a hidden speed cost and why the user will choose A until B makes it worth it.

Speed is almost always the universal characteristic people select for, all things equal. Just because something faster exists, and it's niche and hard to use (not an equal comparison to the common "slow" option people are familiar with), it doesn't mean that people reject speed; they just don't want to spend time learning the new thing, because it is _slower_ to learn how to use the new thing at first.


As a layman, wouldn't you need a more accurate clock to measure the accuracy of a clock? How is clock accuracy measured when the clock is the most accurate clock?


Well, for example, you can take several of the clocks and compare them.
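
If the clocks' errors are uncorrelated, the pairwise comparisons let you solve for each clock's individual variance; this is the classic "three-cornered hat" method. With three clocks A, B, and C you can only ever measure the differences A-B, A-C, and B-C, but then:

    var(A) = [ var(A-B) + var(A-C) - var(B-C) ] / 2

and symmetrically for B and C, since each pairwise variance is the sum of the two clocks' individual variances.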


Would that require the assumption that any inaccuracies affecting multiple clocks are uncorrelated with each other?


Felt more like an article legitimizing an origin myth than establishing authorship.


People are still killing each other in modern times because their teams have slightly different magical beliefs and then use that as an excuse to not see each other as real human beings but as foreign "enemies".


Great video, I really enjoyed how down-to-earth it was. It reminded me of The Secret Life of Machines [1], where we get to peek behind the curtain and see how seemingly "magical" machines (in your case, a digital computer) emerge from simple, fundamental concepts.

[1] https://en.wikipedia.org/wiki/The_Secret_Life_of_Machines


Spotify used to have a "dislike" button for their Discover Weekly, which helped with pruning music you don't like, but in keeping with the natural law of tech enshittification they removed that feature a month ago.


That was such a frustrating decision. I had almost convinced Spotify that that one time I listened to Lustmord was just a random mood, and I don't actually want to only listen to dronecore for the rest of my life.


I don't know those terms and now I'm afraid to search for them. The Cybernetic Bureaucracy Mind might label me a dissident with terrible taste in music.


Find out without ever touching your Spotify account (https://lustmord.bandcamp.com); only whoever you share your browser search history with will know.


Now I'm slightly crestfallen that "dronecore" doesn't have any particular relationship to bagpipes.


You want Wisp's Honor Beats, then: https://youtu.be/gfkWin8Gu7c


I used to always hesitate to use that "dislike" button because I was worried that Spotify would not be able to distinguish between "I will always dislike this song" and "I don't want this song in this specific context".


I insta-skipped any song that I liked but didn't want in X context, but disliked songs I didn't want, period. I don't know if that was the intended way, but it seemed to work for me.


    function x() {/* ... */}
    const x = function() {/* ... */}
    const x = function foo() {/* ... */}
    const x = (function() {/* ... */}).bind(this)
    const x = (function foo() {/* ... */}).bind(this)
    const x = () => {/* ... */}
    const x = () => /* ... */


Apart from hoisting (which has little to do with functions directly) and `this`, these are all equivalent.
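
A quick sketch of those two exceptions:

    // hoisting: a function declaration is callable before its definition;
    // a const binding is not (temporal dead zone)
    hoisted();            // ok
    function hoisted() {}
    // notYet();          // ReferenceError if uncommented
    const notYet = () => {};

    // `this`: a classic function gets it from the call site,
    // an arrow captures it from the enclosing scope
    const obj = {
      n: 1,
      classic: function () { return this.n; }, // `this` is obj here
      arrow: () => this,                       // `this` from enclosing scope, not obj
    };
    obj.classic(); // 1
    obj.arrow();   // whatever `this` was outside the object literal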


Agree that the repository/service pattern is a good way to adhere to separation of concerns and to make refactoring easier and code more readable.

That said, I really disagree with any pre-commit checks. Committing code should be thought of as just saving the code; checks should be run before merging code, not before saving it. It'd be like Clippy preventing you from saving a Word document because you have a spelling error. It's a frustrating experience.

I can make sure my code is good before I submit it for review, not when I'm just trying to make sure my work has been saved so I can continue working on it later (commit).


Most pre-commit hooks I’ve seen are just formatting/Prettier.

But if need be, I will spam my commits locally with `--no-verify`. Once the code is ready, I reset to HEAD and remake them as nice, conventional commits.


While I like abstracting storage into a repository class of sorts, this "userService" crap needs to die already. I've seen too many bad codebases that stick everything user-related into that one service, which ends up having 8+ completely noncohesive functions.
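
For contrast, a minimal sketch of the narrow repository shape I do like: persistence only, no business logic (the `db` interface here is hypothetical):

    // one concern: how users are stored, nothing else
    class UserRepository {
      constructor(db) {
        this.db = db; // injected database handle
      }
      findById(id) {
        return this.db.query("SELECT * FROM users WHERE id = ?", [id]);
      }
      create(user) {
        return this.db.query(
          "INSERT INTO users (name, email) VALUES (?, ?)",
          [user.name, user.email]
        );
      }
    }

Password resets, notification preferences, billing, etc. each get their own cohesive service on top, rather than piling into one "userService".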


Usually you have those checks when pushing, not committing.

