
> and we're the only animal that we know of that gets it.

Is this actually true? I thought it was pretty common for elderly pets to get it too.


Elderly pets have loss of cognitive function/memory, but I don't think it's the same disease.

What's the objective, clearly disambiguated, empirically demonstrable difference between memory loss, dementia, and Alzheimer's?

Alzheimer's is, by definition, dementia with associated amyloid plaques in the brain. Since you can't detect the plaques without cutting into the brain, the diagnosis is normally given based on symptoms of dementia (significant loss of memory or other cognitive functions) without another clear cause (no evidence of vascular events, head trauma, brain tumors, or other neurological diseases, etc.).

My understanding is that amyloid plaques can actually be seen with a specialized PET scan now, so it can be more definitively diagnosed in living people.

You mentioned amyloid plaques. What about tau tangles? I thought Alzheimer's required both. If someone (or some dog, for that matter) has amyloid plaques but no tau tangles, is that Alzheimer's? If they have tau tangles but no amyloid plaques, what is it?

And what about the brains that show amyloid plaques, tau tangles, and Lewy bodies? Or plaques plus vascular lesions? At autopsy, most elderly brains show mixed pathology. Does that person have Alzheimer's plus Lewy body dementia plus vascular dementia? Three diseases? Or one brain failing in multiple correlated ways that we've artificially carved into separate categories?

It sounds like we have at least five different pathological markers that correlate with cognitive decline, often co-occurring, with inconsistent symptom mapping. What makes 'Alzheimer's' a disease rather than a region we've named in a high-dimensional space we don't really understand all that well?


> What makes 'Alzheimer's' a disease rather than a region we've named in a high-dimensional space we don't really understand all that well?

Nothing. I think it's in fact sometimes called a syndrome, not a disease per se. Since we don't really understand the mechanism of action, it remains more a diagnosis of exclusion than anything else.


Humans need to be in the loop for the same reason humans peer review other humans' pull requests: we all fuck up. And AI makes just as many mistakes as humans do; it just makes them significantly quicker.

Lua only departs from norms if you’ve had a very narrow experience with other programming languages.

Frankly, I welcome the fact that Redis doesn’t use JavaScript. It’s an abomination of a language. The fewer times I need to use it the better.


I think criticizing JavaScript has become a way of signaling "I'm a good programmer." Yes, good programmers ten years ago had valid reasons to criticize it. But today, attacking the efforts of skilled engineers who have improved the language (given the constraints and without breaking half of the web) seems unfair. They’ve achieved a Herculean task, especially compared to the Python dev team, which has broken backward compatibility many times and still failed to create a consistent language with a single right way to do things.

> But today, attacking the efforts of skilled engineers who have improved the language (given the constraints and without breaking half of the web) seems unfair.

I was criticising a thing not a person.

Also your comment implies it was ok to be critical of a language 10 years ago but not ok today because a few more language designers might get offended. Which is a weird argument to make.


I think he’s saying it’s a fundamentally improved language at this point?

Not OP, but the case can be made that it's still the same very ugly language of 10 years ago, with a few layers of sugar coating on top. The ugly hasn't gone anywhere. You still have to deal with it and suffer the cognitive burden.

> Not OP, but the case can be made that it's still the same very ugly language of 10 years ago, with a few layers of sugar coating on top.

Let's talk specifics. Since you clearly have strong opinions: in your view, what is the single worst aspect of JavaScript that justifies the use of the word "ugly"?


https://dorey.github.io/JavaScript-Equality-Table/

https://www.reddit.com/r/learnjavascript/comments/qdmzio/dif...

or anything that touches array ops (concatenating, map, etc…). I mean, better and more knowledgeable people than me have written thousands of articles about those footguns and many more.

I am not a webdev, I don't want to remember those things, but more often than I would wish, I have to interop with JS, and then I'd rather use a better behaved language that compiles down to JS (there are many very good ones, nowadays) than deal with JS directly, and pray for the best.


If type conversion and the new variable declaration keywords are your top complaints about a language, I'm sorry to say that you are at best grasping at straws to find some semblance of justification for your irrational dislike.

> I am not a webdev, I don't want to remember those things, (...)

Not only is JavaScript way more than a webdev thing, you are ignoring the fact that most of the mainstream programming languages also support things like automatic type conversion.


> you are at best grasping at straws to find some semblance of justification for your irrational dislike.

You seem so emotionally involved that the whole point whooshed over your head. JS is a language that gives me no joy to use (there are many of those; I can put Fortran or SQL in there), and, remarkably, gives me no confidence that whatever I write with it does what I intend (down to basic branching on null/undefined checks, handling of edge cases, etc.). In that sense it's much worse than most of those languages that I merely dislike.

> Not only is JavaScript way more than a webdev thing, you are ignoring the fact that most of the mainstream programming languages also support things like automatic type conversion.

Again, you are missing the point. JS simply has no alternative for webdev, but it's easy to argue that, for everything else, there are better, faster, more expressive, more robust, … languages out there. The only time I ever have to touch JS is consequently for webdev.


Or good programmers understand why JS is bad?

Every programming language is an abomination depending on the perspective.

To paraphrase Bjarne Stroustrup, there are two kinds of programming languages. There are abominations and then there are the ones nobody uses.

> For one thing, the degree of monopolization simply doesn’t exist. Gaming is a market. There are many gaming platforms that are extremely popular. Xbox, PS, Nintendo, Steam, and then just open

Except there aren’t multiple stores on Xbox, PlayStation, or Switch. Which is directly comparable to the iOS lock-ins that Epic was fighting against.

> But more importantly, gaming isn’t an essential part of life, which is basically what smartphones, dominated entirely by iOS/Android, have become at this point.

True but also irrelevant. Monopoly laws don’t make those distinctions.

> And finally, maybe this is just me, but I think the idea that general purpose computing is the same as playing video games just strikes me as wrong.

Again, monopoly laws don’t make any distinction here. However, to answer your direct point: some consoles are marketed as more general-purpose devices for taxation reasons. All consoles support YouTube, and most have other streaming services, from Netflix to Spotify. They all come with a fully capable web browser. Even their hardware has been generic for the last few generations of consoles. So they are general-purpose devices by every metric aside from the variety of apps available. And you could argue the reason for this is literally their “App Store” lock-ins. So your argument here is evidence against the point you’re trying to make.

> General purpose computing, which is what phones are, are basic infrastructure for modern life.

That’s not the definition of a “general purpose computing device” and I reject the idea that iOS and Android are equivalent to water, roads and electricity.

I do agree that smartphones are a MASSIVELY useful asset, but you don’t actually need a smartphone for modern life. Plenty of older people still manage just fine without iOS or Android. They’ll use a laptop or PC to access the same services via a web browser.

Furthermore, the companies who are fighting iOS lock-ins are not critical services. Epic, for example, is a gaming company. They don’t provide health or banking services. You can’t do your taxes in Fortnite. You don’t book your car in for a service via an app built in Unreal Engine. Epic builds games, not essential infrastructure.


This analysis is correct. Epic's business incentive has always been lowering the platform fees it pays to Apple and Google for Fortnite, compared to what it pays Nintendo and Sony.

There's nothing criminal or arguably even morally wrong about that. Nintendo and Sony do not make 10% of the hardware margins that Apple does. They are not analogous businesses.

> There's nothing criminal or arguably even morally wrong about that.

Morality is irrelevant and criminality is for the legal system to decide, not you.

> Nintendo and Sony do not make 10% of the hardware margins that Apple does.

Which, again, is completely irrelevant.

> They are not analogous businesses.

Only because you’ve decided they’re not. And your arguments have zero citations of any legal precedent. Yet we do have legal precedent on lock-ins on other platforms and their related app stores.

So the problem we have is that the legal precedent actually works against Nintendo et al, and now it’s up to the courts to decide whether those prior judgements are relevant to Nintendo and its ilk too.

Thus far, all you and your like-minded peers have proven is that you have a personal opinion. But you’ve provided precisely zero legal evidence to back up your opinions. So why should we trust your opinion any more than the highly public legal precedent that was set between Epic and Apple?

“but they’re different” isn’t a compelling legal argument for why they’re different. Regardless of how much you might wish it were.


You don't seem to be disagreeing.

I assume by hardware margins you are thinking of component and manufacturing cost. However, the largest cost that has to be amortized over the life of a hardware product is R&D cost, which is huge.

Even by the component and manufacturing cost metric, the Switch has always been profitable, though DRAM and flash storage costs are putting pressure on hardware margins at the moment. Still, R&D is the largest cost that each company faces.


I really don’t like this. It took me several attempts to figure out what was going on.

And even after I had finally figured it out, it still felt more like a rendering glitch than good UX.

If I struggled then I really can’t see this working for non-technical folk.

Worse yet, because people wouldn’t expect this behaviour, coupled with the fact that scrolling shouldn’t change a website’s state, you’ll likely see people constantly making accidental changes to the ordering of the list.


How does perceptual hashing work?

Have you got any recommendations for further reading on this topic?


These are two articles I liked that are referenced in the Python ImageHash library on PyPI; the second article is a follow-up to the first.

Here are the paraphrased steps (and the result) from the first article for hashing an image; a code sketch follows the links below:

1. Reduce size. The fastest way to remove high frequencies and detail is to shrink the image. In this case, shrink it to 8x8 so that there are 64 total pixels.

2. Reduce color. The tiny 8x8 picture is converted to a grayscale. This changes the hash from 64 pixels (64 red, 64 green, and 64 blue) to 64 total colors.

3. Average the colors. Compute the mean value of the 64 colors.

4. Compute the bits. Each bit is simply set based on whether the color value is above or below the mean.

5. Construct the hash. Set the 64 bits into a 64-bit integer. The order does not matter, just as long as you are consistent.

The resulting hash won't change if the image is scaled or the aspect ratio changes. Increasing or decreasing the brightness or contrast, or even altering the colors won't dramatically change the hash value.

https://www.hackerfactor.com/blog/index.php?/archives/432-Lo...

https://www.hackerfactor.com/blog/index.php?/archives/529-Ki...
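For concreteness, here's a minimal sketch of those five steps in Python, assuming Pillow and NumPy are installed (the helper names are mine; the ImageHash library mentioned above ships essentially this algorithm as its average hash):

    from PIL import Image
    import numpy as np

    def average_hash(path, hash_size=8):
        # Steps 1 & 2: shrink to 8x8 and convert to grayscale.
        img = Image.open(path).convert("L").resize(
            (hash_size, hash_size), Image.LANCZOS)
        pixels = np.asarray(img, dtype=np.float64)
        # Step 3: mean of the 64 gray values.
        mean = pixels.mean()
        # Step 4: one bit per pixel, set if the pixel is above the mean.
        bits = (pixels > mean).flatten()
        # Step 5: pack the 64 bits into a single integer.
        return sum(1 << i for i, b in enumerate(bits) if b)

    def distance(h1, h2):
        # Hamming distance: near-duplicate images differ in only a few bits.
        return bin(h1 ^ h2).count("1")

Comparing two images is then just the Hamming distance between two integers: 0 for identical hashes, a small number for near-duplicates.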


In the same way that Shazam can identify songs despite the audio source being terrible over a phone, mixed with background noise. It doesn't capture the audio as a WAV and then scan its database for an exact matching WAV segment.

I'm sure it is way more complex than this, but Shazam does some kind of short windowed FFT and distills it to the few dominant frequencies. It can then find "rhythms" in these frequency patterns, all boiled down to a time stream of signature data, with a database that can look up these fingerprints. Any single fingerprint might match multiple songs, but since there are dozens of fingerprints spread across time, if most of them point to the same musical source, that is what gets ID'd.
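As a toy illustration of that windowed-FFT idea (my own simplified sketch in Python with NumPy; the real system's fingerprinting is far more robust, hashing constellations of spectral peak pairs):

    import numpy as np

    def fingerprints(samples, frame_len=4096, hop=2048, n_peaks=3):
        # Slide a window across the audio and FFT each frame.
        window = np.hanning(frame_len)
        prints = []
        for start in range(0, len(samples) - frame_len, hop):
            frame = samples[start:start + frame_len] * window
            spectrum = np.abs(np.fft.rfft(frame))
            # Keep only the few dominant frequency bins per frame;
            # each tuple becomes a cheap database lookup key.
            peaks = np.argsort(spectrum)[-n_peaks:]
            prints.append(tuple(sorted(int(p) for p in peaks)))
        return prints

A single key may collide with many songs, but the candidate that matches the most keys across the whole clip wins, which is why a few noise-corrupted frames don't matter.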



Possibly one of the better-known (and widely used?) implementations is Microsoft's PhotoDNA, which may be a suitable starting point.

MCP, as a concept, is a great idea.

The problem isn’t having a standard way for agents to branch out. The problem is that AI is the new Javascript web framework: there’s nothing wrong with frameworks, but when everyone and their son are writing a new framework and half those frameworks barely work, you end up with a buggy, fragmented ecosystem.

I get why this happens. Startups want VC money, established companies then want to appear relevant, and software engineers and students feel pressured to prove they’re hireable. And you end up with one giant pissing contest where half the players likely see the ridiculousness of the situation but have little choice other than to join the party.


I think the saying “the road to hell is paved with good intentions” is more apt.

I think what’s happening isn’t some evil plot to quell opposing voices, but more likely the UK government genuinely believing it’s passing laws to reduce rioting and online abuse, with the censorship as a side effect of those laws.

Some might consider this opinion naive but take this counterpoint: laws require a majority to pass. So if these censorship laws were written to squash opposing voices, then we’d be dealing with a literal conspiracy involving hundreds of people. I don’t believe all politicians are only in it for themselves (though I do believe many are), so you’d expect at least 1 MP to speak out if such a conspiracy existed.


This. Governments are signatory to a huge number of agreements, and are members of various NGOs. Things start out as being representative of some will of the people, but over time the arrangement becomes a millstone around the government's neck if it becomes politically difficult at home. And of course, those arrangements often morph to benefit those in charge.

What happens is that you get arrangements like the EU demanding migration quotas that the populations of various individual countries despise, or an automobile market that gets progressively more expensive as environmental legislation puts ever more pressure on manufacturers. And of course, if you're saving the world, who needs cars anyway? We should all be living Hong Kong style to save the environment, so we need more urban density.


Every time I hear that PSA I’m reminded of the acid track “where is your child”

https://youtu.be/sDyxyRcZWBA?si=sqDnodWQ-jWKCdCH

(I know the song came long after the PSA)


I think the problem lies with the fact that you cannot write kernel code without relying on unsafe blocks of code.

So arguably both camps are correct: those who advocate Rust rewrites, and those who are against them.


I don't think this code needed to be unsafe. This code doesn't involve any I/O or kernel pointer/CPU register management; it's just modifying a list.

I'm sure the people who wrote this code had their reasons for writing it like this (probably simplicity or performance), but this type of kernel code could be done safely.


As pointed out by yuriks [0], it seems the patch authors are interested in looking into a safer solution [1]:

> The previous patches in this series illustrate why the List::remove method is really dangerous. I think the real takeaway here is to replace the linked lists with a different data structure without this unsafe footgun, but for now we fix the bugs and add a warning to the docs.

[0]: https://news.ycombinator.com/item?id=46307357

[1]: https://news.ycombinator.com/item?id=46307357


You can’t write Rust code without relying on unsafe code. Much of the standard library contains unsafe, parts of which the maintainers have taken the time to formally verify.

I would presume the ratio of safe to unsafe code leads to less unsafe code being written over time, as the full “kernel standard library” gets built out, allowing all other parts to replace their hand-rolled implementations with the standard ones.

