
Also, to mitigate the problem somewhat, one could obfuscate the order in which the numbers were pressed by setting a custom PIN with repeating digits. Ideally, just one repeated digit. /s


For someone born after it was already vaporware, can anyone explain how issues with such deep interlinking were supposed to be solved? Like, what is supposed to happen if the linked host dies, or if the content becomes paywalled, copyrighted, or was distributed illegally in the first place? Or if somebody highly referenced gets hacked and malicious code gets injected into the referenced text?


The design always included replication of the content. When information is originally published, you can request that "n" copies are sent to other back-end hosts that are advertising they have available storage. I believe we intended n to be at least 3. In addition, when someone requests content that is not already available locally and especially when they transclude the content, the back-end they are using is encouraged to make a local copy. So that answers "what if the linked host dies".
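A minimal sketch of that replication idea (all names here are hypothetical — Xanadu's actual protocol was never standardized, so this is just the shape of it):

```javascript
// Hypothetical sketch: publish content with n-way replication to
// back-end hosts that advertise available storage.
function publish(content, hosts, n = 3) {
    const targets = hosts.filter(h => h.hasStorage).slice(0, n);
    for (const host of targets) {
        host.store(content); // send a copy to each chosen replica
    }
    return targets.length; // how many replicas were actually made
}

// Usage: three fake hosts, one of which has no free storage
const hosts = [
    { hasStorage: true,  stored: [], store(c) { this.stored.push(c); } },
    { hasStorage: false, stored: [], store(c) { this.stored.push(c); } },
    { hasStorage: true,  stored: [], store(c) { this.stored.push(c); } },
];
publish("doc-v1", hosts); // replicates to the two hosts advertising storage
```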

All content is already copyrighted by the author(s), and in order to publish it on the Xanadu network they have to agree to publish it under transcopyright which grants prior permission to transclude it. That does not preclude also offering the same content elsewhere under different license terms, but revoking the original license agreement would require the content be removed from the Xanadu network. IANAL but I suspect people might have some rights to rely on the original license unless properly notified that the rightsholder had revoked it.

All Xanadu content is append-only versioned, so if someone gets hacked and content is changed, nobody is obligated to transclude from the altered version. They can continue to transclude earlier versions.
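A toy illustration of why append-only versioning protects against this (hypothetical code, just to show the idea — an attacker can append, but can't rewrite the version others link to):

```javascript
// Toy append-only version store: edits append new versions,
// existing versions are never mutated, so links to them stay valid.
class VersionedDoc {
    constructor(initial) { this.versions = [initial]; }
    append(content) {
        this.versions.push(content);
        return this.versions.length - 1; // new version number
    }
    get(version) { return this.versions[version]; } // transclude any version
    latest() { return this.versions[this.versions.length - 1]; }
}

const doc = new VersionedDoc("original text");
doc.append("hacked text"); // attacker appends, but cannot rewrite history
doc.get(0);                // "original text" remains available to transclude
```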


The simplest way is to store everything locally, as was proposed by V Bush in 1945. His Memex included an output (on microfilm) of all the relevant pages, along with the annotation trails.

If you have a local copy of everything referenced, none of your links break.


Did it, though? I don't think I've used a Delphi/VB-based application like... ever? I've made some myself back in the day and fiddled around with my friend's university assignments, but I can't think of any real-world application using it. Maybe you have some examples?


Could you provide samples? Maybe it works for you in particular, but all the throat mic tests I've seen on YouTube have absolutely abysmal sound quality.


The position of the mic on your throat changes how it sounds. The further away from your throat, the better (more natural) the sound, but the more ambient noise you get because the signal is lower. If you're right over the throat, the noise issue goes away, but you lose the qualities of your voice that the mouth adds.

In general, you'll get better sound from something like an Antlion mic with its passive rejection from a built-in omnidirectional mic opposite the cardioid mic pointed at your mouth.


Hey the article has been updated with an example! Direct link below

https://vadosware.io/audio/hn-throat-mic-test.ogg

(also .mp3 and .mp4 are there, praise be to ffmpeg)


I would not call that good quality. To my ears it sounds like you're trapped in a mayonnaise jar with somebody eating cheetos.

Which is about what I'd expect from a throat mic given how much of the human voice is produced above the throat. And for those who want a refresher, play around with Pink Trombone a bit: https://dood.al/pinktrombone/


> I would not call that good quality. To my ears it sounds like you're trapped in a mayonnaise jar with somebody eating cheetos.

A colorful and terrifying analogy, point taken!

> play around with Pink Trombone a bit: https://dood.al/pinktrombone/

Wow, that is awesome


1. Why focus on privacy? That niche is already taken by Brave.

2. To do so, they'd have to drop their half-billion-dollar default search engine deals.

3. Currently, the donation figure is about $20 million. They'd have to somehow collect an additional $1.50 from every single user annually to prevent layoffs.


> 2. To do so, they'd have to drop their half-billion default search engine deals

Yes, that is part of the reason why they should be a non-profit - so that they can work for the public good and not for the good of their corporate sponsor.

> 3. Currently, the donation figure is about $20mil.

Donation figure for what? Currently you CANNOT donate towards Firefox development.


> Donation figure for what? Currently you CANNOT donate towards Firefox development.

To the Mozilla Foundation. Most people I've heard of donating to them don't realize their money isn't, and can't be, put into Firefox development.


It's not a small niche. It can be shared with Brave.

And Brave has things not everyone likes, like the BAT token stuff. And Chromium.


> Do you think Apple or Google, when making the decisions on browser upgrade-ability, anticipated javascript changes that would break common sites and force users to buy new hardware

Yes, they should have, because these are the same people doing both.


On one hand, it's quite asshole-ish. On the other, Google is serving broken frontends for its services and charging ridiculous prices for its APIs. When I tried to build a third-party search using Google's engine, I exhausted the limit in less than an hour. It'd cost me something like $40/mo to get what I get for free through their crappy frontend.


> On the other, Google is serving broken frontends for its services and charging ridiculous prices for its APIs.

How does that make this okay? Nobody is entitled to get a company’s services for free just because you think their price is too high or their front ends aren’t built to your liking.


You could use the "turnabout is fair play" argument. If you publish a web page and don't specifically block Google, they scrape your content and use it for their own purposes - even for "rich snippets", products other than search, etc. You're basically doing the same to them: using their content for your own purposes until they specifically block you.


Disagree. The web is clearly architected such that publishing a webpage makes it public and crawlable. You don’t “block Google”, you specify that the site is not for crawling in robots.txt according to well-known standards. This is all basically the contract of the internet and it shouldn’t be surprising to anyone.
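For reference, the well-known opt-out is a robots.txt at the site root; this minimal example asks all crawlers to skip the whole site:

```
User-agent: *
Disallow: /
```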

Google specifically does not publish their API for free consumption by other companies, yet that’s what’s happening here anyway. The company is also using specific tricks to circumvent detection of the behavior.

In your analogy, this would be like a crawler ignoring robots.txt and then scraping the content for their own website with zero attribution to the source, which is nothing like Google indexing your site with full attribution and driving traffic to it for you.

Regardless, “turnabout is fair play” is unequivocally not a legally or even ethically acceptable standard, so that argument wouldn’t actually hold up anywhere anyway.


I don’t understand your argument. There is no actual “publishing” of web sites or APIs on the web. You simply make something available at a URL, and it’s up to anyone else to discover that URL. In this regard, your personal web site is no different than this Google Translate web API.


> this would be like a crawler ignoring robots.txt

Google ignores the noindex directive in robots.txt now. You're supposed to put it in your HTTP response headers or HTML meta tags...
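Concretely, the two supported placements look roughly like this (the header is set in your server's HTTP response; the meta tag goes in the page's <head>):

```
# HTTP response header
X-Robots-Tag: noindex

# HTML meta tag, inside <head>
<meta name="robots" content="noindex">
```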


`noindex` in robots.txt was a Google-specific rule that was never officially documented or supported. I think they were perfectly entitled to withdraw support for it, especially considering there are alternatives.

https://developers.google.com/search/blog/2019/07/a-note-on-...


"driving traffic to it for you."

I did mention rich snippets.


Google isn't entitled to get my personal data for free either, yet they do it anyway.


"Nobody is entitled to get a company’s services for free just because you think their price is too high or their front ends aren’t built to your liking." Tell this to Google!


Like Telegram did with the translate API, there is also a way to get an unlimited API for search results: you have to find one of the old mobile pages of Google.


There is nothing wrong with sitting in front of a large television screen, as long as your eyes are not dry and you don't stare at one point all the time.


Reminds me of the http://octodon.mobi/ keyboard. They went through so many design iterations and tried really hard to get it going, but failed to start production and haven't posted anything since 2017. Guess typing on a phone will remain a pain forever.


This should really be solved by using named parameters or by writing docblocks so that the IDE can show hints.

Another trick, at least in js, is to use destructuring assignment, e.g.

    function calc_formula({a, b, is_gain}) {
        // example body: combine a and b depending on the flag
        return is_gain ? a + b : a - b;
    }

    calc_formula({a: 1, b: 2, is_gain: true})


My rule of thumb these days in JS/TS is that all functions with more than 2 parameters should be refactored to a single object parameter using destructuring. I don't start with a single object because most of the time YAGNI applies.
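For instance (function and parameter names here are made up for illustration), the refactor looks like this:

```javascript
// Before: call sites are opaque — what do `true, false` mean here?
function createUser(name, email, isAdmin, sendWelcome) {
    return { name, email, isAdmin, sendWelcome };
}
createUser("Ada", "ada@example.com", true, false);

// After: every call site is self-documenting, and defaults come for free.
function createUserOpts({ name, email, isAdmin = false, sendWelcome = true }) {
    return { name, email, isAdmin, sendWelcome };
}
createUserOpts({ name: "Ada", email: "ada@example.com", isAdmin: true, sendWelcome: false });
```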


> My rule of thumb these days in JS/TS is all functions with more than 2 parameters should be refactored to a single object parameter using destructuring.

Does this create garbage for the garbage collector (which might be an issue for inner loops)?


In theory, yes. But for hot inner loops it will probably get optimized away. (This is a common pattern, so optimizers try to find it and undo it. Furthermore, hidden classes for objects are common, and when you destructure directly in the argument list, escape analysis is pretty easy.) It probably does hurt your performance for warm and cold code, but that likely isn't significant.

So yes, if performance is critical you should probably profile and consider avoiding this pattern, but for most cases the performance impact is very minor.
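A sketch of the trade-off being discussed (whether the per-call allocation is actually elided depends on the engine and warm-up; only profiling tells):

```javascript
// Object-parameter version: naively allocates one {a, b} object per call...
function addObj({ a, b }) { return a + b; }

// ...versus the positional version, which allocates nothing per call.
function addPos(a, b) { return a + b; }

function sumObj(n) {
    let total = 0;
    for (let i = 0; i < n; i++) total += addObj({ a: i, b: 1 }); // per-iteration object
    return total;
}

function sumPos(n) {
    let total = 0;
    for (let i = 0; i < n; i++) total += addPos(i, 1);
    return total;
}
// After warm-up, engines can often escape-analyze the {a, b} away, so the
// two loops frequently perform comparably — but that's an engine detail.
```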

