Agree with the above. As someone who has never heard of this before, the description of "a portable programmable device for music, graphics, code and writing" reads to me as "a computer". I'm kind of unsure why I would want to use this instead of the computer I'm typing on right now.
This seems to be targeting the market of users with the following intersecting interests:
* DIY hardware enthusiast
* musician
* python developer
* maybe also wants graphics...?
Seems a small segment to me, but I assume I'm missing something here.
An immediate benefit I see is that they're cheap enough to dedicate to a single purpose - you could make/find/buy a software instrument that you like, then put it in your gear bag and never reflash it. Now it's just like any other synth. Then you can get a second Tulip and do the same thing later if you like. You could do this with laptops of course, but it starts to get expensive.
The Pocket Operators have something similar (the KO at least, maybe the others). If you've written samples into them you want to preserve for playing live, you can snap a tab off and then they're read-only - no surprises on gig night.
This study was really highlighting a statistical issue which would occur with any imaging technique with noise (which is unavoidable). If you measure enough things, you'll inevitably find some false positives. The solution is to use procedures such as Bonferroni and FDR to correct for the multiple tests, now a standard part of such imaging experiments. It's a valid critique, but it's worth highlighting that it's not specific to fMRI or evidence of shaky science unless you skip those steps (other separate factors may indicate shakiness though).
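To make the correction step concrete, here's a minimal sketch (not the paper's actual pipeline; real fMRI analyses typically add cluster-level or random-field corrections on top of this): thresholding thousands of null "voxels" at p < 0.05 flags hundreds of false positives, while Bonferroni or Benjamini-Hochberg FDR control flags roughly none.

```python
# Minimal sketch of multiple-comparison correction, assuming one p-value
# per voxel from a mass-univariate analysis.
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 only where p < alpha / number_of_tests."""
    pvals = np.asarray(pvals)
    return pvals < alpha / pvals.size

def benjamini_hochberg(pvals, alpha=0.05):
    """FDR control: find the largest k with p_(k) <= (k/m) * alpha."""
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])  # largest rank meeting the threshold
        reject[order[:k + 1]] = True
    return reject

# 10,000 "voxels" with no real effect: uncorrected testing at p < 0.05
# flags ~500 false positives; either correction flags roughly none.
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
print((p < 0.05).sum(), bonferroni(p).sum(), benjamini_hochberg(p).sum())
```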
When we published the salmon paper, approximately 25-35% of published fMRI results used uncorrected statistics. For myself and my co-authors, this was evidence of shaky science. The reader of a research paper could not say with certainty which results were legitimate and which might be false positives.
Hey, I know you got a lot of flack for the article. So, I just wanted to thank you for having the courage to publish it anyways and go through all of that for all of us.
I go back to the study frequently when looking at MRI studies, and it always holds up. It always reminds me to be careful with these things and to try to get others to be careful with their results too. Though to me it's a bit of a lampooning, it has surprisingly been the best reminder for me to be more careful with my work.
So thank you for putting yourself through all that. To me, it was worth it.
Many thanks - appreciate the kind words. Thanks also for always striving to work with care in your science. It makes all the difference.
Among other challenges, when we first submitted the poster to the Human Brain Mapping conference we got kicked out of consideration because the committee thought we were trolling. One person on the review committee said we actually had a good point and brought our poster back in for consideration. The salmon poster ended up being on a highlight slide at the closing session of the conference!
Thank you for publishing that paper, which I think greatly helped address this problem at the time, which you accurately describe. I guess things have to be taken in their historical context, and science is a community project that may not uniformly follow best practices, but work like this can help get everyone in line! It's unfortunate, and no fault of the authors, that the general public has run wild with referencing this work to reject fMRI as an experimental technique. There are plenty of different ways to criticize it today, for sure.
> a statistical issue which would occur with any imaging technique
It sounds like it goes beyond that: if a certain mistake ruins outcomes, and a lot of people are ruining outcomes without noticing, then there's some much bigger systematic problem going on.
Why are you phrasing your correction in the form of a question? I think it's pretty reasonable to infer that he mistakenly thought it was a Stanford study because the link was from Stanford.
While I would agree that the prevalence of the problem has been minimized in fMRI during the last 15 years, I disagree that our critique does not hold up. The root of our concern was that proper statistical correction(s) need to be completed in order for research results to be interpretable. I am totally biased, but I think that remains worthwhile.
This exactly. Worth mentioning that "censoring" can occur in any of a number of ways; blocking select traffic, slowing select traffic, "forgetting" specific nodes, redirecting other nodes at will, performing MITM attacks (if the protocol isn't secure), etc etc.
Also, beyond just no positive incentives, there are nontrivial negatives... they're hubs for an entire network, which can be a lot of traffic and bandwidth if peers are sharing anything other than text. That's a potentially significant cost for literally just being a dumb router. The idea of charging for this doesn't make sense... you don't choose a router, it's automatic based on location, so there's no incentive for quality. That ends up being a race to the bottom in which there's no room for arbitrage; prices are driven down to near-zero profit.
Abuse-wise, the model is fundamentally flawed. Economically, the idea kinda works so long as hub traffic is low enough to be swallowed in background noise for whoever manages the hub. Beyond that the model breaks pretty quickly.
Read up on the outbox model and zaps. Also check out Bitchat for a real world example of Nostr being effectively used without even requiring Internet connectivity.
You cannot censor Nostr.
Also, check out how zaps work, and relay authentication. You can charge for relays if you want.
Can you summarize how those prevent the listed problems? Tossing around absolutes like “you cannot censor Nostr” sounds like a religious assertion rather than technical analysis.
I have posted very similar replies to other messages in this thread and don't want to repeat myself too much at the risk of being considered spam.
But... Outbox model prevents censorship because you push your (cryptographically signed and so impossible to impersonate) messages to multiple relays. To your own preferred relays, as well as to the preferred relays of others who are involved in the conversation, as well as to a couple of global relays for easy discoverability.
These global relays are useful, but are interchangeable and totally replaceable. As soon as you've connected with someone you can retrieve their updates, because you know their preferred relays, and can query them directly.
In this way Nostr has the benefits of centralised networks for discoverability, federated networks for communities, and private individual websites for p2p and archival purposes. As well as making it impossible to censor.
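For what that looks like mechanically, here's a rough sketch assuming the NIP-01 event shape. The Schnorr signing step and the websocket transport are deliberately left as placeholders rather than any real library's API:

```python
# Minimal sketch of the outbox idea: build one signed event and hand the
# same payload to several relays. `sign` is passed in as a placeholder for
# a real secp256k1 Schnorr signer.
import json, time, hashlib

def build_event(pubkey_hex, content, sign, kind=1, tags=None):
    tags = tags or []
    created_at = int(time.time())
    # NIP-01: the event id is the sha256 of this canonical serialization.
    serialized = json.dumps([0, pubkey_hex, created_at, kind, tags, content],
                            separators=(",", ":"), ensure_ascii=False)
    event_id = hashlib.sha256(serialized.encode()).hexdigest()
    return {"id": event_id, "pubkey": pubkey_hex, "created_at": created_at,
            "kind": kind, "tags": tags, "content": content,
            "sig": sign(event_id)}  # 64-byte Schnorr signature, hex-encoded

def outbox_targets(my_relays, their_relays, global_relays):
    # The same signed event goes to your relays, your correspondents' relays,
    # and a couple of big discovery relays; any single relay is replaceable.
    return sorted(set(my_relays) | set(their_relays) | set(global_relays))

# A client would then send json.dumps(["EVENT", event]) over a websocket
# to each relay URL returned by outbox_targets().
```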
And if you take down THE ENTIRE INTERNET in order to censor Nostr? Well, Bitchat is Nostr via Bluetooth Mesh Networks. Do a quick search and find out where and when it has been used (Nepal, Indonesia, and elsewhere)
And as for zaps fixing the economic problem, I'm not sure what else to say other than you can give and receive value directly using the Lightning Network. It is seamless in most Nostr clients, and built into the Nostr protocol. If you don't believe in Value For Value (v4v) then you can just charge a fee, and the economics problem is solved.
Nostr is a protocol that doesn't spell out every implementation detail, so blaming it for flaws in particular clients would be like blaming HTML for the flaws of web browsers.
That is a good paper. The leaks it mentions involve the app Damus (a notes browser), which wasn't really doing much to verify the authenticity of notes. The details: https://crypto-sec-n.github.io/
These are apps developed in people's free time and made available for free, so issues like these are bound to exist and to get fixed.
A government could make it illegal to run or connect to nodes. It could DPI traffic in and out of the country, and block known nostr relays. Or it could just mandate that smartphone manufacturers block it, which would take out a large fraction of potential users.
(How does nostr avoid hosting known CSAM? Because that is the one thing that law enforcement will definitely come after)
Sure you can. A relay operator absolutely can censor what goes through their relay. More to the point, you can't even prove that such censorship has occurred.
Nostr is censorship resistant in that you can publish to multiple relays, but that is far from censorship-proof.
The closest thing to the concept of a public library are the "super-relays", which are always available and basically accept any note you send their way.
It is "kind of" like reinventing email with PGP. Main difference is that you can choose to send the message in plain text with a cryptographic signature that proves it was sent from you or full encrypted like PGP.
There is still (in my opinion) a disadvantage when compared to PGP: key rotation. Once you create a key pair in NOSTR it is your identity forever, whereas in PGP you have mechanisms to declare a key obsolete and generate a new one.
Overall, PGP has failed over the last 30 years; sharing public keys with other people was always the biggest obstacle to real adoption. With NOSTR this process is kind of solved, but adoption remains to be seen.
I read this as "Disney approached OpenAI and threatened to sue them into oblivion --> OpenAI negotiated that Disney will use OpenAI internally for free, and will buy $1B of equity to have an ownership stake in the company".
Disney comes out pretty good from this one; they're going to have a ton of people using the service to create all sorts of stuff that will—on the whole—increase brand awareness and engagement with Disney.
OpenAI comes out pretty good from this, with a customer who's probably not paying much (if anything), $1B additional runway, but reduced ownership of the company.
>they're going to have a ton of people using the service to create all sorts of stuff that will—on the whole—increase brand awareness and engagement with Disney.
In the same way making a bunch of porn of a character increases brand awareness and engagement with an IP, sure.
OpenAI got away scot-free here in avoiding a billion-dollar lawsuit. Disney is gonna further melt away a century-old dynasty of art and culture. They're both gonna lose long term, but I guess they both win for next quarter.
I'd argue that for every Assange and Snowden, there are 100 (1k? 100k?) people using Tor for illegal, immoral, and otherwise terrible things. If you're OK with that, then sure, fine point.
> SSH keys
Heartbleed (OpenSSL) and Terrapin (SSH) were both pretty brutal attacks on widely deployed crypto infrastructure. It's definitely serviceable and very good, but vulnerabilities can go forever without being noticed, and when they are found they're devastating.
Mickens was arguing that security was illusory, not, as you are, that it was subversive and immoral. My comments were directed at his point. I am not interested in your idea that it would be better for nobody to have any privacy.
> ...who non-ironically believes that Tor is used for things besides drug deals and kidnapping plots.
That was the quote I was referring to. Also, of course I didn't say that no one should have any privacy; I simply implied a high moral cost for this particular form of privacy.
Continuously updated HTTP response dumps from all the major Tor hidden services: https://rnsaffn.com/zg4/
It is accurate to say that Tor's hidden service ecosystem is focused on drugs, ransomware, cryptocurrency, and sex crime.
However, there are other important things happening there. You can think of the crime as cover traffic to hide those important things. So it's all good.
The third result was "FREE $FOO PORN" where $FOO was something that nearly the entire human race recognizes as deeply Not Okay and is illegal everywhere.
I wonder what % of the heinous-sounding sites are actually providing the things they say they are.
I'm sure that some (most?) of them actually offer heinous stuff. But surely some of them are honeypots run by law enforcement and some are just straight up scams. However, I have no sense of whether that percentage is 1% or 99%.
The problem with this paper is that, while its point is technically true, many website owners have found that CAPTCHAs have effectively reduced the spam on their site to zero. The fact that a CAPTCHA _can_ be bypassed doesn't mean that it _will_ be, and most spam bots are not using cutting-edge tech because that's expensive.
To say "it's worthless from a security perspective" is a pretty harsh and largely inaccurate representation. It's been tremendously useful to those who have used it. If it wasn't valuable, it wouldn't be so widely used.
Definitely agree with the whole "tons of free $$$ for Google", but that's kind of their business model, so yeah, Google is being Google. In other breaking news, water is still wet.
People really struggle with things that have measurable, probabilistic effects. You see it with healthcare ("Steve smoked his whole life and never got cancer, so cigarettes aren't bad for you!"), environmental effects ("Alice was poor and she didn't rob anyone, so poverty is no excuse!"), hiring ("Charlie is a great employee and he had no experience, so you should never look at backgrounds!"), etc.
It should be a general standard of proof for any sort of sociological claim that you look at rates, not just examples, but it usually isn't.
Well, I would at least ask what the baseline was. The vast majority of websites on the internet don't really have to deal with sophisticated bot traffic, and a very simple traditional CAPTCHA, one that can be trivially solved using existing technology, will also cut SPAM to zero or very close. I don't know exactly why this is, but I suspect it's because most of the bot operations that scale far enough to hit low volume websites are very sensitive to cost (and hence unlikely to deploy relatively-expensive modern multi-modal LLMs to solve a problem) and not likely to deploy site-specific approaches to SPAM.
There are a lot of things that can trivially cut down SPAM ranging from utterly unhelpful to just simply a bad idea. Like for example, you can deny all requests from IPs that appear to be Russian or Chinese: that will cut out a lot of malicious traffic. It will also cut some legitimate traffic, but maybe not much if your demographics are narrow. ReCAPTCHA also cuts some legitimate traffic.
The actual main reason people deployed reCAPTCHA is that it was free and easy; effectiveness was just table stakes. The problem with CAPTCHAs prior to reCAPTCHA is simply that they weren't very good; the stock CAPTCHAs in software packages like MediaWiki or phpBB were rather unsophisticated, and as a double whammy, they were big targets for attack since developing a reliable solver for them would unlock bot access to a very large number of web properties.
Do you need reCAPTCHA to make life hard for bots, though? Well, no. A bespoke solution is enough for most websites on the Internet. Moreover, reCAPTCHA isn't necessarily the best choice even for something extremely high-volume. Case in point: last I checked, Google's own DDoS protection system still used a bespoke CAPTCHA that has largely not changed since the early 2010s; you can see what it looks like by searching for the Google "sorry" page.
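As an illustration of how small a "bespoke solution" can be, here's a hypothetical stateless arithmetic challenge; the names and the HMAC-signed-answer scheme are just one way to do it, not anything reCAPTCHA or Google actually uses:

```python
# Sketch of a trivial, site-specific gate: a tiny arithmetic challenge whose
# expected answer is HMAC-signed so the server doesn't need to store state.
import random, hmac, hashlib, time

SECRET = b"rotate-me"  # placeholder server-side secret

def new_challenge():
    a, b = random.randint(2, 9), random.randint(2, 9)
    issued = int(time.time())
    tag = hmac.new(SECRET, f"{a + b}:{issued}".encode(), hashlib.sha256).hexdigest()
    # The client gets the question plus (issued, tag) to echo back with its answer.
    return {"question": f"What is {a} + {b}?", "issued": issued, "tag": tag}

def check_answer(answer, issued, tag, max_age=300):
    if time.time() - issued > max_age:
        return False
    expected = hmac.new(SECRET, f"{answer}:{issued}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```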
I agree that reCAPTCHA is not "worthless", but its worth is definitely overstated. Automated services that solve CAPTCHAs charge less than a cent per solve. For reCAPTCHA to be very effective against direct adversaries rather than easily-thwarted random bots, the actual value of bypassing your CAPTCHA has to be pretty damn low. At that point, it's quite possible that even hashcash would be enough to keep people from spamming.
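For reference, a hashcash-style proof of work is only a few lines; this is a minimal sketch, with the difficulty parameter chosen arbitrarily:

```python
# Minimal hashcash-style proof of work: the client must find a nonce whose
# hash has `bits` leading zero bits before the submission is accepted.
import hashlib
from itertools import count

def solve(challenge: str, bits: int = 20) -> int:
    target = 1 << (256 - bits)
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # ~2**bits hashes to find, one hash to verify

def verify(challenge: str, nonce: int, bits: int = 20) -> bool:
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))
```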
Yeah, we've used CAPTCHAs to great effect as gracefully-degraded service protection for unauthenticated form submissions. When we detect that a particular form is being spammed, we automatically flip on a feature flag for it to require CAPTCHAs to submit, and the flood immediately stops. Definitely saves our databases from being pummeled, and I haven't seen a scenario since we implemented it a few years ago where the CAPTCHA didn't help immediately.
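Roughly the shape of that flow, sketched with an in-memory stand-in for the flag store and the CAPTCHA check reduced to a boolean (the real thing would verify the provider's token server-side):

```python
# Sketch of graceful degradation: a per-form flag flips on when a flood is
# detected, and submissions without a valid CAPTCHA are then rejected.
_flags: set[str] = set()  # stand-in for a real feature-flag store

def on_spam_detected(form_id: str) -> None:
    # Flipped automatically when submission volume for a form looks like a flood;
    # flipping it back off later restores the frictionless path.
    _flags.add(f"require_captcha:{form_id}")

def handle_form_submission(form_id: str, fields: dict, captcha_ok: bool = False) -> dict:
    if f"require_captcha:{form_id}" in _flags and not captcha_ok:
        return {"status": 403, "error": "captcha required"}
    # ...normal path: validate fields, write to the database...
    return {"status": 200}
```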
Reminds me of the advice around the deadbolt on your house - it won't stop a determined attacker, but it will deter less-determined ones.
(And while I don't have hard data on this, I suspect that bot authors who don't know how to properly set up rate limits also don't know how to set up a captcha-solving-service bypass, so captchas are especially effective against them.)