Hacker News | harvie's comments

At first it might seem that 6 is furthest from the starting point and therefore quite likely to be the last one reached. However, the whole process is chaotic enough that once the ladybug finally arrives at 4 and/or 8, the starting position has very little impact on the overall outcome.

Well, the starting position has no impact on the outcome: each number other than the starting number has exactly a 1/11 chance of being the last remaining number.
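
Easy enough to check with a quick Monte Carlo sketch (my assumptions: the usual clock-face setup, i.e. a symmetric random walk on 12 positions, numbered 0-11 here with 0 as the start):

    import random
    from collections import Counter

    def last_visited(n=12, start=0):
        # symmetric random walk on a cycle of n positions;
        # returns the position that gets visited last
        unvisited = set(range(n)) - {start}
        pos = start
        while len(unvisited) > 1:
            pos = (pos + random.choice((-1, 1))) % n
            unvisited.discard(pos)
        return unvisited.pop()

    trials = 100_000
    counts = Counter(last_visited() for _ in range(trials))
    for pos in sorted(counts):
        # every non-start position comes out near 1/11 ~ 0.0909
        print(pos, counts[pos] / trials)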

So basically an LLM from that brief time period back when communism felt like a good idea? What can go wrong? :-)

Yes. And the websites require you to verify transactions with an (unrooted?) phone.

On the other hand, the phone does not require you to verify with your PC, so there's no second factor unless there is some inaccessible secure island within the phone itself.

Funnily enough, you can probably use that website directly on the phone you use as the second factor, which probably circumvents the whole 2FA idea (at least as long as you use SMS 2FA instead of an app that checks for root).


This is much better than the stale-bot BS irreversibly closing perfectly valid issues just because the reporter has not replied for a couple of weeks.


The stale-bots are even worse than that. The reporter may have responded quickly, and the bug may be acknowledged as real. But if there's simply no activity on the issue for the following month, it will be closed.


Also, upstream is extremely well audited. That's a huge benefit I don't want to lose by using a fork.


I do want to say that HPN-SSH is also well audited; you can see the results of CI tests on GitHub. We also do fuzz testing, static analysis, extensive code reviews, and functionality testing. We build directly on top of OpenSSH and work with them when we can. We don't touch the authentication code, and the parallel ciphers are built directly on top of OpenSSL.

I've been developing it for 20+ years and if you have any specific questions I'd be happy to answer them.


This. I'm not going to start using a random SSH fork with modified ciphers.


It may still be sensible if you only expose it to private networks.


So could this safely be used on Tailscale, then? I'm very curious, though also a bit paranoid.


> So could this safely be used on Tailscale, then? I'm very curious, though also a bit paranoid.

You may as well just use Tailscale SSH in that case. It already disables SSH encryption, because your connection is encrypted with WireGuard anyway.


It could safely be used on the public internet; all this fearmongering has no basis.

The better question is: does it have any actual improvements in day-to-day operation? Because it seems like it mostly changes up the ciphering, which is already very fast.


> It could safely be used on the public internet; all this fearmongering has no basis.

On what basis are you making that claim? Because AFAICT, concern about it being less secure is entirely reasonable and is one of the big caveats to it.


Concern about it being less secure is fully justified. I'm the lead developer and have been for the past 20 years. I'm happy to answer any questions you might happen to have.


I'm not fearmongering. I'm just saying:

- IF you don't trust it

- AND you want to use it

=> run it on a private network

You don't have to trust it for security to use it. Putting services on secure networks when the public doesn't need access is standard practice.


I remember the last time I really cared to look into this was in the 2000s. I had these WDTV embedded boxes with such an anemic CPU that doing local copies with scp was slow as hell from the cipher overhead. I believe at the time it was possible to disable ciphers in scp, but it was still slower than smbfs. NFS was to be avoided, as Wi-Fi was shit then and losing the connection meant risking the system locking up. This was of course on the local LAN, so I did not really care about encryption.

But I don’t miss having those limitations.


It's still possible, but we only suggest doing it on private, known-secure networks, or when it's data you don't care about. Authentication is still fully encrypted - we just rekey post-authentication with a null cipher.


lose*


while (true) { if (stop) { break; } }

If only there was a way to stop a while loop without having to use an extra conditional with break...


Feel free to read the article before commenting.


I’ve read it, and I found nothing to justify that piece of code. Can you please explain?


The while loop surrounds the whole thread, which does multiple tasks. The conditional is there to surround some work completing in a reasonable time. That's how I understood it, at least.


Does not seem so clear to me. If so, it could be stated with more pseudocode. Also the eventual need for multiple exit points…
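
Something like this pseudocode (task_a/task_b are hypothetical stand-ins for the thread's actual work) would have made the intent clearer to me:

    # flag in the loop condition: only checked at the top of each pass,
    # so task_b still runs even if stop was set during task_a
    while not stop:
        task_a()
        task_b()

    # mid-loop break: an exit point between the two tasks, and more
    # breaks can be added wherever further exit points are needed
    while True:
        task_a()
        if stop:
            break
        task_b()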


The article specifically mentions rooms with poor ventilation. If you have proper ventilation, then you don't need this system in the first place, because you will get outside air UV-sterilised by the sun...


I was recently thinking about this... We've been building houses and other structures using plumb lines and water levels all the time, long before affordable optics came into play. This kinda means most of our buildings are actually polar rather than Cartesian. Sure enough, given the size of the Earth, the error is quite tiny. But it's funny thinking about how the room I am sitting in right now is shaped like a frustum with a spherical floor and ceiling, rather than a block. Despite what the architectural drawing says...


If the floor and ceiling (and walls) were leveled and flattened and brought to plumb with a straightedge scraped in with the three-plate method [1] (popularized by Whitworth in the 1830s, but the ancients made straight edges and flat plates too), then they would actually not be 90 degrees at the corners!

[1] https://ericweinhoffer.com/blog/2017/7/30/the-whitworth-thre...


There are very long, narrow wave pools used for research and testing, and they are long enough that the water's surface measurably curves compared to a perfectly straight line extended from the center.


Long bridges, like the Verrazano Narrows in New York City, have plans that account for Earth being a sphere. The towers at either end are not parallel, but tilted apart so that each is aligned with its local gravity.


I imagine imperfections in construction dominate this effect.


If you have two buildings 4 km apart (about the length of Central Park), that's about 1/10,000 of the Earth's circumference, so a 0.036° change in 'up'. If the buildings are 300 m tall, 300 * sin(0.036°) ≈ 0.188 m.

That's less than those buildings are probably expected to sway in a strong wind, but probably outside the tolerances of modern construction, so theoretically measurable as an average deviation.
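
For reference, the same arithmetic in a few lines of Python (assuming the ~40,075 km circumference):

    import math

    circumference = 40_075_000   # Earth's circumference in metres
    distance = 4_000             # buildings 4 km apart
    height = 300                 # building height in metres

    angle = 360 * distance / circumference         # change in 'up', degrees
    lean = height * math.sin(math.radians(angle))  # lean at the top, metres
    print(angle, lean)                             # ~0.036 deg, ~0.188 m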


This only offers me 19 languages: https://en.wikipedia.org/wiki/David_Woodard

The article claims that it has 335.


Explained at the end of the article:

> After a full month of coordinated, decentralised action, the number of articles about Mr. Woodard was reduced from 335 articles to 20. A full decade of dedicated self-promotion by an individual network has been undone in only a few weeks by our community.


It's a good idea to read whatever you're commenting on.


That is a most improper suggestion on this here orange website. It is established etiquette to _imagine what the content of the article might be_, based on the title, and then comment on that, preferably angrily. At _absolute most_ one can read the first paragraph.


And when called out on it, reply that the comments are often more interesting than the article, which is a) trivially true when you don't read the article, and b) probably because bickering in comments is more emotionally satisfying and requires a shorter attention span than reading a rather long article (I'm not immune, seeing as I'm now bickering about the bickering).


No no, that's Reddit. We shun this here; they embraced it long ago.


Or at least, that's what I guess is written in the guidelines.


Maybe we can just configure web servers to block anyone who requests robots.txt. Regular browsers don't request it, but robots do, to get a list of URLs to crawl (while ignoring the rules). Just create a simple PHP/CGI script that adds the client IP address to iptables once /robots.txt is accessed.
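
A minimal sketch of that script in Python CGI (the log path is hypothetical, and note this would also ban well-behaved crawlers that actually honor robots.txt):

    #!/usr/bin/env python3
    # Honeypot sketch: serve /robots.txt via CGI and record the client IP,
    # so a privileged cron job can later feed the list to iptables, e.g.:
    #   iptables -A INPUT -s <ip> -j DROP
    # (the web server itself shouldn't run as root)
    import os

    ip = os.environ.get("REMOTE_ADDR")
    if ip:
        with open("/var/tmp/robots-honeypot.log", "a") as f:
            f.write(ip + "\n")

    print("Content-Type: text/plain")
    print()
    print("User-agent: *")
    print("Disallow: /")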


One easy way to bypass this is to let external services that fetch robots.txt (archive.org, GitHub Actions, etc.) cache it, and then expose it to the actual scrape server through separate APIs, a webhook, or manual download.

The robots.txt file is usually small, so fetching it would not alert the external services.

