I made V.O.B.S., a system that lets you produce interactive TV shows using the internet as a transport medium, multiple webcams in different locations and our server infrastructure.
My project "UrlRoulette" was on the HN homepage for about 24 hours. I received a huge traffic spike at the start. Since then traffic came from other sources such as Reddit, some blog posts and articles that were written - and of course some search engines. After being on HN, UrlRoulette was featured in the german C'T magazine and received a lot of traffic from their website and their print edition. Also, being featured on some more sites certainly helped pushing the site's page rank on Google.
Well, that could happen on any website that you visit. IMO it does not make a difference whether you know the link before you click it or not, because you still don't know what you will get. But I am thinking about implementing some sort of virus/malware scanning.
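To sketch what that scanning might look like (nothing like this is in UrlRoulette today, and the file name and helper functions below are made up for illustration): a cheap first step would be checking each submitted URL's host against a locally maintained blocklist before the link enters the rotation.

    # Hypothetical sketch: reject submissions whose host (or any parent
    # domain) appears on a local blocklist. The blocklist path and format
    # are assumptions, not part of UrlRoulette.
    from urllib.parse import urlparse

    def load_blocklist(path="blocklist.txt"):
        """One blocked hostname per line; '#' starts a comment."""
        hosts = set()
        with open(path) as f:
            for line in f:
                host = line.split("#", 1)[0].strip().lower()
                if host:
                    hosts.add(host)
        return hosts

    def is_blocked(url, blocklist):
        host = (urlparse(url).hostname or "").lower()
        parts = host.split(".")
        # "evil.example.com" is blocked if either it or "example.com" is listed.
        return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))

A real setup would more likely query an external reputation service, but a blocklist is enough to show where such a check would sit in the submission flow.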
I think the idea is to detect when different URLs contain the same content. That defends against duplicate entries like example.com/?foo and example.com/?bar (which are the same page).
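Roughly, such a check could fetch each submitted page and compare a hash of its body against earlier submissions. This is just a sketch under that assumption; the function names are mine, not the site's:

    # Content-based deduplication sketch: two URLs count as duplicates if
    # the pages they return hash to the same value.
    import hashlib
    import urllib.request

    def content_fingerprint(url, timeout=10):
        """Fetch the page and return a SHA-256 hash of its body."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    # example.com/?foo and example.com/?bar serve the same page, so the
    # second submission gets flagged.
    seen = set()
    for candidate in ("http://example.com/?foo", "http://example.com/?bar"):
        fp = content_fingerprint(candidate)
        if fp in seen:
            print("duplicate content:", candidate)
        else:
            seen.add(fp)

A raw hash breaks on pages with ads, timestamps or session tokens, so anything serious would normalize the HTML before hashing.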
Yes, but they are no longer distributed to users. I thought that was the original question. I'm keeping them in the database mainly to check for spam and multiple submissions.
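For illustration only, here is roughly how keeping old submissions around lets you reject resubmissions: a table with a uniqueness constraint on the URL, where the insert simply fails for anything already stored. The schema is my guess, not UrlRoulette's actual one:

    # Hedged sketch: store every submission and let a PRIMARY KEY catch
    # resubmissions of the same URL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE submissions (
            url  TEXT PRIMARY KEY,         -- each URL stored only once
            hash TEXT,                     -- optional content fingerprint
            distributed INTEGER DEFAULT 1  -- set to 0 once a link is retired
        )
    """)

    def submit(url, content_hash=None):
        try:
            conn.execute("INSERT INTO submissions (url, hash) VALUES (?, ?)",
                         (url, content_hash))
            conn.commit()
            return True
        except sqlite3.IntegrityError:
            return False  # already in the database: treat as a resubmission

    print(submit("http://example.com/"))  # True, first submission
    print(submit("http://example.com/"))  # False, duplicate rejected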
Of course the author is collecting everything submitted and does whatever he wants with it. Haven't you read the terms of service and privacy policy? Probably not, because there are none, which usually means you can safely assume that everything is collected, stored and exploited (or will be later).