
Wow:

Spam Filter Not Triggered By Typing "Nigger" 85 Times.

Some new gold standard for shittiness in software. I know a lot of Googleplexizens moved to Fb and Apple, but, huh????



Are you really trying to imply that Google does shitty software? Based on a single failure of spam/obscenity filter? Your reality distortion field must be really powerful.


>Are you really trying to imply that Google does shitty software?

I'm happy to make that argument. We are talking about a site with user comments that allowed unescaped HTML [1].

One of the frustrating things about using Google services is that over the years it has become apparent what a mess the user accounts backend is. There must be a dozen or more different 'types' of YouTube accounts (depending on age, whether they opted in or out of various levels of Gmail/G+ integration over the years, etc.), so it's no wonder the attempts to consolidate and merge that mess have been so buggy (although I can see why they are desperate to simplify the situation). With the high staff turnover at Google, who at YouTube still remembers the difference between an account that opted to be tied to a Gmail account in 2009 and one that didn't? (I think at some point the latter became impossible; I seem to remember that I lost access to my oldest YouTube account because I kept refusing to provide email and/or phone number information, and in the end they just killed that type of account.)

[1] http://arstechnica.com/tech-policy/2010/07/pranksters-have-a...


So that you won't be able to move the goalposts later: what exactly are your criteria for saying that a company does shitty software?

As it is now, your comment states the following:

Google did some software with bugs and bad UX => Google does shitty software.

I can't argue against this for quite obvious reasons :)


Allowing unescaped HTML? That's the kind of security mistake a college undergraduate makes.


What are you arguing exactly? Is allowing HTML unescaped a really bad security practice? Sure it is.

Does it say anything about the overall software quality of a corporation that employs more than 45,000 people? No, it doesn't.


It says a lot. It's a fundamental mistake of such egregious proportions that it indicates a complete failure of processes. How did the hiring process accept people that don't understand the basics of web security? How did the management allocate them in a position to write frontend code for one of the largest sites on the web? How did the code review, security audits and static analysis fail to catch such a basic mistake?

I'm sorry if you work at Google and feel personally insulted by this, but Google have put out a lot of crappy software. Good software too, but your original argument seemed to be that Google is so magnificent that they don't have any shoddy products at all, and the very idea was unthinkable. That is clearly false.
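For what it's worth, the fix for this class of bug is a one-liner in most frameworks: escape user text before interpolating it into markup. A minimal sketch in Python (`render_comment` is a hypothetical helper for illustration, not anyone's actual code):

```python
import html

def render_comment(user_text: str) -> str:
    # Escape <, >, &, and quotes so user input cannot inject markup.
    return '<div class="comment">' + html.escape(user_text) + '</div>'

# A script tag in user input comes out inert:
print(render_comment('<script>alert("xss")</script>'))
```

The point of the "undergraduate mistake" charge is exactly that the defense is this cheap; failing to apply it at a site of YouTube's scale is a process failure, not a hard engineering problem.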


A non sequitur if I ever saw one.

1. I never stated that Google software is magnificent. I stated that it is ridiculous to judge a corporate giant with thousands of engineers by pointing to a bad bug created by one team.

2. I do not work at Google anymore. And my view of the company is worse after my employment there. But I reserve my criticism for issues that I consider to be really important like NSA spying or limiting keyword search data to website owners.

3. I feel personally offended by all the emotional FUD going on in what is supposed to be one of the best discussion forums on the internet.


The spam/troll filter on YouTube really is egregious. "Nigger "*85 was just one such example. Obvious spam such as "my stay-at-home mother earned $X last month from Y job. Visit Z to find out more" is way too common. I have only seen two filters on YouTube comments: 1. block URLs, and 2. an upper limit on message length. I'll state it plainly: in this instance, Google does shitty software.
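Those two filters, plus a repeated-word check that would have caught the comment in the headline, fit in a few lines. A hypothetical sketch in Python (the function name and thresholds are invented for illustration; a real filter would of course do far more):

```python
import re
from collections import Counter

MAX_LENGTH = 500          # assumed upper limit on message length
MAX_WORD_FRACTION = 0.5   # flag if one word dominates the comment

def looks_like_spam(comment: str) -> bool:
    if len(comment) > MAX_LENGTH:
        return True
    if re.search(r"https?://", comment):   # block URLs
        return True
    words = comment.lower().split()
    if len(words) >= 10:
        # One word repeated over and over, e.g. the same word 85 times.
        _, top_count = Counter(words).most_common(1)[0]
        if top_count / len(words) > MAX_WORD_FRACTION:
            return True
    return False

print(looks_like_spam("spam " * 85))   # a single word repeated 85 times
print(looks_like_spam("Great video, thanks for sharing the tutorial!"))
```

This is deliberately crude; the complaint in the thread is not that Google lacks sophisticated NLP, but that even checks this trivial appear not to be running.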

As a counterexample, most comments on Slashdot are far from ideal, but Slashdot has long had filters in place to prevent obvious trolling such as these. Given /.'s OSS-friendliness in general, I'm sure they would have given Google these filters, if Google had only asked.


> in this instance, Google does shitty software.

Worse: it's doing shitty text mining, which is something they usually do pretty well (seeing as it's their core competency).

No-one complains when Apple messes up their cloud data storage (well, they do, but most people just say "OK, local storage + Dropbox still works"), but if the next iOS looked like a late 90s Java UI it wouldn't be a good sign.


Slashdot providing Google with algorithms to detect spam? That's adorable. I respect Slashdot, but claiming that they may have better NLP algorithms than Google is absolutely unbelievable.


> claiming that they may have better NLP algorithms than Google is absolutely unbelievable.

That's why I did not make that claim. The NLP talent at Google is likely better than that at any other company or university. My claim is that their expertise is not used for filtering Youtube comments.


Care to provide ANY facts beyond anecdotes that their expertise is not used for filtering Youtube comments?

BTW you did make a claim that Slashdot would give filters to Google so there is that.


Sure, I found some examples in this very comment thread! These show that filtering is effectively not being done. And it's not anecdotes based on single/rare comments.

https://news.ycombinator.com/item?id=6748803


All these examples are anecdotal evidence provided by a biased party.


I've seen that same spam, and I rarely look at youtube comments. Unless I'm extremely lucky to have seen the exact same comments, this is a widespread problem and proves that even trivial filtering is not being used to block rampant spam.


So, more anecdotal evidence. If you're really trying to say that your personal experience, plus that of a few other people, proves that even trivial filtering is not being used, then my arguments won't change anything; you already know everything, it seems.


The argument that there is a filter set up to block spam can be disproved by a single comment that would have been blocked by such a filter. I'm not sure what kind of evidence would actually sway you, but the reasoning seems sound to me: 1. widespread spam exists; 2. a week later it's still happening; 3. therefore any filter is not set up in a way that blocks spam.


Apparently said "expertise" failed to tag 80 or so repetitions of the word "Nigger" as spam. Color me not impressed -- an undergrad could do better.


Reducing spam isn't just about having a better algorithm (short of strong AI, and even then two people can reasonably come to a disagreement about whether something is spam or not). It helps a lot to have the co-operation of the users. Something YouTube used to have, but doesn't anymore.

You have been arguing that pissing off the user base doesn't matter, but there is a real cost. Fighting your users means things like people not reporting spam anymore, or deliberately misreporting things that aren't spam.


You have zero proof that any of this matters to a significant portion of the userbase. And I was arguing that pissing off the small part of the user base that wasn't happy with Google in the first place doesn't matter. I stand by that argument. The majority won't care and will enjoy seeing relevant comments from their G+ friends under YT videos.



