
>The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being

Like human brains?

>are quick to assume that this "AI" can make sense of a picture and understand it, when all it does is apply certain methods for a certain use case.

Like human brains?



No, not at all. If you only showed it a bunch of pickup trucks in various colors, it would be really good at identifying pickup trucks. But if you then showed it a Prius, or a motorcycle, it would have no idea that it was looking at a vehicle. A human brain wouldn't have much trouble with that, though, because it associates more information with the vehicle idea than just statistical similarity to previously seen shapes, and can extrapolate without having direct previous experience with the object being seen.
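To make that concrete, here's a toy nearest-centroid sketch (the feature vectors are invented for illustration; a real vision model would learn its features from pixels, but the closed-world limitation is the same):

    import numpy as np

    # Hypothetical 3-D features (length_m, height_m, bed_flatness) -- invented
    # for illustration; a real model would learn its features from pixels.
    trucks = np.array([[5.8, 1.9, 0.9],   # red pickup
                       [5.5, 1.8, 0.8],   # blue pickup
                       [6.0, 2.0, 0.9]])  # green pickup

    centroid = trucks.mean(axis=0)  # the model's entire notion of "pickup truck"

    def classify(x, threshold=1.0):
        # The only question this model can answer is "how close is x to the
        # pickups I saw?" -- there is no concept of "vehicle" to fall back on.
        dist = np.linalg.norm(x - centroid)
        return "pickup truck" if dist < threshold else "unknown"

    prius      = np.array([4.6, 1.5, 0.1])
    motorcycle = np.array([2.1, 1.1, 0.0])
    print(classify(prius))       # "unknown" -- not "some other kind of vehicle"
    print(classify(motorcycle))  # "unknown" -- the shared vehicle-ness is invisible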


If you showed a small child 10 pictures of pickup trucks and told them "These are cars," then showed them a motorcycle and asked "What is this?", what would you expect to happen?

Remember, this child has never been on the road, never driven a car, never had the mechanics of locomotion taught to them. All they know is that objects that are longer than they are tall with a flat bed on one side and wheels on the bottom are classified as cars.

Once the child (or machine) has more information to associate with the 'vehicle idea', it can call on that information when it sees shapes also associated with the 'vehicle idea', and extrapolate without direct previous experience of the object being seen.


Trucks are generally not classified as cars, nor are motorcycles. These are all types of vehicles, per my original terminology. I actually did a similar experiment with my friend's daughter (3 years old) and she was able to figure it out just fine. Humans are generally able to extrapolate that things with wheels move, and if they have a seat, it's meant for someone to sit on, while it's moving. Hence a vehicle. It's this level of conceptual understanding and "how would this thing work" thinking that ML lacks in comparison to human brains. People use more than just sight recognition to identify new objects, while current ML models do not.


Maybe some current implementations lack the ability to make these connections, but it is in no way even a small stretch to conceive of a machine that understands "Wheels are for moving," "Seats are for passengers," and "Things that have both wheels and seats are probably vehicles."

So when that machine learning algorithm recognizes wheels in a picture and recognizes seats in the same picture, it searches for results that include both wheels and seats.

The human brain does not inject any magic into this process.
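A sketch of what that compositional step could look like (the part detector here is a hard-coded stub standing in for a real model, and the concept table is invented for illustration):

    # Minimal sketch of composing part detectors into a concept, assuming we
    # already have "wheel" and "seat" detectors (stubbed out below).

    def detect_parts(image):
        # Stand-in for real detectors; a real system would run an object
        # detector and return the parts it actually found in the image.
        return {"wheel", "seat"}   # pretend both were found

    KNOWN_CONCEPTS = {
        "vehicle":   {"wheel", "seat"},  # wheels are for moving, seats for passengers
        "furniture": {"seat", "leg"},
    }

    def infer_concept(image):
        parts = detect_parts(image)
        # A concept applies when all of its defining parts are present.
        return [name for name, required in KNOWN_CONCEPTS.items()
                if required <= parts]

    print(infer_concept("motorcycle.jpg"))  # ['vehicle']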


It sort of does, though. Let's say we train an ML implementation so that it can recognize things with wheels and seats as vehicles. Now we show it a hovercraft. What will it do? How about a helicopter? All the human brain needs is a single example of people getting in or on something, and it transporting them from point A to point B in order to infer that the thing is a vehicle of some sort. This is because we are able to infer purpose of an object even if we have never seen it before. ML is just statistics - it implies no meaning or comprehension whatsoever beyond "thing A is statistically most like thing B I have seen before". There's an important difference between recognition and understanding, and current ML techniques are solidly in the former camp.
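That last point is easy to make concrete: strip away the training machinery and recognition reduces to something like nearest-neighbor lookup in a feature space. A toy sketch with invented embeddings:

    import numpy as np

    # Invented 2-D "embeddings" of previously seen things; a real model
    # would produce these from pixels.
    memory = {
        "pickup truck": np.array([0.9, 0.1]),
        "motorcycle":   np.array([0.7, 0.3]),
        "armchair":     np.array([0.1, 0.9]),
    }

    def most_like(x):
        # 1-nearest-neighbor: no notion of purpose, just distance in feature space.
        return min(memory, key=lambda k: np.linalg.norm(x - memory[k]))

    hovercraft = np.array([0.5, 0.5])  # roughly equally unlike everything seen
    print(most_like(hovercraft))       # "motorcycle" -- an answer, not an understanding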


The often-forgotten difference between ML and humans is that we learn from stereoscopic video streams, not from a bunch of static pictures. There's a lot more information in a few seconds of watching cars on the road than in a thousand pictures of different cars. We get to see the 3D picture (we have dedicated circuits for that), hear 3D audio, and perceive temporal data. We correlate all that and many more data sources to form categories.

ML trained on a bunch of static pictures is like humans dealing with those abstract geometric riddles used on IQ tests. They're difficult for us because they're not related to our normal, everyday experience.


Neural networks can learn new categories of things like that with about 5 examples. They are already outperforming humans on some tests. https://news.ycombinator.com/item?id=11737640
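For anyone curious what the few-example regime looks like mechanically, it's often prototype-style few-shot classification: average the handful of example embeddings per class, then classify by nearest prototype. A bare-bones sketch (made-up embeddings; real systems such as matching or prototypical networks learn the embedding function first):

    import numpy as np

    def prototypes(support):
        # One prototype per class: the mean embedding of its ~5 examples.
        return {label: np.mean(examples, axis=0)
                for label, examples in support.items()}

    def classify(x, protos):
        return min(protos, key=lambda label: np.linalg.norm(x - protos[label]))

    rng = np.random.default_rng(0)
    # Made-up embeddings: 5 examples each of two never-before-seen categories.
    support = {
        "zebra":  [np.array([1.0, 0.0]) + 0.1 * rng.standard_normal(2) for _ in range(5)],
        "toucan": [np.array([0.0, 1.0]) + 0.1 * rng.standard_normal(2) for _ in range(5)],
    }
    protos = prototypes(support)
    query = np.array([0.9, 0.1])
    print(classify(query, protos))  # "zebra"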


Not exactly: if you've never seen a particular kind of feather before, you may not recognize it at first sight, but you'll almost certainly sit down, examine it, and eventually conclude that it's a feather -- the neural networks we're using aren't prepared to do this kind of analysis yet.



