Of course, there's severe bias here, in the sense that what we consider abstraction is by definition "human shaped" abstraction
If multiple humans try to "abstract" a cat, the overlap in underlying processes will be pretty big, making it more likely that we can recognise each other's abstractions.
> Of course, there's severe bias here, in the sense that what we consider abstraction is by definition "human shaped" abstraction
I can read the words here, but I don't understand the meaning.
We abstract to find a common set of features in things that are supposed to be the same but that are not present in things that are not supposed to be the same. Grouping these features then produces higher level abstractions, and so on.
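That process can be sketched in a few lines. This is a toy, not a real learner: the feature sets below are invented purely for illustration.

```python
# Toy sketch of abstraction as "features shared by everything that should be
# the same, and absent from everything that should be different".

def abstract(positives, negatives):
    """Features common to all positive examples and present in no negative one."""
    common = set.intersection(*positives)               # shared by all "same" things
    excluded = set.union(*negatives) if negatives else set()
    return common - excluded                            # minus "different" things

# Hypothetical feature sets for illustration
cats = [
    {"whiskers", "fur", "tail", "meows"},
    {"whiskers", "fur", "tail", "climbs"},
]
dogs = [
    {"fur", "tail", "barks"},
]

print(abstract(cats, dogs))  # {'whiskers'}: fur and tail are shared with dogs
```

Grouping the resulting feature sets and running the same operation again is the "higher level abstractions" step.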
Where would the bias be?
Even if the features differ, the process is the same.
And even the features are often the same. If you reverse a DCNN to see what it uses to classify things as "cats", expect to see whiskers and fur.
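The "reverse the network" idea is usually called activation maximization: gradient-ascend an input until a chosen unit fires strongly. As a hedged sketch, here it is on a single linear unit with made-up weights instead of a real DCNN; the input it converges to is just a scaled copy of the unit's preferred feature pattern.

```python
# Minimal sketch of activation maximization on a toy linear unit.
# For activation = dot(w, x), the gradient w.r.t. x is simply w,
# so the optimized input drifts toward the unit's weight pattern.

def visualize_unit(w, steps=100, lr=0.1):
    """Gradient-ascend an input x to maximize the unit's activation."""
    x = [0.0] * len(w)
    for _ in range(steps):
        x = [xi + lr * wi for xi, wi in zip(x, w)]
    return x

# Pretend each dimension is a feature detector: [whiskers, fur, wheels]
cat_unit = [1.0, 0.8, -0.5]
print(visualize_unit(cat_unit))  # roughly 10 * w: strong on whiskers/fur, negative on wheels
```

A real DCNN needs backprop through many layers plus regularization to get recognizable images, but the principle is the same: the recovered input shows what the unit responds to.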
Think of Bugs Bunny. He looks nothing like a real rabbit, yet humans recognise him as a rabbit (presumably) because we look at the characteristics that separate him from a normal human, then compare those characteristics against our mental list of things with those characteristics (long ears, big feet, eats carrots) and get a rabbit.
If he'd been made to look like a rabbit-octopus hybrid instead of rabbit-human, we may have struggled more.
Computers don't look at things from a human perspective; they're still good at abstraction, just different from human abstraction. i.e. there's a human bias in there.
That's OK though; the objective is to make a computer that sees things the way people do, so it's a bias we want.
However, the issue isn't that the computer's not a sentient being and therefore can't abstract things it's never seen before; it's only that the algorithm hasn't been written to take sufficient account of human bias.
I think the word you're looking for is "familiarity", insofar as it describes a particularly efficient means of recognition. E.g. humans have become pretty good at identifying cats.
I don't see a fundamental difference between biological and electronic neural nets, so please take the following with a physicalist grain of salt. Imho, precisely because NNs will be fed nothing other than the reality (physical or virtual) we live in, they should gradually develop the same familiarity as humans have; i.e. nothing more and nothing less than the elements of our lives/civs. Visually, lots of cats, lots of cars, mountains and coasts; functionally, all the tasks we accomplish daily, like driving or cooking or cleaning.
I don't really think you can hard-code "human bias", as it's an emergent property of our biology: it's too complex (we don't really understand much of it; imho you're bound to miss the mark and induce subjective biases instead), and somewhat contradictory to how NNs are supposed to evolve (thinking long term here). Basically, I don't think it would be practical or cost-efficient to introduce too many perturbations into deep learning; better to work on refining the process itself. Think of plants: you can tweak the growing conditions all you want, but the root deciding factors lie in genetics (their potential, and in understanding how to maximize it).
Another way to word this: we should apply sound evolutionary principles (Darwin etc.) in "growing" AI at large. Because AI and humans share the same environment, we should see converging "intelligence" (skills, familiarity, etc.). It's quite a fascinating time from an ontological perspective.
It's interesting to think about what the limits of an AI that doesn't have a full human experience are. I think you're probably right that machine vision will be competitive with human vision. It's already much better in specialized areas.
General purpose machine translation is harder, for instance. Brute force algorithms have gotten decent, but aren't in the same ballpark as humans (though professional translation services now often work by correcting a machine translation). However, MT systems trained on a specific domain do much better (medical or legal docs, etc).
What would be the hardest task for machines that's trivial for humans? Maybe deciding if a joke is funny or not?
Perhaps not the hardest, but one where there's tons of room for improvement: the Story Cloze Test [1] is a test involving very simple, five-sentence stories, where you pick the ending that makes sense out of two endings.
A literate human scores 100% on this test. No computer system so far scores better than 60%. (And remember that random guessing gets 50%.)
Interesting study; whilst it's possible to guess which ending is expected as correct, the alternative could easily be argued. For example, in the case of Jim getting a new credit card, I recall during my uni days many students took that exact approach to debt...
Good point; I'd not considered whether the human imprint would be down to familiarity (individual's) or in-built through evolution (inherited familiarity); likely a combination of both.
In fact, I recently read that chimpanzees raised by humans are believed to identify as human rather than chimp; so individual familiarity does seem stronger.
The book "We Are All Completely Beside Ourselves" is fiction, but refers to findings from real studies.
You implicitly (and I think without realising) presume objectivity + complete knowledge in the observer.
Human perception is heavily biased towards features that had evolutionary advantages, and limited by whatever technical flaws our eyes/brains/etc have. That's a selection bias in our perception of information, in our processing of said information, and therefore in the abstractions that result from it.
I agree with what you say, but it doesn't support your earlier statements.
I presume it's possible that the limitations of our visual system mean we may miss powerful features, and hence the ability to build some more powerful abstractions. (I didn't even argue this; I just pointed out that the process is the same even if the features differ.)
But I don't see how this supports your original claim of bias, which was: "If multiple humans try to "abstract" a cat, the overlap in underlying processes will be pretty big, making it more likely that we can recognize each other's abstractions."
If humans are good at recognizing each others' abstraction, that's a validation that low-pass (for lack of a better term) filtering the features due to human's physical design still creates very good abstractions and classifiers. That is to say, if anything you're confirming that humans are designed in a way that makes the abstractions they can make maximally useful.
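A toy illustration of that point, using made-up "cat"/"dog" feature centroids: crudely "low-pass filtering" the input by dropping a feature dimension can still leave a perfectly usable classifier.

```python
# Toy nearest-centroid classifier that can be restricted to a subset of
# feature dimensions, mimicking a perceptual system that filters its input.

def nearest_centroid(point, centroids, dims):
    """Classify by nearest centroid, using only the listed feature dimensions."""
    def dist(a, b):
        return sum((a[d] - b[d]) ** 2 for d in dims)
    return min(centroids, key=lambda label: dist(point, centroids[label]))

# Hypothetical 3-feature centroids and labelled samples
centroids = {"cat": (1.0, 1.0, 0.0), "dog": (0.0, 0.0, 1.0)}
samples = [((0.9, 1.1, 0.1), "cat"), ((0.1, 0.2, 0.9), "dog")]

for dims in [(0, 1, 2), (0, 1)]:  # full perception vs. a filtered one
    correct = sum(nearest_centroid(p, centroids, dims) == label for p, label in samples)
    print(dims, correct / len(samples))  # accuracy is 1.0 in both cases
```

On this (deliberately easy) data the filtered classifier loses nothing; the interesting question is whether human perception happens to keep the dimensions that matter for human purposes, which is the bias being discussed.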
That's exactly what I and others have been arguing. Now to be clear: it's not that these classifications are wrong, just that out of all possible classifications we could have found, we will most likely find the ones that fit the human perspective of the world.
Think of the Turing test and its criticisms; it kind of has the same issues.
PS: I've upvoted every comment of yours; asking questions like this should be encouraged :)
> still creates very good abstractions and classifiers.
My point is that "good" and "bad" are not objective here, but depend on human use-cases.
Now to be clear: I'm not disagreeing with you! These are good abstractions, for humans. It lets us communicate concepts easily, which is great! But it might not be the best abstraction in every circumstance.
For example, I recall reading an article that said that AI is better at spotting breast cancer from photos (which is essentially interpreting abstract blobs as cancer or not). The main reason seems to be that it is not held back by the human biases in perception.
Cats are probably a particularly unfortunate example to use in comparing abstraction-forming capabilities, as, given our history, it's highly likely that we come supplied with some dedicated cat-recognition circuitry.