This is one of the most ridiculous stories I have ever read. How can they so quickly dismiss Google's search algorithm as something to simply fall back on? Microsoft has been chasing its tail for years trying to be as relevant as Google and has never achieved it.
It sounds like the author is suggesting dynamic web pages built by Watson that would answer questions (summarise court cases, etc.). The underlying problem is: where does all this data come from? It sounds like Watson would interpret multiple data points from around the web to compile the information. How can anyone be sure that information is correct? Google at least points you in the direction of the information, and then you make an informed decision yourself about whether it is correct.
This sounds like something we already have: Google and Wolfram Alpha. Problem sorted.
This is certainly not ridiculous. It is perfectly legitimate to wonder whether more AI-driven systems such as Watson will take over from standard search techniques. Regarding Bing, it is as you said: Microsoft has been chasing Google, but Watson is a totally different approach, so the question is still open.
The concept of the AI is not what I am disputing, it's this particular story. It never explains how Watson would actually replace search, and it dismisses Google's algorithm as if it were simple to replicate.
I don't see it as replacing search either; people search for websites. What is the end result, that Watson replaces all sites on the internet with its own dynamic page filled with its own information?
How would Watson choose how to display the information it returns to me? Am I getting the full story?
This article glosses over so many specifics that I cannot take it seriously.
Neither Google nor WA is good at understanding the semantics behind a natural language question - something that Watson has been designed to do. Also, showing a percentage of certainty, as they do for medical queries, could be useful in many fields, particularly academic research, provided answers come with citations, which should be easy to add. I see Watson rather as a tool that could make a dent in specialist fields and trickle down from there as the hardware to do it becomes cheaper.
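To make that concrete, here is a toy sketch of what a confidence-scored, cited answer might look like as a data structure. Everything in it is hypothetical (the class names, the 0.5 threshold, the example sources) and is not how Watson actually represents answers; it just illustrates the "answer + certainty + citations" idea.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    title: str
    url: str

@dataclass
class ScoredAnswer:
    text: str                                       # candidate answer text
    confidence: float                               # estimated certainty, 0.0-1.0
    citations: list = field(default_factory=list)   # supporting sources

def best_answer(candidates, threshold=0.5):
    """Return the highest-confidence candidate, or None if nothing clears the threshold."""
    ranked = sorted(candidates, key=lambda a: a.confidence, reverse=True)
    return ranked[0] if ranked and ranked[0].confidence >= threshold else None

# Example: two made-up candidates for one research question
candidates = [
    ScoredAnswer("Ruling upheld on appeal in 2011.", 0.83,
                 [Citation("Court of Appeal summary", "https://example.org/case")]),
    ScoredAnswer("Ruling overturned in 2012.", 0.41,
                 [Citation("Blog commentary", "https://example.org/blog")]),
]
top = best_answer(candidates)
print(f"{top.text} (confidence {top.confidence:.0%}, source: {top.citations[0].url})")
```

The useful part for a researcher would be exactly that last line: the claim, how sure the system is, and where to go to check it.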
And I completely agree with this, there is definitely a niche. Very much in the same way that Wolfram has a niche. It's never going to replace search but it definitely plays an important part in the way we look for information.
If anything, IBM could replace or merge with Wolfram.
I completely agree. Google has a huge, possibly even insurmountable advantage over any competitor: the sheer volume of user queries and clicks that helps it continuously refine the context of the information gathered by its spiders.
Information <--> Actual User Questions <--> Actual User Clicks
It is the continuous feedback loop between the three that makes Google what it is. Without the latter two, Watson is seriously disadvantaged.
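For anyone who wants the feedback loop spelled out, here is a deliberately naive sketch (made-up URLs and counts, nothing like Google's actual ranking): real queries and real clicks are logged, and results that users actually picked get boosted the next time the same query arrives. Watson, without that stream of questions and clicks, has no equivalent signal to learn from.

```python
from collections import defaultdict

# Hypothetical click log: (query, clicked_url) pairs harvested from real users.
click_log = [
    ("ibm watson search", "https://example.com/watson-overview"),
    ("ibm watson search", "https://example.com/watson-overview"),
    ("ibm watson search", "https://example.com/press-release"),
]

# Feedback loop, step 1: count how often each result was chosen for each query.
click_counts = defaultdict(int)
for query, url in click_log:
    click_counts[(query, url)] += 1

# Feedback loop, step 2: next time the query is seen, rank by those counts.
def rerank(query, candidate_urls):
    """Order candidates by how often real users picked them for this query."""
    return sorted(candidate_urls,
                  key=lambda url: click_counts[(query, url)],
                  reverse=True)

print(rerank("ibm watson search",
             ["https://example.com/press-release",
              "https://example.com/watson-overview"]))
```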