With all the publicity over IBM’s Watson, there has been a lot of discussion about how far computers can go to “replace” people. Of course, I think that is not really the question to ask. The better question is how computers can complement people. This complementary perspective is also held by Dr. Aditya Vailaya, Chief Scientist for Retrevo.com. He has a PhD in Computer Science with a specialization in Machine Learning and Statistical Pattern Recognition, as well as significant experience with business applications of this technology. We recently discussed the role of computers and people in handling complex topics that require some level of both learning and semantic analysis.
I have often covered Retrevo’s creative surveys and other work in posts such as Electric Gadget Inter-City Challenges and Are You a Digital Spy? So I was very interested to learn something about what goes into their services.
Aditya pointed out that Watson has demonstrated that you can build a machine to handle some level of cognition. However, this takes considerable effort. Watson was built and trained by a team of experts over a number of years. It uses mathematical algorithms that, coupled with semantic analysis, allow it to understand a natural language question and determine the probability that its answer is correct. However, it is good for only a very specific task. The years of training may make it better than most, if not all, humans at playing Jeopardy, but it will fail against humans in most of the other tasks we face every day.
To put the human versus computer issues in more perspective, it is useful to look at the work of a pair of researchers, Martin Hilbert (USC) and Priscila López (Open University of Catalonia), who have been looking at the growth of computing power and storage over the past twenty-plus years. Their recent research noted that, “to put our findings in perspective, the 6.4×10^18 instructions per second that humankind can carry out on its general-purpose computers in 2007 are in the same ballpark area as the maximum number of nerve impulses executed by one human brain per second.” So all the computers in the world have now reached the capacity of one person. Congratulations. Aditya feels that these numbers are in the right ballpark.
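To make the scale of that comparison concrete, here is a rough back-of-envelope sketch in Python. The neuron count, synapse count, and firing rate below are loose textbook-style estimates I am plugging in for illustration, not numbers from Hilbert and López's paper, so treat the output as an order-of-magnitude check rather than a result.

```python
# Rough order-of-magnitude check of the Hilbert/Lopez comparison.
# The brain figures below are loose illustrative estimates, not values from their paper.

NEURONS = 1e11             # roughly 100 billion neurons (assumed estimate)
SYNAPSES_PER_NEURON = 1e4  # roughly 10,000 connections each (assumed estimate)
MAX_FIRING_HZ = 1e2        # up to ~100 impulses per second per neuron (assumed upper bound)

# Counting each synaptic event as one "operation" gives a crude upper bound
brain_events_per_sec = NEURONS * SYNAPSES_PER_NEURON * MAX_FIRING_HZ

WORLD_COMPUTE_2007 = 6.4e18  # instructions/sec, the figure quoted from Hilbert and Lopez

print(f"Brain (crude upper bound): {brain_events_per_sec:.1e} events/sec")
print(f"World's general-purpose computers, 2007: {WORLD_COMPUTE_2007:.1e} instructions/sec")
print(f"Ratio: {WORLD_COMPUTE_2007 / brain_events_per_sec:.0f}x")
```

With these assumptions the two quantities land within a couple of orders of magnitude of each other, which is about all “same ballpark area” claims.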
So the issue is not whether computers will outpace people but how the two can work together. Computers are very good at doing the boring, tedious, repetitive tasks that would drive people crazy, and they do them at a rate and scale far beyond what people can manage even with a fresh start on their best days. This frees people up to do the more complex and interesting tasks.
James Taylor makes a similar point in his post, Decision Management in the New York Times, commenting on a New York Times article, Smarter Than You Think, on e-discovery. He concludes that, “automation of decisions sometimes reduces the need for staff. Much more often it innovates and allows companies to apply the same staff to more problems by replacing boring, mechanical work with more interesting, more difficult work that is hard to automate or where automation is not desirable.”
Aditya noted that as computers become able to handle increasingly complex tasks, people are freed to do ever more interesting tasks. He also noted one of the main differences between people and computers: people have motivations. They have the will to survive, to improve themselves, and much more. Despite 2001: A Space Odyssey, computers do not have feelings or motivations. Perhaps this lack of motivation is one reason they do not complain about long work hours, but even that idea might be giving them too much intelligence.
Aditya said that Watson takes what the e-discovery tools do a step further. Those tools run search queries and use semantic analysis to sort out what is relevant. Watson takes in full sentences in a natural language format and provides results back in a similar format. However, it can still make errors; like the e-discovery tools, it deals in probabilities of being correct.
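To make that idea concrete, here is a minimal sketch, loosely in the spirit of the approach described above: score candidate answers, estimate a probability of being correct, and only answer when that probability clears a threshold. The candidate answers and confidence values are made up for illustration; this is not IBM's code.

```python
# Minimal sketch of confidence-thresholded answering: rank candidate answers
# by an estimated probability of being correct and stay silent when even the
# best candidate is too uncertain. Candidates and scores are hypothetical.

from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    confidence: float  # estimated probability of being correct, 0..1

def pick_answer(candidates: list[Candidate], threshold: float = 0.5) -> str | None:
    """Return the highest-confidence answer, or None (don't answer) if the
    best candidate falls below the threshold."""
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c.confidence)
    return best.answer if best.confidence >= threshold else None

# Hypothetical example: three candidate answers with made-up confidences
candidates = [
    Candidate("Toronto", 0.14),
    Candidate("Chicago", 0.78),
    Candidate("New York", 0.31),
]
print(pick_answer(candidates))                  # "Chicago"
print(pick_answer(candidates, threshold=0.9))   # None -- not confident enough to answer
```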
Because these tools require extensive training to get started and constant re-training to stay on target, there remains a strong role for people even within the use of these tools. Perhaps most importantly, people need to determine the questions these tools are targeted to address.
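A toy sketch of that train-and-re-train loop might look like the following. The “model” here is deliberately trivial and the example texts and labels are invented; the point is the shape of the cycle, with people supplying the corrections that keep the system on target.

```python
# Minimal sketch of a train-then-re-train loop with a person in it.
# The toy model and data are hypothetical stand-ins for a real classifier.

def train(labeled_examples):
    """Toy 'model': remember the words seen in examples people marked relevant."""
    relevant_words = set()
    for text, label in labeled_examples:
        if label == "relevant":
            relevant_words.update(text.lower().split())
    return relevant_words

def predict(model, text):
    """Score a document by how many of its words the model has seen as relevant."""
    words = text.lower().split()
    hits = sum(1 for w in words if w in model)
    confidence = hits / max(len(words), 1)
    return ("relevant" if confidence > 0.3 else "irrelevant"), confidence

# Initial training set labeled by people
labeled = [("great camera with sharp lens", "relevant"),
           ("shipping box arrived dented", "irrelevant")]
model = train(labeled)

# Re-training: a person reviews a low-confidence prediction and corrects it
new_doc = "battery life on this camera is excellent"
label, confidence = predict(model, new_doc)
if confidence < 0.5:                      # flag for human review
    human_label = "relevant"              # the reviewer's judgment (hypothetical)
    labeled.append((new_doc, human_label))
    model = train(labeled)                # re-train with the corrected example
```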
I asked Aditya how Retrevo uses machine learning. He said they provide information to consumers on products available on the Web, such as digital cameras. To offer the best information, they use computers to scan the Web for product information and reviews. The volume of content out there is certainly more than a person, or even a large team of people, can address, especially at the speed required to stay current.
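As a hedged illustration (not Retrevo's actual system), here is the kind of text-classification step such a pipeline might include: using scikit-learn to sort crawled pages into product reviews versus everything else. The training snippets and labels are made up for the example.

```python
# Sketch of sorting crawled web pages into "review" vs. "other" with a simple
# text classifier. Training data here is tiny and invented; a real system would
# use far more labeled pages.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled training pages (stand-ins for pages people have already sorted)
pages = [
    "This camera's autofocus is fast and the image quality is superb",
    "Battery lasts two days, screen is bright, highly recommend this phone",
    "Subscribe to our newsletter for weekly coupons and deals",
    "Login to your account to track your recent orders",
]
labels = ["review", "review", "other", "other"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(pages, labels)

# New pages pulled in by the crawler get sorted automatically
new_pages = ["The lens is sharp but low-light shots are noisy",
             "Reset your password using the link below"]
print(classifier.predict(new_pages))   # the classifier's guess for each new page
```

The heavy lifting of reading at web scale happens in code like this; the judgment about what counts as a useful review, and how to adjust the classifier when it drifts, stays with people.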
This is a great example of the right place to use machine learning and semantic technology. People are still required to train the computers, point them at the right sites, and monitor the results to determine adjustments to the algorithms. I would trust Retrevo’s computers over a team of people to find the wide array of current options and viewpoints in a product category. It is then up to the people at Retrevo to keep pointing them in the right direction, as well as pushing the capabilities of the computers further.
Thanks Bill for this very provocative post. I think it points to a number of issues as we move into the future in terms of the need for a new kind of literacy and awareness of how to design and optimize our interactions in social reality and networked space. At this point, as individuals, we're constrained to a brute force mode of rolling through all the different streams we encounter, each imposing its own cognitive requirements and refresh models, with varying layers of active or passive involvement. In the near future we're going to need to break these streams up into constituents, then re-form them, applying the appropriate semantic glue in order to maintain their coherence. This, as I see it, is the big challenge for adoption of a really semantically-aware information universe.
Posted by: JoeRaimondo | April 13, 2011 at 02:03 PM
Joe, thanks for your comment. We are moving toward a more networked cognition. I know that I am more likely to use the cloud to store the details about any event or interview, etc. I know where to go to find the details, but if someone asks me about them when I do not have access to the cloud, I have to defer. On a parallel note, I know from my early days as a cognitive psychologist that language helps structure our thought.
Posted by: bill Ives | April 13, 2011 at 02:11 PM