A while back I wrote about the exchange in Slate between Gladwell and James Surowiecki, in a post, Blink vs. Wisdom of Crowds – Experts vs. the Multitude. Blink by Malcolm Gladwell can be taken to support the value of individual expert judgment, a reading Gladwell himself agrees with. Surowiecki’s book, Wisdom of Crowds, argues for the ability of large numbers of “regular” people to out-think experts in certain situations.
In what appears to be a rematch of the Gladwell vs. Surowiecki debate, Nick Carr writes an interesting comparison of Slashdot vs. Memeorandum. Slashdot and Memeorandum are both websites that highlight the headlines of technology-related stories appearing in blogs, newspapers, and other media. Slashdot’s stories are chosen by a human editor; most are selected by Rob Malda, its founder. Memeorandum makes its selections with a confidential software algorithm, written by founder Gabe Rivera, that draws on the wisdom of the internet crowd.
Nick Carr does an analysis of the top stories on a particular day in early March 2006 and finds the machine lagging far behind the human editor. He concludes that, “One can see on Slashdot an active, interested, engaged mind at work - the mind of a skilled editor. In comparison, Memeorandum feels flat and wooden, like the output of a computer.” Score one for Gladwell. He says that Memeorandum is no worse than other sites that use algorithms to filter content. It provides average results. In many cases this may be fine. He goes on to say, “The crowd aggregates all individuals' knowledge about variables while balancing out their personal biases and idiosyncracies. It's not the "wisdom" of crowds that makes crowds useful, in other words; it's their fundamental mindlessness. What crowds are good for is producing average results that are not subject to the biases and other quirks of human minds.” There is more and I recommend going to his well-written piece, The editor and the crowd. There are some excellent comments to the post as well.
The machine can certainly bring in a larger pool of data points. Google provides what it thinks the average reader wants, and it looks at a lot more material than even the most skillful researcher can assess. But when you get 67,000,000 hits on a keyword in Google, you only see a few hundred at most, with the caveat that it has eliminated the redundant ones. I will add the caveat that I have never actually gone down to the bottom of one of these 67,000,000-hit results, but I have seen the Google notice at the end of many searches, greatly reducing the number of hits you actually see. However, Google is also free, easy to use, and better than most other options with those qualities. Score one for Surowiecki.
As we look at all these “web 2.0” applications that use algorithms to filter content, such as Newsvine, Digg, Memeorandum, and even Google, we should keep in mind that what we are largely seeing is the average mind aggregated. This may be very useful for many things, including learning what the average mind within a subgroup is thinking about. Or it could be what all the minds in a subgroup are thinking about (e.g., a del.icio.us tag) if we want to go through all of the search returns. Like many of these issues, I do not propose to have the answers, but these are issues worth thinking about.
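To make the “average mind aggregated” point concrete, here is a minimal sketch of the idea behind this kind of aggregation. It is not Memeorandum’s or any other site’s actual algorithm (those are confidential); the story names, scores, and bias model are all hypothetical, just to show how averaging many idiosyncratic readers tends to cancel out individual quirks and recover an “average” ranking:

```python
import random

random.seed(42)

# Hypothetical stories with an underlying "true" quality (made-up values).
true_quality = {"story_a": 0.9, "story_b": 0.5, "story_c": 0.2}

def reader_score(quality):
    """One simulated reader's rating: true quality plus personal bias."""
    bias = random.gauss(0, 0.3)  # each reader's idiosyncratic taste
    return quality + bias

def crowd_ranking(n_readers):
    """Rank stories by their average score across n_readers readers."""
    averages = {}
    for story, quality in true_quality.items():
        scores = [reader_score(quality) for _ in range(n_readers)]
        averages[story] = sum(scores) / n_readers
    return sorted(averages, key=averages.get, reverse=True)

print(crowd_ranking(1))     # a single reader's biases may scramble the order
print(crowd_ranking(1000))  # a large crowd: the quirks average out
```

With one reader, personal bias can easily swap the stories around; with a thousand, the noise cancels and the ranking settles on the underlying qualities. That is the "mindlessness" Carr describes: the crowd is good at averages, not at the kind of judgment a skilled editor brings.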
I learned about Nick Carr’s post through one by Cesar Brea, Memeorandum: Does a Diggbot Trump Manual Digg-ing?, who points out the biased samples that automated tools generally deal with.
Bill, I wonder if you would find this interesting: "Emerging Tools for Real-Time Business Model Design and Exploration II," posted by Alex Osterwalder at his blog, Business Model Design and Innovation.
http://business-model-design.blogspot.com/2006/10/emerging-tools-for-real-time-business.html
This type of multi-touch whiteboard will help those who are involved in business model design and innovation.
Posted by: Tomoaki Sawada | October 11, 2006 at 04:14 AM
Thanks for the suggestion. I will check it out.
Posted by: bill Ives | October 11, 2006 at 08:24 AM