The integrity of user reviews
I was unpleasantly surprised to learn from The Daily Background that companies (in this case Belkin) use services like Amazon’s Mechanical Turk to improve their online reviews by paying people to write positive reviews and to rate bad reviews as “not helpful”. For the sake of fairness, the statement of Belkin’s president concerning this incident is available here.
Amazon’s Mechanical Turk (from their website)
is a marketplace for work that requires human intelligence. [...] Mechanical Turk aims to make accessing human intelligence simple, scalable, and cost-effective. Businesses or developers needing tasks done (called Human Intelligence Tasks or “HITs”) can use the robust Mechanical Turk APIs to access thousands of high quality, low cost, global, on-demand workers [...]
Whatever one might think about the Belkin case, it highlights three things:
- As we might have already suspected, user reviews can indeed be bought. In this particular case it was blatantly obvious, but I would not be too surprised if there were agencies that specialize in writing reviews as part of their marketing services.
- It is not clear how one should evaluate the credibility of user reviews. Given that these reviews were actually written by many different people, it is rather unlikely that a detectable pattern would emerge – except perhaps the requested downgrading of bad reviews in this particular case.
- How would one actually establish that the request for user reviews was indeed posted by Belkin (or one of its employees)? This poses a problem of a completely different dimension: somebody might stage such a request deliberately to damage the reputation of a company (as discussed here).
If somebody is willing to pay for such a service, there will be somebody willing to do the job. The construct actually reminds me of how spam networks work: trojans or viruses take control of infected computers, which then report to some kind of central command instance (in many cases an IRC channel). When spam emails have to be sent out, the master forwards the spam email to the so-called zombies, which in turn forward it to millions of people. The catch is that the emails appear to come from the infected computers, making it especially hard to track down the spammers. The same applies to the Belkin case, except that the human intelligence task (HIT) was discovered before it was completed (and hence automatically removed from the list of available HIT jobs). Otherwise it would have been equally hard to establish that the reviews were indeed fabricated.
One possibility (at least for Amazon) would be to allow ratings only from people who actually bought the product (which of course might pose other problems). Alternatively, with user consent, Amazon might indicate whether the reviewer bought the product on Amazon. I suspect it would cost significantly more than $0.65 (the rate the Belkin rep was offering) to have somebody write a positive review for a product they would rather not have bought in the first place.
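The verified-purchase idea could be sketched roughly as follows. Everything here is a hypothetical illustration – the data structures and field names are my own assumptions, not Amazon’s actual data model:

```python
# Sketch: mark or filter reviews based on a (hypothetical) purchase record.
# All names and structures below are illustrative assumptions.

purchases = {
    ("alice", "router-123"),   # (user, product) pairs from order history
    ("bob", "router-123"),
}

reviews = [
    {"user": "alice", "product": "router-123", "rating": 5},
    {"user": "mallory", "product": "router-123", "rating": 5},
]

def annotate_reviews(reviews, purchases):
    """Mark each review as 'verified' when the reviewer bought the product."""
    for r in reviews:
        r["verified"] = (r["user"], r["product"]) in purchases
    return reviews

annotated = annotate_reviews(reviews, purchases)

# The stricter policy from the text would simply drop unverified reviews:
verified_only = [r for r in annotated if r["verified"]]
```

The softer variant (indicating the purchase rather than requiring it) corresponds to keeping `annotated` and displaying the `verified` flag next to each review.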
In any case, it becomes more and more apparent that we might need strong mechanisms to ensure or verify identities: first, to track down questionable behavior, and second, to protect other entities from false accusations or other forms of misconduct. I am well aware that this discussion also has many privacy-related aspects that I haven’t addressed here…
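To make the idea of verifiable identities a bit more concrete: one very simplified scheme would be for a platform to issue each verified account a secret key and require sensitive actions (like posting a HIT) to carry an authentication tag over their content. The sketch below uses Python’s standard `hmac` module; the account names, secrets, and message format are made up for illustration, and a real system would need proper key management:

```python
import hashlib
import hmac

# Hypothetical per-account secret issued by the platform after identity
# verification (illustrative only; real systems need key management).
ACCOUNT_SECRETS = {"belkin-official": b"s3cret-key"}

def sign_request(account, message):
    """Attach an HMAC tag so the platform can later attribute the request."""
    key = ACCOUNT_SECRETS[account]
    tag = hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
    return {"account": account, "message": message, "tag": tag}

def verify_request(req):
    """Check the tag; a request staged by a third party fails verification."""
    key = ACCOUNT_SECRETS.get(req["account"])
    if key is None:
        return False
    expected = hmac.new(key, req["message"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, req["tag"])

genuine = sign_request("belkin-official", "Please review our product")
forged = dict(genuine, tag="0" * 64)  # an impostor cannot produce a valid tag
```

A scheme like this would address both concerns above at once: genuine requests become attributable, and forged requests in someone else’s name become detectable.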
But honestly, what is a review system good for when it lacks credibility? Just imagine the consequences in the case of scientific publications…
Also check out The Noisy Channel to learn how to trade 10 Facebook friends for a Whopper.