Let's try to simplify that.
The idea is that people would be asked to predict how well a specific product will sell. The more accurate an individual's predictions prove to be, the more that person is rewarded, either tangibly (cash, store credit, etc) or intangibly (credibility scores, etc).
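To make the mechanism concrete, here's a minimal sketch of how such a scheme might rank forecasters; the function name, the absolute-error scoring rule and the sample figures are my own assumptions, not anything spelled out in the patent application:

```python
def score_forecasters(predictions, actual_sales):
    """Rank forecasters by how close their sales predictions came to reality.

    predictions: dict mapping forecaster name -> predicted units sold
    actual_sales: units the product actually sold
    Returns a list of (name, absolute error), most accurate first.
    """
    errors = {name: abs(pred - actual_sales) for name, pred in predictions.items()}
    return sorted(errors.items(), key=lambda item: item[1])

# Hypothetical example: three forecasters predict sales of one product
# that goes on to sell 10,000 units.
ranking = score_forecasters({"alice": 9000, "bob": 12000, "carol": 15000}, 10000)
# The most accurate forecaster would earn the largest reward (cash, credit, etc).
```

Whatever the exact scoring rule, the point is that the reward tracks predictive accuracy, not the usefulness of anything the forecaster writes about the product.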
This arrangement serves the retailer's interests far more than it does those of potential buyers. Online outlets such as Apple's iTunes Store and App Store already suffer from a positive feedback loop that gives prominence to popular items.
Asking people to forecast how well products will sell and then devoting marketing resources to the ones they tip for success smacks of a self-fulfilling prophecy. But if you're going to propose spending $100,000 to promote a product, you'll look good if it turns out to be a big seller - never mind that it might have sold just as well without the promotion.
What do you need from a review? Please read on.
What you really care about is whether it does the job you want. You're not going to get that from a prediction of sales volume, but you can from a real review, especially one written by someone who actually uses that type of product.
Importantly, the patent suggests the 'reviewers' would not necessarily receive the actual product being sold: "For example, if the item is a song, the store can deliver the entire song, a 60-second full quality sample, a reduced quality sample of the entire song, etc. If the item is an application, the store can deliver the entire application, a full-featured version of the application set to expire after 7 days, or a limited functionality version of the application, etc."
As a potential buyer, would you trust the opinion of someone who hasn't actually used the item you're considering? A fundamental rule of reviewing a product is that you test and report on the sample you receive. If a DVD lacks extras, you say so. If a factory-fresh piece of hardware is DOA, you mention the fact when reporting your experience with its replacement. If the audio quality of a piece of music isn't up to scratch, you bring that to your readers' attention.
Back in the day, I bought a David Bowie recording on vinyl and then returned it due to unacceptable noise in one portion. We played every copy in the store and they all had the same fault, but I held out for a replacement. When the next batch arrived, the problem had gone. I seem to recall similar and more recent cases where people reported relatively poor quality downloadable music files that were subsequently fixed by the vendor.
The assumption that a reviewer's ability to accurately predict future sales somehow reflects the quality or accuracy of their reviews is highly questionable. It might have some merit in situations where the primary criterion is "do I like this enough to buy it?", eg, music. But it has little value for relatively complex products where different subsets of functionality are of concern to different groups of buyers. A product might deserve to sell well, but that's no guarantee that it will.
To be fair, segmentation is accommodated - see page 3 of the application.
Another issue is that an individual can find value in reviews written by people they disagree with. For example, there used to be a film critic whose tastes were almost diametrically opposed to mine. If he bagged a movie, there was a pretty good chance that I'd enjoy it. But under the scheme covered by the patent application, if he wasn't very good at predicting the audience, his reviews would be buried: "The system presents in the electronic store received feedback from at least one individual whose predictive ranking coincides with the actual ranking of the item". So that doesn't help me.
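Taken literally, that quoted filtering rule could be sketched like this; the data shapes and the exact-match test are my own toy interpretation of the claim language, not the patent's specification:

```python
def visible_reviews(reviews, predicted_ranks, actual_rank):
    """Surface only feedback from reviewers whose predicted ranking of the
    item coincides with its actual sales ranking.

    reviews: dict mapping reviewer -> review text
    predicted_ranks: dict mapping reviewer -> rank they predicted for the item
    actual_rank: the rank the item actually achieved
    """
    return {name: text for name, text in reviews.items()
            if predicted_ranks.get(name) == actual_rank}

# Hypothetical example: the contrarian critic predicted the film would top
# the charts; it actually ranked 5th, so his review disappears.
shown = visible_reviews(
    {"contrarian": "Dreadful. Avoid.", "crowd_pleaser": "Loved it!"},
    {"contrarian": 1, "crowd_pleaser": 5},
    actual_rank=5,
)
```

Under this reading, the reliably-wrong critic whose verdicts I found genuinely useful is exactly the voice the store would hide.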
It seems to me that the method set out in this patent application would motivate reviewers to 'think average' rather than 'think different'.