Crowdsourcing, the idea of slapping together an algorithm that quietly works alone in the background on a large corpus of data to generate recommendations, most-popular lists, and the like, is slowly dying. I pointed this out in my 2018 Predictions post, but didn’t offer a concise explanation of why it would happen. As I’ve watched the year progress and added more examples to that predictions post, I’ve come up with a fairly simple model.

The Model 1.0

  • Computers always beat humans.

The first thing to understand is that on any constrained problem, a computer can be built that will eventually outperform any human. You can see this in constrained domains like chess and Go, where human capabilities have been totally surpassed by computer players. This is where your recommendation-algorithm author enters the scene and builds a recommendation engine that they know will be better than anything a human can do. And they may be right. Just because a recommendation comes from an algorithm doesn’t necessarily mean it’s better, but from our 1.0 model you can see that if you put enough time into building that algorithm, then on that constrained problem space you will eventually be able to beat all humans at the task of generating recommendations. And that, unfortunately, is where most companies stopped years ago.
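To make this concrete, here is a minimal sketch of the kind of hands-off recommender the model describes: count interactions in a corpus and surface the most popular items, with no human in the loop. All names and data here are invented for illustration, not any particular company’s system.

```python
from collections import Counter

def most_popular(interactions, n=3):
    """Rank items by raw interaction count -- a hands-off
    'crowdsourced' recommender with no human oversight."""
    counts = Counter(item for _, item in interactions)
    return [item for item, _ in counts.most_common(n)]

# Hypothetical (user, item) interaction log
log = [("alice", "book-a"), ("bob", "book-a"),
       ("carol", "book-b"), ("dave", "book-c"),
       ("erin", "book-a"), ("frank", "book-b")]

print(most_popular(log))  # book-a leads with 3 interactions
```

Quietly chugging along on real traffic, something like this is exactly the set-it-and-forget-it design the rest of the post picks apart.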

But our model is incomplete, first because it doesn’t recognize that the algorithm’s output has value, and I’m not even talking about monetary value; I mean the power to influence people. The next news article to read or the next book to buy has a lot of power to bring fringe ideas into the mainstream. Just think about the resurgence of flat earthers, or, more sinister, that of white nationalists. Being able to game the algorithm to get your fringe idea bumped up in the recommendation engine is a huge win for these groups. Gaming those algorithms is the sole focus of Russia’s Internet Research Agency.

But didn’t I just say that algorithms can always be built to beat the humans? Isn’t this a contradiction? It does contradict our 1.0 model, but only because we’ve left something out.

The Model 2.0

  • Computers always beat humans.
  • Computers plus humans always beat computers.

You can see this in particular in the realm of chess, where a human and a computer playing as a team can beat a computer playing alone.

Now suddenly, launching a recommendation algorithm to silently chug along and produce recommendations looks like a fool’s errand, as there will always be groups with an incentive, monetary or otherwise, to use humans and computers together to subvert your algorithm. See, for example, Russia’s Internet Research Agency and its computer-generated troll army. This Reply All podcast has another great example of gaming rankings.
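As a toy illustration of that subversion (all account names and data here are invented), a handful of coordinated sockpuppet accounts is enough to push a fringe item to the top of a naive popularity ranking:

```python
from collections import Counter

def top_item(interactions):
    """Naive popularity ranking: the most-interacted item wins."""
    counts = Counter(item for _, item in interactions)
    return counts.most_common(1)[0][0]

# A small pool of organic users...
organic = [("u1", "mainstream"), ("u2", "mainstream"), ("u3", "fringe")]
# ...versus a troll farm scripting ten fake accounts voting in lockstep
trolls = [(f"bot{i}", "fringe") for i in range(10)]

print(top_item(organic))           # mainstream
print(top_item(organic + trolls))  # fringe
```

The attack costs almost nothing: a human picks the target, a script supplies the volume, and the unattended algorithm faithfully amplifies it.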

So every time you create a recommendation engine, you’ve basically signed up for a battle that requires you to continuously update your algorithm and continuously invest engineering resources to keep yourself ahead of the scammers, spammers, and computer-generated troll armies. Suddenly it doesn’t seem like such a great idea, does it?
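A hypothetical first round of that battle might down-weight votes from accounts the engine doesn’t yet trust (the trust scores and thresholds below are invented for the sketch), knowing full well the other side will adapt:

```python
from collections import Counter

def top_item_weighted(interactions, trust):
    """Popularity ranking where each vote is scaled by an
    account-trust score in [0, 1] -- one round of defense."""
    scores = Counter()
    for user, item in interactions:
        scores[item] += trust.get(user, 0.1)  # unknown accounts count for little
    return scores.most_common(1)[0][0]

votes = [("u1", "mainstream"), ("u2", "mainstream")] + \
        [(f"bot{i}", "fringe") for i in range(10)]
trust = {"u1": 1.0, "u2": 1.0}  # the ten bots default to 0.1 each

print(top_item_weighted(votes, trust))  # mainstream: 2.0 outweighs 10 x 0.1
```

Of course, the next move belongs to the attackers: aged or hijacked accounts that carry real trust scores. Which is exactly why the engineering investment never ends.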