Cracking the Facebook Code

Recommendation agents play a minor, but increasingly important, role in our lives. Whether it’s Amazon’s book recommendations, Netflix, Pandora Radio, or Google Search, we rely on algorithms and computers to tell us what to watch, what to listen to, and how to get where we’re going. Anybody who’s a Facebook user knows that the best part of the site is the feed: seeing all your friends and what they’re up to. But we only see a fraction of our friends’ total activity. The “top news” feed is heavily edited for your pleasure. Tom Weber of the Daily Beast goes inside Facebook’s algorithm, running a variety of experiments to see how Facebook decides what we see.
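Facebook hasn’t published the formula Weber was probing, but a toy model makes the idea concrete. Below is a minimal sketch, in Python, of how a feed-ranking score might combine friend affinity, interaction type, and recency, then keep only the top-scoring stories. The story_score function, the interaction weights, and the 24-hour half-life are all invented for illustration; this is not Facebook’s actual code.

```python
import math
import time

# Hypothetical weights: a comment "counts" more than a plain status update.
INTERACTION_WEIGHTS = {"status": 1.0, "photo": 2.0, "comment": 3.0}

def story_score(affinity, story_type, posted_at, half_life_hours=24.0):
    """Score one story: affinity x type weight x exponential time decay."""
    age_hours = max(0.0, (time.time() - posted_at) / 3600.0)
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)
    return affinity * INTERACTION_WEIGHTS.get(story_type, 1.0) * decay

# Rank a handful of stories and keep only the top two: the cut that a
# "top news" style feed makes on our behalf.
now = time.time()
stories = {
    "close friend's photo": story_score(0.9, "photo", now - 1 * 3600),
    "acquaintance's status": story_score(0.2, "status", now - 0.1 * 3600),
    "old comment thread": story_score(0.7, "comment", now - 72 * 3600),
}
for name, score in sorted(stories.items(), key=lambda kv: kv[1], reverse=True)[:2]:
    print(f"{name}: {score:.3f}")
```

Even in a toy like this, a few opaque constants decide whose lives you see and whose quietly disappear, which is exactly what makes experiments like Weber’s worthwhile.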

It’s a fascinating read about how computers are managing, maybe even censoring, our social lives. As we rely more and more on computers to curate the torrent of information out there, we’ll become increasingly dependent on these confidential algorithms. Practical reverse-engineering efforts like Tom’s are necessary for us to manage these technologies. As Jaron Lanier put it in a very Prevail-ish op-ed last year:

“What all this comes down to is that the very idea of artificial intelligence gives us the cover to avoid accountability by pretending that machines can take on more and more human responsibility. This holds for things that we don’t even think of as artificial intelligence, like the recommendations made by Netflix and Pandora. Seeing movies and listening to music suggested to us by algorithms is relatively harmless, I suppose. But I hope that once in a while the users of those services resist the recommendations; our exposure to art shouldn’t be hemmed in by an algorithm that we merely want to believe predicts our tastes accurately. These algorithms do not represent emotion or meaning, only statistics and correlations.”

No, these algorithms don’t represent us, but relying on them is far safer if we have a better sense of how they work and the filters they impose.
