Your devices are rating you. Behave accordingly.

This post was originally published on August 15, 2019.

Chances are you’ve seen “Nosedive,” the famous Black Mirror episode depicting a world in which ordinary people rate each other on the basis of their social interactions. On the show, which takes aim at the pervasive role of technology in our lives and paints a dystopian picture of the future, these ratings feed into an algorithm that computes an individual social credit score, which in turn determines socioeconomic status.

In this fictional world, having a low score prevents you from going about your everyday life. You can’t rent a car, book a hotel, or pay for flights.

But this story hits close to home for those subject to China’s evolving social credit system. Modeled on the Western system of financial credit scores, the Chinese version works in a similar fashion, except that it rewards good behavior (and punishes bad) well beyond timely debt repayment.

It’s easy to question whether a state ought to wield such influence over the behavior of its citizens. But that would be missing a key point. What makes this social credit model possible in the first place is the proliferation of facial scanners, digital devices, machine-learning algorithms, and big-data models. These technologies are here to stay, and they are growing more capable over time.

Surveillance is a dark and mysterious force

Technology can fundamentally transform our social contracts and could even lead to the end of the nation state in its current form.

If you’re reading this, chances are you’re not a fan of surveillance. You value your privacy and you’d rather the government not know what you’re doing 24/7. But that’s because we have the choice (some of us, at least) to live in a world where our rights, to some degree, are respected.

How would our behavior change if we lived inside a panopticon? Conceived by British philosopher Jeremy Bentham, the panopticon is a building with an observation tower at the center of a ring of prison cells.

The panopticon is designed to drive home the perception that your actions could be monitored at any time. Every prisoner can see the central observation tower from their cell, but none can reliably tell whether a guard is watching them at any given moment. The only safe assumption is that someone always is.

Bentham believed that power should be visible yet dark and mysterious, and he argued that an individual’s relationship with society depended on it: Under constant surveillance, we would be forced to conform to societal norms of ethics, morality, and attitudes toward work.

Social credit is already here

Bentham died in 1832, but his work and ideas live on. And while the panopticon never took off as a physical model of behavior enforcement, the underlying idea survives. Only the tools are different.

Take Uber ratings, for example. The company confirmed in 2019 that it would start banning riders with low ratings. Drivers with low ratings face similar repercussions: They can’t drive for Uber Black in some states, and they could receive fewer ride requests.

The message here is simple and direct: Be polite and friendly, or get cut off. If that’s not behavior control, I don’t know what is.

And this is far from the only manifestation of technology altering our lifestyle choices. Insurance companies want your Fitbit data so they can check how healthy you are and, potentially, adjust your premiums. Smart toothbrushes are beaming data back to your dental provider, so if you don’t brush your teeth often enough, get ready to cough up more cash.

We’ve come to rely on these models, too. Would you ever buy a product from an eBay seller with a low rating? How often do you read reviews online before trying a new service?

Of course, each of these examples reflects only a narrow slice of monitoring, not the broad sweep of a government-run social credit system. But this seeming restraint exists, for the most part, because technology companies have been operating within their limited scopes. If all these data points were welded together, the result could fuel a similarly sweeping monitoring system.

It’s a moral quandary. Our addiction to technology and the data points we’re willing to give up make algorithms smarter and tech companies richer. While it’s easy to complain about a lack of privacy, the fact is that technology will only serve us better when it knows more about us. It’s a virtuous or vicious cycle, depending on how you look at it.

There’s a term for it too: surveillance capitalism, coined by Shoshana Zuboff of Harvard Business School.

Is privacy an inevitable casualty of technology?

As social credit systems proliferate throughout society, what’s the use of the state? We assume that centralized institutions are essential to uphold liberty and guarantee fundamental rights. But are they? If we know that the devices in our pockets can record our everyday actions, wouldn’t that knowledge alone be enough to change our behavior?

Is anarchism the next major social movement? It’s certainly possible, according to Professor Andreas Wittel, who argues that “digital technologies might open up new possibilities for large-scale forms of anarchist organization.”

Many of us are aware of the privacy risks associated with technology. Yet we ask Alexa to buy us stuff and Siri to check driving routes. We play our music on Spotify, order our food through Uber Eats, and split our checks via Venmo.

Technology is here to stay, and privacy concerns won’t wipe it off the map. The question we should grapple with is whether it will continue to serve us, or whether it will eventually be the other way around.