Q&A with surveillance critic Albert Fox Cahn

Albert Fox Cahn is founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.), a non-profit organization that aims to ensure that technological advancements don’t come at the expense of civil rights. S.T.O.P. litigates and advocates for privacy, working to abolish governmental systems of mass surveillance in New York and beyond. 

We speak with Fox Cahn about whether surveillance keeps us safe, the dangers of law enforcement’s use of facial recognition, and the phone setting he uses to lessen tracking.

How did S.T.O.P. begin? 

S.T.O.P. began in 2019. I’d been working for several years as a civil-rights lawyer, and prior to that in corporate law, and I saw that even though we had this sort of logjam at the national level when it came to surveillance, we’d seen exponential growth in state and local surveillance technologies that were fundamentally reshaping the powers of the police and the relationship between the government and the governed.

I started S.T.O.P. with the desire to try to systematically dismantle the local surveillance apparatus here in New York City as a way to show proof of concept for a broader model of dismantling local surveillance all across the United States, and then more globally. The state and local anti-surveillance effort focused on an intersectional human-rights model where we acknowledged the racism and bias that’s at the heart of so much surveillance.

Some people would argue that surveillance keeps us safe. How would you respond to that?

There’s been this myth for so many years that somehow if we just have enough cameras, if we just have more tracking, if we just have more and more invasive surveillance, it will somehow keep us safe—but the evidence has never materialized. In London, we saw the “ring of steel” constructed to monitor nearly every part of that city, and it never actually resulted in lower crime rates. What it actually resulted in was a greater sense of being unsafe because suddenly these crimes were more easily documented and shown on the evening news. 

Here in the United States, we’ve seen boondoggle after boondoggle, with millions of dollars wasted on tools that supposedly put our safety first but oftentimes just put money in the pockets of vendors. We see tools like predictive policing, which use really crude algorithms to put forward a veneer of objectivity about how police are being deployed. But when you look under the hood at how these algorithmic tools operate, it is a very basic and flawed model that takes historical data and extrapolates it into a future of compounding bias and error.
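
To make that feedback loop concrete, here is a toy simulation in Python. All numbers and names are hypothetical and not drawn from any real predictive-policing product; it is only a sketch of the mechanism Fox Cahn describes: a model ranks neighborhoods by historical arrest counts, patrols go to the top-ranked area, those patrols generate new arrest records, and the next round of “predictions” trains on that skewed data.

```python
# Toy simulation of a predictive-policing feedback loop.
# All figures are hypothetical, chosen only to illustrate the mechanism.
past_arrests = {"neighborhood_a": 100, "neighborhood_b": 50}
TOTAL_PATROLS = 1000
ARRESTS_PER_PATROL = 0.1  # assumption: police presence itself produces records

for cycle in range(1, 4):
    # "Predictive" deployment: send everything to the top-ranked area.
    hotspot = max(past_arrests, key=past_arrests.get)
    past_arrests[hotspot] += TOTAL_PATROLS * ARRESTS_PER_PATROL
    print(f"cycle {cycle}: {past_arrests}")

# cycle 1: {'neighborhood_a': 200.0, 'neighborhood_b': 50}
# cycle 2: {'neighborhood_a': 300.0, 'neighborhood_b': 50}
# cycle 3: {'neighborhood_a': 400.0, 'neighborhood_b': 50}
# Even if the true underlying crime rates are identical, the initial
# disparity in records locks in and grows each cycle: historical bias
# extrapolated into "objective" future data.
```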

Here at the Surveillance Technology Oversight Project, we posit that if you asked people to choose between safety and values—no matter how much they treasure values like civil rights and equality—they’d opt for safety. What we focus on, however, are those tools where it’s a win-win. Tools that are costing us money, eroding our privacy, undermining civil rights, and at the same time failing to actually keep us safe are putting people in harm’s way and at risk of false arrest. These are the tools that really need to be dismantled.

What is the single policy change you are working on that you most want to see nationally?

I would love to see a ban on all government use of facial recognition across the United States. We’ve been pushing this here in New York City for quite some time, and continue to push it at both the city and state level. We feel this is long overdue, as facial recognition is the most controversial of these surveillance technologies, and numerous studies have documented its continued biases and failures.

Here in New York City, we’ve seen pseudoscientific approaches where photos are Photoshopped before being entered into a facial recognition database. If a suspect’s eyes are closed, they’ll be Photoshopped open. If their mouths are open, they’ll be Photoshopped closed. Sometimes a jawline, or another facial feature, will be added from a Google Image search just to get a face, a collage of sorts, something that the facial recognition algorithm can recognize as human.

At that point, you’re not engaging in scientific evaluation of what a supposed match is; you’re engaging in reckless endangerment of people’s lives. When this technology gets it wrong, it doesn’t just mean a risk of someone going to jail, nor does it just mean a risk of someone being ripped away from their family. It’s the risk of police violence, it’s a risk of someone having a knee to the neck, or a SWAT team at their door. These are the stakes when facial recognition gets it wrong, and these are the reasons why we need to outlaw facial recognition—not have a warrant requirement, not have some oversight process, not have a restriction; we need a clear, categorical ban.

S.T.O.P. has a particular focus on discriminatory surveillance. How do you define discriminatory surveillance? Who suffers most from it?

Discriminatory surveillance is any type of mass data collection or mass data analysis that puts a historically marginalized community at risk of harm. The most obvious example is facial recognition, a system where you have documented patterns of individuals of color, women, and LGBT individuals all being at higher risk of a false positive. In the case of law enforcement’s use of facial recognition, that means a heightened risk of false arrest. When we’re looking at the ways that data is being collected and used, we look at all the different failure points, both at a technical level and in the broader policy framework that determines whose data is being captured, how it’s being analyzed, and how that data is then being used.
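
Some quick back-of-the-envelope arithmetic shows why even small error-rate disparities matter at police-database scale. The gallery size and false-match rates below are hypothetical, chosen only to illustrate the mechanism, not measurements from any specific system or study:

```python
# Illustrative arithmetic: unequal false-match rates translate directly
# into unequal false-arrest risk. All numbers here are hypothetical.

GALLERY_SIZE = 1_000_000  # assumed number of photos searched per probe

# Hypothetical per-comparison false-match rates for two demographic groups.
false_match_rate = {"group_a": 0.0001, "group_b": 0.001}

for group, rate in false_match_rate.items():
    expected_false_hits = GALLERY_SIZE * rate
    print(f"{group}: ~{expected_false_hits:.0f} innocent candidates flagged per search")

# group_a: ~100 innocent candidates flagged per search
# group_b: ~1000 innocent candidates flagged per search
# A tenfold gap in error rates becomes a tenfold gap in how many innocent
# people are surfaced to investigators as "matches."
```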

How has Covid-19 impacted your work?

Very early on in the pandemic, we pivoted our focus to novel privacy impacts stemming from Covid-19. There are so many ways that this has changed how our communities are tracked. It has impacted the way we do contact tracing, and we’ve done quite a bit of work around exposure notification systems. We looked at the ways that these platforms can create real privacy concerns and at the broader equity implications, especially here in the United States, where at least one in five people lack access to a smartphone. Many of those who do have a smartphone have one that’s so old and limited that it can’t run these exposure notification systems.

We pushed back on the techno-solutionist mindset that assumes that just because you can build an app, the app will work and make things better. We’re asking: What happens when the app fails? We’ve been looking at the ways these exposure notification systems have ended up failing in the United States, and the ways they’ve proven to be a really dangerous distraction from evidence-based public-health measures like traditional contact tracing, social distancing, and mask wearing—all the things that public-health authorities keep calling for. One thing we have said again and again over the last year during this pandemic is that we shouldn’t have Silicon Valley setting the public-health agenda when it comes to our response to Covid-19. We should have public-health officials doing it.

What immediate and long-term concerns do you have with AI and surveillance?

AI creates a lot of multiplier effects when it comes to surveillance. When you have a CCTV camera, it takes a human being staring at the footage for hours just to get through a day’s worth. When you’re talking about AI, suddenly you have the ability to have hundreds of cameras scanned in real time. This means automated license-plate readers can create a real-time map of where people are across the city and when they’re driving; it means the ability to analyze geolocation data; and it means a layer of predictive analytics on top of the mere aggregation of location data.

All of these analytics can then be extrapolated from our movements, patterns, and decisions to ask the question: Is this person a threat? That’s such a powerful decision to make, and when the tools get it wrong, which they almost inevitably do, we see the same over-policed communities being the ones that suffer time and again.
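
As a concrete illustration of that multiplier effect, here is a minimal Python sketch. The reads, field names, and locations are invented for illustration; it simply shows how a few lines of code can turn scattered license-plate camera reads into a time-ordered movement history for every vehicle, with no human review required:

```python
# Minimal sketch: aggregating ALPR camera reads into per-vehicle timelines.
# Data and field names are hypothetical.
from collections import defaultdict
from datetime import datetime

# Each read: (plate, camera location, timestamp), as an ALPR network might log it.
reads = [
    ("ABC1234", "5th Ave & E 23rd St", "2021-03-01T08:02:00"),
    ("XYZ9876", "Flatbush Ave & Atlantic Ave", "2021-03-01T08:05:00"),
    ("ABC1234", "Canal St & Bowery", "2021-03-01T08:31:00"),
    ("ABC1234", "Brooklyn Bridge (Manhattan side)", "2021-03-01T08:47:00"),
]

# Group reads by plate to build a movement timeline per vehicle.
timelines = defaultdict(list)
for plate, location, ts in reads:
    timelines[plate].append((datetime.fromisoformat(ts), location))

# A complete, time-ordered movement history for every plate in the feed.
for plate, sightings in timelines.items():
    for ts, location in sorted(sightings):
        print(f"{plate}  {ts:%H:%M}  {location}")
```

The point of the sketch is that the aggregation step is trivial; once the cameras exist, the real-time map of everyone’s movements comes nearly for free.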

What can the average person do to keep themselves informed/safe?

There’s so much we can do to protect ourselves from tracking. Our cell phones are the most powerful tool in our lives for tracking us, and the choices we make reflect the sort of preferences and trade-offs we have about our safety and privacy. 

I personally make choices that many of my clients can’t because I am privileged, because I’m an attorney, because I’m white, and because I’m a U.S. citizen. It gives me latitude that many don’t have: to post on social media, to have public comments, to have my location tracked by a lot of these commercial services. Because any time our location is tracked by a commercial entity, it’s just one legal request away from being tracked by the government.

We can limit the amount that we are tracked on our phones by keeping our phones off when we’re not using them, putting them in airplane mode, or even putting them in a Faraday cage. There are many ways you can limit how your phone is being tracked. Similarly, you can limit the ways that your face is tracked through facial recognition by wearing a mask, wearing another sort of covering—obviously we should all be doing this during the pandemic as a way to preserve our health, but it can also be a way to preserve our privacy. 

That said, none of these systems are foolproof, and it shouldn’t come down to us as individuals to protect ourselves against this sort of invasive spying, because that’s always going to be an outcome that privileges those of us with the time and resources to invest in these protections. Conversely, those who are just trying to make rent, who are just trying to get by, who are desperately trying to eke out a living, don’t have the time to spend hours evaluating their cybersecurity and privacy risks. They’re the ones who are going to suffer and need the most protection. That’s why, on top of any steps we take to protect our own privacy, we have to protect our communities by passing new legal requirements, by taking steps to dismantle these police surveillance systems, and by pressuring corporations to stop the business models that enable these types of rampant government dragnets.

Read more: Interview: Encryption expert Riana Pfefferkorn on the erosion of online free speech