Omid Memarian, a journalist, analyst and recipient of Human Rights Watch's Human Rights Defender Award, is the Director of Communications at DAWN.
The newest technologies, apps and online platforms are typically welcomed with open arms—often too open, according to Arthur Holland Michel. "Every month, there is a new surveillance device coming out that has a new way of watching, listening or tracking us," he warns.
A senior fellow at the Carnegie Council for Ethics in International Affairs and an associate researcher at the United Nations Institute for Disarmament Research, Michel studies both the use and abuse of surveillance technology and the military implications of artificial intelligence, including drones and other autonomous weapons.
Michel is currently conducting a research project on military applications of artificial intelligence for the International Committee of the Red Cross. He was previously co-director of the Center for the Study of the Drone, a research institute he founded as a student at Bard College in upstate New York in 2012. The author of Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All, about advanced aerial surveillance technology, Michel has also written about drones, surveillance, artificial intelligence, robots and more for Wired, Foreign Policy, The Atlantic, Fast Company and Vice, among other outlets.
"There is still very poor public literacy about online surveillance and its implications for the privacy of every one of us," Michel says. But he still has reasons to be hopeful. "In the last couple of years, people have started to take their digital privacy more seriously."
That includes political activists, who often have the most at stake, especially under authoritarian regimes that lack accountability and any respect or protections for privacy rights.
In an interview with Democracy in Exile conducted earlier this month in Miami, during the Oslo Freedom Forum, Michel explained that the first thing activists can do to protect themselves is to read up on digital privacy and surveillance issues. "There is a surprising amount of information about surveillance technologies available. Keeping up with new information on surveillance technologies will serve as a strong foundation for you to understand how you are, or are not, being watched. It will give you a sense of things you should or shouldn't do." These are lessons for everyone to protect themselves in an increasingly surveilled world.
The following transcript has been edited lightly for clarity and length.
What is so dangerous about surveillance technologies?
One of the first things you hear, especially from the companies that make these technologies, is that surveillance can be used for positive applications—to find children who have been kidnapped or to stop terrorist attacks before they happen, for example. That may all certainly be true, but, to my knowledge, there has not been a single surveillance technology in history that has not also been abused. These technologies do not enter a neutral framework when they come into the world. If they are effective in doing what they do—which is, in a way, creating forms of power—they will inevitably be used in ways that increase those disparities. They will be used in ways that are profoundly unfair; they will be used in ways that foreclose on basic rights. They will, in many cases, be used upon the most vulnerable people in a population. That is the common fundamental threat across all surveillance technology.
One particular technology that I've been studying over the last few years, which is called surveillance fusion, has an added level of danger. We all generate data just by existing in the modern world. Surveillance fusion brings together all of those disparate data feeds and data sets to reveal our movements, our associations and networks, our history and even, if there's a predictive element to the system, what we might do next. Given how accessible surveillance fusion is, and given that, under most regulatory regimes, it does not require a search warrant or a court order, this technology feels particularly significant and dangerous.
"These technologies do not enter a neutral framework when they come into the world. If they are effective in doing what they do—which is, in a way, creating forms of power—they will inevitably be used in ways that increase those disparities."
- Arthur Holland Michel
How has the development of surveillance fusion technology progressed over the past 10 to 15 years?
When it comes to the development of surveillance technology, the concept of surveillance fusion is not actually new. Investigators have applied the principle of fusion for as long as there have been forms of digital surveillance, or even analog surveillance. It's a little bit like a puzzle: one informant gives you one piece of information, you tap your target's phone and get another piece, then maybe you follow them late at night and get another. Then you can put all of the pieces you've gathered together—
Like what Carrie Mathison did in "Homeland" by pinning photos and maps on a cork board and connecting them with string?
—Exactly. You can put those pieces together and create a narrative. However, instead of having a person who has the cork board and has to draw that line from one picture to another and piece things together manually, the idea is to have algorithms that can do that automatically. You have a recording of a person speaking on their phone, you have information about their location, and when you bring those together, you know where this person is having this conversation.
Algorithms for surveillance fusion have become more powerful by getting better at correlating disparate data points. This is due, in part, to developments in digital-marketing algorithms, which do the same thing: correlate data points. For example, one company has your location data, another company has your history of what you buy on Amazon, an algorithm correlates those two things and figures out that you are "X" type of person living in "Y" type of place, and ergo, "You're probably going to want to buy this product next."
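To make that correlation step concrete, here is a minimal illustrative sketch in Python. The device identifier, data feeds and inferred profile are all invented for the example, not drawn from any real fusion system Michel describes; the point is only that two unremarkable data sets become a profile once they are joined on a shared key.

```python
# A toy illustration of data fusion: join two unrelated feeds on a shared
# identifier and infer a crude profile. All names and fields are invented.

location_pings = {
    "device_123": {"lat": 40.73, "lon": -73.99, "neighborhood": "Greenwich Village"},
}

purchase_history = {
    "device_123": ["running shoes", "protein powder", "fitness tracker"],
}

def fuse(device_id):
    """Correlate the two feeds for one device and summarize what they imply."""
    location = location_pings.get(device_id)
    purchases = purchase_history.get(device_id, [])
    if location is None:
        return None
    return {
        "device": device_id,
        "lives_near": location["neighborhood"],      # "Y type of place"
        "interests": purchases,                      # "X type of person"
        "likely_next_purchase": purchases[-1] if purchases else None,
    }

print(fuse("device_123"))
```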
The other reason this technology has developed in recent years is the growing availability and exploding diversity of data. Every month there seems to be a new surveillance device coming out that has a new way of watching, listening or tracking us. With each new system that can be plugged in, these fusion complexes get more powerful.
Given the very personal nature of surveillance technology, it seems that access to it without a court order or any regulations in place opens the door to those who would abuse it. Has the protection of personal identity and privacy progressed at a similar rate to the surveillance technologies themselves?
It's been lagging. And I'll tell you why. Part of the power of fusion is that, in isolation, any one of these single data points is not very meaningful or intrusive, because it doesn't tell the authorities a lot about you. These data points only become meaningful and intrusive when they are correlated together. When you are detected in a particular location, that data point is just some GPS coordinates, which, on its own, has no significance. But if they can correlate that data point with the names of all the people who live in the surrounding area, and then correlate that network to find out if you're connected to any of those people, then suddenly they have a much more significant piece of information, which is that you visited the house of so-and-so.
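The escalation described above, from a lone GPS fix to "you visited the house of so-and-so," can be sketched in a few lines. The coordinates, residents database, social graph and proximity threshold below are all invented for illustration; they stand in for the kinds of data sets a fusion system would correlate.

```python
# A toy illustration: a raw GPS fix means little on its own, but correlated
# with a residents database and a known social graph it becomes a visit to a
# named associate. All data and thresholds are invented.
import math

residents = [
    {"name": "So-and-so", "lat": 40.7310, "lon": -73.9970},
    {"name": "Someone else", "lat": 40.7500, "lon": -73.9800},
]

known_associates = {"So-and-so"}  # the target's known network

def distance_km(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def interpret_ping(lat, lon, radius_km=0.1):
    """Correlate one GPS fix with nearby residents and the target's network."""
    nearby = [r["name"] for r in residents
              if distance_km(lat, lon, r["lat"], r["lon"]) <= radius_km]
    return {"nearby_residents": nearby,
            "visited_associates": [n for n in nearby if n in known_associates]}

print(interpret_ping(40.7311, -73.9969))  # flags a visit to So-and-so
```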
Of course, it would be impossible to regulate individually insignificant data points. The reason the law has not caught up is that it's not about regulating the data points; it's about regulating the software that does the correlations between them. I'm hoping that this becomes a bigger part of the discussion. It feels very important, especially given how much new data is coming online all the time.
"A Facebook profile is actually a fusion interface. It has your photos, your history, your network associations, your location data. People are increasingly observant as to how much you can glean from a person when you put these pieces together and they realize the implications in a very tangible way."
- Arthur Holland Michel
Have there been any differences in the ways governments use these technologies with regard to privacy rights?
Even in open democracies, these technologies don't generally require any regulatory approval. As a result, any police station can buy one of these systems, plug it all in and get going. Unlike wiretaps and other surveillance methods that require a court order, these systems face little oversight.
One of the challenges in researching and writing about this issue is that there is a lot of secrecy and not a lot of transparency about these systems. In authoritarian regimes, we have even less information. The companies that sell this software, often Western companies, do not disclose their lists of customers. That has been an uphill battle. I've reached out to many of them, and they are not cooperative. So, we have to read between the lines a little bit.
What I can say with certainty is that the most important difference between how this software is used in more open democracies and in authoritarian regimes is that, in authoritarian regimes, it is likely being fed even more intrusive data than it would be in a more open and regulated place. In a country where you don't need a warrant to tap someone's phone or intercept their cell phone messages, that information becomes a data set that can just automatically be plugged into surveillance fusion. Information from interrogation transcripts can be plugged in and correlated using some fairly rudimentary natural language processing. Therefore, people named in those transcripts could be correlated to location, and so forth.
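To give a rough sense of how even "fairly rudimentary natural language processing" can feed a fusion system, here is a minimal sketch; the transcript, names and location records are invented for illustration, and a real system would use far more sophisticated entity extraction than a capitalization pattern.

```python
# A toy illustration: pull candidate names out of a transcript and correlate
# them against a location database. Everything here is invented.
import re

transcript = "Subject stated that Farid and Leyla organized the meeting near the harbor."

location_records = {
    "Farid": "last seen: Harbor District, 21:40",
    "Leyla": "last seen: University Quarter, 18:05",
}

def extract_names(text):
    """Very crude: find capitalized words, keep only ones in the identity database."""
    candidates = re.findall(r"\b[A-Z][a-z]+\b", text)
    return [c for c in candidates if c in location_records]

def correlate(text):
    """Link every recognized name in the transcript to its location record."""
    return {name: location_records[name] for name in extract_names(text)}

print(correlate(transcript))
```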
During the Gezi Park protests, there was a police chief in Turkey who was interested in buying one of these systems from IBM to try to correlate images of protesters at demonstrations with government databases of people's tattoos and physical markings. Now, to my knowledge, a lot of law enforcement authorities in more regulated and open societies would not have that type of database. That being said, a lot of cities in the U.S., for example, have gang databases with that kind of information, which are sometimes plugged into surveillance fusion. But I think the main difference is in the type of data that is available to these systems.
Companies will sell surveillance technologies, sometimes through intermediary or "shell" companies, to governments that we know use them to suppress civil society. Have there been any efforts to hold these companies responsible?
There has actually been some work done in this space. For example, there has been quite a lot of attention on Western companies implicated in Chinese surveillance infrastructures, especially as part of the crackdown on the Uyghur population in western China. There has been quite a lot of attention on the sale of digital surveillance software, not necessarily fusion, to countries like the United Arab Emirates. A few years ago, the BBC did some in-depth reporting on the business relationship between a British company and the government of the UAE. That is really difficult investigative work, because these companies are so locked down and sometimes operate through intermediaries and shell companies. Secrecy is simply their way of doing business. In a way, it is the only way of doing business in the surveillance technology space. But I am optimistic. People are becoming more aware that these technologies can very easily end up in the wrong hands. They are seeing that there are somewhat traceable paths by which that happens and that, with some scrutiny, you can do something about it. So I'm hopeful on that front.
However, what I can say is that we're at the beginning of a journey. It's very complicated, because a lot of these systems, particularly surveillance fusion, are not very proprietary in nature. Stopping one company from selling a technology to a particular government does not foreclose on the possibility of another company, from another country, selling something very similar to that same government. A lot of the algorithms at the heart of these systems are open-source in one way or another. A lot of the mathematics behind them is common knowledge. As a result, nobody has an exclusive license on this know-how. That makes it really hard and worrisome. It's not like nuclear submarines or jet fighters, where, if you stop a couple of companies from selling the stuff, then you've sort of put a lid on it. I think there does need to be more of a discussion about how to catch up to that reality.
It seems that people continue living their lives as normal while these technologies grow ever more capable. What should be done about the widening gap between that advancement and public awareness of its implications?
I'm going to partially push back on that. I think there is more of a public discussion about privacy than there was five years ago. Five years ago, people thought social media was amazing. People were tagging their friends in pictures, not knowing that it was going to be used to train these very powerful facial recognition systems. Then there was a bit of a sea change in the last couple of years where people started to take their digital privacy more seriously.
But I completely agree with you that there is still very poor public literacy about surveillance and its implications for the privacy of every one of us. I have a lot of discussions with people who are not in my research space, at cocktail parties and other social gatherings, and I generally get one of two responses. Some people say they had "no idea that these technologies existed" and are "shocked to learn about them." Inevitably they leave our conversation feeling super spooked, and chances are, I've ruined their night. The other response comes from people who say that "privacy doesn't exist anymore." Even if they don't quite grasp all the technical details, they just assume that this stuff exists, that they're being followed and listened to all the time, and that there's nothing they can do about it. Neither of those is a particularly healthy attitude to have. If either of those people had a little more literacy about what the technology is, how it works and who's using it, they would, one, have a better-informed sense of what is or isn't happening, and, two, see some pretty straightforward avenues to action.
A Facebook profile is actually a fusion interface. It has your photos, your history, your network associations, your location data. People are increasingly observant as to how much you can glean from a person when you put these pieces together and they realize the implications in a very tangible way. So, when I say that the discourse has changed, I think that awareness surrounding social media has been a major contributing factor. We're definitely not all the way there in terms of awareness, even on the social media front, but it did spark a lot of interest.
Arthur, are you on Facebook?
No, I'm not.
How should activists protect themselves from surveillance technologies?
The first thing that activists can do is to read up on the issues. That's where it all starts. There is a surprising amount of information about surveillance technologies available, thanks largely to a lot of very brave reporters on this beat. And there is more information every day. Keeping up with new information on surveillance technologies will serve as a strong foundation for you to understand how you are, or are not, being watched. It will give you a sense of things you should or shouldn't do.
If people are not also sharing that information with others in their network, they can create a point of vulnerability, because a lot of these systems can track you by association. Even if you're very, very careful, your kid might be going on websites without exercising that same caution and due diligence, and that's how they could get to you.
Because there are so many different ways that you can be watched, I'm hesitant to give blanket recommendations such as, "You should install X software on your phone to block tracking." In your case, it may not make a difference, or it may not be the main way you're being tracked, or whatever the case may be. This is always a cat-and-mouse game: a bunch of activists install an encrypted messaging app, and within a few months or a couple of years the government finds a way to get into it, and so the game goes on.
Lastly, with regard to digital surveillance, how do you envision a future in which awareness and regulation spread far more slowly than these technologies advance?
Well, Omid, for health and safety reasons I try not to think too much about the future because it can have some serious adverse effects. [Laughter] In all seriousness, I am optimistic. As I mentioned, there is more of a discourse about digital privacy now than there was five years ago, and that gives me a lot of hope. I think that the Wild Wild West era of early social media is coming to a close. With every passing day, there is a growing awareness of the ethical implications of AI and algorithmic decision-making. Because of their profound, and increasingly tangible, impact on our lives, these issues will become common knowledge. That can be a tremendously powerful thing. That's what gives me hope.
At the same time, there are real concerns about how the technology continues to evolve. Something that really worries me is how uncritical we can be of the new data sources that we create. We see that with the advent of new home security devices, for example. We are far too willing to take the benefits of new technologies, however marginal they may be, with very little consideration of what we are giving up in return. If not what we are personally giving up, then what our communities are giving up. All too often, we approach a new data collection device with wide open arms and only realize later, "Oh crap, this is problematic." If we were a little bit more forward-looking in evaluating that cost-benefit calculation, it would make a big difference. To be perfectly frank, it's the only way we are going to stay on top of this stuff, because the speed at which new data sources are coming online is only going to accelerate. I think we have it in us to stay ahead, but it's gonna take some work.
Sophie Holin contributed to this interview.