Q&A: Josh Chin and Liza Lin on How China Created a 21st Century Surveillance State
A new book details how much of China's authoritarian technology was made in America
Josh Chin and Liza Lin cover China for the Wall Street Journal. Chin is deputy bureau chief in China based in Seoul and Taiwan, and Lin is a China correspondent based in Singapore.
Together, they are the authors of a new book, Surveillance State: Inside China’s Quest to Launch a New Era of Social Control. They document how China’s Communist Party is striving for a political model that shapes the will of the people not through the ballot box but through the sophisticated—and often brutal—harnessing of data. Chin and Lin report on the Party’s ambitious push, aided by American technology and capital, to engineer a new society around the power of digital surveillance. They also report on how some of the technologies used in China have been adopted by American law enforcement, and on how Chinese surveillance technology was used to try to control the COVID-19 pandemic.
I spoke with Chin and Lin last week over Zoom. Our conversation follows, condensed and edited for clarity.
Luke Johnson: I think many people know that China has a surveillance state and the most public parts of it, like internet censorship. But for clarity, can you explain what China's surveillance state is and what you think are the main components of it?
Josh Chin: In its broadest sense, the Chinese surveillance state is an effort by China's Communist Party to use data and AI to create a new, more responsive form of authoritarianism. The idea is that you can use mass digital surveillance -- cameras, microphones, social media data, mobile payment data, and behavioral information -- to detect threats and problems quickly, maybe predict them before they even arise, and optimize the way society functions. It's one gigantic social engineering project.
LJ: States often implement surveillance because they fear social unrest or political unrest. What do you think the Communist Party is concerned about to implement this system?
JC: The Communist Party wants to stay in power -- like any political party. The way it does that is by maintaining social stability. It needs to give Chinese people a stable life where they feel that they can peacefully improve their lot. The way that China has done it for the last couple decades is with immense economic growth. Even if [people] weren't getting rich, they felt they might get rich in the future. For the most part, people bought into that ideal. Recently, that growth has begun to slow. Under the pandemic, it's almost zero. The Communist Party needed some other way to keep society stable, and digital surveillance allows them to do that, at least theoretically. It allows them to detect threats, [such as] dissidents or groups that might try to challenge the party. It also allows them to head off problems that might cause public uproar. It allows them to make life smoother and more convenient for people. If you were in China 15 or 20 years ago, [for] something as simple as paying your electricity bill, you had to stand in line at a bank for an hour and a half. Everything was really inconvenient. Now, you do everything on your smartphone. You can pay bills, book travel, invest, book doctor's appointments, all on your phone.
LJ: What are some of the major differences between the 20th century surveillance states in the Eastern Bloc and China's?
JC: I think at the most basic level, they're extremely similar. But they're very different in terms of scale and capability. All state surveillance efforts are utopian projects. They're all driven by this belief that societies can be engineered to fit an ideal if only you have enough information. Hannah Arendt had an observation about rudimentary surveillance during the Russian Empire. The secret police would take anyone who they thought was a person of interest, write their name on a card, and draw a red circle around it. Then they would make smaller circles for all of their political allies and friends, and other smaller green circles for people who were non-political acquaintances. They would draw connections between all those people. Arendt says that the utility of the system was limited only by the size of the piece of paper.
Once you get into the 20th century, that starts to expand. In East Germany, the Stasi collected massive amounts of paper on people -- dossiers that, laid end to end, would stretch more than 100 miles. At the turn of the 21st century, U.S. post-9/11 surveillance got rid of paper entirely.
Now, China is the next evolution beyond that. If you could put the amount of data that China is able to collect in folders, it's hard to even imagine how you would calculate how far that would stretch. It now has analytical tools that make it possible to sift through that information. People in East Germany trying to find information had to go through miles' worth of dossiers; China now has cameras that can scan a face and match it against a database of hundreds of thousands of people in a second.
LJ: You write:
Since the end of the Cold War, the debate about state surveillance has focused largely on what limits should be placed around it…Under Chinese leader Xi Jinping, the Communist Party has flipped the discussion on its head. The question Xi now poses is: What public goals can a government accomplish when given maximum access to private data?
Can you explain what you found they could accomplish?
Liza Lin: One thing that we have to highlight is a technological breakthrough. This breakthrough is called deep learning. It's a form of artificial intelligence that forms the backbone of the surveillance state in China right now. Between 2009 and 2010, there was a big revelation that you could train a machine to think like a human and to analyze things like a human. The researcher Andrew Ng discovered in 2010 that certain types of chips would allow deep learning to run a hundred times faster than before. This breakthrough enabled the commercialization of a lot of AI applications, such as facial recognition and image recognition, much of which is used by the Communist Party in China today for its surveillance state, in both nefarious and more benign ways.
In the course of our reporting, we found that China has used data collection to do a host of things. On the more sinister end, in Xinjiang, the Chinese police are collecting iris scans, faces, DNA, all for the express purpose of being able to identify so-called terrorists before a person even knows he's a terrorist. On a more benign level, we've also traveled to cities on the eastern coast of China such as Guangzhou, where the same sorts of systems enable the municipal government to help traffic run more smoothly and ease traffic jams in a city whose infrastructure is decades old. In China, the government uses AI and cameras to spot fires and discover waste or rubbish piled up on the streets.
LJ: One thread is that it [surveillance] seems to try to predict people's behavior. How successful is it at that?
JC: It's essentially the same technology that almost everyone in the world is familiar with because of the tech giants. When Amazon is feeding you product recommendations or Instagram is feeding you ads, that's all based on behavioral data. They're trying to predict what you will buy or be interested in.
In China, they're using it for much more significant purposes. There's a lot of doubt about how effective it actually is. In Xinjiang, the government is using it to predict future religious extremists, people who might launch suicide attacks. When you talk to data and counterterrorism experts, they say it is almost impossible -- no matter how much data you have -- to predict who's going to become a terrorist, because there aren't that many terrorists out there. The actual types of data they are collecting in order to make predictions are merely adjacent to terrorism: religious beliefs, gasoline purchases, electricity use -- things that may have something to do with terrorism, but may be completely divorced from it.
On the other hand, in terms of predicting the way people move through a city, or the likelihood that someone is going to need a certain type of medical care, those predictions are much more within the capacity of these systems. It's unclear how much the Communist Party is distinguishing between failure and success at this point. They seem to be rolling things out as quickly as they can and they'll just see what works and what doesn't.
LJ: You write that many of the concepts and tools that the Communist Party uses to spy on its citizens were first created in the West. Were there any examples that stuck out to you of that?
LL: One of the biggest findings of the book was how much Western companies have aided the development of China's surveillance state from the very beginning. China was hosting public security expos around 2000-2001, and Western companies were very eager to sell their technology to China. A researcher called Greg Walton attended some of these big security expos and found Silicon Valley names -- Cisco, Sun Microsystems, Siemens, Nortel Networks. All these companies were eager to sell their technology to the Chinese surveillance state. Sun Microsystems actually provided the technology for the first national fingerprint database for Chinese police. This situation has not changed across the years. If you look at the current AI software and hardware companies in China, a lot of the software companies -- the people who are actually providing the AI for facial recognition to Chinese police -- are using chips from Nvidia. They're using Intel. One of China's biggest AI companies got funding from investors such as Silver Lake, a big private-equity fund in the U.S., and also got backing from Qualcomm, a big chip provider. To this day, there's still a lot of involvement by Western companies in the Chinese surveillance state, both in funding and through supply chains.
JC: Most of the technologies were invented in the U.S., or at least significantly developed in the western U.S., particularly in California. There's facial recognition. Very early on, Facebook was using facial recognition to identify your friends in your photos. Google did the same thing, where it would try to categorize them for you and add tags. Also, predictive policing -- the idea that you can take data and use it to predict where new crimes are going to happen. That was being used in Xinjiang to collect data on religious minorities and analyze their behavior. It is based on a concept that was developed by the American military when it was fighting the War on Terror. Suddenly, intelligence became really important and the military needed to process lots of information. The military developed these data platforms that could suck in information from lots of different areas and help them plan military operations.
LJ: While the United States and Europe have the rule of law and, at least on paper, limits to how surveillance can be used, you write about an incident where facial recognition was used to prosecute a man for stealing a pair of socks from a T.J. Maxx in the Bronx. Can you explain that case and why it's concerning?
JC: This was a case of a man who lived in the Bronx and was arrested one day for stealing a pair of socks from T.J. Maxx. When the public defender looked into the case, she discovered it was based purely on the testimony of a single security guard. She thought that was weird, because it's unusual for the D.A. to bring a case on such flimsy evidence. When she dug into it more, she discovered the NYPD had taken a video still from the T.J. Maxx surveillance system, put it into a facial recognition system, and somehow the system had spit out her client's name. They took the result to the security guard and asked him if this was the guy, and the security guard said yes.
What was strange about that is that the guy was in the hospital, waiting for his child to be born. It was technically possible -- because the child was born after the sock theft -- that he left the hospital at some point to steal the socks and came back, but it's really unlikely.
The public defender thought this was all really suspicious, dug into it, and tried to get the NYPD to explain how the facial recognition system had come up with her client's face. They refused. This happens across the United States countless times. Police departments use facial recognition, but they don't use the result as evidence. They say it's an investigative tool. What that does is shield facial recognition technology from scrutiny in the courts. If they were to use it as evidence, they would have to submit the technology to a rigorous set of tests to determine how accurate and fair it is. It probably wouldn't pass those tests. That's why you almost never see facial recognition showing up as evidence in U.S. courts.
LJ: So the concern is that police departments could use this technology to potentially frame someone innocent of a crime?
JC: I don't know about framing; it's hard to say what their motivations are. I think the general feeling for police is to give them the benefit of the doubt. They want to get things right. But they also have a lot of work to do, and facial recognition offers a shortcut; it's a nifty tool, and it's a computer. If you're a cop and you're looking for someone, that just makes your job really easy. Police are not incentivized to question the technology.
When you dig into the technology, you find that its performance varies. In general, it doesn't do very well in less-than-ideal conditions. The lighting has to be good, the angle has to be good, and it also tends to do really poorly with people of color. It leads to a lot of misidentifications. The issue with police using it is that the technology never gets scrutinized. They can use it however they want, and there's low visibility into what they're actually doing with the information.
LJ: Is there a particular technology or use of technology that is on the rise and people should look out for?
LL: I think China's model of using surveillance in governance is something that we're likely to see on the rise. We're already seeing a lot of countries outside of China adopt the smart city or safe city approach that China is using. Data from networked cameras is fed into a back end, where AI analytics is used to help make streets cleaner and safer.
In 2020, Sheena Greitens of the University of Texas found that 80 countries were already using some form of China's surveillance model, be it in smart city or safe city systems. I think that number is only going to rise. Countries are realizing that there's a lot you can do with computing power that you cannot do with manpower. There are only so many policemen you can put on the street, but it's so much easier to put a hundred networked cameras in a city, monitoring a space 24/7 -- something humans cannot do.