Behavioral and 'Ideological' Engineering: An Analysis of the Effects of AI-based Facial Recognition Technology on Uighur Cultural Identities and Internal Attitudes in China
Within the last decade, the Chinese authoritarian regime, in a stated effort to root out extremist and separatist threats to the nation, has escalated its persecution of the Uighurs, a Turkic Muslim minority predominantly residing in the northwest region of Xinjiang, officially known as the Xinjiang Uyghur Autonomous Region (XUAR) (Raza 488). In particular, the regime has built a network of internment camps with the purpose of “re-educating” Uighurs, investing in extensive surveillance systems that involve mobile phone tracking, voice and DNA collection, police checkpoints, ubiquitous cameras, and facial recognition algorithms tuned specifically toward tracking, targeting, and forcibly relocating Uighurs to these camps (Meserole & Polyakova 5). To date, the regime has subjected over a million Uighurs to these internment camps (Samuel).
These unjust practices have garnered international attention and condemnation—on March 11, 2020, a bipartisan group of seventeen concerned senators wrote in a letter to Mike Pompeo: “China uses facial recognition to profile Uyghur individuals, classify them on the basis of their ethnicity, and single them out for tracking, mistreatment, and detention,” urging legislative or diplomatic action against the Chinese government (Ng). In China’s dual pursuit of ethnic, “extremist” cleansing and of “dominating cutting edge technology industries”—the global race for technology, particularly AI-based technologies—the regime has poured billions of dollars into AI-related facial recognition research in a stated effort to maintain social and political stability (Gravett 153).
Crucial to note, especially in news media, is the distinction between the internment camps and the facial recognition technology itself, and where reporters and the general public have placed blame: much criticism and emphasis is placed on the facial recognition technology—mainly, however, as a means through which the government identifies and then relocates Uighurs to internment camps, where they are subjected to physical and psychological torture, including forced indoctrination with Chinese Communist propaganda, vocational training programs, coercive re-education tactics, and mass sterilization of Uighur women to suppress the minority population (“China Forcing Birth Control on Uighurs to Suppress Population, Report Says”). Primary concerns with facial recognition, especially in the news media, focus on infringement of data privacy, violation of personal rights, technical shortcomings of the algorithms used to classify images—biases and inaccuracies in these models—as well as potential misuse by government agencies.
Aside from Islamophobia toward Uighurs, what necessitates China’s reliance on facial recognition technology to constantly surveil its inhabitants? In particular, how does the fear of being facially captured impact our behavior and internal self? And how does the Chinese government exploit this through facial recognition? While the technology itself can be seen as a means of altering behavior—detecting Uighurs and then forcing them into internment camps with the intention of re-educating them and shifting their ideological allegiance—is facial recognition actually, in itself, a form of behavioral, and subsequently ideological, engineering (altering behavior and ideological allegiance through technology)? The implications of such a dystopian prospect threaten the foundations of liberty on which the US and other nations are built. They are pertinent to address for the safety and sanctity of citizens in nations around the globe.
In relation to the Uighur crisis, news media focus on the broad implications of AI and facial recognition in enabling the Chinese government to mass incarcerate Uighurs, without adequately addressing the internal, psychological ramifications of the facial recognition technology and mass surveillance themselves on the Uighurs’ state of mind outside the camps and in their daily lives. The very surveillance of Uighurs, enabled through facial recognition, degrades their social and cultural identities before they have even entered the camps: I believe facial recognition itself is already precipitating a shift in ideology and attitudes—internal beliefs—because repetitively altering external behavior in the face of constant mass surveillance in turn shifts internal attitudes. A socio-psychological perspective can shed light on these shifting dynamics and provide a scientific foundation for this potential occurrence.
This constant surveillance instills fear in the Uighur population and forces them to externally adhere to the government’s behavioral norms, both in public and in private, which internally alters the Uighurs’ collective and individual cultural identities, bending their sense of self toward China’s will. Repetitively altering outward behavior can subconsciously impact our internal beliefs: from a psychological standpoint, studies have shown that the line between the external and internal self is thin, and that changing behaviors correlate with changing attitudes (Stangor et al.). While the “re-education” internment camps themselves are meant to shift supposedly extremist Islamic ideologies, China’s dual use of facial recognition-based surveillance—in its quantification of human facial features and subsequent imposition of socially constructed, essentialized notions of race—coupled with encroaching at-home surveillance methods, succeeds not only in altering outward behaviors but also in shifting internal ideological attitudes, thus degrading the Uighurs’ cultural identities and senses of self. In essence, this ubiquitous surveillance forces the Uighurs to shift their internal mindsets, complementing the total ideological shift caused by the internment camps and instilling constant fear and uncertainty in their lives. China, potentially aware of the inherent degradation that facial recognition-based surveillance inflicts on the Uighurs, deliberately uses it as a method of psychologically manipulating the internal attitudes of its dissidents.
AI ethics researcher Luke Stark likens the dangers of facial recognition technology to plutonium and nuclear weapons, in its racialization of individuals and simultaneous threat to one’s sense of self-identity—he asks: “Why introduce an invasive technology with a wide range of ill effects, when other mechanisms will do as well?” (Stark 54). Indeed, researcher and professor Ali Caksu of Yildiz Technical University in Istanbul describes how the Chinese government already employs a multitude of surveillance mechanisms to track every minute detail of its citizens’ activities, intensified especially for Uighurs—these include smartphone surveillance, tracking and translation of texts and audio messages, armed security checkpoints, DNA tracking, and even QR-code tracking of items bought by Uighurs, such as kitchen utensils (knives, especially) (Caksu 187). The Chinese government would seem to have a sufficient number of tracking measures in place, yet it relies heavily on facial recognition as a means of surveillance in public environments. Given that the Uighurs’ activities are already tracked in private, facial recognition surveillance adds another dimension by restricting their public behavior, which allows for psychological manipulation in ways that physical, forced compliance at internment camps often cannot achieve.
From a perspective grounded in critical race theory, Stark highlights that facial recognition in China signifies a fundamental attack on human identities in quantifying human facial features as a means of essentializing racial identities (Stark 52). Facial recognition algorithms operate by extracting facial features from images, assigning quantitative values to those features, and then mapping those values onto racial classifications. Stark notes: “reducing humans into sets of legible, manipulable signs has been a hallmark of racializing scientific and administrative techniques going back several hundred years. The systems used by facial recognition technologies to code human faces perform an essentializing visual schematization” (Stark 52). This essentializing of race, propagated by China, is a social construct, as Stark argues: scientifically, “physiological facial variation is not dispositive of racial categories, either biological or sociological”—the algorithms behind facial recognition technology further enable this false construction and assignment of race by degrading human identities into numbers. The reduction of humans into quantifiable symbols is in itself a dehumanizing process; when used as a weapon against marginalized groups, it diminishes the identities of the oppressed and sends them a message of their supposedly inferior, reducible identities. Considered in the context of Uighur oppression in China, these algorithms, in and of themselves, already fundamentally attack the Uighurs’ sense of self and cultural identities: Uighurs are made to feel genuinely inferior because the systems assign numerical values to their facial features, values that are then used to mark them for imprisonment.
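The structure Stark criticizes can be made concrete with a deliberately schematic sketch. Everything below is invented for illustration (the feature extractor, the category labels, and the centroid values are all hypothetical), and no real system works this simply; the sketch shows only the logic in question: a face is reduced to a vector of numbers, and a classification is nothing more than a distance comparison between vectors.

```python
import math

def extract_features(face_image):
    # Hypothetical stand-in for a real embedding network: average each
    # pixel row into one number, yielding a crude numeric "signature."
    return [sum(row) / len(row) for row in face_image]

def classify(features, centroids):
    # Assign whichever label's centroid is nearest in Euclidean distance.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Toy 3x3 "image" and two invented category centroids.
image = [[0.9, 0.8, 0.9], [0.7, 0.9, 0.8], [0.8, 0.7, 0.9]]
centroids = {"category_a": [0.85, 0.80, 0.80], "category_b": [0.10, 0.20, 0.10]}
label = classify(extract_features(image), centroids)
```

The dehumanizing reduction Stark describes is visible in the types alone: a person enters the pipeline as pixels and exits as a label, with nothing of their identity surviving the conversion.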
The “Uighur alarm,” developed by tech conglomerate Huawei, is a prime example of this reduction of human identity—the system alerts Chinese authorities when an Uighur has been detected, objectifying and degrading Uighur identities on a racial scale (Harwell and Dou). This is further exacerbated by constant surveillance that alters behavior, creating a reciprocal relationship between altered thought and altered behavior.
Intuitively, constant surveillance through facial recognition alters our outward behavior—what scholars have referred to as “behavioral engineering,” the altering of external human behavior through technology (Ng). Caksu describes China’s facial recognition-based surveillance of Uighurs as a type of “virtual internment,” noting that entire cities function as internment camps for Uighurs due to the Chinese government’s constant surveillance. According to Caksu, this type of surveillance “focuses on forced behavioral change through a mentally induced state of control that aims to shape future generations” (Caksu 186). This includes the forced installation of spyware called “Cleannet Bodyguard” on mobile devices belonging to Uighurs, meant to track online activity and alert authorities to suspicious behavior (Leibold 52). Authorities also collect DNA data and attach GPS sensors to Uighur-owned vehicles in order to track them (Meserole & Polyakova 5). China’s facial recognition technology, implemented in security cameras across Xinjiang and at police checkpoints, analyzes a person’s distinctive facial features in order to classify them into a racial category (Gravett 158). Moreover, in real time, these algorithms can capture an individual’s gender, height, clothing characteristics, and even gait—capturing behavioral characteristics rather than mere static photos (Gravett 158). As a result of these tracking measures and the fear of being transported to camps, Uighurs are forced to alter their outward behavior to avoid suspicion.
By examining the scientific basis of changes in behavior, we can further uncover how behavior impacts changes in attitude, and connect this to the Uighurs’ repetitive shifts in public behavior and the impact of those shifts on their ideological leanings. Change in behavior when surveilled has a scientific basis: in surveilled environments, individuals tend to acquiesce to social norms in front of a “watchful eye” in order to gain approval from a higher authority or a general public audience (van Rompay et al. 60). Moreover, the constant fear of being physically captured by a camera, especially of exposing one’s distinctive facial features, inherently alters our behavior in public—since these cameras not only surveil but pinpoint identities and access a plethora of data given just a snapshot of a face, we are inherently more cautious and tend to follow social norms when we know we are being visually surveilled. In private, Uighurs are also cautious, given China’s pervasive at-home surveillance. Caksu notes: “in the Uyghur Region individuals caught praying, fasting, growing a beard or wearing a hijab are arrested and sent to interment camps, as they are considered extremists or even terrorists. Nevertheless, avoiding such ‘extremist’ or ‘terrorist’ actions is not enough; those Uyghurs outside internment camps too must show their sincerity in distancing themselves from Islamic injunctions like avoiding eating pork and drinking alcohol” (Caksu 184).
Given the pervasive and extensive reach with which these cameras surveil and identify individuals, Uighurs are forced to repetitively alter their public behavior and, in particular, to forsake normal cultural customs, as Caksu notes—along with being surveilled in private, Uighurs are essentially forced to abandon their entire ideology out of constant fear of being sent to internment camps, having to be especially careful about their texts, how they speak at home, and the types of items they buy. In public, they are unable to express any leanings toward Islam or their religion. The Chinese government employs a low threshold for characterizing Uighurs as “extremist,” as Caksu notes in Table 1, which forces the Uighurs to act oppositely and to refrain from following or practicing Islamic ideology in any setting.
From a socio-psychological perspective, theories regarding cognitive dissonance and its impacts on behavior and internal attitude are complex, but they suggest that the Uighurs’ change in behavior potentially shifts their ideological and internal attitudes. In essence, the dissonance theory put forth by social psychologists Leon Festinger and James Carlsmith holds that engaging in counterattitudinal behavior, often as a result of external stimuli or pressure, creates a cognitive dissonance between our internal beliefs and outward behavior, which can be resolved by internal changes in attitude and may therefore precipitate attitudinal alterations (Festinger & Carlsmith 113). The theory has faced considerable debate and opposing evidence, particularly concerning other factors that influence attitudes, and its applicability is often case-specific. Collins and Hoyt, however, established a “personal responsibility-for-consequences” theory that reconciles opposing dissonance theories and holds that “high consequences, high responsibility, and low inducement situations” yield the greatest attitude changes (Collins & Hoyt 578). In essence, the researchers argue that when individuals feel a certain degree of responsibility, coupled with grave consequences for their actions, they behave in a way that mitigates self-damage, and in the process are prone to altering their own beliefs and attitudes. The degree to which internal attitudes change is often commensurate with the relative degrees of responsibility, consequence, and inducement. Applying this paradigm to the case of Uighurs under facial recognition: there is a grave, enormous sense of responsibility and consequence associated with exhibiting non-uniform behavior, and the Chinese government seldom needs to induce Uighurs into exhibiting what it regards as extremist behavior.
The consequence is being forced into an internment camp, and the sense of responsibility stems from the public shame and humiliation associated with facial recognition cameras, as authorities will detain Uighurs in public view (Ng).
Key to note is that Uighurs are forced to refrain from certain behaviors due to facial recognition—in the internment camps, they are then forced to actively proclaim Chinese communist ideals, which precipitates a total ideological shift. Yet even in refraining from acting a certain way or adhering to Islamic practices, that fragment of their culture and sense of self has effectively been shattered, as they no longer actively participate in their former religious practices and can, in virtually no setting, express devotion to their religion. While Uighurs do not necessarily have to actively proclaim allegiance to Chinese communist ideals, the act of actively avoiding anything Islam-related fragments their sense of self—the repeated practice of disregarding their roots and religion can be interpreted as active opposition to Islam, which can constitute an active counterattitudinal behavior. Taken together, there is a strong case that Uighurs alter their own religious leanings as a result of engaging in this active counterattitudinal behavior, perhaps subconsciously or unknowingly shifting ideological beliefs as an attempt to rationalize their behavior, to cope with the gravity of their situation, or as a function of repetitive, ingrained behavior.
Stangor et al., however, note: “when we use harsh punishments we may prevent a behavior from occurring. However, because the person sees that it is the punishment that is controlling the behavior, the person’s attitudes may not change.” This seems reasonable for occasional or one-time occurrences, but in the case of Uighur surveillance, where behavior must be altered repetitively and constantly, the validity of that argument can be called into question. The severity of consequences in Collins and Hoyt’s theory—analogous to Stangor et al.’s “punishment”—nevertheless plays a significant role in shaping attitude, countering Stangor et al.’s point; notably, when the consequences, or punishments, reach the extremity of the Uighurs’ case, individuals feel internally obligated to shift both their behavior and their beliefs in order to avoid imprisonment and potentially death.
When I first encountered an April 2019 New York Times article a few years ago, entitled “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority,” I was appalled: as someone who firmly believed in AI’s power for social good, I found China’s deliberate exploitation of AI-based facial recognition algorithms to detect and imprison Uighur Muslim minorities—AI weaponized against marginalized communities—deeply unsettling (Mozur). Research editor Richard Van Noorden’s November 2020 article “The Ethical Questions that Haunt Facial Recognition Research,” published in Nature—one of the most renowned peer-reviewed scientific journals—focuses on the facial recognition algorithms underlying Uighur persecution in China as its central exhibit of unethical AI, rebuking peer-reviewed journals for ever allowing research articles on facial recognition algorithms designed to distinguish between races, particularly to identify Uighurs, to be published in the first place (Van Noorden 354).
In considering other facets of facial recognition, particularly its ramifications for the psychological state of those surveilled, I believe it is important to consider the extent to which it can shift internal beliefs and attitudes—and how terrifying it is in that respect, in facilitating a truly dystopian, broken future world. I now understand why Stark likens facial recognition to nuclear weapons: it poses an enormous threat to human safety, well-being, and culture, and must be regulated. I am heartbroken that, despite AI’s capability to change the world for the better, we see its malicious use on an unprecedented scale, particularly with facial recognition in China. Gene Bunin, a reporter for The Guardian, quotes an Uighur man: “We’re a people destroyed” (Bunin). I think that says it all. Beyond the issues of data privacy and collection associated with facial recognition, there is an inherent degradation of humanity, culture, and the internal sense of self propagated by the technology when it falls into the wrong hands. The case of the Uighurs in China, who have had their cultural identities shattered and lives ruined, proves how psychologically tormenting this technology is in facilitating cultural and social degradation, and demonstrates the need for proper regulation and intervention to prevent this from spreading across the globe.
Annotated Bibliography
Buckley, Chris, and Paul Mozur. “How China Uses High-Tech Surveillance to Subdue Minorities.” The New York Times, 22 May 2019. NYTimes.com, https://www.nytimes.com/2019/05/22/world/asia/china-surveillance-xinjiang.html. This source provided relevant background on the Uighur crisis in China.
Bunin, Gene A. “‘We’re a People Destroyed’: Why Uighur Muslims across China Are Living in Fear.” The Guardian, 7 Aug. 2018. The Guardian, https://www.theguardian.com/news/2018/aug/07/why-uighur-muslims-across-china-are-living-in-fear. This source provided relevant background on the psychological and social state of mind of the Uighurs in China.
Byler, Darren. “China’s Hi-Tech War on Its Muslim Minority.” The Guardian, 11 Apr. 2019, https://www.theguardian.com/news/2019/apr/11/china-hi-tech-war-on-muslim-minority-xinjiang-uighurs-surveillance-face-recognition. Accessed 28 Apr. 2022. This source provided relevant information on China’s surveillance and use of technology to track Uighurs.
“China Forcing Birth Control on Uighurs to Suppress Population, Report Says.” BBC News, 29 June 2020, https://www.bbc.com/news/world-asia-china-53220713. This source provided relevant background on the measures the Chinese government uses to suppress the Uighur population.
Collins, Barry E., and Michael F. Hoyt. “Personal Responsibility-for-Consequences: An Integration and Extension of the ‘Forced Compliance’ Literature.” Journal of Experimental Social Psychology, vol. 8, no. 6, Nov. 1972, pp. 558–93. ScienceDirect, https://doi.org/10.1016/0022-1031(72)90080-7. This source provided an essential foundational basis for psychological impacts of engaging in behavior and how it impacts attitude.
Festinger, Leon, and James M. Carlsmith. “Cognitive Consequences of Forced Compliance.” The Journal of Abnormal and Social Psychology, vol. 58, no. 2, 1959, pp. 203–10. APA PsycNet, https://doi.org/10.1037/h0041593. This source was instrumental in illustrating psychosocial dissonance theories associated with behavior and change in internal attitudes.
Harwell, Drew, and Eva Dou. “Huawei Tested AI Software That Could Recognize Uighur Minorities and Alert Police, Report Says.” The Washington Post, https://www.washingtonpost.com/technology/2020/12/08/huawei-tested-ai-software-that-could-recognize-uighur-minorities-alert-police-report-says/. Accessed 29 Apr. 2022. This source gave information on the “Uighur alarm” implemented in facial recognition software.
“How Facial Recognition Technology Is Bringing Surveillance Capitalism to Our Streets.” OpenDemocracy, https://www.opendemocracy.net/en/oureconomy/how-facial-recognition-surveillance-capitalism-streets/. Accessed 28 Apr. 2022.
Jansen, Anja M., et al. “The Influence of the Presentation of Camera Surveillance on Cheating and Pro-Social Behavior.” Frontiers in Psychology, vol. 9, 2018. Frontiers, https://www.frontiersin.org/article/10.3389/fpsyg.2018.01937. This source highlighted how camera surveillance impacts external behavior.
Khayrallah, Nadia. “Beyond ‘That’s Not Funny’: Reading Into How We Read a Prison Rape Joke.” The Morningside Review, vol. 11, May 2015. journals.library.columbia.edu, https://journals.library.columbia.edu/index.php/TMR/article/view/5421. This is a method source that helped me better use the first person in this paper.
Larkin, Fionnuala, et al. “How Does Restricted and Repetitive Behavior Relate to Language and Cognition in Typical Development?” Development and Psychopathology, vol. 29, no. 3, Aug. 2017, pp. 863–74. Cambridge University Press, https://doi.org/10.1017/S0954579416000535. This source expanded on how repetitive behavior impacts cognition, but focused on much younger years and infant development.
Mozur, Paul. “Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras.” The New York Times, 8 July 2018, https://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html. This source provided relevant background information on the Uighur crisis in China.
---. “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority.” The New York Times, 14 Apr. 2019, https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html. Accessed 29 Apr. 2022. This source provided relevant background information on the Uighur crisis in China.
Nagib, Rima Abdul Mujib, and Syaiful Anam. “De-Extremization Effort through Political Re-Education Camps In China: A Case of Uyghur Ethnic Minorities.” Nation State: Journal of International Studies, vol. 4, no. 1, 1, June 2021, pp. 51–72. jurnal.amikom.ac.id, https://doi.org/10.24076/nsjis.v4i1.517. This source provided background information.
Ng, Alfred. “China Tightens Control with Facial Recognition, Public Shaming.” CNET, https://www.cnet.com/news/politics/in-china-facial-recognition-public-shaming-and-control-go-hand-in-hand/. Accessed 29 Apr. 2022. This source highlighted how shame and humiliation plays into facial recognition.
Polyakova, Alina, and Chris Meserole. Exporting Digital Authoritarianism: The Russian and Chinese Models. p. 22.
Primack, Dan. “Facial Recognition Startup Used to Help Detain China’s Uighur Muslims Raises $750 Million.” Axios, Newstex, 8 May 2019, https://www.proquest.com/docview/2428657772/citation/85F1A8FC08E9442EPQ/1.
Raza, Zainab. “China’s ‘Political Re-Education’ Camps of Xinjiang’s Uyghur Muslims.” Asian Affairs, vol. 50, no. 4, Aug. 2019, pp. 488–501, https://doi.org/10.1080/03068374.2019.1672433. This source provided background information.
“Reporter on China’s Treatment of Uighur Muslims: ‘This Is Absolute Orwellian Style Surveillance.’” CBS News, https://www.cbsnews.com/news/china-puts-uighurs-uyghyrs-muslim-children-in-prison-re-education-internment-camps-vice-news/. Accessed 29 Apr. 2022.
Samuel, Sigal. “China Is Treating Islam Like a Mental Illness.” The Atlantic, 28 Aug. 2018, https://www.theatlantic.com/international/archive/2018/08/china-pathologizing-uighur-muslims-mental-illness/568525/.
Smith, Marcus, and Seumas Miller. “The Ethical Application of Biometric Facial Recognition Technology.” AI & Society, vol. 37, no. 1, Mar. 2022, pp. 167–75, https://doi.org/10.1007/s00146-021-01199-9. This source discussed ethical applications of facial recognition.
Stangor, Charles, et al. “Changing Attitudes by Changing Behavior.” 2022, https://opentextbc.ca/socialpsychology/chapter/changing-attitudes-by-changing-behavior/. This source discussed psychological theories behind changing behaviors and attitudes.
T, Nhx. “China and Facial Recognition: How the Country Uses the Tech to Monitor, Control, and Even Publicly Shame People.” Tech Times, 11 Aug. 2020, https://www.techtimes.com/articles/251733/20200811/facial-recognition-how-china-uses-the-technology-to-control-or-publicly-shame-its-people.htm.
“The Tension Between Inner Self and Outer Self.” Verywell Mind, https://www.verywellmind.com/tension-between-inner-self-and-outer-self-4171297. Accessed 28 Apr. 2022.
Tohti, Ilham. Present-Day Ethnic Problems in Xinjiang Uighur Autonomous Region: Overview and Recommendations. Translated by Cindy Carter, p. 39.
van Rompay, Thomas J. L., et al. “The Eye of the Camera: Effects of Security Cameras on Prosocial Behavior.” Environment and Behavior, vol. 41, no. 1, Jan. 2009, pp. 60–74, https://doi.org/10.1177/0013916507309996. This source discussed the effects of surveillance on external behavior.
“Who Are the Uyghurs and Why Is China Being Accused of Genocide?” BBC News, 21 June 2021, https://www.bbc.com/news/world-asia-china-22278037.
Wiggins, Jacky. Survival of Uyghur Ethnic Identity: A Case of Self Preservation. p. 11.
Yang, Yuan. “The Role of AI in China’s Crackdown on Uighurs.” FT.com, Dec. 2019. ProQuest, https://www.proquest.com/docview/2324510201/citation/3C02B0577EC54425PQ/1.
Yee, Nick, et al. “The Proteus Effect: Implications of Transformed Digital Self-Representation on Online and Offline Behavior.” Communication Research, vol. 36, no. 2, Apr. 2009, pp. 285–312, https://doi.org/10.1177/0093650208330254.
Zuboff, Shoshana. “Opinion | You Are Now Remotely Controlled.” The New York Times, 24 Jan. 2020, https://www.nytimes.com/2020/01/24/opinion/sunday/surveillance-capitalism.html.