Tuesday, February 2, 2021

Automated Facial Recognition System of India and its Implications

This article is by Vaishnavi Krishna Mohan.

Publisher: Global Views 360

Publication Date: February 2, 2021
CCTV in operation | Source: Rich Smith via Unsplash

On 28 June 2019, the National Crime Records Bureau (NCRB) opened bids and invited turnkey solution providers to implement a centralized Automated Facial Recognition System, or AFRS, in India. As the name suggests, AFRS is a facial recognition system proposed by the Indian Ministry of Home Affairs, geared towards modernizing the police force and identifying and tracking criminals using Facial Recognition Technology, or FRT.

The technology draws on databases of photos collected from criminal records, CCTV cameras, newspapers and media, driver’s licenses and government identity documents to gather facial data. FRT maps the features and geometry of a face from these images and creates a “facial signature” based on the information collected. Each facial signature corresponds to a mathematical representation, which is then compared against a database of known faces.
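To make the matching step concrete, the sketch below shows, in broad strokes, how a query signature can be compared against stored signatures using cosine similarity. It is an illustrative example only, not the NCRB’s actual pipeline; the 128-dimensional vectors, the record names and the 0.6 threshold are all hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two facial-signature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(query, database, threshold=0.6):
    """Return the record whose stored signature is most similar to the query,
    or None if no similarity clears the threshold."""
    best_id, best_score = None, -1.0
    for record_id, signature in database.items():
        score = cosine_similarity(query, signature)
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id if best_score >= threshold else None

# Hypothetical 128-dimensional signatures standing in for an embedding model's output.
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128) for name in ("record_001", "record_002")}
query = database["record_001"] + rng.normal(scale=0.05, size=128)  # a noisy re-capture
print(match_face(query, database))  # expected: record_001
```

The threshold controls the trade-off between false matches and missed matches, which is one reason low-quality CCTV footage and a poorly tuned system can produce the error rates discussed below.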

This article explores the implications of implementing Automated Facial Recognition technology in India.

Facial recognition software has become widely popular in the past decade. Several countries have been trying to establish efficient facial recognition systems for tackling crime and building effective criminal tracking systems. Although the technology offers a few potential benefits, these seem insignificant when weighed against the serious concerns it raises about people’s privacy and safety.

Images of every person captured by CCTV cameras and other sources will be treated as images of potential criminals and matched by the FRT against the Crime and Criminal Tracking Network and Systems (CCTNS) database. This implies that all of us will be treated as potential criminals when we walk past a CCTV camera. As a consequence, the presumption of “innocent until proven guilty” will be turned on its head.

It is no surprise that China has installed the largest centralized FRT system in the world. Data can be collected and analyzed from the more than 200 million CCTV cameras the country operates, along with 20 million specialized facial recognition cameras that continuously gather data for analysis. These systems are currently used by China to track and manipulate the behavior of ethnic Uyghur minorities in the camps set up in the Xinjiang region. FRT was also used during the Hong Kong pro-democracy protests to identify and profile protestors. These practices have raised concerns worldwide about the erosion of freedom of expression, the right to privacy and basic dignity.

It is very likely that Indians will face similar consequences if AFRS is rolled out across the country.

There are several underlying concerns about implementing AFRS.

Firstly, the system has proven inefficient in several instances. In August 2018, Delhi Police used a facial recognition system that was reported to have an accuracy rate of just 2%. The FRT software used by the UK’s Metropolitan Police returned a staggering false positive rate of more than 98%. In another instance, the American Civil Liberties Union (ACLU) used Amazon’s face recognition software, “Rekognition”, to compare images of members of the American Congress with a database of criminal mugshots. To Amazon’s embarrassment, the results included 28 incorrect matches. Another significant piece of evidence of inefficiency came from an experiment performed by McAfee. The researchers used CycleGAN, an image-to-image translation algorithm adept at morphing photographs; it can, for instance, turn horses into zebras and paintings into photographs. McAfee used the software to misdirect a facial recognition algorithm. The team fed 1,500 photos of two team members into CycleGAN, which morphed them into one another, and kept feeding the resulting images into different facial recognition algorithms to check who they recognized. After generating hundreds of such images, CycleGAN eventually produced a fake image that looked like person ‘A’ to the naked eye but tricked the FRT into identifying it as person ‘B’. Owing to these unsatisfactory results, the researchers expressed concern about the inefficiency of FRTs. In fact, mere eye makeup can fool an FRT into allowing a person on a no-fly list to board a flight. This trend of inefficiency has been noticed worldwide.
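The search loop in the McAfee experiment can be summarized in a short sketch. Here, `generate_morph` and `match_score` are hypothetical placeholders standing in for a trained CycleGAN generator and the targeted face recognition model; this illustrates the described procedure, not McAfee’s actual code.

```python
def find_adversarial_morph(photos_a, photos_b, generate_morph, match_score,
                           threshold=0.9, max_attempts=500):
    """Search for a morphed image that a human would read as person A
    but that the recognition model confidently matches to person B."""
    for attempt in range(max_attempts):
        # Each step produces a morph slightly further along the A-to-B blend.
        morph = generate_morph(photos_a, photos_b, step=attempt)
        if match_score(morph, identity="person_B") >= threshold:
            return morph  # visually still person A (checked by eye in the experiment)
    return None

# Dummy placeholders so the sketch runs end to end.
dummy_morph = lambda a, b, step: f"morph_{step}.png"
dummy_score = lambda image, identity: 0.95  # pretend the model is fooled immediately
print(find_adversarial_morph(["a.jpg"], ["b.jpg"], dummy_morph, dummy_score))
```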

Secondly, facial recognition systems rely on machine learning, and it is concerning that FRT has often reflected the biases prevalent in society, leading to frequent facial mismatches. A study by MIT shows that FRT routinely misidentifies people of color, women and young people. While the error rate was 8.1% for men, it was 20.6% for women and 34% for women of color. Such error rates, obtained in a supervised study in a laboratory setting on a sample population, are simply unacceptable. In the above-mentioned American Civil Liberties Union study, the false matches were disproportionately of African Americans and other people of color. In India, 55% of undertrial prisoners are Dalits, Adivasis or Muslims, even though these three groups together make up just 39% of the total population (2011 census). If AFRS is trained on these records, it would reproduce the same social prejudices against minority communities and return inaccurate matches. The tender issued by the Ministry of Home Affairs gives no indication of how these biases would be eliminated, nor does it mention human-verifiable results. Using a system embedded with societal bias to replace biased human judgement defeats any claim of technological neutrality. Deploying FRT systems in law enforcement will be ineffective at best and disastrous at worst.
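One way to make such disparities visible is to audit error rates per demographic group before any deployment. The sketch below shows only the basic bookkeeping involved; the groups, records and resulting numbers are purely hypothetical and are not the MIT study’s data.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, predicted_match, true_match).
records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

for group in sorted(totals):
    print(f"{group}: error rate = {errors[group] / totals[group]:.0%}")
```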

Thirdly, concerns about invasion of privacy and mass surveillance have not been addressed satisfactorily. Facial recognition makes data protection almost impossible: publicly available information is collected, but it is analyzed to a point of intimacy. India does not have a well-established data protection law, as the Personal Data Protection Bill is yet to be enacted. Implementing AFRS in the absence of such safeguards is a potential threat to our personal data. Moreover, police and other law enforcement agencies will have a great degree of discretion over our data, which can lead to mission creep. To add to the list of privacy concerns, the AFRS bidder will be largely responsible for maintaining the confidentiality and integrity of the data, which will be stored apart from the established ISO standard. Additionally, the tender gives no preference to “Make in India” and raises no objection to foreign bidders, even those headquartered in China, a hub of data breaches. There is no governing system and there are no legal limits or restrictions on the technology, and no legal standard to ensure proportionate use and protection for those who non-consensually interact with the system. Furthermore, the tender does not define a “criminal”. Is a person a criminal when a charge sheet is filed against them? When the person is arrested? When an individual is convicted by a court? Or is it any person who is a suspect? Since the word “criminal” is not clearly defined in the tender, law enforcement agencies will ultimately be able to track far more people than necessary.

The notion that AFRS will lead to greater efficacy must be critically questioned. San Francisco imposed a total ban on police use of facial recognition in May 2019. Police departments in London are under pressure to stop using FRT after several instances of discrimination and inefficiency. India would do well to learn from the mistakes of other countries rather than repeat them.


Read More

July 17, 2021 6:39 PM

How Facebook helps the Authoritarian Regime in Vietnam

The ability to coerce American tech giants like Facebook into compliance is certainly a point of pride for Vietnamese leaders. In October 2019, Facebook’s CEO Mark Zuckerberg stated that “Facebook stands for free expression. In a democracy, a private company shouldn’t have the power to censor politicians or the news.” However, Facebook’s double standard is no novelty. In August 2019, the Minister of Information and Communications, Nguyen Manh Hung, took the parliamentary floor and stated that Facebook was restricting access to “increasing amounts” of content in Vietnam. Further, Hung stated that Facebook was complying with 70-75% of the Vietnamese government’s requests for post restrictions. By October 2020, this number had gone up to 95%. Facebook acknowledged that the amount of content on which it imposed restrictions jumped by over 500% in the second half of 2018 alone.

Unlike China, Vietnam has adopted a relatively open attitude to western social media. Vietnamese politicians consider social media beneficial, perhaps because it helps promote their missions, personal agendas and even propaganda. In fact, Vietnam has a military unit, called Force 47, whose purpose is to correct “wrong views” on the internet. While there is no set definition of “wrong views,” people found guilty of holding them can be jailed for up to 20 years.

Furthermore, blocking western social media might not be in Vietnam’s self-interest, as doing so could hamper relations with the U.S., with whom Vietnam wants to strengthen ties. For decades, Vietnam’s top communist leadership has been single-minded about what it identifies as “toxic information”. The definition of “toxic information” has only broadened over the years, enabling the authorities to bend the term to their whims. Vietnamese leaders have misused the threat of “toxic information” by applying the label to content unfavorable to their regime.

In 2020, Facebook removed over 620 supposed fake accounts, over 2,200 links and several thousand posts deemed “anti-state” from Vietnam. In a country without independent media, Vietnamese people rely on platforms like Facebook to read and discuss vital and controversial issues such as the dispute in Dong Tam, a village outside Vietnam’s capital, Hanoi, where residents were fighting the authorities’ plans to seize their farmland to build a factory. Bui Van Thuan, a 40-year-old chemistry teacher and blogger, showed his solidarity with the fight and condemned the country’s leaders in a Facebook post which stated, “Your crimes will be engraved on my mind. I know you, the land robbers, will do everything, however cruel it is, to grab the people’s land.” At the government’s insistence, Facebook blocked his account the very next day, preventing over 60 million Vietnamese users from seeing his posts. A day later, Dong Tam village was stormed by police with grenades and tear gas. A village leader and three officers were killed, just as Thuan had anticipated. Thuan’s account remained suspended for three months, after which Facebook informed him that the ban would be permanent. “We have confirmed that you are not eligible to use Facebook,” the message read in Vietnamese. Towards the end of the murder trial held over the clash, a Facebook spokesperson said Thuan’s account had been blocked due to an error and that the timing of the lifting of restrictions was coincidental. The spokesperson denied censoring profiles at the government’s demand. Thuan’s blacklisting illustrates how willingly Facebook submits to the authoritarian government’s censorship demands.

In April 2018, 16 activist groups and media organizations and 34 well-known Facebook users wrote an open letter to CEO Mark Zuckerberg, accusing Facebook of assisting Vietnam in suppressing dissenting voices. Force 47, or E47, a 10,000-member cyber unit, was singled out in the letter, which called the unit “state-sponsored trolls” that spread misinformation about Vietnamese pro-democracy activists.

Force 47 was deployed by the state in 2016 to maintain a “healthy” internet environment. The cyber unit took advantage of an apparent loophole in Facebook’s moderation system, which can remove content automatically if enough people lodge complaints against or report a post or account. The letter alleged that the government used Force 47 to target and suspend accounts and content.

According to a report by The Intercept, E47’s modus operandi is for a member to share a target, often a pro-democracy dissident writer or activist. The target nominated for censorship is identified with an image marked with a red “X”. Anyone interested in victimizing the target simply reports the account or post for violating Facebook’s pliant community standards, regardless of whether the rules were actually broken. E47 users are asked to rate the targeted page one out of five stars, falsely flag the post and report the page itself.

Do Nguyen Mai Khoi, a singer and pro-democracy activist popularly known as “the Lady Gaga of Vietnam”, has been trying tirelessly for over two years to get Facebook to care about censorship in Vietnam. She has tried to draw Facebook’s attention to the fact that groups like Force 47, a pro-government Facebook group of police, military and other Communist Party loyalists, have actively been collaborating to suppress the voices of dissidents both offline and online. Her evidence is substantial and her arguments are clear. Yet despite several interactions with Alex Warofka, a Facebook product policy manager for human rights, Mai Khoi’s efforts have not been sincerely addressed. Instead, Facebook’s response was more infuriating: “We were not able to identify a sufficient level of community standards violations in order to remove that particular group (E47) or those particular actors.” Since E47 actors operate under real names, photos and authentic identities, Facebook dismissed Mai Khoi’s evidence. “At a high level, we require both widespread coordination, as well as the use of inauthentic accounts and identity,” Warofka told Khoi.

Dipayan Ghosh, a former public policy advisor at Facebook and the co-director of the Digital Platforms & Democracy Project at Harvard’s Kennedy School stated:

“I think for Zuckerberg the calculus with Vietnam is clear: It’s to maintain service in a country that has a huge population and in which Facebook dominates the consumer internet market, or else a competitor may step in. The thought process for the company is not about maintaining service for free speech. It’s about maintaining service for the revenue.”

It is not surprising that the inconsistency of Facebook’s ostensible community guidelines and policies extends beyond Vietnam. In 2016, during political unrest in Turkey, access to Facebook and other social media was repeatedly restricted, and Facebook complied with the Turkish government’s requests to restrict 1,823 pieces of content the government deemed unlawful. In 2018, Facebook-owned Instagram complied with the Russian government’s demands to remove content related to opposition activist Alexei Navalny’s anti-corruption investigation, making it inaccessible to the more than 5 million users who watched and followed the investigation. Facebook also routinely restricts posts that governments deem sensitive or off-limits in countries including Cuba, India, Israel, Morocco and Pakistan.

While Facebook’s CEO, Mark Zuckerberg, claims that the platform protects free expression, Facebook has been an active facilitator and flag-bearer of autocratic regimes. The social media giant’s apparent indifference and ignorance have failed its users terribly.
