Tuesday, February 2, 2021

Automated Facial Recognition System of India and its Implications

This article is by Vaishnavi Krishna Mohan


CCTV in operation | Source: Rich Smith via Unsplash

On 28 June 2019, the National Crime Records Bureau (NCRB) opened bids and invited turnkey solution providers to implement a centralized Automated Facial Recognition System, or AFRS, in India. As the name suggests, AFRS is a facial recognition system proposed by the Indian Ministry of Home Affairs, geared towards modernizing the police force and identifying and tracking criminals using Facial Recognition Technology, or FRT.

The technology draws on databases of photos collected from criminal records, CCTV cameras, newspapers and media, driver's licenses and government-issued IDs to gather facial data. FRT then maps the facial features and geometry of each face and creates a "facial signature" based on the information collected. Each facial signature is, in effect, a mathematical representation that is subsequently compared against a database of known faces.
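To make the matching step concrete, the sketch below, which is purely illustrative and assumes a hypothetical face-recognition model that converts a face image into a fixed-length vector, shows how a captured facial signature could be compared against a database of known faces using cosine similarity. The 128-dimensional random vectors, the 0.8 threshold and the function names are assumptions made for illustration; the actual AFRS design has not been made public.

import numpy as np

def cosine_similarity(a, b):
    # Similarity between two facial signatures; 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, database, threshold=0.8):
    # Return the closest known identity, or None if nothing is similar enough.
    best_name, best_score = None, -1.0
    for name, signature in database.items():
        score = cosine_similarity(probe, signature)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Example usage with random vectors standing in for real facial signatures.
rng = np.random.default_rng(0)
database = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = database["person_a"] + 0.05 * rng.normal(size=128)  # a noisy capture of person A
print(best_match(probe, database))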

This article explores the implications of implementing Automated Facial Recognition technology in India.

Facial recognition software has become widely popular in the past decade. Several countries have been trying to establish efficient facial recognition systems for tackling crime and building effective criminal tracking systems. Although the technology offers a few potential benefits, they appear insignificant when weighed against the serious concerns it raises about people's privacy and safety.

Images of every person captured by CCTV cameras and other sources will be treated as images of potential criminals and matched by the FRT against the Crime and Criminal Tracking Network and Systems (CCTNS) database. This implies that all of us will be treated as potential criminals whenever we walk past a CCTV camera. As a consequence, the presumption of "innocent until proven guilty" will be turned on its head.

It is no surprise that China has installed the largest centralized FRT system in the world. Data can be collected and analyzed from the more than 200 million CCTV cameras the country operates, along with 20 million specialized facial recognition cameras that continuously gather data for analysis. These systems are currently used by China to track and manipulate the behavior of the ethnic Uyghur minority in the camps set up in the Xinjiang region. FRT was also used by China during the Hong Kong democracy protests to identify and profile protestors. These steps raised concerns worldwide about the erosion of freedom of expression, the right to privacy and basic human dignity.

It is very likely that the same consequences will be faced by Indians if AFRS is established across the country.

There are several underlying concerns about implementing AFRS.

Firstly, the system has proven inefficient in several instances. In August 2018, Delhi Police used a facial recognition system that was reported to have an accuracy rate of just 2%. The FRT software used by the UK's Metropolitan Police returned a staggering false-positive rate of more than 98%. In another instance, the American Civil Liberties Union (ACLU) used Amazon's face recognition software, "Rekognition", to compare images of members of the American Congress with a database of criminal mugshots. To Amazon's embarrassment, the results included 28 incorrect matches.

Another significant piece of evidence of this inefficiency came from an experiment performed by McAfee. The researchers used CycleGAN, an image-to-image translation algorithm that is adept at morphing photographs; it can, for example, turn horses into zebras and paintings into photographs. McAfee used the software to misdirect a facial recognition algorithm. The team took 1,500 photos of two team members, fed them into CycleGAN to morph them into one another, and kept feeding the resulting images into different facial recognition algorithms to check whom they recognized. After generating hundreds of such images, CycleGAN eventually produced a fake image that looked like person 'A' to the naked eye but tricked the FRT into thinking it was person 'B'. Owing to these unsatisfactory results, the researchers expressed concern about the inefficiency of FRTs. In fact, mere eye makeup can fool an FRT into allowing a person on a no-fly list to board a flight. This trend of inefficiency has been noticed worldwide.
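McAfee has not published its experiment as code, but the search loop it describes can be sketched roughly as follows. Here generate_morph and recognize are hypothetical placeholders: the first stands in for a CycleGAN-style image translator trained on the two members' photos, the second for any off-the-shelf face recognition model that returns a predicted identity.

def adversarial_morph_search(photos_a, photos_b, generate_morph, recognize, steps=500):
    # Look for morphs that a human would still read as person A
    # but that the recognizer already labels as person B.
    candidates = []
    for step in range(1, steps + 1):
        blend = step / steps  # blend A toward B a little more each iteration
        morph = generate_morph(photos_a, photos_b, blend=blend)
        if recognize(morph) == "person_b":
            candidates.append((blend, morph))
    # Early entries are the most deceptive: still visually A, yet classified as B.
    return candidates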

Secondly, facial recognition systems use machine learning technology. It is concerning to note that FRT often reflects the biases prevalent in society, leading to frequent facial mismatches. A study by MIT shows that FRT routinely misidentifies people of color, women and young people: while the error rate was 8.1% for men, it was 20.6% for women, and 34% for women of color. Error rates of this magnitude in a supervised, laboratory-setting study on a sample population are simply unacceptable. In the above-mentioned American Civil Liberties Union study, the false matches were disproportionately African Americans and other people of color. In India, 55% of undertrial prisoners are Dalits, Adivasis or Muslims, although the combined population of these three groups amounts to just 39% of the total population (2011 census). If AFRS is trained on these records, it would reproduce the same social prejudices against minority communities and display inaccurate matches. The tender issued by the Ministry of Home Affairs gives no indication of eliminating these biases, nor does it make any mention of human-verifiable results. Using a system embedded with societal bias to replace biased human judgement defeats claims of technological neutrality. Deploying FRT systems in law enforcement will be ineffective at best and disastrous at worst.

Thirdly, the concerns about invasion of privacy and mass surveillance haven't been addressed satisfactorily. Facial recognition makes data protection almost impossible: publicly available information is collected, but it is analyzed to a point of intimacy. India does not have a well-established data protection law, given that the Personal Data Protection Bill is yet to be enacted. Implementing AFRS in the absence of such a safeguard is a potential threat to our personal data. Moreover, police and other law enforcement agencies will have a great degree of discretion over our data, which can lead to mission creep. To add to the list of privacy concerns, the winning bidder for AFRS will be largely responsible for maintaining the confidentiality and integrity of the data, with little beyond the prescribed ISO standard governing its storage. Additionally, the tender gives no preference to "Make in India" and raises no objection to foreign bidders, even those headquartered in China, a hub of data breaches. There is no governing system, and there are no legal limitations or restrictions on the technology. No legal standard has been set to ensure proportionate use or to protect those who non-consensually interact with the system. Furthermore, the tender does not define who counts as a "criminal". Is a person a criminal when a charge sheet is filed against them? When they are arrested? Only when convicted by a court? Or is any suspect a criminal? Since the word "criminal" isn't clearly defined in the tender, law enforcement agencies will ultimately be able to track a far larger number of people than necessary.

The notion that AFRS will lead to greater efficacy must be critically questioned. San Francisco imposed a total ban on police use of facial recognition in May 2019. Police departments in London have come under pressure to stop using FRT after several instances of discrimination and inefficiency. India would do well to learn from the mistakes of other countries rather than repeating them.


February 28, 2021 11:13 AM

Parler Shutdown, Big Tech, and Liberal Politics

The controversial social media site Parler has been facing problems over the spread of misinformation and the influence of several far-right groups. The platform became the most-downloaded free app in the Apple App Store on the weekend of November 8, the day major media outlets called the election for Joe Biden. It was deplatformed by Silicon Valley giants Apple, Google and Amazon after the storming of Capitol Hill. This article explains what Parler is, how it influences people, and what the controversy around it is about.

What is Parler?

Parler is a social media website founded by Rebekah Mercer, John Matze and Jared Thomson. The platform refers to itself as an “unbiased social media” where people can “speak freely and express yourself openly without fear of being 'deplatformed' for your views," according to its website and App Store description.

The app mainly attracts conservative users. Some of Parler's active users among public figures include Fox News host Sean Hannity, far-right activist Laura Loomer, radio personality Mark Levin, Senator Ted Cruz, and Congressman Devin Nunes. Eric Trump and Donald Trump's presidential campaign also have accounts on the platform.

With big tech companies like Twitter, Facebook and Instagram taking strict action against ex-President Donald Trump and flagging misinformation, Parler became a free-for-all space for conservatives.

Problems and influences

According to some reports, members of the Proud Boys, adherents of the QAnon conspiracy theory, anti-government extremists, and white supremacists all openly promote their views on Parler. Holocaust denial, anti-Semitism, racism and other forms of bigotry can also be found there.

Rebekah Mercer, co-founder of the website, and her family entered national politics during the 2016 elections, when they donated more than $23 million to groups backing conservative candidates.

Rebekah Mercer is widely reported to have persuaded then-candidate Trump to reshuffle his campaign organization and hire Steve Bannon and Kellyanne Conway to help run his presidential bid in the final stretch of the 2016 election.

The shutdown: opinions on Parler and the monopoly of tech giants

The social networking site went dark when Amazon stopped providing it cloud hosting services after it was revealed that the platform had been used to help organize the Capitol Hill attack of January 6, which left five people dead. Amazon's action was followed by Apple and Google, which banned the Parler mobile app from their respective stores.

After going offline, the app made a comeback several days later, registered with Epik as its provider. But Epik denied in an official statement that the company had any "contact or discussions with Parler in any form regarding our becoming their registrar or hosting provider."

A Reuters report, citing an infrastructure expert, pointed to a Russian tech firm as supporting Parler's return online. It said that the IP address Epik used is owned by DDoS-Guard, which is "controlled by two Russian men and provides services including protection from distributed denial of service attacks."

The united Silicon Valley attack began on January 8, when Apple emailed Parler and gave it 24 hours to prove it had changed its moderation practices or else face removal from the App Store. The letter claimed: "We have received numerous complaints regarding objectionable content in your Parler service, accusations that the Parler app was used to plan, coordinate, and facilitate the illegal activities in Washington D.C. on January 6, 2021 that led (among other things) to loss of life, numerous injuries, and the destruction of property."

It ended with this warning: “To ensure there is no interruption of the availability of your app on the App Store, please submit an update and the requested moderation improvement plan within 24 hours of the date of this message. If we do not receive an update compliant with the App Store Review Guidelines and the requested moderation improvement plan in writing within 24 hours, your app will be removed from the App Store.” The next day, Apple removed it from its App Store.

Banning the website in this way was seen as an alleged misuse of monopoly power by the tech giants. Indeed, in October, the House Judiciary Subcommittee on Antitrust, Commercial, and Administrative Law issued a 425-page report concluding that Amazon, Apple, Facebook and Google all possess monopoly power and are using that power anti-competitively. According to the report, iOS and Android hold an effective duopoly in mobile operating systems, and Apple has a monopolistic hold over what you can do with an iPhone: you can only put apps on your phone through the Apple App Store, and Apple has total gatekeeper control over that App Store.

Not only did leading left-wing politicians not object, but some of them were the ones who pleaded with Silicon Valley to use their power this way. After the internet-policing site Sleeping Giants flagged several Parler posts that called for violence, Rep. Alexandria Ocasio-Cortez asked: "What are @Apple and @GooglePlay doing about this?" Once Apple responded by removing Parler from its App Store, a move that House Democrats just three months earlier had warned was dangerous antitrust behaviour, she praised Apple and then demanded to know: "Good to see this development from @Apple. @GooglePlay what are you going to do about apps being used to organize violence on your platform?" Google later took the same step.

These actions showed the amount of power the Silicon Valley giants have, power that can effectively decide another company's fate. The power revealed by these companies' actions is dangerous, even if it can be helpful when used for good. The liberal New York Times columnist Michelle Goldberg called herself "disturbed by just how awesome [tech giants'] power is" and added that "it's dangerous to have a handful of callow young tech titans in charge of who has a megaphone and who does not." She nonetheless praised these "young tech titans" for using their "dangerous" power to ban Trump and destroy Parler. Her stance suggests that liberals are content as long as Silicon Valley censorship is used to silence their adversaries rather than themselves.

As Glenn Greenwald put it: "Liberals like Goldberg are concerned only that Silicon Valley censorship powers might one day be used against people like them, but are perfectly happy as long as it is their adversaries being deplatformed and silenced (Facebook and other platforms have for years banned marginalized people like Palestinians at Israel's behest, but that is of no concern to U.S. liberals)."

Clearly, the way Parler was misused for spreading propaganda had to be stopped, as it contributed to one of the worst days in American history, the storming of Capitol Hill. But the way it was censored and banned from the internet through the virtual unity of Silicon Valley giants Apple, Google and Amazon has revealed another troubling fact: just how much power these companies hold. If misused, that power could prove more dangerous than Parler itself. Yet as long as it is used to maintain peace and lawfulness, even liberals have no problem with it, at least for now.
