Tuesday, February 2, 2021

Automated Facial Recognition System of India and its Implications

By Vaishnavi Krishna Mohan | Global Views 360

CCTV in operation | Source: Rich Smith via Unsplash

On 28 June 2019, the National Crime Records Bureau (NCRB) opened bids and invited turnkey solution providers to implement a centralized Automated Facial Recognition System, or AFRS, in India. As the name suggests, AFRS is a facial recognition system proposed by the Indian Ministry of Home Affairs, geared towards modernizing the police force and identifying and tracking criminals using Facial Recognition Technology, or FRT.

The technology draws on databases of photos collected from criminal records, CCTV cameras, newspapers and other media, driver's licenses and government-issued IDs to gather facial data. FRT maps facial features and the geometry of the face from these images, and the software then creates a "facial signature" based on the information collected. Each facial signature is represented mathematically and subsequently compared against a database of known faces.
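To make the matching step concrete, here is a minimal, hypothetical sketch of how such a comparison might work: each face is reduced to an embedding vector (the "facial signature"), and a probe signature is compared against a database by cosine similarity. The embedding model, the 128-dimensional vector size and the threshold are illustrative assumptions, not details of AFRS; random vectors stand in for real signatures.

```python
# Sketch of facial-signature matching: compare a probe embedding against a
# database of known embeddings by cosine similarity. Purely illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two facial signatures, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, database: dict[str, np.ndarray],
               threshold: float = 0.6) -> str | None:
    """Return the identity most similar to the probe, or None if no
    score clears the (system-chosen) threshold."""
    scores = {name: cosine_similarity(probe, sig) for name, sig in database.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None

# Stand-in data: 128-dimensional signatures, a size many systems use.
rng = np.random.default_rng(0)
database = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
probe = database["record_42"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(best_match(probe, database))  # -> "record_42"
```

The threshold is the crucial policy lever: set it low and the system floods investigators with false matches; set it high and it misses genuine ones.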

This article explores the implications of implementing Automated Facial Recognition technology in India.

Facial recognition software has become widely popular in the past decade. Several countries have been trying to establish efficient facial recognition systems for tackling crime and building criminal tracking systems. Although the technology offers a few potential benefits, these seem insignificant when weighed against the serious concerns about people's privacy and safety that it raises.

Images of every person captured by CCTV cameras and other sources will be regarded as images of potential criminals and will be matched by the FRT against the Crime and Criminal Tracking Networks and Systems (CCTNS) database. This implies that all of us will be treated as potential criminals whenever we walk past a CCTV camera. As a consequence, the presumption of "innocent until proven guilty" will be turned on its head.

It should come as no surprise that China has installed the largest centralized FRT system in the world. There, data can be collected and analyzed from the country's more than 200 million CCTV cameras, along with 20 million specialized facial recognition cameras that continuously collect data for analysis. China currently uses these systems to track and manipulate the behaviour of ethnic Uyghur minorities in the camps set up in the Xinjiang region. FRT was also used during the Hong Kong democracy protests to profile and identify protestors. These steps raised concerns worldwide about the erosion of freedom of expression, the right to privacy and basic dignity.

It is very likely that Indians will face the same consequences if AFRS is established across the country.

There are several underlying concerns about implementing AFRS.

Firstly, the system has proven inefficient in several instances. In August 2018, Delhi Police used a facial recognition system reported to have an accuracy rate of just 2%. The FRT software used by the UK's Metropolitan Police returned a staggering rate of more than 98% false positives. In another instance, the American Civil Liberties Union (ACLU) used Amazon's face recognition software, "Rekognition", to compare images of members of the US Congress against a database of criminal mugshots. To Amazon's embarrassment, the results included 28 incorrect matches.

Further evidence of inefficiency came from an experiment by McAfee. Its researchers used CycleGAN, an image-to-image translation algorithm adept at morphing photographs: it can, for example, turn horses into zebras and paintings into photographs. McAfee used the software to mislead facial recognition algorithms. The team fed 1,500 photos of two team members into CycleGAN, which morphed them into one another, and kept feeding the resulting images into different facial recognition algorithms to see whom they recognized. After hundreds of such images, CycleGAN eventually generated a fake image that looked like person "A" to the naked eye but tricked the FRT into identifying it as person "B". Owing to these unsatisfactory results, the researchers expressed concern about the inefficiency of FRTs. In fact, mere eye makeup can fool an FRT into allowing a person on a no-fly list to board a flight. This pattern of inefficiency has been noticed worldwide.
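The search loop at the heart of the McAfee experiment can be sketched roughly as follows. This is a simplified, hypothetical reconstruction: the real study used CycleGAN, a trained generative network, to do the morphing, whereas a naive pixel blend stands in here, and `matcher_identity` is a placeholder for whichever recognition system is being probed.

```python
# Hedged sketch of the morph-and-test attack loop: blend images of person A
# toward person B until the face matcher flips its answer to "B" while the
# image still looks like A. A linear pixel blend stands in for CycleGAN.
import numpy as np

def blend(img_a: np.ndarray, img_b: np.ndarray, t: float) -> np.ndarray:
    """Naive stand-in for CycleGAN morphing: linear pixel interpolation."""
    return (1.0 - t) * img_a + t * img_b

def find_adversarial_morph(img_a, img_b, matcher_identity, steps=100):
    """Scan blend levels from A toward B; return the first morph that the
    matcher labels as "B", i.e. the smallest blend that flips the matcher."""
    for i in range(1, steps):
        t = i / steps
        candidate = blend(img_a, img_b, t)
        if matcher_identity(candidate) == "B":
            return candidate, t
    return None, None
```

The point of the experiment is that such a flip can occur while the image is still visually indistinguishable from person "A", which is exactly the failure mode a court or border agent would never catch.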

Secondly, facial recognition systems use machine learning technology. It is concerning to note that FRT has often reflected the biases prevalent in society, leading to frequent facial mismatches. A study by MIT shows that FRT routinely misidentifies people of color, women and young people: while the error rate was 8.1% for men, it was 20.6% for women, and 34% for women of color. Error rates of this magnitude, in a supervised laboratory study on a sample population, are unacceptable in themselves. In the above-mentioned American Civil Liberties Union study, the false matches were disproportionately African Americans and other people of color. In India, 55% of undertrial prisoners are Dalits, Adivasis or Muslims, although these three communities together make up just 39% of the total population (2011 Census). If AFRS is trained on these records, it would reproduce the same social prejudices against minority communities and generate inaccurate matches. The tender issued by the Ministry of Home Affairs gave no indication of how these biases would be eliminated, nor did it mention human-verifiable results. Using a system embedded with societal bias to replace biased human judgement defeats any claim of technological neutrality. Deploying FRT systems in law enforcement will be ineffective at best and disastrous at worst.
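The kind of disaggregated audit the MIT study performed can be sketched in a few lines: instead of reporting one aggregate accuracy figure, errors are broken out per demographic group, which is what exposes the disparity. The sample records below are invented purely to echo the reported pattern.

```python
# Sketch of a per-group error audit: a single overall accuracy number can
# hide large disparities between demographic groups. Data is invented to
# mirror the roughly 8% / 21% / 34% pattern the MIT study reported.
from collections import defaultdict

def error_rate_by_group(results: list[tuple[str, bool]]) -> dict[str, float]:
    """results: (group label, was the face misidentified?) pairs."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, misidentified in results:
        totals[group] += 1
        errors[group] += misidentified
    return {g: errors[g] / totals[g] for g in totals}

sample = ([("men", False)] * 92 + [("men", True)] * 8
          + [("women", False)] * 79 + [("women", True)] * 21
          + [("women of color", False)] * 66 + [("women of color", True)] * 34)
print(error_rate_by_group(sample))
# -> {'men': 0.08, 'women': 0.21, 'women of color': 0.34}
```

A tender that demanded this kind of disaggregated reporting would at least make the bias measurable; the AFRS tender, as noted above, asks for nothing of the sort.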

Thirdly, the concerns about invasion of privacy and mass surveillance haven't been addressed satisfactorily. Facial recognition makes data protection almost impossible: publicly available information is collected, but it is analyzed to a point of intimacy. India does not have a well-established data protection law, given that the Personal Data Protection Bill is yet to be enacted. Implementing AFRS in the absence of such a safeguard is a potential threat to our personal data. Moreover, police and other law enforcement agencies will have a great degree of discretion over our data, which can lead to mission creep. To add to the list of privacy concerns, the winning AFRS bidder will be largely responsible for maintaining the confidentiality and integrity of the stored data, with little specified beyond the established ISO standard. Additionally, the tender shows no preference for "Make in India" and raises no objection to foreign bidders, even those headquartered in China, a hub of data breaches. There is no governing system and no legal limits or restrictions on the technology, and no legal standard to ensure proportionate use and protection for those who interact with the system without consent. Furthermore, the tender does not define "criminal". Is a person a criminal when a charge sheet is filed against them? When they are arrested? When they are convicted by a court? Or whenever they are a suspect? Since the word "criminal" is not clearly defined in the tender, law enforcement agencies will ultimately be able to track far more people than necessary.

The notion that AFRS will lead to greater efficacy must be critically questioned. San Francisco imposed a total ban on police use of facial recognition in May 2019. Police departments in London are under pressure to stop using FRT after several instances of discrimination and inefficiency. India would do well to learn from the mistakes of other countries rather than repeating them.


Read More

February 25, 2021

Constructing Panopticon: Israeli Surveillance Technology and its Implications for the Palestinians

Jeremy Bentham, an English philosopher and social theorist, designed the "Panopticon" in the late 18th century. The Panopticon is an institutional building which Bentham described as "a new mode of obtaining power of mind over mind in a quantity hitherto without example". The structure's central observation tower, placed within a circle of prison cells, allows a watchman to monitor the building's inmates without the dwellers knowing whether or not they are being watched. Although it is physically impossible for a single watchman to observe all the occupants at once, the fact that the inmates cannot know when they are being watched motivates them to act as though they are being watched at all times, compelling them to regulate their own behaviour.

Michel Foucault, a French philosopher, used the panopticon as a metaphor to explore the relationship between systems of social control and people in a disciplinary situation. For Foucault, the real danger was not that individuals are repressed by the social order, but that oppression becomes possible when only certain people or groups control knowledge. Contemporary society uses technology to deploy panoptic structures "invisibly" throughout society.

This article gives an overview of the massive panopticon that is built and operated by Israel in Occupied Palestine.

Israel's unaccountable military rule over Palestinians in East Jerusalem, the West Bank and the Gaza Strip has kept them under constant surveillance and control. As per a report by Amitai Ziv in Haaretz, Israel's surveillance operation against Palestinians is (as of 2019) "among the largest of its kind in the world. It includes monitoring the media, social media and the population as a whole."

Among the various mechanisms of surveillance, the technological mechanisms of surveillance and control deployed or proposed in the Gaza Strip are the most empowering to Israel in terms of gathering "intelligence". These include the use of biometric identity cards; Israeli access to Palestinian census data; almost complete access to and control of the telecommunications infrastructure in the Gaza Strip; the ability to track individuals via cell phone; large surveillance zeppelins which monitor the entire electromagnetic spectrum and can usurp control of these networks from Palestinian operators (for instance, sending text messages to subscribers targeting different demographics); optical surveillance; facial recognition technology; and remote-controlled, robotic machine-gun towers guarding the border, capable of identifying a target and opening fire automatically, without human intervention.

In the context of the occupation, the biometric ID cards used for Israeli citizens represent the sharpest seepage of control technologies. For a long time, Israel has used a system of differentiated ID cards to distinguish between Jewish and non-Jewish people, between citizens and residents of Israel, and between citizens and residents of the occupied territories.

These ID cards also record the ethnic/religious affiliation of the holder, and the ID numbers themselves are coded to reflect this information. Whether one is Israeli or Palestinian, and whether one is a citizen or a resident, determines one's freedom to travel, ability to find a job, and even ability to get married and access social benefits. The Palestinians of East Jerusalem, which was annexed after the 1967 war, are considered "conditional residents", not citizens. According to a Human Rights Watch report, one resident of occupied East Jerusalem reported that the Israeli authorities refused to issue birth certificates to his five children, all born in Jerusalem. Other Jerusalem residents without residency status, in their testimonials, described being unable to legally work, obtain social welfare benefits, attend weddings and funerals, or visit gravely ill relatives abroad, for fear the Israeli authorities would refuse to allow them to return home.

Another significant technological mechanism is facial recognition, which has found its way into use by the Israeli police. This globally controversial and scientifically flawed technology is used by the police force in Israel to identify protestors, and is also deployed at airports and border crossings.

Israel has also ratcheted up its social media surveillance, especially of Facebook, Palestinians' preferred platform. In October 2015, an Israeli incursion at the Al-Aqsa Mosque angered many Palestinians, and the attacks that followed were largely orchestrated by teenagers who belonged to no military wing or Palestinian political faction. The Israeli government blamed social media for instigating the attacks, and military intelligence increased its monitoring of Palestinian social media accounts. Consequently, over 800 Palestinians were arrested for their posts on social media, particularly Facebook. It was later revealed that these arrests were the result of a policing system which uses algorithms to build profiles of supposed Palestinian attackers. This system monitors thousands of Palestinian Facebook accounts, sifting for words like shaheed (martyr), Zionist state, Al Quds (Jerusalem), or Al Aqsa. The algorithm then identifies a "suspect" based on a 'prediction' of violence, and these targets are marked as suspicious and become potential targets for arrest on the grounds of "incitement to violence". The term incitement covers all types of resistance to Israeli practices. The Israeli Army issued Military Order 1651 in 2010, according to which anyone who "attempts, orally or otherwise, to influence public opinion in the West Bank area in a manner which may harm public peace or public order" or "publishes words of praise, sympathy or support for a hostile organization, its actions or objectives" will serve a jail term of 10 years; the order defines this as "incitement". One notable instance has been the poetry of Dareen Tatour, a Palestinian citizen of Israel, who expressed her call to "resist" the occupiers in a poem she posted online in October 2015. The video had fewer than 300 views, yet it resulted in nearly three years of house arrest and five months of imprisonment. The Israeli government charged Tatour with inciting violence and terrorism even though her poem was a call for non-violent resistance. The incident is a classic demonstration of how Israel uses vague terminology to criminalize online activity when it serves its discriminatory interests.
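To see how blunt such keyword sifting is, consider this minimal, hypothetical sketch: any post containing a watchlisted term is flagged regardless of context, so a prayer announcement is treated the same as a threat. The watchlist terms come from the reporting above; the example posts are invented.

```python
# Sketch of context-blind keyword flagging: a post is marked "suspicious"
# if it merely contains a watchlisted term, whatever the intent.
WATCHLIST = {"shaheed", "zionist state", "al quds", "al aqsa"}

def flag_post(text: str) -> list[str]:
    """Return the watchlist terms found in a post (case-insensitive)."""
    lowered = text.lower()
    return [term for term in WATCHLIST if term in lowered]

# An ordinary announcement and a family remembrance are flagged
# just as a genuine threat would be:
print(flag_post("Friday prayers will be held at Al Aqsa this week"))
print(flag_post("My grandfather is remembered as a shaheed of 1948"))
```

Real systems layer prediction models on top of such matching, but the underlying problem is the same: the signal being searched for is ordinary language, so the net inevitably catches ordinary people.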

Israel's military-industrial complex is a profound enabler of the digital surveillance of Palestinians. The nation not only implements surveillance and control but also manufactures and exports a massive amount of military and cybersecurity technology. A 2016 report by Privacy International, an NGO that investigates surveillance by governments and companies, stated that Israel has about 27 surveillance companies, the highest number per capita of any country in the world.

The Guardian collected testimonies from people who had worked in the Israeli Intelligence Corps to understand the big-brother surveillance of the Palestinians. One of the testimonies revealed that ordinary and even completely innocent people were under surveillance. The witness stated: “As a soldier in Unit 8200, I collected information on people accused of either attacking Israelis, trying to attack Israelis, desiring to harm Israelis, and considering attacking Israelis. I also collected information on people who were completely innocent, and whose only crime was that they interested the Israeli security system for various reasons. For reasons they had absolutely no way of knowing. All Palestinians are exposed to non-stop monitoring without any legal protection. Junior soldiers can decide when someone is a target for the collection of information. There is no procedure in place to determine whether the violation of the individual’s rights is necessarily justifiable. The notion of rights for Palestinians does not exist at all. Not even as an idea to be disregarded.”

Another testimony revealed that the data collected often had little to do with security needs: “Throughout my service, I discovered that many Israeli initiatives within the Palestinian arena are directed at things that are not related to intelligence. I worked a lot on gathering information on political issues. Some could be seen as related to objectives that serve security needs, such as the suppression of Hamas institutions, while others could not. Some were political objectives that did not even fall within the Israeli consensus, such as strengthening Israel’s stance at the expense of the Palestinian position. Such objectives do not serve the security system but rather agendas of certain politicians. One project in particular, was shocking to many of us as we were exposed to it. The information was almost directly transferred to political players and not to other sections of the security system. This made it clear to me that we were dealing with information that was hardly connected to security needs. We knew the detailed medical conditions of some of our targets, and our goals developed around them. I’m not sure what was done with this information. I felt bad knowing each of their precise problems, and that we would talk and laugh about this information freely. Or, for instance, that we knew exactly who was cheating on their wife, with whom, and how often.”

While hidden and unseen surveillance is prominent, Israel has also imposed explicit, panopticon-style surveillance and restrictions on Palestinians in numerous cases. In the village of Beit Ijza, northwest of Jerusalem, the Gharib family's house has been enclosed by a 6-meter-high fence, cutting the family off from their olive gardens and the rest of the village, after Israel claimed ownership of the surrounding land and built a West Bank settlement there. The house was built in 1979 on land the family says has belonged to them since as far back as the Ottoman era. “Ever since Israel occupied the West Bank, Jews have been offering my father to sell the house,” Gharib says. “They even brought him a suitcase of money. He refused.” Now their every move is filmed, as cameras have been set up on the bars of the fence. Along with the loss of privacy, the internalized omniscience of the panopticon prevents the Gharib family from taking radical steps to protect their rights. In Israeli military language this is called an “indicative fence”, and it is also equipped with sensors. When the fence was built, the family had to negotiate by phone with the police at the nearby Atarot industrial zone every time they wanted to go out or come in, or they had to get the Red Cross to help. “Sometimes we waited for several hours for them to come and open it,” Gharib said.

Constant surveillance, in physical as well as digital space, is a critical violation of human rights. While the case of the Palestinians is unique given the Israeli military occupation, the fight for their rights is global. World leaders, governments, civil society, social media giants and all internet users have an essential role in the battle for a surveillance- and censorship-free state.
