The Rise of Facial Recognition Technology

Facial recognition technology has become increasingly advanced in recent years and is now widely used for unlocking phones, making payments and carrying out security checks. We take a look at some of the potential benefits, as well as the concerns and dangers surrounding the widespread use of this technology.

(Featured image credit: WiredUK)

Smile-to-pay and social credit scores

While it’s become common worldwide to unlock your smartphone using your face, China is leading the way in other everyday uses of facial recognition technology. This is particularly apparent with its ‘smile-to-pay’ technology, which we mentioned briefly in our previous blog on past tech predictions for the year 2020.

Rather than paying with contactless technology on your bank card or your phone, it’s now common in China to pay using your face. This technology is also used in other identity checks, such as checking into hotels or visiting the hospital.

Smile to Pay (Image credit: CNBC)

However, beyond payments, facial recognition is also being used widely for surveillance in China, where it feeds into people’s social credit scores. Scores are adjusted by identifying ‘good’ and ‘bad’ behaviour, such as jaywalking or parking your bike in the wrong zone.

In one city, Suzhou in Anhui province, government officials took this a step further when they released pictures of seven people caught on surveillance cameras wearing their pyjamas in public. They published these online, along with the individuals’ names, ID card details and other information, calling it ‘uncivilised behaviour’. Officials initially defended their actions by arguing that they were entering a national ‘civilised city’ competition and that pyjamas were banned in public. They have since apologised.

Many are angry about the increased surveillance they are under in this part of the world. In Hong Kong, protestors were seen destroying facial recognition towers which they believed were being used by the Chinese authorities for surveillance. Towards the end of 2019, Hong Kong also announced a ban on face masks at protests, presumably to stop protestors from hiding their identities from the cameras, although the ban was later overruled.


Reverse image search

When it comes to the future of search, one of the developments we’re really excited about is visual image search, which allows you to find similar or related images on the web. This has a lot of potential for consumers shopping for fashion or even homeware, as they can take a picture of an existing item they like and find similar products.
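To make this a little more concrete, here’s a minimal sketch of how a visual search engine could rank catalogue images against a shopper’s photo. It assumes every image has already been converted into an embedding vector by some pretrained vision model; the random vectors, the find_similar helper and the product names are placeholders for illustration, not any real service’s API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_similar(query_embedding, catalogue, top_k=3):
    """Rank catalogue images by how similar their embeddings are to the query."""
    scores = [(name, cosine_similarity(query_embedding, emb))
              for name, emb in catalogue.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy demo: in a real system the embeddings would come from a pretrained
# vision model applied to product photos; random vectors stand in here.
rng = np.random.default_rng(0)
catalogue = {f"product_{i}.jpg": rng.normal(size=512) for i in range(100)}
query_photo_embedding = rng.normal(size=512)  # the shopper's snapshot
print(find_similar(query_photo_embedding, catalogue))
```

Swap the product catalogue for a database of photos scraped from the web and you have the basic mechanics of a reverse image search.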

Related blog: The Future of Search: Beyond Google?

However, it all starts to get a bit creepy when it comes to people’s faces and the ability to perform a ‘reverse image search’. If someone has a photo of your face, they could use it to pull up other photos of you that exist online.

Could employers use this in their hiring process to see if there are any inappropriate photos of you out there on the web? Would this be fair? It certainly brings up a lot of questions relating to our personal privacy and how the digital footprints we create could be used against us in the future.


The rise of the surveillance state

Facial recognition surveillance (Image credit: CPO Magazine)

One of the main applications of facial recognition technology is likely to be within the police force. In the UK, the Metropolitan Police have already announced their intention to trial facial recognition technology on London streets. The cameras will be in use for five to six hours at a time, with bespoke lists of suspects wanted for serious and violent crimes drawn up each time.

Those who support the use of police facial recognition technology say that it will help to prevent crime and keep the general public safe. It will enable the police to identify and locate suspects wanted for serious crimes more quickly, thereby helping to maintain public order. It also has the potential to identify vulnerable or missing persons, which many believe is a huge positive.


However, privacy campaigners have described it as a “serious threat to civil liberties in the UK”. Another concern is that it infringes on the right to peaceful protest: people identified as regularly attending protests may well have their faces stored in a police database and be monitored without their consent.

Furthermore, there are currently major concerns about the accuracy of this technology. When it was used at the UEFA Champions League Final in Wales in 2017, 92 per cent of matches were incorrect. And this inaccuracy could be even more extreme depending on the colour of your skin…


Is facial recognition technology racist?

This may seem like a bizarre question to ask, but bear with me while I explain.

The two commonly used types of facial recognition are ‘one-to-one’ and ‘one-to-many’. The first checks whether a photo of a person matches another photo of that same person, such as the one stored on their passport or phone. The second searches an existing database of many people for a match to a photo of a person.

In both types, it’s possible to have ‘false positives’, which is where the technology identifies a match where none exists.

You can imagine that mistakes in the first type have implications at border control, where a false positive could let someone through on another person’s passport, a clear security concern. The opposite error, a false negative, may just be really annoying for people who can’t unlock their smartphone!

For the second type, which is what would be used in police surveillance and investigation, false positives could have far more serious repercussions, as people could be falsely identified and accused of crimes they didn’t commit.
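As a rough illustration of the difference between the two, and of where false positives come from, here’s a minimal Python sketch. It assumes faces have already been turned into embedding vectors by some face recognition model; the random vectors, names and threshold values are made-up placeholders, not any real system’s settings.

```python
import numpy as np

def face_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between two face embeddings (smaller means more alike)."""
    return float(np.linalg.norm(a - b))

def one_to_one(probe, reference, threshold):
    """One-to-one: does the probe photo match this single claimed identity?"""
    return face_distance(probe, reference) < threshold

def one_to_many(probe, database, threshold):
    """One-to-many: which identities in the database fall within the threshold?
    Any wrong name in this list is a false positive."""
    return [name for name, embedding in database.items()
            if face_distance(probe, embedding) < threshold]

# Toy demo with random stand-in embeddings. The threshold is the crucial
# knob: loosen it and you catch more genuine matches, but in the
# one-to-many case you also flag more innocent people.
rng = np.random.default_rng(1)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # embedding of the photo being checked

print(one_to_one(probe, database["person_0"], threshold=10.0))  # verification
flagged = one_to_many(probe, database, threshold=15.0)          # identification
print(len(flagged))  # every name here is a false positive: the probe is a stranger
```

The numbers themselves are meaningless, but the shape of the problem is real: a watchlist search run against thousands of faces only has to be slightly too lenient before it starts flagging innocent people.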

A study by the US National Institute of Standards and Technology, which evaluated a wide range of facial recognition algorithms, found that for one-to-one matching there were higher rates of false positives for Asian, African American and Native American faces relative to images of Caucasian faces. Interestingly, for algorithms developed in Asian countries, there was no significant difference in false positive rates between Asian and Caucasian faces.

Worryingly, for one-to-many matching, the study found the highest false positive rates for African-American women, meaning this group is at the greatest risk of being falsely identified as a suspect if the police are using facial recognition technology.

Technology is not free from bias, because it’s designed by humans: it reflects the needs and assumptions of the people who develop it. This has been explored before in photography, where experts have shown how racial bias was built into the development of colour film. Similarly, facial recognition technology, which has largely been developed by white engineers and trained on datasets dominated by white faces, will most likely be better at recognising white faces.

There are major concerns that, as the technology in its current form is flawed and often misidentifies people of colour, it will lead to racially biased law enforcement. Some have referred to facial recognition technology as a form of ‘techno-racism’.


Is there a way to limit the controversial use of facial recognition technology?

If facial recognition technology is developed to remove race and gender bias and implemented with proper regulation, there could well be some key benefits for society in terms of identifying criminals and finding missing people. However, it’s all moving far too fast at the moment, so much so that the European Commission is considering a ban on the use of facial recognition in public areas for up to five years.

Plus, privacy activists are rightly concerned about how soon it will be possible for all of our personal information to be accessed without our consent, simply because a surveillance camera has recognised our face. It takes concerns over data privacy to a whole new level.

It’s important that governing bodies respond quickly to these technological advances so that we can use facial recognition technology wisely, rather than sleepwalking into a 1984-style dystopian surveillance state.


SB.