“Sleepwalking into a dystopian nightmare”: experts warn of AI facial recognition sector growth

Society is sleepwalking into a dystopian surveillance nightmare, according to technology experts and researchers speaking to Citywire Selector.

This is in response to the growth of the biometric and facial recognition sector, which has begun to have a major impact on human behavior and the communities it monitors.

Rune Steenberg, an anthropologist specializing in Xinjiang, Uyghurs, and economic anthropology, has witnessed first-hand the way in which mass surveillance can affect people’s behavior and daily habits.

Describing the ubiquity of the facial recognition and surveillance sector as a ‘global phenomenon’, the Berlin-based anthropologist said potential malignant uses have not yet been fully grasped.

“The surveillance technology has changed the way people think,” added Steenberg, who said that the new way to avoid surveillance in Xinjiang is to always be surveilled.

“Always stay in the camera and microphone grid, but just make sure not to do anything that could make someone flag or report you to the authorities.”

Having witnessed how the roll-out of this technology changed his Uyghur friends’ behavior, he warned this could soon become a lived reality for those in the West unless lawmakers enact stricter laws curbing its widespread use.

“We are sleepwalking into a dystopian nightmare, while watching it happen right in front of us. This attitude that ‘it could never happen here’ is worrying.”

Steenberg’s warning comes from personal experience. In 2015, while living in Xinjiang, he was told that one of his Uyghur friends had gone missing, an episode that has stayed with him ever since.

“One of my Uyghur friends sat me down one day to tell me that our friend was missing. But before he spoke about it, he put our phones away in a different room and ran the water faucet.

“You could see people realizing they were being monitored quite early on, before the system exploded in around 2017 to 2018. People were becoming paranoid as they realized people they knew were disappearing. All of a sudden police were showing up at people’s doors knowing things that they should not.”

Steenberg said he has not heard from his friend since he disappeared.

Global phenomenon

His warning comes after the United Nations released a report on 31 August 2022 focused on the large-scale ‘arbitrary’ detention of Uyghurs in Xinjiang, in western China.

It concluded that Beijing’s actions could constitute ‘crimes against humanity’, the scale of which, Steenberg added, was partly enabled by widespread surveillance technology.

“It is not just happening in China now, it could come to us tomorrow, as it is happening globally in different ways and to different degrees all over the world.

“We are looking at a global phenomenon, both in the sense of labor exploitation, as well as surveillance. For instance, the surveillance in western working spaces is highly developed, and maybe even more advanced than in China. Less police-like, but more technologically advanced,” said Steenberg.

While China’s use of surveillance is extreme, it is not an outlier in adopting this technology.

This is shown in a 2019 report from the Carnegie Endowment for International Peace, which highlighted that AI-enabled surveillance technology was being used in at least 75 of the 176 countries it studied.

China is the biggest supplier of such technology, selling to 63 countries, while US companies sold to 32 countries, according to a Financial Times report earlier this year.

Asset managers split

Although some countries have taken action to curtail surveillance companies by placing them on blacklists, asset management companies remain split on the issue.

Some vow never to invest in companies that pose threats to human rights, while others believe engagement is the best route.

Technology ethics researcher Stephanie Hare, author of Technology Is Not Neutral: A Short Guide to Technology Ethics, pointed to a 2019 example of surveillance technology infringing workers’ rights.

In this case, 14 people who worked for Uber Eats told Wired they had been wrongfully suspended or fired because they had failed the company’s ‘Real Time ID Check’.

This is a system that uses 1:1 facial verification technology to check that drivers are not subcontracting shifts to people who have not passed background checks or are otherwise not entitled to work.
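In broad terms, a 1:1 check of this kind compares a numerical ‘embedding’ of the live selfie against the single enrolled template for that account, and accepts the match only if the two are sufficiently similar. The sketch below illustrates the idea using cosine similarity; the similarity measure, the threshold value, and the function names are illustrative assumptions, not details of Uber’s actual system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(enrolled_embedding, live_embedding, threshold=0.7):
    """1:1 verification: does the live capture match the enrolled face?

    In a real system, the embeddings would come from a face-encoding
    model; the 0.7 threshold here is a placeholder, not a vendor value.
    A stricter threshold means fewer impostors pass, but also more
    legitimate workers wrongly rejected -- the failure mode at issue
    in the Wired report.
    """
    return cosine_similarity(enrolled_embedding, live_embedding) >= threshold
```

The trade-off lives entirely in that threshold: tightening it to reduce fraud mechanically increases the rate of false rejections, which is why such systems are contested when a failed check can cost someone their job.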

Hare, who has presented her findings on the dangers of facial recognition technology to the UK’s All-Party Parliamentary Group on Artificial Intelligence, believes governments need to urgently rethink the continued implementation of surveillance systems.

In her book, Hare breaks down different types of facial recognition uses and ranks them according to the risk they pose to civil liberties and privacy.

Unlike Steenberg, Hare said there is a more widespread understanding of the dangers of surveillance technology among governments and the public, adding that one of the biggest obstacles seems to be apathy.

“Governments know how serious this is, the public has been told repeatedly about the dangers of this type of surveillance through endless articles on the topic. But when you have societies battling so many serious political and social issues, people find it hard to really care about this stuff.

“Just look at Australia, which is developing a super creepy surveillance system called ‘The Capability’. Why aren’t we looking at this? Is it because they are so far away, and we don’t care? We need to be really watching the development of this sector.”

US blacklists AI surveillance companies

In June 2021, President Biden signed an executive order barring Americans from investing in more than 40 companies, including those which enable human rights abuses against Muslim Uyghurs in Xinjiang.

Among these barred firms is Dahua Technology, a partially state-owned, publicly traded company that sells video surveillance products and services.

Other banned biometric surveillance entities include: Cloudwalk Technology, Dawning Information Industry, Leon Technology Company, Megvii Technology, Netposa Technologies, SZ DJI Technology, Xiamen Meiya Pico Information and Yitu.

It is for these reasons that Matthews Asia fund manager Vivek Tanneeru has never invested in companies such as Alibaba and Tencent. Citywire AAA-rated Tanneeru, who manages the firm’s Asia ESG emerging market funds, said the firm’s policy of not investing in these companies has remained consistent.

“We run an Article 9 fund, and so we believe that not investing in these companies is the right thing to do,” said Tanneeru.

On the other hand, one asset manager that supports engagement over exclusion is Fidelity International, which confirmed holding a ‘small amount of holdings’ in Dahua through its Fidelity Pacific and Fidelity Global Multi Asset Income funds.

A Fidelity spokesperson told Citywire Selector: “We have a very small holding in Dahua. Fidelity has had a long-standing and substantial presence in mainland China, with over a thousand employees located in the country.

“We actively engage with our investee companies, with this being our preferred path for sustainable investment, rather than exclusion. We have found that our Chinese companies are very receptive to our active engagement and have shifted policies accordingly.

“Where companies fail to improve against agreed goals or develop a pattern of deteriorating sustainability outcomes, we will review and potentially divest our holding.”

Where next?

While many say there is much more work to be done in raising awareness of the risks attached to the sector, activists and ESG analysts have not stood by in silence, raising civil rights concerns about the adoption of invasive biometric surveillance technology.

This includes Candriam, which in 2021 launched a collaborative engagement project with 51 firms on surveillance technology and published a white paper on the risks attached to surveillance and facial recognition technology.

A 2021 report by the asset manager highlighted the risks posed by facial recognition technology and warned of its potential threats to human rights.

The report stated that law enforcement agencies are already deploying facial recognition on a massive scale globally, with an estimated one billion surveillance cameras operational by the end of 2021.

“Today, the citizens of Detroit, London, Monaco, Moscow, Beijing and elsewhere are walking around oblivious that their faces are being scanned by police-operated facial recognition systems,” the report stated.

US policy analyst Isedua Oribhabor, who works for Access Now, also warned in the same report of the risks to human privacy and rights posed by the gender and racial biases baked into these systems. Echoing Steenberg and Hare, the report called for lawmakers to enact tighter regulations.

“It is imperative to examine these risks and to draw red lines around where the use of this technology is incompatible with a respect for human rights.”