AI-powered cameras spark privacy concerns as usage grows

A new wave of AI-enhanced surveillance is spreading across the US and UK, as private companies and government agencies deploy AI-powered cameras to analyze crowds, detect potential crimes, and even monitor people’s emotional states in public spaces.

In the UK, rail infrastructure body Network Rail recently tested AI cameras in eight train stations, including major hubs like London’s Waterloo and Euston stations, as well as Manchester Piccadilly. 

Documents obtained by civil liberties group Big Brother Watch reveal the cameras aimed to detect trespassing on tracks, overcrowding on platforms, “antisocial behavior” like skateboarding and smoking, and potential bike theft.

Most concerning of all, the AI system, powered by Amazon’s Rekognition software, sought to analyze people’s age, gender, and emotions such as happiness, sadness, and anger as they passed virtual “tripwires” near ticket barriers.

The Network Rail report, some of which is redacted, says there was “one camera at each station (generally the gateline camera), where a snapshot was taken every second whenever people were crossing the tripwire and sent for analysis by AWS Rekognition.”

It then says, “Potentially, the customer emotion metric could be used to measure satisfaction,” and “This data could be utilised to maximise advertising and retail revenue. However, this was hard to quantify as NR Properties were never successfully engaged.”

Amazon Rekognition, a computer vision (CV) machine learning platform, can indeed predict emotions from images of faces. However, this was just a pilot test, and its effectiveness in this setting is unclear.
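
Network Rail’s pipeline code isn’t public, but Rekognition’s DetectFaces API does expose exactly these attributes. A minimal sketch of how a per-second gateline snapshot might be analyzed, assuming the image is sent as raw bytes via boto3 (the file name and AWS region here are placeholders):

```python
import boto3

# Placeholder region and file name; the report doesn't disclose these details.
client = boto3.client("rekognition", region_name="eu-west-2")

with open("gateline_snapshot.jpg", "rb") as f:
    image_bytes = f.read()

# Attributes=["ALL"] asks Rekognition to return age range, gender, and
# per-emotion confidence scores for every face found in the snapshot.
response = client.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    age = face["AgeRange"]                      # e.g. {'Low': 25, 'High': 35}
    gender = face["Gender"]["Value"]            # 'Male' or 'Female'
    # Emotions come back as a list of confidence scores, not a single verdict.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(age, gender, top_emotion["Type"], round(top_emotion["Confidence"], 1))
```

Notably, Rekognition returns a confidence score for each emotion rather than a single label, which is one reason critics question how meaningful a “customer emotion metric” built on top of it would be.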

The report says that when the cameras were used to count people crossing railway gates, “accuracy across gate lines was uniformly poor, averaging approximately 50% to 60% accuracy compared to manual counting,” though it expects this to improve.
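
The report doesn’t spell out how that figure was calculated. One plausible reading, sketched below with invented per-gate counts since the raw tallies aren’t published, is that automated counts were scored against staff tallies:

```python
# Invented per-gate-line counts; the report's raw figures are not published.
manual = {"gate_A": 120, "gate_B": 95, "gate_C": 210}  # staff tallies
auto = {"gate_A": 70, "gate_B": 55, "gate_C": 116}     # AI camera counts

accuracies = []
for gate, truth in manual.items():
    # Fraction of the manual count the camera matched at this gate line.
    accuracy = 1 - abs(auto[gate] - truth) / truth
    accuracies.append(accuracy)
    print(f"{gate}: {accuracy:.0%}")

print(f"average: {sum(accuracies) / len(accuracies):.0%}")  # ~57%, in the reported band
```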

“The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step,” said Jake Hurfurt, head of research at Big Brother Watch.

The use of facial recognition technology by law enforcement has also raised concerns. Not long ago, London’s Metropolitan Police used live facial recognition cameras to identify and arrest 17 individuals in the city’s Croydon and Tooting areas. 

The technology compares live camera feeds against a watchlist of persons with outstanding warrants as part of “precision policing.”
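
The Met hasn’t disclosed its matching pipeline, but live facial recognition systems typically embed each detected face as a numeric vector and score it against watchlist embeddings using a similarity threshold. A generic sketch of that pattern, with random vectors standing in for a trained face encoder’s output and an arbitrary threshold:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watchlist: IDs mapped to reference embeddings. A real system
# would compute these with a trained face-encoding model, not random vectors.
rng = np.random.default_rng(0)
watchlist = {
    "warrant_0141": rng.normal(size=128),
    "warrant_0299": rng.normal(size=128),
}

def match_face(live_embedding: np.ndarray, threshold: float = 0.6):
    """Return (person_id, score) for the best hit above the threshold, else None."""
    best_id, best_score = None, threshold
    for person_id, reference in watchlist.items():
        score = cosine_similarity(live_embedding, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_id else None
```

The threshold is the key policy lever: lowering it catches more genuine matches but also produces more misidentifications, the trade-off behind the accuracy figures discussed below.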

In February, the Met used the system to make 42 arrests, though it’s unclear how many led to formal charges. 

Your emotions in a database

Critics have vehemently argued that facial recognition threatens civil liberties. 

Members of Parliament in the UK urged police to reconsider how they deploy the technology after suggestions that forces could access a database of 45 million passport photos to better train these surveillance models.

Experts also question facial recognition’s accuracy and legal basis, with Big Brother Watch data showing that 89% of UK police facial recognition matches are misidentifications. 
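
To see what that figure implies, a quick bit of arithmetic helps; the absolute numbers below are invented purely for illustration, and only the 89% rate comes from Big Brother Watch:

```python
# Illustrative only: suppose 100 facial recognition alerts were raised.
false_matches = 89                 # misidentifications, per the 89% rate
true_matches = 100 - false_matches

precision = true_matches / (true_matches + false_matches)
print(f"precision: {precision:.0%}")  # only 11% of alerts flag the right person
```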

Met Police officials attempted to allay privacy fears, stating that non-matching images are rapidly deleted and that the facial recognition system has been independently audited. 

However, talk can be cheap when it comes to cutting-edge AI systems with the potential to genuinely impact people’s lives.

Predictive policing programs in the US, for one, have generally failed to achieve their objectives while causing collateral damage in the form of police harassment and wrongful imprisonment.

Concerns about bias and inaccuracy in facial recognition systems, especially for people of color, have been a major point of contention. 

Studies have shown the technology can be significantly less accurate on darker-skinned faces, particularly those of Black women.

Policymakers will need to grapple with difficult questions about these powerful tools’ transparency, accountability, and regulation. 

Robust public debate and clear legal frameworks will be critical to ensuring that the benefits of AI in public safety and security are not outweighed by the risks to individual rights and democratic values. 

As technology races ahead, the time for that reckoning may be short.
