In the UK, a series of AI trials in which thousands of train passengers were unwittingly subjected to emotion-detecting software has raised profound privacy concerns. The technology, developed by Amazon and deployed at several major train stations, including London’s Euston and Waterloo and Manchester Piccadilly, used artificial intelligence to scan faces and estimate emotional state, age, and gender. Documents obtained by the civil liberties group Big Brother Watch through a freedom of information request revealed these practices, which could soon inform advertising strategies.

Over the past two years, these trials, managed by Network Rail, used “smart” CCTV technology and older cameras linked to cloud-based systems to monitor a range of activities: detecting trespassing on train tracks, managing crowd sizes on platforms, and flagging antisocial behavior such as shouting or smoking. The trials also monitored for potential bike theft and other safety-related incidents.

Data from these systems could be used to boost advertising revenue by gauging passenger satisfaction from emotional states, captured as individuals crossed virtual tripwires near ticket barriers. Despite the extensive use of these technologies, the efficacy and ethics of emotion recognition remain hotly debated. Critics, including AI researchers, argue that the technology is unreliable and have called for it to be banned, a position reinforced by warnings from the UK’s data regulator, the Information Commissioner’s Office, about the immaturity of emotion-analysis technologies.
