Ahead of the 2024 Olympic Games, France embraces AI video surveillance

NICE, France — If you visit this gateway to the glamorous French Riviera, know one thing.

This sun-drenched Mediterranean resort, the scene of a horrific terrorist attack in 2016, has become what its mayor dubs “the most monitored city in France” — and a laboratory for a global revolution in law enforcement, powered by artificial intelligence.

A total of 4,200 cameras are deployed in public spaces, or one for every 81 residents. These are not the CCTV cameras of old. Some are equipped with thermal imaging and other sensors. And they are connected to a command center and AI technology that can flag minor infractions — when someone parks illegally or enters a public park after hours — as well as potentially suspicious activity, such as someone trying to access a school building.

The city has trialed facial recognition software so accurate that it can tell the difference between identical twins.

Another system tested this year on Nice’s iconic Promenade des Anglais used algorithms capable of flagging irregular vehicle and pedestrian movements in real time — something officials here say could have rapidly alerted police to the assailant who drove a 19-ton truck into a crowd on the seafront promenade in 2016, killing 86 people and wounding hundreds more.
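As a rough, hedged illustration of what flagging “irregular vehicle movements” can mean in practice, the sketch below estimates a tracked object’s speed from successive positions and raises an alert when it exceeds a walking-pace threshold inside a pedestrian zone. The coordinate calibration, frame rate and 8 km/h limit are assumptions made for the example, not parameters of the system tested in Nice.

```python
# Illustrative sketch only: flag a tracked object moving at vehicle speed
# inside a pedestrian zone. Positions are assumed to be calibrated to meters;
# the 8 km/h threshold and 25 fps frame rate are arbitrary example values.
import math

def speeds_kmh(track, fps=25.0):
    """Estimate per-interval speed (km/h) from successive (x, y) positions in meters."""
    return [math.hypot(x1 - x0, y1 - y0) * fps * 3.6
            for (x0, y0), (x1, y1) in zip(track, track[1:])]

def flag_irregular(track, limit_kmh=8.0, fps=25.0):
    """Return the intervals at which the tracked object exceeds the speed limit."""
    return [i for i, v in enumerate(speeds_kmh(track, fps)) if v > limit_kmh]

# Synthetic track covering ~1 meter per frame (about 90 km/h at 25 fps):
track = [(i * 1.0, 0.0) for i in range(10)]
print(flag_irregular(track))  # every interval is flagged
```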

“There are people who have declared war on us, and we cannot win the war using the weapons of peace,” Mayor Christian Estrosi said. “Artificial Intelligence is the most protective weapon we have.”

France more broadly is moving to deploy sweeping algorithmic video surveillance as it prepares to host the 2024 Olympics, including technology that can detect sudden crowd movements, abandoned objects and someone lying on the ground. Such technology, officials say, could be key to thwarting an attack like the bombing at the 1996 Summer Olympics in Atlanta.
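To make concrete what rule-based flagging of this kind can look like, here is a minimal sketch that applies two assumed rules to hypothetical detector output: an object that stays put with no one nearby is marked as possibly abandoned, and a person whose silhouette is much wider than it is tall is marked as possibly on the ground. The detection format, labels and thresholds are illustrative assumptions, not the configuration of the software procured for the Games.

```python
# Minimal sketch of rule-based event flagging over per-frame detections.
# Labels, thresholds and the detection format are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g. "person" or "bag", as assigned by an upstream detector
    track_id: int   # identity assigned by an upstream tracker
    x: float        # bounding-box center in the camera frame (pixels)
    y: float
    width: float
    height: float

def flag_events(frames):
    """Scan a short clip (a list of per-frame detection lists) and return alerts."""
    alerts = []

    # Rule 1: a person whose box is much wider than tall may be lying down.
    for t, frame in enumerate(frames):
        for d in frame:
            if d.label == "person" and d.width > 1.5 * d.height:
                alerts.append(f"frame {t}: person possibly on the ground")

    # Rule 2: a bag that has not moved across the clip, with no person
    # nearby in the last frame, may have been abandoned.
    start = {d.track_id: (d.x, d.y) for d in frames[0] if d.label == "bag"}
    for d in frames[-1]:
        if d.label != "bag" or d.track_id not in start:
            continue
        x0, y0 = start[d.track_id]
        stationary = abs(d.x - x0) + abs(d.y - y0) < 10
        near_person = any(p.label == "person" and
                          abs(p.x - d.x) + abs(p.y - d.y) < 200
                          for p in frames[-1])
        if stationary and not near_person:
            alerts.append(f"bag {d.track_id}: possibly abandoned")
    return alerts

# Tiny synthetic clip: a bag stays put while the person who carried it leaves.
clip = [
    [Detection("bag", 1, 300, 400, 40, 60), Detection("person", 2, 320, 380, 60, 160)],
    [Detection("bag", 1, 301, 401, 40, 60)],
]
print(flag_events(clip))  # -> ["bag 1: possibly abandoned"]
```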

But this embrace of futuristic — some say Orwellian — policing is running into challenges in a region of the world that is seeking to take the lead on AI regulation and is home to some of the strongest digital privacy protections.

“They are putting us all under the all-seeing eye of AI,” said Félix Tréguer, co-founder of La Quadrature du Net, a French digital civil rights group.

People traveling in France aren’t about to see facial recognition cameras in their hotel rooms, like the ones Chinese authorities installed ahead of the Asian Games in September to confirm the identity of hotel guests. But Western governments are increasingly using AI as a crime-fighting tool.

In the United States, police have been partnering with companies such as New York-based Clearview AI, which has developed a facial recognition algorithm and amassed a database of more than 20 billion photos pulled from the internet. The system helped identify rioters at the U.S. Capitol attack on Jan. 6, 2021. It has also run up against privacy lawsuits and concerns about racial profiling and false arrests.

In Britain — an early adopter of CCTV surveillance — the government has encouraged police chiefs to double the number of retrospective facial recognition searches they conduct and to consider live facial recognition to find people on police watch lists in places like soccer stadiums.

And in continental Europe, France is far from alone in embracing AI for security. Along the waterways of Venice, for instance, a preexisting camera network pipes feeds into a control center now powered by AI, with the ability to recognize boat shapes and sizes — even in water-refracted light — to monitor speed and safety. Algorithms are also being used to analyze data from sensors in the city’s most packed tourist centers, with the capacity to detect the sort of sudden crowd movements that could indicate an attack. In one instance last year, Venice police used AI to scan captured footage and track a particular jacket to locate and apprehend a group of men allegedly involved in a stabbing.
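One common way to do that kind of appearance matching, sketched below under stated assumptions, is to compare color histograms: a reference crop of the garment is summarized as a hue/saturation distribution and matched against crops taken from other footage. The file names and similarity threshold are hypothetical; this is not the tool used by the Venice police.

```python
# Illustrative sketch: compare a reference crop of a garment against other
# image crops by HSV color histogram. File names and the 0.7 threshold are
# assumptions, not details of the Venice investigation.
import cv2  # pip install opencv-python

def hsv_histogram(image_bgr):
    """Normalized 2D hue/saturation histogram of an image crop."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def looks_like(reference_bgr, candidate_bgr, threshold=0.7):
    """True if the candidate crop's color distribution resembles the reference."""
    score = cv2.compareHist(
        hsv_histogram(reference_bgr), hsv_histogram(candidate_bgr),
        cv2.HISTCMP_CORREL,
    )
    return score >= threshold

# Usage (assumed file names):
# ref = cv2.imread("jacket_reference.png")
# crop = cv2.imread("candidate_crop.png")
# print(looks_like(ref, crop))
```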

Regulating AI surveillance

More than anywhere else in the West, the European Union has sought to control social media and enforce privacy in the digital age, enacting landmark regulations that have led to investigations, fines and changes to the operations of the dominant U.S. tech companies, including Google and Meta. Earlier this month, the E.U. reached a historic agreement on a new AI Act that will classify risk, enforce transparency and financially penalize tech companies for noncompliance.

But even as the E.U. seeks to regulate AI’s use in private hands — and ban the riskiest systems — European governments are looking to safeguard their own rights to use it. The AI Act almost fell apart amid demands, led by France, to carve out exceptions for AI use in law enforcement.

The compromise deal will require judicial approval for biometric identification. Facial recognition technology can be applied to recorded video only to identify people convicted or suspected of having committed a serious crime. Real-time surveillance could occur in limited circumstances, such as tracking a kidnap victim or terrorism suspect. Searches could run afoul of the AI Act if targets are seen to be categorized by political affiliation, ethnicity or gender identity.

“If they want to look for someone wearing a red shirt, they can do it,” said Brando Benifei, one of two lawmakers running lead on the act in the European Parliament. “But if it’s for categorization, they cannot get biometric data from all people who are Black just because they are looking for a Black terrorist, or search for someone wearing a [Palestinian] kaffiyeh for political reasons.”

The stick-figure solution

Many European countries are devising ways to race ahead with AI policing while sidestepping rules prohibiting mass use of biometric data and facial recognition.

In privacy-conscious Germany, where memories linger of intrusions by Nazi- and Cold War-era secret police, authorities tested an AI algorithm developed by the Fraunhofer Institute in one of Hamburg’s highest-crime zones. The system detects and flags to police a range of acts, including kicking, hitting, aggressive postures, defensive postures, lying down, pushing and running. But the images shown to police appear as matchstick characters — anonymizing people captured on camera.
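Conceptually, that anonymization step can be as simple as drawing estimated joint positions on a blank canvas and discarding the original frame, as in the hedged sketch below. The keypoint list and skeleton connections are assumptions made for illustration; this is not the Fraunhofer Institute’s software.

```python
# Minimal sketch of stick-figure anonymization: render only pose keypoints
# on a blank canvas instead of the raw camera frame. The keypoint format and
# skeleton connections are assumptions.
import numpy as np
import cv2  # pip install opencv-python

# Hypothetical output of an upstream pose estimator: (x, y) pixel coordinates
# for a handful of named joints of one detected person.
KEYPOINTS = {
    "head": (320, 100), "neck": (320, 150),
    "l_hand": (250, 230), "r_hand": (390, 230),
    "hip": (320, 280),
    "l_foot": (290, 420), "r_foot": (350, 420),
}

# Which joints to connect with lines to form the stick figure.
SKELETON = [
    ("head", "neck"), ("neck", "l_hand"), ("neck", "r_hand"),
    ("neck", "hip"), ("hip", "l_foot"), ("hip", "r_foot"),
]

def draw_stick_figure(keypoints, size=(480, 640)):
    """Draw the skeleton on a black canvas; the original frame is discarded."""
    canvas = np.zeros((*size, 3), dtype=np.uint8)
    for a, b in SKELETON:
        cv2.line(canvas, keypoints[a], keypoints[b], (255, 255, 255), 2)
    cv2.circle(canvas, keypoints["head"], 20, (255, 255, 255), 2)
    return canvas

anonymized = draw_stick_figure(KEYPOINTS)
cv2.imwrite("stick_figure.png", anonymized)
```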

The software “is not interested in gender, skin color and any other personal traits,” said Nikolai Kinne, head of the Hamburg Police intelligent surveillance project.

Activists argue that even that kind of observation can be intrusive. People might act more self-consciously, or avoid certain areas altogether, if they know cameras might flag them.

It “pushes and pressures people into behaving in a way that they think would be expected of them,” said Konstantin Macher of Digitalcourage, a German digital rights group. “I think this is taking away the beauty of human behavior and incentivizing us to act in routine, robotic ways.”

Pushing for expanded AI surveillance in France

France is betting big on AI-assisted security for the Olympics. Hundreds of smart cameras in Paris and beyond will monitor the crowds. A new law continues to ban facial recognition in most cases, but it expands legal applications of algorithmic video surveillance before, during and for at least six months after the Games.

Many observers see it as a test run that could be extended indefinitely if the technology is accepted by the public and seen to work. A poll conducted around the time of the law’s adoption showed overwhelming support, with 89 percent of respondents favoring smart cameras in stadiums, 81 percent in public transit and 74 percent on public roads.

“We know that this fight will be lost,” said Paul Cassia, a law professor and activist with the Paris-based Organization of Defense of Constitutional Freedoms. “We all use our smartphones. There are now cameras everywhere. People ask for these kind of measures, and if anything happens, people say, ‘It’s your fault, you didn’t do enough to protect us.’”

Estrosi, the mayor, argues that more latitude is essential. His city was allowed to experiment with facial recognition during Carnival in 2019, but the rules were so strict the test could only apply to volunteers who walked through a specific area. Another experiment, involving biometric portals at a local high school, was deemed overly intrusive by France’s data protection agency.

The mayor is advocating for broader use of AI. Among other things, the city is seeking to deploy experimental technology on buses and trams that would be able to detect a reddening of a passenger’s face, directing officers to a possible health emergency or other source of stress.

About 18 percent of all police cases, the city says, are now solved with the aid of its smart cameras. Estrosi insists that number could be vastly higher. “AI is everywhere except where it would really be useful to us,” he said. “We need to use facial recognition to ensure the security of my city. The software is ready and exists.”

Virgile Demoustier in Paris, Kate Brady in Berlin and Stefano Pitrelli in Rome contributed to this report.
