Draft legislation currently passing through France’s two parliamentary chambers, the Senate and the National Assembly, sets out a series of special measures the government is seeking to introduce for the Olympic and Paralympic Games to be held in Paris and other French sites in the summer of 2024.
The bill contains a total of 18 articles, which include relatively unremarkable propositions like the authorisation of genetic tests of athletes in anti-doping checks, the relaxing of restrictions on retail activity on Sundays, tougher fines for those found guilty of invading sporting events, and the use of body scanning of spectators entering the various sites of the games.
But its “article 7” is at the centre of growing controversy and is strongly opposed by a number of rights groups and legal experts. It allows for the “experimental” use of artificial intelligence (AI) in fixed camera and drone video surveillance “to detect, in real time, predetermined events that are likely to present or to reveal” a threat to public order.
Earlier this week, as the bill was debated in the Senate, the French branch of Amnesty International issued a statement which underlined that article 7 proposes the use of algorithmic video surveillance to identify “predetermined abnormal or suspect behaviour” but “without detailing what is included in such notions”.
“Amnesty International France calls on French parliamentarians to refuse to open up the path towards a normalisation of surveillance practices that are of concern for our freedoms, and to avoid the risk of dangerous abuses,” it said.
The text of article 7 of the bill authorises the introduction of “artificial intelligence processing” of surveillance camera images “on an experimental basis”. But in fact, the “experimental” measure is not limited to the Olympic and Paralympic games, which will be held between July 26th and September 8th 2024.
Instead, the bill allows for the use of algorithmic video surveillance up until June 30th 2025 “to ensure the security of sporting, recreational and cultural events which, by their scale or circumstances, are particularly exposed to the risk of acts of terrorism or a serious endangering of people’s safety”. That is fuelling fears among opponents of the technology that it will, if deemed effective, be made a permanent feature in towns and cities across France.


In an attempt to allay concerns, the government insisted that the AI would not be used to reveal the identities of individuals. “This process uses no system of biometric identification, processes no biometric data and does not use any facial recognition technique,” the article sets out, adding that “no comparison, interconnection or automatic linking with other data processing” is authorised.
But that does not convince Bastien Le Querrec, a jurist specialising in public law and a member of La Quadrature du Net, a French association dedicated to protecting civil rights in the face of the development of digital technologies. Over the past four years, he has campaigned against biometric surveillance through Technopolice, a project run in partnership with La Quadrature du Net. “This algorithmic video surveillance will analyse bodies, measure behaviour,” he told Mediapart. “So it is indeed biometric processing.”
Caroline Lequesne-Roth is a senior lecturer in public law and director of a master’s degree course on the use of algorithms and data governance at the Côte d’Azur University in Nice, south-east France. If the government is forced to legislate on the experimental use of the surveillance, “it’s because there is no [legal] framework as such for algorithmic video surveillance”, she said. “There is a law for video surveillance, [and] a law for algorithmic processing, but no rules that oversee the two together.”
Bastien Le Querrec said the issue is also ignored in EU law: “The European directive on police and justice, which covers video surveillance, has nothing on algorithmic surveillance, nor does any other legislative text. Now, everything which is not authorised is forbidden.” But, he added, “in reality” algorithmic video surveillance is already in use in France.
Since the Technopolice project was created in 2019, it has identified and listed numerous French towns and cities using algorithmic surveillance. “We have a number of ongoing legal challenges, but that takes years,” said Le Querrec. “In Marseille, our legal recourse was submitted in December 2020, and in [the small south-east town of] Moirans in July 2021, and we still have no news.”
For La Quadrature du Net, which earlier this week published a report online analysing the issues surrounding article 7 of the bill before parliament, there is no doubt that the technology is, and should remain, illegal. “Data processing must have a legal basis and be necessary and proportionate,” said Le Querrec. “And from the moment we are dealing with biometric processing, it’s about sensitive data which must be justified by an absolute necessity and for which there must be a particularly extensive control of proportionality.”
“When one talks of algorithmic surveillance to detect forgotten luggage or suspect behaviour, all of that can be carried out without analysing bodies, which represents the ultimate degree of surveillance,” added Le Querrec.
For Katia Roux, Amnesty International France’s advocacy officer on public freedoms, “no concrete element, no credible study” has proved the effectiveness of algorithmic video surveillance. “As it stands, video surveillance is not, in the true sense of the word, evaluated,” she said. “There is no solid study that proves its effectiveness, and [now] an algorithmic version is to be added, even though it carries risks of bias.”
How can an algorithm decide about the ‘committing of offences’? What are ‘abnormal events’?
The danger of the AI making mistakes and applying biased criteria is one of the major issues for those who oppose algorithmic video surveillance. “The impact study [into the proposed legislation and presented to government in December] explained that the aim of these algorithms would be to detect ‘abnormal events, crowd movements, abandoned objects or situations suggesting the committing of offences’,” said Caroline Lequesne-Roth. “For crowd movements, there are studies which show that it can be effective. To detect an abandoned object, that can also work. On the other hand, I’m a lot more sceptical about the two other objectives.”
“How can an algorithm decide about the ‘committing of offences’?” she asked. “What are ‘abnormal events’? One thinks immediately of technologies for ‘emotion recognition’, which are based on analysing expressions, people’s behaviour, and which, for me, are in the domain of magic. Studies show it’s not reliable. What is ‘abnormal behaviour’? If remaining upright [and stationary] is one, what can be said about begging? If the algorithm works with machine learning, and it is fed with images of offences, what criteria will be produced? Will it analyse skin colour?”
Bastien Le Querrec gave the example of a programme available on video surveillance camera systems supplied by Briefcam, a video analytics company owned by Canon: “It consists of detecting a person who is stationary while others are moving. It’s hunting down the poorest people, who don’t move, who are homeless. It was notably used by the RATP [the Paris public transport operator] at the Les Halles station, but it was abandoned because it didn’t work.”
“That shows that algorithms are political,” he said. “To use them is a manner of carrying out politics without taking responsibility for the consequences. Moreover, if it doesn’t work, if there are mistakes, it would be said that ‘it’s the fault of the algorithm’, whereas the decision to put in place a hunt for the poorest was taken by political decision-makers.”
Another danger for Le Querrec and others who oppose the surveillance is what is called the “chilling effect”, which he says is the manner in which “people change their behaviour when they know they are being watched over”.
Katia Roux of Amnesty International agreed: “Video surveillance questions our relationship with private space and the freedom of peaceful assembly. To know, or to feel, that one is being watched impacts our behaviour, with a tendency to censor ourselves.”
Before the draft legislation was submitted to parliament, it was first examined by the Senate’s law committee, which called for greater oversight from France’s data privacy watchdog, the CNIL. But Caroline Lequesne-Roth says that is insufficient. “The problem is that this way of working does not put in place very strong counter-powers,” she observed. “Notably, the power of the CNIL is quite limited. It will give an opinion on the decree which will only be consultative, and subsequently it will only be informed. The Senate reinforced the information requirements, but without a supplementary coercive power. I would have preferred to see a system of authorisation by the CNIL, which would give a green or red light.”
There is also a suspicion that the “experimental” nature of article 7 is in fact a precursor to a subsequent permanent introduction of algorithmic video surveillance, or even facial recognition systems. On January 18th, conservative senator Marc-Philippe Daubresse announced his intention to file draft legislation allowing the introduction of facial recognition surveillance cameras.
“The members of the CNIL college call upon the parliamentarians not to introduce facial recognition of people passing in public spaces,” said CNIL chairwoman Marie-Laure Denis in an interview this week with radio station France Info. “The public space is a place where numerous public freedoms are exercised – the freedom to demonstrate, [freedom] of religion and others,” she added.
“Our fear,” said Amnesty International’s Katia Roux, “is that, under cover of the Olympic games, it will pave the way for the introduction of a very intrusive technology, the practice of which will be normalised, as a first step before it’s written into law. Of course, there will be an evaluation at the end of the experiment. But by then, the algorithms would already have been trained and the data collected.”
“One of the things that concerned me when I was questioned at a hearing by the Senate was that the necessity of developing these technologies appeared to be a given,” she added. “The question is not even asked. One has the impression that it’s the sense of progress, of history. But we bring that back into question.”
Bastien Le Querrec has no doubts. “It will be set in the legislative stone,” he said, citing the example of a fact-finding commission in the lower house, the National Assembly, which is currently looking into “the issues of the use of security images in the public domain with the aim of fighting disorder”.
For Caroline Lequesne-Roth, “there is a political will, and among the police, to bring these technologies into the police arsenal”, but she adds that there is also an economic interest. “Surveillance is a market, with international competition, and China and the United States have taken a significant lead,” she said. “Politicians therefore also have a more capitalistic concern in sight.”
However, she is not opposed to the idea of legislation on the issue. “I have the somewhat delicate position of a jurist,” she explained. “I don’t at all support the deployment of these technologies. But I think that the legal void plays against citizens. Instead of carrying out experiments here and there, it’s best to legislate in order to limit them to the greatest degree.”
-------------------------
- The original French version of this report can be found here.
English version by Graham Tearse