Surveillance

Have a look at this Medium blog post by Azadeh Akbari regarding Automated License Plate Recognition systems. The post discusses how these systems have become a commonplace form of surveillance that often goes unnoticed.

How do you see “banal surveillance” in your everyday life? Think about the streets in your neighbourhood. Are there speed cameras, CCTV, “smart lamp posts”, or other forms of surveillance that you’ve grown accustomed to ignoring? How could you determine what these cameras do, who installed them, and on what legal basis? Can you find out who owns the data and how and where it is processed? We promise you: find out, and you will be surprised.

For instance, I have not yet seen the cameras near the towers in Bologna, even though I am aware of their existence.

From a short article by Jan-Jaap Oerlemans on the privacy risks of automated open source intelligence:

[T]raditional OSINT has evolved into a professional and intrusive practice. With the use of tools hundreds of online sources can be queried simultaneously. These sources are diverse and can range from publicly available data on social media services, to location data generated by advertisements on apps on mobile phones, to leaked user data. Automated OSINT interferes with the right to privacy and the right to the protection of personal data more seriously than before.

Facebook/Meta is shutting down its facial recognition system. They explain their choice in this blog post.

But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole. There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.

The Markup has investigated the shadowy industry of collecting and selling location data. The article details some worrisome examples of very invasive practices.

The following article from The Economist argues that China’s aversion to encryption technologies — which would make mass surveillance more difficult — makes the country’s networks vulnerable to foreign spying:

Weak security is the rule, not the exception, in digital services for the Chinese public. Email and social media must all facilitate state access, as must industrial networks used to run factories and offices, even if the extent to which the government uses that access varies. In August it banned the most up-to-date version of a protocol used to encrypt web traffic, known as TLS, from the Chinese internet, because it makes online surveillance harder.

(…) The government has responded by promoting programs for companies to improve customer-data protection, even as it simultaneously enforces weakness in the security of all systems. But as long as the government demands access to data on Chinese people, those data can never be robustly protected.

An article from The Economist describes how drone technology has migrated from the battlefields of Iraq to American cities:

The notion of putting cameras on orbiting drones to catch malefactors was born on the battlefields of Iraq, where American armed forces wanted to nab people leaving bombs on roadsides. Ross McNutt, a former air-force engineer, founded Persistent Surveillance Systems (PSS) to offer the same service to American cities (and others, such as Juárez) struggling with high murder rates. PSS drones flew over parts of Baltimore, most recently in May-October 2020. St Louis, among America’s most violent cities, also considered but is poised to reject PSS’s services, which raise difficult questions about how much surveillance Americans are willing to tolerate in exchange for the promise of safer streets.

Two recent articles from BBC Future deal with metadata and the possibilities for surveillance hidden in two everyday technologies:

  • This article explains how photographs can have a unique fingerprint because of the camera’s sensor: “different sensitivities of the photosites creates a type of imperceptible image watermark. Although unintentional, it acts like a fingerprint, unique to your camera’s sensor, which is imprinted onto every photo you take.”
  • And another article explains how colour printers add dots, invisible to the naked eye, that encode metadata about the printer onto every printed document.
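The sensor-fingerprint mechanism from the first article can be illustrated with a toy simulation. This is only a sketch under heavily simplified assumptions — an additive per-pixel sensitivity pattern, a box-filter “denoiser”, and synthetic images; real photo-response non-uniformity (PRNU) forensics uses far more sophisticated denoising and correlation statistics. All names and parameters below are invented for illustration.

```python
import numpy as np

def noise_residual(img, k=3):
    """High-frequency residual: the image minus a k-by-k local mean
    (a toy stand-in for the denoising filters used in sensor forensics)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + h, dx:dx + w]
    return img - smooth / (k * k)

def camera_fingerprint(images):
    """Average many residuals from one camera: scene detail and shot noise
    average out, while the fixed per-pixel sensitivity pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalised correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic demo: the "fingerprint" is a fixed per-pixel sensitivity offset.
rng = np.random.default_rng(0)
shape = (64, 64)
fingerprint = rng.normal(0.0, 2.0, shape)  # hypothetical sensor pattern

def shoot(brightness):
    """Toy camera model: smooth scene + fixed pattern + random shot noise."""
    scene = np.full(shape, brightness) + np.linspace(0, 40, shape[1])
    return scene + fingerprint + rng.normal(0.0, 1.0, shape)

# Estimate the fingerprint from 50 photos taken with the same camera.
fp_est = camera_fingerprint([shoot(b) for b in rng.uniform(80, 160, 50)])

same_camera = noise_residual(shoot(120.0))
other_camera = noise_residual(
    np.full(shape, 120.0)
    + rng.normal(0.0, 2.0, shape)   # a different sensor's pattern
    + rng.normal(0.0, 1.0, shape)
)
print(correlation(same_camera, fp_est))   # clearly positive
print(correlation(other_camera, fp_est))  # near zero
```

Even this crude version shows why the watermark is hard to avoid: the pattern survives averaging across many unrelated scenes, so a photo can be matched to a camera without any deliberate tagging.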

Interesting new article from Shoshana Zuboff at The New York Times. She recaps some of her arguments on surveillance capitalism, but also links them to more recent events related to the US elections and the Covid-19 pandemic.

In an information civilization, societies are defined by questions of knowledge — how it is distributed, the authority that governs its distribution and the power that protects that authority. Who knows? Who decides who knows? Who decides who decides who knows? Surveillance capitalists now hold the answers to each question, though we never elected them to govern. This is the essence of the epistemic coup. They claim the authority to decide who knows by asserting ownership rights over our personal information and defend that authority with the power to control critical information systems and infrastructures.

The Guardian published this great piece of investigative journalism on how the EU funds security-technology research through programmes such as Horizon 2020, and on the involvement of industry. The following is an excerpt about how some of the funded projects are framed:

“Often the problem is that the topic itself is unethical,” said Gemma Galdon Clavell, an independent tech ethicist who has evaluated many Horizon 2020 security research projects and worked as a partner on more than a dozen. “Some topics encourage partners to develop biometric tech that can work from afar, and so consent is not possible – this is what concerns me.” One project aiming to develop such technology refers to it as “unobtrusive person identification” that can be used on people as they cross borders. “If we’re talking about developing technology that people don’t know is being used,” said Galdon Clavell, “how can you make that ethical?”

The Markup reviewed different examples of controversial uses of machine learning algorithms in 2020:

Every year there are myriad new examples of algorithms that were either created for a cynical purpose, functioned to reinforce racism, or spectacularly failed to fix the problems they were built to solve. We know about most of them because whistleblowers, journalists, advocates, and academics took the time to dig into a black box of computational decision-making and found some dark materials.