19 Feb 2020

A human-centric internet for Europe

By EDRi

The European Union has set digital transformation as one of its key pillars for the next five years. New data-driven technologies, including Artificial Intelligence (AI), offer societal benefits – but addressing their potential risks to our democratic values, the rule of law, and fundamental rights must be a top priority.

“By driving a human rights-centric digital agenda Europe has the opportunity to continue being the leading voice on data protection and privacy,” said Diego Naranjo, Head of Policy at European Digital Rights (EDRi). “This means ensuring fundamental rights protections for personal data processing and digitalisation, and a regulatory framework for governing the full lifecycle of AI applications.”

The EU must proactively ensure that regulatory frameworks (such as GDPR and the future ePrivacy Regulation) are implemented and enforced effectively. Where this doesn’t suffice, the EU and its Member States must ensure that the legislative ecosystem is “fit for the digital age”. This can be done by increasing the comprehensiveness (filling gaps and closing loopholes), clarity (clear interpretation), and transparency of EU and national rules. The principles of necessity and proportionality should always be front and centre whenever there is an interference with fundamental rights.

To deal with technological developments in a thorough way, in addition to data protection and privacy legislation, we need to take a look at other areas, such as competition rules and consumer law – including civil liability for harmful products or algorithms. Adopting a strong ePrivacy Regulation to ensure the privacy and confidentiality of our communications is also crucial.

From a fundamental rights perspective, one specific concern is the deployment of facial recognition technologies – whether AI-based or not.

“It is of utmost importance and urgency that the EU prevents the deployment of mass surveillance and identification technologies without fully understanding their impacts on people and their rights, and without ensuring that these systems are fully compliant with data protection and privacy law as well as all other fundamental rights,” said Naranjo.

Facial recognition and fundamental rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/

The human rights impacts of migration control technologies (12.02.2020)
https://edri.org/the-human-rights-impacts-of-migration-control-technologies/

A Human-Centric Digital Manifesto for Europe
https://www.opensocietyfoundations.org/publications/a-human-centric-digital-manifesto-for-europe

19 Feb 2020

The impact of competition law on your digital rights

By Laureline Lemoine

This is the first article in a series dealing with competition law and Big Tech. The aim of the series is to look at what competition law has achieved when it comes to protecting our digital rights, where it has failed to deliver on its promises, and how to remedy this.

This series will first look at how competition and privacy law interact, to then focus on how they can support each other in tackling data exploitation and other issues related to Big Tech companies. With a potential reform of competition rules in mind, this series is also a reflection on how competition law could offer a mechanism to regulate Big Tech companies to limit their increasing power over our democracies.

Our personal data is seen by Big Tech companies as a commodity with economic value, and they cannot get enough of it. They track us online and harvest our personal data, including sensitive health data. Data protection and online privacy legislations aim to protect individuals against invasive data exploitation. Even though well-enforced privacy and data protection legislation are a must-have in our connected societies, there are other avenues that could be explored simultaneously. Because of the power imbalance between individuals and companies, as well as other issues affecting our fundamental rights, there is a need for a more structural approach, involving other policies and legislation. Competition law is often referred to as one of the tools that could redress this power imbalance, because it controls and regulates market power, including in the digital economy.

During her keynote speech at the International Association of Privacy Professionals (IAPP) conference in November 2019, Margrethe Vestager, European Commissioner for Competition and Executive Vice-President for A Europe Fit for the Digital Age, argued that, “[…] to tackle the challenges of a data-driven economy, we need both competition and privacy regulation, and we need strong enforcement in both. Neither of these two things can take the place of one another, but in the end, we’re dealing with the same digital world. Privacy and competition are both fundamentally there for the same reason: to protect our rights as consumers”.

Privacy and competition law are different policies

Competition and privacy law (which includes data protection and online privacy legislations) are governed by different legal texts and overseen by different authorities with distinct mandates.

According to Wojciech Wiewiórowski, the European Data Protection Supervisor (EDPS), “the main purpose of these two kinds of oversight is […] very different, because what the competition authorities want to achieve is the well-working fair market, what we want to achieve is to defend the fundamental rights [to privacy and data protection]”.

This means that, in assessing competition infringements, competition authorities do not go beyond competition issues. They have to assume that companies are or will be in compliance with their other legal obligations, including their privacy obligations.

The Court of Justice of the European Union confirmed this difference of mandates in 2006. Later, in the 2014 Facebook/WhatsApp merger case, the Commission concluded that privacy-related concerns “do not fall within the scope of the EU competition law rules but within the scope of the EU data protection rules”. Facebook was later fined for providing “misleading” information to the competition authority.

Since then, Europe has seen the development of a data-driven economy and its fair share of privacy scandals and data breaches, too. And despite numerous investigations into problematic behaviours, Big Tech companies keep on growing.

But this goes far beyond competition issues, as the dominant position of Big Tech companies also gives them the power and the incentive to limit our freedoms, and to infringe on our fundamental rights. Their dominance is even a threat to our democracies.

As a way to tackle these issues, more people are calling for the alignment of the enforcement initiatives of data protection, competition and consumer authorities. This has led to debates about the silos between competition and data protection law, their differences, but also their common objectives.

Data protection and competition against Big Tech powers

Both competition and data protection law impact economic activities and, at EU level, both are used to ensure the further deepening of the EU single market. The General Data Protection Regulation (GDPR), as well as ensuring a high level of protection of personal data, aims to harmonise the Member States’ legislations to remove obstacles to a common European market. Similarly, competition law prevents companies from erecting barriers to trade between competitors.

Moreover, data protection can be considered an element of competition in cases where companies compete on who can better satisfy privacy preferences. In such cases, there is a common objective of allowing the individual to have control (as a consumer or as a data subject).

In her keynote speech, Vestager explained: “competition and competition policy have an important role to play … because the idea of competition is to put consumers in control. For markets to serve consumers and not the other way around,” she said, “it means if you don’t like the deal we’re getting, we can walk away and find something that meets our needs in a better way. And consumers can also use that power to demand something we really … care about, including maybe our privacy.”

Indeed, giving consumers a genuine choice to use privacy-friendly companies would help uphold standards in terms of privacy. Although it is hard to believe now, once upon a time Facebook prioritised privacy as a way to distinguish itself from MySpace, its biggest competitor back then.

However, the issue in the world of Big Tech today is that privacy offers no such leverage. The dominant positions of the few players controlling the market leave no room for others proposing privacy-friendly products. As a result, there is no choice but to use the services of Big Tech to stay connected online – the consumer is no longer in control.

One way to remedy this power imbalance between individuals and these giant companies could be through a greater cooperation between regulatory authorities. BEUC, the European Consumer Organisation, has called, regarding Facebook’s exploitation of consumers, for a “coherent enforcement approach for the data economy between regulators and across Member States” and wants the “European Commission to explore – with relevant authorities – how to deal with a concrete commercial behaviour that simultaneously breaches different areas of EU law”.

In 2016, the EDPS launched the Digital Clearinghouse, a voluntary network of regulators involved in the enforcement of legal regimes in digital markets, with a focus on data protection, and consumer and competition law. National competition authorities are also looking into competition and data, while in 2019, the European Commission published a report on Competition Policy for the Digital Era, to which EDRi member Privacy International contributed.

Greater cooperation between regulators, inclusion of data protection principles in competition law, and many other ideas are being discussed to redress this issue of power imbalance. Some of them will be explored in the next articles of this series.

On antitrust law, we will look at discussions about new sets of rules designed specifically for the Big Tech market, as well as the development of the right to portability and interoperability. As for merger control, we will focus on the extent to which privacy could be considered a theory of harm.

Opinion 8/2016 – EDPS Opinion on coherent enforcement of fundamental rights in the age of big data (2016)
https://edps.europa.eu/sites/edp/files/publication/16-09-23_bigdata_opinion_en.pdf

Competition and data
https://privacyinternational.org/learning-topics/competition-and-data

Factsheet – Competition in the digital era (2020)
https://www.beuc.eu/publications/beuc-x-2020-007_competition_in_digital_era.pdf

Report of the European Commission – Competition Policy for the digital era (2019)
https://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf

Family ties: the intersection between data protection and competition in EU Law (2017)
http://eprints.lse.ac.uk/68470/7/Lynskey_Family%20ties%20the%20intersection%20between_Author_2016_LSERO.pdf

12 Feb 2020

The human rights impacts of migration control technologies

By Petra Molnar

This is the first blogpost of a series on our new project which brings to the forefront the lived experiences of people on the move as they are impacted by technologies of migration control. The project highlights the need to regulate the opaque technological experimentation documented in and around border zones of the EU and beyond. We will be releasing a full report later in 2020, but this series of blogposts will feature some of the most interesting case studies.

At the start of this new decade, over 70 million people have been forced to move due to conflict, instability, environmental factors, and economic reasons. In response to increased migration into the European Union, many states are looking into various technological experiments to strengthen border enforcement and manage migration. These experiments range from Big Data predictions about population movements in the Mediterranean to automated decision-making in immigration applications and Artificial Intelligence (AI) lie detectors at European borders. However, these technological experiments often fail to consider their profound human rights ramifications and real impacts on human lives.

A human laboratory of high risk experiments

Technologies of migration management operate in a global context. They reinforce institutions, cultures, policies and laws, and exacerbate the gap between the public and the private sector, where the power to design and deploy innovation comes at the expense of oversight and accountability. Technologies have the power to shape democracy and influence elections, through which they can reinforce the politics of exclusion. The development of technology also reinforces power asymmetries between countries and influences our thinking about which countries can push for innovation, while other spaces, like conflict zones and refugee camps, become sites of experimentation. The development of technology is not inherently democratic, and issues of informed consent and the right of refusal are particularly important to think about in humanitarian and forced migration contexts. For example, under the justification of efficiency, refugees in Jordan have their irises scanned in order to receive their weekly rations. Some refugees in the Azraq camp have reported feeling that they did not have the option to refuse the iris scan: if they did not participate, they would not get food. This is not free and informed consent.

These discussions are not just theoretical: various technologies are already used to control migration, to automate decisions, and to make predictions about people’s behaviour.

Palantir machine says: no

However, are these appropriate tools to use, particularly without any governance or accountability mechanisms in place for if or when things go wrong? Immigration decisions are often opaque, discretionary, and hard to understand, even when human officers, not artificial intelligence, are making them. Many of us have had difficult experiences trying to get a work permit, reunite with a spouse, or adopt a baby across borders, not to mention seek refugee protection as a result of conflict and war. Technological experiments to augment or replace human immigration officers can have drastic results: in the UK, 7,000 students were wrongfully deported because a faulty algorithm accused them of cheating on a language acquisition test. In the US, the Immigration and Customs Enforcement Agency (ICE) has partnered with Palantir Technologies to track and separate families and to enforce deportations and detentions of people escaping violence in Central and Latin America.

Image credit: Jenny Kim, “Bots at the Gate” report, University of Toronto, September 2018

What if you wanted to challenge one of these automated decisions? Where do responsibility and liability lie – with the designer of the technology, its coder, the immigration officer, or the algorithm itself? Should algorithms have legal personality? It is paramount to answer these questions, as much of the decision-making in immigration and refugee matters already sits at an uncomfortable legal nexus: the impact on individuals’ rights is very significant, while procedural safeguards are weak.

Sauron Inc. watches you – the role of the private sector

The lack of technical capacity within government and the public sector can lead to potentially inappropriate over-reliance on the private sector. Adopting emerging and experimental tools without in-house talent capable of understanding, evaluating, and managing these technologies is irresponsible and downright dangerous. Private sector actors have an independent responsibility to make sure technologies that they develop do not violate international human rights and domestic legislation. Yet much of technological development occurs in so-called “black boxes,” where intellectual property laws and proprietary considerations shield the public from fully understanding how the technology operates. Powerful actors can easily hide behind intellectual property legislation or various other corporate shields to “launder” their responsibility and create a vacuum of accountability.

While the use of these technologies may lead to faster decisions and shorter delays, it may also exacerbate existing barriers to access to justice and create new ones. At the end of the day, we have to ask ourselves: what kind of world do we want to create, and who actually benefits from the development and deployment of technologies used to manage migration, profile passengers, or conduct other forms of surveillance?

Technology replicates power structures in society. Affected communities must also be involved in technological development and governance. While conversations around the ethics of AI are taking place, ethics do not go far enough. We need a sharper focus on oversight mechanisms grounded in fundamental human rights.

This project builds on critical examinations of the human rights impacts of automated decision-making in Canada’s refugee and immigration system. In the coming months, we will be collecting testimonies in locations including the Mediterranean corridor and various border sites in Europe. Our next blogpost will explore how new technologies are being used before, at, and beyond the border, and we will highlight the very real impacts that these technological experiments have on people’s lives and rights as they are surveilled and as their movement is controlled.

If you are interested in finding out more about this project or have feedback and ideas, please contact petra.molnar [at] utoronto [dot] ca. The project is funded by the Mozilla and Ford Foundations.

Mozilla Fellow Petra Molnar joins us to work on AI & discrimination (26.09.2019)
https://edri.org/mozilla-fellow-petra-molnar-joins-us-to-work-on-ai-and-discrimination/

Technology on the margins: AI and global migration management from a human rights perspective, Cambridge International Law Journal, December 2019
https://www.researchgate.net/publication/337780154_Technology_on_the_margins_AI_and_global_migration_management_from_a_human_rights_perspective

Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee Systems, University of Toronto, September 2018
https://ihrp.law.utoronto.ca/sites/default/files/media/IHRP-Automated-Systems-Report-Web.pdf

New technologies in migration: human rights impacts, Forced Migration Review, June 2019
https://www.fmreview.org/ethics/molnar

Once migrants on Mediterranean were saved by naval patrols. Now they have to watch as drones fly over (04.08.2019)
https://www.theguardian.com/world/2019/aug/04/drones-replace-patrol-ships-mediterranean-fears-more-migrant-deaths-eu

Mijente: Who is Behind ICE?
https://mijente.net/notechforice/

The Threat of Artificial Intelligence to POC, Immigrants, and War Zone Civilians
https://towardsdatascience.com/the-threat-of-artificial-intelligence-to-poc-immigrants-and-war-zone-civilians-e163cd644fe0

(Contribution, Petra Molnar, Mozilla Fellow, EDRi)

12 Feb 2020

Cloud extraction: A deep dive on secret mass data collection tech

By Privacy International

Mobile phones remain the most frequently used and most important digital source for law enforcement investigations. Yet it is not just what is physically stored on the phone that law enforcement are after, but also what can be accessed from it – primarily data stored in the “cloud”. This is why law enforcement is turning to “cloud extraction”: the forensic analysis of user data stored on third-party servers, typically used by device and application manufacturers to back up data. As we spend more time using social media and messaging apps and store our files with the likes of Dropbox and Google Drive – and as our phones become more secure, locked devices harder to crack, and file-based encryption more widespread – cloud extraction is, as a prominent industry player puts it, “arguably the future of mobile forensics”.

The report “Cloud extraction technology: the secret tech that lets government agencies collect masses of data from your apps” brings together the results of Privacy International’s open source research, technical analyses and freedom of information requests to expose and address this emerging and urgent threat to people’s rights. 

Phone and cloud extraction go hand in hand

EDRi member Privacy International has repeatedly raised concerns over risks of mobile phone extraction from a forensics perspective and highlighted the absence of effective privacy and security safeguards. Cloud extraction goes a step further, promising access to not just what is contained within the phone, but also to what is accessible from it. Cloud extraction technologies are deployed with little transparency and in the context of very limited public understanding. The seeming “wild west” approach to highly sensitive data carries the risk of abuse, misuse and miscarriage of justice. It is a further disincentive to victims of serious offences to hand over their phones, particularly if we lack even basic information from law enforcement about what they are doing. 

The analysis of data extracted from mobile phones and other devices using cloud extraction technologies increasingly includes the use of facial recognition capabilities. Considering the volume of personal data that can be obtained from cloud-based sources such as Instagram, Google Photos and iCloud – all of which contain facial images – the ability to run facial recognition on masses of data is a big deal. Greater urgency is therefore needed to address the risks that arise from such extraction, especially as facial and emotion recognition are added to the software which analyses the extracted data. The fact that this is potentially being done on vast troves of cloud-stored data without any transparency or accountability is a serious concern.

What you can do

There is an absence of information about the use of cloud extraction technologies, making it unclear how this practice is lawful and, equally, how individuals are safeguarded from abuse and misuse of their data. This is part of a dangerous trend by law enforcement agencies, and we want to ensure transparency and accountability worldwide with respect to the new forms of technology they use.

If you live in the UK, you can submit a Freedom of Information Act request to your local police force to ask them about their use of cloud extraction technologies, using this template: https://privacyinternational.org/action/3324/ask-your-local-uk-police-force-about-cloud-extraction. You can also use it to send a request if you are based in another country which has Freedom of Information legislation.

Privacy International
https://privacyinternational.org/

Cloud extraction technology: the secret tech that lets government agencies collect masses of data from your apps (07.01.2020)
https://privacyinternational.org/long-read/3300/cloud-extraction-technology-secret-tech-lets-government-agencies-collect-masses-data

Phone Data Extraction
https://privacyinternational.org/campaigns/phone-data-extraction

Push This Button For Evidence: Digital Forensics
https://privacyinternational.org/explainer/3022/push-button-evidence-digital-forensics

Can the police limit what they extract from your phone? (14.11.2019)
https://privacyinternational.org/news-analysis/3281/can-police-limit-what-they-extract-your-phone

Facial recognition and fundamental rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/

Ask your local UK police force about cloud extraction
https://privacyinternational.org/action/3324/ask-your-local-uk-police-force-about-cloud-extraction

(Contribution by Antonella Napolitano, EDRi member Privacy International)

12 Feb 2020

Digitalcourage fights back against data retention in Germany

By Digitalcourage

On 10 February 2020, EDRi member Digitalcourage published the German government’s plea in the data retention case at the European Court of Justice (ECJ). Dated 9 September 2019, the document explains the use of retained telecommunications data by secret services, addresses the question of whether the 2002 ePrivacy Directive applies to various forms of data retention and which exceptions from human rights protections apply to secret service operations, and justifies the government’s plans to use data retention to solve a broad range of crimes with the example of the abduction of a Vietnamese man in Berlin by Vietnamese agents. However, this case is very specific and, even if the retained data proved “useful” there, that is not a valid legal basis for mass data retention and therefore cannot justify drastic intrusions into the fundamental rights of all individuals in Germany. Finally, the German government also argues that the scope and time period of the storage make a difference regarding the compatibility of data retention laws with fundamental rights.

Digitalcourage calls for all existing illegal data retention laws in the EU to be declared invalid. There are no grounds for blanket and suspicion-less surveillance in a democracy under the rule of law. Whether it is content data or metadata that is being stored, data retention (the blanket and mass collection of telecommunications data) is inappropriate, unnecessary and ineffective, and therefore illegal. Where the German government argues that secret services need to use telecommunications data to protect state interests, Digitalcourage agrees with many human rights organisations that the activities of secret services can be a direct threat to the core trust between the general public and the state. The ECJ has itself called for storage to be reduced to the absolutely required minimum – and that, according to Digitalcourage, can only be fulfilled if no data is stored without individual suspicion.

Digitalcourage
https://digitalcourage.de/

Press release: EU data retention: Digitalcourage publishes and criticises the position of the German government (only in German, 10.02.2020)
https://digitalcourage.de/pressemitteilungen/2020/bundesregierung-eugh-eu-weite-vorratsdatenspeicherung

(Contribution by Sebastian Lisken, EDRi member Digitalcourage, Germany)

12 Feb 2020

Double legality check in e-evidence: Bye bye “direct data requests”

By Chloé Berthélémy

After having tabled some 600 additional amendments, members of the European Parliament Committee on Civil Liberties (LIBE) are still discussing the conditions under which law enforcement authorities in the EU should access data for their criminal investigations in cross-border cases. One of the key areas of debate is the involvement of a second authority in the access process – usually the judicial authority in the State in which the online service provider is based (often called the “executing State”).

To prevent the misuse of this new cross-border data access instrument, LIBE Committee Rapporteur Birgit Sippel’s draft Report had angered the Commission by proposing that the executing State should receive, by default, the European Preservation or Production Order at the same time as the service provider. It should then have ten days to evaluate and possibly object to an Order by invoking one of the grounds for non-recognition or non-execution – including based on a breach of the EU Charter of Fundamental Rights.

What is more, the Sippel Report proposes that if it is clear from the early stages of the investigation that a suspected person does neither reside in the Member State that is seeking data access (the issuing State) nor in the executing State where the service provider is established, the judicial authorities of the State in which the person resides (the affected State) should also get the chance to intervene.

Notification as a fundamental element of EU judicial cooperation

The reasoning behind such a notification system is compelling: entrusting one single authority to carry out the full legality and proportionality assessment for two or even three different jurisdictions (the issuing, the executing and the affected State) is careless at best. A national prosecutor or judge alone cannot possibly take into account all the national security and defence interests, immunities and privileges and legal frameworks of the other Member States, nor the special protections a suspected person may have in their capacity as a lawyer, doctor or journalist. This is especially relevant if the other Member States’ rules are different from or even incompatible with the rules of the prosecutor’s own domestic investigation. Examination by a second judicial authority, with a genuine possibility to review the Order, is therefore of paramount importance to ensure its legality.

The LIBE Committee is currently discussing the details of this notification process. Some of the amendments that were tabled unfortunately try to undermine the protections that the notification requirement would bring. For example, some try to restrict the notification to Production Orders only (when data is transmitted directly), excluding all Preservation Orders (when the data is just frozen and needs to be acquired with a separate Order). Others try to limit notification to transactional data (aka metadata) or content data, alleging that subscriber data is somehow less sensitive and therefore needs less protection. Lastly, some propose that the notification should not have a suspensive effect on the service provider’s obligation to respond to an Order – meaning that if the notified State objects to an Order after the service provider has already handed over the data, it is too late.

The Parliament should uphold the basic principles of human rights law

If accepted, some of those amendments would bring the Parliament’s position dangerously close to the Council’s highly problematic weak notification model, which does not provide any of the necessary safeguards it is supposed to have. To ensure the human rights compliance of the procedure, notifying the executing and the affected State should be mandatory for all types of data and Orders. Notifications should be sent simultaneously to the relevant judicial authority and the online service provider, and the latter should wait for a positive reaction from the former before executing the Order. The affected State should have the same grounds for refusal as the executing State, because it is best placed to protect its residents and their rights.

There seems to be a general consensus in the European Parliament about the involvement of a second judicial authority in the issuance of Orders. Meanwhile, the Commission grits its teeth and continues to pretend that mutual trust among EU Member States is all that is needed to protect people from law enforcement overreach. So far, the Commission seems to refuse to see the tremendous risks that its “e-evidence” proposal entails – especially in a context where some Member States are subject to Article 7 proceedings, which could lead to the suspension of some of their rights as Member States because of the endangered independence of their judicial systems and potential breaches of the rule of law. Mutual trust should not serve as an excuse to undermine individuals’ fundamental right to data protection and the basic principles of human rights law.

Cross-border access to data for law enforcement: Document pool
https://edri.org/cross-border-access-to-data-for-law-enforcement-document-pool/

“E-evidence”: Repairing the unrepairable (14.11.2019)
https://edri.org/e-evidence-repairing-the-unrepairable/

EU rushes into e-evidence negotiations without common position (19.06.2019)
https://edri.org/eu-rushes-into-e-evidence-negotiations-without-common-position/

Recommendations on cross-border access to data (25.04.2019)
https://edri.org/files/e-evidence/20190425-EDRi_PositionPaper_e-evidence_final.pdf

(Contribution by Chloé Berthélémy, EDRi)

12 Feb 2020

Data protection safeguards needed in EU-Vietnam trade agreements

By Vrijschrift

On 12 February 2020, the European Parliament gave consent for the ratification of the EU-Vietnam trade and investment agreements.

The trade agreement contains two cross-border data flow commitments. The related data protection safeguards in this agreement are similar to the ones in the EU-Japan agreement, which entered into force in February 2019. Civil society organisations and academics had pointed out flaws in these safeguards.

The EU-Vietnam investment agreement contains a variant of the controversial investor-to-state dispute settlement (ISDS) mechanism. In Opinion 1/17 (on ISDS in the EU-Canada CETA agreement), the Court of Justice of the European Union found this mechanism compatible with the EU Treaties. The Court suggests that ISDS does not interfere with the principle of autonomy of EU law, as the EU and its Member States can refuse to pay ISDS damages awards. Refusing to pay ISDS damages, however, comes with serious drawbacks.

The continued use of weak data protection safeguards is all the more disappointing as, two years ago, in January 2018, the European Commission adopted a proposal for stronger safeguards to be used in trade agreements. Consumer and digital rights organisations supported these safeguards in principle. The Commission, however, never applied them. In order to properly protect the fundamental right to data protection in the context of trade agreements, the new von der Leyen Commission should adopt the proposed stronger safeguards and actually use them.

Vrijschrift
https://www.vrijschrift.org/

EU/Vietnam Free Trade Agreement 2018/0356(NLE)
https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2018/0356(NLE)&l=en

Weak data protection in EU-Vietnam trade agreement (06.02.2020)
https://www.vrijschrift.org/serendipity/index.php?/archives/242-Weak-data-protection-in-EU-Vietnam-trade-agreement.html

EU-Japan trade agreement not compatible with EU data protection (10.01.2018)
https://edri.org/eu-japan-trade-agreement-eu-data-protection/

The European Commission rightly decides to defend citizens’ privacy in trade discussions (28.02.2018)
https://edri.org/the-european-commission-rightly-decides-to-defend-citizens-privacy-in-trade-discussions/

Study launch: The EU can achieve data protection-proof trade agreements (13.07.2016)
https://edri.org/study-launch-eu-can-achieve-data-protection-proof-trade-agreements/

EU Court CETA ruling shows failure of ISDS reform (06.05.2019)
https://www.vrijschrift.org/serendipity/index.php?/archives/237-EU-Court-CETA-ruling-shows-failure-of-ISDS-reform.html

(Contribution by Ante Wessels, EDRi member Vrijschrift, the Netherlands)

12 Feb 2020

PI and Liberty submit a new legal challenge against MI5

By Privacy International

On 1 February 2020, EDRi member Privacy International (PI) and civil rights group Liberty filed a complaint with the Investigatory Powers Tribunal, the judicial body that oversees the intelligence agencies in the United Kingdom, against the security service MI5 in relation to how they handle vast troves of personal data.

In mid-2019, MI5 admitted, during a case brought by Liberty, that personal data was being held in “ungoverned spaces”. Much about these ungoverned spaces, and how they would effectively be “governed” in the future, remains unclear. At the moment, they are understood to be a “technical environment” in which the personal data of an unknown number of individuals is being “handled”. The use of “technical environment” suggests something more than simply a compilation of a few datasets or databases.

The longstanding and serious failings of MI5 and other intelligence agencies in relation to these “ungoverned spaces” first emerged in PI’s pre-existing case, which started in November 2015. That case challenges the processing of bulk personal datasets and bulk communications data by the UK Security and Intelligence Agencies.

In the course of these proceedings, it was revealed that PI’s data were illegally held by MI5, among other intelligence and security agencies. MI5 deleted PI’s data while the investigation was ongoing. With the new complaint PI also requested the reopening of this case in relation to MI5’s actions.

In parallel proceedings brought by Liberty against the bulk surveillance powers contained in the Investigatory Powers Act 2016 (IPA), MI5 admitted that personal data was being held in “ungoverned spaces”, demonstrating a known and continued failure to comply with both statutory and non-statutory safeguards in relation to the handling of bulk data since at least 2014. Importantly, documents disclosed in that litigation and detailed in the new joint complaint showed that MI5 had sought and obtained bulk interception warrants on the basis of misleading statements made to the relevant authorities.

The documents reveal that MI5 not only broke the law, but for years misled the Investigatory Powers Commissioner’s Office (IPCO), the body responsible for overseeing UK surveillance practices.

In this new complaint, PI and Liberty argue that MI5’s data handling arrangements result in the systematic violation of the rights to privacy and freedom of expression (as protected under Articles 8 and 10 of the European Convention of Human Rights) and under EU law. Furthermore, they maintain that the decisions to issue warrants requested by MI5, in circumstances where the necessary safeguards were lacking, are unlawful and void.

Privacy International
https://privacyinternational.org/

MI5 ungoverned spaces challenge
https://privacyinternational.org/legal-action/mi5-ungoverned-spaces-challenge

Bulk Personal Datasets & Bulk Communications Data challenge
https://privacyinternational.org/legal-action/bulk-personal-datasets-bulk-communications-data-challenge

The Investigatory Powers Tribunal case no. IPT/15/110/CH
https://privacyinternational.org/sites/default/files/2019-08/IPT-Determination%20-%2026September2018.pdf

Reject Mass Surveillance
https://www.libertyhumanrights.org.uk/our-campaigns/reject-mass-surveillance

MI5 law breaking triggers Liberty and Privacy International legal action (03.02.2020)
https://www.libertyhumanrights.org.uk/news/press-releases-and-statements/mi5-law-breaking-triggers-liberty-and-privacy-international-legal

(Contribution by EDRi member Privacy International)

12 Feb 2020

Dangerous by design: A cautionary tale about facial recognition

By Ella Jakubowska

This series has explored facial recognition from a fundamental rights perspective; the EU’s response; the evidence about the risks; and the threat of public and commercial data exploitation. In this fifth instalment, we consider a first-hand experience of the harm caused by fundamentally rights-violating biometric surveillance technology.

Leo Colombo Viña is the founder of a software development company and a professor of Computer Science. A self-professed tech lover, he says it was “ironic” that a case of mistaken identity with police facial recognition happened to him. What unfolded next paints a powerful picture of the intrinsic risks of biometric surveillance. Whilst Leo’s experience occurred in Buenos Aires, Argentina, his story raises serious issues for the deployment of facial and biometric recognition in the EU, too.

“I’m not the guy they’re looking for”

One day in 2019, Leo was leaving the bank mid-afternoon to take the metro back to his office. While waiting for the train, he was approached by a police officer who had received an alert on his phone that Leo was wanted for an armed robbery committed 17 years earlier. The alert had been triggered by the metro station’s facial recognition surveillance system, which had recently been the subject of a large media campaign.

His first assumption was: “okay, there’s something up, I’m not the guy they’re looking for”. But the alert the police showed him clearly displayed his picture and personal details. “Okay,” he thought, “what the f***?” When they told him that the problem could not be resolved there and then, and that he would have to accompany them to the police station, Leo’s initial surprise turned into concern.

Wrongful criminalisation

It turned out that whilst the picture and ID number in the alert matched Leo’s, bizarrely, the name and date of birth did not. Having never committed a crime, nor even been investigated, Leo still does not know how his face and ID number came to be wrongfully included in a criminal suspect database. Despite subsequent legal requests from across civil society, the government has not made available any information about how people’s data are processed, stored or accessed. This is not a unique issue: across Europe, policing technology and the processing of personal data are frighteningly opaque.

At the police station, Leo spent four hours in the bizarre position of having to “prove that I am who I am”. He says the police treated him kindly and respectfully – although he thinks that being a caucasian professional meant that they dismissed him as a threat. The evidence for this came later, when a similar false alert happened to another man who also did not have a criminal record, but who had darker skin than Leo and came from a typically poorer area. He was wrongfully jailed for six days because the system’s alert was used to justify imprisoning him – despite the fact that his name was not a match.

Undermining police authority

If the purpose of policing is to catch criminals and keep people safe, then Leo’s experience is a great example of why facial recognition does not work. Four officers spent a combined total of around 20 hours trying to resolve his issue (at the taxpayers’ expense, he points out). That doesn’t include the time spent afterwards by the public prosecutor to try and work out what went wrong. Leo recalls that the police were frustrated to be tied up with bureaucracy and attempts to understand the decision that the system had made, whilst their posts were left vacant and real criminals went free.

The police told Leo that the Commissioner receives a bonus tied to the use of the facial recognition system. They confided that it seemed to be a political move, not a policing or security improvement. Far from helping them solve violent crime – one of the reasons often given for allowing such intrusive systems – it mostly flagged non-violent issues such as witnesses who had not turned up for trials because they hadn’t received a summons, or parents who had overdue child support payments.

The implications for police autonomy are stark. Leo points out that despite swift confirmation that he was not the suspect, the police had neither the ability nor the authority to override the alert. They were held hostage to a system that they did not properly understand or control, yet they were compelled to follow its instructions and decisions without knowing how or why it had made them.

Technology is a tool made by humans, not a source of objective truth or legal authority. In Leo’s case, the police assumed early on that the match was not legitimate because he did not fit their perception of a criminal. But for others also wrongfully identified, the assumption was that they did look like a criminal, so the system was assumed to be working correctly. Global anti-racism activists will be familiar with these damaging, prejudicial beliefs. Facial recognition does not solve human bias, but rather supports it by giving discriminatory human assumptions a false sense of “scientific” legitimacy.

Technology cannot fix a broken system

The issues faced by Leo, and the officers who had to resolve his situation, reflect deeper systemic problems which cannot be solved by technology. Biased or inefficient police processes, mistakes with data entry, and a lack of transparency do not disappear when you automate policing – they get worse.

Leo has had other experiences with the failings of biometric technology. A few years ago, he and his colleagues experimented with developing fingerprinting software at the request of a client, but ultimately decided against it. “We realised that biometric systems are not good enough,” he says. “It feels good enough, it[’s] good marketing, but it’s not safe.” He points to the fact that he was recently able to unlock his phone using a picture of himself. “See? You are not secure.”

Leo shared his story – which quickly went viral on Twitter – because he wanted to show that “there is no magic in technology.” Because he is a software engineer, people see him as a “medieval wizard”. As he sees it, though, he is someone with the responsibility and ability to show people the truth behind government propaganda about facial recognition, starting with his own experience.

Aftermath

I asked Leo whether the government had considered the experiences of those affected. He laughed sardonically. “No, no, absolutely not, no.” He continues: “I shouldn’t be in that database, because I didn’t commit any crime.” Yet it took the public prosecutor four months to confirm the removal of his data, and the metro facial recognition system is still in use today. Leo thinks it has been a successful marketing tool for a powerful city government wanting to assuage citizens’ safety concerns. He thinks that the people have been lied to, and that fundamentally unsafe technology cannot make the city safer.

A perfect storm of human errors, systemic policing issues and privacy violations led to Leo being included in the database, but this is by no means a uniquely Argentinian problem. The Netherlands, for example, has included millions of people in a criminal database despite their never having been charged with a crime. Leo reflects that “the system is the whole thing, from the beginning to end, from the input to the output. The people working in technology just look at the algorithms, the data, the bits. They lose the big picture. That’s why I shared my story … Just because.” We hope the EU is taking notes.

As told to Ella Jakubowska by Leo Colombo

Dismantling AI Myths and Hype (04.12.2019)
https://daniel-leufer.com/2019/12/05/dismantling-ai-myths-and-hype/

Data-driven policing: The hardwiring of discriminatory policing practices across Europe (19.11.2019)
https://www.citizensforeurope.eu/learn/data-driven-policing-the-hardwiring-of-discriminatory-policing-practices-across-europe

Facial recognition and fundamental rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/

The many faces of facial recognition in the EU (18.12.2019)
https://edri.org/the-many-faces-of-facial-recognition-in-the-eu/

Your face rings a bell: Three common uses of facial recognition (15.01.2020)
https://edri.org/your-face-rings-a-bell-three-common-uses-of-facial-recognition/

Stalked by your digital doppelganger? (29.01.2020)
https://edri.org/stalked-by-your-digital-doppelganger/

Facial recognition technology: fundamental rights considerations in the context of law enforcement (27.11.2019)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper.pdf

(Contribution by Ella Jakubowska, EDRi intern)

03 Feb 2020

Support our work by investing in a piece of e-clothing!

By EDRi

Your privacy is increasingly under threat. European Digital Rights works hard to have you covered. But there’s only so much we can do.

Help us help you. Help us get you covered.


Check out our 2020 collection!*

*The items listed below are e-clothes. That means they are electronic. Not tangible. But still very real – like many other things online.

Your winter stock(ings) – 5€
A pair of hot winter stockings can really help one get through cold and lonely winter days. Help us fight for your digital rights by investing in a pair of these superb privacy-preserving fishnet stockings. This delight also makes a lovely gift for someone special.


A hat you can leave on – 10€
Keep your head undercover with this marvellous piece of surveillance resistance. Adaptable to all temperatures and – for the record – to several CCTV models, this item really lives up to its value. This hat is an indispensable accessory when visiting your favourite public space packed with facial recognition technologies.


Winter/Summer Cape – 25€
Are you feeling heroic yet? Our flamboyant Winter/Summer cape is designed to keep you warm and cool. This stylish accessory takes the weight off your shoulders – purchase it and let us take care of fighting for your digital rights!


Just another White T-Shirt – 50€
A white t-shirt can do wonders when you’re trying to blend in with a white wall. This wildly unexciting but versatile classic is one of the uncontested fundamental pillars of your privacy enhancing e-wardrobe.


THE privacy pants ⭐️ – 100€
This ultimate piece of resistance is engineered to keep your bottom warm in the coldest winter, but also aired up during the hottest summer days. Its colour guarantees the ultimate tree (of knowledge) look. The item comes with a smart zipper.


Anti-tracksuit ⭐️ – 250€
Keep your digital life healthy with the anti-tracking tracksuit. The fabric is engineered to bounce out any attempt to get your privacy off track. Plus, you can impress your beloved babushka too.


Little black dress ⭐️ – 500€
Whether at a work cocktail party, a funeral, shopping spree or Christmas party – this dress will turn you into the center of attention, in a (strangely) privacy-respecting manner.


Sew your own ⭐️ – xxx€
Not convinced by any of the items above? Set your inner tailor free, customise your very own unique designer garment, and put a price tag of your choice on it.



⭐️ Items valued at 100€ and above are delivered with an (actual, analog, non-symbolic) EDRi iron-on privacy patch that you can attach to your existing (actual, analog, non-symbolic) piece of clothing or accessory. If you wish to receive this additional style and privacy enhancer, don’t forget to provide us with your postal address (either via the donation form, or in your bank transfer message)!


Questions? Remarks? Ideas? Please contact us at brussels [at] edri [dot] org!
