How informed consent has become a mechanism legitimizing digital power
Table of Contents
- 1. Introduction – The Privacy Paradox
- 2. From Data Protection Rights to the Bureaucracy of Consent
- 3. GDPR and the Ideal of Individual Control
- 4. Informed Consent as Digital Asymmetry
- 5. Dark Patterns and the Manipulation of Choice
- 6. The Meta-WhatsApp Case: Forced Consent and Relational Capitalism
- 7. Google and the Legal Infrastructure of Tracking
- 8. TikTok and Minors: The Illusion of Pedagogical Privacy
- 9. Data Capitalism and the Transformation of Freedom
- 10. Structural Failures in Europe’s Digital Governance
- 11. Digital Sovereignty and Substantive Freedom
- 12. Conclusion – From Privacy to Substantive Digital Freedom
- Main References (as cited)
1. Introduction – The Privacy Paradox
The word privacy entered the public vocabulary as a synonym for protection, confidentiality, and personal security.
Yet, in today’s digital world, it has become an ambiguous concept: the promise of protection often coincides with the lawful ability of major tech companies to exploit personal data.
The law on privacy, once designed to defend citizens from economic and governmental power, has now become a mechanism legitimizing the private power of digital platforms.
Users click “Accept All” not out of genuine will, but because they have no real alternative.
This is the paradox at the heart of modern privacy: the rule meant to protect has evolved into a tool that justifies exploitation, provided it is legally compliant.
This article analyzes the evolution of personal data protection in Europe, showing how the General Data Protection Regulation (GDPR – Regulation EU 2016/679) has institutionalized a bureaucracy of consent rather than genuine informational freedom.
Through case studies such as Meta/WhatsApp, Google, and TikTok, it demonstrates how privacy has become a form of asymmetric control disguised as self-determination.
2. From Data Protection Rights to the Bureaucracy of Consent
The right to privacy was first articulated in Anglo-American legal thought in 1890, when Samuel Warren and Louis Brandeis published “The Right to Privacy” in the Harvard Law Review, defining it as the right to be let alone.
With the rise of information technology and data processing, privacy became the right to control personal information, eventually evolving into the modern concept of data protection.
The European Union codified this idea with Directive 95/46/EC, later replaced by Regulation (EU) 2016/679 (GDPR), which became applicable in May 2018.
The GDPR strengthened user rights—such as the right to erasure (Art. 17), the right to data portability (Art. 20), and the conditions for valid consent (Art. 7).
However, in practice, these rights often created a complex bureaucratic framework that burdens individuals more than it limits data controllers.
Few users ever read the dense privacy notices before clicking “Accept.”
Thus, the informed consent that European law regards as the highest expression of personal autonomy has become a legal fiction—a procedural act devoid of real meaning.
3. GDPR and the Ideal of Individual Control
The GDPR rests on a simple principle: data control must remain in the hands of individuals.
Article 1 states that its aim is “to protect the fundamental rights and freedoms of natural persons and, in particular, their right to the protection of personal data.”
Yet, this noble goal faces three structural barriers:
- Informational asymmetry: users cannot truly grasp the implications of their consent.
- Technical asymmetry: platforms possess the expertise to use data in opaque, complex ways.
- Functional asymmetry: digital services are essential to modern life—making refusal impractical.
As a result, freedom of choice is theoretical but not real.
As Shoshana Zuboff explains in The Age of Surveillance Capitalism (2019), this system represents a form of behavioral extraction, turning human experience into raw material for predictive markets.
Data protection, therefore, is not merely a legal issue but a political and economic struggle. Even the most advanced laws, like the GDPR, falter when confronted with the structural power of global platforms.
4. Informed Consent as Digital Asymmetry
The GDPR requires that consent be “freely given, specific, informed and unambiguous” (Art. 4(11)), and Article 7 sets the conditions under which such consent is valid.
This assumes individuals can evaluate consequences and refuse without detriment.
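These conditions can be read as design constraints. The following is a minimal, hypothetical Python sketch—`ConsentRecord` and its fields are illustrative assumptions, not any platform’s actual data model—showing what purpose-specific, easily revocable consent would look like in code:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """One record per purpose: 'specific' consent means bundled
    'accept everything' grants are not representable here."""
    purpose: str                              # e.g. "ad_personalization"
    granted_at: Optional[datetime] = None
    withdrawn_at: Optional[datetime] = None

    def grant(self) -> None:
        self.granted_at = datetime.now(timezone.utc)
        self.withdrawn_at = None

    def withdraw(self) -> None:
        # Art. 7(3): withdrawing must be as easy as granting —
        # one call, no extra conditions attached.
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.granted_at is not None and self.withdrawn_at is None

# Each purpose is consented to (or refused) independently.
analytics = ConsentRecord("analytics")
ads = ConsentRecord("ad_personalization")
analytics.grant()          # granting one purpose says nothing about the others
ads.grant()
ads.withdraw()             # withdrawal is a single, symmetric operation
```

Notably, “freely given” cannot be encoded at all: it depends on whether refusal carries a real cost outside the system—exactly the asymmetry discussed below.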
In practice, digital consent is a coerced choice.
Consider WhatsApp, which in 2021 updated its Terms of Service to enable data sharing with Meta Platforms Inc.
Users who refused risked losing access or functionality.
While European regulators—including Italy’s Garante per la Protezione dei Dati Personali—investigated, the essential imbalance remained: formal choice existed, but the alternative was social and functional exclusion.
Legally, this resembles contractual asymmetry, where the weaker party adheres to terms dictated by the stronger.
Because the consent is “active,” it remains legally valid—a contradiction that transforms privacy law into a technology of control.
As Evgeny Morozov notes, this represents a “soft coercion,” where freedom of choice exists only as a functional illusion.
The rhetoric of informed consent, rather than empowering users, serves to legalize surveillance.
5. Dark Patterns and the Manipulation of Choice
Digital interfaces are never neutral.
Reports from the European Data Protection Board (EDPB)—notably its 2022 guidelines on deceptive design patterns—reveal deliberate interface designs meant to steer users toward data-sharing behaviors that benefit corporations.
For instance, during app installations or cookie pop-ups, the “Accept All” button is large and colorful, while “Reject All” is small or hidden.
Both the ePrivacy Directive (2002/58/EC) and the GDPR prohibit such deceptive design, yet enforcement remains uneven and reactive.
A landmark decision came in 2022, when France’s CNIL fined Google €150 million for making it “easier to accept than to refuse cookies.”
The CNIL ruled that “the freedom to refuse must be equivalent to the freedom to accept”—a crucial precedent for digital rights.
However, fines have little effect on the structural business model of platforms, which depend on massive data collection for targeted advertising.
Dark patterns are not design flaws; they are deliberate features of behavioral capitalism.
6. The Meta-WhatsApp Case: Forced Consent and Relational Capitalism
The Meta-WhatsApp case epitomizes how privacy can be used as a legal shield for corporate interests.
In 2021, WhatsApp revised its policy to allow Meta to use aggregated data to “improve its services.”
This included metadata such as frequency of use, contact lists, and communication patterns—all integrated with Facebook and Instagram.
The Irish Data Protection Commission (DPC) fined Meta €390 million (January 2023) for violating Article 6 of the GDPR (lack of lawful basis).
Yet Meta was permitted to continue processing by switching its legal basis to “legitimate interest”—effectively transforming an unlawful practice into a compliant one.
This illustrates how European law, despite strong principles, can be flexible toward dominant players.
Interpretations of legal bases—consent, contract, legitimate interest—become tools of negotiation, allowing compliance without genuine change.
Meta’s model of relational capitalism thrives on this: social sharing is framed as personal freedom, while the network transforms relationships into commodified transparency.
7. Google and the Legal Infrastructure of Tracking
If Meta represents relational surveillance, Google embodies infrastructural surveillance.
Its ecosystem—Search, Android, Gmail, Maps, Chrome, YouTube—collects and correlates data across all aspects of daily life.
According to Google’s 2024 Transparency Report, the company processes over 80 trillion queries annually, an informational resource unprecedented in human history.
Most of this data collection is technically legal, based on user consent.
Installing Android, for example, grants multiple tracking permissions for advertising and geolocation.
The issue is not formal violation but the legal normalization of surveillance.
In Google Spain (C-131/12, 2014), the Court of Justice of the European Union (CJEU) established the right to be forgotten, but its impact remained limited to search results.
Since then, Google has expanded behavioral profiling through products like AdSense and DoubleClick.
While these rely on “pseudonymized” data, cross-platform aggregation still allows reconstruction of identifiable profiles.
As Byung-Chul Han writes in The Transparency Society (2012), modern individuals live under “coercive transparency”—freedom reduced to the obligation of self-exposure.
8. TikTok and Minors: The Illusion of Pedagogical Privacy
TikTok represents a critical test for digital ethics, especially concerning children’s data.
With over 1.6 billion active users, it has repeatedly faced European investigations over underage data processing.
In January 2021, the Italian Data Protection Authority temporarily banned TikTok from processing the data of users whose age could not be verified, after a fatal “challenge” incident involving a ten-year-old girl in Palermo.
Authorities found TikTok’s age verification ineffective and its algorithmic profiling of minors in breach of Article 8 of the GDPR, which governs children’s consent for information society services.
Despite sanctions, TikTok continues to rely on datafication of youth behavior: likes, comments, and watch time feed algorithms that both learn and manipulate user preferences.
While ethically troubling, this remains formally covered by the blanket consents bundled into registration.
As sociologist Manuel Castells noted, we live in a network society without childhood: an environment where constant connectivity erases critical distance.
Privacy here becomes a failed pedagogy—a promise of awareness that results in dependence.
9. Data Capitalism and the Transformation of Freedom
Behind Meta, Google, or TikTok lies the structural logic of data capitalism.
According to Nick Srnicek (Platform Capitalism, 2017), personal data are today’s key economic resource, used to predict and influence future behavior.
In classical capitalism, companies produced goods to satisfy needs; in data capitalism, they produce needs through data.
Every search, message, and purchase becomes input for machine learning systems that model reality for predictive efficiency.
Austrian lawyer Max Schrems, through Schrems I (C-362/14) and Schrems II (C-311/18), demonstrated how EU citizens cannot know where their data end up once transferred abroad.
His cases invalidated both the Safe Harbor and Privacy Shield frameworks, demonstrating that European data-protection guarantees cannot be reconciled with U.S. surveillance law.
Data capitalism thrives not on illegality but on functional capture of legality.
Each consent click legitimizes extraction. Freedom becomes the condition of subordination.
As Luciano Floridi argues in The Ethics of Information (2013), humans are now both subjects and informational objects.
Our digital identity is not a mirror of the real one—it is its operational duplicate, owned and manipulated by algorithmic systems.
Thus, privacy rights are necessary but insufficient: they protect against abuse, not against exploitation that is formally lawful.
10. Structural Failures in Europe’s Digital Governance
Despite leading global standards, Europe struggles with effective data governance.
The GDPR is reactive: it acts after harm occurs.
Supervisory authorities are fragmented and underfunded, reducing deterrence.
The EDPB repeatedly calls for harmonization, but tech giants exploit territorial competence—for example, Meta’s choice of Ireland for its European headquarters, where regulators apply a cooperative approach.
Consequently, European citizens enjoy formal equality but unequal enforcement.
New frameworks such as the AI Act and Data Act (2023–2024) attempt to broaden digital sovereignty by addressing data circulation, transparency, and access.
Yet without a cultural shift toward active digital citizenship, these may remain tools of formal compliance, not empowerment.
11. Digital Sovereignty and Substantive Freedom
Digital sovereignty must mean more than technological autonomy; it must ensure personal control over data.
Every citizen should have the ability to decide—easily and reversibly—who may use their data, for how long, and for what purpose.
Italian jurist Stefano Rodotà envisioned this shift: from privacy as “the right to be left alone” to privacy as the power of informational self-determination.
As he wrote, “freedom does not mean escaping power, but being able to control it.”
Achieving this requires three pillars:
- Algorithmic transparency – understanding how data shape automated decisions.
- Platform interoperability – making data portability (Art. 20 GDPR) truly effective.
- Digital literacy – teaching privacy as a civic competence, not merely a legal duty.
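The second pillar—effective portability—ultimately hinges on structured, machine-readable exports that another controller can ingest. A minimal sketch in Python of what an Art. 20-style export could look like; `export_user_data` and its fields are hypothetical illustrations, not any platform’s actual API:

```python
import json
from datetime import datetime, timezone

def export_user_data(user: dict, purposes: list) -> str:
    """Serialize a data subject's records in a structured, commonly
    used, machine-readable format (JSON here), so they can be
    transmitted to another controller without obstruction."""
    package = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "format": "application/json",
        "subject": user,
        "processing_purposes": purposes,
    }
    return json.dumps(package, indent=2, ensure_ascii=False)

# Hypothetical example data
user = {"id": "u-123", "email": "alice@example.org"}
purposes = [{"purpose": "analytics", "consented": True}]
blob = export_user_data(user, purposes)
parsed = json.loads(blob)   # a receiving controller can parse it directly
```

The design choice matters: a PDF dump is technically an “export,” but only an open, documented schema makes portability—and thus competition between platforms—real.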
Without these, digital sovereignty risks remaining a rhetorical construct, while platforms continue to exercise para-state power legitimized by consent.
12. Conclusion – From Privacy to Substantive Digital Freedom
The evolution of privacy in the 21st century reveals a disturbing trajectory: from fundamental right to bureaucratic ritual.
Clicking “Accept All” has become an entry ticket to society.
Informed consent no longer emancipates—it certifies subjection.
Privacy laws fail not for lack of ideals but for excess of formalism.
They protect the process, not the person; the act of consent, not the substance of freedom.
The future of digital liberty demands more than compliance. It requires a new social contract of information, where humans are not data sources but informational citizens endowed with cognitive rights.
We must move from passive protection to active participation.
As Rodotà reminded us, “privacy is not a refuge but a condition of freedom in the information society.”
Our challenge is no longer to defend privacy from its violators, but to liberate it from those who exploit it—legally.
Main References (as cited)
- Regulation (EU) 2016/679 – GDPR, Arts. 1, 6, 7, 8, 17, 20.
- Directive 2002/58/EC (ePrivacy).
- Court of Justice of the EU, Google Spain, C-131/12 (2014).
- Court of Justice of the EU, Schrems I, C-362/14 (2015).
- Court of Justice of the EU, Schrems II, C-311/18 (2020).
- CNIL Decision on Google, Jan. 6, 2022.
- Irish DPC, Meta Platforms Ireland, Jan. 2023.
- Rodotà S., Tecnopolitica, 2004.
- Zuboff S., The Age of Surveillance Capitalism, 2019.
- Srnicek N., Platform Capitalism, 2017.
- Floridi L., The Ethics of Information, 2013.
- Han B.-C., The Transparency Society, 2012.
- Morozov E., To Save Everything, Click Here, 2013.
