Technology and Social Change

Lecture 13: AI, State Surveillance and Human Rights

Bogdan G. Popescu

Tecnológico de Monterrey

Part 1: Motivating Puzzles & Learning Objectives

Three Puzzles to Guide Us

Puzzle 1: Adoption Variance

Why does China scale public facial recognition while Germany’s use is contested and legally constrained?

Puzzle 2: Compliance vs. Backlash

When does surveillance produce citizen compliance versus organized resistance?

Puzzle 3: The Privacy Law Paradox

Why do strong privacy laws coexist with mass data collection?

What This Lecture Is NOT About

Not a technology primer

We assume basic AI literacy; focus is on political consequences

Not normative advocacy

We analyze mechanisms, not advocate positions

Not AI ethics in the abstract

We study how power structures mediate ethical outcomes

Learning Objectives

By the end of this lecture, you should be able to:

  1. Trace causal mechanisms from AI capability to changes in state-citizen relations
  2. Identify institutional variables that modulate surveillance outcomes
  3. Predict variation in AI-enabled control across regime types
  4. Evaluate policy instruments by mechanism-specific targeting
  5. Recognize distributional consequences for vulnerable groups

The Master Causal Framework

%%{init: {'theme':'base','themeVariables':{'fontSize':'20px','primaryColor':'#e8eeeb','primaryTextColor':'#1e293b','primaryBorderColor':'#4a7c6f','lineColor':'#334155','secondaryColor':'#f5f0e8','tertiaryColor':'#fdf2e9'},'flowchart':{'useMaxWidth':true,'width':1150,'height':650,'nodeSpacing':40,'rankSpacing':50,'htmlLabels':true,'curve':'basis','diagramPadding':8}}}%%
flowchart LR
  A["AI<br/>Capability"] --> B["Data<br/>Extraction"]
  B --> C["Surveillance and<br/>Decision Systems"]
  C --> D["Power Shift:<br/>State, Citizens, Firms"]
  D --> E["Political<br/>Feedback"]
  E --> F["Institutional<br/>Adaptation"]
  F -.-> A
  D --> G["Distribution of<br/>Rights and Harms"]

Author’s illustration. Every slide addresses links in this chain.

Four Anchor Scholars

James Scott — Seeing Like a State (1998)
States simplify society to make it “legible” for control

Charles Tilly — Coercion, Capital, and European States (1990)
State capacity = extraction + monitoring + coercion

Acemoglu & Robinson — Why Nations Fail (2012)
Inclusive vs. extractive institutions determine accountability

Julie Cohen — Between Truth and Power (2019)
Privacy is a power resource, not mere secrecy

So What? Why AI Changes Everything

  • AI does not create new goals for states
  • States have always wanted comprehensive monitoring
  • What changed: the cost structure of surveillance collapsed
  • This transforms what is politically feasible
  • Next: What exactly did AI make technically possible?

Part 2: What AI Changes Technically

AI as Task Automation

Prediction: Given inputs, forecast likely outcomes

  • Credit default, recidivism, disease outbreaks

Classification: Given inputs, assign categories

  • Face to identity; text to sentiment; behavior to threat

Generation: Given prompts, produce content

  • Text, images, synthetic media for disinformation

AI Reduces State Monitoring Costs

Figure 1

Key Technical Capabilities for State Control

Facial Recognition — Match faces across databases; identify in crowds

Natural Language Processing — Analyze text at scale; detect sentiment

Predictive Analytics — Risk-score individuals; forecast protests

Behavioral Biometrics — Identify by gait, typing, voice patterns
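The first capability, matching faces across databases, can be sketched as nearest-neighbor search over face embeddings. A minimal illustration, with hand-picked 4-dimensional vectors standing in for the 128- to 512-dimensional embeddings a real neural network would produce (the names and numbers are hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical enrolled database: identity -> embedding
database = {
    "alice": [0.9, 0.1, 0.3, 0.4],
    "bob":   [0.1, 0.8, 0.7, 0.2],
}

# Probe embedding, e.g. a face captured by a crowd camera
probe = [0.85, 0.15, 0.35, 0.38]

# Identification = pick the enrolled identity with the most similar embedding
best = max(database, key=lambda name: cosine(probe, database[name]))
print(best)  # -> alice
```

The political point follows from the mechanics: once a database exists, every new camera feed is just another probe vector, so the marginal cost of identifying one more person in one more place approaches zero.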

Discussion Exercise 1

Scenario: Your university announces AI-powered attendance monitoring using facial recognition cameras in every classroom.

Discuss with a neighbor (5 min):

  1. Which AI capability is being used here?
  2. What information becomes “visible” that was not before?
  3. How might students change their behavior?

Connect your answers to Scott’s legibility concept.

So What? From Capability to Deployment

  • AI provides the tools; states choose the application
  • Technical capability is necessary but not sufficient
  • The same tool produces different outcomes in different hands
  • Next: How do states actually deploy these capabilities?

Part 3: State Capacity and Control Mechanisms

Tilly’s State Capacity Framework

Charles Tilly identified core state functions:

  • Extraction — obtaining resources (taxes, data, labor)
  • Monitoring — observing subject populations
  • Coercion — compelling compliance through force or threat
  • Protection — providing security from threats
  • Adjudication — resolving disputes, allocating rights

AI enhances all five, but especially monitoring and coercion.

AI Control Instruments

| Instrument | State Incentive | Rights Risk | Failure Mode |
|---|---|---|---|
| Mass Surveillance | Threat detection | Privacy, assembly | Function creep |
| Predictive Policing | Efficiency | Due process | Feedback loops |
| Welfare Fraud Detection | Cost reduction | Dignity | False positives |
| Social Credit Systems | Compliance | Autonomy | Arbitrariness |
| Border Control | Security | Asylum rights | Exclusion errors |
| Censorship | Narrative control | Expression | Overblocking |

Author’s illustration based on comparative policy analysis.

Scott’s Legibility Concept

Core claim: States simplify complex social reality to govern

  • Standardized names, maps, censuses, registries

Legibility enables intervention:

  • What can be seen can be measured and targeted

AI vastly expands legibility:

  • Movement, social networks, emotions now trackable
  • AI is “Seeing Like a State 2.0”

Surveillance Produces Chilling Effects

%%{init: {'theme':'base','themeVariables':{'fontSize':'22px','primaryColor':'#e8eeeb','primaryTextColor':'#1e293b','primaryBorderColor':'#4a7c6f','lineColor':'#334155','secondaryColor':'#f5f0e8','tertiaryColor':'#fdf2e9'},'flowchart':{'useMaxWidth':true,'width':1150,'height':650,'nodeSpacing':50,'rankSpacing':60,'htmlLabels':true,'curve':'basis','diagramPadding':8}}}%%
flowchart LR
  A["AI-Enabled<br/>Surveillance"] --> B["Perceived<br/>Observation"]
  B --> C["Self-<br/>Censorship"]
  C --> D["Chilling<br/>Effects"]
  D --> E["Reduced<br/>Participation"]
  E --> F["Democratic<br/>Erosion"]

Author’s illustration. Even without punishment, belief in surveillance changes behavior — especially dissent (Penney, 2016).

Case: Predictive Policing

How it works:

  • Historical crime + demographic data produce risk scores
  • Police resources allocated to “high-risk” areas

The feedback loop:

  • More policing → more detected crime → higher risk scores

Distributional consequence:

  • Over-policed communities remain over-policed
  • Historical discrimination is baked into algorithms

The Predictive Policing Feedback Loop

%%{init: {'theme':'base','themeVariables':{'fontSize':'22px','primaryColor':'#e8eeeb','primaryTextColor':'#1e293b','primaryBorderColor':'#4a7c6f','lineColor':'#334155','secondaryColor':'#f5f0e8','tertiaryColor':'#fdf2e9'},'flowchart':{'useMaxWidth':true,'width':1150,'height':650,'nodeSpacing':50,'rankSpacing':60,'htmlLabels':true,'curve':'basis','diagramPadding':8}}}%%
flowchart LR
  A["Historical<br/>Crime Data"] --> B["ML Model<br/>Trains"]
  B --> C["Predicts Risk<br/>in Same Areas"]
  C --> D["Police Deployed<br/>to Those Areas"]
  D --> E["More Arrests<br/>Recorded There"]
  E --> A

Author’s illustration. The prediction changes the outcome it predicts (“performative prediction”).

Selective Enforcement and Legitimacy

The selective enforcement logic:

  • AI provides comprehensive information on violations
  • States cannot enforce all laws; discretion is political

Consequences:

  • Laws become weapons against opponents
  • Compliance with law does not guarantee safety

Acemoglu & Robinson: Extractive institutions use law instrumentally; inclusive institutions constrain discretion

Discussion Exercise 2

Scenario: A democratic government proposes AI welfare fraud detection. The system analyzes spending, location, and social media to flag suspicious claims.

Discuss with a neighbor (5 min):

  1. Which state capacity is enhanced? (Tilly)
  2. What becomes “legible” that was not before? (Scott)
  3. What is the likely failure mode?
  4. Who is harmed most by false positives?

So What? Control Is One Side

  • States gain powerful new tools for monitoring and coercion
  • But surveillance affects more than security
  • It reshapes the power balance between institutions and individuals
  • Next: How does this affect privacy and rights?

Part 4: Privacy, Rights, and Inequality

Beyond “Privacy as Secrecy”

Common but inadequate view:

“Nothing to hide, nothing to fear”

Julie Cohen’s alternative:

Privacy preserves conditions for autonomy and self-development

Privacy as power:

  • Information asymmetries structure bargaining power
  • Privacy protects the weaker party from the stronger

Privacy as a Power Resource

%%{init: {'theme':'base','themeVariables':{'fontSize':'22px','primaryColor':'#e8eeeb','primaryTextColor':'#1e293b','primaryBorderColor':'#4a7c6f','lineColor':'#334155','secondaryColor':'#f5f0e8','tertiaryColor':'#fdf2e9'},'flowchart':{'useMaxWidth':true,'width':1150,'height':650,'nodeSpacing':50,'rankSpacing':60,'htmlLabels':true,'curve':'basis','diagramPadding':8}}}%%
flowchart LR
  A["Institution Knows<br/>Much About You"] --> C["Information<br/>Asymmetry"]
  B["You Know Little<br/>About Institution"] --> C
  C --> D["Power<br/>Imbalance"]
  D --> E["Privacy Protection =<br/>Reduce Asymmetry"]

Author’s illustration based on Cohen (2019). The individual is transparent; the institution is opaque.

Rights Tradeoffs in AI Governance

Security vs. Liberty

  • Surveillance may reduce crime — but restricts freedom

Efficiency vs. Due Process

  • Automated systems are faster — but deny appeals

Accuracy vs. Discrimination

  • 95% accurate systems still harm 5% — who are they?

The question is not whether tradeoffs exist, but who decides.
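The accuracy point is a base-rate effect and is easy to quantify. A worked example with hypothetical numbers: a fraud detector that catches 95% of true cases and wrongly flags only 5% of honest claimants is still mostly wrong about the people it flags, because the targeted behavior is rare:

```python
population = 1_000_000
fraud_rate = 0.001           # 0.1% of claims are actually fraudulent (hypothetical)
sensitivity = 0.95           # flags 95% of true fraud
false_positive_rate = 0.05   # wrongly flags 5% of honest claimants

true_fraud = population * fraud_rate
flagged_fraud = true_fraud * sensitivity                          # 950 flags
flagged_innocent = (population - true_fraud) * false_positive_rate  # ~49,950 flags

# Precision: of everyone flagged, what fraction is actually fraudulent?
precision = flagged_fraud / (flagged_fraud + flagged_innocent)
print(round(precision, 3))  # -> 0.019
```

Roughly 98% of flagged claimants are innocent, and those false positives fall on whoever the system scrutinizes most, which connects directly to the next question: who bears the harms?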

Who Bears the Harms?

Pattern: AI harms fall disproportionately on marginalized groups

  • Facial recognition: higher error on darker-skinned faces
  • Predictive policing: over-policing minority neighborhoods
  • Welfare systems: poorest face most intrusive scrutiny

Why this pattern?

  • Training data reflects historical discrimination
  • Affected groups have less power to contest errors
  • Harms to marginalized groups are politically cheaper

Facial Recognition Accuracy Disparities

Figure 2

From Targeting to Legitimacy Crisis

%%{init: {'theme':'base','themeVariables':{'fontSize':'20px','primaryColor':'#e8eeeb','primaryTextColor':'#1e293b','primaryBorderColor':'#4a7c6f','lineColor':'#334155','secondaryColor':'#f5f0e8','tertiaryColor':'#fdf2e9'},'flowchart':{'useMaxWidth':true,'width':1150,'height':650,'nodeSpacing':35,'rankSpacing':45,'htmlLabels':true,'curve':'basis','diagramPadding':8}}}%%
flowchart LR
  A["AI-Enabled<br/>Targeting"] --> B["Disparate<br/>Impact"]
  B --> C["Grievance<br/>Accumulation"]
  C --> D["Viral Cases<br/>and Visibility"]
  D --> E["Legitimacy<br/>Crisis"]
  E --> F["Political<br/>Pressure"]
  F --> G["Institutional<br/>Reform?"]

Author’s illustration. Reform requires media freedom, civil society, and electoral channels. Without these, grievance accumulates under repression.

So What? Institutions Shape Outcomes

  • Privacy erosion is not uniform across countries
  • The same technology produces different rights outcomes
  • The key variable is institutional context
  • Next: How do institutions modulate AI effects?

Part 5: Comparative Institutions and Regimes

Acemoglu & Robinson: Institutions Matter

Core distinction:

  • Inclusive institutions distribute power and constrain elites
  • Extractive institutions concentrate power for elite benefit

Applied to AI surveillance:

  • Inclusive institutions: AI constrained by law and oversight
  • Extractive institutions: AI amplifies elite control

Key insight: Same technology, different outcomes by institutional context

Regime Types and AI Outcomes

%%{init: {'theme':'base','themeVariables':{'fontSize':'20px','primaryColor':'#e8eeeb','primaryTextColor':'#1e293b','primaryBorderColor':'#4a7c6f','lineColor':'#334155','secondaryColor':'#f5f0e8','tertiaryColor':'#fdf2e9'},'flowchart':{'useMaxWidth':true,'width':1150,'height':650,'nodeSpacing':35,'rankSpacing':45,'htmlLabels':true,'curve':'basis','diagramPadding':8}}}%%
flowchart LR
  A["Same AI<br/>Technology"] --> B["Democracy<br/>(Inclusive)"]
  A --> C["Hybrid<br/>Regime"]
  A --> D["Autocracy<br/>(Extractive)"]
  B --> E["Constrained Use<br/>Courts, Press, Elections"]
  C --> F["Volatile Outcomes<br/>Leader-Dependent"]
  D --> G["Systematic<br/>Repression"]

Author’s illustration based on Acemoglu & Robinson (2012).

Institutional Modulators of AI Effects

| Safeguard | What It Prevents | What It Cannot Prevent |
|---|---|---|
| Judicial independence | Arbitrary punishment | Slow response; limited tech capacity |
| Media freedom | Secrecy and cover-ups | Disinformation; elite capture |
| Civil society | Elite-only deliberation | Repression; co-optation |
| Procurement rules | Vendor capture; corruption | Slow adaptation to threats |
| Data protection authority | Unlawful data collection | Under-resourcing; regulatory capture |

Author’s illustration. No single safeguard is sufficient; systems require layered protections.

Global Surveillance Camera Density

Figure 3

Discussion Exercise 3

Scenario: Hungary — formally democratic but with increasingly centralized executive power — considers adopting Chinese-style facial recognition in public spaces.

Discuss with a neighbor (5 min):

  1. Using the regime-type diagram, what outcome would you predict?
  2. Which institutional modulators are most critical?
  3. What evidence would make you revise your prediction?

So What? Policy Must Match Context

  • No universal “best” governance tool exists
  • Effective policy depends on institutional capacity
  • Tools that work in democracies fail in autocracies
  • Next: What specific tools are available?

Part 6: Policy Tools and Discussion

A Menu of Governance Tools

Data stage: Minimization, purpose limitation, consent requirements

Model stage: Algorithmic auditing, transparency, impact assessments

Deployment stage: Human-in-the-loop, due process, judicial authorization

System level: Independent oversight, sunset clauses, procurement rules

Each tool targets a specific stage. Single-point interventions fail.

What Works Where?

| Tool | Democracy | Hybrid Regime | Autocracy |
|---|---|---|---|
| Data minimization | Effective (enforceable) | Partial (inconsistent) | Ineffective (state ignores) |
| Algorithmic auditing | Effective (capacity exists) | Limited (expertise lacking) | Cosmetic (regime controls) |
| Judicial authorization | Effective (independent courts) | Variable (courts pressured) | Ineffective (courts captured) |
| International pressure | Moderate (less needed) | Potentially effective | Potentially effective (if costs imposed) |

Author’s illustration. Effectiveness depends on institutional context, not technical design alone.

Discussion Questions (10 min)

  1. When does surveillance shift from deterring crime to chilling political activity?
  2. Does AI accelerate democratic erosion, or merely reflect existing institutional weakness?
  3. Should democracies ban surveillance technology exports to autocracies?
  4. How does the analysis change when corporations collect the data?
  5. Could AI empower citizen monitoring of states?
  6. Can workers organize against algorithmic management?

Summary: Four Key Takeaways

1. AI reduces monitoring costs, expanding state control capacity

  • But effects depend on institutional context

2. Privacy is a power resource, not secrecy

  • Information asymmetries structure institutional bargaining

3. Harms are distributed unequally

  • Marginalized groups bear disproportionate costs

4. Institutions mediate AI effects

  • Same technology, different outcomes across regime types

Returning to Our Three Puzzles

Puzzle 1 (Adoption Variance)

Institutional configurations — extractive vs. inclusive — shape deployment

Puzzle 2 (Compliance vs. Backlash)

Institutional checks determine whether grievance yields reform or repression

Puzzle 3 (Privacy Law Paradox)

Enforcement gaps and weak regulatory capacity explain why laws fail

References

Scott, J. C. (1998). Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.

Tilly, C. (1990). Coercion, capital, and European states, AD 990–1990. Basil Blackwell.

Acemoglu, D., & Robinson, J. A. (2012). Why nations fail: The origins of power, prosperity, and poverty. Crown Business.

Cohen, J. E. (2019). Between truth and power: The legal constructions of informational capitalism. Oxford University Press.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77–91.

Penney, J. W. (2016). Chilling effects: Online surveillance and Wikipedia use. Berkeley Technology Law Journal, 31(1), 117–182.

Comparitech. (2021). Surveillance camera statistics: Which cities have the most CCTV cameras?