November 18, 2025
Does social media use cause increased polarization?
We saw this correlation in the US…
This correlation compares the levels of polarization for people using more and less social media.
What if we:
Let’s make a causal graph:
This is a common source of confounding:
“Exposure to social media increases political polarization.”
In the lead-up to the 2025 Canadian Election, platforms owned by Meta (Facebook, Instagram, WhatsApp) blocked users’ access to content from Canadian news organizations while simultaneously ending fact-checking (source). This “enabled hyper-partisan content to dominate in the absence of balanced media coverage.”
“causal variable”: exposure to social media during the election
outcome variable: political polarization \(\to\) willingness to go on a date with someone who supports an opposing political party.
“Exposure to social media increases political polarization.”
Imagine: If you used social media during the election, would you be willing to go on a date with someone who supported a rival political party?
Imagine: If you did not use social media during the election, would you go on a date with someone who supported a rival political party?
If we just examined the correlation between using social media and willingness to date across party lines…
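A minimal sketch of why the naive comparison can mislead: assume a hypothetical confounder (here, political interest) that drives both social media use and polarization, while social media itself has zero causal effect. The variable names and numbers are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: political interest drives both
# social media use and polarization. Social media has NO
# causal effect in this simulated world (true effect = 0).
interest = rng.normal(size=n)
social_media = (interest + rng.normal(size=n)) > 0
polarization = 2 * interest + rng.normal(size=n)

# Naive comparison of users vs. non-users:
naive_diff = (polarization[social_media].mean()
              - polarization[~social_media].mean())
print(round(naive_diff, 2))  # large positive "effect" despite zero causation
```

The naive difference is large and positive even though the true causal effect is zero: the confounder alone generates the correlation.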
What could be sources of confounding?
Must ask…
What if we could do this?
What if Meta (the monopolist corporation formerly known as Facebook) did the following:
What if we could do this?
We can’t know the causal effect for individual cases… but what would happen on average if we switched EVERYONE from “no social media” to “social media” exposure?
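The “switch everyone” idea can be sketched with hypothetical potential outcomes: each person has one outcome with social media exposure and one without. In a simulation (unlike in reality) we can write down both and average the difference. All numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical potential outcomes for each person:
# y1 = willing to date across party lines WITH social media exposure,
# y0 = willing WITHOUT exposure.
# We never observe both for one real person (the fundamental problem
# of causal inference), but a simulation lets us average over everyone.
y0 = rng.binomial(1, 0.6, size=n)  # willingness without social media
y1 = rng.binomial(1, 0.4, size=n)  # willingness with social media

ate = (y1 - y0).mean()  # average effect of switching EVERYONE
print(round(ate, 2))
```

The average of the individual differences is the average treatment effect, even though no single individual’s causal effect is observable.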
We would like to:
What permits us to use the sample to make unbiased inferences about the population?
What is an experiment?
Experiment:
Experiments give us unbiased (no confounding) correlation, if two key assumptions are met:
\(1\). Random Assignment: \(X\) is assigned at random
\(2\). Exclusion Restriction: only one thing is changing – \(X\)
\(^*\)Technically, there are other assumptions, but they are not important for this class
How do experiments solve confounding? Three ways to think about it…
Removes all confounding: even from variables we have not thought of.
Cases in “treatment” and “control” are the same in terms of potential outcomes, on average:
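This balancing can be seen in the same hypothetical confounded world as before: if assignment is a coin flip, it is independent of political interest, so the treatment/control comparison recovers the true effect. The true effect of \(-0.1\) is an assumed number for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Same hypothetical confounded world: interest drives polarization.
interest = rng.normal(size=n)
y0 = 2 * interest + rng.normal(size=n)  # polarization without social media
y1 = y0 - 0.1                           # assumed true causal effect: -0.1

# Random assignment: a coin flip, independent of interest,
# so potential outcomes are balanced across groups on average.
treated = rng.random(n) < 0.5
observed = np.where(treated, y1, y0)

diff = observed[treated].mean() - observed[~treated].mean()
print(round(diff, 2))  # close to the true effect of -0.1
```

Randomization breaks the confounder-to-treatment link, so the simple difference in means is now an unbiased estimate, even for confounders we never measured.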
Allcott et al (2024) actually ran a social media deactivation experiment:
Sample: random sample of US adults active in previous month (Facebook or Instagram)
Recruitment: study invitation appeared on their feed. Asked if willing to deactivate for 1 week (for $25) or 6 weeks (for $150).
Random Assignment: of those willing to deactivate (~20k FB, ~16k Insta), 27% given $150 and deactivated for 6 weeks; 73% given $25 and deactivated for 1 week.
Outcome: Surveyed on political polarization
Small reduction in affective polarization.
A key assumption is that we aren’t adding confounding in the design of the experiment
But the “Treatment” group…
Multiple differences between “treatment” and “control”
Vaccine clinical trials…
Why does it matter that clinical trials use placebos and are “double blind”?
Without placebo and double-blind…
Experiments are the best solution to confounding/FPCI
Strong severity says that evidence is convincing to the extent that its assumptions are checked:
But experiments have their limits:
All solutions to confounding face a trade-off between internal and external validity
Internal Validity: the extent to which the correlation of \(X\) and \(Y\) found in a research design is the true causal effect of \(X\) on \(Y\), i.e., does not suffer from confounding. (unbiased FOR THOSE CASES)
External Validity: is the degree to which the causal relationship we find in a study is relevant to the causal relationship in our causal question/claim
Study has external validity if it examines causal relationship for the cases we are interested in
Study has external validity if the causal variable in the study maps onto the concept/definition of the cause in the causal claim.
Does the efficacy of vaccines in clinical trials translate to real-world use?
More internal validity (unbiased estimate of causal effect) comes at the cost of external validity (relevance of study sample or cause to the theory)
What can we manipulate?
Who/what cases can we study?
| Solution | How Confounding Solved | Which Confounding Removed | Assumes | Internal Validity | External Validity |
|---|---|---|---|---|---|
| Experiment | Randomization breaks \(W \rightarrow X\) link | All confounding variables | \(X\) is random; change only \(X\) | High | Low |