1. Review
- causality as counterfactual
- potential outcomes
2. Types of Causal Claims
- deterministic causal claims
- probabilistic causal claims
3. Testing Causal Claims
- Fundamental Problem of Causal Inference
October 30, 2025

\(S\)cathing \(H\)atred of \(I\)nformation \(T\)echnology and the
\(P\)assionate \(H\)emorrhaging of \(O\)ur \(N\)eo-liberal \(E\)xperience

Chanting: “We must free the iPad babies, even if it gives us rabies”
“Exposure to absurdist neo-Luddite protest reduces average hours spent on smart devices.”
All causal claims are claims about how the world would be changed in an alternate timeline in which some thing (or things) were different than they actually are.
These alternate timelines/universes are counterfactuals
It follows that all causal claims can be re-stated as counterfactual claims
“Exposure to absurdist neo-Luddite protest reduces average hours spent on smart devices.”
\[\overbrace{\text{If people were not exposed to absurdist neo-Luddite protest}}^{\text{If-clause in Subjunctive Mood}}, \\ \underbrace{\text{then they would have spent more hours on smart devices.}}_{\text{Then-clause in Conditional Mood}}\]
Note: counterfactual claims get more complicated as your causal claim gets more complicated
With your neighbors: turn these causal claims into counterfactual claims.
“The expansion of NATO into Eastern Europe caused Russia to invade Ukraine”
implicitly claims that…
in the counterfactual world:
If NATO did not expand (the “cause” is not present), Russia would not have invaded Ukraine in February 2022 (the “effect” would be different).
Potential outcomes are values of variables that describe the factual world (the one that has occurred) and counterfactual worlds (those that have not).
Take the form of \(Y_i(X)\) or \(Outcome_{case}(Cause)\)
“The expansion of NATO into Eastern Europe caused Russia to invade Ukraine”
What are the potential outcomes this causal claim implies?
(TO THE BOARD)
Which of these potential outcomes is factual? Counterfactual?
\(\mathrm{Russian \ Invasion}_{Ukr}(\mathrm{E. \ Europ. \ NATO \ Memb.} = 0) = ?\) \(\mathrm{Russian \ Invasion}_{Ukr}(\mathrm{E. \ Europ. \ NATO \ Memb.} = 14) = ?\)
(red indicates \(\color{red}{\mathrm{counterfactual}}\))
\(\color{red}{\mathrm{Russian \ Invasion}_{Ukr}(\mathrm{E. \ Europ. \ NATO \ Memb.} = 0) = ?}\) \(\mathrm{Russian \ Invasion}_{Ukr}(\mathrm{E. \ Europ. \ NATO \ Memb.} = 14) = ?\)
What are the values of these potential outcomes if the following claim is true?
“The expansion of NATO into Eastern Europe caused Russia to invade Ukraine”
If this causal claim were true: then it implies these potential outcomes:
\(\color{red}{\mathrm{Russian \ Invasion}_{Ukr}(\mathrm{E. \ Europ. \ NATO \ Memb.} = 0) = \mathrm{No}}\) \(\mathrm{Russian \ Invasion}_{Ukr}(\mathrm{E. \ Europ. \ NATO \ Memb.} = 14) = \mathrm{Yes}\)
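The implied potential outcomes above can be written down directly. A minimal sketch in Python (the data structure and function name are my own, purely for illustration):

```python
# Potential outcomes implied by the claim "NATO expansion caused the
# invasion". Keys are values of the cause (number of E. European NATO
# members); the counterfactual entry is implied by the claim, never observed.
potential_outcomes = {
    "Ukraine": {
        0: "No",    # counterfactual: no invasion without NATO expansion
        14: "Yes",  # factual: invasion with 14 E. European members
    }
}

def potential_outcome(case, x):
    """Look up Y_case(X = x)."""
    return potential_outcomes[case][x]

# The claim is causal because the outcome differs across values of X:
print(potential_outcome("Ukraine", 14) != potential_outcome("Ukraine", 0))  # True
```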
Usually… different questions lead to different kinds of causal claims
And different types of causal claims imply different potential outcomes:
deterministic causal claims
claims about what happens with certainty under specific causal conditions
There are several varieties and combinations
A claim: “If Germany had not had an economic collapse during the Great Depression, Hitler and the Nazi Party would not have come to power.”
Head to menti.com and use code \(1587 \ 4520\)
If this claim is true: “If Germany had not had an economic collapse during the Great Depression, Hitler and the Nazi Party would not have come to power.”…
The fact that there was an economic collapse in Germany during the Great Depression does not mean that the Nazi takeover was inevitable.
A claim about a necessary condition: there is some cause \(C\) without which the effect \(E\) cannot occur
Claims about necessary conditions have specific implications about potential outcomes:
If we say that: “The Great Depression was a necessary condition for the Nazis to take power in Germany.”
What are the implied potential outcomes?
\(\mathrm{Nazis}_{Germany} \ (\mathrm{Economic \ Crisis = No}) = \mathrm{No}\)
\(\mathrm{Nazis}_{Germany} \ (\mathrm{Economic \ Crisis = Yes}) = \mathrm{Yes \ or \ No}\)
Something else might have had to happen, in addition to economic crisis, for Nazis to take power.
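The pattern above can be encoded as a check on potential outcomes. A sketch with a hypothetical function name; it tests only what the necessary-condition claim rules out:

```python
def consistent_with_necessary(y_cause_absent, y_cause_present):
    """'C is necessary for E' implies Y(C = No) = No.
    Y(C = Yes) may be Yes or No, since other conditions may also be needed."""
    return y_cause_absent == "No"

# Nazis_Germany(Economic Crisis = No) = No: consistent with necessity
print(consistent_with_necessary("No", "Yes"))   # True
# If the Nazis take power even without the crisis, necessity fails:
print(consistent_with_necessary("Yes", "Yes"))  # False
```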
(In contrast to necessary conditions) A sufficient condition is a cause \(C\) whose presence guarantees that the effect \(E\) occurs.
“Doug Ford running ads criticizing tariffs was a sufficient condition for Trump to increase tariffs on Canada by 10 percentage points.”
Sufficient conditions also imply specific potential outcomes:
“Doug Ford running ads criticizing tariffs was a sufficient condition for Trump to increase tariffs on Canada by 10 percentage points.” implies:
\(\mathrm{Raise \ Tariffs}_{Canada} \ (\mathrm{Reagan \ Ad = No}) = \mathrm{No \ or \ Yes}\)
\(\mathrm{Raise \ Tariffs}_{Canada} \ (\mathrm{Reagan \ Ad = Yes}) = \mathrm{Yes}\)
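Sufficiency is the mirror image of necessity: the claim constrains the other potential outcome. Again a sketch with a hypothetical function name:

```python
def consistent_with_sufficient(y_cause_absent, y_cause_present):
    """'C is sufficient for E' implies Y(C = Yes) = Yes.
    Y(C = No) may be No or Yes, since something else might also produce E."""
    return y_cause_present == "Yes"

# Raise Tariffs_Canada(Reagan Ad = Yes) = Yes: consistent with sufficiency
print(consistent_with_sufficient("No", "Yes"))  # True
# If tariffs do not rise despite the ad, sufficiency fails:
print(consistent_with_sufficient("No", "No"))   # False
```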
Researchers aired this ad on YouTube and then surveyed people in “treatment” and “control” conditions to see if they could recognize the misinformation tactic
menti.com \(4680 \ 9916\)
Does it make sense to say that “being inoculated” is a necessary condition for spotting misinformation?
Does it make sense to say that “being inoculated” is a sufficient condition for spotting misinformation?
Causality may be deterministic… there are exact conditions under which the effect always/never happens.
But in reality, it is almost always complex
Rather than spell out complex deterministic causal claims, it is easier to make probabilistic causal claims:
claims that the presence/absence of a cause \(C\) makes an effect \(E\) more or less likely to occur, or that cause \(C\) increases/decreases effect \(E\) on average
Because causality is complex, we do not fully know the deterministic rules…
\(C\) appears to only cause a change in the probability or likelihood of seeing the effect \(E\).
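A probabilistic causal claim can be sketched as a simulation in which the cause shifts only the probability of the effect (the probabilities 0.3 and 0.7 are made up for illustration):

```python
import random

random.seed(1)

def effect_occurs(cause_present, p_with=0.7, p_without=0.3):
    """C does not determine E; it only shifts the probability of E."""
    p = p_with if cause_present else p_without
    return random.random() < p

n = 100_000
rate_with = sum(effect_occurs(True) for _ in range(n)) / n
rate_without = sum(effect_occurs(False) for _ in range(n)) / n
# The effect sometimes occurs without C and sometimes fails with C;
# only the average rate differs.
print(rate_with - rate_without)  # close to 0.4
```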
Which are probabilistic causal claims?
Which is a probabilistic causal claim?
Not every probabilistic statement is causal
In this course, we focus on how to provide evidence for claims about the effects of causes, rather than the causes of effects.
A claim for today:
“Exposure to absurdist neo-Luddite protest reduces average hours spent on smart devices.”
Causal claims imply relationships between potential outcomes
\(\text{Phone Hours}_{i}(\text{Luddites}) < \\ \color{red}{\text{Phone Hours}_{i}(\text{No Luddites})}\)
\(\mathrm{Black}\) indicates factual potential outcomes (we observe this state of the world)
\(\color{red}{\mathrm{Red}}\) indicates counterfactual potential outcomes (we do not observe this state of the world)
Let’s say that, in the week following the protest, those who witnessed it used their phones an average of 2 hours per day.
\(\text{Phone Hours}_{Witnesses}(\text{Luddites}) = 2 \ hours\)
\(\color{red}{\text{Phone Hours}_{Witnesses}(\text{No Luddites})} = \ \mathbf{????}\):
For New Yorkers who saw the protest: we can only observe the potential outcome of \(\text{Phone Hours}_{Witnesses}\) where the value of \(\text{Luddites} = Yes\): what actually happened to these people.
We can never observe the other, counterfactual, potential outcome of \(\color{red}{\text{Phone Hours}_{Witnesses}}\) where \(\color{red}{\text{Luddites} = No}\), because that is not what actually happened.
We can never empirically observe, for those witnesses, whether the \(\text{Luddite}\) protest caused \(\downarrow \text{Phone Hours}\).
By definition, \(X\) causes \(Y\) if the potential outcomes \(Y_i(X)\) would be different if we changed \(X\) for the exact same case.
For a specific case, we can only observe the potential outcome of \(Y\) for the value of \(X\) it actually takes.
We never observe the counterfactual potential outcomes of \(Y\) for different possible values of \(X\) that the case did not experience.
We can never empirically observe, for a specific case, whether \(X\) causes \(Y\).
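The Fundamental Problem of Causal Inference can be made concrete in a few lines. In the simulation below, every unit has both potential outcomes, so individual causal effects exist, but the observed data contain only one outcome per unit (all numbers are invented for illustration):

```python
import random

random.seed(42)

# Each unit i has BOTH potential outcomes: Y_i(Luddites) and Y_i(No Luddites).
units = [
    {"y_luddites": 2.0 + random.gauss(0, 0.5),      # phone hours if exposed
     "y_no_luddites": 3.0 + random.gauss(0, 0.5)}   # phone hours if not
    for _ in range(5)
]

observed = []
for u in units:
    exposed = random.random() < 0.5
    # We see exactly one potential outcome per unit...
    y = u["y_luddites"] if exposed else u["y_no_luddites"]
    observed.append((exposed, y))
    # ...so the individual effect below is computable only because this is a
    # simulation; in the real world one of its two terms is never observed.
    individual_effect = u["y_luddites"] - u["y_no_luddites"]

print(observed)
```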

I thought evidence for empirical claims was based on observing the world?!
If we can never see the counterfactual… does this mean that all evidence for causal claims fails weak severity?
Are there “solutions” to this fundamental problem?