- “Can Attribution Science Close the Loop?” Eastern Division APA Meeting. New York, NY. Jan. 8-11, 2025.
[ Slides (coming soon) •
Abstract ]
Climate change attribution involves measuring the human contribution to warming. In principle, inaccuracies in the characterization of the climate's internal variability could undermine these measurements. Equally in principle, the success of the measurement practice could provide evidence that our assumptions about internal variability are correct. I argue that neither possibility is realized: current measurement practices do not provide evidence for the accuracy of our assumptions precisely because they are not as sensitive to inaccuracies in the characterization of internal variability as one might worry. I end by drawing some lessons about “robustness reasoning” more generally.
- “Stability in Climate Change Attribution.” PSA2024. New Orleans, LA. Nov. 14-17, 2024.
[ Slides (coming soon) •
Abstract ]
Climate change attribution involves measuring the human contribution to warming. In principle, inaccuracies in the characterization of the climate's internal variability could undermine these measurements. Equally in principle, the success of the measurement practice could provide evidence that our assumptions about internal variability are correct. I argue that neither possibility is realized: current measurement practices do not provide evidence for the accuracy of our assumptions precisely because they are not as sensitive to inaccuracies in the characterization of internal variability as one might worry. I end by drawing some lessons about “robustness reasoning” more generally.
- “The Case of the Mislabeled Axis.” Next Generation Event Horizon Telescope working group. Online. Sep. 11, 2024.
[ Slides •
Abstract ]
In 2015 testimony before Congress, John Christy presented a series of graphs purporting to show that climate models are badly wrong about how quickly the mid-troposphere is warming. His graphs have been widely criticized by other climate scientists, who have shown that the same data can be graphed in a way that makes the apparent divergence disappear. So who's right? What does it even mean for one graph to be right and another wrong, given that they're constructed using the same data? These are questions for philosophy of science, but (unfortunately) philosophers have to date largely ignored data visualization and its role in the sciences. In this talk, I aim to rectify that, first by sketching some basic principles of a “philosophy of graphs” and then by applying them to the debate in climate science. The upshot: in an important technical sense, Christy's graphs are even more wrong than the critics allege.
- “The Crisis in the Machine: On Economics and Climate Change.” From Crisis to Coordination: Conversations Between Philosophy of Environmental Justice & Philosophy of Conservation Science. Minneapolis, MN. May 24, 2024.
[ Slides •
Abstract ]
Kyle Whyte (2021) argues that a crisis involves (a) an unprecedented threat to (b) the present that (c) requires unprecedented solutions. On this definition, climate change might be considered a crisis par excellence. Surprisingly, however, climate economics tends to view climate change as (a) a familiar investment problem that (b) threatens the future and (c) should be addressed using familiar tools such as taxes. I argue that this tells us more about economics than about climate change. Specifically, it helps us see that economics is in an important sense permanently “crisis-oriented.”
- “Consistent Estimators and the Argument from Inductive Risk.” Revitalizing Science and Values. Pittsburgh, PA. Apr. 7, 2024.
[ Slides •
Abstract ]
In this paper, we argue that classical statistical inference relies on risk evaluations that are precisely analogous to the tradeoff between false positives and false negatives even in cases where the latter tradeoff is irrelevant. Crucially, however, these value-laden choices are constrained by a “consistency” demand that ensures that the influence of values will wash out in the long run. The convergence results that underwrite this demand suggest that there are important disanalogies between the role of values in statistical examples—including, importantly, Rudner's original error prioritization example—and other cases.
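For reference, consistency here is the standard statistical notion (presumably the sense at issue in the paper): an estimator $\hat{\theta}_n$ computed from $n$ observations is consistent for a parameter $\theta$ just in case it converges to $\theta$ in probability, i.e.,
\[
\lim_{n \to \infty} \Pr\bigl( \lvert \hat{\theta}_n - \theta \rvert > \varepsilon \bigr) = 0 \quad \text{for every } \varepsilon > 0 .
\]
However the estimator is shaped by value-laden choices, consistency guarantees that their influence on the resulting estimate vanishes as the sample grows.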
- “On Depictive Testimony, or: How do you Assert a Graph?” Pacific Division APA Meeting. Portland, OR, March 20, 2024.
[ Handout •
Abstract ]
To date, the philosophical literature on testimony has typically ignored testimony involving non-linguistic vehicles. While there's some work outside of epistemology that touches on the subject, there is little extended discussion of the relationship between depictive testimony and its more familiar linguistic cousin. The extension is non-trivial: like narrative testimony, depictive testimony is *perspectival* in that the “literal” content of the depiction is accompanied by a framework for interpreting that content. Indeed, as I'll argue in this talk, a speaker who uses a depiction is responsible for both content and perspective: the perspective must be reliable and the content must be accurate from that perspective.
- “Higher-Order Uncertainty and the Methodology of Climate Economics.” Central Division APA Meeting. New Orleans, LA, Feb. 22, 2024.
[ Slides •
Handout •
Abstract ]
In 2007, Martin Weitzman set off a major debate within climate economics by arguing that avoiding catastrophic climate change should be the main aim of climate policy. In this talk, I examine Weitzman's work and argue that the debate turns on a methodological disagreement about how economists should address higher-order uncertainty.
- “What Makes Statistics Valuable?” MWPMW2023. Notre Dame, IN, Nov. 5, 2023.
[ Slides •
Abstract ]
What is the epistemic value of statistical methods? In his “On the Mathematical Foundations of Theoretical Statistics,” R. A. Fisher tells us that the answer lies in data reduction: data sets are too large to “enter the mind,” and the role of statistics is to “represent” this large quantity of data using a more manageable set of quantities (Fisher 1922, 311). But there are many different ways of “reducing” data: we could represent the whole data set using an element picked at random, the overall average, the spread, or even more qualitative measures. Why should we prefer the mathematical methods of statistics to these alternatives? Or: what problem does (e.g.) classical hypothesis testing solve that these other methods don’t?
The answer is ambiguity. We’re often in situations where data (or, better, evidence) is “ambiguous” in that we don’t know the precise (probabilistic) relationship between the data and whatever hypotheses or conclusions we’re interested in. Neither (merely) collecting more data nor applying various qualitative data reduction approaches is effective at reducing ambiguity. If a method yields ambiguous evidence, then repeating the method just yields more ambiguous evidence. And the average and spread of ambiguous evidence is just as ambiguous as the evidence itself. By contrast, statistical methods such as classical hypothesis testing are effective at reducing ambiguity.
- “When is a Graph Honest? Simplification and Ethics in Science Communication.” BSPS2023. Bristol, UK, July 7, 2023.
[ Slides •
Abstract ]
I extend recent work on scientific testimony to account for testimony involving depictions such as graphs or figures. On the account offered, the testimonial presentation of a depiction should be analyzed as involving commitment to both (a) the reliability of the depiction's perspective and (b) the perspective-relative accuracy of the depiction's content. I end by defending the role of honesty in scientific testimony against recent arguments.
- “Who Wants a Transparent Map? Honesty and (Mis-)Interpretation in Scientific Communication.” The (Mis-)Interpretation of Scientific Evidence. Bielefeld, DE, March 30, 2023.
[ Slides •
Abstract ]
Most of the philosophical work on science communication focuses on written or spoken testimony—statements that are either true (good) or false (bad). But much of science communication doesn't fit this paradigm. Graphs and other figures are commonplace in science communication, for instance, but are neither true nor false. Similarly, science communication is just as shot through with idealizations and approximations as science “proper,” meaning that much of good or virtuous science communication must involve spreading (strictly speaking) falsehoods. In this talk, I discuss how to apply traditional communicative values such as honesty and transparency to these examples, and discuss what the resulting picture means for cases where the goals of the audience and the expert don't align.
- “Uncertainty and Trustworthiness in Regional Climate Modeling.” Forest - City - River: Transforming Regional Climate Models into Local Climate Knowledge for Decision Making. Bielefeld, DE, March 9, 2023.
[ Slides •
Abstract ]
In this talk, I discuss the conditions under which the most common approaches to regional climate modeling produce reliable and trustworthy results and the various challenges that climate scientists face in ensuring that these conditions hold. A central theme: in regional climate modeling, post-hoc validation of models and simulations is crucial to ensuring reliability.
- “Against ‘Possibilist’ Interpretations of Climate Models.” PSA2022. Pittsburgh, PA, November 10, 2022.
[ Slides •
Abstract ]
Climate scientists frequently employ (groups of) heavily idealized models. How should these models be interpreted? A number of philosophers have suggested that a possibilist interpretation might be preferable, on which climate models are interpreted as standing in for possible scenarios that could occur, but not as providing any sort of information about how probable those scenarios are. The present paper argues that possibilism is (a) undermotivated by the philosophical and empirical arguments that have been advanced in the literature, (b) incompatible with successful practices in the science, and (c) liable to present a less accurate picture of the current state of research and/or uncertainty than probabilistic alternatives. There are good arguments to be had about how precisely to interpret climate models, but our starting point should be that the models provide evidence relevant to the evaluation of hypotheses concerning the actual world in at least some cases.
- “How Should the IPCC Present Uncertainty?” Perspectives on Science seminar series. Helsinki, FI, October 31, 2022.
[ Slides •
Abstract ]
At present, the IPCC has a unique two-tier method for communicating uncertainty: claims about (e.g.) future warming are qualified using both “likelihood” and “confidence” scales. Recently, however, a number of climate scientists have called attention to the weaknesses of this method, arguing that it is confusing, hard to understand, and used in different ways by different author groups. In this talk, I consider what a better alternative might look like. I begin by arguing that good science communication is like good scientific modeling: it highlights or emphasizes what's important by abstracting away from the unimportant. The IPCC's current approach can be thought of as emphasizing two features of the IPCC's knowledge: the degree of imprecision or uncertainty and the origins of that imprecision or uncertainty. I suggest that there are good reasons to prioritize emphasizing imprecision, but that the origins of uncertainty are less important. Finally, I consider a few different options for capturing imprecision and consider some broader lessons for science communication.
- “Making Your Audience Ignorant: Simplification and Accuracy in the Presentation of Scientific Results.” SPSP2022. Ghent, BE, July 1, 2022.
[ Slides •
Abstract ]
It's commonplace to assume that scientists and other experts shouldn't present false or misleading information to the public. In this talk, by contrast, I argue that experts almost always have to present information that is misleading in one way or another. The question we need to ask, therefore, is how they mislead and for what reasons.
- “Calibrating Statistical Tools: Improving the Measure of Humanity’s Influence on the Climate.” Measurement at the Crossroads. Milan, IT, June 29, 2022.
[ Slides •
Abstract ]
Over the last twenty-five years, climate scientists working on the attribution of climate change to humans have developed increasingly sophisticated statistical models in a process that can be understood as a kind of calibration: the gradual changes to the statistical models employed in attribution studies served as iterative revisions to a measurement(-like) procedure, motivated primarily by the aim of neutralizing particularly troublesome sources of error or uncertainty. This practice is in keeping with recent work on the evaluation of models more generally that views models as tools for particular tasks: what drives the process is the desire for models that provide more reliable grounds for inference rather than fidelity to the underlying mechanisms of data generation.
- “Climate Models and the Irrelevance of Chaos.” PSA2020/2021. Baltimore, MD, November 11, 2021.
[ Slides (old) •
Slides (new) •
Abstract ]
Philosophy of climate science has witnessed substantial recent debate over the existence of a dynamical or “structural” analogue of chaos, which is alleged to spell trouble for certain uses of climate models. In this paper, I argue that the debate over the analogy can and should be separated from its alleged epistemic implications: chaos-like behavior is neither necessary nor sufficient for small dynamical misrepresentations to generate erroneous results. I identify the relevant kind of sensitivity with a kind of safety failure and argue that the resulting set of issues has different stakes than the extant debate would indicate.
- “Interpreting Probability Claims in Climate Science.” EPSA2021. Turin, IT, Sep. 17, 2021.
[ Slides •
Abstract ]
Probabilistic claims are common in climate science. To date, these claims have usually been treated as expressing subjective credences. I argue against this view, which has three major problems. First, it fails to account for how the probabilities in question are in fact generated, which often involves the use of classical statistics rather than Bayesian updating. Second and third, the presentation of subjective credences by scientists in scientific reports is both (descriptively) atypical and (normatively) inappropriate. A better view is that such claims represent the authors' best estimate of the objective “weight of the evidence.”
- “Statistically-Indistinguishable Ensembles and the Evaluation of Climate Models,” Central Division APA meeting. Chicago, IL, Feb. 28, 2020.
[ Slides •
Abstract ]
Over the last decade, the climate scientists James Annan and Julia C. Hargreaves have argued for a new “paradigm” for the interpretation of ensembles of climate models that promises to allow us to better justify drawing inferences from them. I argue that the apparent benefits of their view are illusory: the assumptions required by the view are strong enough to justify drawing inferences from an ensemble regardless of the choice of interpretative paradigm. I end by sketching how climate scientists might go about supporting these assumptions. [[Note: I'm no longer inclined to gloss the results in this way.]]
- “Variation in Evidence and Simpson's Paradox,” Eastern Division APA meeting. Philadelphia, PA, Jan. 11, 2020.
[ Slides •
Abstract ]
Standard accounts of variation in evidence (e.g., Fitelson 2001) treat Reichenbach's “screening-off” condition as sufficient for varied sources of evidence to confirm better than non-varied sources. Stegenga and Menon (2017) demonstrate that violating this condition can lead to instances of Simpson's paradox and thus to surprising disconfirmations, but they fail to show that the condition is necessary. In this talk, I argue that independently motivated changes to how these accounts treat variation allow us to demonstrate the much stronger claim: in any case in which Simpson's paradox is avoided, varied sources of evidence confirm more than non-varied sources. [[Note: I'm no longer inclined to gloss the results in this way.]]
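For reference, the screening-off condition at issue is standardly formulated as a conditional independence requirement: a hypothesis $H$ screens off one evidence source $E_1$ from another $E_2$ just in case the two are independent conditional on $H$ and conditional on $\neg H$, i.e.,
\[
P(E_1 \wedge E_2 \mid H) = P(E_1 \mid H)\,P(E_2 \mid H)
\quad \text{and} \quad
P(E_1 \wedge E_2 \mid \neg H) = P(E_1 \mid \neg H)\,P(E_2 \mid \neg H).
\]
(This is the usual formulation in the variety-of-evidence literature; the talk may state the condition somewhat differently.)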
- “The Unity of Robustness,” Pacific Division APA meeting. Vancouver, Canada, Apr. 19, 2019.
[ Slides •
Abstract ]
There's substantial skepticism among philosophers about the evidential value of robustness across models. By contrast, there's very little skepticism about the evidential value of robustness when what's varied over are experiments or instruments, and there's a long tradition of arguing that the success or importance of the latter cannot be employed as an argument for the success or importance of the former. The central argument against unifying our treatment of robustness goes back to Nancy Cartwright, who urges that unlike what is the case when experiments or measurements agree, different models “do not constitute independent instruments doing different things, but rather different ways of doing the same thing: instead of being unrelated, they are often alternatives to one another, sometimes even contradictory” (Cartwright 1991, 153). I'm convinced that this argument fails: robustness across models and robustness across experiments are essentially the same epistemic phenomenon and should be given the same analysis. Once we accept this position—call it “unity”—a number of the traditional criticisms of robustness in modeling contexts are shown to be much weaker than they initially appear.