Downloads

Here you can find links to my published articles, recent presentations, and CV. Where my publications are not open access, I've endeavored to include preprints as well as the final version. Please contact me if there's a paper or presentation that's not available.


CV: download (current as of September 2024)
Publications
  1. In press. “How do you Assert a Graph? Towards an Account of Depictions in Scientific Testimony.” Noûs.
    [ Article (open access) • Abstract ]
  2. I extend the literature on norms of assertion to the ubiquitous use of graphs in scientific papers and presentations, which I term “graphical testimony.” On my account, the testimonial presentation of a graph involves commitment to both (a) the in-context reliability of the graph's framing devices and (b) the perspective-relative accuracy of the graph's content. My account resolves apparent tensions between the demands of honesty and the common scientific practice of presenting idealized or simplified graphs: these “distortions” can be honest so long as there's the right kind of alignment between the distortion and the background beliefs and values of the audience. I end by suggesting that we should expect a similar relationship between perspectives and the norms of testimony in other non-linguistic cases and indeed in many linguistic cases as well.
  3. In press. “Methodologies of Uncertainty: Philosophical Disagreement in the Economics of Climate Catastrophes.” British Journal for the Philosophy of Science.
    [ Article • Preprint • Abstract ]
  4. What probability distribution should we use when calculating the expected utility of different climate policies? There's a substantial literature on this question in economics, where it is largely treated as empirical or technical—i.e., the question is what distribution is justified by the empirical evidence and/or economic theory. The question has a largely overlooked methodological component, however—a component that concerns how (climate) economics should be carried out, rather than what the science tells us. Indeed, the major dispute in the literature is over precisely this aspect of the question: figures like William Nordhaus and Martin Weitzman disagree less about the evidence or the theory than they do about which possibilities we should consider when making political decisions—or offering economic advice—about climate change. There are two important implications. First, at least some of the economic literature misfires in attempting to treat the debate as open to empirical or technical resolution; a better path to progress on the question involves further investigating the policy recommendations that can be derived from the two positions. Second, the choice of discount rate is entangled with the choice of probability distribution: as both choices are responsive to the same normative reasons, we cannot evaluate the arguments in favour of a particular discount rate without considering the implications of those same arguments for the choice of distribution.
  5. In press. “Stability in Climate Change Attribution.” Philosophy of Science.
    [ Preprint • Abstract ]
  6. Climate change attribution involves measuring the human contribution to warming. In principle, inaccuracies in the characterization of the climate's internal variability could undermine these measurements. Equally in principle, the success of the measurement practice could provide evidence that our assumptions about internal variability are correct. I argue that neither condition obtains: current measurement practices do not provide evidence for the accuracy of our assumptions precisely because they are not as sensitive to inaccuracy in the characterization of internal variability as might be worried. I end by drawing some lessons about “robustness reasoning” more generally.
  7. In press. “Who's Afraid of the Base-Rate Fallacy?” Philosophy of Science.
    [ Preprint • Abstract ]
  8. This paper evaluates the back-and-forth between Mayo, Howson, and Achinstein over whether classical statistics commits the base-rate fallacy. I show that Mayo is correct to claim that Howson's arguments rely on a misunderstanding of classical theory. I then argue that Achinstein's refined version of the argument turns on largely undefended epistemic assumptions about “what we care about” when evaluating hypotheses. I end by suggesting that Mayo's positive arguments are no more decisive than her opponents': even if correct, they are unlikely to compel anyone not already sympathetic to the classical picture.
  9. 2024. “Contrast Classes and Agreement in Climate Modeling.” European Journal for Philosophy of Science 14.14: 1-19.
    [ Article • Preprint (embargoed) • Abstract ]
  10. In an influential paper, Wendy Parker argues that agreement across climate models isn't a reliable marker of confirmation in the context of cutting-edge climate science. In this paper, I argue that while Parker's conclusion is generally correct, there is an important class of exceptions. Broadly speaking, agreement is not a reliable marker of confirmation when the hypotheses under consideration are mutually consistent—when, e.g., we're concerned with overlapping ranges. Since many cutting-edge questions in climate modeling require making distinctions between mutually consistent hypotheses, agreement across models will be generally unreliable in this domain. In cases where we are only concerned with mutually exclusive hypotheses, by contrast, agreement across climate models is plausibly a reliable marker of confirmation.
  11. 2024. “The Unity of Robustness: Why Agreement Across Model Reports is Just as Valuable as Agreement Among Experiments.” Erkenntnis 89.7: 2733–52.
    [ Article (open access) • Abstract ]
  12. A number of philosophers of science have argued that there are important differences between robustness in modeling and experimental contexts, and—in particular—many of them have claimed that the former is non-confirmatory. In this paper, I argue for the opposite conclusion: robust hypotheses are confirmed under conditions that do not depend on the differences between models and experiments—that is, the degree to which the robust hypothesis is confirmed depends on precisely the same factors in both situations. The positive argument turns on the fact that confirmation theory doesn't recognize a difference between different sources of evidence. Most of the paper is devoted to rebutting various objections designed to show that it should. I end by explaining why philosophers of science have (often) gone wrong on this point.
  13. 2023. “Against ‘Possibilist’ Interpretations of Climate Models.” Philosophy of Science 90.5: 1417-26.
    [ Article (open access) • Abstract ]
  14. Climate scientists frequently employ (groups of) heavily idealized models. How should these models be interpreted? A number of philosophers have suggested that a possibilist interpretation might be preferable, where this entails interpreting climate models as standing in for possible scenarios that could occur, but not as providing any sort of information about how probable those scenarios are. The present paper argues that possibilism is (a) undermotivated by the philosophical and empirical arguments that have been advanced in the literature, (b) incompatible with successful practices in the science, and (c) liable to present a less accurate picture of the current state of research and/or uncertainty than probabilistic alternatives. There are good arguments to be had about how precisely to interpret climate models but our starting point should be that the models provide evidence relevant to the evaluation of hypotheses concerning the actual world in at least some cases.
  15. 2023. “The Cooperative Origins of Epistemic Rationality?” Erkenntnis 88.3: 1269-88.
    [ Article • Preprint • Abstract ]
  16. Recently, both evolutionary anthropologists and some philosophers have argued that cooperative social settings unique to humans play an important role in the development of both our cognitive capacities and the “construction” of “normative rationality” or “a normative point of view as a self-regulating mechanism” (Tomasello 2017, 38). In this article, I use evolutionary game theory to evaluate the plausibility of the claim that cooperation fosters epistemic rationality. Employing an extension of signal-receiver games that I term “telephone games,” I show that cooperative contexts work as advertised: under plausible conditions, these scenarios favor epistemically rational agents over irrational ones designed to do just as well as them in non-cooperative contexts. I then show that the basic results are strengthened by introducing complications that make the game more realistic.
  17. 2023. “Interpreting the Probabilistic Language in IPCC Reports.” Ergo 10.8: 203-25.
    [ Article (open access) • Abstract ]
  18. The Intergovernmental Panel on Climate Change (IPCC) often qualifies its statements by use of probabilistic “likelihood” language. In this paper, I show that this language is not properly interpreted in either frequentist or Bayesian terms—simply put, the IPCC uses both kinds of statistics to calculate these likelihoods. I then offer a deflationist interpretation: the probabilistic language expresses nothing more than how compatible the evidence is with the given hypothesis according to some method that generates normalized scores. I end by drawing some tentative normative conclusions.
  19. 2023. “Supposition and (Statistical) Models.” Philosophy of Science 90.3: 744-49.
    [ Article • Preprint • Abstract ]
  20. In a recent paper, Sprenger advances what he calls a “suppositional” answer to the question of why a Bayesian agent's credences should align with the probabilities found in statistical models. We show that Sprenger's account trades on an ambiguity between hypothetical and subjunctive suppositions and cannot succeed once we distinguish between the two.
  21. 2022. “Accuracy, Probabilism, and the Insufficiency of the Alethic.” Philosophical Studies 179.7: 2285-301.
    [ Article (open access) • Abstract ]
  22. The best and most popular argument for probabilism is the accuracy-dominance argument, which purports to show that alethic considerations alone support the view that an agent's degrees of belief should always obey the axioms of probability. I argue that extant versions of the accuracy-dominance argument face a problem. In order for the mathematics of the argument to function as advertised, we must assume that every omniscient credence function is classically consistent; there can be no worlds in the set of dominance-relevant worlds that obey some other logic. This restriction cannot be motivated on alethic grounds unless we're also willing to accept that rationality requires belief in every metaphysical necessity, as the distinction between a priori logical necessities and a posteriori metaphysical ones is not an alethic one. To motivate the restriction to classically consistent worlds, non-alethic motivation is required. And thus, if there is a version of the accuracy-dominance argument in support of probabilism, it isn't one that is grounded in alethic considerations alone.
  23. 2022. “Calibrating Statistical Tools: Improving the Measure of Humanity's Influence on the Climate.” Studies in the History and Philosophy of Science 94: 158-66.
    [ Article • Preprint • Abstract ]
  24. Over the last twenty-five years, climate scientists working on the attribution of climate change to humans have developed increasingly sophisticated statistical models in a process that can be understood as a kind of calibration: the gradual changes to the statistical models employed in attribution studies served as iterative revisions to a measurement(-like) procedure motivated primarily by the aim of neutralizing particularly troublesome sources of error or uncertainty. This practice is in keeping with recent work on the evaluation of models more generally that views models as tools for particular tasks: what drives the process is the desire for models that provide more reliable grounds for inference rather than accuracy to the underlying mechanisms of data-generation.
  25. 2022. “Science, Assertion, and the Common Ground.” Synthese 200.30: 1-19.
    [ Article (open access) • Abstract ]
  26. I argue that the appropriateness of an assertion is sensitive to context—or, really, the “common ground”—in a way that hasn't previously been emphasized by philosophers. This kind of context-sensitivity explains why some scientific (and philosophical) conclusions seem to be appropriately asserted even though they are not known, believed, or justified on the available evidence. I then consider other recent attempts to account for this phenomenon and argue that if they are to be successful, they need to recognize the kind of context-sensitivity that I argue for.
  27. 2022. “When is an Ensemble like a Sample? ‘Model-Based’ Inferences in Climate Modeling.” Synthese 200.52: 1-20.
    [ Article (open access) • Abstract ]
  28. Climate scientists often apply statistical tools to a set of different estimates generated by an “ensemble” of models. In this paper, I argue that the resulting inferences are justified in the same way as any other statistical inference: what must be demonstrated is that the statistical model that licenses the inferences accurately represents the probabilistic relationship between data and target. This view of statistical practice is appropriately termed “model-based,” and I examine the use of statistics in climate fingerprinting to show how the difficulties that climate scientists encounter in applying statistics to ensemble-generated data are the practical difficulties of normal statistical practice. The upshot is that whether the application of statistics to ensemble-generated data yields trustworthy results should be expected to vary from case to case.
  29. 2021. “Climate Models and the Irrelevance of Chaos.” Philosophy of Science 88.5: 997-1007.
    [ Article • Preprint • Abstract ]
  30. Philosophy of climate science has witnessed substantial recent debate over the existence of a dynamical or “structural” analogue of chaos, which is alleged to spell trouble for certain uses of climate models. In this paper, I argue that the debate over the analogy can and should be separated from its alleged epistemic implications: chaos-like behavior is neither necessary nor sufficient for small dynamical misrepresentations to generate erroneous results. I identify the relevant kind of sensitivity with a kind of safety failure and argue that the resulting set of issues has different stakes than the extant debate would indicate.
  31. 2021. “Forces in a True and Physical Sense: From Mathematical Models to Metaphysical Conclusions.” Synthese 198.2: 1109-22.
    [ Article • Preprint • Abstract ]
  32. J. Wilson (2009), Moore (2012), and Massin (2017) identify an overdetermination problem arising from the principle of composition in Newtonian physics. I argue that the principle of composition is a red herring: what's really at issue are contrasting metaphysical views about how to interpret the science. One of these views—that real forces are to be tied to physical interactions like pushes and pulls—is a better guide to real forces than the alternative, which demands that real forces are tied to “realized” accelerations. Not only is the former view employed in the actual construction of Newtonian models; the latter is both unmotivated and inconsistent with the foundations and testing of the science.
  33. 2021. “How to Do Things with Theory: The Instrumental Role of Auxiliary Hypotheses in Testing.” Erkenntnis 86.6: 1453-68.
    [ Article • Preprint • Abstract ]
  34. Pierre Duhem's influential argument for holism relies on a view of the role that background theory plays in testing: according to this still common account of “auxiliary hypotheses,” elements of background theory serve as truth-apt premises in arguments for or against a hypothesis. I argue that this view is mistaken. Rather than serving as truth-apt premises in arguments, auxiliary hypotheses are employed as (reliability-apt) “epistemic tools”: instruments that perform specific tasks in connecting our theoretical questions with the world but that are not (or not usually) premises in arguments. On the resulting picture, the acceptability of an auxiliary hypothesis depends not on its truth but on contextual factors such as the task or purpose it is put to and the other tools employed alongside it.
  35. 2018. “William Whewell's Semantic Account of Induction.” HOPOS 8.1: 141-56.
    [ Article • Preprint • Abstract ]
  36. William Whewell's account of induction differs dramatically from the one familiar from 20th Century debates. I argue that Whewell's induction can be usefully understood by comparing the difference between his views and more standard accounts to contemporary debates between semantic and syntactic views of theories: rather than understanding inductive inference as capturing a relationship between sentences or propositions, Whewell understands it as a method for constructing a model of the world. The difference between this (“semantic”) view and the more familiar (“syntactic”) picture of induction is reflected in other aspects of Whewell's philosophy of science, particularly his treatment of consilience and the order of discovery.
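A note for readers unfamiliar with the base-rate worry at issue in “Who's Afraid of the Base-Rate Fallacy?” above: the dispute turns on whether classical tests can be trusted when prior probabilities are ignored. The toy computation below is a minimal sketch of the standard illustration; the numbers are invented for this page rather than taken from the paper.

```python
# Toy Bayes computation illustrating the base-rate worry (illustrative
# numbers only, not taken from the paper): a test that rarely errs can
# still leave the tested hypothesis improbable when its base rate is low.

prior = 0.01           # base rate of the alternative hypothesis
sensitivity = 0.95     # P(positive result | alternative true)
false_positive = 0.05  # P(positive result | null true)

# Bayes' theorem: P(alternative | positive result)
posterior = (sensitivity * prior) / (
    sensitivity * prior + false_positive * (1 - prior)
)
print(f"P(alternative | positive) = {posterior:.3f}")  # ~0.161
```

Whether this sort of calculation actually tells against classical methods is, of course, exactly what the paper's protagonists dispute.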
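Similarly, for “Accuracy, Probabilism, and the Insufficiency of the Alethic” above: the accuracy-dominance argument it examines rests on a dominance construction that a small worked example makes vivid. The credences below are my own illustrative choices, not the paper's; the sketch uses the Brier score and assumes just one proposition and its negation.

```python
# Minimal sketch of accuracy dominance under the Brier score (illustrative
# credences only). An incoherent credence function over {p, not-p} is
# strictly dominated by a coherent one at every classically consistent world.

def brier(credences, world):
    """Sum of squared distances from the omniscient (0/1) credences."""
    return sum((c - w) ** 2 for c, w in zip(credences, world))

incoherent = (0.6, 0.6)  # credences in p and in not-p sum to more than 1
coherent = (0.5, 0.5)    # a probabilistically coherent alternative

# The two classically consistent worlds: p true, or not-p true.
for world in [(1, 0), (0, 1)]:
    print(world, brier(incoherent, world), brier(coherent, world))

# The coherent credences score 0.5 in each world, the incoherent ones 0.52,
# so the latter are accuracy-dominated. At a logically impossible "world"
# such as (1, 1) the dominance would fail -- which is why the restriction to
# classically consistent worlds matters to the argument.
```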
Presentations
  1. “Can Attribution Science Close the Loop?” Eastern Division APA Meeting. New York, NY. Jan. 8-11, 2025.
    [ Slides (coming soon) • Abstract ]
  2. Climate change attribution involves measuring the human contribution to warming. In principle, inaccuracies in the characterization of the climate's internal variability could undermine these measurements. Equally in principle, the success of the measurement practice could provide evidence that our assumptions about internal variability are correct. I argue that neither condition obtains: current measurement practices do not provide evidence for the accuracy of our assumptions precisely because they are not as sensitive to inaccuracy in the characterization of internal variability as might be worried. I end by drawing some lessons about “robustness reasoning” more generally.
  3. “Stability in Climate Change Attribution.” PSA2024. New Orleans, LA. Nov. 14-17, 2024.
    [ Slides (coming soon) • Abstract ]
  4. Climate change attribution involves measuring the human contribution to warming. In principle, inaccuracies in the characterization of the climate's internal variability could undermine these measurements. Equally in principle, the success of the measurement practice could provide evidence that our assumptions about internal variability are correct. I argue that neither condition obtains: current measurement practices do not provide evidence for the accuracy of our assumptions precisely because they are not as sensitive to inaccuracy in the characterization of internal variability as might be worried. I end by drawing some lessons about “robustness reasoning” more generally.
  5. “The Case of the Mislabeled Axis.” Next Generation Event Horizon Telescope working group. Online. Sep. 11, 2024.
    [ Slides • Abstract ]
  6. In 2015 testimony before Congress, John Christy presented a series of graphs purporting to show that climate models are badly wrong about how quickly the mid-troposphere is warming. His graphs have been widely criticized by other climate scientists, who have shown that the same data can be graphed in a way that makes the apparent divergence disappear. So who's right? What does it even mean for one graph to be right and another wrong, given that they're constructed using the same data? These are questions for philosophy of science, but (unfortunately) philosophers have to date largely ignored data visualization and its role in the sciences. In this talk, I aim to rectify that, first by sketching some basic principles of a “philosophy of graphs” and then by applying them to this debate in climate science. The upshot: in an important technical sense, Christy's graphs are even more wrong than the critics allege.
  7. “The Crisis in the Machine: On Economics and Climate Change.” From Crisis to Coordination: Conversations Between Philosophy of Environmental Justice & Philosophy of Conservation Science. Minneapolis, MN. May 24, 2024.
    [ Slides • Abstract ]
  8. Kyle Whyte (2021) argues that a crisis involves an (a) unprecedented threat to (b) the present that (c) requires unprecedented solutions. On this definition, climate change might be considered a crisis par excellence. Surprisingly, however, climate economics tends to view climate change as (a) a familiar investment problem that (b) threatens the future and (c) should be addressed using familiar tools such as taxes. I argue that this tells us more about economics than about climate change. Specifically, it helps us see that economics is in an important sense permanently “crisis-oriented.”
  9. “Consistent Estimators and the Argument from Inductive Risk.” Revitalizing Science and Values. Pittsburgh, PA. Apr. 7, 2024.
    [ Slides • Abstract ]
  10. In this paper, we argue that classical statistical inference relies on risk evaluations that are precisely analogous to the tradeoff between false positives and negatives even in cases where the latter tradeoff is irrelevant. Crucially, however, these value-laden choices are constrained by a “consistency” demand that ensures that the influence of values will wash out in the long run. These convergence results suggest that there are important disanalogies between the role of values in statistical examples—including, importantly, Rudner's original error prioritization example—and other cases.
  11. “On Depictive Testimony, or: How do you Assert a Graph?” Pacific Division APA Meeting. Portland, OR, March 20, 2024.
    [ Handout • Abstract ]
  12. To date, the philosophical literature on testimony has typically ignored testimony involving non-linguistic vehicles. While there's some work outside of epistemology that touches on the subject, there is little extended discussion of the relationship between depictive testimony and its more familiar linguistic cousin. The extension is non-trivial: like narrative testimony, depictive testimony is perspectival in that the “literal” content of the depiction is accompanied by a framework for interpreting that content. Indeed, as I'll argue in this talk, a speaker who uses a depiction is responsible for both content and perspective: the perspective must be reliable and the content must be accurate from that perspective.
  13. “Higher-Order Uncertainty and the Methodology of Climate Economics.” Central Division APA Meeting. New Orleans, LA, Feb. 22, 2024.
    [ Slides • Handout • Abstract ]
  14. In 2007, Martin Weitzman set off a major debate within climate economics by arguing that avoiding catastrophic climate change should be the main aim of climate policy. In this talk, I examine Weitzman's work and argue that the debate turns on a methodological disagreement about how economists should address higher-order uncertainty.
  15. “What Makes Statistics Valuable?” MWPMW2023. Notre Dame, IN, Nov. 5, 2023.
    [ Slides • Abstract ]
  16. What is the epistemic value of statistical methods? In his “On the Mathematical Foundations of Theoretical Statistics,” R. A. Fisher tells us that the answer lies in data reduction: data sets are too large to “enter the mind,” and the role of statistics is to “represent” this large quantity of data using a more manageable set of quantities (Fisher 1922, 311). But there are many different ways of “reducing” data: we could represent the whole data set using an element picked at random, the overall average, the spread, or even more qualitative measures. Why should we prefer the mathematical methods of statistics to these alternatives? Or: what problem does (e.g.) classical hypothesis testing solve that these other methods don’t?
    The answer is ambiguity. We’re often in situations where data (or, better, evidence) is “ambiguous” in that we don’t know the precise (probabilistic) relationship between the data and whatever hypotheses or conclusions we’re interested in. Neither (merely) collecting more data nor applying various qualitative data reduction approaches is effective at reducing ambiguity. If a method yields ambiguous evidence, then repeating the method just yields more ambiguous evidence. And the average and spread of ambiguous evidence is just as ambiguous as the evidence itself. By contrast, statistical methods such as classical hypothesis testing are effective at reducing ambiguity.
  17. “When is a Graph Honest? Simplification and Ethics in Science Communication.” BSPS2023. Bristol, UK, July 7, 2023.
    [ Slides • Abstract ]
  18. I extend recent work on scientific testimony to account for testimony involving depictions such as graphs or figures. On the account offered, the testimonial presentation of a depiction should be analyzed as involving commitment to both (a) the reliability of the depiction's perspective and (b) the perspective-relative accuracy of the depiction's content. I end by defending the role of honesty in scientific testimony against recent arguments.
  19. “Who Wants a Transparent Map? Honesty and (Mis-)Interpretation in Scientific Communication.” The (Mis-)Interpretation of Scientific Evidence. Bielefeld, DE, March 30, 2023.
    [ Slides • Abstract ]
  20. Most of the philosophical work on science communication focuses on written or spoken testimony—statements that are either true (good) or false (bad). But much of science communication doesn't fit this paradigm. Graphs and other figures are commonplace in science communication, for instance, but are neither true nor false. Similarly, science communication is just as shot through with idealizations and approximations as science “proper,” meaning that much of good or virtuous science communication must involve spreading (strictly-speaking) falsehoods. In this talk, I discuss how to apply traditional communicative values such as honesty and transparency to these examples, and discuss what the resulting picture means for cases where the goals of the audience and the expert don't align.
  21. “Uncertainty and Trustworthiness in Regional Climate Modeling.” Forest - City - River: Transforming Regional Climate Models into Local Climate Knowledge for Decision Making. Bielefeld, DE, March 9, 2023.
    [ Slides • Abstract ]
  22. In this talk, I discuss the conditions under which the most common approaches to regional climate modeling produce reliable and trustworthy results and the various challenges that climate scientists face in ensuring that these conditions hold. A central theme: in regional climate modeling, post-hoc validation of models and simulations is crucial to ensuring reliability.
  23. “Against ‘Possibilist’ Interpretations of Climate Models.” PSA2022. Pittsburgh, PA, November 10, 2022.
    [ Slides • Abstract ]
  24. Climate scientists frequently employ (groups of) heavily idealized models. How should these models be interpreted? A number of philosophers have suggested that a possibilist interpretation might be preferable, where this entails interpreting climate models as standing in for possible scenarios that could occur, but not as providing any sort of information about how probable those scenarios are. The present paper argues that possibilism is (a) undermotivated by the philosophical and empirical arguments that have been advanced in the literature, (b) incompatible with successful practices in the science, and (c) liable to present a less accurate picture of the current state of research and/or uncertainty than probabilistic alternatives. There are good arguments to be had about how precisely to interpret climate models but our starting point should be that the models provide evidence relevant to the evaluation of hypotheses concerning the actual world in at least some cases.
  25. “How Should the IPCC Present Uncertainty?” Perspectives on Science seminar series. Helsinki, FI, October 31, 2022.
    [ Slides • Abstract ]
  26. At present, the IPCC has a unique two-tier method for communicating uncertainty: claims about (e.g.) future warming are qualified using both “likelihood” and “confidence” scales. Recently, however, a number of climate scientists have called attention to the weaknesses of this method, arguing that it is confusing, hard to understand, and used in different ways by different author groups. In this talk, I consider what a better alternative might look like. I begin by arguing that good science communication is like good science modeling: it highlights or emphasizes what's important by abstracting away from the unimportant. The IPCC's current approach can be thought of as emphasizing two features of the IPCC's knowledge: the degree of imprecision or uncertainty and the origins of imprecision or uncertainty. I suggest that there are reasons why we should prioritize emphasizing imprecision, but that the origins of uncertainty are less important. Finally, I consider a few different options for capturing imprecision and consider some broader lessons for science communication.
  27. “Making Your Audience Ignorant: Simplification and Accuracy in the Presentation of Scientific Results.” SPSP2022. Ghent, BE, July 1, 2022.
    [ Slides • Abstract ]
  28. It's commonplace to assume that scientists and other experts shouldn't present false or misleading information to the public. In this talk, by contrast, I argue that experts almost always have to present information that is misleading in one way or another. The question we need to ask, therefore, is how they mislead and for what reasons.
  29. “Calibrating Statistical Tools: Improving the Measure of Humanity’s Influence on the Climate.” Measurement at the Crossroads. Milan, IT, June 29, 2022.
    [ Slides • Abstract ]
  30. Over the last twenty-five years, climate scientists working on the attribution of climate change to humans have developed increasingly sophisticated statistical models in a process that can be understood as a kind of calibration: the gradual changes to the statistical models employed in attribution studies served as iterative revisions to a measurement(-like) procedure motivated primarily by the aim of neutralizing particularly troublesome sources of error or uncertainty. This practice is in keeping with recent work on the evaluation of models more generally that views models as tools for particular tasks: what drives the process is the desire for models that provide more reliable grounds for inference rather than accuracy to the underlying mechanisms of data-generation.
  31. “Climate Models and the Irrelevance of Chaos.” PSA2020/2021. Baltimore, MD, November 11, 2021.
    [ Slides (old) • Slides (new) • Abstract ]
  32. Philosophy of climate science has witnessed substantial recent debate over the existence of a dynamical or “structural” analogue of chaos, which is alleged to spell trouble for certain uses of climate models. In this paper, I argue that the debate over the analogy can and should be separated from its alleged epistemic implications: chaos-like behavior is neither necessary nor sufficient for small dynamical misrepresentations to generate erroneous results. I identify the relevant kind of sensitivity with a kind of safety failure and argue that the resulting set of issues has different stakes than the extant debate would indicate.
  33. “Interpreting Probability Claims in Climate Science.” EPSA2021. Turin, IT, Sep. 17, 2021.
    [ Slides • Abstract ]
  34. Probabilistic claims are common in climate science. To date, these claims have usually been treated as expressing subjective credences. I argue against this view, which has three major problems. First, it fails to account for how the probabilities in question are in fact generated, which often involves the use of classical statistics rather than Bayesian updating. Second and third, the presentation of subjective credences by scientists in scientific reports is both (descriptively) atypical and (normatively) inappropriate. A better view is that such claims represent the authors' best estimate of the objective “weight of the evidence.”
  35. “Statistically-Indistinguishable Ensembles and the Evaluation of Climate Models,” Central Division APA meeting. Chicago, IL, Feb. 28, 2020.
    [ Slides • Abstract ]
  36. Over the last decade, the climate scientists James Annan and Julia C. Hargreaves have argued for a new “paradigm” for the interpretation of ensembles of climate models that promises to allow us to better justify drawing inferences from them. I argue that the apparent benefits of their view are illusory: the assumptions required by the view are strong enough to justify drawing inferences from an ensemble regardless of the choice of interpretative paradigm. I end by sketching how climate scientists might go about supporting these assumptions. [[Note: I'm no longer inclined to gloss the results in this way.]]
  37. “Variation in Evidence and Simpson's Paradox,” Eastern Division APA meeting. Philadelphia, PA, Jan. 11, 2020.
    [ Slides • Abstract ]
  38. Standard accounts of variation in evidence (e.g., Fitelson 2001) treat Reichenbach's “screening-off” condition as sufficient for varied sources of evidence to confirm better than non-varied sources. Stegenga and Menon (2017) demonstrate that violating this condition can lead to instances of Simpson's paradox and thus to surprising disconfirmations, but they fail to show that the condition is necessary. In this talk, I argue that independently-motivated changes to how these accounts treat variation allow us to demonstrate the much stronger claim: in any case in which Simpson's paradox is avoided, varied sources of evidence confirm more than non-varied sources. [[Note: I'm no longer inclined to gloss the results in this way.]]
  39. “The Unity of Robustness,” Pacific Division APA meeting. Vancouver, Canada, Apr. 19, 2019.
    [ Slides • Abstract ]
  40. There's substantial skepticism among philosophers about the evidential value of robustness across models. By contrast, there's very little skepticism about the evidential value of robustness when what's varied over are experiments or instruments, and there's a long tradition of arguing that the success or importance of the latter cannot be employed as an argument for the success or importance of the former. The central argument against unifying our treatment of robustness goes back to Nancy Cartwright, who urges that unlike what is the case when experiments or measurements agree, different models “do not constitute independent instruments doing different things, but rather different ways of doing the same thing: instead of being unrelated, they are often alternatives to one another, sometimes even contradictory” (Cartwright 1991, 153). I'm convinced that this argument fails: robustness across models and robustness across experiments are essentially the same epistemic phenomenon and should be given the same analysis. Once we accept this position—call it “unity”—a number of the traditional criticisms of robustness in modeling contexts are shown to be much weaker than they initially appear.
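A numerical gloss on the point at issue in “Variation in Evidence and Simpson's Paradox” above may be helpful: when Reichenbach's screening-off condition fails, two individually confirming pieces of evidence can jointly disconfirm. The joint distribution below is invented purely for illustration; it is not drawn from the talk.

```python
# Toy joint probability distribution (invented for illustration, not from
# the talk) in which E1 and E2 each confirm H on their own, yet together
# disconfirm H. Screening-off fails here: P(E1, E2 | H) != P(E1|H) * P(E2|H).

# Keys are (h, e1, e2); values are the joint probabilities (they sum to 1).
joint = {
    (1, 1, 1): 0.02, (1, 1, 0): 0.16, (1, 0, 1): 0.16, (1, 0, 0): 0.16,
    (0, 1, 1): 0.08, (0, 1, 0): 0.06, (0, 0, 1): 0.06, (0, 0, 0): 0.30,
}

def prob(pred):
    """Probability of the event picked out by the predicate."""
    return sum(p for outcome, p in joint.items() if pred(outcome))

p_h = prob(lambda o: o[0])                                         # 0.50
p_h_e1 = prob(lambda o: o[0] and o[1]) / prob(lambda o: o[1])      # 0.5625
p_h_e2 = prob(lambda o: o[0] and o[2]) / prob(lambda o: o[2])      # 0.5625
p_h_both = prob(lambda o: all(o)) / prob(lambda o: o[1] and o[2])  # 0.20

print(p_h, p_h_e1, p_h_e2, p_h_both)
```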
Other writing
  1. In press. “Methodologies of Uncertainty.” BJPS Short Reads.
  2. Oct. 1, 2024. “The Case of the Mislabeled Axis.” Extinct: The Philosophy of Palaeontology Blog.
    url: http://www.extinctblog.org/extinct/2024/9/30/the-case-of-the-mislabled-axis
  3. Jan. 27, 2024. “Interpreting the Probabilistic Language in IPCC Reports.” Ergo Blog.
    url: https://ergoblog.org/corey-dethier-interpreting-the-probabilistic-language-in-ipcc-reports/