Corey Dethier

curriculum vitae (as of Mar. 2024)
corey[dot]dethier[at]gmail[dot]com

Selected Publications

  • In press. “The Unity of Robustness: Why Agreement Across Model Reports is Just as Valuable as Agreement Among Experiments.” Erkenntnis.
    [ Article (open access) Abstract ]
  • A number of philosophers of science have argued that there are important differences between robustness in modeling and experimental contexts, and—in particular—many of them have claimed that the former is non-confirmatory. In this paper, I argue for the opposite conclusion: robust hypotheses are confirmed under conditions that do not depend on the differences between models and experiments—that is, the degree to which the robust hypothesis is confirmed depends on precisely the same factors in both situations. The positive argument turns on the fact that confirmation theory doesn't recognize a difference between different sources of evidence. Most of the paper is devoted to rebutting various objections designed to show that it should. I end by explaining why philosophers of science have (often) gone wrong on this point.
  • 2024. “Contrast Classes and Agreement in Climate Modeling.” European Journal for Philosophy of Science 14.14: 1-19.
    [ Article • Penultimate (embargoed) • Abstract ]
  • In an influential paper, Wendy Parker argues that agreement across climate models isn't a reliable marker of confirmation in the context of cutting-edge climate science. In this paper, I argue that while Parker's conclusion is generally correct, there is an important class of exceptions. Broadly speaking, agreement is not a reliable marker of confirmation when the hypotheses under consideration are mutually consistent—when, e.g., we're concerned with overlapping ranges. Since many cutting-edge questions in climate modeling require making distinctions between mutually consistent hypotheses, agreement across models will be generally unreliable in this domain. In cases where we are only concerned with mutually exclusive hypotheses, by contrast, agreement across climate models is plausibly a reliable marker of confirmation.
  • 2023. “Against ‘Possibilist’ Interpretations of Climate Models.” Philosophy of Science 90.5: 1417-26.
    [ Article (open access) Abstract ]
  • Climate scientists frequently employ (groups of) heavily idealized models. How should these models be interpreted? A number of philosophers have suggested that a possibilist interpretation might be preferable, where this entails interpreting climate models as standing in for possible scenarios that could occur, but not as providing any sort of information about how probable those scenarios are. The present paper argues that possibilism is (a) undermotivated by the philosophical and empirical arguments that have been advanced in the literature, (b) incompatible with successful practices in the science, and (c) liable to present a less accurate picture of the current state of research and/or uncertainty than probabilistic alternatives. There are good arguments to be had about how precisely to interpret climate models but our starting point should be that the models provide evidence relevant to the evaluation of hypotheses concerning the actual world in at least some cases.
  • 2023. “The Cooperative Origins of Epistemic Rationality?” Erkenntnis 88.3: 1269-88.
    [ Article • Preprint • Abstract ]
  • Recently, both evolutionary anthropologists and some philosophers have argued that cooperative social settings unique to humans play an important role in the development of both our cognitive capacities and the “construction” of “normative rationality” or “a normative point of view as a self-regulating mechanism” (Tomasello 2017, 38). In this article, I use evolutionary game theory to evaluate the plausibility of the claim that cooperation fosters epistemic rationality. Employing an extension of signal-receiver games that I term “telephone games,” I show that cooperative contexts work as advertised: under plausible conditions, these scenarios favor epistemically rational agents over irrational ones designed to do just as well as them in non-cooperative contexts. I then show that the basic results are strengthened by introducing complications that make the game more realistic. [[A toy simulation in this spirit appears below this list.]]
  • 2023. “Interpreting the Probabilistic Language in IPCC Reports.” Ergo 10.8: 203-25.
    [ Article (open access) Abstract ]
  • The Intergovernmental Panel on Climate Change (IPCC) often qualifies its statements by use of probabilistic “likelihood” language. In this paper, I show that this language is not properly interpreted in either frequentist or Bayesian terms—simply put, the IPCC uses both kinds of statistics to calculate these likelihoods. I then offer a deflationist interpretation: the probabilistic language expresses nothing more than how compatible the evidence is with the given hypothesis according to some method that generates normalized scores. I end by drawing some tentative normative conclusions.
  • 2023. “Supposition and (Statistical) Models.” Philosophy of Science 90.3: 744-49.
    [ Article • Preprint • Abstract ]
  • In a recent paper, Sprenger advances what he calls a “suppositional” answer to the question of why a Bayesian agent's credences should align with the probabilities found in statistical models. We show that Sprenger's account trades on an ambiguity between hypothetical and subjunctive suppositions and cannot succeed once we distinguish between the two.
  • 2022. “Accuracy, Probabilism, and the Insufficiency of the Alethic.” Philosophical Studies 179.7: 2285–301.
    [ Article (open access) Abstract ]
  • The best and most popular argument for probabilism is the accuracy-dominance argument, which purports to show that alethic considerations alone support the view that an agent's degrees of belief should always obey the axioms of probability. I argue that extant versions of the accuracy-dominance argument face a problem. In order for the mathematics of the argument to function as advertised, we must assume that every omniscient credence function is classically consistent; there can be no worlds in the set of dominance-relevant worlds that obey some other logic. This restriction cannot be motivated on alethic grounds unless we're also willing to accept that rationality requires belief in every metaphysical necessity, as the distinction between a priori logical necessities and a posteriori metaphysical ones is not an alethic one. To motivate the restriction to classically consistent worlds, non-alethic motivation is required. And thus, if there is a version of the accuracy-dominance argument in support of probabilism, it isn't one that is grounded in alethic considerations alone. [[A one-line statement of the dominance result appears below this list.]]
  • 2022. “Calibrating Statistical Tools: Improving the Measure of Humanity's Influence on the Climate.” Studies in History and Philosophy of Science 94: 158-66.
    [ Article • Penultimate • Abstract ]
  • Over the last twenty-five years, climate scientists working on the attribution of climate change to humans have developed increasingly sophisticated statistical models in a process that can be understood as a kind of calibration: the gradual changes to the statistical models employed in attribution studies served as iterative revisions to a measurement(-like) procedure motivated primarily by the aim of neutralizing particularly troublesome sources of error or uncertainty. This practice is in keeping with recent work on the evaluation of models more generally that views models as tools for particular tasks: what drives the process is the desire for models that provide more reliable grounds for inference rather than accuracy to the underlying mechanisms of data-generation.
  • 2022. “Science, Assertion, and the Common Ground.” Synthese 200.30: 1-19.
    [ Article (open access) Abstract ]
  • I argue that the appropriateness of an assertion is sensitive to context—or, really, the “common ground”—in a way that hasn't previously been emphasized by philosophers. This kind of context-sensitivity explains why some scientific (and philosophical) conclusions seem to be appropriately asserted even though they are not known, believed, or justified on the available evidence. I then consider other recent attempts to account for this phenomenon and argue that if they are to be successful, they need to recognize the kind of context-sensitivity that I argue for.
  • 2022. “When is an Ensemble like a Sample? ‘Model-Based’ Inferences in Climate Modeling.” Synthese 200.52: 1-20.
    [ Article (open access) Abstract ]
  • Climate scientists often apply statistical tools to a set of different estimates generated by an “ensemble” of models. In this paper, I argue that the resulting inferences are justified in the same way as any other statistical inference: what must be demonstrated is that the statistical model that licenses the inferences accurately represents the probabilistic relationship between data and target. This view of statistical practice is appropriately termed “model-based,” and I examine the use of statistics in climate fingerprinting to show how the difficulties that climate scientists encounter in applying statistics to ensemble-generated data are the practical difficulties of normal statistical practice. The upshot is that whether the application of statistics to ensemble-generated data yields trustworthy results should be expected to vary from case to case.
  • 2021. “Climate Models and the Irrelevance of Chaos.” Philosophy of Science 88.5: 997-1007.
    [ Article • Preprint • Abstract ]
  • Philosophy of climate science has witnessed substantial recent debate over the existence of a dynamical or “structural” analogue of chaos, which is alleged to spell trouble for certain uses of climate models. In this paper, I argue that the debate over the analogy can and should be separated from its alleged epistemic implications: chaos-like behavior is neither necessary nor sufficient for small dynamical misrepresentations to generate erroneous results. I identify the relevant kind of sensitivity with a kind of safety failure and argue that the resulting set of issues has different stakes than the extant debate would indicate.
  • 2021. “Forces in a True and Physical Sense: From Mathematical Models to Metaphysical Conclusions.” Synthese 198.2: 1109–22.
    [ Article • Penultimate • Abstract ]
  • J. Wilson (2009), Moore (2012), and Massin (2017) identify an overdetermination problem arising from the principle of composition in Newtonian physics. I argue that the principle of composition is a red herring: what's really at issue are contrasting metaphysical views about how to interpret the science. One of these views—that real forces are to be tied to physical interactions like pushes and pulls—is a better guide to real forces than the alternative, which demands that real forces be tied to “realized” accelerations. Not only is the former view employed in the actual construction of Newtonian models; the latter is both unmotivated and inconsistent with the foundations and testing of the science.
  • 2021. “How to Do Things with Theory: The Instrumental Role of Auxiliary Hypotheses in Testing.” Erkenntnis 86.6: 1453-68.
    [ Article • Penultimate • Abstract ]
  • Pierre Duhem's influential argument for holism relies on a view of the role that background theory plays in testing: according to this still common account of “auxiliary hypotheses,” elements of background theory serve as truth-apt premises in arguments for or against a hypothesis. I argue that this view is mistaken. Rather than serving as truth-apt premises in arguments, auxiliary hypotheses are employed as (reliability-apt) “epistemic tools”: instruments that perform specific tasks in connecting our theoretical questions with the world but that are not (or not usually) premises in arguments. On the resulting picture, the acceptability of an auxiliary hypothesis depends not on its truth but on contextual factors such as the task or purpose it is put to and the other tools employed alongside it.
  • 2018. “William Whewell's Semantic Account of Induction.” HOPOS 8.1: 141-56.
    [ Article • Penultimate • Abstract ]
  • William Whewell's account of induction differs dramatically from the one familiar from 20th Century debates. I argue that Whewell's induction can be usefully understood by comparing the difference between his views and more standard accounts to contemporary debates between semantic and syntactic views of theories: rather than understanding inductive inference as capturing a relationship between sentences or propositions, Whewell understands it as a method for constructing a model of the world. The difference between this (“semantic”) view and the more familiar (“syntactic”) picture of induction is reflected in other aspects of Whewell's philosophy of science, particularly his treatment of consilience and the order of discovery.
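
    A note on the “telephone games” entry above: the sketch below is a minimal, self-contained toy of the chained sender-receiver setup that the paper describes. It is my reconstruction rather than the paper's actual model—the noise rate, chain length, and agent types are all illustrative assumptions—but it shows the basic mechanism: errors that are cheap in a one-link game compound along a cooperative chain.

        import random

        # Toy "telephone game": a binary state is passed down a chain of
        # agents over noisy links, and the final report is scored against
        # the truth. All parameters are illustrative assumptions, not the
        # paper's model.
        NOISE = 0.1        # chance a single transmission is corrupted
        CHAIN_LENGTH = 5   # number of agents in the cooperative chain
        TRIALS = 10_000

        def transmit(bit):
            """Send a bit across one noisy link."""
            return bit if random.random() > NOISE else 1 - bit

        def rational_agent(received):
            # Passes on its best estimate of the state: the bit received.
            return received

        def careless_agent(received):
            # Garbles its own report 20% of the time -- a small handicap
            # in a one-link game, but one that compounds along a chain.
            return received if random.random() > 0.2 else 1 - received

        def chain_accuracy(agent):
            correct = 0
            for _ in range(TRIALS):
                state = random.randint(0, 1)
                message = state
                for _ in range(CHAIN_LENGTH):
                    message = agent(transmit(message))
                correct += (message == state)
            return correct / TRIALS

        print("rational chain:", chain_accuracy(rational_agent))
        print("careless chain:", chain_accuracy(careless_agent))

    With these made-up numbers the rational chain lands near 66% accuracy while the careless chain falls to roughly 51%—close to chance. That is the qualitative point: cooperative, multi-agent contexts amplify the payoff of epistemic rationality.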
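
    Similarly, for the accuracy-dominance entry: the background theorem (de Finetti's result, in the form given by Predd et al. 2009) fits in one line. The notation here is mine; the point at issue in the paper is that the quantifier ranges only over the set W_cl of classically consistent worlds.

        % Brier-score accuracy dominance: any non-probabilistic credence
        % function c is strictly beaten at every classically consistent
        % world w by some probabilistic alternative c'.
        \[
          c \notin \mathbb{P}
          \;\Longrightarrow\;
          \exists\, c' \in \mathbb{P}\;\,
          \forall w \in W_{\mathrm{cl}} :\;
          \sum_{i} \bigl(c'(p_i) - w(p_i)\bigr)^{2}
          <
          \sum_{i} \bigl(c(p_i) - w(p_i)\bigr)^{2}
        \]

    Here w(p_i) ∈ {0, 1} is the truth value of p_i at w; the paper's question is what, other than alethic considerations, licenses excluding non-classical worlds from W_cl.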

    Classes Taught

  • Winter 2021 – Introduction to Philosophy in English (Leibniz Universität Hannover)
    [ Syllabus Description ]
  • This course provides students with an English-language introduction to philosophy with a particular focus on developing and practicing skills of communicating philosophical ideas in English. No prior philosophical experience in English is assumed; readings will (mostly) be focused on contemporary discussions of free speech, personal identity, and ethical issues in medicine and public health.
  • Fall 2020 – University Philosophy Seminar: Philosophy in the 21st Century (University of Notre Dame)
    [ Syllabus Description ]
  • The goal of this course is to offer students an introduction to philosophy, and particularly to philosophy as it is practiced in the 21st Century. Particular attention will be paid to issues that are relevant outside the philosophy classroom, such as limits on free speech, differences in identity, scientific knowledge, and medical ethics. As this course is a University Seminar, there will also be significant focus on writing skills and—to the extent possible while maintaining social distancing—class discussions.
  • Fall 2018 – Philosophy of the Life Sciences (University of Notre Dame)
    [ Syllabus Description ]
  • Designed for non-major undergraduates, this course serves as an introduction to the philosophy of the life sciences, with a specific focus on contemporary issues relating to genes and genetics. The class begins with a discussion of evolution and its conceptual foundations, paying particular attention to different views on the role of natural selection within evolutionary biology. We then turn our attention to a number of more specific philosophical issues, such as the implications of evolutionary biology for human nature, individuals, and society. The course ends by considering some contemporary ethical issues raised by medicine, evolution, and our increasing ability to manipulate life.
  • Syllabi for other classes that I've designed: Uncertainty in Science • Ethics, Justice, and Climate Change • Ethics of Emerging Technologies • Early Modern Philosophy

    Recent Presentations

  • [with Samuel C. Fletcher] “Consistent Estimators and the Argument from Inductive Risk.” Revitalizing Science and Values. Pittsburgh, PA. Apr. 7, 2024.
    [ Slides Abstract ]
  • In this paper, we argue that classical statistical inference relies on risk evaluations that are precisely analogous to the tradeoff between false positives and negatives even in cases where the latter tradeoff is irrelevant. Crucially, however, these value-laden choices are constrained by a “consistency” demand that ensures that the influence of values will wash out in the long run. These convergence results suggest that there are important disanalogies between the role of values in statistical examples—including, importantly, Rudner's original error prioritization example—and other cases. [[A toy convergence simulation appears below this list.]]
  • “On Depictive Testimony, or: How do you Assert a Graph?” Pacific Division APA Meeting. Portland, OR, March 20, 2024.
    [ Handout Abstract ]
  • To date, the philosophical literature on testimony has typically ignored testimony involving non-linguistic vehicles. While there's some work outside of epistemology that touches on the subject, there is little extended discussion of the relationship between depictive testimony and its more familiar linguistic cousin. The extension is non-trivial: like narrative testimony, depictive testimony is perspectival in that the “literal” content of the depiction is accompanied by a framework for interpreting that content. Indeed, as I'll argue in this talk, a speaker who uses a depiction is responsible for both content and perspective: the perspective must be reliable and the content must be accurate from that perspective.
  • “Higher-Order Uncertainty and the Methodology of Climate Economics.” Central Division APA Meeting. New Orleans, LA, Feb. 22, 2024.
    [ Slides • Handout • Abstract ]
  • In 2007, Martin Weitzman set off a major debate within climate economics by arguing that avoiding catastrophic climate change should be the main aim of climate policy. In this talk, I examine Weitzman's work and argue that the debate turns on a methodological disagreement about how economists should address higher-order uncertainty.
  • “What Makes Statistics Valuable?” MWPMW2023. Notre Dame, IN, Nov. 5, 2023.
    [ Slides Abstract ]
  • What is the epistemic value of statistical methods? In his “On the Mathematical Foundations of Theoretical Statistics,” R. A. Fisher tells us that the answer lies in data reduction: data sets are too large to “enter the mind,” and the role of statistics is to “represent” this large quantity of data using a more manageable set of quantities (Fisher 1922, 311). But there are many different ways of “reducing” data: we could represent the whole data set using an element picked at random, the overall average, the spread, or even more qualitative measures. Why should we prefer the mathematical methods of statistics to these alternatives? Or: what problem does (e.g.) classical hypothesis testing solve that these other methods don’t?
    The answer is ambiguity. We’re often in situations where data (or, better, evidence) is “ambiguous” in that we don’t know the precise (probabilistic) relationship between the data and whatever hypotheses or conclusions we’re interested in. Neither (merely) collecting more data nor applying various qualitative data reduction approaches is effective at reducing ambiguity. If a method yields ambiguous evidence, then repeating the method just yields more ambiguous evidence. And the average and spread of ambiguous evidence is just as ambiguous as the evidence itself. By contrast, statistical methods such as classical hypothesis testing are effective at reducing ambiguity.
  • “When is a Graph Honest? Simplification and Ethics in Science Communication.” BSPS2023. Bristol, UK, July 7, 2023.
    [ Slides Abstract ]
  • I extend recent work on scientific testimony to account for testimony involving depictions such as graphs or figures. On the account offered, the testimonial presentation of a depiction should be analyzed as involving commitment to both (a) the reliability of the depiction's perspective and (b) the perspective-relative accuracy of the depiction's content. I end by defending the role of honesty in scientific testimony against recent arguments.
  • “Who Wants a Transparent Map? Honesty and (Mis-)Interpretation in Scientific Communication.” The (Mis-)Interpretation of Scientific Evidence. Bielefeld, DE, March 30, 2023.
    [ Slides Abstract ]
  • Most of the philosophical work on science communication focuses on written or spoken testimony—statements that are either true (good) or false (bad). But much of science communication doesn't fit this paradigm. Graphs and other figures are commonplace in science communication, for instance, but are neither true nor false. Similarly, science communication is just as shot through with idealizations and approximations as science “proper,” meaning that much of good or virtuous science communication must involve spreading (strictly-speaking) falsehoods. In this talk, I discuss how to apply traditional communicative values such as honesty and transparency to these examples, and discuss what the resulting picture means for cases where the goals of the audience and the expert don't align.
  • “Uncertainty and Trustworthiness in Regional Climate Modeling.” Forest - City - River: Transforming Regional Climate Models into Local Climate Knowledge for Decision Making. Bielefeld, DE, March 9, 2023.
    [ Slides Abstract ]
  • In this talk, I discuss the conditions under which the most common regional climate modeling approaches produce reliable and trustworthy results and the various challenges that climate scientists face in ensuring that these conditions hold. A central theme: in regional climate modeling, post-hoc validation of models and simulations is crucial to ensuring reliability.
  • “Against ‘Possibilist’ Interpretations of Climate Models.” PSA2022. Pittsburgh, PA, November 10, 2022.
    [ Slides Abstract ]
  • Climate scientists frequently employ (groups of) heavily idealized models. How should these models be interpreted? A number of philosophers have suggested that a possibilist interpretation might be preferable, where this entails interpreting climate models as standing in for possible scenarios that could occur, but not as providing any sort of information about how probable those scenarios are. The present paper argues that possibilism is (a) undermotivated by the philosophical and empirical arguments that have been advanced in the literature, (b) incompatible with successful practices in the science, and (c) liable to present a less accurate picture of the current state of research and/or uncertainty than probabilistic alternatives. There are good arguments to be had about how precisely to interpret climate models but our starting point should be that the models provide evidence relevant to the evaluation of hypotheses concerning the actual world in at least some cases.
  • “How Should the IPCC Present Uncertainty?” Perspectives on Science seminar series. Helsinki, FI, October 31, 2022.
    [ Slides Abstract ]
  • At present, the IPCC has a unique two-tier method for communicating uncertainty: claims about (e.g.) future warming are qualified using both “likelihood” and “confidence” scales. Recently, however, a number of climate scientists have called attention to the weaknesses of this method, arguing that it is confusing, hard to understand, and used in different ways by different author groups. In this talk, I consider what a better alternative might look like. I begin by arguing that good science communication is like good science modeling: it highlights or emphasizes what's important by abstracting away from the unimportant. The IPCC's current approach can be thought of as emphasizing two features of the IPCC's knowledge: the degree of imprecision or uncertainty and the origins of that imprecision or uncertainty. I suggest that there are reasons to prioritize emphasizing imprecision, but that the origins of uncertainty are less important. Finally, I consider a few different options for capturing imprecision and consider some broader lessons for science communication.
  • “Making Your Audience Ignorant: Simplification and Accuracy in the Presentation of Scientific Results.” SPSP2022. Ghent, BE, July 1, 2022.
    [ Slides Abstract ]
  • It's commonplace to assume that scientists and other experts shouldn't present false or misleading information to the public. In this talk, by contrast, I argue that experts almost always have to present information that is misleading in one way or another. The question we need to ask, therefore, is how they mislead and for what reasons.
  • “Calibrating Statistical Tools: Improving the Measure of Humanity’s Influence on the Climate.” Measurement at the Crossroads. Milan, IT, June 29, 2022.
    [ Slides Abstract ]
  • Over the last twenty-five years, climate scientists working on the attribution of climate change to humans have developed increasingly sophisticated statistical models in a process that can be understood as a kind of calibration: the gradual changes to the statistical models employed in attribution studies served as iterative revisions to a measurement(-like) procedure motivated primarily by the aim of neutralizing particularly troublesome sources of error or uncertainty. This practice is in keeping with recent work on the evaluation of models more generally that views models as tools for particular tasks: what drives the process is the desire for models that provide more reliable grounds for inference rather than accuracy to the underlying mechanisms of data-generation.
  • “Climate Models and the Irrelevance of Chaos.” PSA2020/2021. Baltimore, MD, November 11, 2021.
    [ Slides (old) • Slides (new) • Abstract ]
  • Philosophy of climate science has witnessed substantial recent debate over the existence of a dynamical or “structural” analogue of chaos, which is alleged to spell trouble for certain uses of climate models. In this paper, I argue that the debate over the analogy can and should be separated from its alleged epistemic implications: chaos-like behavior is neither necessary nor sufficient for small dynamical misrepresentations to generate erroneous results. I identify the relevant kind of sensitivity with a kind of safety failure and argue that the resulting set of issues has different stakes than the extant debate would indicate.
  • “Interpreting Probability Claims in Climate Science.” EPSA2021. Turin, IT, Sep. 17, 2021.
    [ Slides Abstract ]
  • Probabilistic claims are common in climate science. To date, these claims have usually been treated as expressing subjective credences. I argue against this view, which faces three major problems. First, it fails to account for how the probabilities in question are in fact generated, which often involves the use of classical statistics rather than Bayesian updating. Second and third, the presentation of subjective credences by scientists in scientific reports is both (descriptively) atypical and (normatively) inappropriate. A better view is that such claims represent the authors' best estimate of the objective “weight of the evidence.”
  • “Statistically-Indistinguishable Ensembles and the Evaluation of Climate Models,” Central Division APA meeting. Chicago, IL, Feb. 28, 2020.
    [ Slides Abstract ]
  • Over the last decade, the climate scientists James Annan and Julia C. Hargreaves have argued for a new “paradigm” for the interpretation of ensembles of climate models that promises to allow us to better justify drawing inferences from them. I argue that the apparent benefits of their view are illusory: the assumptions required by the view are strong enough to justify drawing inferences from an ensemble regardless of the choice of interpretative paradigm. I end by sketching how climate scientists might go about supporting these assumptions. [[Note: I'm no longer inclined to gloss the results in this way.]]
  • “Variation in Evidence and Simpson's Paradox,” Eastern Division APA meeting. Philadelphia, PA, Jan. 11, 2020.
    [ Slides Abstract ]
  • Standard accounts of variation in evidence (e.g., Fitelson 2001) treat Reichenbach's “screening-off” condition as sufficient for varied sources of evidence to confirm better than non-varied sources. Stegenga and Menon (2017) demonstrate that violating this condition can lead to instances of Simpson's paradox and thus to surprising disconfirmations, but they fail to show that the condition is necessary. In this talk, I argue that independently-motivated changes to how these accounts treat variation allow us to demonstrate the much stronger claim: in any case in which Simpson's paradox is avoided, varied sources of evidence confirm more than non-varied sources. [[Note: I'm no longer inclined to gloss the results in this way.]] [[A numerical instance of the paradox appears below this list.]]
  • “The Unity of Robustness,” Pacific Division APA meeting. Vancouver, Canada, Apr. 19, 2019.
    [ Slides Abstract ]
  • There's substantial skepticism among philosophers about the evidential value of robustness across models. By contrast, there's very little skepticism about the evidential value of robustness when what's varied over are experiments or instruments, and there's a long tradition of arguing that the success or importance of the latter cannot be employed as an argument for the success or importance of the former. The central argument against unifying our treatment of robustness goes back to Nancy Cartwright, who urges that unlike what is the case when experiments or measurements agree, different models “do not constitute independent instruments doing different things, but rather different ways of doing the same thing: instead of being unrelated, they are often alternatives to one another, sometimes even contradictory” (Cartwright 1991, 153). I'm convinced that this argument fails: robustness across models and robustness across experiments are essentially the same epistemic phenomenon and should be given the same analysis. Once we accept this position—call it “unity”—a number of the traditional criticisms of robustness in modeling contexts are shown to be much weaker than they initially appear.
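
    On the Simpson's paradox talk above: the reversal at issue can be made concrete with the classic kidney-stone data (Charig et al. 1986), the standard textbook instance of the paradox. The snippet is purely illustrative; the talk's own formal results are not reproduced here.

        # Classic Simpson's paradox numbers (Charig et al. 1986):
        # treatment A beats B within each subgroup, yet B comes out ahead
        # once the subgroups are pooled.
        groups = {
            "small stones": {"A": (81, 87),   "B": (234, 270)},
            "large stones": {"A": (192, 263), "B": (55, 80)},
        }

        totals = {"A": [0, 0], "B": [0, 0]}
        for name, data in groups.items():
            for treatment, (wins, n) in data.items():
                totals[treatment][0] += wins
                totals[treatment][1] += n
                print(f"{name}, {treatment}: {wins / n:.0%}")

        # Within subgroups: A scores 93% and 73%, B scores 87% and 69%.
        for treatment, (wins, n) in totals.items():
            print(f"pooled, {treatment}: {wins / n:.0%}")
        # Pooled: A is 78% while B is 83% -- the comparison reverses.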
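
    And on the consistent-estimators talk: the “washing out” claim admits a toy demonstration (my construction, not the paper's). The shrinkage parameter lam stands in for a value-laden choice—how cautiously to report—and any fixed choice is swamped as the sample grows, because the underlying estimator is consistent.

        import random
        import statistics

        # Toy consistency demo: shrinking the sample mean toward zero
        # encodes a "cautious" value-laden choice, but any fixed amount of
        # shrinkage washes out as n grows. Parameters are illustrative.
        random.seed(1)
        TRUE_MEAN = 2.0

        def shrunk_mean(sample, lam):
            n = len(sample)
            return (n / (n + lam)) * statistics.fmean(sample)

        for n in (10, 100, 10_000):
            sample = [random.gauss(TRUE_MEAN, 1.0) for _ in range(n)]
            cautious = shrunk_mean(sample, lam=20.0)  # heavy shrinkage
            bold = shrunk_mean(sample, lam=0.5)       # nearly none
            print(f"n={n:>6}  cautious={cautious:.3f}  bold={bold:.3f}")
        # Both columns approach 2.0: the value-laden tuning matters at
        # small n but its influence is constrained to vanish in the limit.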


    About Me

    I'm a Postdoctoral Fellow at the Minnesota Center for Philosophy of Science working with Samuel C. Fletcher on the NSF-funded project “A Modern Philosophy for Classical Statistical Testing and Estimation.” The project has two main goals: identifying the epistemological foundations of classical statistics and putting together an R package that will allow researchers to put the theoretical insights into actual practice.

    Outside of my day job, I have two main projects and a variety of side interests. The first project concerns science communication, and especially the ubiquitous use of depictions such as graphs in communicating scientific conclusions. What I call “depictive testimony” blurs many of the traditional lines between semantics and pragmatics and (partly as such) is highly sensitive to both contextual and conventional factors that deserve further exploration and that can teach us quite a bit about (scientific) testimony more broadly. For example, we can learn a lot about the demands of honesty in expert testimony by examining the idealizations and approximations common in graphical representations.

    The second project concerns evidence amalgamation, particularly in climate science. My dissertation was on robustness in climate modeling—what should we conclude when climate models agree on a single hypothesis?—but in writing it I came to think that “robustness reasoning” is a limited framework for understanding evidence from different sources. Many of the papers that you can find to the left argue that statistical tools provide a better framework—better not just for philosophers but also for researchers themselves. Roughly, statistical tools are sensitive to variation in a way that frameworks like robustness are not, and thus they provide us better insight into what the evidence actually supports.

    I'm also interested in the ethical and political questions surrounding both statistics and climate science. For example, I'm currently investigating a debate within economics on the question of which probability distribution we should use when calculating the expected utility of different climate policies. Despite initial appearances, I argue that this debate is deeply philosophical—it's really about which possibilities “count” for the purposes of political decision making. Recognizing this fact has a bunch of important implications for climate ethics, which has traditionally taken the choice of probability distribution to simply be determined by the empirical evidence.

    When not practicing or teaching philosophy, I enjoy spending time baking, running, climbing, and patting my cat, Indy. On the advice of my mother, I try to never be afraid to make a fool of myself.

    Specialization

    AOS: Philosophy of climate science, philosophy of statistics, epistemology

    AOC: Environmental ethics, general philosophy of science, philosophy of biology

    Pronunciation

    My last name is originally Belgian—great-grandad was from Liège—but it's been badly Americanized in the last century and we just pronounce it “Dah chair” like “the chair.”

    Site Design

    Most of the HTML code for this webpage was lifted (with permission) from Jonah Schupbach, who in turn based his website on JC Beall's site, which was in turn based on Ted Sider's. Credit also to James Nguyen, whose website is also causally connected to the Schupbach-Beall-Sider design chain and who gave me the idea to engage in the sincerest form of flattery.


    [Photo: My cat, Indy.]