Introduction
A common flaw in so-called “causal” explanation is that it turns into deduction in disguise: a tidy chain of therefores that never quite says what actually licenses the expectation. In ordinary reasoning, we explain by saying: given X, we should expect Y. Sometimes that expectation is grounded in something called a law of nature; sometimes in a relationship that holds reliably across a range of cases. The real work is not the derivation itself but whatever entitles the expectation to be held. Put simply: deduction can display an expectation, but it doesn’t automatically justify it. Here I’m drawing on Woodward’s discussion of explanation as something stronger than derivability alone.[1]
But explanatory models often claim universality while neglecting context. Different domains can have different explanatory goals: biology and physics don’t always ask the same kind of “why,” and the same explanatory standard may not travel cleanly across them. Relatedly, “law talk” is messier than it sounds. What counts as a law is contested, and many sciences don’t run on exceptionless laws anyway; in practice, explanations depend more on models, mechanisms, and predictable “control” relationships than on the assumption that the world has been neatly filed under universal rules. And even when we do talk about laws, we still have to ask what function the law is fulfilling here: prediction, unification, direction, control, counterfactual guidance. “Law” talk does different work in different sciences.
Thesis: What makes an explanation explanatory is not that it slots an event under a rule — whether called a “law” or something weaker — but that it keeps our expectations honest. By “honest,” I mean: it tracks the dependencies that would continue to hold under relevant changes, and it makes clear how (and where) it would fail.
It identifies the factors that matter, states the conditions under which the claim should hold, and tells us what would change (and what wouldn’t) if we altered those factors. An explanation earns its status by surviving error-sensitive checks, not by wearing a law label. The “law” label is often unhelpful because it tempts us to assume universal stability and invariance, which can blunt exactly the scope- and failure-sensitivity that makes an explanation rigorous. In other words: explanation is not just a certificate of derivability; it is a way of making expectations disciplined and accountable. In Woodward’s sense: difference-making under specified conditions.
Argument
The Deductive–Nomological (D–N) model (often associated with Carl Hempel) pictures a scientific explanation as a deduction. Roughly: you start with some general rule(s) and some particular starting conditions, and you show — step by step — that the outcome follows.
- Deductive (D): the explanation is built like a proof: from the starting claims, the outcome follows.
- Nomological (N): at least one of those starting claims is meant to be a law of nature (or something close to a law).
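Schematically, the model’s shape can be displayed like this (a standard way of writing the Hempelian pattern; the letters are placeholders, not the source’s notation):

\[
\underbrace{L_1, \dots, L_n}_{\text{laws}}, \;\; \underbrace{C_1, \dots, C_k}_{\text{particular conditions}} \;\vdash\; \underbrace{E}_{\text{outcome to be explained}}
\]

Everything to the left of the turnstile is the explanans; E is the explanandum; the only glue is logical consequence.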
The appeal is obvious. If the outcome follows from true starting points, the explanation looks objective: not just a plausible story, but a conclusion forced by rule plus conditions. If the rule is genuinely general, the explanation looks portable across cases, not tied to a one-off, singular event story. The model also encourages symmetry: the same pattern can predict an outcome in advance or “explain” it after the fact. It tempts us to think: once we have derivation, we have understanding.
But the same features that make the model attractive also expose its weaknesses.
Woodward discusses Bromberger’s flagpole example to show that a valid derivation can be the wrong test for a good explanation. Using a geometrical relationship plus the sun’s angle, you can derive the shadow length from the height of a flagpole. But you can also run the calculation the other way: from the shadow length (plus sun angle), you can derive the height. Both derivations are valid. Yet they do not feel like the same kind of understanding. In one direction, the pole’s height (together with the sun’s angle) fixes the shadow length—so it tells you what would change what. In the reverse direction, the same relationship is being used for estimation: the shadow length helps you recover the height, but it does not explain why the pole has that height. The lesson is simple: proof-shape does not guarantee explanatory direction. If the D–N model treats explanation as “what can be deduced,” then it blurs the difference between explaining a dependence and exploiting that dependence for estimation.
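To see how symmetric the algebra is, write the relationship out (my notation: h for the pole’s height, s for the shadow’s length, θ for the sun’s angle of elevation):

\[
s = \frac{h}{\tan\theta} \qquad\text{and, rearranged,}\qquad h = s \tan\theta .
\]

The two derivations are the same equation read in opposite directions; the asymmetry the example trades on is invisible in the formula, which is precisely the point.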
A derivation can also satisfy the model’s rules and still be explanatory junk because it does not control for relevance. Woodward uses the “hexed salt” case: define a predicate “hexed” and adopt a generalisation like “all hexed salt dissolves in water.” If s is a sample of hexed salt, you can “explain” why it dissolves by citing that generalisation. But being hexed has nothing to do with dissolving. It’s a label that makes no difference. The D–N pattern, by itself, doesn’t stop irrelevant properties being smuggled in as if they were doing explanatory work.
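Written in the D–N shape (my formalisation, with H for “hexed,” S for “is salt,” W for “is placed in water,” D for “dissolves”):

\[
\forall x\,\big((Hx \wedge Sx \wedge Wx) \rightarrow Dx\big), \qquad Hs \wedge Ss \wedge Ws \;\vdash\; Ds
\]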
The derivation is formally fine, but the “hexed” clause in the explanans is a free rider: it merely accompanies the factor that matters (being salt, in water) without doing any explanatory work itself.
Relatedly, the “law” requirement is not as solid as it sounds. What counts as a law is contested, and different sciences use general statements in different ways. In many fields, explanation relies less on exceptionless laws and more on models, mechanisms, and control relationships. (Woodward cites Giere on this point.) So even if we grant the D–N recipe, the ingredient list (“laws”) is not stable across the sciences, and that should already make us cautious about treating D–N as a general criterion of explanation.
Criterion
A good explanation doesn’t just show that an outcome follows from some set of statements. It keeps our expectations honest by doing four things:
- Relevance: it tells us what matters (which factors are doing the work, and which are irrelevant to the outcome).
- Scope: it tells us when the claim should hold (the conditions and limits), including where it breaks, how it fails, and where more work is needed.
- Difference-making: it tells us what would make a difference (what would change the outcome, and what would leave it unchanged).
- Direction (and, where available, route): not just that X and Y are linked, but which way the dependence runs for “what would change what,” and what connects them (a mechanism or a constraint-structure).
For that reason, the extra requirements aren’t add-ons; they’re what stop “explanation” collapsing into derivation.
That is exactly the gap the D–N picture leaves open. It gives you “proof-shape,” but proof-shape is also compatible with (i) running the dependence in the wrong direction, and (ii) treating irrelevant background conditions as if they were explanatory.
Put differently: a proof can be valid and still not tell you what’s doing the work, where the claim stops holding, or what would actually change the outcome. Valid isn’t the same as explanatory. If we treat “explanation” as satisfied by derivability alone, we risk counting as explanatory many cases that offer little guidance about relevance, scope, or dependence.
Objection
This standard may still sound too demanding, or at least too intervention-shaped. In some cases the explanatory gain is not primarily “if we wiggle X, Y changes,” but showing why many different micro-details lead to the same pattern. Pure mathematics has clear examples: a theorem can explain a regularity by proving that, under broad conditions, a certain form is the stable or limiting outcome—so the fine details “wash out.” The explanation is structural: it narrows the space of possibilities and shows what cannot vary independently, even when there is no causal mechanism on offer and no meaningful intervention story to tell. If that is right, does a “difference-making” criterion risk misclassifying this kind of structural, limit-style explanation as non-explanatory, because its force comes from invariance and robustness rather than manipulable causes?
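One concrete instance of such washing out (my illustration; the objection itself names no particular theorem) is the central limit theorem: for independent, identically distributed X_i with mean μ and finite variance σ²,

\[
\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{X_i - \mu}{\sigma} \;\xrightarrow{\,d\,}\; \mathcal{N}(0,1),
\]

so the limiting Gaussian form is stable whatever the micro-distribution of the X_i looks like. The explanation of why aggregates so often look bell-shaped is structural: it narrows the space of possible limiting forms without citing any manipulable cause.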
Reply
The point isn’t to ban proofs, general principles, or abstract reasoning. The point is that “it follows” isn’t enough on its own.
The flagpole case shows you can run a tidy derivation in a direction that looks like inference or measurement rather than explanation.
The hexed salt case shows you can satisfy the formal pattern while using a property that clearly doesn’t matter.
As for structural, limit-style explanations: they meet this standard rather than evade it. Constraints explain by identifying what cannot vary independently under stated conditions. That is “difference-making” in the counterfactual/dependency sense, even when no practical intervention is available. They also deliver direction in the minimal sense we need: which features are explanatorily prior in the model, and what would need to change for the outcome to change, even if the “route” is a constraint-structure rather than a step-by-step mechanism.
Even in domains where constraints and idealisations matter, a good explanation still has to say what is doing the work, where the claim stops holding, and what changes would change the outcome. A constraint-based explanation can meet that standard: it can tell us which constraints are responsible for the pattern, which assumptions are idealisations, and what would need to shift for the pattern to break.
Handoff – Testing the Criterion
This matters because the word explanation carries authority. If we treat explanation as a proof-shaped derivation from something called a law, we will accept too much: derivations that hide irrelevance, blur direction, or pretend to work the same way across every science. If instead we treat explanations as tools for keeping our expectations honest — showing what matters, when it holds, and what would change the outcome — we get a standard that travels better across domains and stays closer to what scientists actually need: not just a result you can derive, but a guide to where understanding sits, and where it would break. That “break” clause matters: it is where explanations become accountable rather than merely persuasive.
That’s why I want to test the view next on snowflakes. Growth, form, and emergence sit right at the seam between “constraint stories” and “difference-maker stories,” and they force the question: what does an explanation add beyond a tidy derivation: which dependencies and routes (mechanistic or constraint-based) it identifies, which limits it declares, and what it genuinely puts on the table? If I can say what varies with temperature and supersaturation, and what stays stable, then I’m not just redescribing. I’m saying what does the explanatory work, and what does not.
[1] James Woodward, “Explanation,” in The Routledge Companion to Philosophy of Science, ed. Stathis Psillos and Martin Curd (London: Routledge, 2008).