On Reducing Fallacies in Neuroscience: Critical Thinking Improved with the Scientific Hypothesis

The image accompanying this post is a painting of the famous French physiologist Claude Bernard, whose quotation below serves as the backdrop of this essay on scientific thinking. Bernard appears in formal dress, seated in an armchair, looking off to his left; a serious man.

“Thus, the verification of my inferring hypothesis, whatever its likelihood, does not blind me. I hold conditionally to it. Therefore, I am trying as much to invalidate it as to verify my hypothesis.” [bold added]

“The truth must be the goal of our studies. Being satisfied by plausibility or likelihood is the true pitfall.”

-Claude Bernard, 1865

(Wikipedia image; unknown date, prior to 1878)

Summary points

  • Introduction to problems in reasoning and how the scientific hypothesis can help
  • Standard of truth/falsehood
  • Right reasoning about a hypothesis to reduce fallacies
  • Multiple hypotheses to discover other interpretations
  • Multiple tests to prevent fallacies of over-interpreting one result
  • The hypothesis underlies the scientific quest for true explanations
  • Implicit hypothesis generation – a default cognitive operation
  • A few suggestions for enhancing rigor in critical thinking and communication in neuroscience

Introduction

In his editorial, “On Fallacies in Neuroscience” (Bernard, 2020), eNeuro Editor-in-Chief Christophe Bernard says that scientific papers contain many logical fallacies because most people don’t think logically. Scientists are not trained in logical thinking, and we do not consider “other interpretations” for our data. The situation is serious: critical thinking skills are essential – the sine qua non [“without which nothing”] – for doing science. Mistaking plausibility for truth hinders “true hypothesis testing” and results in failure to report “non-supporting data.”

Bernard says that false reasoning is a natural tendency of human minds, although people can be trained to do better. Still, he knows it’s unlikely, even undesirable, that all scientists get training in rigorous, formal logic. He’s afraid we’d become “overly critical” at the cost of making progress. It’s probably true that limited training in thinking skills contributes to the reasoning errors that we make. In a survey of hundreds of members of biological societies, including the Society for Neuroscience, I found that most (~70%, n = 444) had ≤ 1 hr of instruction on scientific thinking (Alger, 2019). Yet 92% (n = 348) said they believed that more “formal training” in these matters would be either “very” (66%) or “moderately” (26%) useful. (The number that responded to each question varied; the percentages refer to the number answering that question.)

The scientific hypothesis at the center

What is to be done? The editorial says that we should consider “other interpretations” for our data and be more cautious in drawing conclusions. It ends with the quote above from the famous French physiologist, Claude Bernard. The quote packs a lot of wisdom that could help reduce the problems that Christophe Bernard raises (since there are two C. Bernards in this discussion, I’ll sometimes have to spell out who I mean). I propose to pick up a few threads in the quote and show how the scientific hypothesis, if understood properly and applied judiciously, can help decrease the occurrence of fallacies in neuroscience. Obviously, hypothesis-testing is not the only way to do science; nonetheless, if we want to reduce fallacies and enhance the quality of communications, we need to be clear(er) about what we’re communicating.

Both Bernards allude to the scientific hypothesis as a pivotal idea. Indeed, in analyses of 210 neuroscience research articles from top-tier journals (Alger, 2021), I found that the majority (113/210) tested ≥ 1 hypothesis, although ≤ 42% (47/113) of these explicitly stated a hypothesis (I interpreted “statements” of hypotheses flexibly and could have justified a lower number). Thus, despite its apparent popularity, authors often shy away from laying out their hypothesis directly, maybe because the hypothesis is sometimes controversial (Alger, 2019, Ch. 10) or maybe because scientists aren’t trained in its use. I’ll suggest definitions and propose ways to understand and use the hypothesis. I’ll also point out common fallacies that can result when authors are not careful. Finally, I’ll end with a few ideas for how to nudge the big ship of neuroscience back onto a more favorable course than the one that Christophe Bernard believes we’re on. Importantly, I’ll distinguish between hypothesis-based and non-hypothesis-based research, as fallacious reasoning often results from not keeping them straight.

Scientific not statistical hypothesis

I have to stress that this essay focuses on the scientific hypothesis and not the statistical hypothesis, which is an entirely different thing. The former is a potential explanation for some aspect of nature, whereas the latter is part of a mathematical testing procedure that cannot, by itself, explain anything. So, for example, describing the goals of neuroscience in terms of framing “null hypotheses” (e.g., Buzsáki, 2020; cf. Poeppel and Adolfi, 2020) commits an unfortunate error. The null statistical hypothesis gives us a quantitative basis for judging whether two groups (of data, of subjects, etc.) differ according to some numerical parameter. The statistical hypothesis is often used to test predictions of a scientific hypothesis; however, it is not mandatory for such testing. Conversely, a scientific hypothesis does not have a “p-value.” In fact, some authorities argue that science would not suffer if statistical null hypothesis testing were banned altogether (e.g., Gigerenzer, 2008; Calin-Jageman and Cumming, 2017). When I use the phrase “testing hypotheses,” I always mean testing scientific hypotheses.
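To make the distinction concrete, here is a minimal sketch in Python (with invented numbers) of the division of labor: the scientific hypothesis supplies the explanation and its predictions, while the statistical test merely evaluates one quantitative comparison that a prediction entails. The drug, data, and threshold below are all hypothetical.

    # Hypothetical scenario: the scientific hypothesis "Drug X enhances
    # synaptic transmission" predicts larger EPSP amplitudes in treated
    # slices than in controls. The t-test evaluates only this one
    # quantitative comparison; it does not, by itself, explain anything.
    from scipy import stats

    control_epsp = [1.1, 0.9, 1.3, 1.0, 1.2]  # invented amplitudes (mV)
    treated_epsp = [1.6, 1.8, 1.5, 1.9, 1.7]  # invented amplitudes (mV)

    t_stat, p_value = stats.ttest_ind(treated_epsp, control_epsp)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Rejecting the statistical null hypothesis (say, at p < 0.05) supports
    # one prediction of the scientific hypothesis; it does not prove the
    # hypothesis true, and the scientific hypothesis itself has no p-value.

Note that the p-value attaches to the comparison, not to the explanation; the same point could be made with estimation statistics instead of a significance test.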

Note that nothing in this discussion requires a full understanding of how human minds actually reason. Whether it is via “mental logic,” “mental models” (e.g., Johnson-Laird, 2010), or the operations of “Systems 1 and 2” (Kahneman, 2011), we don’t know. These are questions for cognitive scientists to answer. Nevertheless, there is general agreement that people are basically rational, though not entirely logical. If we try, we can learn to reason in a way that agrees with formal logic. It is “as if” we were following rules, even if this is not what is truly going on in our brains. The countless successes of science and technology prove that our cognitive deficiencies are not fatal limitations. As long as the products of our minds are mostly in sync with what impartial logic calls for, we can do alright.

Standard of truth/falsehood

The goal of science, according to Claude Bernard, is “truth,” not “plausibility” or “likelihood,” but truth. Naturally, the notions of “verification” and “invalidation” are tightly bound up with “truth.” Strictly speaking, you can’t verify or invalidate conclusions that are merely plausible or likely. You might feel that your studies make them more plausible or more likely, but mistaking them for the main goal would be a real “pitfall.” Plainly, we need a standard of truth and ways of reasoning about results if we want to know whether we have “conditionally” verified or invalidated (falsified) a hypothesis. The distinctions between hypothesis-testing and other ways of going about science are crucial for this discussion since the conclusions we can legitimately draw will differ accordingly.

Outside of axiomatic systems (e.g., mathematics and formal logic), it is impossible to deduce all true statements from a limited set of principles (even these systems are ultimately “incomplete” in this sense). As Christophe Bernard points out, empirical science lacks the sharp constraints of Euclidean geometry. Therefore, scientific truths are never 100% certain beyond any conceivable doubt. The importance of this point cannot be overstated. Deep uncertainty, at some level of analysis, may be the hardest message to grasp when trying to understand science. Of course, as Claude Bernard implies, scientists don’t need an air-tight, all-encompassing system, or assurance of perfect certainty, to strive for truth.

Truth is an ideal. Pursuing ideals (“honesty,” “justice,” “peace”) is eminently worthwhile, even though messy reality prevents us from realizing them. Luckily, from a pragmatic standpoint, we can apply valid, logical, deductive reasoning and arrive at “conditionally” true conclusions provided that we agree on word definitions, word usage, and background knowledge.

A hypothesis is a potential explanation

Let’s start with the idea that a hypothesis is a proposed explanation for a phenomenon or observation. It tries to say why things are the way they are. A hypothesis is put forward as a (possibly) true explanation, so we can reason validly from it. A hypothesis makes, or entails, certain predictions. We test hypotheses indirectly by testing the predictions that they make; if the predictions are false, the hypothesis is false. In contrast, we test predictions directly by making measurements, which determine whether they are true or not. The logical relationship between hypothesis and predictions is what allows us to reject a hypothesis when the predictions it makes are false.
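In logical terms, the valid inference here is modus tollens. The sketch below (standard logic notation, added for illustration) contrasts it with the fallacy of affirming the consequent, which treats a confirmed prediction as proof of the hypothesis:

    % Valid: modus tollens (falsification)
    % If hypothesis H is true, then prediction P follows;
    % P is observed to be false; therefore H is false.
    \[ (H \rightarrow P) \wedge \neg P \;\vdash\; \neg H \]

    % Invalid: affirming the consequent (a common fallacy)
    % P is observed to be true; this does NOT establish H,
    % because some rival hypothesis could also entail P.
    \[ (H \rightarrow P) \wedge P \;\nvdash\; H \]

A true prediction can only corroborate a hypothesis; a false one, assuming the test was sound, refutes it outright. This asymmetry is why falsification carries more logical force than confirmation.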

As Karl Popper (1959/2002) made clear, it is impossible to demonstrate the truth of a hypothesis; we can only show that it is false. We propose (conjecture or just guess) a hypothesis as a true explanation for a phenomenon and then test it severely, probing it for weaknesses. If the hypothesis fails the tests, we reject it. If it does not fail, we accept it provisionally (“conditionally”); it is true as far as we know. Scientific facts are “tested-and-not-falsified” hypotheses. (Popper recommends calling these hypotheses “corroborated,” instead of using the awkward phrase. Corroboration is not a synonym for confirmation; see Alger, 2019.)

Example of deductions from an empirical hypothesis

Here’s an example: our current understanding of a “chemical synapse” is that this communication junction between nerve cells has many properties that identify it: distinctive pre- and post-synaptic structures, pre-synaptic neurotransmitter release, post-synaptic receptors, calcium-dependence of release, etc. Effectively, “chemical synapse” is a much-corroborated hypothesis that summarizes and explains the data that led up to it. In other words, the concept itself implies predictions that can be tested.

If we come across a new instance of neuronal communication and guess that a chemical synapse is involved, we can deduce and test (anatomically, pharmacologically, physiologically, etc.) predictions of what properties it must have: pre- and post-synaptic structures with specific characteristics, calcium-dependent transmitter release, etc. If the tests confirm the predictions, then we consider that the hypothesis has been corroborated and we are free to treat it as though it were true (of course it might still be false). The ability to apply valid reasoning to scientific problems does not depend on the existence of a system of mathematical postulates or axioms.

Since we can reason deductively from a hypothesis, we can determine if it is falsified or corroborated by experimental results. An explicit scientific hypothesis thus provides a standard for judging truth and falsehood; right and wrong. We can’t arrive at true statements by arguing deductively from findings that are only “plausible” or “likely.”

Moreover, non-hypothesis-based research strategies such as “asking questions,” “characterizing systems,” and “discovery science” do not offer these standards, and so it is important to distinguish clearly among the various ways of doing science. Non-hypothesis-testing investigations generate massive amounts of data and are an essential arm of science, so I’m not disparaging them. The data they provide form the bases of future hypotheses. However, the distinctions among different approaches directly affect the kinds of conclusions that we can draw. It is best, says Christophe Bernard, to say that a given interpretation “may” be consistent with certain results, rather than that it is implied by them.

Right reasoning about a hypothesis to reduce fallacies

Fallacies can arise from misunderstanding the logical connections between hypotheses and predictions. If you test a prediction that is not truly implied by a hypothesis, then your test can’t say much about whether the hypothesis is true or not, regardless of the results. Doing so would be a fallacy, and yet such fallacies are common in neuroscience papers. For instance, studies of animal behavioral learning often throw in a cellular study of LTP in an in vitro brain slice preparation. They might, for example, show that “Drug X,” which causes behavioral learning in the intact animal, also causes LTP in the isolated brain slice. While this may be a reassuring sort of test, a positive in vitro LTP experiment is not a true prediction of a behavioral learning hypothesis. We just don’t know enough to make this connection.

In general, how can we tell if we’re dealing with a true prediction or not? One good way is to ask what happens if the test fails. Would the hypothesis be falsified? In the example, if Drug X didn’t induce or enhance in vitro LTP, it wouldn’t mean that the drug didn’t cause the behavioral learning to occur. Other factors – maybe a neuronal growth process – could have been responsible. Doing this thought experiment tells you whether your experiment is superfluous or necessary. Explicitly stating a hypothesis puts your reasoning out there for all to see. This clarifies communication by highlighting the relationship between your experimental results and your conclusions, and lets everyone judge their validity.

And, incidentally, laboratory lore has it that superfluous tests in a paper often signal that a journal reviewer (the infamous “Reviewer #3”?) had a heavy hand in determining the published version. If reviewers paid closer attention to the details of hypothesis-based reasoning, specifically to which tests are in fact necessary to test a hypothesis, the quality of reviews could well be enhanced.

Falsifiability as a way of telling if an argument is sound

The strategy of assessing falsifiability can also simplify problems posed by different “levels” of analysis in scientific studies. Gomez-Marin (2021) enumerates fallacies that come from attempts to draw conclusions from data obtained at one level (e.g., the study of synapses) about problems posed at a very different level (e.g., “mind”). Buzsáki (2020) and Poeppel and Adolfi (2020) debate similar concerns about how to “map” results from neurobiology onto psychology and vice versa. The problems sound abstract, and the boundaries between the mappings are in fact indistinct, which increases the difficulties. However, the standard of falsifiability could help sharpen the boundaries and make the problems more solvable. Specifically, if an experimental outcome at one level of investigation cannot, in principle, falsify a hypothesis at another level, then trying to draw conclusions across the levels will result in fallacies.

Here’s an example: Poeppel and Adolfi (2020) say that trying to understand human beliefs and desires in terms of hypotheses based on a complex psychological construct such as “greed” is reasonable. In contrast, trying to understand beliefs and desires in terms of cellular neurobiological mechanisms, say LTP, would be a “category jump” that is “not plausible.” Their segregation makes intuitive sense, yet key questions are left unanswered. Why would studies of synaptic LTP help in some cases and not others? How can we tell? After all, crossing physical levels of analysis isn’t always meaningless – understanding the fundamental physics of electrons can help explain why computer chips work. But the physics of electrons can’t help explain why a computer’s output is “4” when its input is “2 + 2 =.”

We can apply the standard of falsifiability. Tests of predictions about synapses can falsify higher-level hypotheses related to some kinds of nerve cell activity, say epileptic seizures. Synaptic analysis can’t help explain the emotional sensation of fear that some patients experience during a seizure. If tests of LTP-related hypotheses can’t falsify a hypothesis about a psychological belief, this would define the fallacy of impermissible category-jumping. Recognizing the borders set by the scope of a given hypothesis can thereby reduce flaws like these in neuroscience reasoning.

Multiple hypotheses to discover other interpretations

There are many potential ways to explain any phenomenon. John Platt (1964) says we should formulate multiple hypotheses to account for a given phenomenon. Adopting his procedure would be an easy way for scientists to get into the habit of considering other interpretations for their data. Instead of being satisfied with the first explanation you come up with, put it aside and think of another one, or more, to account for the same data.

This habit has several benefits: it forces you to think about other interpretations, and so it decreases fallacies like leaping to conclusions and confirmation bias. It also improves experimental design. A good experiment will falsify one hypothesis and leave one of your alternatives standing. Generating multiple hypotheses makes you think seriously about your data and what you can rightly conclude from them. Finally, the rejection of genuine, meaningful alternative hypotheses promotes respect for the process of disproving hypotheses. Publication of “negative data” is a step towards reducing the fallacies associated with publication bias.

Multiple tests to prevent fallacies of over-interpreting one result

As noted, much neuroscience research involves hypothesis testing. Almost all studies report testing more than one prediction of a given hypothesis. The conclusions of such investigations are based, not on the outcome of one test, but rather on the aggregate of all of the tests of that hypothesis. Although no single method for summing up the various results into one statistic that evaluates them all is widely accepted, statistical methods that can do this exist (Alger, 2020). As more scientists appreciate the benefits of multiple tests, consensus may form around one of them. Even without an agreed-on statistical parameter, most people instinctively accept that a diverse group of several different experimental tests is a more rigorous way of testing a hypothesis than just doing one test. In any event, testing several predictions of a hypothesis is a good way to guard against fallacies of hasty (or over-) generalization that are based on slim evidence.
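As one concrete illustration (not necessarily the method Alger, 2020, discusses), Fisher’s combined-probability test pools the p-values from several independent tests of the same hypothesis into a single statistic; the p-values below are invented.

    # Hypothetical illustration: combining p-values from several independent
    # tests of one hypothesis with Fisher's method, where
    # X^2 = -2 * sum(ln p_i) follows a chi-squared distribution with
    # 2k degrees of freedom under the joint null.
    from scipy.stats import combine_pvalues

    # Invented p-values from, say, anatomical, pharmacological, and
    # physiological tests of the same hypothesis.
    p_values = [0.04, 0.11, 0.03]

    chi2_stat, p_combined = combine_pvalues(p_values, method="fisher")
    print(f"chi-squared = {chi2_stat:.2f}, combined p = {p_combined:.4f}")

Whether such a summary is appropriate depends on the tests being independent and on each p-value bearing on the same hypothesis; the point here is only that aggregation is formally possible.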

The hypothesis drives the scientific quest for true explanations

Future developments will undoubtedly call for revising (or rejecting) currently well-established hypotheses, e.g., the standard hypothetical model of a chemical synapse that we’ve talked about. In that case, our current model was pieced together over a century. It’s been tested and corroborated many, many times and has made huge advances in brain science possible. Despite its great power, new findings – the discoveries of “backwards” synaptic transmission, of molecular bridges from pre- to post-synaptic elements, of glial cell participation, etc. – will force neuroscience to reject it.

Our quest to understand and explain the new data will call for the formation of a better and more-encompassing hypothesis. Other modes of science that emphasize data gathering, curiosity, characterization of systems, etc., also piggy-back on powerful natural urges to understand. They generate information that contributes to the formation and testing of hypotheses. The various ways of doing science complement each other.

Implicit hypothesis generation – a default cognitive operation

The concept of testing hypotheses with experiments capable of falsifying them dovetails with two powerful, innate tendencies of the human mind. The first, just discussed, is to seek explanations for phenomena, and the second is to use counterexamples (i.e., falsifying cases) as our primary way of reasoning in daily life. Johnson-Laird (2010) finds that we evaluate a new principle or concept by trying to imagine a counterexample. That is, we try to come up with a situation that would, if it happened, disprove the rule. These are both powerful built-in tendencies, and it seems far better to embrace and work with them than to try to suppress, ignore, or disguise them. One snag here is that our tendency to seek explanations is ordinarily subliminal. We do it smoothly and automatically, and are frequently unaware of what we’re doing.

We can strengthen our scientific reasoning skills and improve the clarity of our communications by stating our hypotheses overtly so that everyone knows what they do and do not say. To quote yet another famous French scientist, the physicist Henri Poincaré (1905/2013), “Some hypotheses are dangerous: first and foremost are those which are tacit and unconscious.” If we don’t know about our tacit and unconscious hypotheses, then we’re at their mercy. Our thinking is subtly biased or misguided by them. Much of the fallacious reasoning that Christophe Bernard’s editorial lays out arises from their hidden influences.

Conclusions

Science is a social activity; we rely on the community mind (consensus, e.g., Oreskes, 2019) for our most sweeping and lasting decisions. Individual contributions should be as well-reasoned as possible, of course, even though they’re usually less broad and solid than the group decisions are. There is therefore a communal need for agreement on what counts as sound scientific reasoning.

Christophe Bernard hints at the crucial role that journal editors and reviewers can play in limiting the spread of faulty reasoning. His informal experiment shows that flawed reasoning can get by reviewers who are either unaware of the fallacies or don’t see them as serious problems. Ultimately, these gate-keepers have the final say over what goes into the scientific record. Achieving common-sense standards for critical thinking and communication that all (or most) members of the community can buy into should be a high-priority matter.

A few suggestions for enhancing rigor in critical thinking and communications in neuroscience

1. Specify the nature of the work

  • Be up-front. Encourage authors to state whether or not their paper primarily tests one or more hypotheses, in whole or in part. And, if not, to describe briefly what its purpose was. Few papers will embody only one mode of scientific procedure, and stating plainly what’s going on will make it easier for reviewers and readers to get the major message of the work.
  • Say what you mean. For hypothesis-testing papers, authors should explicitly state their hypothesis and the predictions that they test. The statement of the hypothesis can appear after presenting preliminary, exploratory, data-gathering experiments; i.e., when there is a phenomenon or observation that needs to be explained.
  • Justify your conclusions.  Conclusions summarizing hypothesis-testing work will often genuinely “follow” from the experiments. More wide-open modes of research will give rise to conclusions that “may” be compatible with the data but will also allow for other interpretations. Authors should evaluate additional interpretations in their Discussions.

2. Update reviewers and editors on appropriate reviewing standards.

  • Tests of a hypothesis may be valid even if they are not exhaustive. Reviewers of a hypothesis-testing paper should not see their role as a contest to come up with more novel or extensive tests of the hypothesis than the authors did. Reviewers should comment on the adequacy of the tests that were done and note glaring oversights. But demanding the most comprehensive analyses imaginable should be out of bounds.
  • Non-hypothesis-testing papers should be assessed on the substantial, solid information they provide, not on whether they test a hypothesis.

3. Review the logic

  • At least one reviewer should specifically be asked to review the internal logic of the paper – Are there fallacies in reasoning?  Do the conclusions indeed follow from the data and are they appropriate for it?  This requirement may be hard to put into practice, but it is essential. To upgrade the logical soundness of published work, as Christophe Bernard urges us to do, reviewers must consciously attend to this aspect of their reviews.

References

Alger BE (2019) Defense of the scientific hypothesis: from reproducibility crisis to big data. New York: Oxford University Press.

Alger BE (2020) Scientific hypothesis-testing strengthens neuroscience research. eNeuro 7(4):ENEURO.0357-19.2020. doi: 10.1523/ENEURO.0357-19.2020.

Alger BE (2021) Isaac Newton was not implacably opposed to the scientific hypothesis, no matter what he said: lessons for today’s neuroscientist. Soc. Neurosci. Abstr: 2021-J-1660

Bernard C (2020) On fallacies in neuroscience. eNeuro 7(6):ENEURO.0491-20.2020. doi: 10.1523/ENEURO.0491-20.2020.

Buzsáki G (2020) The brain-cognitive behavior problem: a retrospective. eNeuro 7(4):ENEURO.0069-20.2020. doi: 10.1523/ENEURO.0069-20.2020.

Calin-Jageman R, Cumming G (2017) Introduction to the new statistics: estimation, open science, and beyond.  New York: Routledge.

Gigerenzer G (2008) Rationality for mortals: how people cope with uncertainty. New York: Oxford University Press.

Gomez-Marin A (2021) Promisomics and the short-circuiting of mind. eNeuro 8(2):ENEURO.0521-20.2021, 1–5.

Johnson-Laird PN (2010) Mental models and human reasoning. Proc Natl Acad Sci USA 107:18243-18250.

Kahneman D (2011) Thinking fast and slow. New York: Farrar, Straus and Giroux.

Oreskes N (2019) Why trust in science? Princeton: Princeton University Press.

Platt JR (1964) Strong inference. Science 146:347-353.

Poeppel D, Adolfi F (2020) Against the epistemological primacy of the hardware: the brain from inside out, turned upside down. eNeuro. doi: 10.1523/ENEURO.0215-20.2020.

Poincaré H (1905) Science and hypothesis. Reprinted by CreateSpace Independent Publishing Platform, 2013.

Popper K (1959) The logic of scientific discovery. New York: reprinted by Routledge Classics, 2002.
