Dilemma in Modern Neuroscience: The Uncertainty Principle Revisited
Gansheng Tan, Washington University in St. Louis, St. Louis, MO, USA (g.tan@wustl.edu)
What is reality? How can we understand the world? I once naively believed that scientific discovery, assisted by instrumentation and technology that transform phenomena into human-comprehensible properties, would ultimately transcend the limits of human knowledge. However, this idealistic belief may be just a fantasy.
The book “When We Cease to Understand the World” by Benjamín Labatut recounts pivotal advances in physics, including Heisenberg’s uncertainty principle, which pierced my paper-thin ideology like a bamboo skewer. Heisenberg showed that the act of observation or measurement inherently alters the properties of a physical system, making it impossible to measure complementary properties, such as position and momentum, simultaneously and with arbitrary precision. This foundational notion resonates beyond quantum mechanics, infiltrating various fields of scientific inquiry, including neuroscience. Simply put, observation itself determines what we observe.
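In its canonical form (the standard textbook statement, not a claim specific to Labatut’s account), the principle bounds how precisely position and momentum can be known together:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

where $\Delta x$ and $\Delta p$ are the standard deviations of position and momentum, and $\hbar$ is the reduced Planck constant.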
At its core, neuroscience seeks precise measurements of neural activity. Yet, much as photons behave differently when measured, neural signals are altered by the very act of measurement. As an intuitive example that departs from Heisenberg’s original formulation, inserting an electrode to record neural activity may perturb local physiological conditions, modifying the very neural responses being studied. Drawing a parallel from quantum mechanics, the electrons carrying the recorded signal might simultaneously exist across numerous potential paths in space-time; the signal displayed on the amplifier merely represents the outcome of observation at the present moment. This raises significant doubts about whether neuroscientific measurements genuinely reflect an underlying biological reality or are merely numbers produced by the act of observation.
Science, and experimental neuroscience in particular, relies on the reproducibility of conditions and the consistency of results across repeated measurements. Yet even under meticulously controlled experimental settings, variations in results emerge. Some of this variation can be attributed to uncontrollable environmental factors, but a portion remains elusive. This unpredictability aligns philosophically with Heisenberg’s assertion that perfect reproducibility may be inherently unattainable. Consequently, this uncertainty calls into question the way we use statistical error terms.
In scientific practice, for example, regression analyses include error terms that conveniently absorb the variance a model cannot explain. By attributing discrepancies to statistical error, researchers sidestep potential conflicts with established theories and inadvertently reduce the incentive to challenge or refine prevailing scientific paradigms. When deviations from theoretically predicted relationships are not explicitly flagged, conflicting studies are folded into one bulk of evidence in the name of reproducibility, and neuroscience risks stifling the pursuit of universally applicable principles. Consequently, the field becomes inundated with incremental findings disconnected from broader, transformative insights. Something like this happened in the 2000s, when prominent statisticians and neuroscientists called for investigations into the variation in results across studies of similar scientific questions. Instead of identifying conflicts that could seed next-generation theory, we ended up concluding that the inconsistent findings were “noise”.
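As a toy illustration of that habit (a minimal sketch with simulated data and made-up parameter values, not an analysis from any cited study), the residuals of an ordinary least-squares fit can carry systematic structure that a routine report would simply label error variance:

```python
import numpy as np

# Minimal simulation (illustrative values only): a hidden, structured component
# of y is absorbed by the regression's error term and would be reported as noise.
rng = np.random.default_rng(0)

x = rng.uniform(0, 10, 200)                    # measured predictor
hidden = np.sin(x)                             # structured effect the model ignores
y = 2.0 * x + 1.5 * hidden + rng.normal(0, 0.3, x.size)

# Ordinary least squares with an intercept: y ~ b0 + b1 * x + error
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# The residuals still correlate strongly with the hidden structure, yet a
# routine report would fold them into a single error variance.
print("intercept, slope:", beta)
print("residual correlation with hidden structure:",
      np.corrcoef(residuals, hidden)[0, 1])
```

The point is not the particular model but the bookkeeping: structured disagreement with the theory ends up inside the error term unless someone looks for it.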
Historically, revolutionary scientific advances arose precisely because researchers recognized discrepancies between experimental observations and existing theoretical frameworks, compelling them to devise entirely new explanations. In today’s practice, the habitual relegation of unexplained variance to statistical error undermines opportunities to identify and interrogate these meaningful contradictions. The result is a series of observations wrapped in narratives tailored to conform to an existing theory. These observations are often framed as a model rather than as a definitive theory, one that would invite disagreement and drive the field closer to the truth, or at least toward a more generally applicable theory.
Reflecting on this dilemma, I see two pathways forward. The first is to accept science as fundamentally distinct from reality; its primary purpose, in this view, is not necessarily to understand the world fully but to devise practical “rules of thumb” that simplify human activities. The second, which I advocate as a determinist inclined towards deeper understanding, is to recognize the inherently stochastic nature of reality while pursuing the underlying principles governing these stochastic processes. My greatest hope lies in exploring interconnectedness within the universe, as exemplified by quantum entanglement, where properties remain correlated despite separation in space and time. On a practical level, enhancing human cognition through technology, so that we can perceive fundamental yet non-survival-related aspects of reality and properties of matter, might be one way around the barrier we face in sensing and measuring those properties.
I also encourage explicitly confronting uncertainty rather than sidestepping it when seeking to understand neural mechanisms. To this end, I welcome Bayesian approaches, which explicitly quantify how each new observation contrasts with or strengthens established knowledge, an attitude that aligns with humanity’s collective pursuit of understanding. While I appreciate frequentist approaches for their elegance and practical utility in summarizing event probabilities, I caution against their misapplication, particularly defining the null hypothesis merely as a foil to one’s findings, when that null happens to be the current state of knowledge, or using hypothesis testing merely to establish a statistical difference. We should keep in mind that scientific advancement often emerges precisely from unexplained or contrasting findings.
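As a minimal sketch of what I mean (a conjugate normal-normal update with assumed prior and noise values chosen purely for illustration, not drawn from any specific study), each observation visibly shifts the posterior, so tension with prior knowledge is quantified rather than buried:

```python
import numpy as np

# Bayesian updating sketch: the prior encodes "established knowledge" about an
# effect size; each new observation either reinforces it or pulls against it.
prior_mean, prior_var = 0.0, 1.0       # assumed prior belief about the effect
noise_var = 0.5                        # assumed measurement noise variance

def update(mean, var, observation):
    """Posterior after one observation under a normal-normal model with known noise."""
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)
    post_mean = post_var * (mean / var + observation / noise_var)
    return post_mean, post_var

observations = [0.8, 1.1, 0.9, 1.2]    # hypothetical new measurements
mean, var = prior_mean, prior_var
for obs in observations:
    mean, var = update(mean, var, obs)
    # A large shift of the posterior away from the prior flags a conflict with
    # established knowledge instead of hiding it in an error term.
    print(f"obs={obs:+.2f} -> posterior mean={mean:.3f}, sd={var ** 0.5:.3f}")
```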
Ultimately, embracing uncertainty rather than obscuring it offers the potential to cultivate meaningful insights into neural mechanisms. However, comprehensively addressing uncertainty would require systematically examining an effectively infinite set of factors, because the variables we define are language-based and human language is self-generating, continually producing new concepts and distinctions. Data science does attempt to capture the relationships among such variables, but primarily by reducing their dimensionality. This effort highlights an inherent tension: the philosophical need to explore an infinite set of factors and the paradoxical human drive to distill infinite complexity into finite comprehension.
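As a concrete, hedged illustration of that distillation (a principal-component sketch on simulated data, assuming a hypothetical set of 50 measured variables driven by a few latent factors), a handful of components is made to stand in for the whole:

```python
import numpy as np

# Dimensionality reduction sketch (PCA via the SVD) on simulated data:
# 50 observed "variables" generated from only 3 latent factors, so a few
# components absorb most of the variance, i.e., seemingly boundless complexity
# distilled into finite comprehension.
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 3))             # hidden low-dimensional causes
loadings = rng.normal(size=(3, 50))            # how they map to observed variables
X = latent @ loadings + 0.1 * rng.normal(size=(500, 50))

Xc = X - X.mean(axis=0)                        # center each variable
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)                # variance explained per component

k = 3
Z = Xc @ Vt[:k].T                              # compressed representation
print(f"variance retained by {k} components: {explained[:k].sum():.3f}")
```

The compression is useful precisely because it discards dimensions, which is the tension the paragraph describes.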