Sunday, September 22, 2013

Where do questions come from, and where do they go? (Scientific R&D, Part 1.2)

Where do questions come from?

[Image: a quotation commonly attributed to Asimov, though I haven't yet found a credible citation]

As I argued in my previous post, the questions that animate science must make the erotetic cut, but this is only a necessary condition.  We still need to answer the question, "where do such questions come from?"  As with scientific R&D in general, there is nothing uniquely scientific about asking questions.  Humans being creatures endowed with an insatiable thirst for understanding, we are (or seem to be) predisposed to fixate on our points of confusion and gaps in knowledge as we encounter them.  Ultimately, this is where the questions that animate science come from, though not all such questions are well-suited to observational scrutiny.  As I have previously mentioned, if an idea cannot be held accountable to observation, then we are not doing science; this applies specifically to the ideas we offer in answer to our questions, and by extension to the questions themselves.  In this post, I will narrow my focus (for the most part) to those questions and answers that are amenable to observational scrutiny.

The questions that animate science tend to come from one of two directions, a fact that probably does not seem noteworthy until I tell you that it is the source of a surprising amount of conflict within the scientific community.  I’ll get to the conflict in a bit, but it would be helpful if we first understood what these two approaches are.  I will call them observation-driven inquiry and theory-driven inquiry.

Observation-driven inquiry begins with curious patterns detected in observational data, characteristics of some dimension of reality that we have described and that seem unique, or that make us wonder why they aren't some other way instead.  That the world works in some way is made clear by such observations, but exactly how it works to that end remains to be determined.

For example, over the course of the last two and a half to three centuries, the global population, along with the regional and local sub-populations that it comprises, has been undergoing a transition from a regime characterized by high mortality and high fertility rates to one characterized by low mortality and low fertility rates.  Furthermore, a lag in time between the mortality and fertility transitions has driven the explosive population growth that has led to our current world population of 7,000,000,000+, though fortunately the rate of growth now seems to be declining, and there is at least a hope that the population will level off by the end of this century.  Taken together, these trends in fertility, mortality, and population growth are known as the Demographic Transition (or DT for short).  The changes in global population size and age structure that these trends have entailed present us with pressing concerns for the sustainability of our current social, political, economic, and medical systems, so we are compelled to understand the factors that have driven and continue to drive changes in growth rates, in other words, the factors that drive changes in fertility and mortality rates.  The DT thus poses a pressing target for understanding; it raises questions like, why have these mortality and fertility transitions unfolded in the way that they have? and what are the circumstances under which such changes are likely to occur, in which direction, and to what social, political, economic, and/or medical effect?  Demographers have been trying to answer these questions since the early Twentieth Century, with early guesses emphasizing Neoclassical economic principles, later ideas stressing the diffusion of cultural norms from centers of development to developing regions, and a plurality of current answers hybridizing various aspects of previous perspectives.  At the same time, we have collected better demographic data from many areas of the world that were not originally available to the early DT researchers, and of course new data continue to come in as the ongoing population dynamics of the world continue to unfold, meaning that the explanatory target of DT research has shifted around a bit since its inception.
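If the role of that lag is hard to picture, a toy projection makes the point.  This is my own back-of-the-envelope sketch with invented round numbers (crude per-1,000 rates over three 50-year regimes), not real demographic data:

```python
# Toy illustration of the Demographic Transition's middle act: deaths fall
# first, births fall only later, and the population balloons in between.
# All rates are invented round numbers per 1,000 people per year.

def project(population, birth_rate, death_rate, years):
    """Compound the crude growth rate (births minus deaths) over the years."""
    for _ in range(years):
        population += population * (birth_rate - death_rate) / 1000.0
    return population

pop = 1_000_000
pop = project(pop, birth_rate=40, death_rate=40, years=50)  # old regime: no growth
print(f"after old regime:     {pop:,.0f}")
pop = project(pop, birth_rate=40, death_rate=15, years=50)  # mortality falls first
print(f"after mortality drop: {pop:,.0f}")
pop = project(pop, birth_rate=15, death_rate=15, years=50)  # fertility catches up
print(f"after fertility drop: {pop:,.0f}")
```

The population more than triples during the lag and then levels off: the explosive middle act of the DT in miniature.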

A second example comes from geology, regarding the extent and causes of what may have been our planet’s most dramatic ice age, the late Neoproterozoic Ice Age.  Beginning in the late Nineteenth Century, a series of distinctive geological deposits known as late Neoproterozoic Glacial Deposits (LNGDs) have been discovered on every continent, suggesting that virtually all of the Earth’s terrestrial surfaces, including in the temperate and tropical latitudes, were once covered by extensive ice sheets sometime between approximately 850 million and 635 million years ago.  Our planet has not experienced such extensive glaciation since, making it all the more noteworthy.  The question thus arises, what unique set of factors came together to lead to such an unparalleled ice age, and could it happen again?  As with DT research, a number of competing explanations have been offered for this ice age ever since its global extent was recognized in the mid-Twentieth Century.  One family of explanations emphasizes the interaction between changes in the Earth’s orbit around the Sun and the tilt of its axis, which could have altered the daily and annual intensity of solar radiation reaching the Earth's surface, potentially allowing for the formation and expansion of continental ice sheets.  Alternatively, the “Snowball Earth Hypothesis” suggests that a phenomenon called “albedo,” the reflection of sunlight by light-colored surfaces, led to a process of positive feedback in which the initial formation of continental ice sheets in the tropics acted to deflect sunlight, leading to further global cooling and allowing for the formation of even more extensive ice sheets.  In this scenario, the runaway albedo effect propagated to such a degree that even the oceans were covered by a sheet of ice (hence the label “Snowball Earth”), or at least were globally slushy (“Slushball Earth”).  The jury is still out on the most credible answer to this question, though at present, as I understand it, the Snowball Earth hypothesis is faring much better than the orbital hypotheses in the debate.
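For readers who like to see a feedback loop actually run, here is a minimal energy-balance sketch of the runaway albedo effect, loosely in the spirit of Budyko's model (which comes up again below).  Every number in it is an illustrative assumption of mine, not a calibrated value:

```python
# Minimal zero-dimensional sketch of the ice-albedo feedback: colder ->
# more ice -> more sunlight reflected -> colder still. Illustrative only.

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
GREENHOUSE = 33.0  # assumed constant greenhouse warming, K

def albedo(temp_k):
    """Assumed ice-albedo relation."""
    if temp_k >= 280.0:
        return 0.3  # warm: little ice, mostly dark surfaces
    if temp_k <= 250.0:
        return 0.7  # frozen: global ice cover
    return 0.3 + (280.0 - temp_k) / 30.0 * 0.4  # ramp between the two states

def equilibrium_temp(t_start, steps=500):
    """Relax the balance (1 - albedo(T)) * S0 / 4 = sigma * T^4 to a fixed point."""
    t = t_start
    for _ in range(steps):
        absorbed = (1.0 - albedo(t)) * S0 / 4.0
        t_balance = (absorbed / SIGMA) ** 0.25 + GREENHOUSE
        t += 0.1 * (t_balance - t)  # damped update toward balance
    return t

print(round(equilibrium_temp(290.0), 1))  # starts warm, settles warm (~288 K)
print(round(equilibrium_temp(255.0), 1))  # starts icy and runs away (~239 K)
```

Two starting temperatures settle into two different stable climates; the cold start runs all the way down to the frozen state, which is the "runaway" in miniature.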

Unlike observation-driven inquiry, theory-driven inquiry begins with a prior belief already in mind about the way that some aspect of the world works.  In this case, questions arise particularly when such prior understandings run into observations that seem to contradict them.  The critical word here is seem, because many seeming contradictions turn out, upon further examination, to be mere paradoxes, seemingly contradictory statements that may nonetheless be true.  In the case of a paradox (but not a true contradiction), the root of confusion typically lies in a flaw or a gap in our own understanding rather than a true mismatch between the two statements in question.  The goal of theory-driven research is then to come up with a solution to the puzzle that accommodates the seeming incongruity between prior understanding and new observation, in other words to show that the seemingly contradictory observation does not in fact violate the prior understanding.  The effect of such accommodation is that the newly observed reality is neatly subsumed under the prior belief.

For example, Neo-Darwinian theory (which synthesizes Darwin’s theory of natural selection with modern genetics) asserts that biological traits (anatomical, physiological, or behavioral) are not expected to emerge or persist in a species if they would act to undercut the fitness of individual organisms within that species, i.e., if they would undermine their ability to “leave more surviving offspring or more copies of their genes” (John Alcock, The Triumph of Sociobiology, p. 32).  Alcock continues on to say that
“This is a theoretical perspective, and like all useful theories, it shapes the expectations of observers in productive ways, so that they can first identify the surprising features of nature and then develop testable hypotheses to account for these surprises.  Someone who understands Darwinian theory is prepared to be puzzled by certain things, not others.”
For example, the emergence and persistence of altruistic behaviors in various highly social species, including social insects, group-living mammals, and others besides, have provided one of the major puzzles for sociobiological research (the area of Darwinian evolutionary biology devoted to the study of the evolution of social behavior).  If individuals are designed by natural selection to promote the reproduction of their genes into the future, why would an individual ever invest its own time, energy, or resources in the well-being of another individual at the expense of its own?

Scientists who are accustomed to approaching research from the angle of observation-driven inquiry often regard the theory-driven approach with considerable suspicion because, at first impression, it seems to embody a rather non-scientific approach to establishing belief.  Making accommodations for seeming contradictions is the business of apologists, defenders of the faith, not scientists, because such behavior short-circuits the definitively scientific act of endangering ideas.  Or does it?

As it turns out, this approach is nowhere near as unscientific as it may initially appear.  First, as advocates of the theory-driven approach would counter, successfully demonstrating that new observations continue to fall within the boundaries set on reality by old understandings speaks volumes for the continued value of those understandings in helping us to make better sense of our world.  Second and more importantly, researchers dedicated to this approach will often be the first to admit that their success in accommodating the paradoxical depends on the goodness of fit between their accommodation and further rounds of observational scrutiny.  In other words, the accommodation that resolves the paradox and subsumes observation under prior understanding is itself treated as a testable hypothesis, and not all such accommodations will stand up to scrutiny.

Nor are all approaches to theory-driven inquiry so monopolistic in trying to subsume new observations under the coverage of a single understanding.  Evolutionary biologists, for example, readily concede that there are other evolutionary processes besides natural selection that can explain changes in the frequency of genes or biological traits within a species, including changes that either reduce reproductive fitness or are selectively neutral.  Thus, sometimes the solution to a Darwinian puzzle requires no special intellectual contortion to resolve a paradox but instead draws upon other, non-selective theories about evolutionary processes.  On this point, evolutionary biologists distinguish between four “forces” of evolution – mutation, selection, gene flow, and genetic drift – and further distinguish between different kinds of selection (natural, sexual, group, artificial), all of which are expected to operate under different sets of circumstances.

A similar situation holds in medicine and public health regarding the causes of disease.  One of the most revolutionary intellectual developments in medicine over the last three or more centuries was the introduction of what we now call the germ theory of disease.  This idea suggests that many diseases are caused by infection by small critters (i.e., “germs”) like prions, viruses, bacteria, protozoa, fungi, helminths (worms), and arthropods (arachnids, insects), which parasitize the bodies of their hosts for their own purposes, to the detriment of the host’s normal biological functions.  With the emergence of microbiology, parasitology, and immunology, our ability to demonstrate the presence and adverse activities of such pathogens has revolutionized our ability to understand the sources of many diseases, and to treat them…

But not all of them.  While our understanding of many diseases has improved considerably because of the germ theory of disease, this in no way changes the fact that a good number of other diseases are instead caused by genetic defects (e.g., sickle cell anemia, Tay-Sachs disease), mistakes during fetal development, or exposure to various detrimental substances throughout life (smoke, smog, sugar, salt, saturated fats, carcinogens, poisons, allergens, etc.).  For this reason, pathologists and epidemiologists have hardly given up on alternative explanations of disease, just as most evolutionary biologists have not given up on genetic drift, gene flow, and mutation as drivers of evolution alongside the incredibly powerful concept of selection.

So, we might further subdivide theory-driven inquiry into two subcategories: (1) accommodation-driven inquiry, which attempts to subsume puzzling phenomena under prior understandings by showing them to be mere paradoxes, and (2) a “which theory is better?” approach, which attempts to identify which out of a set of prior understandings fits best with a given observation.  In fact, this second approach comes very close to the observation-driven mode of inquiry I described above.  For example, in their efforts to understand the climatic mechanisms that drove the late Neoproterozoic Ice Age, geologists did not simply make up new ideas to fill this gap.  Instead, they went to one of two prior understandings of glaciation, each having much broader applicability than just to the ice age in question.  Explanations that emphasize orbital parameters go back to the work of the early Twentieth Century Serbian scientist Milutin Milanković, whose ideas predated the discovery of the Neoproterozoic Ice Age and are still held in high esteem regarding the cycling of episodes of glaciation and deglaciation during the Pleistocene epoch (from approximately 1.8 million to approximately 12 thousand years ago).  Likewise, the concept of a runaway albedo effect, which serves as the backbone of the Snowball Earth hypothesis offered by Joe Kirschvink in 1992, was originally envisioned by the Russian climatologist Mikhail Budyko in the 1960s, who saw such runaway albedo only as an extreme and unlikely special case of a more general model of albedo that he developed.  So perhaps there isn’t such a huge difference between observation-driven and theory-driven researchers after all, at least not in every case.

This brings us to the contentious topic of ‘theory.’  As many readers are probably aware, evolutionary biologists, climate scientists, and their respective sympathizers frequently butt heads with unbelievers over the meaning of the word ‘theory’ (among other things), particularly when it comes to dismissive expressions like “evolution is just a theory.”  Unfortunately for everyone involved in the debate, in common usage, ‘theory’ has the meaning ‘untested idea,’ making it synonymous with ‘conjecture,’ ‘speculation,’ ‘guess,’ ‘hunch,’ or something you dreamt up after being drunk all night (another quote commonly attributed to Asimov) ... not that guesses have no place in science.  Thus, to assert that evolution is just a theory is to assert that it is a baseless speculation, and only one among many alternative ideas about the nature of life on Earth, at that.  The rehearsed response of scientists is to counter that ‘theory’ stands out from hypotheses not because theories are untested hypotheses but on the contrary because they are exceptionally well-tested and observationally well-supported hypotheses.  Thus, in scientific jargon, calling an idea a 'theory' is high praise, synonymous with 'knowledge,' not a dismissal.

But the story is a little more complicated than scientists usually let on, because in fact we use ‘theory’ in two different ways (a poorly recognized fact that unfortunately creates the potential for the related fallacies of equivocation and amphiboly; see also here and here).  The first sense of ‘theory’ is the one just defined, referring to a well-supported hypothesis, standing as the result (i.e., the end) of a research cycle.  The second sense of ‘theory’ is the one discussed earlier, referring to a prior understanding, well-supported or otherwise, that functions not to finalize research but to catalyze it.  In this case the theory is not directly tested or testable, only the accommodations that are intended to link it to observation.  This meaning of ‘theory’ comes much closer to the meaning intended when we say “the theory of evolution through natural selection,” “the germ theory of disease,” or “theoretical physics.”

Effectively, what these research-orienting theories do is provide a generic framework for the stories that we might tell in our efforts to account for whatever phenomena we hope to explain.  These theories are deliberately vague in detail, asserting only a broad outline for explanations of mysterious phenomena.  For example, while the germ theory of disease tells a generic story about the invasion of host organisms by smaller organisms that then prey upon the host, causing all kinds of health problems for the host in the process (i.e., the dis-ease), the theory remains deliberately silent regarding the particular details of such infections.  This vagueness is the secret to the theory’s success, because as it turns out, there are many different kinds of infectious pathogen (prions, viruses, bacteria, protozoa, fungi, helminths, and arthropods), each relying on different routes of introduction into the host body and exploiting the host body in different ways, leading to different health outcomes: acute vs. chronic disease; nausea vs. pain vs. fever; mild illness vs. fatality; etc.
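If it helps to see that deliberate vagueness laid out, think of the theory as a template whose particulars are left blank for each disease to fill in.  A throwaway sketch of my own (the fields and the examples are illustrations, not a catalog):

```python
# The germ theory's generic story as a template with blanks: a pathogen,
# a route into the host, and a harm done. The particulars vary per disease.

from dataclasses import dataclass

@dataclass
class Infection:
    pathogen: str  # what kind of germ
    route: str     # how it gets into the host
    harm: str      # what it does once inside

# The theory stays silent on the details; each disease fills in its own:
cholera = Infection("bacterium", "contaminated water", "acute diarrheal disease")
influenza = Infection("virus", "respiratory droplets", "fever, aches, fatigue")
malaria = Infection("protozoan", "mosquito bite", "cyclical fevers, anemia")
```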

At face value, the untestability of this kind of theory may seem like a huge liability for productive debate between different communities of belief, at least when one or more of the theory's components and the questions it generates are disputed.  As I suggested in my previous post, asking any question that assumes any reality that the audience is unwilling to accept creates the problem of the loaded question.  Despite this seeming liability, however, I doubt that things are so bleak, for two related reasons:

First, many of the theories that underlie the questions that scientists ask turn out to have huge utility.  For example, in the case of the germ theory of disease, our ability to identify the infectious pathogens that cause many of the epidemic diseases we have suffered for millennia has empowered us to significantly reduce their future potential, owing to various public health measures (improved hygiene, sanitation programs, and vaccination) that interfere with the life cycle and transmission of these infectious pathogens.  (It's an important component of the mortality transition that led to the DT described above, incidentally.)  When questions and answers founded on 'absurd' concepts prove to be so eminently successful, it becomes increasingly difficult to consider them absurd any longer.

Second, while such theories are too vague to be scrutinized themselves, it is still quite damning when researchers who are dedicated to them prove unable to fit them to observations.  A track record of failed accommodation after failed accommodation constitutes its own sort of endangerment, even if it is a bit slower in the unfolding and not quite the same as falsification.  So, we have a sticky situation in which our assumptions both color our research and are challenged by it, a fact that I don't think Steven Novella would much care for:



(Of course, stubborn advocates of a failing theory might also continue to maintain that the apparent shortcoming of their theory is due to a lack of talent on their own part, not to any deficiency of the theory; these are the ones who are most deserving of Novella's censure, I think.)


Asking important questions, and asking impossible questions

Scientists are curious by trade, but this is hardly limited to us; it is a human thing that scientists just happen to indulge more than most.  Even so, we don't ask every question or pursue every answer conceivable, and especially not those that are not currently conceivable but once were or might someday be.  In part this is because we cannot ask questions that are currently absurd, questions that depend on beliefs we don't currently hold.  And of course, some of the questions we ask now will flicker out as the beliefs that underlie them fade in importance or acceptance.

Even among the questions that we do ask, we do not approach them all with equal rigor.  We do triage, because we know that we cannot possibly address them all with the meticulous scrutiny necessary either to dismantle them or to establish them as standing knowledge.

The question of what makes a question important to ask and answer is a deeply philosophical one, and I would strongly urge everyone to read this comic strip on the matter, if you have not already.  (And if you have, reread it.)  [9/25/13: and also this blog.]  We might measure the importance of a question by the practicality of a good answer to it, and I have certainly promoted this aspect in my discussion of the germ theory of disease and the questions it spawns.  Alternatively, we might say that questions are more important than preexisting answers – the systems of belief we have accumulated over the years – because questions draw attention to the shortcomings of our beliefs and to the danger of adamantly holding to static ones, and because the same question can be asked again and again whenever previous answers to it have fallen to observation (sometimes by deliberate scrutiny, sometimes merely by accident).




Needless to say, the importance of beliefs in general, the importance of questions, and the importance of answers to such questions are highly subjective judgments.  The smugness of scientists about their own discipline often goes hand in hand with the sense that scientists working in other disciplines are wasting their time on topics of trivial importance, though fortunately this smugness is often counterbalanced by the same sentiment running in the opposite direction.  I prefer to assume that other sciences are as fascinating as the ones I spend most of my time in (anthropology, demography), and I have a short list of other disciplines that I would pursue with enthusiasm if only I had a few more lifetimes to live.  Stepping out of the scientist vs. scientist dynamic, the opinions of the general public about which questions are important to ask are even more variable, as are their opinions of everything else, because let's face it: the "general public" is a pretty huge and heterogeneous entity.

Subjective though such judgments may be, scientists are nevertheless obliged to justify our research to others, especially to funding agencies and to the professional journals whose choice it is to publish our work.  For example, grant proposals submitted to the United States' National Science Foundation (NSF) require that a section of the proposal be dedicated to the discussion of intellectual merits and broader impacts of the research proposed.  Not surprisingly, the promise of economic, social, political, and/or health benefits for the American public favors the funding of such research, whereas replication research is too rarely funded.  There is also a cottage industry of op-ed pieces offering lists of what different authors believe to be the most pressing questions in need of answer (for example this two-part blog series from NPR, here and here).

By the same token, there is frequent discussion (and debate) about which questions can be answered by science and which remain out of its grasp, such as is reflected in Robert Krulwich's NPR blog post here.  This question frustrates me because it glosses over two very different constraints on science, one of which is considerably more fundamental than the other.  The first and more fundamental constraint is that no question can be considered scientific if provisional answers to it cannot be subjected to observational scrutiny.  There is nothing really profound here, however, only the recognition that hunches that are not endangered are just hunches.  Moreover, whether there are good answers (the right ones, even) to questions that cannot be tested is the great unknown; how would you ever know whether an answer is untestable in principle, rather than simply untested because you have not yet found a way to test it?  Elusive as such knowledge may be, there are some questions that seem on the face of it too broad, too generally stated, or too vague to be approached scientifically, for example most of the questions in the aforementioned NPR blog series about the "20 most important questions."  Instead, the sorts of questions that are actually scientifically approachable tend to be very topical (e.g., this "to-do list" for Parkinson's researchers), which unfortunately tends to sacrifice importance, or at least breadth of coverage, for specificity and operability.

The second constraint is the possibility that some phenomena of the world elude principled behavior.  In this case, no conjectural answer that purports to understand them will ever be correct, exactly because no understanding is possible.  But once again, even if such an impossibility were true, we would never know it, because our ability to put bad ideas on the table before we find good ones is no proof of the impossibility of good ones overall.  Many human scientists (anthropologists, psychologists, sociologists, economists, and political scientists) continue to seek better understandings of human nature, this in spite of the alternative possibility that no such understanding is possible because we don't make any sense, because we are endowed with a crazy little thing called free will (one could call it "human anti-nature").  Responsible scientists should be comfortable conceding that we can never really know whether a given aspect of the world eludes understanding.  At the same time, few of us will stop seeking possible and testable answers to our questions, and when we stumble upon good ones, in other words ones that stand up to honest observational scrutiny, we will feel vindicated, knowing full well that these may be displaced by even better answers later on down the line.  As a matter of course, we approach no phenomenon as if it cannot be better understood.

The Erotetic Side of Science (Scientific R&D, Part 1.1)

In my previous post, I introduced a metaphor of science that divides the scientific enterprise into quality control and research-and-development (R&D) divisions.  Usually, scientists hold positions in both divisions (still speaking metaphorically), though some spend more time in one than in another (e.g., in physics, where experimental and theoretical physicists are often up to very different things).  I also argued that the quality-control division is what sets science apart from other efforts to produce knowledge, and I dedicated that post to a discussion of this central aspect of science.  Ironically, I find the need to split my discussion of science’s R&D up into four parts, this one and the next about the questions that animate R&D work, the third and fourth about the substance and organization of scientific theory and knowledge. 

What’s so questionable about science?

Recall, science is a process or a practice; it unfolds in real time.  It is something that we do.  We don’t always think of it that way, because when we read, hear, or watch science reporting, we usually receive a story that collapses the process down into a “flat,” timeless presentation.  But this timelessness is artificial.  When we actually do science, it is an undertaking with a beginning, middle, and end.  More accurately, each individual science project has its own beginning and end, though different projects may unfold out of phase with each other.

While science is a process, it is not the sort of process that just happens to scientists, as if we were “hapless victims” of science who have stumbled into it already in progress.  In chemistry, many reactions don’t just happen on their own; two compounds sitting next to each other, even touching each other, may remain inert for a long time, even though they have the potential to react violently.  What they need is a trigger, or a “catalyst,” to use the term favored by chemists.  Well, metaphorically speaking, the scientific process needs a catalyst.  It needs something to trigger it, to ignite it, to animate it.  It needs proactive researchers to actually take the initiative.  Most importantly, it needs researchers to ask a question, to identify a specific problem to pursue.  The question is the catalyst of the research cycle.

Specifically, the research cycle goes something like this: first, identify a problem – a deficiency in our understanding of the world – and ask a question about it:

  • Why is my throat sore?
  • Why did we get freezing rain this winter, serious snowstorms last winter, and hardly any precipitation at all the winter before that?
  • Why does this population of orangutans engage in frequent social interaction whereas that group does not?
  • Which is healthier, this organic apple, or this other one that isn’t advertised as such?

Then, brainstorm a possible explanation, or if you are really ambitious, a set of alternative answers to the question.  The conjectural answer(s) then become the hypothesis (or hypotheses) that will anchor your research, the idea(s) that you plan to hold accountable to observational scrutiny.  Next, place each hypothesis in an “if, then” sentence having the form “if [hypothesis] is true, then [observational expectation] will also be true.”  In other words, if your hypothesis is actually true, what would you expect to observe?  Keep in mind, as I stated in my last post, the expectations you identify must be stated so that you are actually endangering your idea.  In other words, you have to pick expectations that could conceivably run counter to observation.  By extension, if you are working with multiple competing hypotheses, the task of holding them all accountable requires the identification of observational expectations that are consistent with one hypothesis but not the others, and vice versa.  The final step is to actually observe the dangerous reality you have identified and to see whether your observations are consistent with your expectations.  If so, yay for your hypothesis; we can add it to our collection of standing knowledge.  If not, boo for it; we need another idea, because we haven’t yet achieved the understanding that we were hoping for.
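To make this bookkeeping concrete, here is a minimal sketch of the “if [hypothesis], then [expectation]” step, using two invented answers to the sore-throat question above and made-up observations.  The names and numbers are mine, purely for illustration; the point is only that competing hypotheses stake out expectations that can pull apart:

```python
# A minimal sketch of hypothesis bookkeeping: each hypothesis names the
# observations it stakes itself on, and we check which survive. All
# hypotheses, checks, and data here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    claim: str
    expectations: dict  # name -> predicate that the observations must satisfy

def evaluate(hypothesis, observations):
    """Return the names of the expectations the observations failed to meet."""
    return [name for name, holds in hypothesis.expectations.items()
            if not holds(observations)]

strep = Hypothesis(
    claim="bacterial infection",
    expectations={
        "fever present": lambda obs: obs["fever_c"] >= 38.0,
        "positive throat culture": lambda obs: obs["culture_positive"],
    },
)
dry_air = Hypothesis(
    claim="irritation from dry air",
    expectations={
        "no fever": lambda obs: obs["fever_c"] < 38.0,
        "negative throat culture": lambda obs: not obs["culture_positive"],
    },
)

observations = {"fever_c": 37.1, "culture_positive": False}  # made-up data

for h in (strep, dry_air):
    failed = evaluate(h, observations)
    if failed:
        print(f"{h.claim}: endangered, failed {failed}")
    else:
        print(f"{h.claim}: survives scrutiny, for now")
```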

What’s so erotetic about science?



No, that's not a misspelling; you just read it wrong.  (But then again, it's hard to concentrate with Einstein giving you those smoldering bedroom eyes.)  ‘Erotetics’ is one of those obscure words that only philosophers know ... well, until I define it for you, and then the cat's out of the bag.  I would hazard a guess that scientists are as unfamiliar with it as anyone else.  So, what is it?  Erotetics is the area of logic that is concerned with questions and answers.  In some future post, I will have a lot more to say about logic in general, but for the moment I will simply note that few logicians are dedicated to questions and answers specifically, and the work of many doesn’t concern them at all.  Logicians who study erotetics are thus a highly specialized community.  And yet, insofar as questions are the catalysts of science, we ought to be concerned with what eroteticians (? or eroteticists?) might have to say.

In a slightly bigger nutshell, erotetics is concerned with good questions and good answers, or with criteria to distinguish between good and bad ones.  What’s a good question?  To begin with, it is a question that assumes a reality that at least the inquirer believes.  In a scientific context, it should also be a reality that the inquirer’s peers believe.  We have all experienced the confusion that follows when someone asks us questions based on wrong assumptions.  “Why were you awake and making so much noise all night?”  “Well, I wasn’t awake all night,” or “I was awake all night, and I heard the noise too, but it wasn’t me making it.”  The point is, the failure to agree on the reality that a question presupposes presents an insurmountable obstacle to satisfactorily answering it.  In the preceding case and in cases like it, a “why…?” question calls for a “because…” answer, and we quite simply cannot provide “because…” answers to “why…?” questions that assume untruths.  This is the fallacy of the loaded or complex question (see also here).  It is also a crucial topic in scientific questioning, and in the public relations of science, that I will revisit later in this post and more in my next post.

There are also some bad questions that we can’t even understand, usually involving a violation of the accepted definitions of words that results in absurdity, for example, “how many meters long is the song of the morning birds?”  Of course it is possible to measure a birdsong in terms of frequency and volume, or changes therein, and of course a meter is a unit of measurement.  But frequency and volume are measures of sound (frequency being a rate in time), whereas the meter is a unit of space.  The idea of measuring a song with a spatial unit of measurement presents a semantic mismatch that renders the question absurd.
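Incidentally, the mismatch behaves a lot like a type error in programming.  The analogy is mine, not a standard erotetic point, but it may help: a tiny sketch in which asking for a duration in meters fails exactly the way the birdsong question does:

```python
# A toy illustration (my analogy): a question that applies a unit of one
# dimension to a quantity of another is "ill-typed" and cannot be answered.

from dataclasses import dataclass

@dataclass
class Quantity:
    value: float
    dimension: str  # e.g., "length" or "time"

def length_in_meters(q: Quantity) -> float:
    """Only a length can be expressed in meters."""
    if q.dimension != "length":
        raise TypeError(f"a {q.dimension} has no measure in meters")
    return q.value

song = Quantity(12.0, "time")  # a birdsong lasting 12 seconds

try:
    length_in_meters(song)
except TypeError as err:
    print(err)  # the question itself is ill-formed, not merely unanswered
```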

So, what is a good enough answer?  Once again, a good answer will be one that assumes realities that we are willing to accept; there is no sense in conjuring principles, phenomena, or entities that don’t have at least some believers.  But there is also an opportunity for conflict here, because not everyone believes the same thing, so the ability to answer such a question is relative to the community of belief; some answers will not sit well with some individuals because the questions they answer are themselves unhappily loaded.  Again, this is a crucial topic that I will revisit.

A good answer should also imply no contradictions, and as with good questions, it should not put ideas together that do not go together, lest the answer assert the absurd.  If I consult a doctor about why my throat is sore, I would have some serious doubts about them if they told me “because 5 times 5 is heavier than the odor of water” ... amusing, to be sure, but I probably wouldn't go back for a follow-up.

Then too, there are some answers that are bad not because they assume questionable realities or imply absurd or self-contradicting realities, but because they answer the wrong interpretation of an ambiguous question: “Did it rain on Monday because of a weather system moving in from the East?”  “Well, do you mean, did it rain on Monday, or did it rain because of a weather system moving in from the East?”  In cases such as this, the meaning of the question, and thus the range of acceptable answers, is often conveyed either by tone of voice or by the well-understood details of the context in which it is asked.  For example, if we have already been discussing the weather system from the East, the question is probably a timing question: "did we see its effects on Monday, Tuesday, or Wednesday?"  On the other hand, if we are talking about the various weather systems that were lurking about the area on Monday, the question is probably about which one: "was it the one from the East, or the one from the South?"  Such questions usually only become problematic when they are asked out of context, though occasionally even contextual cues don’t clarify, often leading to uneasy conversations spent moving in the wrong direction.

Finally, there are some answers that make sense and refer to realities that everyone accepts as true, but are still failed answers because they don’t answer the question asked.  “Why is the unemployment rate up this week?”  “Well, let me start to answer your question with an anecdote about when my daughter was a little girl.  You see, she used to play piano and take lessons every Wednesday afternoon after school.  Eventually, though, she got bored with it.  She never really cared for it, I think, but we made her.  So, after a while she quit practicing.  Then there were the horse lessons … blah blah blah.”  This miscarriage of question-and-answer logic is an all too common strategy in political debate, but it can also be an honest mistake made by undisciplined minds, a failure to keep on point in answering the question, particularly in cases where the process of asking and of brainstorming an answer unfolds slowly.

Given the critical role that questions play in the scientific enterprise, we have a lot to gain by getting erotetic about science, in other words by asking what achievements we stand to make in science, and what pitfalls we risk, that stem specifically from the quality of the questions we ask or the goodness of fit between our answers and our questions.  Granted, all of the erotetic miscarriages described above present some risk for scientific questioning and answering, just as they do in any other enterprise, and the best guard against them is critical thought.  But some are perhaps more relevant to our understanding of scientific success and failure than others.  Probably the most visible source of difficulty is the scientist’s ability to ask questions or to provide answers that refer to realities that their peers or the general public – or various subsets of the general public – accept as true.

Evolutionary biologists, for example, devote their labor to explaining why life on earth has evolved in the way that it has, in other words why we have the species that we have now, why these seem to be different from species of the past, and why past and present species sometimes go extinct.  Why, for example, have so many species in the insect order Hymenoptera evolved such elaborate social interdependence (e.g., ant colonies, bee hives), but not all of them, and not all the same sort of social interdependence?  Are these differences the outcome of isolation between descendant subpopulations of an ancestral population, followed by the slow accumulation of random genetic mutations and/or random sampling errors in the transmission of different gene variants (‘alleles’) from generation to generation (a process that evolutionary biologists call ‘genetic drift’)?  Or are they instead the outcome of isolation between different subpopulations, followed by environmental selection for certain gene variants and against others?  What evolutionary biologists don’t ask is, “is evolution a real process?”  Nor should ‘evolutionary theory’ be mistaken to mean “the idea that evolution is real.”  Evolutionary biologists start with the basic assumption that evolution does happen (otherwise, they would not be evolutionary biologists), and evolutionary theory is instead dedicated to ideas about why it happens, both in general and in specific cases.  Darwin’s theory, for example, is a theory of evolution through natural selection, which accentuates a particular set of factors in attempting to account for particular species’ traits or variability between different species' traits.  Alternative theories of evolution instead accentuate other factors than those Darwin focused on.  For example, whereas Darwin's theory focuses on the differential reproductive benefits conferred by different biological traits (anatomical, physiological, behavioral) in light of the challenges posed by particular environments, the theory of evolution through genetic drift instead emphasizes the influence of randomness in the transmission of genes from generation to generation.  Increases or decreases in gene and biological trait frequencies can result from either process (or both in conjunction), though usually with different long-term outcomes.
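For readers who would like to see drift and selection pull apart, here is a toy simulation in the Wright-Fisher spirit; the population size, fitness advantage, and generation count are arbitrary illustrative choices of mine, not parameters from any real study:

```python
# Toy contrast between genetic drift and selection. Each generation,
# pop_size gene copies are resampled from the parent pool; with a zero
# fitness advantage this is pure drift (random sampling error alone).

import random

def next_generation(freq, pop_size, fitness_advantage=0.0):
    """Sample the focal allele's frequency in the next generation."""
    weight = freq * (1.0 + fitness_advantage)
    prob = weight / (weight + (1.0 - freq))  # chance each new copy is the focal allele
    count = sum(1 for _ in range(pop_size) if random.random() < prob)
    return count / pop_size

def simulate(pop_size, generations, fitness_advantage):
    freq = 0.5  # start the allele at 50%
    for _ in range(generations):
        freq = next_generation(freq, pop_size, fitness_advantage)
        if freq in (0.0, 1.0):  # allele lost or fixed; nothing left to sample
            break
    return freq

random.seed(1)
print("drift alone:   ", [round(simulate(100, 200, 0.00), 2) for _ in range(5)])
print("with selection:", [round(simulate(100, 200, 0.05), 2) for _ in range(5)])
```

Under drift alone, the allele wanders and is lost or fixed by chance; give it even a small advantage and it fixes far more reliably.  Both processes change frequencies, but the long-term outcomes differ, which is the point of the paragraph above.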

The community of evolutionary biologists understands these concepts and accepts their truth, allowing them to proceed with their work, but there are also communities of belief within our (United States) population for whom such ideas are either controversial or vaguely understood, or both.  In erotetic perspective, there is thus a risk for dispute that we can all recognize, regardless of which side of the debate you might stand on.  More on this in future posts.

On the flip side, getting erotetic about scientific questions also means being prepared to extend a certain amount of charity or leeway to experts in fields that we do not fully understand.  As an anthropologist and a demographer, I know far more about evolution than I do about quantum physics, for example.  The realities that contemporary quantum physicists accept as true are largely unknown to me, but when I catch faint whispers of them, I am puzzled to say the least.  If such a physicist were to ask me one of the sorts of questions that animate their research, I would have little choice but to either consider it absurd (they might as well be asking how many meters long birdsongs are) or to concede that my puzzlement is a consequence of my own physics illiteracy.  I would tend to err in favor of the latter, but I suppose that my trust could be easily betrayed were a physicist to have some fun at my expense.  That's not an invitation, by the way.

Sunday, September 15, 2013

The most important thing about science

The world of ideas in which we live

Consider the following statements:
  • If anything bad can happen, it will.  For example, the GPS will always tell me that my destination will be on the left (i.e., across traffic; in England, Australia, or Malta, this would instead be on the right).
  • Nice guys finish last.
  • Good things happen to good people.
  • Everything that has happened to me in my life, everything good and everything bad, has happened for a reason.  It is part of a divine plan.
  • My car won’t start because the alternator has failed.
  • Heavier objects fall faster than lighter objects.
  • The universe was created in six days.  Living organisms were created on the fifth and sixth days and belong to a limited number of immutable species.
  • The world is flat and exists on the back of a turtle.
  • The Sun and all of the rest of the celestial bodies revolve around the Earth.
  • The universe is almost 14 billion years old, the Earth orbits the Sun and is approximately 4.5 billion years old, life on earth began sometime before approximately 3.5 billion years, and organic species have been evolving and going extinct ever since.
  • The evolution of life on Earth is explainable in terms of natural selection and only natural selection.
  • Natural selection happens when an organism changes its traits to be better adapted to its environment, then passes these traits on to its offspring.
  • Those individuals that survive longer are more evolutionarily successful.
  • Those individuals that have the most offspring are more evolutionarily successful.
  • The evolution of intelligent lifeforms is the end goal of evolution.
  • Natural selection is the process by which species become better adapted to their environments; when confronted with environmental change, all species must change to adapt to new conditions.
  • If I pour the coconut milk slowly into the blender, I will get more out of the carton than if I pour it quickly.
  • If I play the Powerball numbers that have been drawn most frequently in the past, I will increase my odds of winning.
  • Denali is 20,320 ft./6194 m tall.
  • The distance between a lightning strike and its observer is approximately s/7 miles, where s is the number of seconds between the observation of the lightning and of its thunder peal.


To be human is to live in a world of ideas like these, ideas that assert something about the way the world looks and works, and why it works in one particular way rather than some other.  I say "ideas like these" because I do not intend this list to be exhaustive, nor to imply that every individual who has ever been born has shared all of the same beliefs.  The staggering plurality of beliefs held by the world's current population (>7 billion and growing), beliefs that underlie many of the bitter conflicts that are all too familiar to us, is proof enough that one could never realistically hope to inventory them all.  If we expand our scope to include every belief ever held in the 200,000 years since the dawn of the human mind as we know it, with its insatiable thirst for knowledge, then we have to add more than 75 billion human minds (though admittedly, many of these people did not survive infancy).  That's a lot of minds to change, and a lot of time to change our minds.

And change them we have.  I change my mind on a daily basis, though about some things more than others.  I inherited some of my current ideas from my parents, some from my teachers, some from my peers, some from strangers (books, magazines, radio, television, blogs), and many directly from my own experience of the world.  I have been accumulating beliefs for decades, but I also have not hesitated to abandon some of them whenever I have seen fit, sometimes in favor of better beliefs, and sometimes just because they have come to seem so bad upon further examination.  I still have plenty of beliefs left, though.  I have so many of them, in fact, that I don't always recognize when I am holding contradictory ones.

I hold my beliefs at every level of consciousness, from the "important" ideas that I rehearse with regularity, to the intuitions that go unspoken most of the time and that I am only vaguely aware of even when I try to think about them, to the matters that my body knows at such a basic level (motor intuitions) that I am not even consciously aware of them, nor could I ever be.

The ideas I have are about everything.  I have ideas about how old our universe, our planet, and life on our planet are; ideas about the general processes that led up to their present state and the specific changes that they have undergone along the way (quite vague when it comes to our universe and our solar system, admittedly); and ideas about how they work in the present.  I have ideas about the daily mechanics of life – about gravity, resistance, speed, inertia, temperature, etc. I have lots of ideas about human nature and about the way that human nature affects the ways that our social, economic, and political lives play out.  I have all kinds of ideas about health, disease, birth, and death.  I have ideas, and I have ideas, and I have ideas.  My cup runneth over with ideas.

Some of my ideas are quite important to me, though not all for the same reason: some are ideas that I take for granted when I make my practical daily decisions, some are the ones I have dedicated my life as a scientist to, some don’t affect me but I can’t imagine them being untrue, and some even bring me considerable displeasure, yet I believe them just the same.  Some I believe despite the fact that they are demonstrably absurd (early in the morning, that thing about pouring coconut milk slowly from a finite cartonful is far more true for me than I care to admit), while others are so basic to my grasp on reality that I might have called them “self-evident” were I living in the 18th century (though it’s just not cool to describe ideas that way anymore).  The ideas that I have abandoned along the way have also varied in their importance to me.  With little more than a raised or furrowed eyebrow, I have cleared up misunderstandings about tax code, postal policy, and why the fridge has been making that funny sound (the condenser coil on the back of the fridge had been too close to the wall behind it, nothing more serious).  But others have shaken my sense of order and direction to its foundations.

If we assume that the thirst for understanding is a pervasive feature of human nature (and I do), this makes the business of abandoning beliefs puzzling, all the more when we further observe that we hold on to some longer than others.  Why can't or shouldn't we just keep the ones we already have and be done with it?  I assume that it is not for no reason, and this brings me to my main point.  It is such an important point that, if you take nothing else away from this blog, if this is the first and last post of mine that you ever read, I will be satisfied that at least you came away with this one.

The single most important thing about science

The single most important thing about science, the thing that sets it apart from all other ways of knowing, is that it focuses on scrutinizing beliefs, and it does so using observations of the very world that those beliefs purport to understand.  When I say that science is about building better beliefs, such scrutiny is at the very center of this enterprise.  Granted, building better beliefs must involve more than just scrutinizing them, particularly because scrutiny often leads to their abandonment.  To think otherwise is to be comfortable with the idea of eventually living a belief-free life, and that quite simply won’t do.  So obviously, there also has to be something additive to science, to offset the part of it that is subtractive.

Speaking metaphorically, we might think of the labor of science as being divided up into two complementary divisions: quality control, and research and development (R&D for short).  The people in quality control are charged with the task of inspecting the product, whether this be newly manufactured beliefs (i.e., the guesses or hunches that I mentioned previously) or the ones that have been on the market for a while.  On the other hand, the people in R&D are charged with coming up with new products (again, guesses), whether these be invented to explain newly discovered phenomena, or long-standing mysteries, or even to replace older, widely accepted explanations.

While it is true that these two metaphorical divisions complement each other, the quality-control division is far more distinctive of science.  I do not mean that scrutinizing ideas is somehow more important than coming up with new ideas, nor that coming up with new ideas is better than keeping old ones around.  I simply mean that, if we are trying to determine what sets science apart from other activities, it is the premium that science puts on scrutiny, and scrutiny based on observation of the world in particular.  Science insists that our understandings of the world be constrained by our observation of it, and this insistence is so central that no enterprise can rightly call itself ‘science’ if it does not take observational scrutiny seriously, if its practitioners are not sincerely open to endangering and potentially defeating ideas.  On the other hand, the activities of the R&D division are hardly unique to science.  On the contrary, there are many ways of coming into possession of beliefs, whether these be the time-honored beliefs that are passed down from generation to generation or those that have been newly invented by the revolutionaries of the avant-garde.  So, if science is to be set apart on the basis of its deeds, these will be the work of the folks over in quality control.

When I say "observation of the world that these beliefs purport to understand," what I mean by 'observation'  is "descriptions of the world, gained either through direct sensory perception or through the use of various measurement instruments (scales, measuring tapes, satellite arrays, mass spectrometers, etc.)."  Why?  Because if our understandings about the world are successful, if they actually help us to explain the phenomena we perceive, then what we actually perceive, including what we have perceived in the past and what we can perceive in the future, ought to be consistent with such understandings.  It is this risk of poor fit that makes observation so dangerous for ideas, that makes ideas so vulnerable to the risk of defeat:



In this framework, we give a new name to the idea – ‘hypothesis’ – but only if we are able to identify one or more kinds of observation that would allow us to test it.  Many scientists would also insist that we actually be able to make such observations, and I suppose that, practically speaking, this is right; if we were never able to make one or another of the dangerous observations that we have identified, then the moment of peril would remain forever out of our reach.  Yet even so, it is a big step simply to be able to admit that one’s ideas are potentially vulnerable to observational scrutiny, and I consider any individual who is willing to hold their beliefs up to observational scrutiny, or who is at least dedicated to understanding the work of professionally trained scientists who do, to be a participant in the scientific enterprise.  This openness is very different from the stance of the person who is so committed to their understandings that, as a matter of principle, they refuse to let the world say otherwise.  It is also very different from that of the person who contends that his or her beliefs are informed by observational evidence but who chooses to emphasize only the evidence that is consistent with those beliefs.

Vexingly, it is also very different from some of the particularly smug scientists over in R&D, who are so enamored by the elegance of the ideas they have brainstormed that they find the world, not their ideas, to be in contempt when the fit between the two proves to be poor:
"On occasion, Einstein not only ignored the observational and experimental facts, but he even denied them.  Asked what would be his reaction to observation evidence against the bending of light predicted by his theory of general relativity, he answered, 'Then I would feel sorry for the good Lord.  The theory is correct anyway'"  (Hans C. Ohanian, Einstein’s Mistakes: The Human Failings of Genius, p. 5).
The tension that exists between the quality-control scientists and their colleagues in R&D is well-captured (albeit exaggerated for comedic effect) in the antagonism between The Big Bang Theory’s Dr. Sheldon Cooper (a theoretical physicist, i.e., an R&D guy) and Dr. Leonard Hofstadter (the experimental physicist, i.e., a quality control guy).  More realistically, theoretical and experimental physics are complementary operations, no less than any other scientific discipline's quality control and R&D divisions (see here for a real theoretical physicist's perspective on the matter).  I will have much more to say about the operations of the R&D division in my next post.

What is the outcome of such scrutiny?  Possibly, the failure of the idea under investigation.  The quality control operation, as I mentioned, is a subtractive one, though to be more accurate, it is a non-additive one: at the end of the operation, no new idea has been added; one has either been retained or eliminated.  Not that we should think that the dismissal of ideas is a bad thing; if we have rid ourselves of an idea, it is only that we have gotten rid of a demonstrably bad one, "vanquished the impossible" as Carl Sagan has said (though "vanquished the improbable" is a better way of putting it).  And the virtue of this elimination is amplified by the fact that it also clears up space for better beliefs.

But what if the scrutiny "fails" to defeat the idea?  On the one hand, ideas that don't fall to scrutiny start to look pretty good, all the more so the longer they stand up to it.  This is how scientists transform a practice focused on scrutiny into one capable of lending support.  On the other hand, such success should not be confused with proof.  "The hypothesis is consistent with the evidence" is too weak a form of support to constitute an incontestable proof, so our uncertainty about the idea necessarily lingers.  Most scientists will be the first to admit that they are not in the business of proving anything, and those who do choose to use the word 'proof' either mean something different by it than 'incontestable proof,' or they misunderstand the limits of their own methodology, or worse still, they are con men masquerading as scientists.  Most scientists would instead say that any hypothesis that has withstood scrutiny up to the present should be provisionally included as an item of standing knowledge (referring back to my earlier assertion that current definitions of knowledge are more inclusive than the stringently high standards imposed by Platonic epistemology), ever vulnerable to future scrutiny and to the risk of defeat that such scrutiny entails.  By implication, we can say that it is possible to know something that is wrong, and by extension that something we know today, because it has been well supported up to the present, we might know no longer tomorrow, once its support has been pulled out from under it.

Scrutiny based on observation is an imperfect mode of evaluation for another reason, and scientists have never claimed otherwise, but it bears repeating because it is not well enough recognized by the public at large: while remaining open to the possibility of scrutinizing our beliefs makes us all scientists of a sort, we cannot all be great or even good scientists, because not all kinds of observation are equally reliable (accurate or precise); the best ones, I am sorry to say, are technically demanding and expensive to operate.  Indeed, many of the intuitions we live by are informed by our five or so senses, yet these are considerably less reliable than we sometimes like to believe, or else we have subjected our intuitions to only the most casual scrutiny using them.  As a wearer of glasses since age 8, I am keenly aware of the limitations of my own sense of vision, and even if I trusted it, I doubt that I ever would have concluded on my own that heavier and lighter objects fall at the same speed, all else being equal.  So, in order to improve the quality of our observations, we also have to scrutinize the observational methods that generate them.  To this end, some scientists dedicate their entire careers to identifying better and worse modes of observation.  Unfortunately, due caution regarding the limitations of science's observational methods is not always clearly conveyed to the public in the popular media.  For example, while media coverage of a recent re-measurement of the height of Denali suggests that it has shrunk by 83 ft. (25 m) over the last six decades (see also here), little attention has been paid to the possibility that this change is an artifact of the imprecision inherent in the various methods used to estimate the mountain’s height.
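
To make the Denali example concrete, here is a toy calculation.  The 83 ft difference comes from the coverage above, but the error bars I attach to each survey are assumptions of mine for illustration, not the surveys' published figures:

    # A toy sketch of measurement imprecision.  The heights reflect the
    # reported old and new estimates; the +/- error bars are assumed for
    # illustration, not the surveys' actual uncertainties.
    from math import sqrt

    old_height, old_err = 20320.0, 50.0   # ft; mid-century estimate, assumed error
    new_height, new_err = 20237.0, 10.0   # ft; recent estimate, assumed error

    difference = old_height - new_height            # 83 ft
    combined_err = sqrt(old_err**2 + new_err**2)    # independent errors add in quadrature

    print(f"apparent shrinkage: {difference:.0f} ft")
    print(f"combined uncertainty: +/- {combined_err:.0f} ft")
    if difference < 2 * combined_err:
        print("the difference falls within ~2x the combined uncertainty, so")
        print("'the mountain shrank' and 'the old survey was off' both remain live hypotheses")

Under these assumed error bars, the apparent 83 ft change is indistinguishable from survey-to-survey noise, which is precisely the possibility the media coverage glossed over.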

In thinking in such great detail about the quality dimension of science, there are two mistakes that we should avoid making at all costs.  The first is lumping together untested ideas and bad ones.  Again, what makes an idea bad is its poor fit with the world it is supposed to explain, and this quite simply cannot be known until the idea has actually been tested.  Granted, untested ideas are a liability because they may be wrong, but this liability is counterbalanced to the degree that some of them will also eventually turn out to be good guesses.  In fact, if untested ideas were automatically deemed bad by virtue of their untestedness, this would disallow the possibility of ever having a good idea, because all of the good ideas that we have ever held, that we now hold, or that we might ever hold in the future began their lives as untested ideas, too.

The second mistake that we should avoid is the “Gee whiz, Mr. Science!” reflex, by which I mean the inclination to offhandedly dismiss any research that seems to do little more than support ideas that we already think we know.  At first sight, such research does seem wasteful; if we knew it already, why not use the time, effort, and funding to explore new horizons instead?  Yet the appearance of wastefulness is an illusion, one that persists right up until somebody's research reveals the falsehood of some confidently held, previously unexamined belief.  A more appropriate response to research that demonstrates "obvious" truths would be one of appreciation, because we no longer have to accept them on blind faith.

But why abandon old beliefs, especially in favor of new guesses?

Okay, so science is in the business of sniffing out ideas that seem to fit poorly with the world they are supposed to illuminate and of recommending them for termination, but again, the question is not so much whether we can scrutinize or refute beliefs as why we would ever want to do so.  Granted, for many the answer is no more complicated than a desire to possess only the highest-quality beliefs available; their appetite for belief is a matter of quality, not quantity.  For others, however, the stakes in giving up beliefs are high enough, and the sense of order that those beliefs afford them salient enough, that the gamble entailed by scientific scrutiny is just too dangerous to accept.

The beliefs that afford a sense of purpose and/or hope for the future are an especially sore subject, and yet science's most outspoken advocates have not been particularly bothered by this.  Carl Sagan was well-known for his insistence that no idea is important enough to elude scientific scrutiny.  On the contrary, he said, science is intent on the pursuit of truth no matter where it leads, in other words no matter how psychologically disconcerting its revelations might be.  Likewise, Richard Feynman declared that he was less afraid of doubt than of the prospect of accepting beliefs that might be wrong.  In "vanquishing the impossible" (Sagan again), science is no less likely to scrutinize hope-filled beliefs, nor to leave them in its wake if found wanting, than any others.

The scientific refusal to compromise on which beliefs we are willing to subject to scrutiny is an understandably unsettling prospect for many, but a few words should be said in rejection of the occasional claim that we are disproportionately invested, and therefore persecutory, in challenging the sacred.  I won't deny that this is a possibility on a small scale; scientists are human, and this means that individual scientists are capable of dedicating their life's work to an assault on the sacred.  But possibility is not prevalence, and for the most part scientists are no more mean-spirited than anyone else.  On the contrary, many of us care a great deal for the well-being of our fellow human beings, and on occasion we scientists distress even ourselves by undermining our own sacred beliefs.  Even conceding that a good many sacred beliefs concern cosmic, planetary, and biological origins and that a good many scientists are dedicated to the study of such topics, it cannot be stressed enough that the overwhelming majority of such scientists do so out of sincere curiosity regarding such matters, not because they are spitefully compelled to inflict life-altering emotional trauma upon others, even if such trauma does sometimes occur as a side effect.

It would also be wrong to limit our understanding of "important beliefs" to the sacred tenets of religious faith.  If the importance of a belief is measured by the trauma we experience upon its defeat, then it is easy to demonstrate that a good many of them have nothing at all to do with what we would consider sacred, for example this amusing anecdote from "The West Wing," in which a delegation of cartographers argues that the familiar Mercator projection has badly distorted our mental picture of the world (a fictional anecdote, but one that probably feels familiar to many):


(In fact, no cartographic projection is a perfectly accurate representation of the Earth's or any other celestial body's surface, because all of them are equally guilty of the crime of representing spheroid surfaces as flat ones.  My favorite mind-blowing projections are the azimuthal equidistant projections, like this one centered on Hana, Hawai'i, in which Africa is wrapped around the entire perimeter of the map.)
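
For the curious, here is a rough sketch of how such a projection works, assuming a perfectly spherical Earth (real geodesy uses an ellipsoid, and my coordinates for Hana are approximate): every point is plotted at its true distance and bearing from the center, which is exactly why the center's antipode gets smeared around the entire rim of the map.

    # A sketch of the azimuthal equidistant projection, assuming a
    # spherical Earth and centered (approximately) on Hana, Hawai'i.
    from math import sin, cos, acos, radians

    R = 6371.0                                     # mean Earth radius, km
    LAT0, LON0 = radians(20.75), radians(-155.99)  # approx. Hana, Hawai'i

    def project(lat_deg, lon_deg):
        """Return (x, y) in km on a map centered on Hana; the straight-line
        map distance from the center equals the true great-circle distance."""
        lat, lon = radians(lat_deg), radians(lon_deg)
        # Angular distance from the center (spherical law of cosines),
        # clamped to guard against floating-point drift:
        c = acos(min(1.0, max(-1.0,
            sin(LAT0) * sin(lat) + cos(LAT0) * cos(lat) * cos(lon - LON0))))
        if c == 0.0:
            return 0.0, 0.0
        k = (R * c) / sin(c)  # scale factor that keeps radial distances true
        x = k * cos(lat) * sin(lon - LON0)
        y = k * (cos(LAT0) * sin(lat) - sin(LAT0) * cos(lat) * cos(lon - LON0))
        return x, y

    # A point near Hana's antipode (in southern Africa) lands near the
    # map's outer rim, roughly 20,000 km away in every direction:
    print(project(-20.7, 24.5))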


I am probably not alone in this: as an 8th grader, I was baffled to learn that heavier and lighter objects fall at the same speed as each other, all else being equal.  (Conversely, I am amused by the general indifference of pretty much everyone, mountaineers included, to the demotion of Denali's stature.)

Alternatively, if the importance of a belief is measured by its relevance to our daily lives and decisions, then once again we find many more examples that fall outside the realm of the sacred.  Not surprisingly, the scientific scrutiny of these beliefs is frequently met with much the same zealous fervor and vehement defense as when scientists challenge the sacred (the recent food safety firestorm set off by an NPR story discouraging people from washing their chickens as a matter of good hygiene is an excellent example, with similar revolts lurking just below the surface regarding male circumcision, the safe cooking of pork, and the mixing of hot water and bleach), and yet accusations of persecution are peculiarly absent in such cases (or I have missed them).

However disinclined we may be to see our cherished beliefs defeated, the stakes in continuing to hold onto them may be high, as well.  As a consequence, we often find ourselves confronted by a dilemma of conflicting urgencies: to risk the psychological trauma of being left without our previous hope or direction, or to accept the high physical, emotional, or financial tolls of continuing to embrace false beliefs.  As we might expect, we encounter a mixed response among religious communities when it comes to the scrutiny of sacred dogmas: some certainly experience the scrutiny as persecution, as previously suggested, while others are too attuned to the perils of embracing untrue beliefs to continue to indulge them with unremittingly blind faith:

"If science proves some belief of Buddhism wrong, then Buddhism will have to change."
(See the Dalai Lama's op-ed piece in the New York Times for the original quote.)

Similarly, while the sense of persecution is considerably diminished when scientists challenge the nonreligious beliefs we live by (for example, the belief that washing a raw chicken is "a safer thing to do" than not washing it), the reception has nonetheless been quite mixed.  I don't wash chicken when I cook it, but apparently some people are really quite bothered that people like me exist.

In any case, it is probably fair to say that we are drawn to the pursuit of better beliefs not because good ideas have some intrinsic value but because they have practical value.  The folly of embracing a bad idea is perhaps never clearer than when you turn the key in your car and it fails to start.  Is something wrong with the ignition system, or is it a dead battery?  If it is a dead battery, is something wrong with the alternator, is the battery old, or did you just leave the car door ajar overnight?  You begin to panic because you are going to be late to work, and you start considering what to do next to solve your predicament.  Jumping to any conclusion at such an early juncture would be folly; you certainly wouldn't refuse to consider the possibility of an ignition problem just because that would be more serious than a dead battery, but neither would you run out and buy a new alternator on blind faith that the alternator must be at fault.  When it comes to automotive problems, there is no room for sacred truths.

What would you do instead?  If you are inclined toward automotive mechanics, you might attempt to diagnose the problem yourself.  Conversely, if you are like me, you would be better served by deferring to people with such inclination.  In either case, the most beneficial approach to identifying the problem (not just coming up with any idea about what is wrong with your car, but coming up with a good one) will involve much the same sort of observational scrutiny as scientists advocate:
The "scientific method" is familiar enough so that it can be used intuitively.  In fact, all people, not just scientists, use it regularly.  Just listen to Click and Clack, the mechanics on Car Talk on National Public Radio, as they try to figure out what is causing a 1987 Volvo station wagon to stall unexpectedly as its driver, Bill from Beford, Massachusetts, motors down the highway.  (John Alcock, Animal Behavior 6th ed., p. 11)

Nor are the practical matters that we might address in such a manner limited to personal ones.  The ways that we approach our local, national, and global energy, health and safety, transportation, shelter, and food needs are influenced to a significant degree by our beliefs about them, so we have good reason to employ the best methods of critical thought available to size up those beliefs, including observational scrutiny.

Finally, the blind acceptance of many beliefs creates an opening for our disempowerment at the hands of con artists and tyrants.  The textbook example of belief-based exploitation is the 'divine right of kings,' which asserts the divinity, or at least the divine election, of rulers and which has consequently been used to legitimate such rule at least since the dawn of written language (and probably well before it), to the detriment of hundreds of generations of loyal subjects.  In Enlightenment political thought, this belief was eventually replaced by the competing 'self-evident truth' that "all men are created equal," but in the United States at least, the belief in such equality was not legally extended to former slaves or their offspring, to women, or to Native Americans for a century or more.  It has often been observed that knowledge (or, to be more accurate, belief) is power, and those who are able to control it often do so with self-promoting agendas in mind, almost always to the detriment of others.  As Neil deGrasse Tyson has pointed out, our ability to scrutinize such beliefs affords us a powerful means to guard against the disempowerment and exploitation that blind faith invites.