Saturday, January 11, 2014

Is science all GUTs and TOEs? An anatomy lesson in science as it never was (Scientific R&D, Part 2.1)

Apologies to readers for my quarter-year hiatus.  Both of you.  As it turns out, actually trying to get research projects off the ground while simultaneously making a living gets in the way of writing about science, though fortunately not of thinking about it.  In the interim since my last post, I have taken a consulting position at a center that provides computing resources (and support from consultants like me) for social scientists, which is pretty cool.

To get back up to speed, previously I argued that the scientific enterprise exhibits a division of labor similar to a manufacturing industry, with separate divisions dedicated to Research and Development (R&D) and Quality Control (QC).  In this metaphor, the "products" that the R&D people develop are explanations of the phenomena that we encounter as we go about our daily affairs (although scientists in many fields now regularly encounter phenomena that we would never in a million years encounter without deliberately seeking out strange new things ... peculiar bioluminescent critters at the bottoms of the oceans; bizarre goings-on at the farthest reaches of our galaxy and of our universe; subatomic particles with outlandish names like "neutrino," "quark," and "boson"; and so on).  The people in the QC division, on the other hand, evaluate just how good these explanations actually are, whether they be freshly minted by R&D or whether they've been "on the market" for a while.  More literally, QC tests these explanations for their goodness of fit with the world they were designed to explain, by subjecting them to further observation.  Ideally, when QC discovers that a given explanation does not fit well with observation, this explanation is discontinued, and it's back to the drawing board for R&D.  Obviously, the labors of the two divisions are complementary (in an antagonistic sort of way), because the R&D people are great at brainstorming new explanations but incapable of getting rid of bad ones, while the QC people are incapable of generating new explanations but great at making room for better ones.  (As with any metaphor, this one falls apart if pushed too far.  Importantly, most researchers do not exclusively work in one division or the other; most of us spend our time bouncing back and forth between the two labors.)

I also argued that R&D's operations are not distinctively scientific; people have been in the business of coming up with ideas about how our world works for a long, long time - long before scientists achieved any sense of scientific self-awareness, and long before navel-gazers like me began to think so hard about what scientists really do or how it compares to and contrasts with what other people do.  Instead, it is the operations of the QC division that make the scientific enterprise definitively scientific.  What is unique about the scientific production of knowledge is that it endangers ideas about the way the world works - hunches and sacred beliefs alike - by pitting them against observation.  In other words, if you aren't willing to endanger your understandings of the world in the crucible of observation, you aren't being a scientist.

While it is QC's activities that make science science, this in no way implies that we should remain indifferent to the operations of the R&D division.  On the contrary, how the R&D people conduct their business has a major influence on the quality of the ideas that eventually make it into our belief systems.  Consequently, an audit of R&D's standard operating procedures is worth doing.  In my last two posts, I began that audit by exploring what it is that catalyzes the division's operations in the first place: R&D scientists begin by being puzzled by the world's peculiarities, which leads them to explicitly ask questions about such things.  The understandings that R&D produces and passes along to QC are initially generated as answers to those questions.

But that is only the beginning of the life of R&D's ideas, and we are arguably more concerned with what happens to them after they are born (or put on the market, to go back to the earlier metaphor). Given that some of these ideas are going to stick around for a while if they make it through QC, what qualities do we want them to have?  In this post, I come at this question somewhat negatively, by dismantling a widely held misunderstanding about what kinds of explanations R&D scientists should be putting out there.  This misunderstanding is so widely held, in fact, that many scientists share it, particularly new ones, and some even regard it as a necessary condition for scientific knowledge production.


How should scientific knowledge be organized?

Way back in my first post, I stated that one of the goals of scientific practice is to achieve better-organized understandings of our world.  This goal has tremendous appeal if for no other reason than that good organization makes for the clear and efficient communication of scientific understandings to potential consumers.  Of greater importance (in the minds of many of science's most outspoken advocates) is the assertion that the organization of our knowledge about the world ought to conform to the order characterizing the world itself; if the world is governed by a relatively well-organized and simple set of laws or principles, then our belief systems ought to be, as well, and preferably in the same way.

This is a pretty huge and important claim, not only because of the frequency with which it is presented as one of the fundamental axioms of scientific practice, but also because it is not a salient feature of many other approaches to knowledge production.  Given the quickness with which some scientists assert the mandate of seeking better-organized beliefs, we ought to understand what we actually mean when we say 'well/better-organized.'  As it turns out, as with so many other important and commonly used words, we think we have a pretty good grasp on what 'organization' means right up until we start thinking about it.

What is worse, when we actually examine how scientists use the word, we discover that their usage conceals a few important alternative meanings, increasing the opportunity for equivocal or amphibolous arguments.  One of the most well-developed outlooks on organization in science, called "unificationism" (because no label is worth using if it has any less than seven syllables), collapses three separate meanings under this single heading, imposing all three mandates on scientific practice in a single breath.  First, there is the minimal requirement that no two understandings of how the world works may contradict each other, because the world quite simply cannot be both one way and its opposite (this is what logicians refer to as the "law of non-contradiction").  Second, there is the requirement that scientific knowledge ought to be well-integrated, meaning that the multiplicity of beliefs that populate our belief systems should be tied together as networks of belief, with a clear linkage tying any given belief to at least one other.  And third, there is the requirement that scientific knowledge ought to simplify these networks of belief, either by reducing the number of distinctive beliefs that populate them, or by imposing a specifically hierarchical structure upon the network, collapsing the diversity of beliefs into merely specific cases of increasingly general principles.  The relevance of these three dogmas for scientific practice is that they put the burden of unification on R&D scientists: if a freshly minted hunch about how the world works is to leave the drawing board for trial by QC, it should be the sort of hunch that would fit well with the preexisting array of knowledge, and preferably simplify it a bit.

Considered in isolation, each of these three requirements already makes a bold assertion about the ideal structure of scientific knowledge.  Thus, considered together, their collective boldness is too great to accept without further reflection and scrutiny.  Instead, we need to ask two questions of each requirement: first, is it practically achievable, and second, will it guarantee a more realistic understanding of the world if achieved?  My first contention is that well-integrated belief systems are more realistic than less-integrated ones, but these are not always practically achievable.  Even if such integration remains to be achieved, however, my second contention is that the much weaker criterion of internal consistency should be maintained at all costs.  ('Integration' and 'internal consistency,' by the way, are often indiscriminately lumped under the common label 'coherence,' so it might be useful to think of integration as a strong variety of coherence and internal consistency as a weak variety, but more on that below.)  My third contention is that the unrelenting push for simplification is not only not always feasible (for the same reason that network integration is not) but more importantly is an unwarranted rule of thumb, with the potential to yield less realistic understandings rather than more.  In short, if we as scientists are being honest with ourselves, then we need to confront the fact that the unificationist agenda is not only impracticable but also unlikely to yield the realistic understandings we desire.


Unification as integration

Scientific inquiry is undeniably oriented toward clarifying the relationships that exist between the entities that populate nature/the universe, their properties and behaviors, and variability therein.  Scientists are not satisfied with the vague intuition that everything is connected.  Instead, we want to understand the particular details about how any two things or sets of things are connected, as well as what the strength of this connection actually is.  So, it is hard to deny that integration is at the heart of scientific practice.  Almost always, the new beliefs that R&D posits are beliefs about previously unexpected linkages between entities that we want to understand better.  All else being equal, then, the addition of such understandings to a prior collection of beliefs will tend to increase its degree of integration.  There are, however, a couple of factors that work in the opposite direction.  First, there are instances of subtraction, which can occur in either of two ways: (1) previously accepted items of knowledge are challenged and ultimately dismantled by QC work; or (2) some items of knowledge are simply forgotten: their currency diminishes as they age until eventually, they succumb to their senescence and vanish out of mind.  By either route, the result is the dissolution of a previous linkage, fragmenting the network of beliefs to a degree.

Second, there is a logistically problematic dynamic, in which the relentless outward/downward/upward/inward expansion of science into new frontiers occurs at a pace that, from all indications, is outstripping R&D's finite capacity to integrate incoming items of knowledge.  On this point, it is perplexing that some of science's most well-known spokespeople are on the one hand keenly sensitive to the vastness of the unknown frontiers (there are more quotes on this topic than I have the room to recite) and on the other so insistent that we might one day hope to integrate all of the knowledge that will eventually come in about these frontiers.  As it is, our capacity to hold items of belief in memory is limited enough that no single individual can claim to possess their own personal and exhaustive copy of current scientific knowledge; the global body of knowledge about the universe (some of it scientifically sanctioned, some gained by other means), resides not in the memory of any individual but instead in an abstract, collective memory, comprising human memory as well as the various external information storage devices that we rely upon so heavily.  As one philosopher of science points out, we are obliged to take a piecewise approach to our understanding of the universe - i.e. in which one set of understandings applies to some aspects of our universe while other sets apply to other aspects - as a result of our inherent limitations.  Thus, in conjunction with the proliferation of scientific research, our intrinsic cognitive limitations make the prospects of synthesizing scientific knowledge bleak indeed, more so with each passing day.  Because of this and because of the relentless give-and-take of the R&D/QC antagonism, the notion of a perfected integration of scientific knowledge is quite simply antithetical to scientific practice, not the inevitable end of it.  
(See these two articles, [1] and [2], on the state and future of particle physics in the wake of the discovery of the Higgs boson, and this one on the idea of culmination in physics more generally.)  In short, we will likely never reach a point where we can actually test the oft-cited claim that everything out there is related to everything else out there, but we can at least say, with considerably greater confidence, that our current state of knowledge is most certainly not one in which our beliefs about everything are connected to our beliefs about everything else.

Giving up on the quest for global synthesis, however, should not discourage us from seeking out more local connections.  Unlike global synthesis, small-scale integration is the stock-in-trade of science - we actually do it, all the time! - so it is anything but futile.  Ironically, it is also these small, self-restrained projects that reassure us that we are not hopelessly adrift in a sea of disjointed ideas or a universe of unrelated things.  In my opinion, it is particularly exciting when we identify compelling linkages that cross-cut the conventional disciplinary boundaries.  (Though I am biased in this respect, because my two chosen disciplines - archaeology and demography - both celebrate their interdisciplinarity, more so than, say, physics.)

Similarly, learning to accept the eclecticism that inevitably characterizes our belief systems should not be mistaken for also accepting contradictions between locally networked subsets of belief, for two reasons: first, the potential to integrate presently unrelated sets of belief in the future remains open provided that they do not possess contradictory elements.  And second, we axiomatically accept that the universe itself does not possess any self-contradicting elements: if something is true of the world, it cannot also be false of it (again, this being the law of non-contradiction).


Unification as simplification

While the requirement of integration is problematic in its own right, its problems are relatively minor compared to those of the additional requirement that we simplify our belief systems by imposing hierarchical structures upon them.  Given the preceding critique of global integration, we can already conclude that global simplification is all the more impracticable.  On the other hand, just as with local integration, many researchers do pursue local simplifications, and sometimes these stand up to the scrutiny of QC.

To expand a bit, what this kind of hypothesis does is reduce the number of phenomena that require unique explanation, instead accounting for multiple outcomes as if redundantly governed by the same principle.  These one-cause-to-many-effects statements are sometimes called "covering laws," while research that is conducted in their pursuit is sometimes labeled "nomothetic" (from Greek 'nomos' [custom, law] + 'theto' [to place]) or "nomological" (from Greek 'nomos' [custom, law] + 'logos' [discussion, discourse]) ... though these labels don't have as much currency now as they did a few decades ago.

Hiding under the label of "nomothetic/nomological research," however, we can distinguish two subtly different kinds of activity.  First, there are those research activities that posit statements treating a particular set of phenomena as if commonly governed by a precise rule, which can then be tested as a hypothesis against observational data.  For example, I could propose that my car's average MPG for a given period of time (call it TMPG) conforms to the relationship

    TMPG = HMPG(p) + CMPG(1-p)

where p is the proportion of miles spent on highways during the study period, HMPG is the average MPG for highway driving, and CMPG is the average MPG for city street driving.  While this is a relatively low-level law pertaining to a rather trivial corner of all of the amazing things that go on in nature/the universe, it is nonetheless a law insofar as it asserts that TMPG values for different time periods conform redundantly to a single sort of relationship: every single TMPG is related to p, HMPG, and CMPG in the same way as every other TMPG.  This approach is markedly different from one asserting that TMPG values for different periods are each dictated by their own equation, involving different mathematical operators, different orders of operation, and/or different variables than p, HMPG, or CMPG.  I can scrutinize the accuracy of this lawful understanding of TMPG if I also have some prior knowledge of HMPG and CMPG for my car and if I can measure p for particular driving periods.
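Because the proposed relationship is a precise rule, it can be put on trial against observation, which is exactly what distinguishes this first kind of nomothetic work.  A minimal sketch in Python (every number here is invented for illustration, and the fuel-log function is my stand-in for "observation," not anything from the post):

```python
# A toy QC check of the proposed mileage law.
# HMPG, CMPG, and p are as defined in the text; all values are made up.

def tmpg_predicted(hmpg, cmpg, p):
    """The proposed law: TMPG = HMPG(p) + CMPG(1 - p)."""
    return hmpg * p + cmpg * (1 - p)

def tmpg_from_log(highway_miles, city_miles, hmpg, cmpg):
    """What a fuel log would actually report: total miles / total gallons."""
    gallons = highway_miles / hmpg + city_miles / cmpg
    return (highway_miles + city_miles) / gallons

p = 0.6  # proportion of miles driven on highways
law = tmpg_predicted(35.0, 20.0, p)            # 29.0
log = tmpg_from_log(600.0, 400.0, 35.0, 20.0)  # ~26.9
print(round(law, 1), round(log, 1))
```

If the logged value disagrees with the predicted value beyond measurement error, the law goes back to the drawing board - the R&D/QC give-and-take in miniature.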

The second variety of nomothetic research is what I referred to in my previous post as "theory-driven inquiry": theories provide a very general, skeletal framework of a story within which the researcher might try to fit many particular phenomena.  In fact, it is their lack of detail that makes theories so flexible.  For example, modern medicine has greatly benefited from the rise of the germ theory of disease, which suggests that the failure of biofunctions in organisms is sometimes caused by the invasion of those organisms (the hosts) by other, smaller organisms (the "germs" or parasites), which then exploit the body of the host as a resource, disrupting the host's normal biofunctions in the process.  The germ theory of disease is general enough that it tolerates different sorts of germ - prions, viruses, bacteria, protozoa, fungi, arthropods - each of which exploit their hosts in different ways.  Unlike formalized laws (as in the TMPG example), theories cannot be directly tested.  Additionally, the failure of a researcher to convincingly fit a case to a theory does not indict the theory, which on the contrary may go on to make sense of lots of other phenomena.  For example, while our understandings of certain diseases fit well within the bounds of the germ theory of disease, others instead fit under toxicological, congenital, or genetic theories of disease.  Similarly, evolutionary biologists now distinguish between four separate kinds of mechanism leading to evolutionary change - mutation, drift, gene flow, and selection - yet the fact that some cases fit better with selection and others with mutation, drift, or gene flow does nothing to undercut the credibility of any one of them.

The distinction between these two different kinds of nomothetic research is important for a number of reasons, one of which will matter below, others of which I will revisit in future posts.

What makes simplification so controversial, however, is not whether we can achieve it at either a global or a local level (no and yes, respectively) but instead whether the dynamics of nature/the universe conform to such simple, hierarchically structured principles, globally or locally.  The argument goes that, if the universe itself is so governed, then so should our belief systems be.  Under this assumption, Richard Feynman even went so far as to assert that research that is not directed toward formulating and testing simplifying hypotheses is not scientific.  As an example of this sort of simplifying research, many particle physicists are in pursuit of a "grand unified theory" (GUT), which would collapse three separate forces governing how different bits of matter interact with each other - the electromagnetic, strong nuclear, and weak nuclear forces - into variants of a single force.  Assuming that such a unification can ever be achieved, a "theory of everything" (TOE) would further collapse gravity and this three-in-one force into a four-in-one force.  This has not yet happened.

Physicists are hardly alone in their advocacy of the simplifying agenda; researchers in many other fields espouse it, as well.  The authors of Principles of Virology, for example, offer a quote from Nobel Prize-winning bacteriologist Al Hershey in defense of their unificationist approach to textbook-writing: "The enduring goal of scientific endeavor ... is to achieve an intelligible view of the universe.  One of the great discoveries of modern science is that this goal cannot be achieved piecemeal, certainly not by the accumulation of facts.  To understand a phenomenon is to understand a category of phenomena or it is nothing."  (The authors discuss the importance of teaching general principles further in this Microbe World podcast, beginning approximately 31 minutes in.  Al Hershey comes up at 44 minutes in.)

The most ambitious unificationists are those who insist that this simplifying operation should aspire to such grand proportions that it dissolves all disciplinary boundaries.  Overwhelmingly, it is the physicists who are posturing to subsume all other disciplines.

Not long ago, I ran across a particularly self-assured advocate of this viewpoint (a recent graduate of physics, I suspect) in an online comments thread, who asserted that
In the scheme of things physics deals with much more basic, lower abstraction level, phenomenon, that does allow for much better controlled experiments than social sciences can achieve. Hypothetically, with knowledge of physics, enough data collection along with sufficient levels of computing power, one could calculate everything, including animal and human behavior. We are very far from being able to what I just suggested but my point is that physics is the science upon which all others are built. Physics deals with the most basic forces or nature, that all other sciences must deal with at some level. ("Strato Man")
(Note the inconsistency in verb tense between his implication that physics is presently "the science upon which all others are built" on the one hand, and his future-tense assertion that such a subordination might some day be possible, though we are very far from it currently.)

As if the physicists' arrogant ambition in this matter weren't bad enough, it is made all the more insulting when accompanied by smug condescension regarding other disciplines' inferior ability to accomplish their own research goals.  The human/social sciences in particular bear the brunt of such condescension, for example,
Any physicist threatened by cuts in funding ought to consider a career in the social sciences, where it ought to be possible to solve the problems the social scientists are worked up about in a trice (John Gribbin, from The New Scientist, quoted by Duncan J. Watts in Everything is Obvious*).
Similarly, in Feynman's reflections on the centrality of laws in science (video linked above), he also asserts that "social science is an example of a science which is not a science," whereas he is privileged with the
advantage of having found out how hard it is to get to really know something, how careful you have to be about checking your experiments, how easy it is to make mistakes and fool yourself.  I know what it is to know something, and therefore, I can't ... I see how they get their information, and I can't believe that they know it.  They haven't done the work necessary, they haven't done the checks necessary, they haven't done the care necessary.  I have a great suspicion that they don't know this stuff.
(This attitude also reveals Feynman's failure to recognize that scientific knowledge is a spectrum rather than a binary opposition, spanning all the way from untested hunches, through proofs of concept, to well-supported items of knowledge.)  The over-confident physics graduate I mentioned above echoed Feynman in labeling himself "a physicist who understands the concept of testable theories" and in asserting that slapdash research by social scientists (in this case including sociobiologists) is to blame for public distrust of science in general.  It is as if we (social scientists) are children playing dress-up in our parents' (hard scientists') clothes and accessories (smocks and goggles, I suppose).

Nor does the condescension stop there: not only are social scientists regarded as pseudoscientists for our avoidance of covering laws (not always true, by the way), then treated as dilettantes for our inability to conduct rigorous tests and checks, but our chosen subject matter is itself trivialized, for example in Feynman's dismissal of ethnobiology.

Curie is similarly uninterested in people.

And Rutherford has no interest whatsoever in anything that is not physics.

Not surprisingly, the arrogance of physicists is so pronounced that it has become an easy and frequent target for caricature - witness Sheldon Cooper's dismissal of the social sciences as "largely hokum" (The Big Bang Theory, Season 2, Episode 13: "The Friendship Algorithm").


Critiquing simplification

Before I get into the meat of my critique, I'd like to start off by addressing a word-choice pet peeve that comes up frequently in this discussion: unificationists love to describe simple laws as 'elegant' ("pleasingly graceful and stylish in appearance or manner"), usually in lieu of 'simple' as if the two were synonymous.  To my mind, this is a really perplexing and idiosyncratic usage, given that 'sophistication' is also frequently associated with elegance.

Now, to the meat of it.  To begin with, it bears repeating that global or other far-reaching simplifications are prohibitively impractical.  Short of possessing a computer as powerful as Douglas Adams' Deep Thought, it is doubtful that we could ever "calculate everything, including animal and human behavior" as derivatives of basic principles of physics.


Nevertheless, those committed to the simplification program employ one or more work-around strategies to make it so, though unfortunately, these are without exception fallacious.  The first of these strategies focuses on the 'explanandum' (i.e., the target of explanation, the phenomenon to be explained; plural 'explananda'), while the second focuses on the 'explanans' (i.e., the phenomenon or phenomena invoked as causes of the explanandum; plural 'explanantia').  The first strategy involves a bait-and-switch presentation of the explanandum:
  • Bait: describe the collection of explananda in detail, emphasizing all of those variables about them that make them so puzzling and worthy of curiosity.
  • Switch: without explicitly announcing it, narrow one's explanatory focus to only a small handful of the puzzling variables, or even a single one.  Attempt to explain this reduced set rather than the larger original set, without letting on that you're overlooking the rest.
This sort of reasoning commits the fallacy of redefinition.

The second strategy involves attempting to explain the explananda with reference to only one explanans.  To go back to the gas mileage example from earlier, a researcher employing this second strategy might pose TMPG as a response solely to the time spent idling the vehicle (call it i):

    TMPG = i(C)

where C refers to some constant factor by which the variable i is multiplied, their product being TMPG.  As statisticians know, and as you should know too, it is often empirically possible to assess the likelihood that proposed relationships like this one are true and, assuming that they are, what the values of constants like C are.  What makes this strategy so problematic is that it disregards the residual amount of variation that is not accounted for by the simple formulation.  If the researcher is being honest, they will supplement the simple formulation with an additional term describing this residual variability, e.g.

    TMPG = i(C) ±e

where ±e denotes this residual variation (or 'margin of error' or 'error term,' as it is commonly called).  Furthermore, most researchers will not be satisfied to leave the explanation at that.  Error terms are thorns in the sides of researchers, and only the most devout unificationist would recoil from the idea of complicating this equation even if doing so would reduce the size of ±e.  The more appropriate response, most researchers would concede, is to continue making revisions to the equation that do minimize ±e.  For example, in thinking about the number of miles I have driven on the highway, the number of miles I have driven on city streets, and the amount of time I have spent idling, I might suggest the following:

    TMPG = TM / [TM(p)/HMPG + TM(1-p)/CMPG + i(C)] ±e

where TM is the total number of miles driven over a given time period and all other variables are as above.  Nor is this revised law exhaustive or final; in the future, I might add in additional terms for tire pressure, which is in turn influenced by ambient temperature, and further note that HMPG and CMPG will vary between automobile models or even between individual vehicles.
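The role of ±e can be made concrete by fitting the one-cause law to data and looking at what is left over.  A short sketch (the observations are invented; the closed-form least-squares estimate for a no-intercept model is standard statistics, not something from the post):

```python
# Fitting the single-cause law TMPG = i(C) to invented observations
# and inspecting the leftover variation - the ±e of the text.
# For a no-intercept model y = C*i, least squares gives
# C = sum(i*y) / sum(i*i).

i_obs = [0.5, 1.0, 1.5, 2.0]         # idling time per period (invented units)
tmpg_obs = [28.0, 26.5, 24.0, 23.0]  # observed TMPG per period (invented)

C = sum(i * y for i, y in zip(i_obs, tmpg_obs)) / sum(i * i for i in i_obs)
residuals = [y - C * i for i, y in zip(i_obs, tmpg_obs)]

print(round(C, 2))
print([round(e, 2) for e in residuals])
```

The large residuals are the point: the one-cause law "fits" in the narrow sense that C can be estimated, but most of the variation goes unexplained, and an honest report has to carry it along in ±e.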

This second strategy is founded on the notion that simpler = more realistic; if that were true, we might indeed regard simplification as an epistemological virtue.  Yet, as trivial as the law of gas mileage may be, it brings us face-to-face with a contrary reality: the pursuit of realism in our understandings comes at the expense of simplicity, not part and parcel with it.  If this is so, then this second strategy falls afoul of the fallacy of reductionism (also known as 'the reductive fallacy' or the 'single cause fallacy').  Of course, some degree of simplification is a necessary part of research, and researchers often choose to start the research cycles of their careers by testing simple hunches, fully expecting that these are not going to pan out.  But what makes unificationist simplification so problematic is its insistence that what looks to most like a partial explanation is a complete one, or at least one that has merely been unburdened of unimportant details.  David Hackett Fischer describes this fallacy in the context of historiography:
The reductive fallacy reduces complexity to simplicity, or diversity to uniformity, in causal explanations.  It exists in several common forms, none of which can be entirely avoided in any historical interpretation.  As long as historians tell selected truths, their causal models must be reductive in some degree.  But some causal models are more reductive than others.  When a causal model is reductive in such a degree, or in such a way, that the resultant distortion is dysfunctional to the resolution of the causal problem at hand, then the reductive fallacy is committed.  (Historians' Fallacies)
Many of the tentative GUTs and TOEs that have been conceived to date run afoul of this fallacy, which probably accounts, at least in part, for why there are no successful GUTs or TOEs yet.  Physicists would do well to heed the warning, commonly attributed to Einstein (one of their own), that an uncompromising push toward simplicity will result in simplistic rather than realistic understandings.

It is also worth noting that, even if a given explanandum can be reduced to a constant law as simple as "TMPG = i(C)", this explanation still leaves unexplained the variability in i driving the variability in TMPG.  Unificationists often make the mistake of thinking that the algebraic specificity of the law and the constancy of the constants it invokes together constitute a complete explanation, when in fact the variability of its variables is an integral part of the explanation as well.  "Complete explanations" are mythical and necessarily elusive, because a complete explanation would also have to explain the variability in its causal variables, whose explanation would in turn invoke further variables in need of explanation, and so on without end.

Opposite the simplification agenda is the 'emergence' perspective, which argues that the properties exhibited by macroscopic entities or processes cannot be wholly reduced to the mere sum of their microscopic parts.  (Here, I use the terms 'macroscopic' and 'microscopic' not to refer to things that can be seen with the naked eye versus with the aid of a microscope, but to entities or phenomena existing at different scales: elementary particles to atoms, atoms to molecules, molecules to compound units of matter, ..., individual organisms to societies, and up and up and up.)  At best, the variables that account for higher-order entities or phenomena may include attributes of the microscopic things that purportedly constitute them, but this is rarely enough to wrap up a compelling account.

It should also be noted that one faction in the emergence community, sometimes going under the label 'weak emergence,' is in fact nothing more than a clandestine unificationist attempt to rein emergence back in, working from the inside to bend it back toward the same old agenda.  The change of label, however, does nothing to undo the multiple criticisms that unificationism suffers.

So, the simplification program is conceptually bankrupt, and not surprisingly, the confidence of those who have long advocated it (I am looking at you, outspoken physicists) has proven premature:
I think I can say that the problems sociologists, economists, and other social scientists are "worked up about" are not going to be solved in a trice, by me or even by a legion of physicists.  I say this because since the late 1990s many hundreds, if not thousands of physicists, computer scientists, mathematicians, and other "hard" scientists have taken an increasing interest in questions that have traditionally been the province of the social sciences - questions about the structure of social networks, the dynamics of group formation, the spread of information and influence, or the evolution of cities and markets.  Whole fields have arisen over the past decade with ambitious names like 'network science' and 'econophysics.'  Datasets of immense proportions have been analyzed, countless new theoretical models have been proposed, and thousands of papers have been published, many of them in the world's leading science journals ....  Entire new funding programs have come into existence to support these new research directions.  Conferences on topics such as "computational social science" increasingly provide forums for scientists to interact across old disciplinary boundaries.  And yes, many new jobs have appeared that offer young physicists the chance to explore problems that once would have been deemed beneath them.

The sum total of this activity has far exceeded the level of effort that Gribbin's offhand remark implied was required.  So what have we learned about those problems that social scientists were so worked up about back in 1998?  What do we really know about the nature of deviant behavior or the origins of social practices or the forces that shift cultural norms ... that we didn't know then?  What new solutions has this new science provided to real-world problems, like helping relief agencies respond more effectively to humanitarian disasters in places like Haiti or New Orleans, or helping law enforcement agencies stop terrorist attacks, or helping financial regulatory agencies police Wall Street and reduce systemic risk?  And for all the thousands of papers that have been published by physicists in the past decade, how much closer are we to answering the really big questions of social science, like the economic development of nations, the globalization of the economy, or the relationship between immigration, inequality, and intolerance?  Pick up the newspaper and judge for yourself, but I would say not much. (Duncan J. Watts, Everything is Obvious*)
It is also refreshing, though no less misguided, to see the unificationist agenda aimed against physics itself:

("The Big Bang Theory", Season 4, Episode 3)

In the end, the main virtue of simplifying research is the ease with which its products can be transmitted, both among fellow researchers and between the scientific community and the general public.  Simplification undeniably leads to easily packaged, short communications, hence its practical appeal.  Nevertheless, in light of the preceding critique, we have to ask: is it better to communicate, effectively, a message that inaccurately denies its own incompleteness, or to suffer the burden of communicating complicated but accurate - though still incomplete - messages?  In approaching simplification as an empowering enterprise, we have to wonder whether it is the kind of power we would actually want to wield.


Exploiting disunities

Disunity, as it turns out, is empowering in its own right.  As I have previously mentioned, QC approaches its scrutinizing labor in an open-ended manner, periodically revisiting accepted and well-supported beliefs from new angles, usually by bringing new lines of evidence to bear on the review.  QC's philosophy in doing so is this: if new lines of evidence eventually topple a long-standing hypothesis in a way that previous evidence could not, then we succeed not only in ridding ourselves of a bad idea but also in identifying a shortfall of that previous evidence.  On the flip side of the coin, when multiple lines of evidence appear consistent with an accepted understanding, this mounting evidence reinforces the strength of the belief - a "miracle argument" or "argument by convergence," as it is sometimes called.  Here is the problem: well-integrated networks of belief can often, and unexpectedly, work against our ability to accomplish such multi-line testing.

For arguments by convergence to be valid, it is absolutely necessary that each new line of evidence be independent of those employed previously.  If they are not independent - if the new line of evidence is correlated with older lines of evidence - then it is unlikely that the new evidence will yield results that run counter to the older evidence.  The senses of renewed endangerment and of mounting evidential support that result from the use of such evidence are both illusory.  The worst violations of the criterion of independence are those in which separate mathematical summaries of a single body of evidence are offered as if novel, when in fact they are simple mathematical transformations of each other.  For example, if we were to present the frequency of successes observed in some experiment as a proportion of successes out of all trials, then re-present the same evidence as a ratio of successes to failures, we would be saying nothing new at all; a proportion and a ratio computed on a single run of trials tell exactly the same tale in slightly different terms.  Fortunately, this latter kind of behavior is rare in the sciences - a rookie mistake rather than a deliberately deceitful act - largely because the oversight of more experienced scientists and peers keeps us honest.  It is far easier, however, to unwittingly present interdependent lines of evidence as if they were independent.
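The proportion/ratio example is easy to verify: with s successes and f failures, the proportion p = s/(s+f) and the ratio r = s/f are deterministic transformations of one another (r = p/(1-p) and p = r/(1+r)), so the "second" summary carries no information the first did not.  A minimal sketch:

```python
def proportion(successes, failures):
    """Successes as a share of all trials: p = s / (s + f)."""
    return successes / (successes + failures)

def ratio(successes, failures):
    """Successes per failure (the odds): r = s / f."""
    return successes / failures

s, f = 30, 10
p = proportion(s, f)   # 0.75
r = ratio(s, f)        # 3.0

# Each summary can be recovered exactly from the other, so presenting both
# as 'two lines of evidence' adds no independent information whatsoever.
assert r == p / (1 - p)
assert p == r / (1 + r)
```

Genuinely independent lines of evidence, by contrast, cannot be computed from one another; each must be capable of surprising us on its own.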

This is where well-integrated networks of belief may become a liability.  In many cases, the logical connections that tie well-integrated networks together increase the degree of interdependence among the lines of evidence that we might otherwise have employed.  Conversely, where two or more locally integrated clusters of belief are minimally integrated with one another, opportunities for independent lines of evidence abound, provided that both clusters bear on the phenomenon we are interested in explaining.  Disunity is a boon for QC.

For example, in my own field of research - archaeological demography - one of the things we frequently try to do is estimate the growth rate dynamics of past human populations.  In this undertaking, we have an opportunity to double-check our estimates by studying both the age-at-death distributions of human skeletal populations and the changing frequency of archaeological sites that past populations have left behind them over time.  The first line of evidence draws upon the biology of human fertility and mortality as well as of skeletal aging, whereas the second draws on anthropological understandings of human land use and waste disposal practices, as well as the geology of landscape formation processes.

These two lines of evidence are admittedly not entirely unconnected, though the connections between them are loose and indirect.  For example, both datasets need to be anchored to a common timeline, and in doing so, we often (usually) use identical dating methods for both.  Most of archaeology's best-known and most widely used dating methods draw on knowledge of nuclear physics, environmental chemistry, and biochemistry.  (We do get along with physicists, I promise.)  Even so, the connection that these dating efforts introduce into our two-lines approach is incidental, not the sort that undermines the effective independence of the two lines of evidence, because our ability to estimate the ages at death of skeletal remains, and to infer population growth rates from these, does not depend on the timestamps we put on them.  (Amusingly, however, our ability to count the changing frequency of archaeological sites over time does depend on these timestamps, so if our dating methods are flawed, then our two lines of evidence will actually tell conflicting stories, one of them false.)

(Alison Wylie, a philosopher of archaeology, has much more to say about disunity and its usefulness in her various works. Anyone interested in the matter should pick up a copy of her collection of essays, Thinking from Things.)


The essential tension: the life of science after global unification

Scientists are unarguably engaged in an enterprise intent on identifying linkages between different aspects of nature/the universe (but also on ridding ourselves of belief in linkages that do not exist).  These connections make for interesting opportunities to communicate with the public, particularly when they are of a simplifying sort.  Nevertheless, the unificationist hypothesis of global integration, and especially of global simplification, seems less likely to succeed now than ever.  First, there is a steep logistical challenge: science is growing and pushing out into new frontiers at a fast and accelerating pace, vastly outstripping the ability of researchers to accomplish the sorts of sweeping integration and simplification that unificationists envision.  Second, while the unificationist program heavily implies a culminating moment of research, in which all the pieces will finally fall into place under the coverage of a grand and "elegantly simple" law of everything, in practice the give-and-take between science's R&D and QC divisions entails, on the contrary, an endless process of turnover in our beliefs.  (On this count, one might even argue against Feynman that the pursuit of grand unifying laws, not the rejection of law-oriented research, is pseudoscientific.)  Third, the unificationist agenda supposes that the fabric of nature/the universe is woven out of a single thread, following a simple stitch throughout, whereas it may instead be a tapestry of high thread count, woven with a sophisticated stitch and/or many different kinds of stitch.  Realistic belief systems may therefore favor sophistication over simplicity, and what unificationists say we do does not match what we have actually done, can do, or ought to do.

The burden of these multiple criticisms suggests that the scientific community is overdue in reevaluating our relationship with the organization of belief systems.  We have seen it as a straightforward and unproblematic operation, and it is not.  It is a relationship that is tense in its very core: on the one hand, scientific R&D does generate new connections between ideas where these had not previously existed, usually at a local scale, while on the other hand the rapid expansion of science, in conjunction with the subtractive effects of QC and of forgetfulness, imposes a sort of entropy on belief system organization.  Nor, it seems, is this tension a perfect counterbalance.  If anything, the state of disarray of scientific knowledge is increasing daily.

What makes this antagonism so vexing, I think, is that not everyone is well-equipped to handle antagonistic realities.  All too often, when we are confronted with dilemmas, we err on the side of one of the competing goals at the expense of the other, rather than seeking an optimal balance between them.  Yet it is possible - at least, I hope it is possible - to learn to be at peace with essential tensions like this one.  On the one hand, we may fret at the vanished hope of any grand unification of belief (particularly those of us who are fond of asserting that everything is connected to everything else); on the other hand, we may celebrate the gain in realism that comes from accepting the irreducible complexities of reality.  We may mourn the loss of efficient communication that comes with presenting simple understandings, but we may also embrace disunity for the way it empowers the ongoing work of QC.  Living with the essential tension means accepting these mixed feelings, and as undesirable as that may be, it is far better than continuing to live under the self-soothing illusion that the unificationist program once afforded.