Thursday, March 22, 2012

We're Just Getting Started: A Glimpse at the History of Uncertainty


We've had our cerebral cortex for several tens of thousands of years. We've lived in more or less sedentary settlements and produced excess food for 7 or 8 thousand years. We've written down our thoughts for roughly 5 thousand years. And Science? The ancient Greeks had some, but science and its systematic application are overwhelmingly a European invention of the past 500 years. We can be proud of our accomplishments (quantum theory, the polio vaccine, powered machines), and we should worry about our destructive capabilities (atomic, biological and chemical weapons). But it is quite plausible, as Koestler suggests, that we've only just begun to discover our cerebral capabilities. It is more than just plausible that the mysteries of the universe are still largely hidden from us. As evidence, consider that the main theories of physics - general relativity, quantum mechanics, statistical mechanics, thermodynamics - are still not unified. And it goes without saying that the consilient unity of science is still far from us.

What holds for science in general holds also for the study of uncertainty. The ancient Greeks invented the axiomatic method and used it in the study of mathematics. Some medieval thinkers explored the mathematics of uncertainty, but it wasn't until around 1600 that serious thought was directed to the systematic study of uncertainty, and statistics as a separate and mature discipline emerged only in the 19th century. The 20th century saw a florescence of uncertainty models. Łukasiewicz introduced 3-valued logic in 1917, and in 1965 Zadeh introduced fuzzy logic. In between, Wald formulated a modern version of min-max in 1945. A plethora of other theories, including P-boxes, lower previsions, Dempster-Shafer theory, generalized information theory and info-gap theory, all suggest that the study of uncertainty will continue to grow and diversify.
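To see how small a step away from classical logic Łukasiewicz's idea is, here is a minimal sketch of his 3-valued connectives. The definitions (negation as 1 - a, implication as min(1, 1 - a + b)) are the standard Łukasiewicz ones; the code itself is just my illustration, not anything from the original literature.

```python
# A minimal sketch of Lukasiewicz's 3-valued logic, with truth values
# 0 (false), 0.5 (unknown), and 1 (true). The connective definitions
# are the standard Lukasiewicz ones.

def neg(a):
    """Lukasiewicz negation: ~a = 1 - a."""
    return 1 - a

def conj(a, b):
    """Conjunction as the minimum of the two truth values."""
    return min(a, b)

def disj(a, b):
    """Disjunction as the maximum of the two truth values."""
    return max(a, b)

def impl(a, b):
    """Lukasiewicz implication: a -> b = min(1, 1 - a + b)."""
    return min(1, 1 - a + b)

# Classical tautologies need not hold: "a or not-a" is only
# half-true when a is unknown.
a = 0.5
print(disj(a, neg(a)))  # 0.5, not 1: the law of excluded middle fails
```

The point of the last line is the whole novelty: once a third truth value is admitted, classical laws like the excluded middle quietly stop being tautologies.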

In short, we have learned many facts and begun to understand our world and its uncertainties, but the disputes and open questions are still rampant and the yet-unformulated questions are endless. This means that innovations, discoveries, inventions, surprises, errors, and misunderstandings are to be expected in the study or management of uncertainty. We are just getting started. 

Sunday, February 19, 2012

Accidental Education


"He had to take that life as he best could, 
with such accidental education as luck had given him". 

I am a university professor. Universities facilitate efficient and systematic learning, so I teach classes, design courses, and develop curricula. Universities have tremendously benefitted technology, the economy, health, cultural richness and awareness, and many other "goods".

Nonetheless, some important lessons are learned strictly by accident. Moreover, without accidental surprises, education would be a bit dry, sometimes even sterile. As Adams wrote: "The chief wonder of education is that it does not ruin everybody concerned in it, teachers and taught."

An example. I chose my undergraduate college because of their program in anthropology. When I got there I took a chemistry course in my first semester. I was enchanted, by the prof as much as by the subject. I majored in chemistry and never went near the anthro department. If that prof had been on sabbatical I might have ended up an anthropologist.

Universities promote lifelong learning. College is little more than a six-pack of knowledge, a smattering of understanding and a wisp of wisdom. But lifelong learning doesn't only mean "come back to grad school". It means perceiving those rarities and strangenesses that others don't notice. Apples must have fallen on lots of people's heads before some clever fellow said "Hmmm, what's going on here?".

Accidental education is much more than keeping your eyes and mind open (though that is essential). To understand the deepest importance of accidental education we need to enlist two concepts: the boundlessness of the unknown, and human free will. We will then understand that accidental education feeds the potential for uniqueness of the individual.

As we have explained elsewhere, in discussing grand unified theories and imagination, the unknown is richer and stranger - and more contradictory - than the single physical reality that we actually face. The unknown is the realm of all possible as well as impossible worlds. It is the domain in which our dreams and speculations wander. It may be frightening or heartening, but taken as a whole it is incoherent, contradictory and endlessly amazing, variable and stimulating.

We learn about the unknown in part by speculating, wondering, and dreaming (awake and asleep). Imagining the impossible is very educational. For instance, most things are impossible for children (from tying their shoes to running the country), but they must be encouraged to imagine that they can or will be able to do them. Adults also can re-make themselves in line with their dreams. We are free and able to imagine ourselves and the world in endless new and different ways. Newton's apple brought to his mind a picture of the universe unlike any that had been imagined before. Surprises, like dreams, can free us from the mundane. Cynics sometimes sneer at personal or collective myths and musings, but the ability to re-invent ourselves is the essence of humanity. The children of Israel imagined at Sinai that the covenant was given directly to them all - men, women and children equally - with no royal or priestly intermediary. This launched the concept and the possibility of political equality.

The Israelites had no map of the desert because the promised land that they sought was first of all an idea. Only after re-inventing themselves as a free people created equal in the image of God, and not slaves, only after finding a collective identity and mission, only then could they enter the land of Canaan. Their wanderings were random and their discoveries were accidental, but their formative value is with us to this day. No map or curriculum can organize one's wandering in the land of imagination. Unexpected events happen in the real world, but they stimulate our imagination of the infinity of other possible worlds. Our most important education is the accidental stumbling on new thoughts that feed our potential for innovation and uniqueness. For the receptive mind, accidental education can be the most sublime.

Saturday, January 28, 2012

Genesis for Engineers

Technology has come a long way since Australopithecus first bruised their fingers chipping flint to make knives and scrapers. We are blessed to fruitfully multiply, to fill the world and to master it (Genesis 1:28). And indeed the trend of technological history is towards increasing mastery over our world. Inventors deliberately invent, but many inventions are useless or even harmful. Why is there progress and how certain is the process? Part of the answer is that good ideas catch on and bad ones get weeded out. Reality, however, is more complicated: what is 'good' or 'bad' is not always clear; unintended consequences cannot be predicted; and some ideas get lost while others get entrenched. Mastering the darkness and chaos of creation is a huge engineering challenge. But more than that, progress is painful and uncertain and the challenge is not only technological.

An example of the weeding-out process, by which our mastery improves, comes to us in Hammurabi's code of law from 38 centuries ago:

"If a builder build a house for some one, and does not construct it properly, and the house which he built fall in and kill its owner, then that builder shall be put to death. If it kill the son of the owner the son of that builder shall be put to death." (Articles 229-230)

Builders who use inferior techniques, or who act irresponsibly, will be ruthlessly removed. Hammurabi's law doesn't say what techniques to use; it is a mechanism for selecting among techniques. As the level of competence rises and the rate of building collapse decreases, the law remains the same, implicitly demanding better performance after each improvement.

Hammurabi's law establishes negative incentives that weed out faulty technologies. In contrast, positive incentives can induce beneficial invention. John Harrison (1693-1776) worked for years developing a clock for accurate navigation at sea, motivated by the 20,000 pound longitude prize offered by the British Parliament.

Organizations, mores, laws and other institutions explain a major part of how good ideas catch on and how bad ones are abandoned. But good ideas can get lost as well. Jared Diamond relates that bow and arrow technologies emerged and then disappeared from pre-historic Australian cultures. Aboriginal mastery of the environment went up and then down. The mechanisms or institutions for selecting better tools do not always exist or operate.

Valuable technologies can be "side-lined" as well, despite apparent advantages. The CANDU nuclear reactor technology, for instance, uses natural uranium. No isotope enrichment is needed, so its fuel cycle is disconnected from uranium enrichment for military applications (atom bombs use highly enriched uranium or plutonium). CANDU's two main technological competitors - pressurized and boiling water reactors - use isotope-enriched fuel. Nuclear experts argue long (and loud) about the merits of various technologies, but no "major" or "serious" accidents (INES levels 6 or 7) have occurred with CANDU reactors, while they have with PWRs and BWRs. Nonetheless, the CANDU is a minor contributor to world nuclear power.

The long-run improvement of technology depends on incentives created by attitudes, organizations and institutions, like the longitude prize and Hammurabi's law. Technology modifies those attitudes and institutions, creating an interactive process whereby society influences technological development, and technology alters society. The main uncertainty in technological progress arises from unintended impacts of technology on mores, values and society as a whole. An example will make the point.

Early mechanical clocks summoned the faithful to prayer in medieval monasteries. But technological innovations may be used for generations without anyone realizing their full implications, and so it was with the clock. The long-range influence of the mechanical clock on western civilization was the idea of "time discipline as opposed to time obedience. One can ... use public clocks to summon people for one purpose or another; but that is not punctuality. Punctuality comes from within, not from without. It is the mechanical clock that made possible, for better or for worse, a civilization attentive to the passage of time, hence to productivity and performance." (Landes, p.7)

Unintended consequences of technology - what economists call "externalities" - can be beneficial or harmful. The unintended internalization of punctuality is beneficial (maybe). The clock example illustrates how our values gradually and unexpectedly change as a result of technological innovation. Environmental pollution and adverse climate change are harmful, even when they result from manufacturing beneficial consumer goods. Attitudes towards technological progress are beginning to change in response to perceptions of technologically-induced climate change. Pollution and climate change may someday seriously disrupt the technology-using societies that produced them. This disruption may occur either by altering social values, or by adverse material impacts, or both.

Progress occurs in historical and institutional context. Hammurabi's Code created incentives for technological change; monastic life created needs for technological solutions. Progress is uncertain because we cannot know what will be invented, and whether it will be beneficial or harmful. Moreover, inventions will change our attitudes and institutions, and thus change the process of invention itself, in ways that we cannot anticipate. The scientific engineer must dispel the "darkness over the deep" (Genesis 1:2) because mastery comes from enlightenment. But in doing so we change both the world and ourselves. The unknown is not only over "the waters" but also in ourselves.

Monday, January 9, 2012

The Age of Imagination


This is not only the Age of Information, this is also the Age of Imagination. Information, at any point in time, is bounded, while imagination is always unbounded. We are overwhelmed more by the potential for new ideas than by the admittedly vast existing knowledge. We are drunk with the excitement of the unknown. Drunks are sometimes not a pretty sight; Isaiah (28:8) is very graphic.

It is true that topical specialization occurs, in part, due to what we proudly call the explosion of knowledge. There is so much to know that one must ignore huge tracts of knowledge. But that is only half the story. The other half is that we have begun to discover the unknown, and its lure is irresistible. Like the scientific and global explorers of the early modern period - The Discoverers as Boorstin calls them - we are intoxicated by the potential "out there", beyond the horizon, beyond the known. That intoxication can distort our vision and judgment.

Consider Reuven's comment, from long experience, that "Engineers use formulas and various equations without being aware of the theories behind them." A pithier version was said to me by an acquisitions editor at Oxford University Press: "Engineers don't read books." She should know.

Engineers are imaginative and curious. They are seekers, and they find wonderful things. But they are too engrossed in inventing and building The New to be much engaged with The Old. "Scholarship", wrote Thorstein Veblen, is "an intimate and systematic familiarity with past cultural achievements." Engineers - even research engineers and professors of engineering - spend very little time with past masters. How many computer scientists scour the works of Charles Babbage? How often do thermal engineers study the writings of Lord Kelvin? A distinguished professor of engineering, himself a member of the US National Academy of Engineering, once told me that there is little use for journal articles more than a few years old.

Fragmentation of knowledge results from the endless potential for new knowledge. Seekers - engineers and the scientists of nature, society and humanity - move inexorably apart from one another. But nonetheless it's all connected; consilient. Technology alters how we live. Science alters what we think. How can we keep track of it all? How can we have some at least vague and preliminary sense of where we are heading and whether we value the prospect?

The first prescription is to be aware of the problem, and I greatly fear that many movers and shakers of the modern age are unaware. The second prescription is to identify who should take the lead in nurturing this awareness. That's easy: teachers, scholars, novelists, intellectuals of all sorts.

Isaiah struggled with this long ago. "Priest and prophet erred with liquor, were swallowed by wine."(Isaiah, 28:7) We are drunk with the excitement of the unknown. Who can show the way?

Tuesday, January 3, 2012

Mind or Stomach? Imagination or Necessity?

"An army marches on its stomach" said Napoleon, who is also credited with saying "Imagination rules the world". Is history driven by raw necessity and elementary needs? Or is history hewn by people from their imagination, dreams and ideas?

The answer is simple: 'Both'. The challenge is to untangle imagination from necessity. Consider these examples:

An ancient Jewish saying is "Without flour, there is no Torah. Without Torah there is no flour." (Avot 3:17) Scholars don't eat much, but they do need to eat. And if you feed them, they produce wonders.

Give a typewriter to a monkey and he might eventually tap out Shakespeare's sonnets, but it's not very likely. Give that monkey an inventive mind and he will produce poetry, a vaccine against polio, and the atom bomb. Why the bomb? He needed it.
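Just how unlikely the typing monkey is can be made concrete with a little arithmetic. The numbers below (a 27-key typewriter, a sonnet of roughly 600 characters) are my own illustrative assumptions, not figures from the post:

```python
import math

# Rough arithmetic behind "not very likely": assume a typewriter with
# 27 keys (26 letters plus space) and a sonnet of about 600 characters.
# Both figures are illustrative assumptions.
keys = 27
length = 600

# Base-10 log of the probability that a uniformly random typist
# produces the exact character sequence.
log10_p = -length * math.log10(keys)
print(f"about 1 chance in 10^{-log10_p:.0f}")
```

A number with more than 800 zeros in the denominator: "not very likely" is generous.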

Necessity is the mother of invention, they say, but it's actually a two-way street. For instance, human inventiveness includes dreams of cosmic domination, leading to war. Hence the need for that bomb. Satisfying a need, like the need for flour, induces inventiveness. And this inventiveness, like the invention of genetically modified organisms, creates new needs. Necessity induces inventiveness, and inventiveness creates new dangers, challenges and needs. This cycle is endless because the realm of imagination is boundless, far greater than prosaic reality, as we discussed elsewhere.

Imagination and necessity are intertwined, but still are quite different. Necessity focusses primarily on what we know, while imagination focusses on the unknown.

We know from experience that we need food, shelter, warmth, love, and so on. These requirements force themselves on our awareness. Even the need for protection against surprise is known, though the surprise is not.

Imagination operates in the realm of the unknown. We seek the new, the interesting, or the frightful. Imagination feeds our fears of the unknown and nurtures our hopes for the unimaginable. We explore the bounds of the possible and try breaking through to the impossible.

Mind or stomach? Imagination or necessity? Every 'known' has an 'unknown' lurking behind it, and every 'unknown' may some day be discovered or dreamed into existence. Every mind has a stomach, and a stomach with no mind is not human.

Sunday, December 18, 2011

Jabberwocky. Or: Grand Unified Theory of Uncertainty???


Jabberwocky, Lewis Carroll's whimsical nonsense poem, uses made-up words to create an atmosphere and to tell a story. "Brillig", "frumious", "vorpal" and "uffish" have no lexical meaning, but they could have. The poem demonstrates that the realm of imagination exceeds the bounds of reality just as the set of possible words and meanings exceeds its real lexical counterpart.

Uncertainty thrives in the realm of imagination, incongruity, and contradiction. Uncertainty falls in the realm of science fiction as much as in the realm of science. People have struggled with uncertainty for ages and many theories of uncertainty have appeared over time. How many uncertainty theories do we need? Lots, and forever. Would we say that of physics? No, at least not forever.

Can you think inconsistent, incoherent, or erroneous thoughts? I can. (I do it quite often, usually without noticing.) For those unaccustomed to thinking incongruous thoughts, and who need a bit of help to get started, I can recommend thinking of "two meanings packed into one word like a portmanteau," like 'fuming' and 'furious' to get 'frumious' or 'snake' and 'shark' to get 'snark'.

Portmanteau words are a start. Our task now is portmanteau thoughts. Take for instance the idea of a 'thingk':

When I think a thing I've thought,
I have often felt I ought
To call this thing I think a "Thingk",
Which ought to save a lot of ink.

The participle is written "thingking",
(Which is where we save on inking,)
Because "thingking" says in just one word:
"Thinking of a thought thing." Absurd!

All this shows high-power abstraction.
(That highly touted human contraption.)
Using symbols with subtle feint,
To stand for something which they ain't.

Now that wasn't difficult: two thoughts at once. Now let those thoughts be contradictory. To use a prosaic example: thinking the unthinkable, which I suppose is 'unthingkable'. There! You did it. You are on your way to a rich and full life of thinking incongruities, fallacies and contradictions. We can hold in our minds thoughts of 4-sided triangles, parallel lines that intersect, and endless other seeming impossibilities from super-girls like Pippi Longstocking to life on Mars (some of which may actually be true, or at least possible).

Scientists, logicians, and saints are in the business of dispelling all such incongruities, errors and contradictions. Banishing inconsistency is possible in science because (or if) there is only one coherent world. Belief in one coherent world and one grand unified theory is the modern secular version of the ancient monotheistic intuition of one universal God (in which saints tend to believe). Uncertainty thrives in the realm in which scientists and saints have not yet completed their tasks (perhaps because they are incompletable). For instance, we must entertain a wide range of conflicting conceptions when we do not yet know how (or whether) quantum mechanics can be reconciled with general relativity, or Pippi's strength reconciled with the limitations of physiology. As Henry Adams wrote:

"Images are not arguments, rarely even lead to proof, but the mind craves them, and, of late more than ever, the keenest experimenters find twenty images better than one, especially if contradictory; since the human mind has already learned to deal in contradictions."

The very idea of a rigorously logical theory of uncertainty is startling and implausible because the realm of the uncertain is inherently incoherent and contradictory. Indeed, the first uncertainty theory - probability - emerged many centuries after the invention of the axiomatic method in mathematics. Today we have many theories of uncertainty: probability, imprecise probability, information theory, generalized information theory, fuzzy logic, Dempster-Shafer theory, info-gap theory, and more (the list is a bit uncertain). Why such a long and diverse list? It seems that in constructing a logically consistent theory of the logically inconsistent domain of uncertainty, one cannot capture the whole beast all at once (though I'm uncertain about this).

A theory, in order to be scientific, must exclude something. A scientific theory makes statements such as "This happens; that doesn't happen." Karl Popper explained that a scientific theory must contain statements that are at risk of being wrong, statements that could be falsified. Deborah Mayo demonstrated how science grows by discovering and recovering from error.

The realm of uncertainty contains contradictions (ostensible or real) such as the pair of statements: "Nine year old girls can lift horses" and "Muscle fiber generates tension through the action of actin and myosin cross-bridge cycling". A logically consistent theory of uncertainty can handle improbabilities, as can scientific theories like quantum mechanics. But a logical theory cannot encompass outright contradictions. Science investigates a domain: the natural and physical worlds. Those worlds, by virtue of their existence, are perhaps coherent in a way that can be reflected in a unified logical theory. Theories of uncertainty are directed at a larger domain: the natural and physical worlds and all imaginable (and unimaginable) other worlds. That larger domain is definitely not coherent, and a unified logical theory would seem to be unattainable. Hence many theories of uncertainty are needed.

Scientific theories are good to have, and we do well to encourage the scientists. But it is a mistake to think that the scientific paradigm is suitable to all domains, in particular, to the study of uncertainty. Logic is a powerful tool and the axiomatic method assures the logical consistency of a theory. For instance, Leonard Savage argued that personal probability is a "code of consistency" for choosing one's behavior. Jim March compares the rigorous logic of mathematical theories of decision to strict religious morality. Consistency between values and actions is commendable says March, but he notes that one sometimes needs to deviate from perfect morality. While "[s]tandard notions of intelligent choice are theories of strict morality ... saints are a luxury to be encouraged only in small numbers." Logical consistency is a merit of any single theory, including a theory of uncertainty. However, insisting that the same logical consistency apply over the entire domain of uncertainty is like asking reality and saintliness to make peace.

Sunday, December 11, 2011

Picking a Theory is Like Building a Boat at Sea


"We are like sailors who on the open sea must reconstruct their ship
 but are never able to start afresh from the bottom." 
Otto Neurath's analogy in the words of Willard V. Quine

Engineers, economists, social planners, security strategists, and others base their plans and decisions on theories. They often argue long and hard over which theory to use. Is it ever right to use a theory that we know is empirically wrong, especially if a true (or truer) theory is available? Why is it so difficult to pick a theory?

Let's consider two introductory examples.

You are an engineer designing a robot. You must calculate the forces needed to achieve specified motions of the robotic arms. You can base these calculations on either of two theories. One theory assumes that an object comes to rest unless a force acts upon it. Let's call this axiom A. The other theory assumes that an object moves at constant speed unless a force acts upon it. Let's call this axiom G. Axiom A agrees with observation: Nothing moves continuously without the exertion of force; an object will come to rest unless you keep pushing it. Axiom G contradicts all observation; no experiment illustrates the perpetual motion postulated by the axiom. If all else is the same, which theory should you choose?

Axiom A is Aristotle's law of inertia, which contributed little to the development of mechanical dynamics. Axiom G is Galileo's law of inertia: one of the most fruitful scientific ideas of all time. Why is an undemonstrable assertion - axiom G - a good starting point for a theory?
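The relation between the two axioms can be seen in a toy simulation. In Galilean/Newtonian mechanics the only rule is that force changes velocity; Aristotle's "objects come to rest" then emerges as the special case of nonzero friction. The masses, coefficients and step sizes here are arbitrary illustrations of mine, not anything from the post:

```python
# Toy simulation contrasting the two axioms of inertia. The only
# dynamical law used is Galilean/Newtonian: m * dv/dt = applied force
# + friction force. All numerical values are arbitrary illustrations.

def coast(v0, friction, mass=1.0, dt=0.01, steps=2000):
    """Velocity after coasting (no applied force) with viscous friction."""
    v = v0
    for _ in range(steps):
        # No applied force; the only force is viscous friction -friction*v.
        v += dt * (-friction * v) / mass
    return v

v_frictionless = coast(v0=1.0, friction=0.0)   # axiom G: stays at 1.0
v_with_friction = coast(v0=1.0, friction=0.5)  # decays toward rest
print(v_frictionless, v_with_friction)
```

With friction set to zero the velocity never changes (axiom G); with any friction at all the object grinds to a halt, which is exactly what Aristotle observed. The "undemonstrable" axiom subsumes the observable one as a special case.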

Consider another example.

You are an economist designing a market-based policy to induce firms to reduce pollution. You will use an economic theory to choose between policies. One theory assumes that firms face pure competition, meaning that no single firm can influence market prices. Another theory provides agent-based game-theoretic characterization of how firms interact (without colluding) by observing and responding to price behavior of other firms and of consumers.

Pure competition is a stylized idealization (like axiom G). Game theory is much more realistic (like axiom A), but may obscure essential patterns in its massive detail. Which theory should you use?

We will not address the question of how to choose a theory upon which to base a decision. We will focus on the question: why is theory selection so difficult? We will discuss four trade-offs, each a tension between competing demands.

"Thanks to the negation sign, there are as many truths as falsehoods;
we just can't always be sure which are which." Willard V. Quine

The tension between right and right. The number of possible theories is infinite, and sometimes it's hard to separate the wheat from the chaff, as suggested by the quote from Quine. As an example, I have a book called A Modern Guide to Macroeconomics: An Introduction to Competing Schools of Thought by Snowdon, Vane and Wynarczyk. It's a wonderful overview of about a dozen theories developed by leading economic scholars, many of them Nobel Prize Laureates. The theories are all fundamentally different. They use different axioms and concepts and they compete for adoption by economists. These theories have been studied and tested upside down and backwards. However, economic processes are very complex and variable, and the various theories succeed in different ways or in different situations, so the jury is still out. The choice of a theory is no simple matter because many different theories can all seem right in one way or another.

"The fox knows many things, but the hedgehog knows one big thing." Archilochus

The fox-hedgehog tension. This aphorism by Archilochus metaphorically describes two types of theories (and two types of people). Fox-like theories are comprehensive and include all relevant aspects of the problem. Hedgehog-like theories, in contrast, skip the details and focus on essentials. Axiom A is fox-like because the complications of friction are acknowledged from the start. Axiom G is hedgehog-like because inertial resistance to change is acknowledged but the complications of friction are left for later. It is difficult to choose between these types of theories because it is difficult to balance comprehensiveness against essentialism. On the one hand, all relevant aspects of the problem should be considered. On the other hand, don't get bogged down in endless details. This fox-hedgehog tension can be managed by weighing the context, goals and implications of the decision. We won't expand on this idea since we're not considering how to choose a theory; we're only examining why it's a difficult choice. However, the idea of resolving this tension by goal-directed choice motivates the third tension.

"Beyond this island of meanings which in their own nature are true or false
lies the ocean of meanings to which truth and falsity are irrelevant." John Dewey

The truth-meaning tension. Theories are collections of statements like axioms A and G in our first example. Statements carry meaning, and statements can be either true or false. Truth and meaning are different. For instance, "Archilochus was a Japanese belly dancer" has meaning, but is not true. The quote from Dewey expresses the idea that "meaning" is a broader description of statements than "truth". All true statements mean something, but not all meaningful statements are true. That does not imply, however, that all untrue meaningful statements are false, as we will see.

We know the meanings of words and sentences from experience with language and life. A child learns the meanings of words - chair, mom, love, good, bad - by experience. Meanings are learned by pointing - this is a chair - and also by experiencing what it means to love or to be good or bad.

Truth is a different concept. John Dewey wrote that

"truths are but one class of meanings, namely, those in which a claim to verifiability by their consequences is an intrinsic part of their meaning. Beyond this island of meanings which in their own nature are true or false lies the ocean of meanings to which truth and falsity are irrelevant. We do not inquire whether Greek civilization was true or false, but we are immensely concerned to penetrate its meaning."

A true statement, in Dewey's sense, is one that can be confirmed by experience. Many statements are meaningful, even important and useful, but neither true nor false in this experimental sense. Axiom G is an example.

Our quest is to understand why the selection of a theory is difficult. Part of the challenge derives from the tension between meaning and truth. We select a theory for use in formulating and evaluating a plan or decision. The decision has implications: what would it mean to do this rather than that? Hence it is important that the meaning of the theory fit the context of the decision. Indeed, hedgehogs would say that getting the meaning and implication right is the essence of good decision making.

But what if a relevantly meaningful theory is unprovable or even false? Should we use a theory that is meaningful but not verifiable by experience? Should we use a meaningful theory that is even wrong? This quandary is related to the fox-hedgehog tension because the fox's theory is so full of true statements that its meaning may be obscured, while the hedgehog's bare-bones theory has clear relevance to the decision to be made, but may be either false or too idealized to be tested.

Galileo's axiom of inertia is an idealization that is unsupported by experience because friction can never be avoided. Axiom G assumes conditions that cannot be realized so the axiom can never be tested. Likewise, pure competition is an idealization that is rarely if ever encountered in practice. But these theories capture the essence of many situations. In practical terms, what it means to get the robotic arm from here to there is to apply net forces that overcome Galilean inertia. But actually designing a robot requires considering details of dissipative forces like friction. What it means to be a small business is that the market price of your product is beyond your control. But actually running a business requires following and reacting to prices in the store next door.

It is difficult to choose between a relevantly meaningful but unverifiable theory, and a true theory that is perhaps not quite what we mean.

The knowledge-ignorance tension. Recall that we are discussing theories in the service of decision-making by engineers, social scientists and others. A theory should facilitate the use of our knowledge and understanding. However, in some situations our ignorance is vast and our knowledge will grow. Hence a theory should also account for ignorance and be able to accommodate new knowledge.

Let's take an example from theories of decision. The independence axiom is fundamental in various decision theories, for instance in von Neumann-Morgenstern expected utility theory. It says that one's choices should be independent of irrelevant alternatives. Suppose you are offered the dinner choice between chicken and fish, and you choose chicken. The server returns a few minutes later saying that beef is also available. If you switch your choice from chicken to fish you are violating the independence axiom. You rank beef below both chicken and fish, so the beef option shouldn't alter the chicken-fish preference.

But let's suppose that when the server returned and mentioned beef, your physician advised you to reduce your cholesterol intake (so your preference for beef is lowest) which prompted your wife to say that you should eat fish at least twice a week because of vitamins in the oil. So you switch from chicken to fish. Beef is not chosen, but new information that resulted from introducing the irrelevant alternative has altered the chicken-fish preference.
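The dinner story can be sketched as a toy choice model (the utility numbers are invented for illustration). Choosing by maximum utility satisfies the independence axiom as long as the utilities stay fixed; it is the arrival of new information, which updates the utilities themselves, that flips the chicken-fish choice:

```python
# Toy model of the dinner story. All utility numbers are invented.

def choose(menu, utility):
    """Pick the option on the menu with the highest utility."""
    return max(menu, key=lambda dish: utility[dish])

before = {"chicken": 3, "fish": 2, "beef": 1}
print(choose({"chicken", "fish"}, before))          # chicken
print(choose({"chicken", "fish", "beef"}, before))  # still chicken:
# with fixed utilities, adding the unchosen beef cannot flip the choice,
# exactly as the independence axiom demands.

# The server's mention of beef triggers new information (cholesterol,
# fish oil) that updates the utilities themselves:
after = {"chicken": 3, "fish": 4, "beef": 0}
print(choose({"chicken", "fish", "beef"}, after))   # fish
```

The apparent violation of the axiom is really a change of the utility function in mid-choice, which is the knowledge-ignorance tension in miniature.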

One could argue for the independence axiom by saying that it applies only when all relevant information (like considerations of cholesterol and fish oil) is taken into account. On the other hand, one can argue against the independence axiom by saying that new relevant information quite often surfaces unexpectedly. The difficulty is to judge the extent to which ignorance and the emergence of new knowledge should be central in a decision theory.

Wrapping up. Theories express our knowledge and understanding about the unknown and confusing world. Knowledge begets knowledge. We use knowledge and understanding - that is, theory - in choosing a theory. The process is difficult because it's like building a boat on the open sea as Otto Neurath once said.