Belling the Cat & Other Great Ideas

An outside consultant is someone who borrows your watch and charges you a fee to tell you the time.

People have been trying to sell us information research, outreach or new media services for a long time.  They are good people, usually smart guys with impressive credentials and great-sounding programs.  But they remind me of stray cats trying to become house cats.  They are very friendly and offer a lot, but once they get a steady supply of cream, I am not sure they won’t become a nuisance.  I understand the need to work with outside experts, but I have some simple concerns.

The first is a simple sourcing question.  Whenever someone comes in with really impressive and precise information, I have to ask where he got it.  Conclusions are no better than the source material they are based on and the soundness of the methods used to collect it, yet a clever consultant or academic can build impressive castles on the shifting, soft sand of supposition.  No matter how impressive the tower, the foundation is what matters.

A second question has to do with our own motivations.  We should use outside experts to “rent” expertise we don’t want to buy or develop permanently.  We should not use them as CYA, trying to outsource decision making or to create or buy systems that will run on autopilot.  Of course, some things are routine and well enough understood that we can just have a procedure.  The hard decisions are hard precisely because they do not fall into that category.  We cannot abdicate responsibility for these decisions.  The systems should be decision support, not decision substitution.

A third factor follows from both of the above considerations.  It is possible to create an impressive-looking expert system that leads you inexorably to a wrong decision.  We have to guard against this and always consider the inputs and sources.  Maybe the sources are flawed or the analysis is in error, but the system is so beautiful and elegant that it creates an impression of greater certainty than the information permits.  Without the system, you might spot the flaw for yourself; wrapped inside it, what would have been obvious is obscured.

An important reason for this is the effect of aggregation, which is a fourth factor.  I might make a reasonable guess.  You might too, and so might ten others.  Each of us has made a reasonable estimate with a degree of risk.  When we aggregate our guesses, they seem much more certain, but we may have introduced all sorts of biases.  The collective judgment may be worse than any of the individuals’.  Let me hasten to say that reasonable aggregation of diverse information is a great way to arrive at good decisions.  But when someone creates a model and then runs it, there is a good chance of introducing bias, perhaps unintentional, and a significant risk of faulty aggregation.  I have seen lots of examples of information cascades, where the first (wrong) guesses influence the others.  (I have even created a few as experiments.  It is not hard.)  If the model is opaque, as they often are, we can be easily fooled.  The worst case is when the model sort of works, but only because of random events or factors not properly accounted for in the model: arbitrary coherence.
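To see how easily such a cascade can swamp good individual judgment, here is a minimal, hypothetical sketch in Python.  It is only an illustration, not anything described above: the anchoring weight, the noise level and the planted first guess are arbitrary assumptions, chosen so that later guessers lean on the visible consensus and the aggregate drifts away from both the truth and the typical private estimate.

    # Toy illustration of an information cascade: later guessers anchor on the
    # earlier public guesses instead of relying only on their own private signal.
    # All parameters below are arbitrary assumptions chosen for illustration.
    import random

    random.seed(1)

    TRUE_VALUE = 100.0      # the quantity everyone is trying to estimate
    N_GUESSERS = 12
    SIGNAL_NOISE = 5.0      # standard deviation of each private signal's error
    ANCHOR_WEIGHT = 0.7     # how heavily each guesser leans on the running consensus

    private_signals = [TRUE_VALUE + random.gauss(0, SIGNAL_NOISE) for _ in range(N_GUESSERS)]

    # The first guesser happens to be badly wrong -- the seed of the cascade.
    private_signals[0] = TRUE_VALUE + 20.0

    public_guesses = []
    for signal in private_signals:
        if not public_guesses:
            guess = signal
        else:
            consensus = sum(public_guesses) / len(public_guesses)
            # Each later guesser blends the visible consensus with their own signal.
            guess = ANCHOR_WEIGHT * consensus + (1 - ANCHOR_WEIGHT) * signal
        public_guesses.append(guess)

    # Average of the honest private signals (excluding the planted outlier).
    independent_average = sum(private_signals[1:]) / (N_GUESSERS - 1)
    cascade_average = sum(public_guesses) / N_GUESSERS

    print(f"true value:                 {TRUE_VALUE:.1f}")
    print(f"average of private signals: {independent_average:.1f}")
    print(f"average of public guesses:  {cascade_average:.1f}  (dragged toward the first, wrong guess)")

Set ANCHOR_WEIGHT to zero and the cascade disappears: each guesser reports only his own signal, and the average lands close to the true value despite the one bad first guess.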

It is not what you don’t know that is most dangerous.  It is what you know that isn’t true. 

A fifth factor is a kind of Heisenberg uncertainty principle of human affairs.  The very fact that we are doing something, or even just observing, alters the underlying reality.  This is especially true of a big player like the USG.  We need to take account of the effects of our actions and recognize how situations develop.  The correct answer today may well be the worst solution six months from now, without either answer being wrong.  That is why I am a great believer in iterative research and programs.  You have to see how things develop and then take the next step.  Of course you need an overall context, but system-building consultants often become too vested in their peculiar models.  They want to keep applying them even when they have become inappropriate.

Which brings us to my sixth concern: an important reason we do programs ourselves is to build the knowledge and relationship base among our own people.  If we outsource activities, we also outsource, or give away, the relationships and the intimate knowledge of what we are doing.  It is a bit like a student hiring another kid to write his term paper.  We become dependent on the models and reports and may be misled once we let our own powers atrophy.  We get the big bucks because of our experience, judgment and knowledge.  If we outsource the tasks that require them, we are not only failing to add the value we are paid for, we are also giving away the things that build future human capital.

Finally, I always have to ask if the service or research is useful.  This seems an obvious question, but it often goes unasked.  We get so bedazzled by the graphs, fascinated by the immensity of a problem and/or baffled by the bullshit that we never ask, “So, what do I do with this?”

For something to be useful, it must be capable of being used, AND used by us, not by some theoretical all-powerful actor.  When I hear that something could be done, I want to know by whom and who has already done it.  I am a little leery of someone telling me that I will be the first one ever to achieve something.  There is often silence at this point.  Many consultants are so honestly in love with their own products that they are not ready for the disconfirming question.  Remember the fable of the mice who thought it would be a good idea to put a bell on the cat?  The plan was great until they asked who could do it.

Excuse me if I slip into hyperbole, but if I know there is a vast civilization on a planet of the Alpha Centauri system, yet I have no way to contact them or get there, that is very interesting but not useful information.  It is momentous and I want to know it, but I cannot act on it.  Among the compelling but useless information people often try to sell is polling data about whether or not people in country X like the U.S.  This is interesting information, but even assuming it does not fall into one or more of the traps mentioned above, it is useless unless there is something I, we, the USG can do about it.

For that I need more granular information.  Anyway, I don’t have to pay for that kind of general information.  I can get it free from Pew Research, Brookings, Heritage or many of the others who study such things.  (I found 33 official or authoritative studies on the subject.  I am sure there are more.)   Useful means actionable.   Most of what people are peddling is not.

I learn a lot from listening to these presentations, and I am glad they invite me to hear them. I feel a little bad for them.  They seem honest and earnest, but the chances they will sell much are slim.  I can often think of very good uses for particular parts of the product line, but I doubt I will ever find an acceptable whole solution.  If I do, I will advocate that we buy that system, and I can retire.