Can structured methodology reduce intelligence failure?

If there is one concept common to the minds of all four authors, Jervis, Betts, Marrin and Clemente, it is uncertainty. All four present their views on uncertainty as applied to intelligence analysis, with varying degrees of optimism, pessimism and fatalism. Uncertainty, it seems, has become quite a fashionable concept, used profusely across a variety of disciplines, from complexity science to intelligence to military strategy to corporate management to the social sciences.

The temporal aspect of this fascination with uncertainty has struck me as rather exaggerated, in that it is hardly specific to modernity. Examples from history, from the nineteenth-century Prussian general Carl von Clausewitz (I take this as an arbitrary starting point; any other would do as well) as far back as the Delphic oracle, are pertinent illustrations of mankind's preoccupation with uncertainty and the desire to eliminate it, be it by military force or strategy (from Greek strategos, 'general') or by means of spiritual evocation of an intermediary oracle to interpret, call it what you will, God's, fate's, or Chance's "intentions".

The negative aspects that all four authors attribute to uncertainty, and particularly Betts' fatalistic approach, are precisely a result of our modern obsession with certainty, security and risk elimination. In my view, uncertainty is not only not an obstacle but the very heart of opportunity, which in turn is more a source of optimism than of pessimism. The statement "failure is inevitable" (substitute failure with success or any other noun) sounds like little more than a false truism. It is reminiscent of the type of Parmenidean philosophy which asserts that all is one and change is impossible. I would go as far as to argue that this type of reasoning is a product of conscious and/or subconscious Christian rhetoric of the type that burns scientists and philosophers at the stake.

Being more at ease in Heraclitean water, I would argue that nothing is inevitable because everything is in flux. This flux, which is often ambiguous, uncertain, now visible, now not, sometimes linear, more often not, is precisely the strategic place of opportunity, the forking of the path, and the place where the potential for great leadership can emerge. To return to Clausewitz, it is precisely in times of great uncertainty, he argued, that great leaders are born.

All that said, there are a number of points Betts raises with which I concur. First of all, the idea that intelligence reform, whether procedural or product-oriented, is based on trade-offs seems to me a logical observation within a world view based on polarities. What is more important (Betts makes mere mention of this, but Marrin takes the argument a few steps further) is how, or rather where, such trade-offs can be optimally utilized when the binary system of polarities assumes a more complex and amorphous form through the injection of sub-polar categories, i.e. when we are presented with additional "circumstantial evidence" that dilutes the black-and-white picture.

This place and agent of change is, I believe, correctly identified by all four authors as the margin, or the periphery. It is particularly fitting to think of strategy in spatial, not only temporal, terms. The periphery, not the center, is often the space of pragmatic change. The center of gravity, to use a military term, represents not only the strength of a system but also its vulnerability. Again, history is rich with precedents of the shifting dynamics between center and periphery. Bolko von Oetinger, a strategist for Boston Consulting, argues in "Constructing Strategic Spaces" (Nov 2006) that "Strategy requires regular visits to the periphery in order to explore and learn", precisely because the center does not always remain the center and because "outsiders on the periphery are happy to traverse the distance to the center and conquer it." He provides a fitting example of a center-periphery dynamic with consequences the center could not have anticipated at the time. He takes 31 October 1517 as a temporal indicator of radical spatial change and asks (rhetorically): could Pope Leo X in Rome have anticipated that the 95 Theses Martin Luther nailed to a church door in Wittenberg (the periphery's far end by Roman standards) on that day would eventually result in Rome's losing its privileged position as the center of Christianity?

Returning to Betts's article, I found his descriptions of patterned behavior in the face of strategic surprise well thought out and instructive for a student of intelligence analysis. I agree with his evaluation of the difficulties and ultimately small benefit of worst-case scenario methods, which are particularly ineffective in operational terms. Further, if multiple advocacy increases rather than decreases ambiguity and uncertainty, I would argue that this method should only be used in cases where decision-making and good leadership go hand in hand, i.e. when political leadership is synonymous with intellectual rigor, courage, and a dose of entrepreneurship.

The Devil's Advocacy method is to me an intellectual exercise that should be confined to academia. While guilty of the pleasure of playing this game myself, I believe it does little more than encourage mistrust among the "wrong people" (decision-makers) at the "wrong time" (a time for decisive action rather than intellectual speculation).

Jervis identifies more or less the same causes of intelligence failure, uncertainty, ambiguity and deception, and offers valuable practical suggestions for improving intelligence processes and products. One thing that struck me as rather unique in his paper was his emphasis on human resources. In my professional function as chief knowledge officer, I am confronted with similar HR issues that shape the internal environment. Particularly worthy of note were the sections on multidisciplinary training and on vertical versus horizontal organizational structures, and how the latter, in their vertical form, inhibit quality analysis and performance in favor of organizational politics.

With regard to Perrow's "error-inducing system", which Jervis chooses to support through what he calls the "informal norms and incentives of the intelligence community", my response is the same as to Betts's argument: providing alternative competing hypotheses should be done with caution, depending on the customer. Further, the idea that intelligence analysis should borrow the academic method of testing hypotheses by drawing predictions is theoretically sound, and perhaps even applicable to long-term strategic analysis (or, on second thought, maybe not, as the more distant the future, the harder it is to make accurate predictions, except in Black Swan cases, where prediction becomes irrelevant), but it carries the operational trade-offs of time and money. Therefore, I do not think that any analytic method on its own can improve the analytic product. As Jervis himself argues, interlocking and supporting factors must reflect the requirements imposed by appropriate style: the length of the analysis matched to the consumer's requirements, peer review processes among the analysts, and a horizontal hierarchical structure, to name but a few.

Another point that struck me as particularly apt was Marrin's observation that "The CIA's Directorate of Intelligence – the home of analysts – appears to operate according to a culture that rewards service to policy makers but does little to distinguish between information and conceptual products." Jervis expresses a similar opinion when he criticizes analysts for producing political reporting rather than political analysis. My personal opinion here is that this shortcoming is due to a certain type of educational system that promotes knowledge over learning, the statement format over the question format. If I am correct in thinking so, it would take a long time to overhaul the fundamental principles that form our didactic processes. Another explanation could be a cognitive one: it takes less mental effort to produce a statement than to come up with a question. And, finally, it could be psychological: reporting facts is a largely anonymous activity that more people would be comfortable with than making a prediction or asking a question, which is an expression of individualism and, by extension, more open to criticism.

In this light, I read the final Marrin and Clemente article with great enthusiasm, because the medical analogy is a comparative method I have spent some time musing over myself in its application to religion, philosophy, writing, and memory. My only concern with this method is that it would attract a certain breed of person, be it an academic dilettante or a professional, whose passion for comparative analysis (of any type) would lead to an emphasis on theory over practice. And while I think a certain amount of theory can be beneficial to intelligence, its main purpose is, and should remain, actionable intelligence.

The analogy Marrin and Clemente draw between the process by which medical professionals arrive at a diagnosis and that by which intelligence professionals articulate an analytic assessment provides an alternative way of looking at the discipline of intelligence analysis, and it is, for the most part, useful.

First, the authors identify parallels between the two disciplines. In terms of collection practices, they draw attention to the similarities of the techniques employed to gather information upon which different hypotheses can be identified. They compare the medical history questionnaire a doctor first compiles in the diagnostic process to what can be roughly summarized as a situation assessment in intelligence, i.e. any known historical precedents or patterns of events and relationships between actors.

Second, the "review of systems", i.e. the assessment of specific organs, can be viewed as similar to the individual steps in a country profile assessment, including foreign policy, domestic policy, politicians and political leadership, diplomatic relations, and cultural and socio-economic relations. Marrin and Clemente claim that the stage involving the physical examination itself is least conducive to analogy, except in the form of overseas visits aimed at gathering first-hand knowledge of an area, or, alternatively, cables from government representatives stationed there.

Finally, additional information provided by various technological systems, such as MRI in medicine and IMINT in intelligence, further reinforces the analogy.

Particularly interesting was the observation that "90% of all diagnoses are made by clinical history alone, 9% by the physical exam, and 1% by laboratory tests and imaging studies such as CT and MRI scans" (p. 710). This finding has interesting implications for at least two reasons. First, if we extend the analogy to the intelligence field, it would seem that in the science vs. intuition debate, intuition is the clear winner in practice. Second, the finding raises a serious question about collection requirements, needs and spending: if a situation assessment is to be the core component of the final intelligence product, and most of the data can be obtained from open sources, this would necessarily minimize requirements and costs. Further, as Marrin and Clemente observe, the human element, i.e. the experience and developed intuitive capabilities of a professional in either field, will remain indispensable in interpreting the raw data gathered from MRIs, IMINT, SIGINT, and other technical subfields.

In the analytical process, there seems to be a strong argument for comparing how a medical doctor arrives at a diagnosis by examining alternative hypotheses with the way an intelligence analyst might employ Heuer's method of analysis of competing hypotheses (ACH).

Parallels also exist in the examination of the causes of inaccurate diagnosis versus intelligence failure: inevitable limitations in collection and analysis; cognitive limitations of the practitioner or analyst, such as biases and stereotypes; and failures in the application and implementation of scientific methods.

Marrin and Clemente also identify limitations to the proposed analogy between medicine and intelligence. Three key differences are worth acknowledging here. First, medicine has an advantage over intelligence as a scientific discipline in the sheer length of the field's existence, which offers medical practitioners a much wider empirical and theoretical knowledge base. Second, the difference in degrees of denial and deception also favors medicine: only rarely do patients conceal or deliberately misrepresent their symptoms, whereas in the intelligence field, denial and deception are standard practice. Finally, the doctor-patient relationship does not sustain a parallel to the relationship between the intelligence analyst and the decision- or policymaker, in that "National security decisionmakers, however, do not make decisions only after receiving finished intelligence analysis (i.e. what a doctor would do prior to initiating treatment), in many cases they are their own analysts, and they have entirely separate sources of information." (p. 722)

The lack of trust between intelligence professionals and decision- and policymakers, and the inadequate feedback mechanisms between them, are a well-known problem. By comparison, the doctor-patient relationship is closer to a symbiotic one, while the analyst-policymaker relationship remains incomplete at best, parasitic at worst, or even self-destructive.