Cultural Revolution in Intelligence

The piece below is my contribution to a special report on the revolution in intelligence affairs and was originally published by the International Relations and Security Network. Particularly insightful are the editorial by Kris Wheaton and a topic piece by Ken Egli on the potential role of academia in intelligence collection and analysis.


Cultural Revolution in Intelligence: From Government to Business Enterprise

Earlier this year, the Office of the Director of National Intelligence published a document entitled Vision 2015: A Globally Networked and Integrated Intelligence Enterprise. The first part of this bold intelligence community statement begins with an evaluation of the “shifting strategic landscape,” the defining characteristic of which is said to be uncertainty:

“We live in a dynamic world in which pace, scope, and complexity of change are increasing. The continued march of globalization, the growing number of independent actors, and advancing technology have increased global connectivity, interdependence and complexity, creating greater uncertainties, systemic risk and a less predictable future.”

Uncertainty has become one of the trendier concepts over the past few years, and is currently used profusely in the jargon of a variety of disciplines, from intelligence to complexity and network sciences to corporate and risk management. The intelligence community is not the trendsetter. Originally stemming from the physical and natural sciences, the emergence of the concept of uncertainty has been accompanied by the development of a homogenized lexicon for talking about “new risks” generated and driven by globalization and network growth, to the point where domains previously falling outside the scope of intelligence and security have been securitized. These domains run the gamut from society and culture to demographics and health to economics and finance to innovation and technology to natural resources and the environment. Regardless of the domain, we now talk about complex adaptive systems, whether we are examining conceptual physical models, biological organisms, tribes and clans, financial markets, terrorist or organized crime networks, or corporate knowledge management.

The list of globalized mashed-up vocabulary is long. It would appear that whichever way we turn, we find researchers, analysts and managers trying to detect emergence patterns, spot uncertain and unstable environments, aggregate and mine various types of data, develop systemic and holistic strategies and approaches, build resilient models, integrate systems within systems, collaborate and share knowledge across domains, form strategic partnerships, build agile infrastructures, transform organizational cultures and mindsets, and win the war for talent.

So how, apart from adapting to a new vocabulary, is the intelligence community going to achieve the transformation it so vehemently advocates? How is a largely static government enterprise to turn into a dynamic business enterprise? What is actually happening in the process of transforming the culture and mindset of the intelligence community so it may accomplish its mission to create decision advantages? What kind of education is needed to kick start the transformation? Is descriptive qualitative analysis obsolete and should the intuition-led approach be substituted with formal structured methodologies?

Vision 2015 proposes that in order for the intelligence community to transform into an enterprise able to provide decision advantage to policymakers, it must transform from a government enterprise into a “globally networked and integrated intelligence enterprise.” In other words, the intelligence community must start thinking and acting like a business. How well does the business metaphor hold in the government/national security context?

Government, critics of the business analogy have argued, is not comparable to business because it cannot be responsive to market forces since it has a higher purpose: public welfare. These critics also see the competitive advantage of intelligence in the community’s ability to “steal secrets”, which further implies a stronger emphasis on collection over analysis. Such an argument epitomizes the mentality and culture that the new vision is trying to counter. It is a snapshot, a still life if you will, of the Cold War mindset as to what characterizes intelligence. This mindset envisages a centralized national customer, promotes the obsession with secrecy, places value on the finished intelligence product rather than the process of intelligence, and treats flexibility as a foreign word.

Applying a business metaphor to intelligence processes in the national security context is not only valid; it is highly desirable. What the market is to business, international relations is to government. Are we to believe that government should not pay attention to the forces driving developments in the international arena and respond accordingly? Where once particular domains were immune to changes outside their immediate environment and cause-effect analysis had a more linear dimension, globalization and the resulting interconnectedness and complexity of drivers cutting across disciplines call for non-linear approaches to both collection and analysis.

For at least two decades now it has been widely acknowledged that the so-called intelligence cycle (the process of collection, analysis and dissemination) is an idealized Platonic model that is not only obsolete in today’s environment, but also dangerous and misleading. The first step toward transforming the intelligence community from a creeping and decrepit government apparatus into a dynamic enterprise is providing whatever education is necessary to curb the old mindset. Business and national security intelligence share the same strategic objectives: avoid surprises, identify threats and opportunities, gain competitive advantage by decreasing reaction time, and improve long- and short-term planning. With this in mind, the intelligence community should most certainly be responsive to market forces. It should allow for the formation and dismantlement of processes on an as-needed basis. If a process is recognized to be “unprofitable”, it should not be allowed to drag on for decades because government institutions have a “higher calling”!

Vision 2015 recognizes that the most difficult part of implementing the envisaged transformation is cultural change:

“The first and most significant impediment to implementation is internal and cultural: we are challenging an operating model of this vision that worked, and proponents of that model will resist change on the basis that it is unnecessary, risky, or faddish.”

Yet the real challenge of transforming the culture lies neither at the top (the Cold War veterans of the intelligence community who by the sheer force of nature are on their way out) nor at the bottom (the fresh-out-of-college Generation Y recruits who may have the “right” attitude and ideas but too little real-world experience to know how best to apply them). The challenge lies in the lack of mid-level leadership, as this is the level at which bottom-up ideas are filtered to form strategic direction at the top and gain buy-in from the customer. Inability to recruit and retain competent middle management will translate into either empty rhetoric and a hodge-podge of recycled vocabulary, or stagnation, lack of flexibility, and death by a thousand paper cuts.

If the intelligence community is serious about winning “the war for talent” (an expression around which its human capital strategy is built), it should aim at developing its mid-level capabilities. “Investing in our people” is a nice-enough-sounding cliché. It does not, however, mean merely ensuring competitive compensation and benefits, because in the war for talent there will always be someone ready to offer a bigger, better, more competitive package. Adequate compensation should not be a strategic human capital goal; it should be a given. Strategically speaking, investing in people should translate into offering them the opportunity to grow their potential through continuous learning, which in turn will increase their sense of ownership and loyalty. True, one can change a culture by throwing money at it, but the resulting culture is hardly the type likely to live up to the values set in Vision 2015: commitment, courage, and collaboration.

Winning the war for talent is not a silver bullet for a successful cultural transformation. If we think of information as the currency of the world of intelligence affairs, then surely we must observe fluctuations in this currency as the external environment changes. The relative scarcity of information during the Cold War era put a high price tag on information. Not only was there far less information available in contrast to today’s web- and telecommunications-networked world, but that information was collected secretly by means of human intelligence (HUMINT). Hence the culture in which the intelligence community operated was one that, first, placed far greater emphasis on collection than on analysis and, second, created a glamorous, cult-like image of secrecy.

The “information tsunami”, as information overload is figuratively referred to, together with the proliferation of telecommunications and media technology, has clearly devalued not only information as a currency but also its attribute – “secret” – thereby shifting the emphasis from collection to analysis. More value is now placed on sorting relevant information from the ubiquitous noise, which has resulted in the creation of a grey area somewhere between collection and analysis, namely synthesis. Yet synthesis is no new fad. It is an analytic process that every person in academia, from a freshman to a graduate researcher to an established professor, engages in daily. While some more progressive elements of the intelligence community have supported “outsourcing” the synthesis of open source information (the most voluminous type of information) to knowledge workers outside the community, be they academic institutions, think tanks, or in some ultra-progressive cases – crowdsourcing, such initiatives are still in the single-digit count.

There is some evidence of cultural change in the intelligence community toward acknowledging the value of open source intelligence (OSINT), such as the creation of the Open Source Center at the Office of the Director of National Intelligence (ODNI) and the ODNI-sponsored Open Source conferences in 2007 and 2008, which served as outreach activities bringing together intelligence professionals, academic institutions, think tanks, private-sector intelligence providers and the media. Nevertheless, a successful cultural transformation from obsession with classified information to wider use (not just acknowledgement) of OSINT has not been achieved.

One of the key design principles upon which Vision 2015 rests is adaptability, and the document duly declares: “The keys to adaptability are active engagement and openness to outside ideas and influences.” Yet the implementation plan fails to mention either OSINT exploitation or openness to collaboration and contribution by non-community members, such as think tanks and academia, where a large volume of vetted OSINT resides. Failure to take actionable steps in this regard will not serve the community well in its attempts at cultural transformation. Promoting ideas without an actionable plan is like taking one step forward and two steps back; worse, it creates a “cry-wolf” image.

All that said, it should be acknowledged that the United States is a pioneer in promoting the use of OSINT among intelligence professionals. The OSINT discussion at the EU level is lagging behind. As for countries with an alternative understanding of democracy, transparency and accountability, such a discussion is not only non-existent but very likely sends ripples of cynical laughter in the midst of planning the next black PR campaign.

Another due acknowledgement in this discussion is that cultural transformation rarely occurs with the swipe of a blade; it undergoes various phases over a period of time. Following a re-evaluation of the definition of intelligence in the post-Cold War environment, of the type of human capital the community wants to attract and retain, and of inward- and outward-looking operating models comes a re-evaluation of what constitutes quality intelligence products and the development of quality benchmarks. In this respect, Vision 2015 provides a bullet point under its adaptability actions, which reads as follows:

• Build the organic capability to conduct exercises and modeling and simulations throughout our processes (e.g., analytics, collection, mission management, etc.) to innovate and test new concepts and technologies.

For the reader unfamiliar with the intelligence community’s internal debates, the above provision might sound somewhat surprising. What? Doesn’t the community already have such capabilities? Aren’t collection and analysis done according to structured methodologies? Stephen Marrin, a CIA analyst from 1996 to 2000, reveals a different picture. In an article for the American Intelligence Journal (Summer 2007), he clearly outlines what is known in the community as the “intuition vs. structured methods” debate:

“Even though there are over 200 analytic methods that intelligence analysts could choose from, the intelligence analysis process frequently involves intuition rather than structured methods. As someone who worked at the CIA from 1996 to 2000, I possess firsthand knowledge of the kind of analytic approaches used at the time. While I was there, the reigning analytic paradigm was based on generalized intuition; an analyst would read a lot, come up with some analytic judgment, and send that judgment up the line without much focus on either the process involved in coming to that judgment, or making that process transparent to others. No one I knew – except for maybe the economic analysts – used any form of structured analytic process that was transparent to others. No quantitative methods; no special software; no analysis of competing hypotheses; not even link charts.”

For the sake of clarity, it should be said that “intuition” is meant here not in the sense of some extrasensory, paranormal activity. It simply refers to arriving at a judgment by means of extensive experience, in a way that cannot be clearly demonstrated. Another word commonly used to describe this process is heuristics, or rules of thumb. The preference of old-school intelligence analysts for intuition over structured methodologies stems from the historical Cold War mindset described above, and the reasons for its perpetuation are to be found in…human nature.
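To make the contrast with intuition concrete, here is a minimal sketch of the scoring step of Analysis of Competing Hypotheses (ACH), one of the structured methods Marrin mentions above. The hypotheses, evidence items and ratings are invented for illustration; the ranking rule (prefer the hypothesis with the least contradicting evidence, rather than the most supporting evidence) follows the standard description of the technique.

```python
# Minimal sketch of the scoring step in Analysis of Competing Hypotheses (ACH).
# Each piece of evidence is rated against each hypothesis:
#   -1 = inconsistent, 0 = neutral, +1 = consistent.
# Hypotheses are ranked by how little evidence contradicts them
# (fewest inconsistencies), not by how much evidence supports them.

# Hypothetical hypotheses and evidence ratings (invented for illustration).
hypotheses = ["H1: routine activity", "H2: deception", "H3: attack preparation"]
evidence = {
    "increased chatter":   [0, +1, +1],
    "no troop movement":   [+1, +1, -1],
    "procurement anomaly": [-1, 0, +1],
}

def rank_hypotheses(hypotheses, evidence):
    """Return (inconsistency_count, hypothesis) pairs, fewest inconsistencies first."""
    scores = []
    for i, h in enumerate(hypotheses):
        # Count the evidence items rated inconsistent (-1) with hypothesis i.
        inconsistencies = sum(1 for ratings in evidence.values() if ratings[i] == -1)
        scores.append((inconsistencies, h))
    return sorted(scores)

for inconsistencies, h in rank_hypotheses(hypotheses, evidence):
    print(f"{h}: contradicted by {inconsistencies} piece(s) of evidence")
```

The point of such a matrix is not the arithmetic, which is trivial, but that the judgment becomes transparent: anyone can inspect the ratings, challenge an individual cell, and see exactly how the conclusion was reached, which is precisely what the intuition-led approach lacks.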

During the Cold War, the intelligence community operated in an environment characterized by opposing ideologies, and the bulk of analysis was political: political situation assessments, profiling of political leaders, etc. To attempt to quantify such analysis would rightly be considered pseudo-science. Qualitative analysis, which is often based on intuition (that is, opinion vs. fact), is suitable to such an environment and to the problems it is tasked to analyze. However, with the securitization of domains previously not on the agenda of national security professionals, such as energy security, environmental issues, and the proliferation of networked non-state actors, qualitative analysis falls short in its ability to provide the type of rigorous analysis the new vision outlines. Perhaps even more importantly, in the aftermath of 11 September, analysis based on non-structured methodologies evades both transparency about how the analytic judgment was formed and the ensuing accountability.

Significantly, a number of academic intelligence programs have sprung up during the past decade offering advanced education in the field of Intelligence Studies. It is interesting to note that most of the advanced degrees they offer are Master of Arts rather than Master of Science degrees. This indicates that the debate over whether intelligence is an art or a science persists. A cultural change will not follow until people in the community stop thinking along black-and-white lines. Intelligence is both an art and a science. Resistance to implementing structured methodologies stems from habit, from a “this is not the way we do things around here” mentality, from the numerical illiteracy inherent in the Humanities and many Social Sciences, and from an “if it is so great, why do you always have to prove it to me” attitude. Countering such deeply ingrained habits will take time, and there are no quick fixes other than investing in people’s learning on the job. The intelligence community’s return on investment will be nothing short of realizing its lofty vision.


Late Night Thoughts on the Patriot Act: Size Matters

With George W. Bush paying his last tribute to European allies, and with the potential for a strategic inflection point in the choice of a new American president, here are some thoughts from a more benign supporter of the current administration (i.e. myself) on what I consider the President’s most enduring legacy, namely the Patriot Act of 2001. These are just my musings on intentions.

My aim here is to demonstrate how a single action can be perceived (and proven to be) both a success and a failure. First, I will outline the a priori argument, which shows the Patriot Act as an example of successful Congressional oversight. Second, I will look at the problem from an a posteriori approach, and try to show how and why the Patriot Act can also be perceived as a failure of Congressional intelligence oversight.

An a priori approach emphasizes the honorable intentions that prompted an action in the first place. An a priori argument is the notion that bad things are sometimes the right thing to do, not because the end is good, but because the motive is good. This argument falls within the domain of deontological ethics developed by the philosopher Immanuel Kant. In analyzing ulterior motives, Kant points out that something could look good and really be bad, and vice versa. He argues that only if an action springs from a desire to do good, with no expectation of reward or benefit, is the action truly good, and he goes on to discuss under what conditions people will do good without ulterior motives and/or expectations. The answer is “duty”. Leo Strauss summarized Kant’s doctrine succinctly: “The moral worth of an action proceeds from the goodness of the will by which that action is animated, which in turn means the purity of that will – the goodness of the will in its abstraction from every empirical end. Purity of will implies purification of the will of all substantive intention, the animation of the will only by its self-respect, its respect for the formal principle of will in general, in other words respect for law as such. Duty itself means the necessity of performing an action out of respect for law.”

When the Patriot Act was signed into law in 2001, it met with almost unanimous support from all directions of the political spectrum. It was seen as “the right thing to do”, as the nation’s “duty” to protect the ultimate raison d’être of a democratic state – the rule of law (or in Kantian terms, the Rechtsstaat). Congress approved the Patriot Act to better deal with the new challenges emerging in the geopolitical arena: the rise of non-state, transnational and asymmetric threats, most notably exemplified by the 9/11 terrorist attacks. While terrorism is not a new phenomenon, 21st-century transnational terrorism poses an unprecedented challenge. It challenges the institutional and legal foundation of intelligence by combining features of both internal and external threats, because it operates both inside and outside national boundaries. The Patriot Act was successful in identifying this new challenge and saw to it that the former strict divisions between external and internal intelligence, i.e. between the CIA and the FBI, erected in the aftermath of Watergate in the 1970s, were less strictly enforced. It allowed for freer and better information exchange and collaboration between the different intelligence agencies, including the FBI. Further, it allowed non-citizen terrorist suspects to be detained for up to seven days without a hearing, and in some cases, where an individual was certified to be a threat, it allowed that individual to be held indefinitely. Moreover, non-citizens who were found to have raised funds for terrorist activities could be deported. The FBI was granted greater freedom in accessing electronic communications. The Treasury was authorized to order banks to reveal the sources of suspicious accounts and to sanction uncooperative institutions.

Having looked at the Patriot Act from an a priori point of view, which shows that Congressional oversight was good in its inception – for all intents and purposes, doing one’s duty in the name of national security – I would now like to turn to an a posteriori approach and show the results, and to a large extent the unintended consequences, of the passing of the Patriot Act. It could be argued that the Patriot Act has, in a number of ways, infringed upon the notion of good governance as understood in a democratic society.

An a posteriori argument is a classic example of Aristotelian teleological ethics, according to which the end justifies the means. In other words, sometimes bad things need to happen because the consequences are good. Timing plays a major role in teleological ethics, with a focus on short-term consequences sometimes taking precedence over long-term ones. If we want to speak of the Patriot Act as a failure of Congressional intelligence oversight, it is precisely the failure of focusing on short-term rather than long-term consequences. Thus, while the Patriot Act enabled greater access to information and inter-agency cooperation, it did so at the expense of departing from constitutional norms and the civil rights and liberties which characterize democracy, with dire consequences for the U.S.’ public image both at home and abroad.

A retributive, eye-for-an-eye approach is incompatible with the rule of law. It postulates that people who lie, deceive, or otherwise treat other people as a means to an end deserve to be treated in exactly the same way. The war on terrorism is not a traditional military one but a politico-ideological and psychological one, in which terrorist groups aim to destroy Western values by eroding democratic institutions and governance; yet the Patriot Act has done little to reinforce democratic values. Instead, it has focused on revenge and on tactics which may have sufficed to deal with Cold War threats but have clearly come up short in addressing 21st-century challenges and threats. Blurring the division between external and internal intelligence, while honorable in intent, has produced a number of unintended bad consequences for civil rights: uninformed surveillance of individuals; not always justified infringements on the “rights” of immigrants, asylum seekers, and ethnic and religious groups; and the limiting or outright erosion of the principle that everyone should be treated equally before the law.

In conclusion, since it is mainly internal intelligence activities that have come under attack in light of the civil rights infringement debate, Congress would do well to focus on those provisions of the Patriot Act which deal with foreign intelligence, allowing greater freedom of action there while enforcing stricter control, supervision and oversight internally. It should be careful to avoid extreme positions which leave little room for flexibility and adaptability. The rigidity of categorical imperatives is simply ill-suited to deal with the threat of transnational terrorism. Whatever approach Congress decides to adopt in the future, it should leave enough space for manoeuvring as the external and internal environments change. It should also develop the capability to respond as close to real time as possible, as an inability to do so will undermine even the most successful policy or approach. I came across a blog entry recently entitled “Is your watchdog a lapdog or an attack dog?” Humor aside, let us not fool ourselves: neither a Bolognese nor a Rottweiler makes for a good watchdog; size matters, but so does personality. Personally, I would feel safer with a German Shepherd.

Possible Sources of Future Intelligence Failure

Russ Travers’ article, written on the eve of 9/11, sounds sinister in retrospect in that he points to shortcomings in the intelligence community which did, sure enough, manifest themselves in a major calamity. This, at least, was my impression after reading his article the first time. However, on a second read, I began to question his Cassandric powers, because for every shortcoming he identified I could think not only of a historical precedent but also of a current analogy. This makes me question to what extent we are fooling ourselves that an event such as 9/11 can ever be predicted and/or avoided. Further, to what extent were the measures to carry out intelligence reform a knee-jerk reaction to the dramatic tragedy of 9/11? Are we, seven years later, talking about major intelligence reforms as a response to the 9/11 “failure”, or is the question more intrinsic in nature? What if 9/11 hadn’t occurred? Would the same intelligence reforms be important to implement? Are the reforms we are talking about reactive or proactive in nature?

The shortcomings Travers discusses in his 2001 article seem to me to have changed very little. So little, in fact, that today we’re having the same debates and still trying to convince – the community? the executive branch? the legislative branch? ourselves as individuals? – that more radical change is needed. We seem to be stuck in limbo land, in the valley of death described by Grove, in a deadlocked argument that falsifies the past and obscures the future. I will enumerate here a few examples from Travers that seem particularly pertinent, and I would like to refer to them as

Unfinished Business

“Data was there,” he says, “but we failed to recognize fully their significance and put them into context.”

This is hardly an original excuse. Examples of this kind abound from myth, to history, to literature, to our own personal relationships. Essentially, this is a cognitive problem and solutions to it might be physically limited. Seen from an anthropological perspective, in primitive cultures, it is the role of god(s) to encode a message. The message is then recognized (best case scenario, but not a given) by mortals as a portent of something (usually ominous), and interpreted or decoded by an oracle (often in such a way that the interpretation is equally ambiguous), whereupon the mortal, fearful of the divine message but emboldened by the oracle interpretation, makes a decision and acts on it. Sometimes it is the right decision; other times it is not.

I apologize for the following diversion, but I include it here because I think it illustrates the difference in cognitive processes between encoding, decoding and interpreting a message.

In his essay “Sema and Noesis: Some Illustrations”, G. Nagy [NAGY, G., “Sema and Noesis: Some Illustrations”, Arethusa 16, 1983, pp. 35-55] examines the etymology and use of cognition vocabulary in Homer. He establishes the word sema ‘sign’ as a cognate of the Indic dhyama ‘thought’. Sema is found in the roots of our modern English words ‘semiotic’ and ‘semantic’, pointing to a relation with the mental process of thinking. In Greek, this connection appears in the words noos ‘mind, sense, intelligence’ and its derivative verb noeo ‘perceive, take note, think’, along with the derivative noun noesis. The etymology of noos has been traced back to the Indo-European root *nes- meaning something like ‘return to light and life’. Nagy points out that sema is “the key to a specific aspect of cognition, namely recognition.” (p. 36) Most frequently sema is used in Homeric epic in the context of the recognition of Odysseus by his philoi ‘those near and dear’. The activity which denotes the recognition of the sema is the verb anagignosko. What is more important is that the recognition of the sema is an act of interpretation. On several occasions when Zeus sends a lightning bolt (sema), its interpretation differs according to who the interpreter is (Il. 2.353, 9.236, 13.244, 21.413, etc.), or in the words of Nagy: “a code bearing distinct messages that are to be interpreted in context by both the witness and the narrative itself.” (p. 36)

The place where recognition occurs is the noos. Thus Alki-noos ‘notices’ that twice the disguised Odysseus weeps whenever the bard sings about the Trojan War (Od. 8.94, 8.533), which enables him to recognize the true identity of his guest. By contrast, the leader of the suitors is named Anti-noos, as both he and his comrades fail to recognize the many signs (semata) signaling their doom. (Od. 22.8-30)

A sema can be properly interpreted only in the context of knowing its relation to other semata in any given situation. The example Nagy provides is that the recognition of the Dog Star as a sema (Il. 22.30) depends on the knowledge of the position of the other stellar semata.

Two further examples:

In book 6 of the Iliad, Proetus sends Bellerophon to Lycia carrying a tablet, inscribed with “murderous signs”. Bellerophon cannot read what spells his death, but the king, for whom the message is intended, does, and upon reading the instructions, sends Bellerophon to death:

He quickly sent him off to Lycia, gave him tokens,
Murderous signs, scratched in a folded tablet,
And many of them too, enough to kill a man.
(Il. 6. 198-200)

When the king receives the “fatal message” (210), he identifies it as a sema (217) and sets about having Bellerophon killed.

In book 7 of the Iliad, Ajax and the other Greek heroes decide to draw lots among themselves to see who will meet Hector in single combat. The horseman gives the command for the lots to be shaken, “and each soldier scratched his mark on a stone and threw it into Atrides Agamemnon’s helmet” (202-3). After the lot is drawn and the herald takes it through the ranks, none of the heroes recognizes the mark except Ajax, to whom it belongs. This is important also because it establishes a connection between the two passages in the manner in which the “inscription” communicates its message independently of whatever graphs it may contain. Regardless of the way it is spelled, the message can be decoded only by the one who knows the relation between this particular sema and its context. To everyone else, it is meaningless.

The problem of noise vs. signal in the context of intelligence analysis is, I believe, rather similar to the examples above. These days we might not rely on oracles to interpret divine significance for us, but our faith in science and technology to do so is not that different. I have no formal training in cognitive science, but through amateur interest in the subject I remember reading that the thought process is a lot more demanding on the brain than the process(?) of belief. Our cognitive biases are precisely that: beliefs that are easier to hold onto than to spend the energy required to make a couple of neurons rub together and produce a spark.

The word “context”, so often featured in intelligence debates, has always struck me as rather strange. What does it mean to put something into context? It means to see where a particular piece fits into the whole. It is to have – yet another cliché – a “big picture” view of the problem. A noble enterprise. Have we become so self-delusional that we think we are capable of playing God? Or is it that we aspire to create an intelligence community that is nothing short of a conglomerate of Olympians? Viewed through the prism of 9/11 or any other dramatic historical event, perhaps a better way of reform would be to recognize that there are limitations to our cognitive capabilities, and that there will be times when we will not be able to predict the future, short- or long-term. In fact, in the face of events of such magnitude as 9/11, phrases such as “the near future”, “the foreseeable future”, etc. are utterly irrelevant. Chances are, on 9/10 the analysts preparing the PDB for the following day, like anybody else, would have seen no threat in the “foreseeable future”.

Another shortcoming of the intelligence community Travers talks about is the lack of an adequate response to the increased complexity of military, social and cultural factors. He says not only that no agency was “postured” to conduct integrated analysis reflecting this increasing complexity and interconnectivity, but that a deliberate “division of labor”, i.e. commissioning separate military, economic and political analyses, would lead to failure because such divisions do not reflect the external environment. True enough, such artificial divisions present a Platonified view of reality, and as Travers maintains, they result in a Balkanization of competencies, leaving little room for competitive analysis while making the exposure to risk and failure ever greater. I do not know to what extent reforms toward fusion analysis have been implemented in the intelligence community since the writing of his article. Judging from the ongoing debates one reads about in declassified sources, progress has been slow. On the other hand, the need to give up the silo structure and replace it with more holistic methods seems to have been clearly understood and supported by academic institutions geared toward preparing a new crop of analysts. Taking courses at the Mercyhurst Institute for Intelligence Studies over the past year has made this evident to me.

The lack of fusion that results from this division of labor has still more important consequences. It produces, as Travers says, a whole that is less than the sum of its parts, which brings my argument full circle to the notion of “the big picture”: no big picture, no context.

Finally, Travers speaks of the lack of a systematic national security policy. He claims that security policy is conducted on an ad hoc basis, in response to whatever happens to trigger a given administration’s knee-jerk reaction. This is another intrinsic problem which, although it does not stem from the intelligence community, greatly affects it. How? We can call Grove to our aid here. Clarity and determination are the qualities Grove associates with a company’s successful navigation of a transition period. In my opinion, the intelligence community still lacks both the clarity and the determination to transform itself. On one hand, the lack of a systematic security policy (clarity) results in unclear prioritization. On the other hand, the lack of strong leadership to show the way forward through personal example undermines the ability to carry out significant reforms. Moreover, all these factors, lack of vision and clarity and weak determination, contribute to demoralizing the workforce. Grove points out that “Demoralized organizations are unlikely to be able to deal with multiple objectives in their actions. It will be hard enough to lead them out with a single one.” (p.151)

I saved the point of leadership for last because I believe it is the most significant given the circumstances we face in 2008. Thinking of potential doomsday scenarios to answer this week’s contextual assignment question, I considered various Armageddon options, but none rings as devastating as what Roger Kimball describes as “cultural suicide” in an essay entitled “What We Are Fighting for: The Example of Pericles”. In this essay, Kimball compares the fifth-century B.C. Athens of Pericles with the U.S. today, using Pericles’ Funeral Oration to illustrate the similarities in fighting for freedom and democracy, intrinsic characteristics of both Athenian society in antiquity and American society today. Among the attributes he praises are the vigilance and sense of responsibility of citizens who, under a democratic government, enjoy certain common privileges but also share common responsibilities, as well as the impulse to achieve, to excel, and to surpass. Kimball laments that the concept of democracy today has been abused by people “fighting for their rights” while giving little back to society. He calls democracy a shorthand for mediocrity, “shorthand for…lowering standards and pursuing them as instruments of racial or sexual redress or some other form of social re-engineering.” (p.73) In that sense, Pericles’ oration is a refreshing departure from mediocrity in that it calls the people of Athens to mobilize their spirit and act in defense of excellence and a “healthy competitive spirit”. The speech, of course, is set against the backdrop of the enemy, Sparta, whose way of life and political system stood in contrast to those of Athens.
For Kimball, and I concur with his analogy completely, “The spectacle of radical Islamists dancing joyfully in the street when news broke of the September 11 attacks on New York and Washington should remind us of that fact.” More than a surprise attack or intelligence failure, more than an attack on capitalist symbolism or American citizens, 9/11 was an attack on Western value systems as described by Pericles two and a half thousand years ago.
Kimball goes on to identify some illusions in the West shattered by 9/11, most notably the fantasies of academic multiculturalists, the illusion that the world is a benevolent, peace-loving place, and the notion that the use of power by the powerful is by definition evil. I think he is being optimistic about the shattering of these illusions. They are still with us today, more persistent in some sectors than others, but ubiquitous in academia (especially the social sciences) and the media. The intelligence community, and the whole security sector, needs the kind of Periclean leadership that will drive it toward achieving excellence and taking responsibility for its actions while weeding out the elements of mediocrity, complacency and resignation. Starting with a clear definition of what intelligence can and cannot do, this new leadership will have to define realistic parameters for transformation.