97 billion pieces of intelligence. That is the quantity of information collected in just one month, March 2013, by the US National Security Agency. The phone calls, instant messages, and emails of millions of people worldwide, all sifted, counted, and categorised. ‘Boundless Informant’, the agency’s internal analytics tool, can even present this data country-by-country on a heat-map of surveillance in “near real-time”. The sheer scale of the operation is not the only thing to come to light since Edward Snowden’s revelations about the secretive practices of the US’s biggest intelligence agency. Perhaps more alarming is the way in which the NSA went about its data-mining. Without warrants, legal backing, or effective oversight, the NSA was able to surveil wherever and whomever it pleased, limited only by the scope of its technical abilities – the fight against terrorism being the oft-cited, and only, justification necessary.
Despite having Boundless Informant, the agency repeatedly claimed not to have accurate figures for its collection practices and (incorrectly) denied having monitored its own people. Not only did the NSA act contrary to law in many of its operations, but it also failed to understand the scope and scale of what it was undertaking. Indeed, during legal action to declassify related documents, intelligence officials said that
no one at the NSA fully understood how its own surveillance system worked at the time so they could not adequately explain it to the court…the NSA’s surveillance apparatus, for years, was so complex and compartmentalized that no single person could comprehend it.
Outside of the agency itself, it has become clear that those in charge of policy decisions and of overseeing the legitimacy of operations lack the technical know-how to raise concerns over suspect practices – indeed, they often don’t fully understand the alleged threats being protected against. It is no wonder then that these masters of a new digital order of security and surveillance have been called “Dr Strangeloves of Dataveillance” (http://www.polity.co.uk/book.asp?ref=9780745645117) and “alchemists” of the sort of new “visionary” techniques for monitoring and profiling risky subjects that the NSA spearheads (http://books.google.co.uk/books?id=JPHTHYDehqcC&pg=PR1&lpg=PR1&dq=amoore+risk+and+war+on+terror+book&source=bl&ots=DG8nerhiH1&sig=dbsAdwTBxSv4UfGfIp2bt05bAFM&hl=en&sa=X&ei=s7GFU8iDE8uw7AbfoYDwCQ&ved=0CIUBEOgBMAU#v=onepage&q=amoore%20risk%20and%20war%20on%20terror%20book&f=false).
Such techniques are designed to anticipate low-probability but high-impact occurrences – your 9/11s and 7/7s. Hard to anticipate but devastating in effect. These risks have resulted in a modern-day security apparatus that addresses itself to threats that are irregular, incalculable, and in important ways unpredictable. An understanding of contemporary threats as dispersed and uncontrollable phenomena has fostered a mode of security that aims to identify threats at an early stage, and to intervene accordingly. The central claim here is that our current understanding of terrorism diverges from traditional means of risk management based on causal and calculative knowledge in that it functions at the very limits of knowledge. Here, security intervention involves a more imaginative orientation toward the future and ‘anticipatory work’ that includes storytelling, scenario planning, and the performance of exercises.
But whose stories are being told? It is true that amongst the most prevalent of these technologies are those explicitly deployed in the name of ‘security’, from the ubiquitous CCTV cameras on our streets or in our public transport to the massive databases deployed at the transnational level to sort traveller details and profile potential risks (the Schengen Information System in the EU and the Automated Targeting System in the US are but two examples). However, our average day-to-day interactions with the security apparatus are not on such a grand scale. Whether it is swiping an ID card at the start of the working day, an Oyster Card on the tube, or entering our login credentials to access our emails, we are engaged in micro-level security practices on a regular basis, as an unthinking part of everyday routine. Beyond these overtly security-orientated processes, modern technology has driven us to increasingly self-surveil, blurring the lines between what is and what is not part of this apparatus. Every time we check in on Facebook or Twitter, use Google Maps to plot a route, or search the internet for a local restaurant, we knowingly (and sometimes unknowingly) exchange personal metadata for the expediency of localised, personalised responses that make our tasks faster, easier and more streamlined.
Indeed, technological solutions themselves provide many of the ‘next big things’ that we so hanker after. Whether it is the latest smartphone, the newest tablet, or ‘smart’ TV, new technologies have become objects of desire far beyond their basic utility as devices capable of completing the tasks for which they were initially intended. They have in many instances become more than productivity aids: actual extensions of our lifestyle, helping us make claims about our personalities, fashion tastes or interests. Thus, this new logic of surveillance has become embedded in everyday life as a matter-of-fact part of our existence – normalised in our most mundane of interactions with technology and one another. These interactions are themselves nothing revelatory; after all, technology pervades almost every aspect of modern life. From how we bank to how we communicate or entertain ourselves, technological innovations are always present, sold as the indispensable solution to all of the problems and desires thrown up by modernity. However, what is revealing here is what effects these interactions and mediations produce, and how they contribute to our understanding, and acceptance, of the modern-day security doxa.
Yet, the majority of these tools were not designed with the NSA in mind. Even setting aside for now the possibility that they could be, just because these technologies were not deployed with the express purpose of connecting to an over-arching security apparatus does not mean they do not share the same logic. These are systems designed to collect even the most trivial-looking of data, to categorise and profile, to pre-empt and predict. Whether it is assigning a risk level or simply the likelihood that we could be upsold the Tesco Finest range, these technologies are busy constructing a digital mosaic of our normal lives, concerned not with who we are as individuals, but rather how we compare with the rest of the crowd. Even when such a mosaic is not being built, new technologies often encourage the voluntary collection and sharing of exactly the information required to construct one.
In short, technological fixes for security and surveillance issues are everywhere, progressively more interconnected, integrated, and in ever-increasing numbers. More and more, they are indistinguishable from, or even an indispensable part of, the consumer and business-orientated technological artefacts that have become synonymous with modern life. More and more, these are things to be desired, self-surveillance increasingly becoming a lifestyle choice.
Behind these tools we have a growing band of security entrepreneurs bringing us the latest and greatest technological security solutions, ever more integrated and inseparable from the mundane minutiae of everyday life. These are the new Dr Strangeloves, not of Dataveillance, but of the Day-to-Day. Nothing encapsulates this better than a recent blog post by IFSEC, the International Fire and Security Exhibition and Conference – one of the biggest and longest-running exhibitions of its type, where the industry meets to display, discuss and buy just the sorts of tools already mentioned. In it, they present The Periodic Table of Security, with the tagline
Security professionals now have hundreds of elements to consider – from Fire to Facilities, Cyber Security to Safe Cities – it’s an increasingly complicated world.
The table certainly demonstrates the proliferation of modern-day security concerns. But in doing so it represents a new kind of alchemy – the art of fusing the most trivial of day-to-day business activities with our overarching concerns about security. Fire alarms, card readers, and filing cabinets alongside surveillance, intelligence and counter-terror – all in a day’s work for the security professional, all important, and all part of his remit. Don’t question the logic or the need: this is security we’re talking about. Perhaps not alchemy at all, then, but ‘Pataphysics.
Coined by the French writer Alfred Jarry in the early 1900s, ‘Pataphysics can be thought of as the science of imaginary solutions, that which lies beyond metaphysics, although it itself resists definition:
‘Pataphysics (the apostrophe is meant to be there) is probably best understood as that which lies beyond metaphysics. Correct definitions are equivalent to wrong ones; all religions are on a par as imaginary and equally important; chalk really is cheese. It’s an escape from reality – reminding us of just how idiotic the rules that dog our everyday existence are.
The periodic table of security aptly symbolises a growing ‘pataphysics of security. Looking for a definition of security, or of your role as a security professional? Choose one; all are equal. Right and wrong do not factor in. Surreal, absurd? All are welcome. Imaginary solutions indeed, but to imaginary problems? It’s an escape from reality, but one which by its very nature seeks to redefine that reality, imposing the rules at the same time as it reminds us just how idiotic they are. Security is becoming all things to all men. Or should that be everything and nothing? Both the purveyors of security technologies and the ‘pataphysicists make claims to science as justification for their pursuits. The irony of this should not be lost on us.