A Social Approach to National Security

by Patrick Cheetham

Ignoring social approaches to national security analysis severely degrades our understanding of adversaries’ intentions and capabilities. Understanding the intentions and technological capabilities of a state or non-state actor requires a holistic approach that analyzes social aspects, such as the creation of ideas, management practices, and the characteristics of an organization. Without deep knowledge of a culture, people, and region, these critical pieces of knowledge are easily overlooked. Recently studied examples of such oversights center on bioweapons: the Soviet Union’s own program and US assessments of the bioweapons threat.

In the case of the Soviet Union’s bioweapons program, Sonia Ben Ouagrham-Gormley argues “that the success of a bioweapons program also depends on ‘intangible factors,’ such as work organization, program management, structural organization, and social environment, that affect the acquisition and efficient use of scientific knowledge.” For example, the director of one anthrax bioweapon facility broke Soviet rules of compartmentalization by having scientists and weaponeers collaborate and communicate openly with each other, ultimately helping lead to the program’s success. Kathleen Vogel’s recent work, Phantom Menace or Looming Danger?, describes US bioweapons threat assessments that have excluded “from analytic and policy attention (1) a serious consideration of the social dimensions of biotechnology and its associated bioweapons implications; and (2) the social practices surrounding analytic work that can introduce biases into bioweapons assessments.” Vogel’s work depicts the “biotech revolution frame” and its ramifications for overestimating technical capability. Both cases offer a hindsight view of capabilities that were “knowable” at the time. Thus, by incorporating a better understanding of these social aspects into national security assessments, the US can better predict the nature of threats.

To ensure the security of the US through knowledge of intentions and capabilities, the social aspect must become an integral part of the analytical toolkit. The social approach to national security should be used more robustly, but not at the expense of other, quantitative approaches. Materials, resources, and expertise shape the social and technical spheres of knowledge creation just as organization, management, and ideation do. A toolbox of approaches, combining those that emphasize an adversary’s social dynamics with data-driven ones, should be utilized. The US Intelligence Community and the Department of Defense need to take this approach more seriously in order to produce rigorous analytic products that accurately identify and predict our adversaries’ intentions and capabilities.

If you want to understand 21st Century ‘Electioneering,’ look to Cicero

by Jennifer McArdle
 
In the first century BC, Marcus Tullius Cicero ran for consul, the highest office in the Roman Republic. His younger brother, Quintus, sought to advise him on how to effectively ‘social engineer’ the electorate. In the Commentariolum Petitionis, Quintus directs Marcus to wage a campaign based on micro targeting, delivering tailored campaign messages (which often contradicted each other) to various members of the Roman populace in order to gain their support. Quintus’ campaign strategy delivered Marcus victory, demonstrating the power of tailored messaging.
 
The use of behavioral science and big data by campaigns to effectively model voter behavior is adding new relevance to Cicero’s 2,000-year-old campaign strategy—micro targeting is once again in vogue.
 
The 21st century has witnessed the emergence of ‘data driven campaigns.’ Campaigns are combining big data with behavioral science and emergent computational methods to model individual voter behavior. By combining the data in public databases, which include information such as party registration, voting history, political donations, vehicle registration, and real estate records, with that of commercial databases, campaigns have been able to target individuals effectively. This micro targeting extends beyond identifying which voters to contact to tailoring the content of the message as well. Philip N. Howard, in his book New Media Campaigns and the Managed Citizen, notes that in the weeks prior to the 2000 presidential election, two middle-aged, conservative, female voters logged on to the same Republican website from different parts of the country. The first, a voter from Clemson, South Carolina, saw headlines about the Republican commitment to 2nd Amendment protections and the party’s pro-life stance. The second, based in Manhattan, was never shown those headlines. The website’s statistical model suggested that the former would respond positively to those headlines, while the latter likely supported some measure of gun control and a woman’s right to choose.
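
To make the mechanics concrete, the sketch below shows, in Python, how a campaign site might pick which headlines to show a given voter. It is a minimal illustration, not a real campaign system: the hand-set weights stand in for a statistical model fitted to voter-file and commercial data, and every voter field, weight, and message is invented.

```python
# Hypothetical sketch of message-level micro targeting. The weights below
# stand in for a fitted statistical model; all fields and names are invented.

VOTERS = [
    {"name": "Voter A", "party": "R", "region": "rural"},
    {"name": "Voter B", "party": "R", "region": "urban"},
]

# Per-message weights over simple voter features; a positive total score
# means the model predicts the voter will respond well to the message.
MESSAGE_WEIGHTS = {
    "2nd Amendment protections": {"party_R": 1.0, "rural": 1.5, "urban": -2.0},
    "pro-life stance":           {"party_R": 1.0, "rural": 0.5, "urban": -1.5},
    "small-business tax relief": {"party_R": 0.8, "rural": 0.2, "urban": 0.2},
}

def score(voter: dict, weights: dict) -> float:
    """Sum the weights for the features this voter exhibits."""
    total = weights.get("party_" + voter["party"], 0.0)
    total += weights.get(voter["region"], 0.0)
    return total

def choose_headlines(voter: dict) -> list:
    """Show only the messages the model predicts this voter will like."""
    return [m for m, w in MESSAGE_WEIGHTS.items() if score(voter, w) > 0]

for v in VOTERS:
    print(v["name"], "->", choose_headlines(v))
```

Run as written, the rural Republican sees all three headlines while the urban Republican sees only the tax message, mirroring the Clemson/Manhattan example above.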
 
While micro targeting in Rome arguably made the process more democratic—Marcus was not a member of the nobility and would typically have been eliminated from the candidacy—today’s use of micro targeting has the potential to erode democracy. These computational models allow parties to acquire information about voters without ever directly asking those voters a question. With this information in hand, campaigns can opaquely micro-target individuals, selectively providing information that fits the voter’s partisan and issue biases while withholding parts of the platform that may not align with those interests. Essentially, campaigns are able to generate filter bubbles, which reinforce individual viewpoints while screening out differing ideas or philosophies. Voters are not even aware that micro-targeting has occurred.
 
While it is unlikely that micro targeting can ever be removed from politics entirely, there may be a mechanism to ensure the integrity of the democratic process. While difficult, given the opaque nature of micro targeting, creating a ‘sunshine movement’ during campaigns, in the form of non-partisan sites that highlight each candidate’s full platform, could help ensure that voters know each candidate’s true views. ‘Data driven campaigns’ need not erode democracy, but should they remain as is, they may do just that.

Are we Icarus?

By Jennifer McArdle

 

Has our pursuit of scientific and technological knowledge led us to become ‘Icarus’? In some dystopian science fiction futures, such as Jack McDevitt’s Odyssey, humanity’s pursuit of scientific knowledge brings us to the brink of universal destruction. It is humanity’s hubris, our desire to be ‘all-knowing,’ that leads us to this point.

 

In Greek mythology, Icarus and his father, Daedalus, in an attempt to flee Crete, fastened feathers with wax into wings so they could soar away over the sea. Daedalus warned Icarus of the folly of first complacency and then arrogance. The former, complacency, flying too low, would cause Icarus’ wings to be moistened and him to be swallowed by the sea. The latter, arrogance, flying too high, would melt away the wax of his wings, causing Icarus to tumble into the sea below. Not heeding his father’s warning, Icarus curiously ascended into the sky. His ambition was his destruction, and as he sailed closer to the sun, the wax sealing his wings in place melted. Icarus fell to his death.

 

One does not need to read science fiction to understand the hubris of some scientific thinking; some of our most visionary scientific thinkers have exemplified Icarus’ same curiosity and ambition. Indeed, Robert Oppenheimer emerged from the Manhattan Project and the ensuing mass destruction of Hiroshima and Nagasaki staunchly opposed to more lethal nuclear weapons. However, upon learning that the hydrogen bomb was a real technical possibility, his curiosity got the better of him, and his position famously changed:

 

The program we had in 1949 was a tortured thing that you could well argue did not make a great deal of technical sense. It was therefore possible to argue that you did not want it even if you could have it. The program in 1951 was technically so sweet that you could not argue about that. The issues became purely the military, the political, and the humane problems of what you were going to do about it once you had it.

 

It seems, today, as if we are in a position very similar to Icarus receiving his wings. Stephen Hawking famously quipped earlier this month that the creation of artificial intelligence (AI) may be the greatest triumph of human history, but it may also be our last. And Hawking is not alone in this potentially bleak forecast; Bill Joy, co-founder of Sun Microsystems, noted in his influential Wired article that 21st century technologies—genetics, nanotechnology, and robotics (GNR)—could yield not just weapons of mass destruction but knowledge-enabled mass destruction, with the power to self-replicate. More disturbing to Joy was that these scientific and technological advances would arise gradually and humans would become increasingly socialized to them.

 

The question becomes: When is the tipping point? When does our scientific and technological curiosity turn to hubris? Will humanity acknowledge the inherent risk involved in AI and GNR research, and thus tread carefully, or will we be Icarus and soar into the sky?  

 

The Repercussions of Bad Science

By Mark Ridinger

 

One of the core missions at CReST is separating out dogma from the process of informing policy. Our goal is the advancement of sound policy based on science. And by that we mean, of course, good science. Science is not immune to conflicts of interest, to shoddy methodological practices, and, yes, even fraud. To wit, in the book Best Available Science, co-written by the Potomac Institute’s CEO Mike Swetnam, the authors correctly assess that “many individuals—some with good will, others with more malevolent desires—have misused scientific information…”

 

Last month, a comprehensive review published in the Annals of Internal Medicine found that “current evidence does not clearly support cardiovascular guidelines that encourage high consumption of polyunsaturated fatty acids and low consumption of total saturated fats.” Surprised? A recent article published in the Wall Street Journal picked up on this and chronicled the poor science, carried out almost unbelievably over decades, behind the claim that saturated fats cause cardiovascular disease. The studies, which date all the way back to the 1950s, were terribly flawed on many levels.

 

The author of the article, who has a book coming out on the subject, concludes that this is so because “nutrition policy has been derailed over the past half-century by a mixture of personal ambition, bad science, politics and bias.” The repercussions are significant. The American Heart Association, and later the US Department of Agriculture, rolled out dietary guidelines that eventually took the country by storm. The result was that carbohydrates began replacing saturated fats in Americans’ diets, a shift that has contributed to obesity and Type 2 diabetes, conditions now reaching almost epidemic proportions.

 

Bad science can also take the form of outright fraud, such as the now infamous paper published in the prestigious medical journal The Lancet in 1998, which claimed that the measles, mumps and rubella (MMR) vaccine caused autism. The article was not completely retracted until 2010, by which time the MMR vaccine had taken a beating in the court of public opinion and, worse, parents had avoided vaccinating their children, with significant resulting morbidity and mortality. The New England Journal of Medicine concluded that the resulting backlash against vaccinations included “damage to individual and community well-being from outbreaks of previously controlled diseases, withdrawal of vaccine manufacturers from the market, compromising of national security (in the case of anthrax and smallpox vaccines), and lost productivity.”

 

In our efforts to resist dogma as a means to inform intelligent policy, we must still stay vigilant against bad science, because, although both can be extraordinarily damaging, bad science is worse. Why?

 

For one, dogma is easier to spot—much more so than bad science. Science has become a “new god” to some, and thus it is all too easy to view science as infallible (no one likes to argue with a god). It takes specialized education and skill to identify bad science, and few have them in today’s complex times. Furthermore, bad science is much more likely than dogma to be adopted and institutionalized. Lastly, the potential to profit from bad science is much greater than from bad dogma, and that can be a motivation to overlook inconvenient outcomes or, even worse, to make fraudulent claims that can adversely affect millions.

Intelligent Science

By Charlie Mueller

We need intelligent science. Far too often we sacrifice intelligent science for arrogant science. Arrogant science is when we ignore certain possibilities and focus only on scenarios that fit our views; when science is driven by the fantasies of man and not the realities of nature. It is driven by fear, money, and egos.

Arrogant science is what has led to our current crises with antibiotic resistance and climate change. In each case, we used the findings of science to develop technology that appeared, in the moment, to change our lives for the better. There is no denying that the development and use of antibiotics has saved millions of lives, but it seems our strategy may also end up costing millions of people their lives in the future. This raises the question, “Did they really help, or did they just make the problem worse?”

The industrial revolution and modern globalization have raised quality-of-life standards for billions of people around the world. They have enabled a way of life that was science fiction only a century ago. However, in doing so we changed the very architecture of our ecosystem in a way that may be irreversible. Were these changes preventable? Were they predictable? Did we consider the long-term effects of these strategies designed to improve our lives?

With both antibiotic resistance and climate change, the problem seemed to be that we used science and technology to change a particular system before we understood how that system worked. Perhaps one of the best examples of implementing a strategy without understanding the system is US policy on forest fires. In the 1930s, the idea was that forest fires needed to be prevented and put out as quickly as they were found. It seemed logical and certainly couldn’t be a bad thing, right? Apparently it could. We didn’t understand that there was a natural order to forest fires and that they weren’t “bad,” even though they appeared that way in the moment. Because fires were not allowed to burn, the forests became very thick and the ground became “primed” with deadwood and brush. The stage was set for fires to become stronger than ever, something we have seen over the last few years.

The problem, again, was that we didn’t take the time to understand the role forest fires play in keeping a forest healthy, and as a result we implemented policy that actually did the opposite. We chose to act arrogantly and pretend we understood the issue. Now we are paying the price. What will happen if, in the future, we can’t pay that price? Are we about to destroy the planet in the blink of an eye with Large Hadron Collider experiments that have a very small probability of forming a black hole? Do we really understand the risks we are taking in science? We need intelligent science.

Intelligent science requires patience and an informed public.  The goal of intelligent science is to understand the systems being studied and how they work.  Intelligent science is not driven by money; it is driven by knowledge.  When we take the time to understand how a system works, we can properly develop strategies that change the system in ways that we want and in ways that we can predict.  We can prepare for the “side effects” and truly use science and technology to make our lives better.  Intelligent science is what leads to the impossible becoming possible.

In order for us, as a society, to practice intelligent science, we need a public that is capable of identifying and supporting intelligent science. Science is a dialogue: an ongoing conversation made up of the people who speak, practice, and study it. Anyone can be a scientist. Like any conversation, it contains multiple points of view and many ways in which a particular issue can be framed. The beauty of science, though, is that in this conversation a point of view holds only if the evidence supports it.

If only a handful of people can determine whether the evidence is good, then only a handful of people can decide whether a given point of view is valid. This is why we need an informed public. Without an informed public, the scientific dialogue will be framed according to the beliefs of the few who understand its findings. We need as many people as possible scrutinizing the latest scientific findings, asking the questions that need to be asked, and making sure that science stays true to its pursuit of knowledge. We don’t need a precautionary principle in science; we need intelligent science.

A Marred Mother’s Day

by Mike Swetnam

 

This Mother’s Day is marred by the abduction of several hundred Nigerian schoolgirls. The only thing their mothers wanted was an education for their children.

 

Nigeria is a secular country that has been trying to provide a balanced and secular education to its population.

 

Boko Haram, the terror group that kidnapped the children, like all Al Qaeda-affiliated groups, opposes any form of secular education, especially for girls. The group is a prime example of extremism and of extreme acts meant to prevent the rational and thoughtful functioning of society. Extremism, particularly religious extremism, is destructive and wrong. We cannot let this faulty, old thinking continue.

 

Many have called for the elimination of groups like Boko Haram and Al Qaeda because these groups practice terror and the killing of the innocent. But we need to eliminate these groups for an even more fundamental reason. They advocate government dictated by religion. They demand the subjugation of women. They are dedicated to the destruction of secular and rational governments and societies.

 

Nothing has fueled the progress of the human race more than rational secular education and governments based on these ideals.  Clearly the founders of the USA understood the fundamental nature of this concept when they mandated a separation of church and state as a key part of our Constitution.

 

Our future depends on a free and open school system not just for American citizens but also for all world citizens.

 

Freedom, democracy, and security of any form are dependent on an informed and educated population.  When society fails to create and maintain that informed and rational population, it FAILS.

 

We need to destroy groups like Boko Haram and the ideology that fuels them.  Not just because they represent a ruthless terror group that kidnaps children, but also because they stand in the way of human freedom and progress.

 

Why Internet Personalization Could Erode Democracy

by Jennifer McArdle

 

Internet sites and social media platforms like Google, YouTube, and Facebook have amassed immense amounts of data on individual users, compiling, in essence, individualized virtual footprints. Combining each person’s virtual footprint—their clicks, downloads, purchases, ‘likes’, and posts—with psychology and neuroscience allows search engines and social media platforms to model human behavior and predict current and future interests.
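
As a toy illustration of how a footprint might be turned into predicted interests, consider the sketch below. The event types, categories, and weights are all invented for illustration; real platforms use far richer features and trained models rather than simple weighted counts.

```python
from collections import Counter

# Hypothetical sketch: scoring a user's interests from a "virtual footprint".
# Heavier events (a purchase) count for more than lighter ones (a click).

EVENT_WEIGHT = {"click": 1.0, "like": 2.0, "download": 2.0, "purchase": 3.0}

footprint = [
    ("click", "politics"), ("like", "politics"), ("purchase", "outdoor"),
    ("click", "politics"), ("download", "finance"), ("like", "outdoor"),
]

scores = Counter()
for event, category in footprint:
    scores[category] += EVENT_WEIGHT[event]

# The top-scoring categories become the predicted interests used to
# personalize ads, search results, or a news feed.
print(scores.most_common(2))  # [('outdoor', 5.0), ('politics', 4.0)]
```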

 

The power of personalized prediction has already been demonstrated within the advertising world. In a much-publicized 2012 media story, Target was able to identify a pregnant teenager before her father could, based simply on her Internet search history. Internet search data, when combined with the power of behavioral science, can reveal deeply personal things about individuals, even life-changing events like pregnancy. Corporations use personalized predictions to increase profits. In 2011, Facebook and Google alone made $3.2 billion and $36.5 billion, respectively, by selling personalized advertising space to corporations based on user data. Personalized advertising works, and the market for it is steadily on the rise.

 

Media personalization, however, extends beyond corporate advertising to the news. In a New York Times article, Jeff Rosen of the George Washington University Law School investigated what ‘personalized’ news means for democracy. After clearing the cookies from two of his Internet browsers, Safari and Firefox, Rosen created a ‘democratic Jeff’ and a ‘republican Jeff.’ Within two days, the two browsers, with their two different ‘identities,’ began returning search results that fundamentally differed, based on platform predictions of partisan interests. Similarly, Eli Pariser, in The Filter Bubble, ran an experiment with two left-leaning female colleagues from the Northeast. At the height of the 2010 Deepwater Horizon oil spill, Pariser asked both colleagues to run searches for ‘BP.’ The first page of search results differed markedly: one woman’s results returned news of the oil spill, while the other’s returned only investment information on British Petroleum. For the latter of the two, a quick skimming of the front-page results would not have confirmed the existence of the ongoing environmental crisis. Google’s predictive, personalized algorithms delivered fundamentally different news results.

 

In 1982, Shanto Iyengar highlighted the impact of media on personal perception. In his study, “Experimental Demonstrations of the ‘Not so Minimal’ Consequences of Television News,” Iyengar demonstrated that media exposure to various issue areas tended to raise the perceived importance of those issues in subjects’ minds. Iyengar called this ‘accessibility bias.’ In a world of personalized search engines, news, and social media, it is likely we will fall prey to this ‘accessibility bias.’ However, unlike past ‘accessibility biases,’ today’s will be constructs of our own beliefs.

 

As political philosopher Hannah Arendt wisely noted, democracy requires a public space, where citizens can meet and exchange diverse opinions—it is within this common space that a robust democratic dialogue takes place, and a commonality can emerge through the differences. Internet personalization erodes these common spaces, making it increasingly unlikely that our ‘democratic and republican Jeffs’ will encounter differing ideas or philosophies.

 

If Internet personalization today seems somewhat troubling from a democratic standpoint, the future only seems more problematic. Yahoo’s CEO and Google’s former Vice President Marissa Mayer has expressed hope that the search box could eventually be rendered obsolete. Eric Schmidt, Google’s executive chairman, has forecasted that “the next step of search is doing this automatically…When I walk down the street, I want my smartphone to be doing searches constantly—‘did you know?’ ‘did you know?’ ‘did you know?’” In the future, as Pariser notes, your phone will be doing the searching for you.

 

A future of ubiquitous personalization could be a future of ubiquitous confirmation biases—a world where our beliefs and perceptions are constantly confirmed by our personalized media, entering us into an endless confirmation loop with no real feedback. In psychology and cognitive science, confirmation bias leads to statistical error. In a democracy, confirmation bias could lead to polarization and the failure of democratic dialogue.
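
To see why such a loop tends toward polarization rather than mere error, consider the toy simulation below. Every parameter and update rule is invented for illustration, not drawn from any platform’s actual algorithm: a user’s belief lies on a scale from -1 to 1, a personalized feed favors the most strongly agreeable story available, and reading a story nudges the belief toward it.

```python
import random

random.seed(42)

def simulate(personalized: bool, steps: int = 300) -> float:
    """Return the user's final belief after `steps` rounds of reading."""
    belief = 0.1  # a slight initial lean
    for _ in range(steps):
        candidates = [random.uniform(-1, 1) for _ in range(10)]  # story slants
        same_side = [c for c in candidates if c * belief > 0]
        if personalized and same_side:
            shown = max(same_side, key=abs)    # most strongly agreeable story
        else:
            shown = random.choice(candidates)  # a random story
        belief += 0.05 * (shown - belief)      # belief drifts toward content
        belief = max(-1.0, min(1.0, belief))
    return belief

print(f"personalized feed:   final belief = {simulate(True):+.2f}")
print(f"unpersonalized feed: final belief = {simulate(False):+.2f}")
```

Under these assumptions, the personalized reader drifts toward the extreme of the scale while the unpersonalized reader hovers near the center; the feed never supplies the corrective feedback that would pull the belief back.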

 

In February 2012, the Obama administration released the Consumer Privacy Bill of Rights, which sought to clarify the proper use of virtual consumer data by corporations. While the administration’s Consumer Privacy Bill of Rights is a step in the right direction—helping to ensure virtual privacy in the marketplace—it does nothing to address a ‘netizen’s’ right to access information free from personalization or bias.

 

At present, some sites, like Google, allow users to opt out of interest-based ads. However, these measures do not go far enough. Platforms like Facebook, Twitter, and Google control content visibility, personalization, and data sharing through private algorithms and policies. These algorithms and policies are often opaque or inaccessible to the public, yet they can wield immense influence. Making personalization algorithms transparent and simple would allow users to understand how their news is being personalized. That, combined with the ability to opt in and out of personalization, could help ensure an Arendtian public space while still providing corporations profitable advertising platforms. Personalization does not have to erode democracy. However, if personalization remains opaque, it may do just that.