A National Neurotechnology Initiative

By Jen Buss

A year ago, the President announced the BRAIN Initiative, describing it as “a bold new research effort to revolutionize our understanding of the human mind and uncover new ways to treat, prevent, and cure brain disorders like Alzheimer’s, schizophrenia, autism, epilepsy, and traumatic brain injury.” Yet these diseases affect fewer than 5% of the population.

Neuroscience and technology will affect our entire society, not just people with these diseases. Neuroscience will be able to help

  • veterans recover and find jobs,
  • students excel in school and become the best and brightest in the world, keeping the United States ahead of other countries, and
  • create new industries that will generate jobs and new economies.

To do this, we need to expand the current BRAIN Initiative into a National Neurotechnology Initiative (NNTI): an initiative that serves the public good and the national interest. The NNTI will be a national effort that affects the whole population, not just a fraction of it. We need to do something the public can believe in, be proud of, and see results from. Neurotechnology is going to revolutionize the world and have profound effects on the way members of a society interact with one another and the way societies interact with each other.

The government can guide these changes rather than sit back and watch them happen until it is too late to shape their effect on our society. Now is the time to act to create the National Neurotechnology Initiative. This Initiative should

  • focus federal investment in key research areas,
  • follow an investment roadmap, and
  • coordinate these investment efforts through a National Neuroscience and Technology Coordination Office.

Through these three tasks, the government can succeed in expanding the BRAIN Initiative.  The National Neurotechnology Initiative is the only solution for the future of neuroscience in our society.



The Declining Creativity of America’s Students

We need “CE” as much as “PE” in our schools.

By Mark Ridinger


When discussing the state of education in America, most talk today revolves around measuring intelligence and trying to improve standardized test performance. IQ tests (which attempt to measure convergent thinking) are frequently used to try to find our brightest students and place them in gifted programs. Intelligence is, of course, an important part of the equation, but what of creativity, of identifying and measuring divergent thinking, and fostering its development? What of creative intelligence (CQ)? Our future problem solvers and innovators, be they entrepreneurs, inventors, authors, or researchers, will rely on creative intelligence, and identifying and fostering these thinkers early in their education is paramount for America’s future. Unfortunately, we are failing at that endeavor.


The paradigm of merely equating IQ with the skills needed for success is outdated. Current research shows that there is little correlation between intelligence and creativity, except at the lower end of the IQ scale. People can in fact be both highly intelligent and creative, but also intelligent and uncreative, and vice versa. But how do we identify CQ? Dr. E. Paul Torrance has been called the Father of Creativity for his work, which began in the 1960s. His standardized test, the Torrance Tests of Creative Thinking (TTCT), is considered the gold standard for measuring and assessing creative thinking, and it can be administered at any educational level, from kindergarten through graduate work.



Several comprehensive reviews of Torrance’s data, spanning decades, have recently been published. The bottom line is that the TTCT not only identifies creative thinkers but is also a strong predictor of lifetime creative accomplishment. In fact, Indiana University’s Jonathan Plucker determined that the correlation with lifetime creative accomplishment (e.g., inventions, patents, publications) was more than three times stronger for childhood creativity (as measured by the TTCT) than for childhood IQ. Having a validated instrument like the TTCT is so important because alternative means of identifying CQ do not work as well. Expert opinion and teacher nominations have been used, but these methods are prone to error and bias. For example, students who are already achieving, who have pleasant demeanors, or who have already ranked well on conventional IQ tests tend to be selected, while researchers have shown that highly creative students and divergent thinkers are typically shunned and are at risk of becoming estranged from teachers and other students. In fact, the odds of dropping out increase by as much as 50 percent for creative students in the wrong school environment.


What else has the review of Torrance’s data shown? Unfortunately, that America seems to be in a CQ crisis. Kyung-Hee Kim, an assistant professor at William and Mary, analyzed 300,000 TTCT results and determined that creativity has been on the decline in the US since 1990. The age group showing the worst decline is kindergarten through sixth grade. The factors behind this decline are not known, but they may include uncreative play (escalating hours spent in front of the TV or videogame console, for example), changing parenting and family dynamics (research suggests that a stable home environment that also values uniqueness is important), and an educational system that focuses too much on rote memorization, standardized curricula, and national standardized testing. Are we stifling divergent thinking in our children for conformity of behavior?


The rest of the world seems to have woken up to the need to foster creativity in the educational process, and initiatives to make the development of creative thinking a national priority are ongoing in England, the EU, and even China. The United States needs a similar national initiative if we hope to stay competitive on the world stage. What is needed is a new approach to learning that still has children mastering necessary skills and knowledge, but through a creative pedagogical approach. We know that creativity can be measured, managed, and fostered; there is no excuse not to implement such a strategy in our school system. Let’s see the creation and deployment of creative exercise classes for our students and the use of creativity tests as additional inclusion criteria for gifted programs. Surely “CE” is at least every bit as important as “PE.”



A Call for Proactive Protection of Privacy Rights

By Ewelina Czapla


Although we currently fear that our phone and online data are outside of our control and subject to searches by both private industry and the government, much more will be at stake in the future: our thoughts and ideas. In recent years our privacy has been constantly challenged by the development of ever more invasive technologies. While there has been a call for an explicit right to privacy, the addition of such a right to our Constitution may not suffice to protect us in the years to come.


Currently, we produce data by using our phones, computers, and tablets. This data can be so personal as to include the thoughts we formalize as text. But recent developments in the field of fMRI suggest that we may soon be able to accurately read the human mind. Looking even further into the future, it is likely that we will be able to digitally interface with the human brain, making the next-generation smartphone an implant. At that point, not only will the data we choose to formalize as text be subject to access, but so will our very thoughts and ideas.


We find ourselves without an explicit right to privacy because of the high rate of technological development and the slow rate of legal development. Our legal system has managed to respond to the phase one impacts of digital communication. However, it is still struggling, after decades, to address the phase two impacts, while the phase three impacts are on the horizon. Unless action is taken now, we will find ourselves wary not only of our lack of privacy but also of our lack of cognitive liberty. For this reason, we must look beyond simply the right to privacy and call for cognitive liberty, including the right to cognitive enhancement, ownership of personal data including thoughts, and protections for our thoughts similar to those afforded to the spoken word. Only when such changes are made will we be afforded adequate civil liberties to function in the modern world.

Citizenship Without Borders

by Ewelina Czapla


The impact of the Internet on our ability to communicate and govern can be described through a two-phase approach: phase one impacts are technological changes that accelerate an existing process, while phase two impacts are technological changes that profoundly alter the way society functions.


The spread of Internet connectivity in past decades has greatly increased our ability to maintain personal communications and to grow commerce, a phase one impact. The rise of the Internet has made it possible to interact regularly with individuals thousands of miles away and to conduct business without ever physically meeting your customer. Your product may be produced in one country by a factory you contracted with, stored in a facility you rented elsewhere, and shipped by an international carrier to your customers in yet a third country; location is no longer a limitation. With the creation of Bitcoin, it has become possible to conduct digital trade in which traditional banks and state regulations are moot, sustaining a digital economy.


Governance in the future may be conducted outside geographic boundaries, within a digital realm where individuals are offered citizenship, a common currency, and the ability to conduct business. Global movements may be spawned by an active leader with Internet access who is not affiliated with any traditional government construct. We may well see the concept of an online nation arise as groups of geographically disparate people come together with common goals and a common currency, a phase two impact of Internet connectivity.


Traditional geographically bound states, which created the infrastructure for digital states to arise, must now consider the impact of these disparate populations coming together. This will leave many questions to be answered regarding the legitimacy and role of an online nation.

The Primrose Path: Countering the Myth of Internet Democratization

by Jennifer McArdle 


The belief that Internet and social media may be leading to greater democratization is a myth. In reality, the Internet may surprisingly be leading us down the primrose path.


At first this statement may seem counterintuitive: platforms like Twitter have given ‘netizens’ the ability to report news and ideas first hand. As Tracy Westen of the Center for Governmental Studies notes, the Internet’s ability to give individuals a personal vocal platform encourages broader democratic discussion: candidate to candidate, voter to candidate, and voter to voter. Jon Pareles calls this process disintermediation: the removal of the middleman (the traditional news outlet) from the news. Emerging news sites, such as NewsPad, aim to benefit from this ‘disintermediation’ process, crowdsourcing the news by empowering local communities to write articles collaboratively. Andrés Monroy-Hernández, one of the creators of NewsPad, noted that the goal was to produce news that was “for the people, by the people,” a clear democratic reference to Lincoln’s famous Gettysburg Address. So why, then, considering this ‘disintermediation’ process, is the belief that the Internet may be leading to greater democratization false?


While disintermediation has led to the removal of the traditional news middleman, a more problematic invisible middleman has emerged in the form of our social media and search engine giants. 


The convergence of big data and behavioral science (i.e., cognitive security) has allowed search engines to ‘personalize’ news. Combining each person’s digital footprint (their clicks, downloads, purchases, ‘likes’, and posts) with psychology and neuroscience allows search engines and social media platforms like Google and Facebook to predict individual interests. The result has been individualized, tailor-made news updates.
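The mechanism described above can be made concrete with a toy ranking function. This is a minimal sketch in Python, not any platform’s actual algorithm (real systems weigh thousands of proprietary signals); the article titles and topic labels are invented for illustration:

```python
from collections import Counter

def personalized_ranking(click_history, articles):
    """Rank articles by how much their topics overlap with what the
    user has clicked before. A deliberately simplified stand-in for
    the platforms' proprietary personalization models."""
    # Tally the topics of everything the user has clicked on.
    interests = Counter()
    for topics in click_history:
        interests.update(topics)

    # Score each article by its overlap with those tallied interests;
    # Counter returns 0 for topics the user has never clicked.
    def score(article):
        return sum(interests[t] for t in article["topics"])

    return sorted(articles, key=score, reverse=True)

# Hypothetical article pool during an oil-spill news cycle.
articles = [
    {"title": "BP oil spill update", "topics": ["environment", "news"]},
    {"title": "BP stock outlook",    "topics": ["investing", "markets"]},
]

# Two users, two click histories (each entry is one click's topics).
activist = [["environment"], ["news"], ["environment"]]
investor = [["markets"], ["investing"]]

print(personalized_ranking(activist, articles)[0]["title"])  # BP oil spill update
print(personalized_ranking(investor, articles)[0]["title"])  # BP stock outlook
```

Even this crude sketch reproduces the effect in miniature: two users searching the same pool of articles receive opposite front pages, determined entirely by their past behavior.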


In a New York Times article, Jeffrey Rosen of George Washington University Law School investigated what ‘personalized’ news means for democracy. After clearing the cookies from two of his Internet browsers, Safari and Firefox, Rosen created a ‘Democratic Jeff’ and a ‘Republican Jeff.’ Within two days, the two browsers, with their different ‘identities,’ began returning search results that varied based on platform predictions of partisan interests. Similarly, Eli Pariser, in The Filter Bubble, ran an experiment with two left-leaning female colleagues from the Northeast. At the height of the 2010 Deepwater Horizon oil spill, Pariser asked both colleagues to run searches for ‘BP.’ The first pages of search results differed markedly: one woman’s results returned news of the oil spill, while the other’s returned only investment information about British Petroleum. For the latter of the two, a quick skim of the front-page search results would not have confirmed the existence of an ongoing environmental crisis. Google’s predictive, personalized algorithms delivered fundamentally disparate news results.


While in the past traditional news middlemen decided which news the populace would read, today our search engine and social media platforms’ enigmatic ‘personalization’ algorithms decide. As Tim Wu of Columbia Law School aptly stated, “The rise of networking did not eliminate intermediaries, but rather changed who they are.”


Robust civil democratic dialogue requires an informed populace with access to information and opposing viewpoints. Personalization algorithms will make this exceedingly difficult. The abstruse and publicly unavailable nature of search engine algorithms may actually be more democratically dubious than our former news middlemen. The road down the primrose path may seem lined with roses; however, as Shakespeare reminds us in Hamlet, it ends in calamity.

The Breaking Bad of Predictions: Learning from Failed Forecasts

by Mark Ridinger


Predicting the long-range and far-reaching effects of disruptive, emerging technologies is the focus of many organizations, including CReST. Different phases can be identified that run the gamut from improving efficiencies within an established industry, to disrupting that industry entirely, to creating entirely new, previously unimagined industries. The PC and its word processing “killer app,” for example, at first augmented the efficiency of secretaries, but ultimately ended that profession altogether, shifting the writing of documents and memos to the executive or manager. The World Wide Web has been even more disruptive. Ripe for the Internet “reaper,” for example, has been the middleman: cut him or her out, and both customer and seller save time and money. Case in point: the travel agent, driven to near extinction by Expedia et al. Successful and accurate prognostications are exciting, but studying, and learning from, examples of projected Internet-produced chaos and disruption that failed to materialize is equally invaluable.


Case in point: the real estate agent. By all accounts, this middleman industry should have been essentially eliminated by the Web, and it was widely predicted to be by savvy investors and pundits alike. A 6% commission on a very large sum of money (indeed, the largest purchase or investment most folks will ever make) is a lot of money to part with. Add to that the bursting of the real estate bubble in 2006, the financial crisis and ensuing Great Recession of 2008-09, and the substantial venture capital backing of startups trying to take over this huge industry, and the demise of the broker seemed all but inevitable. With so many aligned financial incentives, it looked like a slam-dunk prediction. It was a perfect storm.


But it wasn’t. In fact, real estate agents are thriving. Bloomberg reports that only 9% of homes were sold without a broker in 2012, down from 13% in 2008.


So what happened? What went wrong? And how did so many get it wrong? At CReST, one of the books we are reading is Radical Evolution, by Joel Garreau. In it, the author addresses this point at a high level, listing several categories into which bad or failed technological predictions fall: underestimating complexity, an inadequate cost/benefit ratio, the emergence of an even newer or more disruptive technology, prior bad experiences with similar technology, and a fundamental misunderstanding of human behavior. The last category explains why the predicted demise of the real estate agent went wrong.


The successful Internet real estate startups, now substantial companies, recognized this: people wanted their hand held, and they were willing to pay for it. Far from displacing real estate agents, Zillow, Trulia, and Realtor.com (the main players) have become essentially advertising companies for brokers. Many of the things brokers had to do for clients (show pictures, comparable sales, and neighborhood and school information, to name a few) are done by these Internet firms for free. What is left, primarily title searches and legal closing documents (which admittedly require expertise), could be outsourced to an attorney for a fraction of what a home seller pays in commissions. Yet that hasn’t occurred. Redfin, the startup that set out to eliminate brokers and their commissions, was at death’s door as a company for years, but it has finally switched its model as well.


Technology does not advance merely for the sake of technology, nor change for the sake of change. The missing link is often the human element. In Social Physics, another book we are reading, the author, Big Data guru Sandy Pentland, contends that “people prefer trusted and personalized relationships,” likely an evolutionary remnant, and one that remains very much intact even in the era of social media. How Big Data and the Internet might be used to exploit those relationships is one focus of the emerging field of cognitive security.


We need to look at—and learn from—failed predictions as much as successful ones, in order to improve our ability to make successful science and technology forecasts, and ultimately policy recommendations. Often, when we get things wrong, it is because we fail to accurately account for human behavior and desires. In short, we need to understand ourselves better to become better forecasters.



How Technology Changes Everything

by Jen Buss

Technology is constantly improving our lives. Each year dozens of new technologies and products are invented that change the way we live day to day: new apps, new sensors to measure our biofeedback, new safety features for our cars, and so on. Occasionally, a technology is developed that is not just evolutionary but revolutionary and drastically changes society.

The impact that technology has on society occurs in roughly three phases. Initially, technology helps us do things better: the first impact of most technologies is to make existing processes work faster or better. These are what we define as phase I impacts. Later, a new technology often inspires entirely new processes that would not be possible without it. We call these phase II impacts. Even later, new technologies begin to change entire systems, industries, or even governments: whole new platforms develop, culture and society shift, or the market does things that are completely unexpected. We call these phase III impacts.

There are several examples of phase I, II, and III impacts from the past few decades.

Consider the multi-phase impacts of the computer on business operations and processes. During phase I, computer word processors made the existing process of business communication, secretaries and typing pools, faster. During phase II, business communication shifted from memos to emails, removing the need for secretaries and typing pools and creating a new set of business communication processes. During phase III, businesses and industries restructured and reorganized. Two decades ago, one needed a very expensive international infrastructure to market and sell globally. Today, anyone with a computer and a FedEx, Amazon, or Google account can market and sell from one’s living room.

Another example of multi-phase impacts can be seen in the printing industry. The phase I impact is that most, maybe all, former print media (newspapers, magazines, etc.) are now delivered and mostly read online: faster delivery of formal media via electronic communication. The phase II impact is that formal publishers are now a struggling business; many authors self-publish today through blogs, online journals, social media, and the like. The emergence of new processes is the hallmark of phase II impacts. Phase III impacts entail a large restructuring of the way society obtains its news and entertainment. The major newspapers, TV networks, and publishers are no longer the primary source of information for many people. A growing majority receives its news and information directly from those participating in the news, via video, tweets, blogs, and real-time connectivity.

As technology continues to impact, influence, and change our society at an ever-increasing rate, we should expect its effects to be seen and felt, from phase I through phase III, in ever more significant and interesting ways.