Human Directed Transcendence: A New Heaven or Hell on Earth?

By Mark Ridinger

 

For the world’s more than 2 billion Christians, Easter Sunday represents Jesus’ resurrection and ascension into heaven, which paved the path for mankind to achieve everlasting life. It is perhaps no coincidence, then, that the producers of the movie Transcendence chose Easter weekend for the release of their new film. In it, Johnny Depp plays a brilliant, dying computer scientist who, in a quixotic effort, uploads his mind into a supercomputer, attempting to achieve a version of what is commonly called the Singularity. Transcendence, but not of the divine variety. From there on, things get pretty interesting for humankind.

 

The concept of the Singularity, or artificial intelligence so great that it surpasses human intelligence and understanding, has been discussed for decades; it was first raised by the mathematician John von Neumann in the 1950s. Since then, two diametrically opposed views have emerged: the heaven and hell scenarios. One of the biggest cheerleaders for the heaven case is inventor and futurist Ray Kurzweil (now working for Google). The heaven scenario postulates that the Singularity will bring unfathomable gifts to humankind, not only cures for disease, alleviation of hunger, and limitless energy, but immortality as well, as we will ultimately be able to ditch our fragile, mortal biological host for a durable and everlasting silicon model. But before these wonders are bestowed on us, Kurzweil also predicts the rise of a populist anti-technology movement, which he has labeled the New Luddites, as its members would be the descendants of the movement that protested increasing machine automation in 19th-century industrial-age England.

 

But it is hard to call Bill Joy, cofounder of Sun Microsystems, a Luddite. Yet he is one of the main proponents of the hell scenario, which argues, in short, that the acute, exponential technological explosion that is part of the Singularity would pose a real existential threat to humanity, as it would give unfathomable power to potentially anyone. He thinks it is entirely possible that this all leads to the extinction of the human race, or worse, within 25 years.

 

It is true that technology always produces dislocations and disruptions. The power loom was the focus of the original Luddites, but the “horseless carriage”, the airplane, and the Internet, to name but a few, provoked similar resistance. And for the most part, people adapted. It’s dangerous to say it’s “different this time”; that almost always proves to be wrong. But is the exponential change in technology—one leading to the Singularity or not—finally poised to overwhelm the glacial pace of evolutionary change, and those that arose from it, namely humans? Do we have the wisdom and sagacity to handle such a “transcension”, and even if we do, do we really want to leave behind our humanity as we have always known it? And are those opposed to pursuing this, now or in the future, merely technophobic New Luddites?

 

On March 27, an elderly but healthy woman known to the public only as Anne from Sussex availed herself of medically assisted suicide after traveling from her home in Britain to Switzerland. Although she was 89, it is hard to dismiss her as some Luddite. She was a former Royal Navy engineer, and described her life as “full, with so many adventures and tremendous independence.” Yet she lamented that technology had so taken the “humanity out of human interactions” that we were becoming “robots” who sat in front of screens, and that it was now just too hard to adapt to the “modern world.” She had grown weary of “swimming against the current.”

 

“If you can’t join them”, she said, “get off.”

 

Hopefully, Anne from Sussex will turn out to be a rare and unfortunate victim of the existential ennui that rapid technological change can produce, and not a harbinger of things to come. But we might have to wait until the history books are written (if they are written) describing the aftermath of the Singularity—should it occur—to find out whether she was an outlier or a human canary in the coal mine. There is more to take away from this than just a case study of severe “technophobia”. Namely, what is mankind’s role in shaping our own destiny? If given the tools to direct our evolution, to merge with an AI in the Singularity, for example, will we do it? Should we do it? And will it even be possible to opt out (short of suicide) if one doesn’t want to “evolve”?

 

It seems unlikely that we will be able to resist the seduction of achieving the Singularity, if and when it comes within reach. After all, we are told in Genesis 1 that God said to man: Be fruitful and multiply; fill the earth and subdue it. Is the Singularity the ultimate extension of that biblical passage, and is this “neogenesis” what we have been preordained to achieve?

 

Christians look to Easter as the promise of everlasting life given to us by Jesus dying for humanity’s sins, and with it, a chance to transcend the chains of our material bodies and take a seat next to God in heaven. It remains to be seen whether mankind’s quest to create and direct our own transcendence, and by so doing to become God-like, will end in a heaven or hell on earth.

 

 

A National Neurotechnology Initiative

By Jen Buss

A year ago, the President announced the BRAIN Initiative, described as “a bold new research effort to revolutionize our understanding of the human mind and uncover new ways to treat, prevent, and cure brain disorders like Alzheimer’s, schizophrenia, autism, epilepsy, and traumatic brain injury.” Yet these diseases affect less than 5% of the population.

Neuroscience and technology will affect our entire society, not just people with these diseases. Neuroscience will be able to

  • help veterans recover and find jobs,
  • help students excel in school and become the best and brightest in the world, keeping America ahead of other countries, and
  • spawn new industries that will create jobs and new economies.

In order to do this, we need to expand the current BRAIN Initiative into a National Neurotechnology Initiative (NNTI). We need an initiative that will benefit the public good and be of national interest. The NNTI will be a national effort that affects the whole population, not just a fraction of it. We need to do something the public can believe in, be proud of, and see results from.  Neurotechnology is going to revolutionize the world and have profound effects on the way members of society interact with one another and societies interact with each other.

The government can guide these changes rather than sit back and watch them happen until it is too late to steer our society. Now is the time to act to create the National Neurotechnology Initiative. This Initiative should

  • focus federal investment in key research areas,
  • follow an investment roadmap, and
  • coordinate these investment efforts through a National Neuroscience and Technology Coordination Office.

Through these three tasks, the government can succeed in expanding the BRAIN Initiative.  The National Neurotechnology Initiative is the only solution for the future of neuroscience in our society.

 

 

The Declining Creativity of America’s Students

 We need “CE” as much as “PE” in our schools.

By Mark Ridinger

 

When discussing the state of education in America, most talk today revolves around measuring intelligence and trying to improve standardized test performance. IQ tests (which attempt to measure convergent thinking) are frequently used to try to find our brightest students and place them in gifted programs. Intelligence is of course an important part of the equation, but what of creativity, of identifying and measuring divergent thinking and fostering its development? What of creative intelligence (CQ)? Our future problem solvers and innovators, be they entrepreneurs, inventors, authors, or researchers, will rely on creative intelligence, and identifying and fostering them early in their education is paramount for America’s future. Unfortunately, we are failing at that endeavor.

 

The paradigm of merely equating IQ with the skills needed for success is outdated. Current research shows that there is little correlation between intelligence and creativity, except at the lower end of the IQ scale. People can in fact be both highly intelligent and creative, but also intelligent and uncreative, and vice versa. But how do we identify CQ? Dr. E. Paul Torrance has been called the Father of Creativity for his work, which began in the 1960s. His standardized test, the Torrance Tests of Creative Thinking (TTCT), is considered the gold standard for measuring and assessing creative thinking, and can be administered at any educational level, from kindergarten through graduate work.

 

Several recent comprehensive reviews of Torrance’s data, spanning decades, have been published. The bottom line is that the TTCT not only identifies creative thinkers but is also a strong predictor of future lifetime creative accomplishment. In fact, Indiana University’s Jonathan Plucker determined that the correlation with lifetime creative accomplishment (e.g., inventions, patents, publications) was more than three times stronger for childhood creativity (as measured by the TTCT) than for childhood IQ. Having a validated instrument like the TTCT is so important because alternative means of identifying CQ don’t work as well. Expert opinion and teacher nominations have been used, but these methods are prone to errors and biases. For example, students who are already achieving, who have pleasant demeanors, or who have already ranked well on conventional IQ tests tend to be selected, while researchers have shown that highly creative students and divergent thinkers are typically shunned and are at risk of becoming estranged from teachers and other students. In fact, the odds of dropping out increase by as much as 50 percent when creative students are in the wrong school environment.

 

What else has the review of Torrance’s data shown? Unfortunately, that America seems to be in a CQ crisis. Kyung-Hee Kim, an assistant professor at William and Mary, analyzed 300,000 TTCT results and determined that creativity has been on the decline in the US since 1990. The age group showing the worst decline is kindergarten through sixth grade. The factors behind this decline aren’t known, but they may include a mix of uncreative play (escalating hours spent in front of the TV or videogame console, for example), changing parenting and family dynamics (research suggests a stable home environment that also values uniqueness is important), and an educational system that focuses too much on rote memorization, standardized curricula, and national standardized testing. Are we stifling divergent thinking in our children for conformity of behavior?

 

The rest of the world seems to have woken up to the need to foster creativity in the educational process, and initiatives to make the development of creative thinking a national priority are ongoing in England, the EU, and even China. The United States needs a similar national initiative if we hope to stay competitive on the world stage. What is needed is a new approach to learning that still has children mastering necessary skills and knowledge, but through a creative pedagogical approach. We know that creativity can be measured, managed, and fostered; there is no excuse not to implement such a strategy in our school system. Let’s see the creation and deployment of creative exercise classes for our students and the use of creativity tests as additional inclusion criteria for gifted programs. Surely “CE” is at least every bit as important as “PE”.

 

 

A Call for Proactive Protection of Privacy Rights

By Ewelina Czapla

 

Although we currently fear that our phone or online data is outside our control and subject to searches by both private industry and the government, much more will be at stake in the future: our thoughts and ideas. In recent years our privacy has been constantly challenged by the development of ever more invasive technologies. While there have been calls for an explicit right to privacy, the addition of such a right to our Constitution may not suffice to protect us in the years to come.

 

Currently, we produce data by using our phones, computers, and tablets. This data can be so personal as to include the thoughts we formalize in text. But recent developments in the field of fMRI suggest that we are approaching the ability to read the human mind with real accuracy. Looking even further into the future, it is likely that we will be able to digitally interface with the human brain, making the next-generation smartphone an implant. At that point, not only will the data we choose to formalize in text be subject to access, but so will our very thoughts and ideas.

 

We find ourselves without an explicit right to privacy due to the high rate of technological development and the slow rate of legal development. Our legal system has managed to respond to the Phase 1 impacts of digital communication. However, it is still struggling, after decades, to address the Phase 2 impacts, while the Phase 3 impacts are on the horizon. Unless action is taken now, we will find ourselves wary not only of our lack of privacy but also of our lack of cognitive liberty. For this reason, we must look beyond simply the right to privacy and call for cognitive liberty, including the right to cognitive enhancement, ownership of personal data including thoughts, and protections for our thoughts similar to those afforded to the spoken word. Only when such changes are made will we be afforded adequate civil liberties to function in the modern world.

Citizenship Without Borders

by Ewelina Czapla

 

The impact of the Internet on our ability to communicate and govern can be described through a two-phase approach: phase one impacts are technological changes that accelerate an existing process, while phase two impacts are technological changes that profoundly alter the way society functions.

 

The spread of Internet connectivity in past decades has greatly increased the ability to maintain personal communications as well as to grow commerce, a phase one impact. The rise of the Internet has made it possible to interact regularly with individuals thousands of miles away and to conduct business without ever physically meeting your customer. Your product may be produced in one country by a factory you contracted with, stored in a facility you rented elsewhere, and shipped by an international carrier to your customers in yet a third country; location is no longer a limitation. With the creation of Bitcoin, it has become possible to conduct digital trade in which traditional banks and state regulations are a moot point, sustaining a digital economy.

 

Governance in the future may be conducted outside geographic boundaries within a digital realm where individuals are offered citizenship, a common currency and the ability to conduct business. Global movements may be spawned by an active leader with internet access who is not affiliated with any traditional government construct. We may well see the concept of an online nation arise as groups composed of geographically disparate people come together with common goals and a common currency, a phase two impact of Internet connectivity.

 

Traditional geographically bound states, which created the infrastructure for digital states to arise, must now consider the impact of these disparate populations coming together. This will leave many questions to be answered regarding the legitimacy and role of an online nation.

The Primrose Path: Countering the Myth of Internet Democratization

by Jennifer McArdle 

 

The belief that Internet and social media may be leading to greater democratization is a myth. In reality, the Internet may surprisingly be leading us down the primrose path.

 

At first this statement may seem counterintuitive: platforms like Twitter have given ‘netizens’ the ability to report news and ideas firsthand. As Tracy Westen of the Center for Governmental Studies notes, the Internet’s ability to give individuals a personal vocal platform encourages broader democratic discussion: candidate to candidate, voter to candidate, and voter to voter. Jon Pareles calls this process disintermediation—the removal of the middleman (or the traditional news outlet) from the news. Emerging news sites, such as NewsPad, aim to benefit from this ‘disintermediation’ process, crowdsourcing the news by empowering local communities to write articles collaboratively. Andrés Monroy-Hernández, one of the creators of NewsPad, noted that the goal was to produce news that is “for the people, by the people”—a clear democratic reference to Lincoln’s famous Gettysburg Address. So why, given this ‘disintermediation’ process, is the belief that the Internet may be leading to greater democratization false?

 

While disintermediation has led to the removal of the traditional news middleman, a more problematic invisible middleman has emerged in the form of our social media and search engine giants. 

 

The convergence of big data and behavioral science (i.e., cognitive security) has allowed search engines to ‘personalize’ news. Combining each person’s digital footprint—their clicks, downloads, purchases, ‘likes’, and posts—with psychology and neuroscience allows search engines and social media platforms like Google and Facebook to predict interests. The result has been individualized, tailor-made news updates.
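The mechanics can be illustrated with a toy sketch. This is not any platform’s actual algorithm—real systems use vast feature sets and machine-learned models—but a minimal, hypothetical version of the idea: build an interest profile from a user’s click history, then rank the same pool of articles differently for different users. All names and data below are invented for illustration.

```python
from collections import Counter

def build_profile(click_history):
    """Build a simple interest profile: topic -> number of clicks."""
    return Counter(topic for article in click_history for topic in article["topics"])

def personalize(articles, profile, k=3):
    """Rank articles by overlap between their topics and the user's profile.

    An article's score is the sum of the user's click counts over its topics,
    so stories matching frequently clicked topics rise to the top.
    """
    def score(article):
        return sum(profile.get(t, 0) for t in article["topics"])
    return sorted(articles, key=score, reverse=True)[:k]

# Two hypothetical users see different front pages for the same candidate stories.
articles = [
    {"title": "BP oil spill update", "topics": ["environment", "energy"]},
    {"title": "BP quarterly earnings", "topics": ["investing", "energy"]},
    {"title": "Election preview", "topics": ["politics"]},
]
env_reader = build_profile([{"topics": ["environment"]}] * 5)
investor = build_profile([{"topics": ["investing"]}] * 5)

print(personalize(articles, env_reader, k=1)[0]["title"])  # BP oil spill update
print(personalize(articles, investor, k=1)[0]["title"])    # BP quarterly earnings
```

Even this crude scoring reproduces the filter-bubble effect described below: two users searching the same term are shown disjoint slices of the news, with neither told what was filtered out.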

 

In a New York Times article, Jeffrey Rosen of George Washington University Law School investigated what ‘personalized’ news meant for democracy. After clearing the cookies from two of his Internet browsers, Safari and Firefox, Rosen created a ‘Democratic Jeff’ and a ‘Republican Jeff.’ Within two days, the two browsers with their different ‘identities’ began returning search results that varied based on the platforms’ predictions of partisan interests. Similarly, Eli Pariser in The Filter Bubble ran an experiment with two left-leaning female colleagues from the Northeast. At the height of the 2010 Deepwater Horizon oil spill, Pariser asked both colleagues to run searches for ‘BP.’ The first pages of search results differed markedly; one woman’s results returned news of the oil spill, while the other’s returned only investment information on British Petroleum. For the latter of the two, a quick skimming of the front-page search results would not have confirmed the existence of an ongoing environmental crisis. Google’s predictive, personalized algorithms delivered fundamentally disparate news results.

 

While in the past traditional news middlemen decided which news the populace would read, today our search engines’ and social media platforms’ enigmatic ‘personalization’ algorithms decide. As Tim Wu of Columbia Law School aptly stated, “The rise of networking did not eliminate intermediaries, but rather changed who they are.”

 

Robust civil democratic dialogue requires an informed populace with access to information and opposing viewpoints. Personalization algorithms will make this exceedingly difficult. The abstruse and publicly unavailable nature of search engine algorithms may actually be more democratically dubious than our former news middlemen. The road down the primrose path may seem lined with roses; as Shakespeare rightly notes in Hamlet, however, it ends in calamity.

The Breaking Bad of Predictions: Learning from Failed Forecasts

by Mark Ridinger

 

Predicting the long-range and far-reaching effects of disruptive, emerging technologies is the focus of many organizations, including CReST. Different phases can be identified that run the gamut from improving efficiencies within an established industry to disrupting that industry entirely, or creating entirely new, previously unimagined industries. The PC and its word-processing “killer app” at first augmented the efficiency of secretaries, for example, but ultimately ended that profession altogether, shifting the writing of documents and memos to the executive or manager. The World Wide Web has been even more disruptive. Ripe for the Internet “reaper” has been “the middleman”: cut him or her out, and save the customer and seller time and money. Case in point: the travel agent, driven to near extinction by Expedia et al. Successful and accurate prognostications are exciting, but studying—and learning from—examples of predicted Internet-produced chaos and disruption that failed to materialize is equally valuable.

 

Another case in point: the real estate agent. By all accounts, this middleman industry should have been essentially eliminated by the Web, and savvy investors and pundits alike widely predicted it would be. A 6% commission on a very large sum of money—indeed, the largest purchase or investment most folks will ever make—is a lot of money to part with. Add to that the bursting of the real estate bubble in 2006, the financial crisis and ensuing Great Recession of 2008-9, and substantial venture capital backing of startups trying to take over this huge industry, and the demise of the broker seemed all but inevitable. With so many aligned financial incentives, it seemed to be a slam-dunk prediction. It was a perfect storm.

 

But it wasn’t. In fact, real estate agents are thriving. Bloomberg reports that only 9% of homes were sold without a broker in 2012, down from 13% in 2008.

 

So what happened? What went wrong? And how did so many get it wrong? At CReST, one of the books we are reading is Radical Evolution, by Joel Garreau. In it, the author addresses this point at a high level, listing several categories into which bad or failed technological predictions fall: underestimating complexity, an inadequate cost/benefit ratio, the emergence of an even newer and more disruptive technology, prior bad experiences with similar technology, and a fundamental misunderstanding of human behavior. The last category explains why the prediction of the real estate agent’s demise went wrong.

 

The successful Internet real estate startups—now substantial companies—recognized this: people wanted their hand held, and were willing to pay for it. Far from displacing real estate agents, Zillow, Trulia, and Realtor.com (the main players) have become essentially advertising companies for brokers. Many of the things brokers had to do for clients—show pictures, comparable sales, and neighborhood and school information, to name a few—are done by these Internet firms for free. What is left, primarily title searches and legal closing documents (which admittedly require expertise), could be outsourced to an attorney for a fraction of what a home seller pays in commissions. Yet that hasn’t occurred. Redfin, the startup that set out to eliminate brokers and their commissions, was on death’s door as a company for years, but has finally switched its model as well.

 

Technology does not advance merely for the sake of technology, nor change for the sake of change. The missing link is often the human element. In Social Physics, another book we are reading, the author, Big Data guru Sandy Pentland, contends that “people prefer trusted and personalized relationships,” likely an evolutionary remnant, and one that remains very much intact even in the era of social media. How Big Data and the Internet might be used to exploit those relationships is in part one focus of the emerging field of Cognitive Security.

 

We need to look at—and learn from—failed predictions as much as successful ones, in order to improve our ability to make successful science and technology forecasts, and ultimately policy recommendations. Often, when we get things wrong, it is because we fail to accurately account for human behavior and desires. In short, we need to understand ourselves better to become better forecasters.