By Paul Syers
Yesterday was National Data Privacy Day. With that in mind, I’m curious what data privacy will mean in a world of enhanced neurotechnologies. Let me paint a picture for you. Imagine we have the ability to read people’s memories. Someone is suspected of a murder and police bring that person in for questioning. There’s only circumstantial evidence against this suspect, but enough to give cause for a warrant to search their house. Would this give the police the right to search the suspect’s memories?
There are lots of implications from this one question. For starters, it could greatly streamline the interrogation and trial process. Just bring a bunch of people in and scan their brains. What need is there for a jury, if you have the evidence of the person’s own memories? However, reading someone’s memories is also an invasion of privacy on a whole new level, and I’m not sure the ends (a direct route to discovering the truth of events) justify the means. After all, we currently give criminal suspects some protection against being forced to divulge information; we have the Fourth and Fifth Amendments.
Now let’s change the picture slightly. Let’s say we can’t read memories directly from the brain, but that this suspect uses a memory chip that plugs into their brain and stores the raw information of their memories (audio, visual, sensory, etc.) for them. If you’d like, you can assume they were injured when they were younger in a way that impaired their brain’s ability to record memories, so the implant helps them with this disability. The crucial question: do the authorities have the right to look at what’s on that memory chip? It still feels like a huge invasion of privacy, but not as much as directly looking into someone’s brain.
Pretty soon, we will have neural enhancements that directly and continuously interface with people’s brains. This will probably begin with technologies designed to assist the disabled, but eventually it will spread to the mainstream. When that happens, the data within these technologies should be protected by privacy laws. The police cannot search your phone without a warrant; they shouldn’t be able to search an even more private piece of technology without one either.
Just as there are limits on free speech, there should be limits on our right to the privacy of our data. Police can search personal items such as cellphones and email records when they have a warrant to do so, and neural memory enhancement technology would fall under that category. Advancements in neurotechnology will change what data we have access to, but it should not change our right to privacy over that data.
Ninety-one percent of Americans feel they have lost control over how their personal information is collected and used by companies, according to a recent Pew Research Center survey. One of the United States’ most important values is the right to privacy, and in the Digital Age this right is being eroded. Our forefathers understood the need to protect the privacy of citizens and enshrined these beliefs in the Bill of Rights to address their absence in the U.S. Constitution. The First, Third, Fourth, Fifth, and Fourteenth Amendments all contain language implying that the privacy of U.S. citizens is protected. We define what privacy means by the rules and protections we create. The improvements in data collection and storage by companies such as Google and Facebook, combined with the emergence of Big Data analytics, have created new ways to circumvent old rules that were designed to protect what we value as private.
The amount of digital information being produced by citizens is increasing exponentially year after year. This information, our data, is being collected and stored by third-party data companies like Acxiom, which then apply Big Data analytics to reveal our digital identities. These companies collect data from our actions on the web, from the digital technologies we cannot live without, from information we routinely give away, and from our spending habits. We have allowed Big Data companies access to all of this information because we appear to get something in return (search engine services, e-mail, social media, coupons, etc.). We are not being compensated for the true value of our data, which companies have no problem using to learn intimate details about our lives: the kind of information we are only comfortable sharing with those closest to us, our private information.
Today is Data Privacy Day and I cannot think of a better time to draw attention to this issue so we can start a serious discussion about upgrading our privacy laws and rules. Sixty-eight percent of Americans apparently feel this way too and would support action to create stronger policies that will protect people’s privacy in the digital age. What more evidence do we need to realize it is time to create new policies that govern the flow of data in accordance with the values of society?
Let’s get with the times and upgrade our privacy laws and regulations.
The Internet has become a platform for societal intercourse: an information repository, communication tool, commercial space, and a location for self-brand promotion. Yet unlike in the past, information on societal intercourse is no longer ephemeral; the digital ones and zeros produced by these interactions are permanent, creating a digital fingerprint of each individual user in cyberspace. On their own, personalized bits of data are not particularly useful and appear to provide only relatively esoteric indicators about a particular individual. Big Data analytics, however, correlates flows of data and provides insights derived from behavioral science. The information generated about individuals allows corporations and government entities to predict and model human behavior.
Personal big data can be a societal boon, helping to facilitate healthier living, smarter cities, and greater web simplification through personalization. However, there is a darker underbelly to the accumulation of this information. Personal data (clicks, keystrokes, purchases, etc.) are being used to create hundreds of inaccessible consumer scores, ranking individuals on the basis of their perceived health risk, occupational merit, and potential propensity to commit fraud. Moreover, as recent leaks of celebrity photos illustrate, Internet privacy is no longer a guarantee. Information that is meant to remain in the private sphere is slowly leaking into the public sphere, challenging previously conceived notions of civil liberty. To curb the tide of cyber intrusions, an individual right to erase data must be enacted.
The European Court of Justice ruled in 2014 that citizens have the “right to be forgotten,” ruling in favor of citizens’ right to privacy. As today is Data Privacy Day, perhaps it is time for the U.S. to stand up and create its own variant of this law: a uniquely American law that grants American citizens the right to erase data, the right to ensure their privacy.
CReST Proposed Language:
“Any person has the right to erase personal data that they identify as a breach of their privacy. Data erasure may be requested from, and arbitrated by, the search engine that publishes the data online. If erasure is justified, then the search engine must erase any links to or copies of that personal data in a timely manner. The search engine is responsible for the removal of authorized third-party publications of said data.”
By Rebecca McCauley Rench
The ability of two life forms to communicate with each other is a defining characteristic in deciding whether those beings deserve rights, and we should re-examine our current stance on non-human rights. Human civilization has been struggling with how to define the rights of individuals in our societies for millennia. We can see the evolution of civil rights from a time when rights were decided by your gender, your land ownership, your age, and the color of your skin. In fact, to think that these are not still deciding factors in how someone is treated in the eyes of the law suggests a limited exposure to the variety of societies in the world. In the United States, we believe that fundamental civil rights are a defining feature of an advanced civilization and necessary for stability in our culture and government. We are still not perfect, but we are continually improving our system and finding ways to be more inclusive in whom we grant rights.
However, how will we define a person as we begin to push the boundaries of integrating technology into our physiology and control over our genetics? How will we adapt those rights for life forms that do not fit into our current picture of a human yet are sentient beings? What does it mean to be a sentient life form and are there current beings on our planet that deserve more rights than we currently grant?
It is impossible to define a sentient being by its genetics, as there is no single gene that makes one sentient, nor is it necessary to have a genome to be sentient. As we begin to manipulate our own genome, integrate non-biological components into our physiology, and explore the Universe, defining a person by genomic similarity to a baseline is unlikely to hold up, despite being very quantitative. The human race is full of genetic diversity and is not the same species it was 40,000 years ago. If one of our ancestors showed up today, would they have the same rights as all other humans on Earth, or would we treat them differently in the eyes of the law? I do not think we would want to treat them differently if we uphold the values that urge us to grant rights to individuals. We do not interact with someone based on their genomic similarity to ourselves, and such a standard would completely negate the possibility of extending rights to alien life forms, silicon-based intelligence, and new intelligent species emerging on our own planet. The fundamental reasons we grant rights to all persons in society apply to these non-humans as well.
Perhaps the more important defining characteristic of a sentient being deserving of rights is the ability to communicate with other sentient beings in society. For humanity, this has changed over time as we have moved from verbal language, to written words, and now to a plethora of media options. Soon we might even be able to communicate our thoughts directly through neural implants, giving us an even greater understanding of the ideas being shared. We would not deny a person their rights in a court of law because we couldn’t understand what they were saying. We would spend time acquiring an interpreter to ensure that they could understand us as well as we understand them. We will face similar issues when communicating with other sentient non-humans, and we should hold ourselves to the same standards of communication in those situations. This will become easier as we develop technologies that allow us to communicate directly with other species on our planet, such as neural implants that let you carry on a boring conversation with your house cat.

Currently, we find ourselves capable of communicating with other primates through sign language, and yet we do not provide them with the same rights as humans. Is this due to our inability to think outside the box about who deserves rights, or is it rooted in our group definition of what it means to be a person? If we want to embrace a society where rights are granted to all sentient beings, we should re-examine the interactions we have with other life forms sharing our planet today. This would allow us to set standards and gradations in rights that can be easily adapted for the not-too-distant future. We already grant gradations in rights to our children until they reach the age of majority, and these same guidelines can be used in determining the level of rights granted to varying levels of intelligence.
This is a question we will have to tackle in the not-too-distant future as we continue to evolve and adapt humanity to a rapidly changing technological environment.
By Charles Mueller
At the Center for Revolutionary & Scientific Thought (CReST) we spend a lot of our time contemplating the future and imagining the kinds of impacts (both good and bad) science & technology (S&T) can have on the world. We talk about S&T impacts in terms of three phases:
- Phase I is where S&T changes the way an existing process is implemented, making it more efficient.
- Phase II is where S&T leads to new processes that affect businesses, government, and society.
- Phase III is where S&T leads to entirely new paradigms, with new systems, industries and/or governments.
Last night, with the whole world watching the President’s last State of the Union (SOTU) address, I thought the President was finally going to say the things we at CReST have been saying. I thought he was finally going to call out to the world that it is time to imagine the kind of future created only by the Phase II and III impacts of revolutionary S&T. Instead, he described a future shaped only by Phase I impacts. A world of automation would change things, but a world where we co-exist with AI or communicate with each other through our thoughts would revolutionize them. I’m sorry, Mr. President, but the next moonshot is not a cure for cancer; it would be closer to a cure for all disease.
I wish the President had painted a future maximized by the Phase III impacts of revolutionary S&T. Maybe he didn’t paint that picture because he knows his audience. Maybe we have become so obsessed with “now” that we’ve started to forget to imagine tomorrow. Maybe we simply don’t know where to look anymore for information to help us imagine a future where anything is possible.
When we go to a restaurant, we ask to see the menu; our leaders need to distill the kinds of futures S&T can foster into a menu. People need to talk about this menu of the future, get excited about it, and elect leaders who will bring some of those items to the table. While I am glad our President told us to think about the future, I had hoped he would talk about a bigger future, a bolder, Phase III kind of future. There are many more items that should have been on his menu.
At CReST we will continue to do our best to tell the world about the important S&T, the technologies whose Phase III impacts will revolutionize the world. If we do it right, hopefully next year’s SOTU address will describe a future maximized by the benefits of revolutionary S&T and well prepared to deal with its potential misuses.
By Paul Syers
A hundred years ago, if police wanted to search someone’s private safe, they had to show cause to do it. If a judge decided they had justifiable reason, the police would obtain a warrant to search the safe. Society accepted that the warrant gave them the authority to do it. (Even if that person refused, someone could physically crack open the safe to reach the contents inside.) This basic kind of authority is at risk of being taken away.
People have always debated, and should continue to debate, when the government should and should not have the right to obtain information, electronic or otherwise. The protection from search and seizure provided in the Constitution is not absolute. Lawmakers and citizens have always acknowledged that there are cases when the government is justified in searching your stuff and taking it from you to use in due process. There is no reason this should not continue to be true in the digital realm.
The question now being asked is whether or not the government should even have the ability to get certain kinds of digital information by circumventing an encryption technology. A backdoor key gives the government that capability, and that’s why federal agencies are arguing for it. It’s not a question of privacy; it’s a question of capability. I think an authority should be able to quickly get crucial data when it has a justifiable need for it. We give the federal government the highest authority in the U.S. (the buck has to stop somewhere, as they say, and our system says it stops with the federal government), so it’s reasonable to me that it should have backdoor access to encryption. Otherwise we’re looking at an instance where private companies do not have to submit to the authority of the federal government, even under reasonable circumstances, which is not a good precedent to set.
This position makes even more sense after learning that the Paris attacks were planned using communications encrypted in a way that authorities could not quickly access.
Once authorities have the ability, the responsibility is on us to ensure they use it justly (which would be only in rare cases) and to punish them if they don’t. It’s not as if the government will immediately begin looking at all encrypted data; it has neither the resources nor the desire.
It’s not as if the old rules no longer apply. If authorities want backdoor access to information, they will need to go through the proper channels and show just cause. The ability granted by this tool should be balanced with the weight of responsibility. Let’s not tie our hands behind our backs from the start by taking away the tool altogether.
By Rebecca McCauley Rench
The Borg Queen in Star Trek implicitly understands all of the Borg she is connected to and has acquired the cumulative knowledge of all assimilated alien races. Yet until assimilation, the Borg are incapable of understanding the motivations and emotions of the space farers they encounter. The holiday season, spent with family and friends we see only occasionally, makes many of us feel like a Borg: incapable of communicating our thoughts with family and bewildered at the ideas our loved ones share. It can be difficult to communicate an issue with people speaking from a different reference point. We all want to understand the thoughts and feelings of loved ones, but actually putting oneself in their shoes can be an unachievable challenge. What if we could get assistance communicating with each other through neural technologies that helped us understand the concepts our loved ones are trying to share? Such technologies could provide commentary or tell stories in a new way, so that we understand them in terms relevant to our personal reference frame. Not only would these technologies help us communicate more effectively with those around us; they would also make learning more effective, as new concepts are explained in a way that fits our frame of mind.
The impacts do not stop with those able to communicate verbally, either. The Borg could communicate through their neural network. What if we could apply those same principles to interacting with our children? New parents often discuss the difficulty of knowing what a baby wants and thinks before the child has learned to speak. Most parents would agree that the first two years of a child’s life are difficult and normally passed in a haze from lack of sleep. The benefits of such communication technologies in child rearing are clear: what if communication between you and your child allowed you to understand why they were upset, or what they didn’t like about particular foods and toys? Would child rearing become easier? Perhaps parents will worry about the impact on the “natural” progression of cognitive development. We will have to wait for the data to come in to see how these technologies affect us, but with data in hand, sign me up for a cyborg baby.