The Smart Grid Needs to be a Safe Grid

By T.J. Kasperbauer

Imagine you wake up one morning to discover that your entire city has lost power. What would you guess is the most likely cause? A tornado? Equipment malfunction? Terrorist attack?

Increasingly, America’s energy grid is under threat from cyberattacks. This is not a new problem, but so far the solutions have been inadequate. In order to improve our energy grid, we must build cybersecurity into its main functions.

One way the U.S. is currently trying to combat cyberattacks is through development of the Smart Grid. Under the Smart Grid, energy production and distribution are decentralized. Decentralization creates redundancies that help prevent a single attack from taking down the whole grid. Devices on the Smart Grid are also in constant communication, which enhances detection of attacks and outages.
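
To see why that constant communication matters, consider a minimal sketch of how an operator might flag trouble across thousands of reporting meters. The meter IDs, readings, and thresholds below are invented for illustration; no real utility works this simply:

```python
from statistics import median

# Hypothetical telemetry: meter_id -> latest reported load in kW.
# A meter that has stopped communicating shows None.
readings = {"m1": 4.8, "m2": 5.1, "m3": None, "m4": 47.0, "m5": 5.0}

def flag_anomalies(readings, ratio=3.0):
    """Flag silent meters (possible outage) and loads far above the
    median (possible fault or tampering). A real utility would use far
    richer models; this only shows why constant telemetry helps."""
    live = {m: v for m, v in readings.items() if v is not None}
    silent = [m for m, v in readings.items() if v is None]
    typical = median(live.values())
    outliers = [m for m, v in live.items() if v > ratio * typical]
    return silent, outliers

silent, outliers = flag_anomalies(readings)
print("possible outage:", silent)    # ['m3']
print("anomalous load:", outliers)   # ['m4']
```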

The main problem with the Smart Grid is that its interconnectedness produces vulnerabilities. By putting all devices in two-way communication with each other, the Smart Grid increases the number of possible entry points for attacks. Moreover, the Smart Grid connects the energy grid to many other “grids.” For instance, household electricity usage can be monitored over the internet. Foreign or domestic adversaries, including lone-wolf hackers, could exploit this sort of connectivity to compromise the Smart Grid.

Some attempts have been made to address this problem. For instance, DARPA is developing automated cyber-defense systems for power grids. And the Department of Energy routinely funds projects aimed at testing and improving the cybersecurity of the energy grid ($34 million in August 2016). There are also published guidelines for protecting energy cybersecurity (issued in 2010 and 2015). These efforts are all important and should continue, but they must be better integrated into the Smart Grid as it develops.

In order to preserve the benefits of the Smart Grid, we must build security alongside connectivity. That requires anticipating future problems so that security can be designed into grid functions from the start.

The Right to Erase Data

The Internet has become a platform for societal intercourse: an information repository, a communication tool, a commercial space, and a venue for personal branding. Yet unlike in the past, the record of that intercourse is no longer ephemeral; the digital ones and zeros produced by these interactions are permanent, creating a digital fingerprint of each individual user in cyberspace. On their own, personalized bits of data are not particularly useful and offer only esoteric indicators about a particular individual. Big data analytics, however, correlates flows of data and provides insights derived from behavioral science. The information generated about individuals allows corporations and government entities to predict and model human behavior.

Personal big data can be a societal boon, helping to facilitate healthier living and smarter cities and to simplify the web through personalization. However, there is a darker underbelly to the accumulation of this information. Personal data (clicks, keystrokes, purchases, etc.) are being used to create hundreds of inaccessible consumer scores that rank individuals by perceived health risk, occupational merit, and supposed propensity to commit fraud. Moreover, as recent leaks of celebrity photos illustrate, Internet privacy is no longer a guarantee. Information meant to remain in the private sphere is steadily leaking into the public sphere, challenging previously held notions of civil liberty. To curb the tide of cyber intrusions, an individual right to erase data must be enacted.

The European Court of Justice ruled in 2014 that citizens have a “right to be forgotten,” finding in favor of citizens’ right to privacy. As today is Data Privacy Day, perhaps it is time for the US to create its own variant of this law: a uniquely American law that gives American citizens the right to erase data, and with it the right to ensure their privacy.

CReST Proposed Language: 

“Any person has the right to erase personal data that they identify as a breach of their privacy. A data-erasure request may be submitted to, and arbitrated by, the search engine that publishes the data online. If erasure is justified, then the search engine must erase any links to or copies of that personal data in a timely manner. The search engine is responsible for the removal of authorized third-party publications of said data.”
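
As a thought experiment, here is a minimal sketch of the workflow the proposed language seems to imply. The class names, statuses, and takedown step are hypothetical illustrations, not an existing system or an authoritative reading of the text:

```python
from dataclasses import dataclass, field

@dataclass
class ErasureRequest:
    requester: str
    url: str                 # the link the person wants removed
    justification: str
    status: str = "pending"  # pending -> approved or denied

@dataclass
class SearchEngine:
    index: set = field(default_factory=set)

    def arbitrate(self, req: ErasureRequest, is_privacy_breach: bool) -> str:
        """Per the proposed language, the engine arbitrates the claim;
        if erasure is justified, it must remove links and copies."""
        req.status = "approved" if is_privacy_breach else "denied"
        if req.status == "approved":
            self.index.discard(req.url)
            self.request_third_party_takedown(req.url)
        return req.status

    def request_third_party_takedown(self, url: str) -> None:
        # The proposal also makes the engine responsible for removing
        # authorized third-party publications of the same data.
        print(f"takedown requested for copies of {url}")

engine = SearchEngine(index={"example.com/old-photo"})
req = ErasureRequest("alice", "example.com/old-photo", "private image leaked")
print(engine.arbitrate(req, is_privacy_breach=True))  # approved
```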

Internet Providers Are Now ‘Common Carriers’: What Does That Mean For You?

Jennifer McArdle

On February 26, the Federal Communications Commission (FCC) voted 3 to 2 in favor of reclassifying broadband Internet service providers (ISPs) as ‘common carriers’ under Title II of the Communications Act. So what does that mean for you?

Well, it depends on who the ‘you’ is, because the answer differs based on whether you are a regular consumer of Internet content, a provider of Internet content, or an ISP. Essentially, though, this ruling is about net neutrality: the ability to access the Internet free of discrimination. So let’s break this down a bit.

Prior to the February 26 ruling, ISPs were regulated under Title I of the Communications Act and classified as “information services.” The FCC had also put in place specific anti-blocking and non-discrimination rules as part of its 2010 Open Internet Order. Basically, the FCC was attempting to ensure two things: first, that ISPs could not block or deny consumers access to an Internet site of their choosing, and second, that ISPs could not create a tiered access system in which higher-paying users access the Internet in a ‘fast lane’ while others are relegated to a ‘slow lane.’
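
To make the ‘fast lane’ idea concrete, here is a toy sketch contrasting neutral first-come-first-served delivery with paid prioritization. The packets, senders, and tiers are invented; real ISP traffic management is vastly more complex:

```python
# Toy model of packet delivery. Each packet: (arrival_order, sender, tier).
# Tier 1 = paid 'fast lane'; tier 2 = everyone else. Invented values.
packets = [
    (0, "small-site", 2),
    (1, "big-payer", 1),
    (2, "small-site", 2),
]

def neutral(packets):
    # Net neutrality: serve strictly in order of arrival.
    return [sender for _, sender, _ in sorted(packets)]

def paid_prioritization(packets):
    # Tiered access: the paid tier jumps the queue; arrival order second.
    return [sender for _, sender, _ in
            sorted(packets, key=lambda p: (p[2], p[0]))]

print(neutral(packets))              # ['small-site', 'big-payer', 'small-site']
print(paid_prioritization(packets))  # ['big-payer', 'small-site', 'small-site']
```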

Verizon, one of the largest ISPs, sued to challenge the FCC’s regulations designed to implement net neutrality. In January 2014, the US Court of Appeals for the DC Circuit upheld the FCC’s authority to use Section 706 of the Telecommunications Act to regulate the Internet; however, it struck down the specific anti-blocking and anti-discrimination measures, giving Verizon leeway to disregard the FCC’s rules. Five months later, the FCC issued a notice of proposed rulemaking on the Internet regulatory structure, which eventually received over 4 million public comments, the large majority of them in favor of net neutrality. This set the stage for the FCC’s ruling last month.

By defining ISPs as common carriers, the FCC is essentially treating broadband Internet service as a utility. Providers are mandated by the government to offer the same service to everyone without discrimination, much like electricity and gas services.

So, what does that mean for you?

You, the content consumer

By classifying ISPs as common carriers, the FCC has banned ‘paid prioritization’: there will be no fast lanes and slow lanes on the Internet. And this is a good thing for the average consumer. Last June, John Oliver noted on his show Last Week Tonight (which is well worth watching) that 96% of Americans have access to two or fewer cable companies. That means that even if your Internet service were being delayed or distorted, you might not have the option to switch to another provider.

Moreover, privacy advocates have noted that for ISPs to play bandwidth favorites, they need to monitor what you are doing online via deep packet inspection. While deep packet inspection is certainly important for protecting against viruses and malware, it can, under certain circumstances, lead to invasions of privacy. Defining ISPs as ‘common carriers’ helps guard against that.
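
The privacy concern is easier to see in a simplified sketch of the difference between ordinary header-based forwarding and deep packet inspection. The packet format and payload here are hypothetical:

```python
# A simplified, hypothetical packet: headers say where it goes;
# the payload is the actual content of the communication.
packet = {
    "src": "203.0.113.5",
    "dst": "198.51.100.7",
    "port": 80,
    "payload": "GET /medical-records/history HTTP/1.1",
}

def route_on_headers(pkt):
    # Ordinary forwarding: only addressing information is read.
    return f"forward to {pkt['dst']}:{pkt['port']}"

def deep_packet_inspection(pkt):
    # DPI opens the payload itself. That is useful for spotting malware
    # signatures, but it also reveals *what* you are doing online,
    # not just where your traffic is headed.
    looks_sensitive = "medical" in pkt["payload"]
    return f"payload inspected; sensitive content seen: {looks_sensitive}"

print(route_on_headers(packet))        # forward to 198.51.100.7:80
print(deep_packet_inspection(packet))  # payload inspected; sensitive content seen: True
```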

The FCC resolution is designed to ensure that you, the average content consumer, have access to Internet content free of discrimination, regardless of whether you consume high or low bandwidth, much like your access to other facilities deemed essential for public life, such as canals, rails, and the telephone.

You, the content producer

Last year, Netflix customers noticed far more buffering of Netflix content. Broadband providers insisted that Netflix users were consuming much of the available bandwidth, and some ISPs slowed the traffic down. Netflix reluctantly agreed to pay interconnection fees to broadband providers to ensure its customers could stream its videos. Netflix, not surprisingly, is for net neutrality.

However, it is not just giant content producers that stand to benefit from this ruling. President Obama has noted that ‘paid prioritization’ stacks the deck against small content-producing companies, which are unable to challenge the dominance of Internet giants such as Twitter, Facebook, and Netflix.

Defining ISPs as ‘common carriers’ ensures that content producers are not held hostage by ‘last mile’ Internet gatekeepers and that their content can reach consumers free from bias.

You, the ISP

Not surprisingly, this ruling was not welcomed by ISPs, which stood to make money from a tiered Internet access system. Moreover, opponents of net neutrality argue that if broadband ISPs cannot collect fees from companies that take up an outsized portion of bandwidth, they lose the incentive to invest in maintaining and upgrading their infrastructure. This may indeed be true.

The Future?

While the February ruling seemed to settle the debate on net neutrality, the debate may really be just beginning. The Title II ruling is not yet set in stone, and it is already being challenged in Congress and the courts. What this means for ‘you’ may fundamentally change in the months to come.

Access Granted

Kathy Goodson

If the most diligent and efficient way to provide public information to citizens is through the Internet, then Internet access should be free. Free access to information is not only a right; it is also an integral part of the government’s responsibility to create informed citizens. The government provides multiple quality-controlled ways to access public information, including, but not limited to, physical records at city hall, public hearings, and free public-records databases. However, as the government continues to digitize public information, a new approach to providing access will be necessary. Ensuring that all citizens have access to the Internet would preserve the public’s free access to the information found in places and documents such as libraries, public reports, and phone books.

Phone books and pay phones are examples of free access to public information. It is archaic that we still print phone books in such high volumes: in Chicago alone, 1.2 million were distributed this year. Supporters argue that more than 50% of Americans still use phone books, yet every unwanted phone book incurs a fine for the phone company. The other half of Americans have a better resource for public information in their pocket: the smartphone. Just like traditional pay phones (when is the last time you saw one of those?), phone books are slowly being phased out by their smartphone counterparts. The smartphone gives us the functionality of the pay phone and the information of the phone book in one small portable device. Yes, there are good arguments for keeping pay phones, including guaranteed access to phone service for those who cannot afford it. But there is an equally compelling argument for providing public information to everyone in the most responsible, effective, and productive manner possible. Free Internet access as a means of delivering public information would support that goal.

An educated public is vital to the existence of our democracy. Accordingly, the government has historically promoted literacy through institutions like the public library. However, traditional forms of free access to public information are no longer the most convenient or productive. Court records, marriage licenses, and phone books are all better used in electronic formats, and the shift toward electronic repositories is becoming the norm. Legal electronic documents with electronic signatures are considered just as valid as their handwritten counterparts. Security clearances are digital from the initial application through the fingerprinting process. Shouldn’t we extend such digitized models to all forms of public information? To do so successfully, however, Internet access has to be free everywhere, all the time. Achieving free Internet access in a quality-controlled manner will require government management, as with other forms of outsourced dissemination of public information.

Hence, the government should supply reliable, free Internet access to the masses everywhere. The hope is that public access to government information increases, or at minimum maintains, public literacy. Lack of access to government information prevents people from interacting with local, state, and federal government. The government is well aware of, and has even prepared for, America’s Digital Age. Last year, a panel from the National Academy of Public Administration issued a report for the Government Printing Office with 15 recommendations on how to better position the federal government in our digital society. While the recommendations call for retaining and safeguarding digital documents and continuing the mandate of free public access, the report does not address how that access will be provided to the masses. If there is a shifting tide in how the masses obtain government information, the government should be able to accommodate the need, and meeting that need requires competent, reliable access. Thus, Internet access should be free. This ensures that the free government repositories of public information remain truly free and available.

 

The Truth is Hard to Find in the Digital Age

Charles Mueller & Jennifer Buss

Do you trust everything you read on the Internet? No? OK. Do you scour the first couple of hits on Google until you find a source you believe is reputable? Reputation has never mattered more than it does today, because competition for the public’s attention has never been greater. In response to this demand, our information generation and delivery processes have become focused on being first to grab the public’s attention. In order to be first, many sacrifice the accuracy of the information they produce. And because modern technology spreads information at unprecedented rates, misinformation and inconsistent “facts” become mainstream common knowledge; the truth is becoming harder to find.

A recent example occurred when Rolling Stone reported on a woman who claimed she was gang-raped at a University of Virginia (UVA) fraternity party. The story reported only the perspective of the woman involved and did virtually nothing to corroborate her account. Rolling Stone has since come forward to explain that its original report no longer agrees with the facts that have emerged. This error forces the conversation away from the fact that UVA has a poor history of properly dealing with rape allegations. Rolling Stone should feel absolutely humiliated. Maintaining a good reputation and trustworthiness in journalism requires good detective work, but in this instance it looked like the author didn’t even try. If journalists can’t verify events with their sources, they aren’t doing their job (i.e., reporting the facts to the public).

This situation exposes a problem with our information delivery systems: the truth is sacrificed for personal gain. We’ve seen this problem in Internet reporting of current events; in the scientific literature on creating stem cells from skin cells; in medicine, with the claims that vaccines cause autism; and in the 2008 global financial crisis. What matters most now is no longer the truth; the most important objective is giving your audience what they want, because that is what they pay for. We are pressuring scientists to produce revolutionary results instead of encouraging them to think freely, and incentivizing journalists to entertain us rather than report the facts. What will be the long-term consequences of putting these pressures on the professionals who produce information in our society?

In the days when the newspaper reigned supreme, there was less disagreement in society about the facts surrounding an issue. In the digital age, where the accuracy of information is questionable and the availability of different perspectives is unprecedented, disagreement over the facts can only grow. While diversity of opinion is essential to a democratic society, too much of it, especially when spawned by misinformation, can only damage society. Opinions built on misinformation will only multiply as technology makes it easier to access and generate information. To counter this trend, we need to start demanding trustworthy sources, validating our news, changing the monetary incentives in publishing, and modifying the current system to focus less on the individual and more on the greater good. Applications like Checkdesk attempt to do this, but more is needed. It is time to take action, combat this reality head-on, and restore confidence in our information generation and delivery processes. The truth in the Digital Age is already hard enough to find.

 

The Internet House of Representatives

Brian Barnett

We should create an Internet House of Representatives, where your representative is chosen based on your political beliefs rather than based on where you live. A representative democracy is a good system because of the sheer size and complexity of our federal government. The men and women in Congress dedicate their time to synthesizing the advice of experts, the desires of constituents, and the influence of interest groups to make informed decisions and choices for our government. The average citizen does not have time to learn or deal with the intricacies of our bureaucratic systems. Yes, their voices should be as well represented as possible, especially in situations where a vote can easily determine a policy outcome, but they do not necessarily have the time and resources to make every political decision.

A representative democracy therefore makes sense when the general population is not interested in writing laws on issues about which it has little expertise. The Internet provides us all with the opportunity to become educated across many fields, but we do not (yet) have the technology that minimizes the inordinate amount of time this requires. When we think about the ways the Internet affects government, a logical application would be the creation of a direct democracy: everyone with an Internet connection could vote on all of our laws, and the simple majority would win. This scenario raises the above issue of whether people have the time and knowledge to accomplish this feat effectively. I would argue that a representative democracy still makes sense in the Digital Age, but that we can leverage the benefits of the Internet within this framework to improve how well the people we elect to Congress represent our interests.

Why are our representatives divided along state lines and districts? If I am a conservative voter living in San Francisco, or a liberal voter living in Oklahoma, my voice will be washed out by the opposite majority in my district. Am I really being represented if my representative votes in diametric opposition to my political beliefs? What about a moderate district where the voters are split 50/50 and my candidate just barely lost? Is my voice again stifled if the winner of the election is not in line with my political beliefs? Should I have to move to a district more in line with my beliefs? The average margin of victory for a House representative across the US is about 33 percentage points, which means the typical winner receives roughly two-thirds of the vote. In other words, about two-thirds of the population has a representative they voted for (regardless of how well this person actually represents them), but roughly a third of the nation has a representative who does not even come close to matching their political beliefs. The Internet allows me to communicate and become very close to other people around the country. Why can’t I form a voting bloc with similarly minded men and women in Seattle, Reno, Nashville, and Cleveland to have an impact on the federal legislative branch?

This Internet House of Representatives could be made up of 300 women and men (roughly one representative per million people), elected every two years, each representing a constituency spread out across the US. Their offices, lines of communication, reports, and bills could all be located on the Internet, so everyone could evaluate candidates and vote for the person who best represents their interests. The details of how you would vote for these representatives, and the reassignment of federal-state interactions in the Senate or elsewhere, are important as well, but they do not need to be hashed out here to make the point (though a toy sketch of one possible seat-allocation scheme appears below). In today’s world, where the Internet is the nervous system that connects us all, our national policies and laws should be written by representatives whose path to Congress is built on this nation-bridging technology.
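
Purely as an illustration, here is a minimal sketch of one way nationwide, belief-based seats could be allocated: proportional apportionment by declared platform using the largest-remainder method. The platforms, voter counts, seat total, and even the choice of method are assumptions made for the example, not part of the proposal itself:

```python
from collections import Counter

# Hypothetical voters tagged by declared platform rather than address.
voters = ["green", "libertarian", "green", "labor", "green", "libertarian"]

SEATS = 3  # scaled-down stand-in for the ~300 seats proposed above

def apportion(voters, seats):
    """Allocate seats to platforms in proportion to nationwide support
    (largest-remainder method), ignoring geography entirely."""
    counts = Counter(voters)
    total = len(voters)
    quotas = {p: counts[p] * seats / total for p in counts}
    won = {p: int(q) for p, q in quotas.items()}
    leftover = seats - sum(won.values())
    # Hand any remaining seats to the largest fractional remainders.
    for p in sorted(quotas, key=lambda q: quotas[q] - int(quotas[q]),
                    reverse=True)[:leftover]:
        won[p] += 1
    return won

print(apportion(voters, SEATS))  # {'green': 2, 'libertarian': 1, 'labor': 0}
```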

If you want to understand 21st Century ‘Electioneering’, look to Cicero

Jennifer McArdle

In the first century BC, Marcus Tullius Cicero ran for consul, the highest office in the Roman Republic. His younger brother, Quintus, sought to advise him on how to effectively ‘social engineer’ the electorate. In the Commentariolum Petitionis, Quintus directs Marcus to wage a campaign built on micro-targeting: delivering tailored campaign messages (which often contradicted one another) to different segments of the Roman populace in order to win their support. Quintus’ strategy delivered Marcus victory, demonstrating the power of tailored messaging.

The use of behavioral science and big data by campaigns to model voter behavior is adding new relevance to Cicero’s 2,000-year-old campaign strategy: micro-targeting is once again in vogue.

The 21st century has witnessed the emergence of ‘data-driven campaigns.’ Campaigns are combining big data with behavioral science and emergent computational methods to model individual voter behavior. By combining the information in public databases (party registration, voting history, political donations, vehicle registration, and real estate records) with commercial databases, campaigns have been able to target individuals effectively. This micro-targeting extends beyond identifying which voters to contact; it shapes the content of the message as well. Philip N. Howard, in his book New Media Campaigns and the Managed Citizen, notes that in the weeks before the 2000 presidential election, two middle-aged, conservative, female voters logged on to the same Republican website from different parts of the country. The first, a voter from Clemson, South Carolina, saw headlines about the Republican commitment to Second Amendment protections and the party’s pro-life stance. The second, based in Manhattan, was never shown those headlines. The website’s statistical model suggested that the former would respond positively to those headlines, while the latter likely supported some measure of gun control and a woman’s right to choose.
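
The mechanics Howard describes can be sketched in a few lines: a statistical model scores how each visitor segment is predicted to respond to each message, and the site shows only the messages that score well. The segments, headlines, and weights below are invented for illustration; real campaign models are far more elaborate:

```python
# Toy targeting model: a score approximates how positively a visitor
# segment is predicted to respond to a headline. All segments, headlines,
# and weights are invented for illustration.
SCORES = {
    ("rural_south", "gun_rights_headline"): 0.9,
    ("urban_northeast", "gun_rights_headline"): -0.6,
    ("rural_south", "tax_cut_headline"): 0.4,
    ("urban_northeast", "tax_cut_headline"): 0.5,
}

def choose_headlines(segment, threshold=0.0):
    """Show only the headlines the model predicts this segment will
    respond to; different visitors see different versions of the site."""
    return [headline for (seg, headline), score in SCORES.items()
            if seg == segment and score > threshold]

print(choose_headlines("rural_south"))      # both headlines
print(choose_headlines("urban_northeast"))  # only the tax headline
```
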
While micro-targeting in Rome arguably made the process more democratic (Marcus was not a member of the nobility and would typically have been eliminated from candidacy), today’s use of micro-targeting has the potential to erode democracy. These computational models allow parties to acquire information about voters without directly asking those voters a single question. With this information in hand, campaigns can opaquely micro-target individuals, selectively providing information that fits their partisan and campaign-issue bias while withholding platform positions that may not align with their interests. Essentially, campaigns are able to generate filter bubbles, which reinforce individual viewpoints while removing differing ideas and philosophies from view. Voters are not even aware that micro-targeting has occurred.

While it is unlikely that micro-targeting can be removed completely from politics, there may be a mechanism to preserve the integrity of the democratic process. Though difficult, given the opaque nature of micro-targeting, creating a ‘sunshine movement’ during campaigns, with non-partisan sites that lay out each candidate’s full platform, could help ensure that voters know each candidate’s true views. ‘Data-driven campaigns’ need not erode democracy; but should they remain as they are, they may do just that.

Our world, “BI” and “AI”

There is no question that everything has changed, is changing, or will change because of the Internet.

It is hard to imagine life without the constant connectivity available today. But there was a time, a bit more than 20 years ago, when communicating with someone in India meant either a discussion with an operator or mailing a written letter (remember those?) with lots of postage. I have started calling that era Before the Internet, or BI.

Today, you can dictate a text or email to an iPhone, push a virtual button, and have your thoughts delivered to the other side of the globe within seconds. (If you want to talk to someone in India, just call customer service at your favorite company!) Modern, instantaneous, ubiquitous communications have revolutionized international business. Companies have had to change entire organizational structures and approaches to accommodate this technology. Mostly, the technology has streamlined operations and greatly improved efficiency and products. It has also generated its own challenges. This new state of human affairs I call After the Internet, or AI.

So here is my current BI/AI list (please send me your additions):

BI: One carried paper maps.
AI: My phone, iPad, or car tells me when to turn, or to “make a legal u-turn.”

BI: “Guess that song” was a test of memory.
AI: Shazam is really accurate!

BI: Movie night at home involved VHS.
AI: Movie night involves Netflix.

BI: One read the Sunday papers for sales.
AI: One checks prices on the iPhone.

BI: Weather reports followed the news.
AI: Weather reports are pushed to users.

BI: Everyone on the beach had a paperback.
AI: Everyone carries a Kindle.

BI: If your pocket buzzed, you got strange looks.
AI: Everyone’s pocket or purse buzzes.

BI: Tweets were sounds made by birds.
AI: Tweets occupy too much of my life!

BI: Facebook was another name for the FBI book of mug shots.
AI: Facebook will soon go public for billions more than they make!

Please send yours in!

Science fiction and the reality we live in

In the 1960s and 1970s, the Star Trek TV series – and later, movies – portrayed a future where humans traveled faster than the speed of light, were transported from one place to another almost instantaneously, and carried handheld devices that allowed instant communication with anyone on a planet. The crew members of the Enterprise were also treated in a medical facility where the entire body could be imaged, and handheld devices read vital signs to the doctors.

Twenty-five years later, we still cannot travel faster than the speed of light (of course, we have almost totally abandoned space travel research), and we cannot transport ourselves from one place to another, although physicists have demonstrated the science behind this possibility in the laboratory.

We do, however, have something like the “tricorder” from the Star Trek series.  In fact, our version of it, the iPhone, is far more capable and smaller than the one in the old TV show.  We also have body imaging and medical diagnostic technology far more advanced than that envisioned twenty-five years ago.

I know: cool, but what’s the point?  There are two points to be made.

One, we take all of these new capabilities and gadgets for granted and act like it has always been this way. The Current Status Quo is, “like, normal,” man. Well, it isn’t normal. It is far different from what it was ten or twenty years ago: just ask someone over sixty! These great changes in technological capability are happening so rapidly now that several generations of user expertise coexist in our society at the same time.

Example: I know people who still prefer to have just a phone: yes, a cell phone, but without features. They say smartphones are just too confusing. They were really happy when cell phones came along and changed their lives, but they’re not ready for all the “bells and whistles.” Right next to them in the theater, trying to watch a movie while everyone else is buzzing or beeping, are BlackBerry users who swear the ultimate in connectivity is the email and phone service represented by the 1995 technology in their pockets. And of course, everyone else is using an iPhone or Android with the ability to watch TV or another movie while sitting with you in the theater. There are surely also people out there who are only comfortable with a landline touch-tone phone, but we don’t worry as much about them, because there is no way they are reading this blog!

The second point is that we really should be thinking, and thinking hard, about what will come next. Surely it will be cooler and better than what we have now. But it will also change our lives, our society, the way we do business, and more. Preparing for the next great technological advancement requires first a bit of vision to postulate what it will be, and then some thought as to what it will mean for us. This type of science and technology forecasting is not in our nature, nor is it often practiced. I would argue it is something worth doing.

The nation was greatly surprised in 1957 when the Soviets (remember them – before the Russians…or weren’t they Russian too?) put something into space before we could. It scared us half to death, and it spurred the ONLY commitment this country has ever had to space research and investment.

If we don’t take the time to think about what might come next and who might get there first, we are likely to be just as surprised again.