Think Big: Science and Technology Policy Priorities for the Next Administration

Kathryn Schiller Wurster
Nov 29, 2016

The priorities of the new Administration are to rebuild American infrastructure and reinvigorate the economy. Rather than return to the infrastructure and economy of the past, we should look to the future and think big. America’s strengths in innovative science and technology will help us leap forward and maintain our economic strength and global leadership.

The Potomac Institute was founded over twenty years ago in a politically turbulent era: Newt Gingrich and the Republicans had just taken over Congress, written their Contract with America, and dissolved the Office of Technology Assessment (OTA) on the premise that it was too partisan in its handling of science and technology policy issues (a decision that has been much debated since). The Potomac Institute was created to fill the role of a non-partisan, objective, and technically competent advisor to Congress and the Administration, regardless of party. The Institute was founded on two principles: 1) science should inform policy, and 2) policy should foster the growth of science. Most importantly, the Institute works to anticipate emerging technologies and their associated policy implications, then guide investments to shape the future we want.

We urge the new administration to develop policy based on the best available science. In policy-making, the best available science can take many forms, from technical and experimental data to economic data to social science research findings. Most important, however, is that any policy be informed by the available information on potential impacts. Policy-makers must often make decisions based on incomplete or insufficient data; in those cases, we must use what is available and then support efforts to improve the data. The concept of using science to inform policy should be non-partisan; data and evidence should form the basis of solid policy that all can agree on.

We urge the new administration to foster the development of science and technology. Economic development starts with good ideas and their translation into products, and industry and government each have important roles in this process. If America leads the world in innovation, economic strength will follow, but to get there we have to focus on big ideas for the future rather than trying to return to the successes of the past. The science and technology investment priorities the Institute has identified for the next Administration include:

Revolutionizing Medicine: Advances in genetics, precision medicine, sensors, and big data analytics hold great promise to revolutionize human health. The costs and inefficiencies of the American health care system could be vastly improved by leveraging technology, putting more power in the hands of the patients, and adapting the medical workforce.

Renewing American Infrastructure: Major public investments to achieve great things are a hallmark of American history; we went to the Moon, built an atomic bomb, built an interstate highway system, and created the Internet. When we set big goals and invest in the science and technology needed to achieve them, the benefits are enormous. We need revolutionary new infrastructure projects to drive America forward, not just fix what is broken.

Industrial Policy: The U.S. needs a strategic national industrial policy to drive economic development and preserve industries that are vital to national security. This industrial policy should focus on fostering American innovation, helping American companies stay competitive in a global marketplace, and protecting intellectual property.

Biotechnology and Climate Engineering: These fields promise immense benefits but also represent unprecedented power to shape the world around us in ways we may not yet fully understand. The government has an important role to play in fostering innovative research and ensuring responsible development of biotechnologies.

Innovation in science and technology is the key to American economic strength and national security. We will not lead the world by investing in old technology, old infrastructure, and old ways of doing business. The way to maintain America’s leadership and keep our country and economy strong is to think big.

The Science of an Upset

By Kathryn Schiller Wurster

Donald Trump won the presidency last night, taking the Electoral College despite what appears to be a narrow Clinton win in the popular vote. The results surprised nearly everyone in the media and polling world, which had almost universally predicted a wide margin of victory for Hillary Clinton. Even Nate Silver’s blog FiveThirtyEight, which has earned a reputation for crunching numbers in exquisite fashion, gave Clinton much better odds throughout most of the race, putting her chances at 70/30 the day before the election.

But all the number crunchers depend on polls and statistical methods that aren’t reliable and now seem remarkably old-fashioned. A Nature article examined this problem in mid-October and blamed the decline of landlines, the rise of mobile phones, “shy voter” behavior, and unreliable online polls. At one time, calling people on the phone and asking them questions may have been the best way to learn their opinions and predict their likely behavior. But this election has just proved that it doesn’t always work. The UK saw a similar upset against the pollsters’ predictions in the Brexit vote.

The problem is that what people say on the phone is likely driven by many other factors, especially when the candidates and poll questions are controversial. Phone surveys also rely on an increasingly outdated mode of social interaction, likely biasing the samples. Online polls have biases of their own; they too depend on people answering honestly and on a representative sample. In the end, it is clear that asking questions of a small subset of people cannot be relied on to give a real picture of what likely voters will actually do.
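The differential non-response problem can be made concrete with a quick simulation. All the numbers below are made up for illustration: 52% of a hypothetical population supports candidate A, but A’s supporters are assumed to answer the phone less often than B’s, so the poll’s estimate lands far from the truth no matter how many calls are made.

```python
import random

random.seed(0)

# Made-up illustration: 52% of the population supports candidate A (coded 1),
# but A's supporters answer pollsters' calls less often than B's supporters.
N = 200_000
population = [1] * int(N * 0.52) + [0] * int(N * 0.48)

def answers_phone(supports_a):
    # Response rates are assumptions for this sketch: 30% vs. 50%.
    return random.random() < (0.30 if supports_a else 0.50)

respondents = [v for v in population if answers_phone(v)]

true_share = sum(population) / len(population)
polled_share = sum(respondents) / len(respondents)

print(f"true support:   {true_share:.3f}")
print(f"polled support: {polled_share:.3f}")  # lands well below the true share
```

With these assumed response rates the poll reads roughly 39% support for a candidate who actually holds 52%, and adding more respondents does not help, because the error is bias, not noise.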

At the same time, we have more data streams about people, and correlations to their behavior, than ever before. Advertisers can target microgroups based on incredibly detailed demographics. Each of us leaves vast trails of data everywhere we go; these trails could be mined to answer all the questions pollsters ask (and likely much more). Social network analysis should be able to tell us who the influencers are and measure their impact on the outcomes.

Now we need a team of statisticians, big data analysts, and marketing gurus to look back at trends in data from a wide range of sources in the lead-up to the election. We need a forensic investigation to find the correlations and trends we missed along the way and connect the dots that led us here. The margins were narrow, so it may be that, for now, the degree of uncertainty we have to accept is still greater than the margin of error in the actual results. But we should be able to do better than this.

Electing for the Future

By Charles Mueller

Today is one of the most cherished traditions of the United States. It is Election Day, a day when the people get a chance to voice their opinions about who should represent them in federal, state, and local government. Every year, though, this beloved day is riddled with controversy as we debate how we vote, when we vote, where we vote, who should vote, who we vote for, and why the entire process seems destined to fail us in some way.

I’m tired of having these same old debates.  None of these conversations recognizes the real problem: year after year, we refuse to accept that our entire system of governance and of electing officials to represent us is not just archaic but centuries out of date.  We live in a world of advancing technology: a world where my refrigerator can restock itself, where people can transmit thoughts to each other using neurotechnology, and where we can not only educate ourselves about virtually all of human history with the click of a button but also communicate our thoughts and opinions just as fast.  Our society is fundamentally different in practically every way from the one that existed in the time of our founding fathers, and it’s time we stop trying to make their system work for our way of life.

It is time we rethink the idea of casting our vote only once a year for those who represent us.  Why can’t Election Day be, in a sense, every day?  Why don’t we create a system in which we can continually voice our confidence in our leaders, applying the right amount of pressure to keep them honest, transparent, and effective as policymakers?  It’s not as if we don’t have the technology to do it…

It is also time we rethink the very structure of our government and the way it uses science and technology (S&T) to carry out its mission to serve and protect the people.  Why can’t we create a government that is efficient and, instead of being decades behind in its use of technology, pioneers how to incorporate technology into the job of governance?  Why do we continue to waste our time and energy complaining about the shortcomings of our governance system instead of using that time and energy to fix it?

None of this will be easy, but all of it is necessary.  The future is one in which S&T will continue to change the fabric of society seemingly overnight, and we need a new process for defining what government is, how it works, and how the people are involved in this next phase of our existence.  So as we all stand today in the long lines that are also part of the Election Day tradition, let’s use that time to talk with our friends and neighbors about the future of democracy and the United States, instead of continuing to complain about how awful everything is.

Let’s work together to make the future better and hopefully some day down the road, Election Day will be a time we elect some individuals bold enough to lead us into this brave new world.

Making Time to Vote

By Paul Syers

It is absurd that Election Day is not a federal holiday.  That omission greatly impedes the ability of millions of Americans to vote, and other measures meant to address the problem, like early voting, only serve to erode the authority of the election.

I don’t think I’m taking a controversial stance when I say we should have a federal holiday for elections.  There is something the public cherishes about voting in person. The logistical setup required to enable the entire American voting public to cast ballots on a single day is massive, and it boggles my mind that we manage to do it every two years.  The volunteers who make it happen have my immense respect.

Yet the fact that people must either take off work or fit voting around their work schedules creates huge, unnecessary problems.  We see it every time: news reports of long lines, people waiting for hours, waiting into the night. Because Election Day is not a holiday, the difficulty of voting falls hardest on the lowest-income voters in this country, who often work multiple jobs, have children to care for, or both.

Most states have enacted early voting policies, in large part to alleviate these problems.  But early voting has a major problem of its own: it undermines the basic function of an election.  As some know-it-all has surely told you at a high school or college party, we don’t actually live in a direct democracy; we live in a representative democracy.  That’s right: we pick leaders to represent us, and the vote is how we pick them.  The vote is meant to represent the will of the people, but it can only represent that will at a specific moment in time.  Early voting screws with that by sampling the will of the people at multiple times. Some states begin early voting a full month before Election Day.  A month, or even a couple of weeks, might not have mattered much in the past, but with the increasing speed at which our society consumes news and spreads information, it matters a lot more.

The new mobile communications technologies of recent years could revolutionize how we vote. Our children may never need to go to polling locations, instead casting their votes wirelessly; data gathering and analysis of the public’s social behavior could even assess the will of the public on an ongoing basis. However, the hacking scandals of recent years have shown the dire need for advances in mobile security, so voting in person at polling places is still, in my mind, the most secure method. This traditional method needs to be conducted with integrity, and making Election Day a national holiday helps accomplish that in many different ways.

If people knew they didn’t have to work, more would volunteer to staff polling stations.  Schools could be closed for a half or a full day; fourteen states currently do that, but we could make it national policy. A half-day of school lets parents vote in the morning and teachers in the afternoon. The morning could be spent learning about the process and importance of elections, and elected officials could even visit schools and tell students how and why they got into public service. It would be a great way to teach the importance of participation in public governance.

Public opinion about both candidates has swung significantly this campaign, often over the course of a few days.  An estimated 15 percent of eligible voters remained undecided all the way to the end, a far larger share than in recent elections.  How many of them voted early, only to change their minds over the past week and a half? The will of the people is fluid.  The best way to be sure of it is to measure it at a single moment in time and work with those results.  To do that, we should make it easier, not harder, to take that measurement.  Making the federal election day a holiday would not only enable a swift and efficient conduct of the election, but it would also communicate to the public that we genuinely value the power of the ballot.

Everyman Powered Science

By Beth Russell


Intellectual exceptionalism, the idea that scientists have a special quality honed and amplified by training on difficult problems over copious hours, hit a speed bump this week when video game players beat scientists in a competition to determine the structure of an Alzheimer’s disease-related protein. The game was Foldit, a research-based game designed to let the incremental process of discovery in protein-folding biochemistry be worked out by a group of players building on each other’s best ideas. The 469 players deduced the structure faster than two crystallographers and a team of 61 undergraduate students using computer-based modeling, and faster than two computer algorithms for automatic structure determination.

Building highly accurate models of protein structure from crystallographic data is labor-intensive, and the accuracy of these models can affect downstream science for many years after the initial model is produced. Research games like Foldit have revolutionized our ability to solve problems that require more labor than can practically be obtained. The phenomenal success of these games and other mechanisms for involving the public in scientific research signals a paradigm shift for the research enterprise. What was previously the purview of an elite few highly educated scientists is now, with a little training, the domain of the everyman. We call it Citizen Science. In a brief period, the concept of public participation in scientific research went from a few birdwatchers and butterfly counters to an international phenomenon of such importance that last year the White House issued a memorandum directing federal agencies and American institutions to take better advantage of the opportunities that Citizen Science provides.

The most interesting thing about the most recent Foldit results isn’t that the humans were faster, but that they also developed the better model. This prompts the curious scientist to ask: why? I posit that the gamers beat the scientists and the computers for the same reason that revolutionary science doesn’t usually emerge from an incremental process, the same reason we associate the word “Eureka!” with scientific discovery. That reason is plasticity.

Humans really can think outside the box. Not only can we understand rules, we can also be curious about what happens when we bend or break them. Science has devised these “rules”: the properties of different atoms, functional groups, and structural types. Unlike computers, which must follow the rules coded into their programs, or the scientists who drilled those properties into their heads through years of study, the non-scientist can see the new way, the exception, which in the complex world of biology ends up being right pretty often. Throw enough people together and you’ll get a few of these. Some right, some wrong, but the group is self-correcting and doesn’t take long to find the right combination of bends in the rules to solve the puzzle.

We need scientists to collect the data, build hypotheses, and integrate complex ideas that require deep knowledge, but we need the everyman too. As scientists, we can’t keep locking the discipline up in our labs and ignoring the power of bringing the citizenry to the table. For some problems, two (or two thousand) heads really are better than one.

The Dark Side of CRISPR

By Kathryn Ziden

The Tsarnaev brothers, who carried out the 2013 Boston Marathon bombings, built their pressure cooker bombs using instructions found in al Qaeda’s English-language online magazine Inspire. The same 2010 issue of Inspire states, “For those mujahid brothers with degrees in microbiology or chemistry lays the greatest opportunity and responsibility. For such brothers, we encourage them to develop a weapon of mass destruction.” The bombs detonated and discovered in New York and New Jersey this past weekend were also pressure cooker bombs, but what if it had been a bio-engineered, deadly pathogen? New, inexpensive, and readily available gene-editing techniques could give terrorists an easy way to stage bioterrorist attacks.

CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) is a novel gene-editing technique with the potential to do everything from ending diseases like cystic fibrosis and muscular dystrophy to curing cancer. CRISPR also has the power both to bring back extinct species and to drive living species extinct. There is currently a heated debate within the scientific and policy communities about the ethical ramifications of this powerful tool and how it should be regulated. Yet there is almost no discussion within these communities of the security risks that CRISPR poses, or of the scary scenarios that could result from unintended consequences or misuse.

The Office of the Director of National Intelligence’s “Worldwide Threat Assessment” included gene-editing techniques like CRISPR among weapons of mass destruction for the first time in 2016. Here, we list some actors that could use CRISPR to create a bioweapon.

Non-state actors: Terrorism specialists have warned that obtaining a biological weapon is much easier than obtaining a nuclear or chemical weapon, given the relative ease by which components can be purchased and developed. Terror groups intent on developing biological weapons could use existing members’ skills, or send recruits to receive adequate education in the biological sciences, similar to al Qaeda’s method of sending attackers to train in U.S. flight schools prior to 9/11.

Rogue scientists: Disgruntled or mentally ill scientists could use CRISPR to mount an attack similar to the 2001 anthrax attacks. Unlike other routes to deadly pathogens, however, CRISPR is widely available and requires no security clearance or mental health screening for access.

Do-it-yourself biohackers: Do-it-yourself (DIY) scientist movements are growing across the country. DIY centers now offer CRISPR-specific classes and DIY CRISPR kits are inexpensive and widely available for sale online for amateur scientists working out of their basements. Some websites sell in vivo, injection-ready CRISPR kits for creating transgenic rats (rats included), and directly advertise to “full service” and “DIY” users.

Religious groups: The first and largest bioterrorist attack in the U.S. was perpetrated by followers of an Indian mystical leader, who infected 751 people with salmonella bacteria in 1984. In 1993, the doomsday cult Aum Shinrikyo attempted an anthrax attack in Tokyo but mistakenly used a non-virulent strain.

Foreign governments: The development of bioweapons is banned under the 1975 Biological and Toxin Weapons Convention; however, many countries, including China, Russia, and Pakistan, are widely believed to have bioweapons programs. Each of these countries is also actively using CRISPR in scientific research.

The large, potential impacts of gene-editing techniques combined with the low barriers to obtaining the technology make it ripe for unintended and intended misuse. In order to address the security challenges of this emerging technology, all stakeholders need to act.

The scientific community can add value by:

  • Shifting their focus from ethical concerns to security concerns, or at least giving security concerns equal footing in their discussions.
  • Engaging with the intelligence and policy communities to identify real-world scenarios that could be actualized by the actors discussed above.

Regulatory bodies can counter the risks posed by the unintended use or potential misuse of gene-editing techniques by:

  • Designating all precision gene-editing enzyme systems as controlled substances, similar to radioactive isotopes or illicit drug precursors used in research laboratories, and putting use-verification and accounting procedures into place.
  • Registering, licensing and certifying all laboratory-based and DIY users of CRISPR. Gene-editing technology users could also be required to undergo National Agency Check with Inquiries background investigations.

The intelligence community can lead the efforts of countering more serious, bioterrorism threats by:

  • Tracking all gene-editing kits or other system-specific plasmids or components, including materials already purchased during the current pre-regulation timeframe.
  • Tracking all users of gene-editing technologies, specifically looking for rogue or DIY users who fail to register, individuals actively seeking to buy kits through the black market, or individuals searching for CRISPR instructions or other relevant information online.

These recommendations are just some of the actions that could be taken to minimize risks of gene-editing technologies. CRISPR is a powerful technology that is capable of creating a gene drive that can result in mass sterilization and extinction. If it can be used to kill off a species of mosquito, then it can be used to kill off the human race. It is time to think of these gene-editing techniques in terms of an existential threat.

SYSTEM_ERROR_505_STATS_FAIL

By Beth Russell

If data is the gold standard, then why don’t all scientists agree all the time? We like to say the devil is in the details but it is really in the analysis and (mis)application of data. Scientific errors are rarely due to bad data; misinterpretation of data and misuse of statistical methods are much more likely culprits.

All data are essentially measurements. Imagine that you are trying to figure out where your property and your neighbor’s meet. You might have a rough idea of where the boundary is, but you will have to take some measurements to be certain. Those measurements are data. Maybe you decide to step it off and calculate the distance from the length of your shoe. Your neighbor decides to use a laser range finder. You will both be pretty close, but you probably won’t end up in exactly the same place. As long as his range finder is calibrated and your stride length is consistent, both methods are reliable and provide useful data. The only difference is the accuracy.

Are the data good or bad? It depends on how accurate you need to be. Data are neither good nor bad as long as the measurement tool is reliable. If you have a legal dispute, your neighbor will probably win; on the other hand, if you are just trying to figure out where to mow the grass, you’re probably safe stepping it off. Neither data set is bad; they just provide different levels of accuracy.

Accuracy is a major consideration in the next source of error: analysis. Just as it is important to consider your available ingredients and tools when deciding what to make for dinner, it is vital to consider the accuracy, type, and amount of data you have when choosing a method of analysis. The primary methods science uses to determine whether the available data support a conclusion are statistical. These tests estimate how likely the observed data would be if a given assumption (the null hypothesis) were true; they are not evidence that a conclusion is correct.

Unfortunately, statistical methods are not one-size-fits-all. The validity of any method depends on the properties of the data and the question being tested, and different statistical tests can lead to widely disparate conclusions. To provide the best available science, it is vital to choose or design the best test for a given question and data set. Even then, two equally valid statistical tests can come to different conclusions, especially when there isn’t much data or the data have high variability.
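A tiny example, using made-up numbers, shows how two reasonable summaries of the same small, high-variability data set can point in opposite directions: a single extreme value drags the mean one way while the median says the opposite.

```python
import statistics

# Hypothetical response times (made-up numbers) under two treatments.
# Treatment B contains one extreme outlier.
a = [5.1, 5.3, 4.9, 5.2, 5.0, 5.1]
b = [4.8, 4.7, 4.9, 4.6, 4.8, 12.0]

mean_a, mean_b = statistics.mean(a), statistics.mean(b)
med_a, med_b = statistics.median(a), statistics.median(b)

# A mean-based comparison says B is slower on average...
print(f"means:   A={mean_a:.2f}  B={mean_b:.2f}")
# ...while a median-based comparison says B is faster.
print(f"medians: A={med_a:.2f}  B={med_b:.2f}")
```

A mean-based test would inherit the mean’s sensitivity to that outlier, while a rank-based test would not; neither is “wrong,” but they answer subtly different questions, which is exactly why the choice of method has to be justified.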

Here’s the rub: even scientists don’t always understand the analysis methods they choose. Statistics is a science in itself, and few biologists, chemists, or even physicists are expert statisticians. As the quantity and complexity of data grow, evaluating which analysis method(s) to use becomes more and more important. Often a method is chosen for historical reasons: “We’ve always used this method for this type of data because someone did it that way before.” Errors made by choosing a poor method for the data are sloppy, lazy, bad science.

Better education in statistics will reduce these analysis-based errors, and open science will make them easier to detect. We can also support more team science: a team that includes a statistics expert is much less likely to make these kinds of errors. Finally, we need more statistics-literate editors and reviewers. These positions exist to catch errors in the science, and they need to treat the statistics as part of the experiment, not as the final arbiter of success or failure. High-quality peer review, collaboration, and the transparency created by open data are our best defenses against bad science. We need to strengthen them and put greater emphasis on justifying choices of analysis methodology in scientific discovery.