Now Hiring S&T Policy Fellows

S&T Policy Fellow

Science & Technology – Arlington, Virginia

The Potomac Institute for Policy Studies is currently inviting applications from mid-to-senior career professionals from diverse backgrounds, including academia, industry, the military, and government, for a position as an S&T policy fellow. Ideal candidates will have a technical background and S&T policy experience.

Selected candidates are expected to manage existing research efforts and bring in new programs. Management of programs will include strategic planning, budgeting, and program execution. Candidates must be capable of technical assessment and analysis, and be able to communicate well with government customers and non-technical audiences. This position will require deep thinking about the impacts of emerging sciences and technologies and the ability to provide policy recommendations based on thoughtful analysis. Candidates must feel comfortable thinking outside the box and making bold policy recommendations.

Who should apply?

Candidates should be mid-to-senior career professionals. A technical background and S&T policy experience are preferred but not required. Candidates must have excellent interpersonal skills, be able to communicate effectively in meetings, presentations, and written material, and be able to work unsupervised. All candidates must be US citizens and eligible for a security clearance. Candidates must be willing to relocate to the Washington D.C. area.

Application Process

Please submit a cover letter and résumé by November 16, 2018 via email to Kathryn Schiller Wurster at kschillerwurster@potomacinstitute.org.

A final selection will be made based on personal interviews with the selection committee.

About Potomac Institute

The Potomac Institute for Policy Studies is an independent, 501(c)(3), not-for-profit, public policy think tank and research institute. The Institute identifies and aggressively shepherds discussion on key science, technology, and national security issues facing our society, providing, in particular, an academic forum for the study of related policy issues. From these discussions and forums, we develop meaningful policy options and ensure their implementation at the intersection of business and government. The Potomac Institute offices are located in the Ballston area of Arlington, Virginia.

http://www.potomacinstitute.org

Keywords: science policy, S&T policy, think tank, Washington D.C., Arlington

Job Type: Full-time

Now Hiring CReST Fellows

CReST Fellow

Science & Technology – Arlington, Virginia

The Potomac Institute for Policy Studies’ Center for Revolutionary Scientific Thought (CReST) is currently inviting applications from creative and qualified candidates.

CReST is an academic center within the Potomac Institute that studies emerging S&T and its potential impacts. The group thinks through far-future scenarios with the goal of understanding the policies that will be needed to encourage the development of a given technology or to control it. Selected individuals will be required to engage in deep, out-of-the-box thinking and to put forth bold solutions for future S&T policy concerns.

CReST fellows will participate in an intensive three-month training course before transitioning to hands-on S&T policy projects. During the initial training, fellows will be immersed in daily classes and discussions on emerging S&T topics, national security, policy making, and forecasting. Fellows will be engaged in reading science fiction, thinking through the impacts of S&T, following technology trends via primary literature, and writing a range of policy products.

CReST participation is intended to be a life-changing experience, training each member to think strategically and find big, bold solutions to S&T challenges. Members are mentored by S&T leaders and policy makers from business and government. The CReST experience will be challenging, but equally rewarding.

Who should apply?

Ideal candidates are early to mid-career professionals from diverse backgrounds, including academia, industry, and government. A Ph.D. in the sciences, engineering, economics, mathematics, philosophy, science policy, or political science, with 0-10 years’ experience, is preferred. All candidates must be US citizens and eligible for a security clearance. This is a full-time, paid position in the Science and Technology Policy division at the Potomac Institute. Candidates must be willing to relocate to the Washington D.C. area.

Application Process

Applicants must write a letter of interest to the CReST selection committee. The letter should address the following: (1) the applicant’s interest in S&T policy; (2) the applicant’s career path and aspirations for the future; and (3) a brief, creative paragraph answering two questions: What will be the biggest S&T policy challenge of the year 2050, and what is an innovative solution to address it?

Please submit your letter of interest, résumé and a writing sample by November 16, 2018 via email to Kathryn Schiller Wurster at kschillerwurster@potomacinstitute.org.

A final selection will be made based on personal interviews with the selection committee. The interview will include questions and discussions requiring the candidate to think creatively and respond to specific challenges, in addition to conventional interview questions.

About Potomac Institute

The Potomac Institute for Policy Studies is an independent, 501(c)(3), not-for-profit, public policy think tank and research institute. The Institute identifies and aggressively shepherds discussion on key science, technology, and national security issues facing our society, providing, in particular, an academic forum for the study of related policy issues. From these discussions and forums, we develop meaningful policy options and ensure their implementation at the intersection of business and government. The Potomac Institute offices are located in the Ballston area of Arlington, Virginia.

http://www.potomacinstitute.org

Keywords: science policy, S&T policy, think tank, science communication, Washington D.C., Arlington

Job Type: Full-time

Negative Results Should be a Positive

Albert Einstein once said, “Failure is success in progress.” Winston Churchill agreed: “Success is stumbling from failure to failure with no loss of enthusiasm.” But too often in academic science, a result that fails to support the tested hypothesis is discarded, never to be shared or further investigated. This is disastrous for scientific and technological progress.

Science is meant to be transparent. For science and society to progress, all scientific studies should be published to the broader community. But this doesn’t happen. The inability to publish negative results is detrimental to scientific progress and should be fixed immediately.

Most academic scientists need a high publication output with a high citation rate to receive competitive grants that fund their research and enable promotions. This “publish or perish” culture in academia forces researchers to produce “publishable” results. And for a result to be “publishable”, it most likely has to be a positive result that supports the tested hypothesis. Studies that produce negative results often end up buried in the lab’s archive, never to see the light of day.

[Comic: “The 5 Stages of Bad Data,” The Upturned Microscope. Retrieved from: https://theupturnedmicroscope.com/comic/the-5-stages-of-bad-data/]

But most scientific work produces negative results. Not publishing them wastes U.S. taxpayer money, as federally funded scientists may be unknowingly repeating the same “failed” experiments as previous studies. Scientific progress would be drastically more efficient if negative results were published.

This bias against negative results has a big impact on scientists, especially young scientists in graduate school. Instead of being viewed in a positive light, negative results are often associated with flawed or poorly designed studies and are therefore seen as a negative reflection on the scientist. The inability to publish negative results threatens to stunt not only the progress of science but also the United States’ ability to train the next generation of scientists.

Recommendation

The U.S. Congress should add language to NSF authorization and appropriations legislation requiring the NSF to demonstrate annually that 50% of all NSF funding produced validated negative results or results that invalidated previously accepted science.

Fixing a broken system

America is on the verge of a new industrial revolution powered by biology, but that revolution is being killed by outdated policies.

Using recent advancements in genetic engineering, scientists are repurposing life in completely new ways. We are designing biosensors that monitor our ecosystems, sweeter strawberries with a longer shelf life, and goats that produce stronger-than-steel spider silk. Such remarkable engineering is possible thanks to advancements like low-cost, high-efficiency gene editing using CRISPR, low-cost genome sequencing, and rapid improvements in gene synthesis.

But existing policies for gene-editing technology and its products are woefully outdated and create a regulatory environment that kills innovation. For starters, the regulatory process is undertaken in a cumbersome environment that splits oversight among the FDA, USDA, and EPA. Worse, products of genetic engineering are regulated based on unfounded fears, not data.

Take the example of hornless cattle. Farmers desire the hornless trait because horns are a danger to other cattle and to farm workers. Hornless beef cattle exist naturally, thanks to a spontaneous mutation. Dairy cows, however, have horns, and breeding them naturally to become hornless is completely impractical. Using gene-editing technology, scientists developed hornless dairy cows, but they never went commercial: they were killed by FDA regulations based on outdated fears rather than science and data.

The costs associated with this regulatory environment have been enormous. For instance, the regulatory compliance cost to take a new biotech crop to market between 2008 and 2012 was roughly $36 million. This burdensome, costly, and inflexible regulatory environment has stifled market competition and innovation. We must unleash American innovation in biotech by fixing the regulatory system to account for recent advancements in genetic engineering.

I propose that Congress enact new legislation for gene-editing biotechnology with the following framework: (a) stipulate that policies assess the outcomes of gene editing rather than the process itself, and (b) establish a new agency that oversees gene-editing technology.

New laws must be agnostic to the process of genetic engineering. Regulations should assess the new functions that arise from gene editing, rather than the process that produced them. Policies must be flexible enough to account for the differences among products and the extent of regulation each requires.

Enforcement of the new policies and oversight of genetic engineering and its products should be conducted by a new federal agency. This agency would review applications for new products, assess the scope of necessary regulatory compliance, and coordinate with other federal agencies where necessary. This system would streamline the regulatory process for businesses, which would no longer have to coordinate independently with multiple federal agencies.

To Educate Intelligently, Use Artificial Intelligence

Our children’s education is vital. And we are on the cusp of a pedagogical revolution, an upending of traditional instruction. We must invest now to keep education in lockstep with technological progress.

Automation, machine learning, and artificial intelligence may be serving up the greatest challenge we have ever faced when it comes to education. As these technologies displace jobs at faster and faster rates, we’ll increasingly need a workforce that’s adaptable. We need people who are not just ready for some of tomorrow’s jobs. We need people who are ready for any of tomorrow’s jobs. We need a population that can learn new skills incredibly quickly and can perform complex problem solving across multiple domains.

Fortunately, the same forces disrupting the labor market can be harnessed to disrupt our educational system. Machine learning and artificial intelligence can assist in creating a generalized and flexible curriculum that trains a population of thinkers who can seamlessly transition between careers.

The technology is here, but it is in its infancy. MATHia is a machine-learning tool that aims to personalize tutoring: it collects data on students’ math progress, provides tailored instruction, and helps students understand the fundamentals of mathematical problem solving. Intelligent tutoring systems can likewise support the human-machine dialogue that is helpful in learning new languages.
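
As an illustration of how such tutors track individual progress: one common technique behind adaptive tutoring is Bayesian Knowledge Tracing, which updates an estimate of a student's mastery of a skill after each answer. The Python sketch below is a toy version of that idea with made-up parameter values; MATHia's actual model is proprietary and far more elaborate.

```python
# Toy sketch of Bayesian Knowledge Tracing (BKT), the kind of model
# adaptive tutors build on. All parameter values are illustrative
# assumptions, not any real product's configuration.
P_INIT  = 0.20  # prior probability the student already knows the skill
P_LEARN = 0.15  # probability of learning the skill from one practice item
P_SLIP  = 0.10  # probability of answering wrong despite knowing the skill
P_GUESS = 0.20  # probability of answering right without knowing the skill

def update_mastery(p_known: float, correct: bool) -> float:
    """Bayes update of the mastery estimate after one observed answer."""
    if correct:
        knew_it = p_known * (1 - P_SLIP)
        posterior = knew_it / (knew_it + (1 - p_known) * P_GUESS)
    else:
        knew_it = p_known * P_SLIP
        posterior = knew_it / (knew_it + (1 - p_known) * (1 - P_GUESS))
    # Allow for the chance the student learned the skill from this item.
    return posterior + (1 - posterior) * P_LEARN

p = P_INIT
for correct in [True, False, True, True, True]:
    p = update_mastery(p, correct)
    print(f"correct: {correct!s:5}  estimated mastery: {p:.2f}")
# A tutor keeps serving practice items until the estimate crosses a
# mastery threshold (often ~0.95), then advances to the next skill.
```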

These are admirable approaches, but they lack the much-needed problem-solving punch to train truly adaptable individuals across many domains. They fail to tap into what truly makes for effective teaching. A consensus report from the National Academy of Sciences (NAS) states that mentorship in the form of continuous and personalized feedback is key to effective learning. This is a far cry from the current state of education, wherein students are taught in large classrooms and assessed for rote knowledge on standardized exams.

According to the NAS, “accomplished teachers…reflect on what goes on in the classroom and modify their teaching plans accordingly. By reflecting on and evaluating one’s own practices…teachers develop ways to change and improve their practices.”

Thankfully, continuous reflection and improvement are the bread and butter of machine learning algorithms. AI will therefore be adept at delivering personalized feedback to every single student. This feedback, in turn, will give students the cognitive toolbox to transfer knowledge across a wide range of subjects.

The current lack of knowledge transfer is at the crux of today’s workforce debates: arguments abound over how to “reskill” workers displaced by automation. This is important. But the reskilling debate is nothing new, and it’s only one piece of the puzzle. We must also focus resources on creating a workforce that needs less reskilling: a workforce that can adjust to new labor demands in the blink of an eye. And we must begin early, in primary and secondary education.

In December 2017, the House introduced the “FUTURE of Artificial Intelligence Act.” Dead on arrival, it contained only one small provision addressing education. This act must be revived, and it must give AI in education its due. As the technology landscape changes, so too will the labor landscape. Education must evolve to meet this need.

I can read your memories

I know everything you did over the last two years. I know where you went, who you were with and everything you thought about, down to each second of the day. I know this because I hacked into the brain-computer interface (BCI) that records your memories and stores all of your thoughts. To make matters worse, you weren’t even aware that most of this data was collected by your BCI.

The technology to read minds is already in the lab and will soon be commercially available. The broader policy debate we are having today on data privacy and security must urgently expand to include BCIs and the neural data they generate.

Neural data is a unique biometric marker, like fingerprints and DNA, that can accurately identify individuals. What is worrisome is that biometric data privacy is mostly non-existent in the United States. Biometric data is often collected without consent or knowledge. For instance, people living in 47 states can be identified through images taken without their consent, using facial recognition software.

As neural data is increasingly incorporated into each person’s biometric profile, any notion of privacy will go flying out the window. In contrast to other biometric markers that mostly describe physical characteristics, neural data can give precise insight into the most intimate details of our minds. Allowing this information to become an engine for profit threatens our fundamental right to privacy.

To remedy this issue, I propose that Congress enact a Neural Data Privacy Act (NDPA).

The central premise of the NDPA is that individuals must have a fundamental right to cognitive liberty. This means that people must be free to use or refuse BCIs without fear of discrimination, and that consent is always required for the collection of any neural data. Furthermore, strict limitations will be imposed on the type of neural data that can be collected and on the purposes for which it can be used. For instance, businesses and employers will be prohibited from profiting off neural data by selling or leasing it to third parties.

As this legislation will establish a fundamental right to cognitive liberty, a violation of this right will result in severe penalties. Language criminalizing invasion of cognitive liberty will be included in the NDPA, with a mandate for law enforcement agencies to enforce it. Violations of cognitive liberty fall under a few broad categories: accessing neural data without consent, distributing neural data without consent, and compelling an individual to use a BCI against their will.

Brains on Loan

Instantaneously, your brain power increases by an order of magnitude. Previously difficult problems are now trivial. Tip-of-the-tongue moments are a thing of the past. All manner of intellectual and creative pursuits are at your fingertips.

This is the new reality with the Brain Cloud.

When brain-computer interfaces are the new normal, we’ll prosper from the selective advantages of both silicon and biology.

Think of the electrical grid. If I install solar panels outside my house that provide more energy than I need, that energy flows back into the grid. Then, I’m provided an energy credit toward my next bill. I’ve made the investment in something that society can harness, and I’m repaid for that investment.

Take another example: SETI@home allows you to loan out the processing power of your computer to analyze data for the Search for Extraterrestrial Intelligence. When you’re not using the computer, it’s still providing something useful to society.

Enter the Brain Cloud. When I’m asleep, let’s say, I’ll be able to loan a portion of my brain’s processing power to the grid through a brain-computer interface. I can do this because much of my brain is actually a back-up system, a sort of biological insurance policy. Case studies have shown that some individuals are born with only half a brain, only portions of their cortex, or no cerebellum at all. Yet, astonishingly, they lead relatively normal lives.

Of course, the beauty of the Brain Cloud is that no one has to permanently give up portions of their brain. Instead, processing power is out on loan only temporarily.

At this point, a reasonable person might be wondering: Why would I do that?

Just like with the electrical grid, there’s much to be gained. Each time I put my brain on loan, a portion of the processing power I lend out will be used to mine for cryptocurrency through a blockchain. I’ll receive compensation for putting processing power in the grid, and others will be able to harness that power when they need it.

The blockchain will serve another purpose: it will keep an exact, private, and non-repudiable ledger of how much processing power I’ve loaned. While anyone will be able to observe that a transaction occurred on the blockchain, no party will have access to the contents of that transaction, allowing me to keep the contents of my brain private.
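
A minimal sketch of the idea in Python, assuming nothing more than salted hash commitments and an append-only hash chain (the `Ledger` class and its methods are hypothetical names for illustration, not a real protocol):

```python
import hashlib
import json
import time

def commit(contents: str, salt: str) -> str:
    """Publish a hash of the transaction contents: the hash is visible
    to everyone, but the contents stay private unless the salt is revealed."""
    return hashlib.sha256((salt + contents).encode()).hexdigest()

class Ledger:
    """Append-only hash chain: each block binds to the previous block's
    hash, so past loan records cannot be altered without breaking the chain."""
    def __init__(self):
        self.blocks = []

    def record_loan(self, hours: float, commitment: str) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {
            "timestamp": time.time(),
            "hours_loaned": hours,      # publicly visible quantity
            "commitment": commitment,   # contents remain hidden
            "prev_hash": prev_hash,
        }
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(block)
        return block

ledger = Ledger()
c = commit("processing traces from last night's loan", salt="per-user secret")
ledger.record_loan(hours=7.5, commitment=c)
# Anyone can verify the chain and see that 7.5 hours were loaned;
# only the holder of the salt can later prove what was processed.
```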

We are all perpetually hamstrung by our lack of brain power. Yet, for the processing they do, brains are fantastically efficient. Processing in computers, on the other hand, requires massive amounts of energy. Most of this energy ends up as heat, rather than the actual computational processes we want in the first place.
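
A rough back-of-envelope comparison makes the gap concrete, using commonly cited ballpark figures (the human brain draws roughly 20 W; a single high-end GPU draws on the order of 300 W; these are rough estimates, not measurements):

```python
# Back-of-envelope power comparison using ballpark figures only.
BRAIN_WATTS = 20   # approximate power draw of a human brain
GPU_WATTS = 300    # typical power draw of one high-end GPU

hours = 8  # one night of "loaned" processing
brain_kwh = BRAIN_WATTS * hours / 1000
gpu_kwh = GPU_WATTS * hours / 1000

print(f"Brain, {hours} h: {brain_kwh:.2f} kWh")
print(f"GPU,   {hours} h: {gpu_kwh:.2f} kWh")
print(f"The GPU draws {GPU_WATTS / BRAIN_WATTS:.0f}x the power, "
      "much of it lost as heat rather than useful computation.")
```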

By moving processing power through a brain-computer interface grid, we would be selecting for the best of both worlds: super-efficient conduction of signal through machines, and super-efficient processing of signal through brains.

It’s a win-win.