Negative Results Should be a Positive

Albert Einstein once said, “Failure is success in progress.” Winston Churchill agreed: “Success is stumbling from failure to failure with no loss of enthusiasm.” But too often in academic science, a result that fails to support the tested hypothesis is discarded, never to be shared or further investigated. This is disastrous for scientific and technological progress.

Science is meant to be transparent. For science and society to progress, all scientific studies should be shared with the broader community. But this doesn’t happen. The inability to publish negative results is detrimental to scientific progress and should be fixed immediately.

Most academic scientists need a high publication output with a high citation rate to receive competitive grants that fund their research and enable promotions. This “publish or perish” culture in academia forces researchers to produce “publishable” results. And for a result to be “publishable”, it most likely has to be a positive result that supports the tested hypothesis. Studies that produce negative results often end up buried in the lab’s archive, never to see the light of day.

[Comic: “The 5 Stages of Bad Data.” Retrieved from: https://theupturnedmicroscope.com/comic/the-5-stages-of-bad-data/]

But most scientific experiments produce negative results. Not publishing these results wastes U.S. taxpayer money, as federally funded scientists may be unknowingly repeating the same “failed” experiments as previous studies. The efficiency of scientific progress would drastically improve if negative results were published.

This bias against negative results has a big impact on scientists, especially young scientists in graduate school. Instead of being viewed in a positive light, negative results are often associated with flawed or poorly designed studies and are therefore viewed as a negative reflection on the scientist. The inability to publish negative results threatens to stunt not only the progression of science, but the United States’ ability to train the next generation of scientists.

Recommendation

The U.S. Congress should add language to NSF authorization and appropriations bills requiring the NSF to show annually that 50% of all NSF funding produced validated negative results or results that invalidated previously accepted science.

 


Fixing a broken system

America is on the verge of a new industrial revolution powered by biology, and outdated policies are killing it.

By using recent advancements in genetic engineering, scientists are repurposing life in completely new ways. We are designing biosensors that monitor our ecosystems, sweeter strawberries with a longer shelf life, and goats that produce stronger-than-steel spider silk. Such remarkable engineering is possible due to advancements like low-cost, high-efficiency gene editing using CRISPR, low-cost genome sequencing, and rapid improvements in gene synthesis.

But existing policies for gene-editing technology and its products are woefully outdated and create a regulatory environment that kills innovation. For starters, the regulatory process is undertaken in a cumbersome environment that splits oversight among the FDA, USDA, and EPA. Next, products of genetic engineering are regulated based on unfounded fears, not data.

Take the example of hornless cattle. Farmers desire the hornless trait because horns are a danger to other cattle and farm workers. Hornless cattle bred for beef exist naturally due to a spontaneous mutation. However, dairy cows have horns, and breeding them naturally to become hornless is completely impractical. Using gene-editing technology, scientists developed hornless dairy cows, but the cows never went commercial: they were killed by FDA regulations based on outdated fears rather than science and data.

The costs associated with this regulatory environment have been enormous. For instance, the regulatory compliance cost of taking a new biotech crop to market between 2008 and 2012 was roughly $36 million. This burdensome, costly, and inflexible regulatory environment has stifled market competition and innovation. We must unleash American innovation in biotech by fixing the regulatory system to account for recent advancements in genetic engineering.

I propose that Congress enact new legislation for gene-editing biotechnology with the following framework: (a) stipulate that policies assess the outcomes of gene editing rather than the process itself, and (b) establish a new agency that oversees gene-editing technology.

New laws must be agnostic to the process of genetic engineering. Regulations should assess the new functions that arise from gene editing, rather than the process that produced them. Policies must be flexible enough to account for differences among products and the extent of regulation each requires.

Enforcement of new policies and oversight of genetic engineering and its products should be conducted by a new federal agency. This agency would review applications for new products, assess the scope of necessary regulatory compliance, and coordinate with other federal agencies if necessary. This system would streamline the regulatory process for businesses, which would no longer have to coordinate independently with multiple federal agencies.

To Educate Intelligently, Use Artificial Intelligence

Our children’s education is vital. And we are on the cusp of a pedagogical revolution, an upending of traditional instruction. We must invest now to keep education in lockstep with technological progress.

Automation, machine learning, and artificial intelligence may be serving up the greatest challenge we have ever faced when it comes to education. As these technologies displace jobs at faster and faster rates, we’ll increasingly need a workforce that’s adaptable. We need people who are not just ready for some of tomorrow’s jobs. We need people who are ready for any of tomorrow’s jobs. We need a population that can learn new skills incredibly quickly and can perform complex problem solving across multiple domains.

Fortunately, the same forces disrupting the labor market can be harnessed to disrupt our educational system. Machine learning and artificial intelligence can assist in creating a generalized and flexible curriculum that trains a population of thinkers who can seamlessly transition between careers.

The technology is here, but in its infancy. MATHia is a machine-learning tool that aims to personalize tutoring. It collects data on students’ math progress, provides tailored instruction, and helps students understand the fundamental aspects of mathematical problem solving. Intelligent tutoring systems can conduct the kind of human-machine dialogue that is helpful in learning new languages.

These are admirable approaches, but they lack the much-needed problem-solving punch to train truly adaptable individuals across many domains. They fail to tap into what truly makes for effective teaching. A consensus report from the National Academy of Sciences (NAS) states that mentorship in the form of continuous and personalized feedback is key to effective learning. This is a far cry from the current state of education, wherein students are taught in large classrooms and assessed for rote knowledge on standardized exams.

According to the NAS, “accomplished teachers…reflect on what goes on in the classroom and modify their teaching plans accordingly. By reflecting on and evaluating one’s own practices…teachers develop ways to change and improve their practices.”

Thankfully, continuous reflection and improvement are the bread and butter of machine learning algorithms. AI will therefore be adept at delivering personalized feedback to every single student. This feedback, in turn, will provide students with the cognitive toolbox to transfer knowledge between a litany of different subjects.

The current lack of knowledge transfer is at the crux of today’s workforce debates: arguments are abundant on how to “reskill” workers displaced by automation. This is important. But the reskilling debate is nothing new, and it’s only one piece of the puzzle. We must also focus resources on creating a workforce that needs less reskilling. It’s a workforce that can adjust to new labor demands in the blink of an eye. We must begin early, in primary and secondary education.

In December 2017, the House introduced the “FUTURE of Artificial Intelligence Act.” Dead on arrival, it contained only one small provision addressing education. This act must be revived, and it must give AI in education its due. As the technology landscape changes, so too will the labor landscape. Education must evolve to meet this need.

I can read your memories

I know everything you did over the last two years. I know where you went, who you were with and everything you thought about, down to each second of the day. I know this because I hacked into the brain-computer interface (BCI) that records your memories and stores all of your thoughts. To make matters worse, you weren’t even aware that most of this data was collected by your BCI.

The technology to read minds is already in the lab and will soon be commercially available. The broader policy debate on data privacy and security must urgently expand to include BCIs and the neural data they generate.

Neural data is a unique biometric marker, like fingerprints and DNA, that can accurately identify unique individuals. What is worrisome is that biometric data privacy is mostly non-existent in the United States. Often, it is collected without consent or knowledge. For instance, people living in 47 states can be identified through images taken without their consent, using facial recognition software.

As neural data is increasingly incorporated into each person’s biometric profile, any notion of privacy will go flying out the window. In contrast to other biometric markers that mostly describe physical characteristics, neural data can give precise insight into the most intimate details of our minds. Allowing this information to become an engine for profit threatens our fundamental right to privacy.

To remedy this issue, I propose that Congress enact a Neural Data Privacy Act (NDPA).

The central premise of the NDPA is that individuals must have a fundamental right to cognitive liberty. This means that people must be free to use or refuse BCIs without fear of discrimination, and that consent is always required for the collection of any neural data. Furthermore, strict limitations will be imposed on the type of neural data that can be collected and the purposes for which it can be used. For instance, businesses and employers will be prohibited from profiting off neural data by selling or leasing it to third parties.

As this legislation will establish a fundamental right to cognitive liberty, a violation of this right will result in severe penalties. Language criminalizing invasion of cognitive liberty will be included in the NDPA, with a mandate for law enforcement agencies to enforce it. We can define violations of cognitive liberty under a few broad categories: accessing neural data without consent, distributing neural data without consent, and compelling an individual to use a BCI against their will.

Brains on Loan

Instantaneously, your brain power increases by an order of magnitude. Previously difficult problems are now trivial. Tip-of-the-tongue moments are a thing of the past. All manner of intellectual and creative pursuits are at your fingertips.

This is the new reality with the Brain Cloud.

When brain-computer interfaces are the new normal, we’ll prosper from the selective advantages of both silicon and biology.

Think of the electrical grid. If I install solar panels outside my house that provide more energy than I need, that energy flows back into the grid. Then, I’m provided an energy credit toward my next bill. I’ve made the investment in something that society can harness, and I’m repaid for that investment.

Take another example: SETI@home allows you to loan out the processing power of your computer to analyze data for the Search for Extraterrestrial Intelligence. When you’re not using the computer, it’s still providing something useful to society.

Enter the Brain Cloud. When I’m asleep, let’s say, I’ll be able to loan a portion of my brain’s processing power to the grid through a brain-computer interface. I can do this because much of my brain is actually a back-up system, a sort of biological insurance policy. Case studies have shown that some individuals are born with only half a brain, only portions of their cortex, or no cerebellum at all. Yet, astonishingly, they lead relatively normal lives.

Of course, the beauty of the Brain Cloud is that no one has to permanently give up portions of their brain. Instead, processing power is out on loan only temporarily.

At this point, a reasonable person might be wondering: Why would I do that?

Just like with the electrical grid, there’s much to be gained. Each time I put my brain on loan, a portion of the processing power I lend out will be used to mine for cryptocurrency through a blockchain. I’ll receive compensation for putting processing power in the grid, and others will be able to harness that power when they need it.

The blockchain will serve another purpose. It will keep an exact, private, and irrefutable ledger of how much processing power I’ve loaned. While anyone will be able to observe that a transaction occurred in the blockchain, no party will have access to the contents of that transaction – allowing me to keep the contents of my brain private.
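This "public existence, private contents" property can be sketched with hash commitments on an append-only chain. The snippet below is a simplified illustration under stated assumptions, not any particular blockchain protocol; the `Ledger` class, entry contents, and nonces are all hypothetical:

```python
import hashlib
import json

def commit(contents: str, nonce: str) -> str:
    """Hash commitment: publicly postable, but reveals nothing
    about the contents unless the nonce is later disclosed."""
    return hashlib.sha256((nonce + contents).encode()).hexdigest()

class Ledger:
    """Append-only hash chain: each block binds to the previous one,
    so the history of transactions cannot be silently rewritten."""
    def __init__(self):
        self.blocks = []

    def append(self, commitment: str) -> dict:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"prev": prev, "commitment": commitment}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Anyone can check chain integrity without ever seeing
        the underlying transaction contents."""
        prev = "0" * 64
        for b in self.blocks:
            body = {"prev": b["prev"], "commitment": b["commitment"]}
            if b["prev"] != prev or b["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = b["hash"]
        return True

ledger = Ledger()
# Only the commitment (a hash) goes on the chain; contents stay private.
ledger.append(commit("loaned 3.2e14 ops overnight", nonce="s3cret"))
ledger.append(commit("loaned 1.1e14 ops overnight", nonce="n0nce2"))
```

Observers can verify the chain and count the transactions, but recovering what was loaned would require the private nonce, which stays with the lender.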

We are all perpetually hamstrung by our lack of brain power. Yet, for the processing they do, brains are fantastically efficient. Processing in computers, on the other hand, requires massive amounts of energy. Most of this energy ends up as heat, rather than the actual computational processes we want in the first place.

By moving processing power through a brain-computer interface grid, we would be selecting for the best of both worlds: super-efficient conduction of signal through machines, and super-efficient processing of signal through brains.

It’s a win-win.

Mine an Asteroid and Become a Trillionaire!

Asteroids have a staggeringly high net worth, and the United States needs to take advantage of this. Ryugu, a near-Earth object currently being visited by the Japanese probe Hayabusa-2, has an estimated value of USD$83 billion. Davida, the 7th largest known asteroid, has an estimated value of >USD$100 trillion. These asteroids are so valuable because of their potential stores of minerals and water, supplies that would be critical to any long-term space colonization attempt.

We can launch, and are currently launching, these supplies into orbit from Earth. But launching just one liter of water from Earth costs $10,000! And platinum-group metals, a group of elements essential to current and emerging technologies, are very rare in the Earth’s crust. We need to develop celestial resources so that we can (1) increase the availability of supplies critical to space missions and (2) reduce delivery costs.

Approximately 1 in 2,000 near-Earth objects are predicted to have platinum-group metals present in concentrations large enough to ensure a profitable return on investment. With over 20,000 near-Earth objects already discovered and more being found every day, there is already a sizable number of targets for metal mining. Carbonaceous asteroids, about 10% of near-Earth objects, are up to 10% water. These are incredible potential sources waiting to be tapped.
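The figures above imply a quick back-of-the-envelope count of candidate targets. This is a rough sketch using only the rounded numbers from the preceding paragraph, not a survey-grade estimate:

```python
# Rounded inputs taken from the essay's own figures (assumptions, not survey data).
near_earth_objects = 20_000       # NEOs discovered so far
profitable_fraction = 1 / 2_000   # predicted to hold mineable platinum-group metals
carbonaceous_fraction = 0.10      # fraction of NEOs that are carbonaceous
max_water_fraction = 0.10         # carbonaceous NEOs are up to 10% water by mass

profitable_targets = near_earth_objects * profitable_fraction
carbonaceous_targets = near_earth_objects * carbonaceous_fraction

print(f"Expected profitable metal targets: ~{profitable_targets:.0f}")
print(f"Potential water-bearing targets:   ~{carbonaceous_targets:.0f}")
# → roughly 10 metal-mining candidates and 2,000 potential water sources
```

Even with these conservative rounded inputs, the discovered population already contains a handful of metal-mining candidates and thousands of potential water sources.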

Nations are beginning to take notice of the potential economic boon that asteroid mining presents. The United States, Luxembourg, and the United Arab Emirates have all passed laws granting legal and regulatory protection to asteroid miners, granting them ownership of all material recovered. In 2017, Luxembourg and the United Arab Emirates signed a memorandum of understanding to begin bilateral cooperation on space activities, with a particular focus on the exploration and utilization of space resources. Luxembourg has also invested in Planetary Resources, a firm preparing a set of data-gathering missions that will visit multiple near-Earth objects to determine the location of the first mine.

The United States needs to rapidly ramp up activities to take the lead and capitalize on the most profitable asteroids. To facilitate the production of operational asteroid mines, the United States should promote research into improved remote determination of elemental compositions, enabling the accurate targeting of asteroids without sending expensive probes. Landing on asteroids, particularly tumbling asteroids, is difficult and will require advanced engineering.

To encourage industrial attempts at asteroid mining, the U.S. should adopt a regulatory framework that protects miners’ ownership rights while acknowledging liability issues, and a taxation scheme that mimics that of terrestrial mining. A space-based military force should also be developed to ensure the security of U.S.-flagged mining operations with a space diplomatic corps to mitigate international issues.

One Pill to Rule Them All

Say goodbye to doctor’s visits and pharmacies. A revolution in healthcare may be on the horizon: It’s time to treat disease before it makes you sick. Taking just one pill could do that.

This proposed pill takes advantage of the fact that cells in our internal organs – liver, lungs, heart, and even brain – release a litany of chemicals at every moment. When someone has signs of pending disease – say, inflammation in the lungs or clogged arteries – the pill would detect how this release of chemicals changes over time. Then it would determine what combination of medications is required to rectify the situation. Finally, it would instantly order and deliver the precise medication needed.

This technological idea already exists for diabetes. When a diabetic person’s blood sugar climbs too high, an automated pump delivers insulin to bring it back down. And biocompatible, stretchable sensors are currently in clinical trials for monitoring infants’ health in the neonatal ICU.
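The feedback loop behind such a pump can be sketched as a simple threshold controller: read a sensor, compare against a target range, act. The function name and thresholds below are illustrative only, not clinical values or any real device's logic:

```python
def pump_action(glucose_mg_dl: float) -> str:
    """Decide the pump's next action from the latest sensor reading.

    Thresholds are illustrative, not clinical guidance.
    """
    if glucose_mg_dl < 70:       # low reading: stop delivering insulin
        return "suspend insulin"
    if glucose_mg_dl > 180:      # high reading: deliver a correction dose
        return "deliver insulin bolus"
    return "maintain basal rate"  # in range: no change

# A simulated stream of sensor readings, one per monitoring interval.
readings = [65, 110, 195, 140]
actions = [pump_action(g) for g in readings]
# → ['suspend insulin', 'maintain basal rate',
#    'deliver insulin bolus', 'maintain basal rate']
```

The envisioned pill generalizes this same sense-decide-act loop from one chemical (glucose) and one drug (insulin) to many chemicals and many candidate medications.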

But more sophisticated techniques are hitting the scene now. At the MIT Media Lab in Cambridge, Dr. Canan Dagdeviren has invented a device called a “Conformable Decoder.” It comes in pill form, unfurls in the stomach, sticks to the stomach lining, and then provides data on how the gut is doing at all hours. You can imagine the same technology being used to prevent heart attacks: biosensors embedded in the heart would detect a pending attack well before it takes place, and deliver the precise medications needed to prevent it from occurring.

It’s possible these devices could be implanted into each of our organs. We could swallow a pill that breaks apart and delivers microscopic biosensor devices to each organ. There, the biosensors would spend their time diagnosing potential problems and prescribing the precise chemical elixirs needed to fix them. No need for a pharmacy. All you have to do is go outside; a drone delivery service would bring the solution directly to you.

These biosensor technologies – and the artificial intelligence networks governing their operation – could prevent any number of diseases. When these advances come to fruition, the healthcare system will undergo vast automation, and human beings will be able to live beyond the risk of disease.