SYSTEM_ERROR_505_STATS_FAIL

By Beth Russell

If data is the gold standard, then why don’t all scientists agree all the time? We like to say the devil is in the details, but it is really in the analysis and (mis)application of data. Scientific errors are rarely due to bad data; misinterpretation of data and misuse of statistical methods are much more likely culprits.

All data are essentially measurements. Imagine that you are trying to figure out where your property and your neighbor’s meet. You might have a rough idea of where the boundary is, but you are going to have to take some measurements to be certain. Those measurements are data. Maybe you decide to step it off and calculate the distance based on the length of your stride. Your neighbor decides to use a laser range finder. You are both going to be pretty close, but you probably won’t end up in the exact same place. As long as his range finder is calibrated and your stride length is consistent, both methods are reliable and provide useful data. The only difference is the accuracy.

Are the data good or bad? It depends upon how accurate you need to be. Data are neither good nor bad as long as the measurement tool is reliable. If you have a legal dispute, your neighbor will probably win; on the other hand, if you are just trying to figure out where to mow the grass, you’re probably safe stepping it off. Neither data set is bad; they just provide different levels of accuracy.
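As a minimal sketch of that distinction (all numbers here are invented for illustration), consider simulating the two survey methods in Python: both are unbiased, so both are “reliable,” but their spreads differ by more than an order of magnitude.

```python
import random

random.seed(42)

TRUE_DISTANCE_M = 30.0  # hypothetical true distance to the property line

def step_it_off():
    # Pacing: unbiased, but each attempt wanders by up to half a meter.
    return TRUE_DISTANCE_M + random.uniform(-0.5, 0.5)

def laser_range_finder():
    # Calibrated instrument: unbiased and tight, within about 2 cm.
    return TRUE_DISTANCE_M + random.uniform(-0.02, 0.02)

paced = [step_it_off() for _ in range(100)]
lasered = [laser_range_finder() for _ in range(100)]

for name, readings in (("pacing", paced), ("laser", lasered)):
    mean = sum(readings) / len(readings)
    spread = max(readings) - min(readings)
    print(f"{name:6s}: mean = {mean:6.3f} m, spread = {spread:.3f} m")
```

Both means land near the true 30 m; only the spread, which is the accuracy you can claim, differs.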

Accuracy is a major consideration in the next source of error: analysis. Just as it is important to consider your available ingredients and tools when you decide what to make for dinner, it is vital to consider the accuracy, type, and amount of data you have when choosing a method for analysis. The primary analysis methods science uses to determine whether the available data support a conclusion are statistical methods. These tests estimate how likely it is that the observed data would arise if a given assumption were true; they are not evidence that a conclusion is correct.

Unfortunately, statistical methods are not one-size-fits-all. The validity of any method depends on properties of the data and the question being tested. Different statistical tests can lead to widely disparate conclusions. In order to provide the best available science, it is vital to choose, or design, the best test for a given question and data set. Even then, two equally valid statistical tests can come to different conclusions, especially if there isn’t very much data or the data have high variability.
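To make that concrete, here is a toy example in Python (the sample values are invented): Welch’s t-test and the Mann-Whitney U test, both perfectly standard, reach opposite conclusions at the usual 0.05 threshold, because a single outlier inflates the variance that the t-test relies on while the rank-based test ignores it.

```python
from scipy import stats

# Hypothetical small samples: every value in b sits above every value
# in a, but one extreme outlier inflates b's variance.
a = [1.1, 1.2, 1.2, 1.3, 1.3, 1.4]
b = [1.5, 1.6, 1.6, 1.7, 1.8, 9.0]

# Welch's t-test compares means; the outlier swamps the variance
# estimate, so it sees no significant difference (p > 0.2 here).
t_stat, t_p = stats.ttest_ind(a, b, equal_var=False)

# Mann-Whitney U compares ranks; complete separation of the groups
# yields a clearly significant difference (p < 0.01 here).
u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")

print(f"Welch's t-test:  p = {t_p:.3f}")
print(f"Mann-Whitney U:  p = {u_p:.4f}")
```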

Here’s the rub… even scientists don’t always understand the analysis methods that they choose. Statistics is a science in itself, and few biologists, chemists, or even physicists are expert statisticians. As the quantity and complexity of data grow, evaluating which analysis method(s) should be used becomes more and more important. Many times a method is chosen for historical reasons – “We’ve always used this method for this type of data because someone did that before.” Errors made by choosing a poor method for the data are sloppy, lazy, bad science.

Better education in statistics will reduce this type of analysis-based error, and open science will make it easier to detect. Another thing we can do is support more team science. If a team includes a statistics expert, it is much less likely to make these types of errors. Finally, we need more statistics-literate editors and reviewers. These positions exist to catch errors in the science, and they need to treat the statistics as part of the experiment, not as the final arbiter of success or failure. High-quality peer review, collaboration, and the transparency created by open data are our best defenses against bad science. We need to strengthen them and put a greater emphasis on justifying analysis methodology choices in scientific discovery.

The Power of Imagination

By Charles Mueller

Sunday was an emotional day.  It was the 15th anniversary of one of the most traumatic days in US history: 9/11.  That day is burned into the memories of the American people because its events defied what we believed was possible.  We will never forget because we will always remember the day the unthinkable became reality.

The official story that came out of the investigations of 9/11 to explain how it was able to occur highlighted a failure to imagine the kinds of horrors terrorists could unleash upon our nation.  In some ways this finding was ironic, because it was our imagination that helped us land on the moon, invent the Internet, and harness the atom, all accomplishments in our climb to become what was then the world’s only remaining superpower.  On 9/11, though, it somehow became our weakness.  By failing to take seriously what might seem impossible, by failing to imagine the extremes people might go to in order to hurt us, we created an opportunity that could be exploited.  The sad reality of that day is that many people saw the signs of what was coming, but we still chose to ignore them; we chose to refrain from imagining it could ever take place.

That day showed the real power of imagination.  If you can imagine it, you can often make it real.  The terrorists imagined all that took place on 9/11 and, because they believed, were able to inflict a wound on this country that may never fully heal.  As we move forward, continuing to recover from that day, we must never forget this lesson; we must never forget the power of imagination.

Today we live at a time when what was once the imagination of science fiction writers is becoming reality.  We are on the cusp of being able to engineer all types of life, including ourselves, to have the traits and properties we desire.  We are on the verge of potentially creating sentient life fundamentally different from our own.  We have tools today that are enabling our imagination to translate into reality.  As amazing as the future can be, days like 9/11 remind us that there exist those who will ultimately try to use these new technologies and their imaginations to make the future worse.  We have to remember this as we start thinking about how to manage this brave new world.

In order to ensure the future is better than today, we have to use our imagination to consider all the different ways it can go right and wrong.  We have to imagine the future we want and then work together to figure out the right path to get there.  We cannot afford another failure of imagination moving forward because S&T has simply made the stakes too high.  Let’s use the power of imagination to create a better world and ensure 9/11 is a day we remember, not relive.

The Future of AI in Healthcare: No Doctors Required

By Kathryn Ziden

Robotics and artificial intelligence (AI) are changing the field of healthcare. Doctors are seemingly open to this change, as long as there still is a place for them in the system. But is this a reality? Will we need doctors in the future? In the short term, yes. In the long term, not likely.

A recent study by the market research firm Frost & Sullivan estimates that the AI market in healthcare will exceed $6 billion by 2021. AI is already making big advances in automated soft-tissue surgery, medical imaging, drug discovery, and perhaps its biggest success so far: using big data analytics to diagnose and treat disease. IBM’s Watson is already being used at 16 cancer institutes, and recently correctly diagnosed a rare form of leukemia in a Japanese woman after conventional (human) methods had failed.

However, on the question of where this leads in the long term, there is a disconnect between technology forecasts and doctors’ opinions. Article after article on the future of AI in healthcare quotes doctors and healthcare professionals as predicting that computers/robots/AI will never be able to replace them. The reasoning of these professionals seems to fall into one of the following arguments, and reflects a we-are-too-big-to-fail attitude or a god complex on the part of the doctors:

#1.) Doctors will always be needed to provide that special, reassuring human factor. One doctor claims that she could never be replaced by a computer because patients routinely leave her office saying how much better they feel after just talking to her. Another doctor adds, “Words alone can heal.”

Rebuttal: Although the human factor may be vital, it does not require a medical degree to provide comfort or solace to a patient. Social workers, nurses or psychologists can fill this role.

#2.) Only doctors can pick up on nuanced subtleties of a patient’s mood, behavior or appearance to make a diagnosis. For example, one doctor posits that if a woman has ovarian pain or a missed menstrual period, a computer would never be able to pick up on the fact that it could be caused by anxiety, stress, depression, a lack of sleep or over-medication, the way a doctor can.

Rebuttal: AI systems could almost certainly probe for these types of secondary causes through a patient’s facial cues, bloodwork and a line of questioning, the same way a doctor currently does.

#3.) Doctors will be assisted by AI in calculating diagnoses and treatment plans, but doctors will still be needed to make the final decision. A doctor’s decision-making process can account for additional variables.

Rebuttal: An AI framework has already been shown to make better medical decisions than human doctors. Doctors are prone to biases and human error; their decisions can be based on emotions, influenced by fatigue from a long day, and limited by the brain’s capacity to store and recall data.

#4.) Computers will never have a doctor’s intuition. Doctors have a “sixth sense” used in their diagnoses. Practicing medicine is an art.

Rebuttal: Intuition is pattern recognition, something computers are much better at. Medicine is an applied science that requires decision making (i.e., the “art” part). AI algorithms are already better at making medical decisions than doctors.
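As a toy illustration of intuition-as-pattern-recognition (the patient numbers below are entirely hypothetical, and this sketch is nothing like a production diagnostic system), a k-nearest-neighbors classifier “diagnoses” a new case purely by its resemblance to cases it has already seen:

```python
from sklearn.neighbors import KNeighborsClassifier

# Each hypothetical patient is [temperature_F, heart_rate, white_cell_count];
# labels: 0 = condition absent, 1 = condition present.
X_train = [
    [98.6, 72, 6.0], [98.4, 68, 5.5], [98.9, 75, 7.0],         # absent
    [101.2, 95, 13.0], [102.5, 102, 15.5], [100.8, 90, 12.0],  # present
]
y_train = [0, 0, 0, 1, 1, 1]

# k-NN classifies by "which past cases does this most resemble?" --
# a mechanical analogue of a clinician's pattern matching.
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

new_patient = [[101.0, 93, 12.5]]
print(model.predict(new_patient))  # [1]: resembles the "present" cases
```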

#5.) Doctors will still be needed to perform physical examinations.

Rebuttal: The idea of a physical exam performed by a doctor is already falling by the wayside. As one doctor puts it, “The physical exam was critical when it was all we had.” And in cases where an exam is needed, there is already a robotic glove that can perform it.

In the next 10 to 20 years, I believe we will see a human-AI hybrid healthcare system, where individuals will be more in control of their own healthcare. Successful doctors will need to change their practices and attitudes to cope with the emergence of new technologies. In the long term, however, our entire healthcare system will likely undergo an AI-ification, and the idea of human doctors and surgeons will be obsolete. But perhaps there will be an increased market for homeopathy or alternative medicine among patients who aren’t ready for this future either.

Doctors’ reluctance or inability to see a future in which they are not needed is a form of optimism bias, something all of us are susceptible to. A 2016 Pew Research Center study showed that 65% of respondents thought that robots and computers will take over jobs currently done by humans within the next 50 years; however, 80% of people said that it won’t be their job that is taken over. There is a disconnect. Nothing is more powerful than human delusion… except perhaps the efficiency and skillfulness of your future AI doctor.

Genetically Engineering Animals Modifies Nature, But That’s Nothing New

By TJ Kasperbauer

Some people want to use genetic modification to restore the American chestnut tree and the black-footed ferret. Some people couldn’t care less. Still others might think chestnut trees are ugly and ferrets are a nuisance. Choosing between these preferences is difficult. But whichever course of action we take, it won’t help to ask if we are conserving pristine nature. Instead, we must accept that we are merely modifying nature—as we have many times before.

Thinking about conservation as modifying nature conflicts with the dominant paradigm of nature preservation. Nature is to be protected, not redesigned. But this view of nature is misguided. We have always influenced nature, even if unintentionally and haphazardly. Genetic modification is only the most recent step in our long history of altering nature.

Many conservationists already accept this view of nature. For instance, some endangered and highly valued species have been relocated in order to improve their chances of survival. Doing so changes the species as well as the surrounding ecosystem—nature is changed. Captive breeding programs also frequently aim to modify the genetic makeup of the species before release. These practices also operate under the assumption that we are constantly changing nature.

Starting these discussions now helps prepare us for policy decisions we will inevitably face in the future. These decisions will be less and less about conservation and more about what we ultimately value and desire. This is difficult because there is such widespread disagreement, as illustrated by recent proposals to relocate pikas, whitebark pine, and many other species.

Just last week the International Union for Conservation of Nature proposed a temporary cessation of field trials and research on genetically modifying nonhuman organisms for conservation. Until the consequences have been properly assessed, they reason, such interventions are too dangerous. This conclusion is sensible—we do need more data. But the data will be quickly forthcoming, and the traditional conservation framework will not be very helpful. We must remember the potential upside of genetic modification: not just to keep what we have, but to build and design what we want.

“Do no harm” is NOT Enough

By Beth Russell

“If it ain’t broke, don’t fix it, right,” my granddad used to say, right before he would wink at me, chuckle, and say “let’s see if we can figure out how to make it better.” This type of ingenuity is at the root of American innovation, invention, and process evolution. Observation, experimentation, and a national drive for optimization are part of our culture. As we have moved from the 20th century into the 21st, there has been a fundamental shift from “one size fits all” solutions toward more personalized solutions.

The Precision Medicine Initiative is one of the great goals of our time. However, most of our medical treatment is still geared toward the treatment that will usually work, rather than the treatment that is the best for the individual patient. What would the world look like if we could change that in years rather than decades? What if we could do it cheaply, and easily, with information that already exists?

We can. To start the process, we need only do one thing – share. Buried within our medical records, our genetics, and our health data is the information we need to make our medical treatments better. Our diverse population, varied hospital and practitioner policies, and personal health decisions together compose an enormous health data set. If we are willing to share our data with researchers and to insist that insurers, hospitals, and practitioners make the data interoperable, we will be well on our way.

We often have widely held medical practices that are not actually supported by scientific data. This is illustrated by a recent decision by the Department of Agriculture and the Department of Health and Human Services to remove daily flossing from their guidelines. Apparently, there was no actual scientific data behind it. Such practices are often low-risk procedures or treatments that do not warrant the expense of a clinical trial. Many of them will probably turn out to be right for most people, but not necessarily for everyone. I for one don’t plan to stop flossing anytime soon.

These sorts of medical practices are typically adopted based upon observation and consensus. This approach is cheap, but it relies on practitioners detecting a pattern of good or bad results, is highly subject to human bias, and is geared much more toward safety than efficacy. There will always be room for common sense and human observation in the medical process, but they will miss both small and rare effects.

For over a century, medicine has been shifting away from simple observation toward data-based decision making. Large observational studies like the Framingham Heart Study and the Nurses’ Health Study have had outsize impacts on medical practice, but they are still too small. Only with many observations from numerous patients can we detect the variations in efficacy and safety that are needed for precision medicine.
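A standard power calculation shows why scale matters. This sketch (a generic two-sample t-test power analysis, not tied to either study named above) reports how many patients per group are needed to detect an effect with 80% power at the usual 0.05 significance level, as the effect shrinks:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Patients per group needed for 80% power at alpha = 0.05,
# as the effect to be detected (Cohen's d) gets smaller.
for d in (0.5, 0.2, 0.1, 0.05):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size d = {d:>4}: ~{n:,.0f} patients per group")
```

Required sample size grows roughly as 1/d², so halving the effect size quadruples the number of patients needed; the subtle subgroup effects that precision medicine cares about quickly demand more data than any single study can supply.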

Today, clinical trials are the gold standard for evaluating medical treatments. These experiments are expensive and time-consuming, and they often suffer from low subject numbers and a lack of diversity. They can also run into ethical issues, especially with vulnerable populations. Even when the results of clinical trials are excellent, they aren’t always adopted by practitioners at first; medicine tends to be slow to adopt change. Data sharing will allow scientific analysis to extend beyond the length of time and number of subjects used in any “trial” and will allow us to better evaluate drugs and treatments after they go to market, not just before they are approved.

Data sharing is also important for areas of medicine in which traditional clinical trials are difficult or impossible to run. One of these areas is surgery. Most surgeries are not subjected to clinical trials, and even relatively common surgeries vary greatly in method from hospital to hospital. How does a patient decide where to get a life-saving surgery? Recommendations from friends and family are the number one method for choosing a doctor. There is no place to look to find out whose favorite method is the best one overall, or the best for the individual patient. This needs to change. Sharing our medical data will make this possible.

Medical practice is poised for a revolution. We are beginning to move from treating the symptoms to treating the person. This can only happen if enough of us are willing to share. So let’s practice our earliest kindergarten lesson already.

Are We Ready for Life at 150?

By Kathryn Ziden

A future in which we live to be 150 years old is no longer far-off or science fiction. Global life expectancy doubled within the last century. Advances in healthcare, precision medicine, gene and immunotherapies, and genetic engineering will likely lead to increased longevity sooner than current trends predict. But are we prepared for this future?

Are we prepared for this future financially? The current system of Social Security and Medicare is failing, facing “long-term financing shortfalls,” according to the Social Security Administration.

A report out earlier this year from The Brookings Institution adds that the gap between lifetime benefits received by poor and less-educated workers versus those received by wealthy, well-educated workers is widening. In addition, age discrimination in the workplace may prevent older generations from working the longer careers that will be financially required of them.

Are we prepared for this future socially? What will the concept of marriage be like, especially given the current prevalence of “gray divorce”? An entirely new healthcare system, perhaps based on AI, will need to be created to deal with the shifting demographics. If careers span 100 years instead of 40, innovation in corporations and universities may stall, hindered by the stagnant ideas of long-standing CEOs and professors.

Are we prepared for this future politically? Increased lifespans coupled with a slowing but still positive population growth rate will lead to a more crowded Earth. Increased competition for resources will likely result in new domestic and international conflicts. Longer lifetimes will also increase the use of public services, placing additional strains on budgets and increasing deficit spending.

Even without major S&T advances, extended longevity is inevitable; it is time to prepare now. The good news is that all of the problems outlined here are fixable if we begin the required dialogue and planning now. A large number of scientists are working in aging, gerontology, longevity, and other biological or medical fields whose work directly affects human life expectancy. It is time for the same commitment from the policy side.

The Smart Grid Needs to be a Safe Grid

By T.J. Kasperbauer

Imagine you wake up one morning to discover that your entire city has lost power. What would you guess is the most likely cause? A tornado? Equipment malfunction? Terrorist attack?

Increasingly, America’s energy grid is under threat from cyberattacks. This is not a new problem, but so far the solutions have been inadequate. In order to improve our energy grid, we must build cybersecurity into its main functions.

One way the U.S. is currently trying to combat cyberattacks is through development of the Smart Grid. Under the Smart Grid, energy production and distribution are decentralized. Decentralization creates redundancies that help prevent a single attack from taking down the whole grid. Devices on the Smart Grid are also in constant communication, which enhances detection of attacks and outages.

The main problem with the Smart Grid is that its interconnectedness produces vulnerabilities. By putting all devices in two-way communication with each other, the Smart Grid increases the number of possible entry points for attacks. Moreover, the Smart Grid connects the energy grid to lots of other “grids.” For instance, household electricity usage can be monitored on the internet. Foreign or domestic adversaries—including lone wolf hackers—could potentially use this sort of connectivity to influence the Smart Grid.
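A toy model in Python (the topology and node names are invented) makes the trade-off concrete: the decentralized design survives an attack that blacks out the centralized one, but it also exposes more links for an adversary to probe.

```python
def powered(substations, links, generators, failed):
    """Return the substations still connected to a surviving generator."""
    live = {g for g in generators if g not in failed}
    frontier = list(live)
    while frontier:
        node = frontier.pop()
        for a, b in links:
            if node in (a, b):
                other = b if node == a else a
                if other not in live and other not in failed:
                    live.add(other)
                    frontier.append(other)
    return live & substations

subs = {f"S{i}" for i in range(6)}  # six hypothetical substations

# Centralized design: a single generator G0 feeds every substation.
hub_links = [("G0", s) for s in sorted(subs)]
print("centralized, G0 attacked:",
      len(powered(subs, hub_links, {"G0"}, failed={"G0"})), "of 6 powered")

# Decentralized design: three generators plus a ring of cross-connections.
mesh_links = ([("G0", "S0"), ("G1", "S2"), ("G2", "S4")]
              + [(f"S{i}", f"S{(i + 1) % 6}") for i in range(6)])
print("decentralized, G0 attacked:",
      len(powered(subs, mesh_links, {"G0", "G1", "G2"}, failed={"G0"})),
      "of 6 powered")

# The trade-off in one line: redundancy rises with the link count,
# and every link is also a potential entry point.
print("entry points:", len(hub_links), "centralized vs", len(mesh_links), "mesh")
```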

Some attempts have been made to address this problem. For instance, DARPA is currently installing automated cybersecurity defense systems into power grids. And the Department of Energy routinely funds projects aimed at testing and improving the cybersecurity of the energy grid ($34 million in August 2016). There are also published guidelines for protecting energy cybersecurity (in 2010 and 2015). These are all important and should continue, but must be better integrated into the Smart Grid as it develops.

In order to preserve the benefits of the Smart Grid, we must build security alongside connectivity. This requires better anticipation of future problems so that security can be designed into grid functions.