The Future of AI in Healthcare: No Doctors Required

By Kathryn Ziden

Robotics and artificial intelligence (AI) are changing the field of healthcare. Doctors are seemingly open to this change, as long as there is still a place for them in the system. But is this a reality? Will we need doctors in the future? In the short term, yes. In the long term, not likely.

A recent study by the market research firm Frost & Sullivan estimates the AI market in healthcare will exceed $6 billion by 2021. AI is already making big advances in automated soft-tissue surgery, medical imaging, drug discovery, and perhaps its biggest success so far: using big data analytics to diagnose and treat disease. IBM’s Watson is already being used at 16 cancer institutes, and recently correctly diagnosed a rare form of leukemia in a Japanese woman after conventional (human) methods had failed.

However, on the question of where this leads in the long term, there is a disconnect between technology forecasts and doctors’ opinions. Article after article on the future of AI in healthcare quotes doctors and healthcare professionals as predicting that computers/robots/AI will never be able to replace them. The reasoning of these professionals seems to fall into one of the following arguments, and reflects a we-are-too-big-to-fail attitude or a god complex on the part of the doctors:

#1.) Doctors will always be needed to give that special, reassuring human factor. One doctor claims that she could never be replaced by a computer because patients routinely leave her office saying how much better they feel after just talking to her. Another doctor adds, “Words alone can heal.”

Rebuttal: Although the human factor may be vital, it does not require a medical degree to provide comfort or solace to a patient. Social workers, nurses or psychologists can fill this role.

#2.) Only doctors can pick up on nuanced subtleties of a patient’s mood, behavior or appearance to make a diagnosis. For example, one doctor posits that if a woman has ovarian pain or a missed menstrual period, a computer would never be able to pick up on the fact that it could be caused by anxiety, stress, depression, a lack of sleep or over-medication, the way a doctor can.

Rebuttal: AI systems could almost certainly probe for these types of secondary causes through a patient’s facial cues, bloodwork and a line of questioning, the same way a doctor currently does.

#3.) Doctors will be assisted by AI in calculating diagnoses and treatment plans, but doctors will still be needed to make the final decision. A doctor’s decision-making process can account for additional variables.

Rebuttal: An AI framework has already been shown to make better medical decisions than human doctors. Doctors are prone to biases and human error; their decisions can be based on emotions, influenced by fatigue from a long day, and limited by the brain’s capacity to store and recall data.

#4.) Computers will never have a doctor’s intuition. Doctors have a “sixth sense” used in their diagnoses. Practicing medicine is an art.

Rebuttal: Intuition is pattern recognition, something computers are much better at than humans. Medicine is an applied science that requires decision making (i.e., the “art” part). AI algorithms are already better at making medical decisions than doctors.

#5.) Doctors will still be needed to perform physical examinations.

Rebuttal: The idea of having a physical exam by a doctor is already going by the wayside. As one doctor puts it, “The physical exam was critical when it was all we had.” And in cases where a physical exam is needed, there is already a robotic glove that can perform a physical exam.

In the next 10 to 20 years, I believe we will see a human-AI hybrid healthcare system, where individuals will be more in control of their own healthcare. Successful doctors will need to change their practices and attitudes to cope with the emergence of new technologies. In the long term, however, our entire healthcare system will likely undergo an AI-ification, and the idea of human doctors and surgeons will be obsolete. But perhaps there will be an increased market for homeopathy or alternative medicine by patients who aren’t ready for this future either.

Doctors’ reluctance or inability to see a future in which they are not needed is a form of optimism bias, something all of us are susceptible to. A 2016 Pew Research Center study showed that 65% of respondents thought that robots and computers would take over jobs currently done by humans within the next 50 years; however, 80% of people said that it won’t be their own job that is taken over. There is a disconnect. Nothing is more powerful than human delusion… except perhaps the efficiency and skillfulness of your future AI doctor.


Genetically Engineering Animals Modifies Nature, But That’s Nothing New

By TJ Kasperbauer

Some people want to use genetic modification to restore the American chestnut tree and the black-footed ferret. Some people couldn’t care less. Still others might think chestnut trees are ugly and ferrets are a nuisance. Choosing between these preferences is difficult. But whichever course of action we take, it won’t help to ask if we are conserving pristine nature. Instead, we must accept that we are merely modifying nature—as we have many times before.

Thinking about conservation as modifying nature conflicts with the dominant paradigm of nature preservation. Nature is to be protected, not redesigned. But this view of nature is misguided. We have always influenced nature, even if unintentionally and haphazardly. Genetic modification is only the most recent step in our long history of altering nature.

Many conservationists already accept this view of nature. For instance, some endangered and highly valued species have been relocated in order to improve their chances of survival. Doing so changes the species as well as the surrounding ecosystem—nature is changed. Captive breeding programs also frequently aim to modify the genetic makeup of the species before release. These practices also operate under the assumption that we are constantly changing nature.

Starting these discussions now helps prepare us for policy decisions we will inevitably face in the future. These decisions will be less and less about conservation and more about what we ultimately value and desire. This is difficult because there is such widespread disagreement, as illustrated by recent proposals to relocate pikas, white bark pine, and many others.

Just last week the International Union for Conservation of Nature proposed a temporary cessation of field trials and research on genetically modifying nonhuman organisms for conservation. Until the consequences have been properly assessed, they reason, such interventions are too dangerous. This conclusion is sensible—we do need more data. But the data will be quickly forthcoming, and the traditional conservation framework will not be very helpful. We must remember the potential upshot of genetic modification: not just to keep what we have, but to build and design what we want.

“Do no harm” is NOT Enough

By Beth Russell

“If it ain’t broke, don’t fix it, right,” my granddad used to say, right before he would wink at me, chuckle, and say “let’s see if we can figure out how to make it better.” This type of ingenuity is at the root of American innovation, invention, and process evolution. Observation, experimentation, and a national drive for optimization are part of our culture. As we have moved from the 20th century into the 21st, there has been a fundamental shift from “one size fits all” solutions, towards more personalized solutions.

The Precision Medicine Initiative is one of the great goals of our time. However, most of our medical treatment is still geared toward the treatment that will usually work, rather than the treatment that is the best for the individual patient. What would the world look like if we could change that in years rather than decades? What if we could do it cheaply, and easily, with information that already exists?

We can. To start the process, we need only to do one thing – to share. Buried within our medical records, our genetics, and our health data is the information that we need to make our medical treatments better. Our diversity in population, in hospital and practitioner policies, and in personal health decisions composes an enormous health data set. If we are willing to share our data with researchers and to insist that insurers, hospitals, and practitioners make sure the data is interoperable, we will be well on our way.

We often have widely held medical practices that are not actually supported by scientific data. This is illustrated by a recent decision by the Department of Agriculture and the Department of Health and Human Services to remove daily flossing from their guidelines. Apparently, there was no actual scientific data behind it. Such practices are often low-risk procedures or treatments that do not warrant the expense of a clinical trial. Many of these will probably turn out to be accurate for most people, but not necessarily for everyone. I for one don’t plan to stop flossing anytime soon.

These sorts of medical practices are typically adopted based upon observation and consensus. This approach is cheap, but it relies on practitioners detecting a pattern of good or bad results, is highly subject to human bias, and is geared much more toward safety than efficacy. There will always be room for common sense and human observation in the medical process, but they will miss both small and rare effects.

For over a century, medicine has been shifting away from simple observation toward data-based decision making. Large observational studies like the Framingham Heart Study and the Nurses’ Health Study have had outsize impacts on medical practice, but they are still too small. Only with many observations from numerous patients can we detect the variations in efficacy and safety that precision medicine requires.

Today, clinical trials are the gold standard for medical treatments. These experiments are expensive, time-consuming, and often suffer from low subject numbers and a lack of diversity. They can also run into ethical issues, especially with vulnerable populations. Even when the results of clinical trials are excellent, they aren’t always adopted initially by practitioners; medicine tends to be slow to adopt change. Data sharing will allow scientific analysis to extend beyond the length of time and number of subjects used in any “trial” and will allow us to better evaluate drugs and treatments after they go to market, not just before they are approved.

Data sharing is also important for areas of medicine for which traditional clinical trials are difficult or impossible to run. One of these areas is surgery. Most surgeries are not subjected to clinical trials and there is great variation in the methods for even relatively common surgeries from hospital to hospital. How does a patient decide where to get a life-saving surgery? Recommendations from friends and family are the number one method for choosing a doctor. There is no place to look to find out whose favorite method is the best one overall, nor the best for the individual patient. This needs to change. Sharing our medical data will make this possible.

Medical practice is poised for a revolution. We are beginning to move from treating the symptoms to treating the person. This can only happen if enough of us are willing to share. So let’s practice our earliest kindergarten lesson already.

Are We Ready for Life at 150?

By Kathryn Ziden

A future in which we live to be 150 years old is no longer far-off or science fiction. Global life expectancy roughly doubled within the last century. Advances in healthcare, precision medicine, gene and immunotherapies and genetic engineering will likely lead to increased longevity sooner than current trends predict. But are we prepared for this future?

Are we prepared for this future financially? The current system of Social Security and Medicare is failing, facing “long-term financing shortfalls,” according to the Social Security Administration.

A report out earlier this year from The Brookings Institution adds that the gap between lifetime benefits received by poor and less-educated workers versus those received by wealthy, well-educated workers is widening. In addition, age discrimination in the workplace may prevent older generations from working the longer careers that will be financially required of them.

Are we prepared for this future socially? What will the concept of marriage be like, especially given the current prevalence of “gray divorce”? An entirely new healthcare system, perhaps based on A.I., will need to be created to deal with the shifting demographics. If careers span 100 years instead of 40, innovation in corporations and universities may stall, hindered by the stagnant ideas of long-standing CEOs and professors.

Are we prepared for this future politically? Increased lifespans coupled with a slowing, but still positive population growth rate will lead to a more crowded Earth. Increased competition for resources will likely result in new domestic and international conflicts. Longer lifetimes will also increase the use of public services, placing additional strains on budgets and increasing deficit spending.

Even without major S&T advances, extended longevity is inevitable; it is time to prepare now. The good news is that all of the problems outlined here are fixable, if we begin the required dialogue and planning now. A large number of scientists work in aging, gerontology, longevity, and other biological or medical fields whose research directly affects human life expectancy. It is time for the same commitment from the policy side.

The Smart Grid Needs to be a Safe Grid

By T.J. Kasperbauer

Imagine you wake up one morning to discover that your entire city has lost power. What would you guess is the most likely cause? A tornado? Equipment malfunction? Terrorist attack?

Increasingly, America’s energy grid is under threat from cyberattacks. This is not a new problem, but so far the solutions have been inadequate. In order to improve our energy grid, we must build cybersecurity into its main functions.

One way the U.S. is currently trying to combat cyberattacks is through development of the Smart Grid. Under the Smart Grid, energy production and distribution are decentralized. Decentralization creates redundancies that help prevent a single attack from taking down the whole grid. Devices on the Smart Grid are also in constant communication, which enhances detection of attacks and outages.

The main problem with the Smart Grid is that its interconnectedness produces vulnerabilities. By putting all devices in two-way communication with each other, the Smart Grid increases the number of possible entry points for attacks. Moreover, the Smart Grid connects the energy grid to lots of other “grids.” For instance, household electricity usage can be monitored on the internet. Foreign or domestic adversaries, including lone wolf hackers, could potentially exploit this sort of connectivity to influence the Smart Grid.

Some attempts have been made to address this problem. For instance, DARPA is currently installing automated cybersecurity defense systems into power grids. And the Department of Energy routinely funds projects aimed at testing and improving the cybersecurity of the energy grid ($34 million in August 2016). There are also published guidelines for protecting energy cybersecurity (in 2010 and 2015). These are all important and should continue, but must be better integrated into the Smart Grid as it develops.

In order to preserve the benefits of the Smart Grid, we must build security alongside connectivity. This requires better anticipation of future problems in order to design security into grid functions.

Don’t Be Afraid of Our Bright Future

By Charles Mueller

The story of human history has been about becoming healthier, smarter and stronger. We have always been searching for ways to overcome the limitations imposed on us by Mother Nature using science and technology. Through a conscious effort aimed at making us the best we can be, we have proven time and again that we can make what was once impossible, possible, always improving our way of life along the way.

So then why did a Pew Research Center survey find evidence to support the claim that the majority of Americans are afraid of the technologies on the horizon that will make us healthier, smarter and stronger? Why are we afraid of enhancing ourselves with bio- and neuro-technologies that can help us fight off disease or perform miracles like restoring vision to the blind?

The reality is we’ve been using S&T to improve our lives for as long as we could. Today millions can walk because of prosthetics, breathe because of organ transplants, and access the largest database of human knowledge in the blink of an eye thanks to the Internet and their smartphones. We love technology, and modern advancements, while mysterious in how they work to most of us, are just the next phase of what we’ve always been doing.

We need these next generation human enhancement technologies. Their proper use today could drastically improve the quality of life for billions. Aside from that, humans are a fragile, intelligent and creative species. These technologies, if developed and applied in the right ways, can help us overcome our fragility, increase our intelligence and expand our creativity. The future versions of us will have very different problems than the ones of today, and ensuring they have the tools to survive their challenges, which might range from dealing with a natural ice age to colonizing another planet, is the greatest gift we could hope to give. These tools, properly developed, are that gift.

Using these technologies is the first step in developing the knowledge of how to properly develop, manage and control them. It is the first step in learning how to control and adapt our human systems to the environments of the future, be they here on Earth or out in the cosmos. We will never be able to remove all the risk associated with their use, and there are bound to be accidents, but as humans we take equivalent risks all the time, every day. It is good we are starting this conversation, because it means there is public pressure to ensure we evolve these technologies with foresight and caution. However, we have to ensure the dialogue doesn’t halt the progress these tools promise. Abandoning a transparent, global pursuit of these technologies will only relegate their development to the shadows, an environment primed to foster our greatest fears.

We need to continue to embrace the technologies that will help us grow to be healthier, smarter and stronger, not be afraid of them. These tools can help us start evolving ourselves with some foresight instead of blindly hoping we get to where we need to go. We need these new human enhancement technologies, so let’s figure out how to manage this reality instead of denying it. Our future literally depends on it.

The Box is Open, Now What?

Charles Mueller

Today we have the ability to modify the DNA of any organism we can isolate. Yet we still don’t know precisely how those changes will translate into new behaviors.

In the latest example, the people of Key Haven, Florida are about to be part of a new medical experiment, approved by the FDA and to be carried out by a company called Oxitec. The company is planning to release millions of genetically modified mosquitos into the wild in hopes of containing the spread of the Zika virus. Really cool idea, but do we know if there are any potential negative consequences? Well, according to the FDA’s Environmental Assessment, the people of Key Haven have nothing to worry about. How exactly was the FDA able to make such a call?

Most of the ability to say that certain genetic modifications in other species (or even humans) will not have an impact on human health is based on laboratory data and existing biological theory, not on actual direct evidence like human clinical trials. There would be no problem with this except for the fact that laboratory data rarely translates into the clinic, and our existing biological theory is incomplete, routinely riddled with “exceptions” that are only understood in hindsight. The process therefore banks on a scientific consensus that boils down to an educated prediction. So when the FDA reviewed Oxitec’s data and the theories it cited, it was simply not possible for them to say with certainty that releasing genetically modified mosquitos into the wild will not have an impact on human health or the environment; no direct evidence exists to support such a claim, nor a solid theory to back it up.

As scientists, we want to test our ideas and challenge our theories, but we have to do it wisely.  We have to do it with foresight and we need to accept that we may need to move more slowly towards the really exciting experiments.  It is our job to ensure we don’t become cowboys firing off experiments with unknown consequences whenever we gather enough support or have a nice financial incentive (Oxitec looks to make $400M off this technology).  We need to be humble, we need to move forward, but we must always remain cautious when our experiments are potentially playing in a sandbox we’ve never played in before.

In order to move forward properly, we need to accept that we probably don’t know as much as we think we do. If we are going to continue to alter the DNA of organisms and the nature of ecosystems, let’s at least make sure we are doing our best to collect all the data about what is changing when we do this and obtain consent from the people potentially affected. If we do that, we can use the information to better inform our policies on how to appropriately design and manage these new “experiments”.

Pandora’s box is open and the situation surrounding the use of the Oxitec mosquito is just the hot issue in the news today.  We need a strategy to fill our gaps in the knowledge of biological sciences and in how to manage this awesome power over how life on this planet exists and evolves.