By Bruce Ramshaw, MD
A few months ago, I was in Gdansk, Poland, for the annual meeting of the European Hernia Society. In a session on guidelines, Dr. Jaap Bonjer presented a paper that examined expert consensus on guidelines published by the International Endohernia Society. Guidelines were presented based on levels of evidence, with Level 1 evidence being the highest quality and Level 5 the lowest. Interestingly, in this review, Level 5 guidelines were judged appropriate by experts about 85% of the time (a solid B grade), but amazingly, Level 1 guidelines were judged appropriate only about 60% of the time, a near-failing grade.
How could that be? Based on our current understanding and interpretation of medical guidelines, this makes no sense. But, if we allow ourselves to understand how this could possibly happen, it might help us to understand why our health care system is struggling, and what we might be able to do to improve it.
Because we tend to accept simple explanations for events and concepts that sometimes are quite complicated, we often end up misinterpreting and misunderstanding phenomena that are not simple. Our health care system is one troubling example of this. And because the science we have applied to health care, with its hierarchy of evidence, is based on the investigation of isolated or mechanical systems rather than complex biologic systems, we often are left with incomplete and inaccurate results and interpretations. I believe our misinterpretations and misunderstandings of medical evidence and health care laws have contributed to the current problems in health care.
Let me try to explain.
Misinterpretation of a law can lead to unintended harm and consequences. The misinterpretation of the Health Insurance Portability and Accountability Act (HIPAA) is a prime example. A gut-wrenching demonstration of this error occurred in December 2012, after the Sandy Hook Elementary School shootings in Newtown, Conn. Many parents were denied information about their children by the hospital, with HIPAA cited as the reason. This is a clear misinterpretation of the law and led to significant emotional harm for family members of the involved children.
This event resulted in a Congressional hearing on the misinterpretation of HIPAA. On April 26 this year, Leon Rodriguez, JD, director of the Office of Civil Rights under the Department of Health and Human Services (HHS), testified, “We have never taken enforcement action because [a] provider decided that the best interest of the patient [was] to disclose information to a third party. In fact, of the 80,000 complaints of HIPAA violations that HHS has received, only twelve have resulted in monetary penalties.”
In an article by David Pittman in MedPage Today, which covered the hearing, Mark Rothstein, JD, director of the Institute for Bioethics, Health Policy and Law at the University of Louisville, in Kentucky, was quoted: “Physicians frequently misinterpret what they’re allowed to share under HIPAA. The outcome is that some uses or disclosures permitted by the privacy rule are not allowed by some covered entities, perhaps out of ignorance or an overabundance of caution.”
Although one would think that a daylong Congressional hearing would lead to changes in hospital and physician policies, and that there would be more transparency with patients and family members regarding the use of their information, that has not happened. In fact, I have not found anyone who even knows there was a Congressional hearing on the misinterpretation of HIPAA. What’s more, this was not even the first hearing on the misinterpretation of the law. On Sept. 23, 2003, Richard Campanelli, the director of the Office for Civil Rights at that time, testified, “A number of concerns that have come to our attention actually are not a problem with the rule itself, but rather, misconceptions about the rule.” He continued to explain that the rule “specifically allowed doctors and other providers to share this information for treatment purposes, to obtain payment, or to carry out their day-to-day operations without first having to obtain a patient’s written approval.”
It is my belief that the problem has been that organizations interpreted the rule out of fear of how it could harm them, rather than with a view to how it could help patients and their families gain access to medical information. Laws and rules should be interpreted and applied with one focus: continuously improving the value of care for patients and their families.
Another challenge is to really understand the massive amount of information that is being produced in health care, including information from traditional medical research. If we apply only simplistic levels of understanding to medical problems, we can end up with good intentions that result in unexpected harm and waste. One example of this arises out of the wonderful research done by Drs. Atul Gawande and Peter Pronovost and others. Patient safety and quality improvement have been significant areas of research focus for these physician-scientists. In fact, some of the studies they have led have resulted in central-line and operating room (OR) checklists. In work done with the World Health Organization, a surgical safety checklist has been created and used with varying degrees of success around the world.
The problem is not that a multidisciplinary team developed a tool (a checklist) to address a problem (patient safety/need to improve quality). The problem is the simplistic thinking that a tool such as a checklist can be generalized into the solution for all hospital ORs, intensive care units, etc., in all locations and for every condition. From the perspective of complexity science, the actual solution was the process of getting a diverse team together to come up with a solution and to implement it locally. The solution was not the checklist, but a diverse local team given the authority and resources to implement ideas for process improvement. It is my belief that to make optimum use of this valuable concept, a team that determines a checklist to be an appropriate tool needs to modify that checklist to account for the local differences and conditions that exist in every hospital setting.
The aviation industry often is cited as an example of how to use checklists to improve safety. I have had the privilege of working with Dr. Jerry Berlin, one of the early pioneers in the development of new methods for improving aviation safety. Dr. Berlin has recounted the pain he experienced as he investigated commercial and military accidents of the 1960s, 1970s and 1980s. Checklists had been used for years with some significant success, but there were also many accidents involving risk variables, especially complicated ones, that were entirely inappropriate for inclusion in checklists. A good example was the horrific airport disaster in Tenerife, Canary Islands, in 1977, when two 747s collided on a runway; it remains the deadliest accident in aviation history. The cause, as in many previous accidents, lay in human factors related to communication within the cockpit and between the cockpit and air traffic control. The solution to the communication problem was a long, painful process of transforming the relationships and hierarchy of the traditional aviation culture.
Cockpit (or crew) resource management named a process of change in how the pilot and crew interacted with one another. (By the way, Jerry, an organizational design psychologist, always says pilots and surgeons are very similar psychologically.) Aircraft commanders and captains had to undergo a significant behavioral change and understand that protectiveness of their authority led either to their not listening to a co-pilot or engineer who had discovered a problem, or to the co-pilot or engineer being afraid to speak up for fear of a punitive response. I have heard many of these recordings, and it is chilling to hear one human being reject another’s plea to be heard just minutes or seconds before a fatal crash.
Each airline subsequently developed its own checklists and crew resource training. There is no standardized, one-size-fits-all solution. In fact, in a recent article in the Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine, the authors interviewed experts from a variety of highly reliable organizations, including the aviation industry, about the use of checklists. Their message was consistent and clear: Checklists are tools, not solutions unto themselves. Checklists should be implemented and used locally by those who do the work. None of the authors recommended that a checklist be developed in a standardized way. In fact, they noted that the local group had to have the authority and resources to adapt the checklist over time. This runs counter to the simplistic understanding that prevails among our governing bodies and organizational leaders in health care, and that lack of understanding continues to lead to patient harm and death, the extent of which is unknown and perhaps unknowable.
There is another harm done when simplistic understanding is applied by forcing the implementation of a checklist, as in the example of mandating the use of antibiotics before all operations. The focus gradually shifts, especially when the goal is tied to reimbursement, from improving value for the patient to hitting the target or benchmark.
A friend of mine was the CEO of a chemical company in California. He recently recounted a story from his industry that he related to the situation in health care. He used to sell large amounts of cyanide to gold-mining companies, which used it to erode quarried rock and facilitate extraction of the gold. The miners encountered a problem with ducks dying from exposure to the cyanide after landing on the open tubs. A wildlife foundation discovered this and levied a fine, tallied by spotters, for each duck that was killed; the fines amounted to a large sum of money each year. The mining company addressed the problem by putting up nets and noisemakers, and the method worked: Fewer ducks were killed. But instead of being happy about the success of these efforts, the foundation’s spotters began to use duck calls to entice the ducks to the cyanide tubs. The foundation was missing its revenue from the fines. I cannot vouch for the accuracy of this story, but I believe the analogy to the current state of our health care system is an apt one.
I would like to start a dialogue over the next several months within the pages of General Surgery News about the science of complexity, or complex systems, as applied to patient care. Just as Einstein discovered the incompleteness of Newtonian physics, we are only just beginning to understand the incompleteness of our traditional research and management methods in medicine. I will use the terms “simple” or “simplistic” to describe our traditional thinking, management and research principles, which are based on the understanding of isolated or mechanical systems. In these systems, a cause is always followed by a predictable effect. I will use the terms “complexity” or “complex systems” to describe our complex biological world, in which the same cause (or causes) can have multiple results, some of which are not predictable.
I first began to understand this concept of complex systems when our mesh lab group studied the effects of the body on hernia mesh. We have explanted and studied hundreds of meshes, and we learned that the same mesh (the cause) can have very different effects in different bodies (unpredictable results). This finding helped me to better understand the science of complexity. To learn more about complexity science applied to health care, I refer you to Appendix B of Crossing the Quality Chasm: A New Health System for the 21st Century, written by Paul Plsek (2001, Institute of Medicine; National Academy Press). In this article, “Redesigning Health Care with Insights from the Science of Complex Adaptive Systems,” Mr. Plsek outlined what he saw as the key components of establishing a health care system based on the theory of complex systems.
We are approaching the law of diminishing returns with our current traditional research and organizational management methods in health care. The billions of dollars spent on traditional research methods and technologies, such as electronic medical records, and on our current medical training have not led to much improvement in patient value in our system. A better understanding of complexity science will allow us to apply new concepts of research and management as a natural part of caring for patients. Interestingly, some of these concepts are contained in the HIPAA law from 1996, but we have yet to interpret and apply them.
Continuous improvement research and care coordination in patient management have the potential to keep our focus on the goal of improving the value of care for our patients. That, in turn, will improve value for all the parts of our system that contribute to it. This will take effort in every local environment. There is no single right answer to our health care problem and no simple fix. One of the first objections I heard about implementing a new approach to patient care and research is that we cannot afford to do it. My response is that, clearly, we cannot afford not to.
To view a TEDx talk by Dr. Ramshaw on health care, visit www.youtube.com/watch?v=QPeLIbh0BAw