Electronic Health Records raise new ethical concerns

By Edward L. Zuckerman, Ph.D.
January 21, 2014



Loss of privacy is commonly thought of as an “unauthorized disclosure” in which the client’s Protected Health Information (PHI) is released to someone not authorized by the client to receive it, usually by accident, such as a misdirected fax or email.

But we now commonly hear of breaches of privacy in which thousands of records are lost or compromised when an unencrypted laptop computer is stolen. Such events violate the ethical principles of beneficence and nonmaleficence. Indeed, there are dozens of ways a client’s privacy can be weakened or violated with paper or electronic records, but I want to discuss some less obvious risks to our ethical promises.

Interoperability

A major goal of Electronic Health Record (EHR) programs is to share information to support comprehensive treatment. This data transferability is termed interoperability, and while beneficial (e.g., preventing needless retesting), it can create risks.

There is a famous video online (http://www.aclu.org/orderingpizza) in which the order taker at a pizza restaurant knows, among other information, the buyer’s health status, the neighborhood’s crime data, library borrowings, travel plans, and credit card details.

As systems and databases become interconnected, it becomes impossible to know who has access to what PHI. The risk to privacy is obvious, but there is also a risk to care providers: how can the clinician obtain informed consent for the use and security of PHI when the clinician doesn’t know, and can’t control, who has access to it? The client’s autonomy cannot be respected.

Of course, HIPAA tries to prevent the unauthorized release of PHI outside “Covered Entities” (and now “Business Associates” and their subcontractors), but we have often traded our privacy for convenience, and “de-identified” data can be distributed almost without limit. Threats to privacy include relatively easy re-identification and efficient data mining, as well as hacking and accidents.
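
To make the re-identification risk concrete, here is a minimal sketch (with invented field names and data) of the classic linkage attack: joining a “de-identified” clinical extract to a public roster on the quasi-identifiers of ZIP code, birth date, and sex, a combination that has been shown to be unique for a large majority of Americans.

```python
# Hypothetical illustration: linking "de-identified" records back to names
# via quasi-identifiers (ZIP code, birth date, sex). Field names and data
# are invented for this sketch.

deidentified_phi = [
    {"zip": "15213", "dob": "1954-07-31", "sex": "F", "dx": "major depression"},
]

public_roster = [  # e.g., a voter-registration list
    {"name": "Jane Doe", "zip": "15213", "dob": "1954-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "15217", "dob": "1960-01-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def reidentify(records, roster):
    """Yield (name, diagnosis) for records matching exactly one roster entry."""
    for rec in records:
        key = tuple(rec[f] for f in QUASI_IDENTIFIERS)
        matches = [p["name"] for p in roster
                   if tuple(p[f] for f in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique match re-identifies the record
            yield matches[0], rec["dx"]

for name, dx in reidentify(deidentified_phi, public_roster):
    print(f"{name} -> {dx}")  # Jane Doe -> major depression
```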

In hospital settings, one way to provide the heightened privacy thought necessary for psychiatric PHI (versus medical PHI) is to allow psychiatric staff to access both psychiatric and medical records while allowing medical staff to access only medical records. Recent evidence indicates this asymmetry actually harms patients.
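
The asymmetric policy amounts to a simple one-way permission matrix. A hypothetical sketch (role and record-type names invented, not any vendor’s implementation):

```python
# Hypothetical sketch of the asymmetric access policy described above.

PERMISSIONS = {
    "psych_staff":   {"psych_record", "medical_record"},
    "medical_staff": {"medical_record"},  # psych records are walled off
}

def can_read(role: str, record_type: str) -> bool:
    """Return True if the role is permitted to read this record type."""
    return record_type in PERMISSIONS.get(role, set())

# The cardiologist treating a heart-attack patient cannot see the
# depression diagnosis that best predicts one-year survival:
assert can_read("psych_staff", "medical_record")
assert not can_read("medical_staff", "psych_record")
```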

“…(D)epression after a heart attack is the number one determinant of whether the patient will be alive one year later” and “… psychiatric patients were 40 percent less likely to be readmitted to the hospital within the first month after discharge in institutions that provided full access to those medical records.”

This artificial dichotomy appears to violate the ethical principles of both nonmaleficence and justice (by treating persons differently and unfairly).

Workflows

Each EHR has its own ways to enter and display data. The frames of this sequence are called templates. The sequence for completing and using them is called a workflow, and you can be sure your own familiar and preferred workflow will not match your EHR’s.

What are you doing when you use an EHR?

• Reading previous entries – often cookie-cutter clutter
• Locating the next form/template and reading ALL the stems/boxes
• Entering data by clicking on the choices offered in the template or typing in other data
• Following the workflow of the program to the next step.

All of these distract you from your clinical work. There is less observation of the patient, and observations are interrupted, so less information is gathered and communicated by each person. There is also a diversion from the relationship, with less eye contact, less conversation, less exploration, and less rapport.

Research suggests that when there is less trust, clients withhold information, creating a barrier between client and clinician. This can’t be in patients’ best interest, and we fail our fiduciary duty to put their needs ahead of our own.

A patient portal

One of the goals of the HITECH Act that has shaped EHRs is making our clinical records much more easily available to the patient. This is incentivized through Meaningful Use criteria, and so all EHRs will have to offer a “patient portal” by next year. Expected benefits include easier communication, patient education, monitoring and feedback, and the simpler provision of administrative documents like Record Releases and Notices of Privacy Practices.

It is expected that such transparency will enhance trust and improve the relationship of client and clinician. Further steps in this openness will include providing clients with the HIPAA-required “access” to their records, which might include our progress notes. The Veterans Administration has enabled this in a program called “Open Notes.”

A physician who practices geriatrics has raised concerns about how patients might be harmed by fairly common notations. For example, the clinician might necessarily and accurately write, “Need to evaluate for Alzheimer’s,” which would frighten anyone. Writing “These suggest he will not act safely at home or driving” would raise the client’s fear of losing independence or of intrusive monitoring. Noting “His family reports to me concerns about his thinking, safety and ADLs” would reveal what the family had expected the clinician to keep private and could lead to family conflict. What is a clinician to do when it is clear that the client’s access would harm the client or the relationship with the clinician?

Making clinical decisions

It has also been widely cited that it takes 17 years for new medical evidence to find its way into practice. By the time that happens, the evidence could be outdated, and so clients suffer from substandard treatment. Similarly, it appears that our professional knowledge deteriorates in validity and usefulness as time passes and new findings accumulate. Recent research suggests that this half-life is about nine years and will shrink to about seven in some fields. An EHR could incorporate ways of reminding clinicians about newer options for diagnostic or treatment decisions.
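
As a back-of-the-envelope illustration (my arithmetic, not the researchers’), the standard exponential-decay formula implies that with a nine-year half-life only about 46% of today’s knowledge would still be considered valid a decade from now, and barely a quarter would survive the 17-year lag into practice:

```python
# Fraction of knowledge still valid after t years, assuming the half-life
# model mentioned above: 0.5 ** (t / h).

def fraction_still_valid(t_years: float, half_life_years: float = 9.0) -> float:
    return 0.5 ** (t_years / half_life_years)

print(f"{fraction_still_valid(10):.0%}")     # ~46% after a decade
print(f"{fraction_still_valid(17):.0%}")     # ~27% after the 17-year lag
print(f"{fraction_still_valid(17, 7):.0%}")  # ~19% with a 7-year half-life
```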

The tools to do this are called Clinical Decision Support Systems (CDSSs). They can provide suggestions from the professional literature and even patient-specific options and reminders based on algorithms. Concerns arise when the clinician must respond to these. Simple prompts and reminders present no ethical concerns and may improve care, just as the routine use of checklists has.
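
A patient-specific reminder of this benign kind can amount to nothing more than a simple rule. A hypothetical sketch (field names invented; real CDSS rule engines are far more elaborate), echoing the depression-after-heart-attack finding cited earlier:

```python
# Hypothetical sketch of a patient-specific CDSS reminder rule.

from datetime import date, timedelta

def depression_screening_reminder(patient: dict) -> str | None:
    """Prompt a depression screen for post-MI patients not screened recently."""
    had_mi = "myocardial infarction" in patient["diagnoses"]
    last_screen = patient.get("last_depression_screen")
    overdue = last_screen is None or date.today() - last_screen > timedelta(days=365)
    if had_mi and overdue:
        return "Reminder: consider a depression screen (e.g., PHQ-9)."
    return None

patient = {"diagnoses": {"myocardial infarction"}, "last_depression_screen": None}
print(depression_screening_reminder(patient))
```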

But suggestions or even advice go beyond this. Are they based on some number of studies or clinical trials, “Guidelines” (from whom?), “Best Practices” (says who?), Cochrane reviews, local norms, statistical considerations, ethical concerns? Who decides on the content of this advice? Who is responsible? Any system of ethics requires locating responsibility, but software is not a person: it cannot practice medicine and cannot be held responsible for the consequences of following or refusing to follow its suggestions. (Each of us remains a “learned intermediary.”) What role can and should such information have in clinical work? All EHR programs require the clinician to agree to “hold harmless” the program’s developer for any negative outcomes, and they don’t even carry a warranty of effectiveness. If the patient is harmed by the clinician’s following the advice, the clinician is left holding the bag.

This arrangement creates a “moral hazard,” in which private parties accrue the benefits while the costs of their actions fall on others or on the public. In such situations we usually turn to the government to enforce responsibility, but here it has abdicated that role: the FDA is responsible for oversight of medical devices, which include EHRs, yet there is no method for publicly reporting safety issues and failures.

Practice management

EHR programs make it possible to discover relationships that are invisible in paper records. The manager of a clinic can integrate data across clinicians and clients by asking the EHR for practice-management reports. For example, suppose clinicians were compared by their outcomes and one was found to have much worse results than another. What should the manager do? Isn’t it unethical to allow some clients to be treated by a less effective clinician? We have taken on obligations to our clients not to harm them (nonmaleficence) and to allow them to decide what is best for them (autonomy).
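
In practice such a report is a simple aggregation across the database. A sketch with invented columns and an invented outcome measure:

```python
# Hypothetical sketch of a practice-management report comparing clinicians
# by average outcome score. Column names and data are invented.

from collections import defaultdict

episodes = [
    {"clinician": "A", "outcome_score": 0.72},
    {"clinician": "A", "outcome_score": 0.68},
    {"clinician": "B", "outcome_score": 0.41},
    {"clinician": "B", "outcome_score": 0.39},
]

scores_by_clinician = defaultdict(list)
for ep in episodes:
    scores_by_clinician[ep["clinician"]].append(ep["outcome_score"])

for clinician, scores in sorted(scores_by_clinician.items()):
    print(f"Clinician {clinician}: mean outcome {sum(scores) / len(scores):.2f}")
# Clinician A: mean outcome 0.70
# Clinician B: mean outcome 0.40
```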

Suppose the manager then asks the system to compare clinicians using patient ratings. Suppose the less effective clinician is beloved by his or her patients. Should the patients have this information? Recall that justice is breached when persons do not have equal access to health information resources and public health services.

These are some of the ethical concerns raised by EHRs but more will become apparent as EHRs become more nearly universal and their functions more elaborate.
