American Medical News
By — Posted Aug. 19, 2013
The digitization of medical records has given physicians opportunities to do much more with their patients’ records than they were able to do with them in paper form. But have electronic health records lessened the opportunity for the record to be viewed in the appropriate context?
Experts say the most useful patient record will strike a good balance between structured data (data readable by a computer) and a physician’s narrative. This balance is sometimes hard to strike, however, as many electronic health records focus on creating templates meant to capture the structured data.
While templates can speed up the documentation process, they do not provide room for nuance. As patient records become more portable and the number of people involved with a patient’s care increases, it’s increasingly important for the record to tell a patient’s story accurately and thoroughly.
There are ways physicians can learn to create thorough records without sacrificing efficiency. Much of it relies on the technology used for documenting, and some of it involves the physician thinking two or three steps ahead.
Many physicians adopted EHRs to ensure that their patients’ stories are complete, said Lesley Kadlec, director of HIM Practice Excellence at the American Health Information Management Assn. Through templates, the EHRs help physicians document more accurately and efficiently. Although templates are a great tool, she said, they don’t always support good documentation. This is especially true for more complex patients whose stories don’t fit within the confines of standard templates.
There are two ways doctors deal with this: smarter use of free text — typed notes outside the templates — in the record, and speech recognition technology, Kadlec said. “The most important message is to make sure that your method of documentation doesn’t create a barrier to telling the patient’s story,” she added.
One of the biggest complaints Brian Yeaman, MD, heard from physicians after the EHR implementation at Norman (Okla.) Regional Health System was that it forced them to practice cookie-cutter medicine, and that the structured data were too static and added no value to the doctors’ interpretation of the clinical story. In addition to his part-time family practice, Dr. Yeaman serves as the health system’s chief medical informatics officer.
Much of what physicians once dictated is lost with an EHR, he said. That dictation includes information such as the doctor’s thought process, why certain decisions were made, and the doctor’s “if/then” statements that would guide subsequent steps in decision-making. “It’s valuable to the consultants, to the other physicians caring for that patient, and to the nurses who are trying to manage that patient’s care when the doctor is not physically seeing them,” he said.
Albert Lai, PhD, assistant professor in the Dept. of Biomedical Informatics at The Ohio State University College of Medicine, said most EHRs still give physicians the ability to enter free text. But for data to be processed easily, by computers and humans, there needs to be a balance between physician narrative and structured data. Too much narrative can result in important information being buried. Not enough narrative can result in a physician not having enough information about the patient to make a truly informed decision.
“You need to identify the items that everyone’s going to document,” Lai said. “Some of our specialists are asking for more of this data to be structured because, for some of them, they document the same thing for every patient they ever see. So for those types of items, having the structured data is great — you can just type it in and move on. You don’t have to type in a whole narrative about it.
“But on the other hand, there are things that should never be structured, [such as] talking about the scenario in which [the patient] came in or the background of the patient. Structuring that is going to be very difficult.”
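The balance Lai describes can be sketched as a record that keeps routinely repeated findings in machine-readable fields while leaving context in free text. This is a minimal illustration only; the field names and values below are hypothetical and not drawn from any particular EHR product.

```python
from dataclasses import dataclass, field

@dataclass
class Encounter:
    # Structured data: readable by a computer, easy to query and trend.
    patient_id: str
    blood_pressure: tuple  # (systolic, diastolic) in mm Hg
    problem_list: list = field(default_factory=list)
    # Narrative: the physician's free-text account of the visit, holding
    # the context and "if/then" reasoning that resist templating.
    narrative: str = ""

visit = Encounter(
    patient_id="12345",
    blood_pressure=(142, 91),
    problem_list=["hypertension"],
    narrative=(
        "Patient missed two doses of lisinopril while traveling. If home "
        "readings stay above 140/90 for two weeks, plan to increase the "
        "dose at follow-up."
    ),
)

# Structured fields support computation...
systolic, diastolic = visit.blood_pressure
print(systolic > 140)  # flags an elevated systolic reading

# ...while the narrative preserves the story behind the numbers.
print("lisinopril" in visit.narrative)
```

The design choice mirrors the experts’ point: the blood pressure can be trended automatically across visits, but the reason the pressure is elevated survives only in the narrative.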
As physicians are entering information, they should think about the next time someone else opens that patient’s record, experts advise. Figure out what information is not captured in the data, and how it can be conveyed through the narrative. They also should think about what information in the narrative might come off as confusing or incomplete.
Most physicians can answer a question about a patient while they are in the exam room with them, said Nick van Terheyden, MD, chief medical information officer of Nuance Communications, a technology company that develops speech recognition and clinical language processing technology. “But if you wait five minutes, five hours or five days … it’s going to be harder for me.” Technology can be applied that will allow information about that visit to be captured at the point of care so that it can be recalled at any time.
Even though most EHR systems allow physicians to enter narratives in free text form, it’s very time consuming to enter all of that text and structured data into the appropriate fields, Dr. Yeaman said.
In 2012, Dr. Yeaman started using a speech recognition technology that captures the dictated note and creates a text narrative. Then, using clinical language processing (natural language processing with medical specificity), the system identifies text that can be converted to structured data. For example, if a doctor’s dictation mentions that a patient has a heart arrhythmia, the arrhythmia is added to the patient’s problem list.
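The core idea, promoting terms found in dictated free text onto a structured problem list, can be sketched with a toy keyword matcher. Real clinical language processing systems use full natural language processing pipelines; the term table and function below are purely illustrative assumptions.

```python
# Toy sketch only: map known condition phrases in a dictated narrative to
# structured problem-list entries. Real systems handle synonyms, negation
# and context with far more sophistication than substring matching.
CONDITION_TERMS = {
    "heart arrhythmia": "Cardiac arrhythmia",
    "atrial fibrillation": "Atrial fibrillation",
    "hypertension": "Hypertension",
}

def extract_problems(narrative: str) -> list:
    """Return structured problem-list entries found in free text."""
    text = narrative.lower()
    return [entry for term, entry in CONDITION_TERMS.items() if term in text]

note = ("Patient reports palpitations; exam and ECG confirm a heart "
        "arrhythmia, likely related to longstanding hypertension.")
print(extract_problems(note))  # ['Cardiac arrhythmia', 'Hypertension']
```

In the article’s example, the dictated phrase “heart arrhythmia” would land on the problem list this way, without the doctor re-entering it as structured data.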
“We, as clinicians, focus on talking and narrating the patient’s story. There is a richness embedded in that narrative,” Dr. van Terheyden said. But for the data to be processed, mined or analyzed by a computer, there must be elements of the record that are converted to structured data. Clinical language processing helps bridge that gap, he said.
Lai said speech recognition is a good option for physicians as long as there are audits of its accuracy. The technology has come a long way, but there still are opportunities for inaccuracies. A condition that was expressed verbally as a family history could be interpreted by the technology as a condition related to the patient.
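Lai’s family-history example shows why such audits matter: a naive matcher attributes any mentioned condition to the patient, even when the narrative assigns it to a relative. The context check below is a deliberately simplistic sketch with hypothetical cue words; production systems need much more robust handling of family history and negation.

```python
# Sketch of the misattribution risk: decide whether a condition mention
# belongs to the patient or to a family member. Cue-word matching like
# this is illustrative only, not a real clinical NLP technique.
FAMILY_CUES = ("mother", "father", "brother", "sister", "family history")

def attribute_condition(sentence: str, condition: str):
    """Return 'patient' or 'family' for a condition mention, or None."""
    text = sentence.lower()
    if condition not in text:
        return None
    return "family" if any(cue in text for cue in FAMILY_CUES) else "patient"

print(attribute_condition("Patient has diabetes.", "diabetes"))
# 'patient'
print(attribute_condition("Mother has diabetes.", "diabetes"))
# 'family' -- a matcher without this check would chart it as the patient's
```

An audit of the kind Lai recommends would compare such automated attributions against the physician’s intent and catch the family-history cases the recognizer mislabels.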
“To some extent, speech recognition is error-prone,” Lai said. It works with about a 90% to 95% accuracy rate, he said. “But humans are probably not much better, to be honest.”