In 1974 Peter Szolovits made a prediction: By the 1980s, a majority of large hospitals would adopt electronic medical records. While the technology did not progress as quickly as expected, the U.S. government is currently making a major push to have hospitals switch from paper folders stuffed with memos to a secure and efficient electronic system for collecting, storing and retrieving medical records. Now Szolovits — a professor of computer science and engineering, health sciences and technology, and the leader of the Clinical Decision Making Group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) — is at the forefront of the movement to make health care information technology more effective and useful for both professionals and patients.
Today, Szolovits is focused on using natural-language processing to make better sense of medical records, which often contain unstructured narrative notes and abbreviations. As part of the government’s push toward electronic medical records, Szolovits is participating in one of four Strategic Health IT Advanced Research Projects (SHARP) of the Office of the National Coordinator for Health Information Technology. Led by colleagues at the Mayo Clinic, the project focuses on secondary uses of electronic medical records, providing a way to analyze clinical data sets for research, quality assessment and support of public health. The principal technical challenges include developing natural-language processing methods that allow computer programs to automatically extract clinical facts, events and relationships from the narrative text in records, and to combine the resulting data with existing information from laboratory tests and prescription orders to identify patients’ conditions and current treatments.
“My interest in natural language processing was rekindled about 12 years ago by the observation that a lot of critical medical data was actually locked up in these narrative notes and we had to have some way of digging them out in order to make use of them,” Szolovits says.
William Long, a principal research scientist in the Clinical Decision Making Group who has worked with Szolovits for more than 30 years, has developed several programs that scan electronic nurses’ reports and intensive care unit (ICU) discharge summaries, searching for keywords and phrases that provide clues to a patient’s condition. Relying on information gathered from the Unified Medical Language System (UMLS), a compilation of more than 150 medical dictionaries, the Clinical Decision Making Group has programmed the system to identify a comprehensive list of terms and key concepts. The program can then offer doctors a brief summary of the compiled information.
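The core idea behind this kind of term spotting can be sketched in a few lines. This is a minimal, illustrative example only: the phrase-to-concept dictionary below is a toy stand-in (the real UMLS contains millions of terms), and the note text and concept names are invented for the demonstration.

```python
import re

# Toy stand-in for a UMLS-style dictionary: surface phrases and common
# abbreviations mapped to the clinical concepts they denote.
# (Illustrative only -- not actual UMLS content.)
CONCEPTS = {
    "sob": "shortness of breath",
    "shortness of breath": "shortness of breath",
    "htn": "hypertension",
    "elevated blood pressure": "hypertension",
    "afib": "atrial fibrillation",
}

def extract_concepts(note: str) -> set:
    """Return the set of concepts whose phrases appear in a narrative note."""
    text = note.lower()
    found = set()
    for phrase, concept in CONCEPTS.items():
        # Word-boundary match so "sob" does not fire inside "sober".
        if re.search(r"\b" + re.escape(phrase) + r"\b", text):
            found.add(concept)
    return found

note = "Pt c/o SOB overnight; hx of HTN, BP now controlled."
print(sorted(extract_concepts(note)))  # -> ['hypertension', 'shortness of breath']
```

Real systems go well beyond exact phrase lookup (handling negation, misspellings and context), but the dictionary-driven matching shown here is the starting point for turning free-text notes into structured concepts.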
Thus far, this technology has been used more for clinical research than diagnostic purposes. For example, it was used to assemble a group of patients who all suffered from the same medical condition but responded differently to identical treatments. Thanks to systems that can organize and parse medical records, physicians can uncover whether genetics, other medications or personal habits affected patients’ outcomes, and learn more about which treatments are most effective.
Szolovits still believes in applying artificial intelligence to the diagnostic process, but in a different manner than he originally envisioned. He and his collaborators — Roger Mark; George Verghese; and colleagues at CSAIL, the Laboratory for Information and Decision Systems, Harvard-MIT Health Sciences and Technology, and the Beth Israel Deaconess Medical Center — have collected data on approximately 35,000 ICU admissions to a major Boston hospital. CSAIL graduate Caleb Hug used the data to create predictive models that, for each significant change in a patient’s state, estimate how they are likely to fare in the future. Such acuity models can warn clinicians of danger and are also useful in determining the resources needed to assist a particular patient.
The same methodology can also be used to make more fine-grained predictions. Hug’s research has applied this technique to predicting when it is safe to wean patients from life-assistance machines such as ventilators and intra-aortic balloon pumps, as well as whether a patient has a high probability of experiencing septic shock, hypotension or renal failure.
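The shape of such an acuity model can be sketched as a simple logistic score over a patient’s current measurements, recomputed whenever the patient’s state changes. To be clear, everything below is an illustrative assumption: the input variables and hand-set coefficients are invented for the example, whereas the actual models in Hug’s work were learned from the ICU admission data.

```python
import math

# Illustrative acuity score: a logistic function of a few vital signs.
# The weights and bias are invented for this sketch -- real models are
# fit to data, not written by hand.
WEIGHTS = {"heart_rate": 0.03, "lactate": 0.8, "mean_arterial_pressure": -0.05}
BIAS = -1.0

def acuity_risk(vitals: dict) -> float:
    """Estimate the probability (0..1) of a bad outcome from current vitals."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in vitals.items())
    return 1.0 / (1.0 + math.exp(-z))

# Recomputed at each significant change in the patient's state:
stable = {"heart_rate": 75, "lactate": 1.0, "mean_arterial_pressure": 90}
deteriorating = {"heart_rate": 130, "lactate": 4.5, "mean_arterial_pressure": 55}
print(acuity_risk(stable) < acuity_risk(deteriorating))  # True
```

The same scoring machinery, trained on different outcome labels, yields the finer-grained predictors described above: one model for readiness to wean from a ventilator, another for the risk of septic shock, and so on.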
Outside the ICU, the group is tackling the issue of recording interactions between doctor and patient, and translating those conversations into usable information. In a project underway at Children’s Hospital in Boston, group members have recorded about 100 encounters in the Pediatric Environmental Health Center, where doctors often see cases of lead poisoning in children. Each doctor-patient interaction is recorded, transcribed into text using a speech-understanding program, and then analyzed for key terms and phrases with a natural-language processing program developed by associate professor of computer science and engineering Regina Barzilay and her group.
While the project has proved challenging due to the difficulties of building a speech-understanding component with high accuracy, Szolovits believes it could make medical visits more efficient for doctors. In the future, Szolovits says the technology could also be used by hospital nurses, so that they could focus more on patient care rather than compiling notes.
Despite the difficulties posed by transitioning to more technologically advanced systems, Szolovits and Long strongly believe that natural-language processing programs like the ones developed by the Clinical Decision Making Group could dramatically improve medical care.
“It will, I think, revolutionize health care. It will make it much easier to get all of the information in a form that we can actually do something with and process it in ways that will benefit everyone,” Long says. “Doctors are going to benefit by being able to look over information from years of treating patients and turn that into a study of what works and what doesn’t work, how to improve the process of medicine and how to treat patients better.”
By Abby Abazorius, CSAIL