Can AI Boost Safety and Quality in Patient Care?

[Image: a robot hand holding a stethoscope]

Using a risk-based lens and keeping patient care top of mind are key to getting it right

By Stuart Foxman

For years, Dr. Joan Chan spent two or three hours each evening completing charts for her family practice. It ate into her personal time and left her feeling exhausted. About 18 months ago, she took a course on charting efficiency and got into the habit of opening her EMR and taking notes while seeing patients. This eliminated after-hours work but posed other problems.

“I was typing as I was listening to patients, sometimes with my back to them, half distracted,” says the Guelph physician. “Your attention is always split. And my notes would be cursory.” 

Now, Dr. Chan’s notes are thorough, with minimal time spent, and she’s much more attentive to her patients. How? By turning to artificial intelligence (AI) and using a digital scribe assistant.

This browser-based app records patient visits (after informed consent), creates a transcript, then turns that into a summary note in the physician’s preferred format, using one of several available templates. It now takes Dr. Chan only a few minutes per patient to edit the notes, which she usually does during breaks. For Dr. Chan, this isn’t just a tool to relieve an administrative burden; it’s a way to support better patient encounters.
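
To make that workflow concrete, here is a minimal sketch of how such a pipeline might be assembled. It is not any vendor’s implementation: transcribe() and summarize() are hypothetical stand-ins for the speech-to-text and language-model services a real product would call, and the clinical content is invented.

```python
# Illustrative sketch of a digital-scribe pipeline -- not any vendor's product.
# transcribe() and summarize() are hypothetical stand-ins for real services.

from dataclasses import dataclass


@dataclass
class VisitNote:
    transcript: str
    summary: str


SOAP_TEMPLATE = (
    "Subjective: {subjective}\nObjective: {objective}\n"
    "Assessment: {assessment}\nPlan: {plan}"
)


def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text call (e.g., a cloud ASR service)."""
    return "Patient reports three days of sore throat and mild fever..."


def summarize(transcript: str, template: str) -> str:
    """Placeholder for a language-model call that maps the transcript onto
    the physician's preferred note template."""
    return template.format(
        subjective="3 days of sore throat, mild fever",
        objective="Temp 37.9 C, pharyngeal erythema, no exudate",
        assessment="Likely viral pharyngitis",
        plan="Supportive care; return if symptoms worsen",
    )


def draft_note(audio_path: str, consent_given: bool) -> VisitNote:
    # Recording proceeds only with the patient's informed consent,
    # as described in the article.
    if not consent_given:
        raise PermissionError("Patient consent is required before recording.")
    transcript = transcribe(audio_path)
    return VisitNote(transcript, summarize(transcript, SOAP_TEMPLATE))


# The physician still reviews and edits the draft before signing off.
note = draft_note("visit_recording.wav", consent_given=True)
print(note.summary)
```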

“I feel more engaged. This has allowed me to focus more, notice body language and the emotions in the room, and be more present,” she says. “That’s the primary value I bring to patients, my connection to them and a curious mind. That’s why I wanted to be a clinician. So, I’m winnowing down the things that distract me from that.”

Digital scribes are just one example of how AI technology is making its way into medicine. AI promises progress across practice management, patient modelling, diagnostics, triage, clinical decision-making, resource allocation, risk prediction and interventions, digital coaching, device integration and more.

In the simplest terms, AI uses computer systems and machines to mimic the way humans think in order to perform complex tasks, such as analyzing, reasoning and learning. In health care, as in other sectors, reduced costs and increased efficiency are key outcomes. But what matters most is the impact on safety, quality, population health, the well-being of care teams and the patient experience.

Dr. Anil Chopra, CPSO’s Associate Registrar and an emergency physician, says while these tools offer opportunities, the risks need to be understood. “As the information output by any AI application may contain errors or direct the physician to take an incorrect action, physicians must carefully review the information for accuracy in the clinical context,” he says.

Broad applications address pain points 

For primary care doctors, the scribe function is an intriguing early use case: it addresses a common pain point while also enabling improved patient interactions, says Dr. Chandi Chandrasena, Chief Medical Officer at OntarioMD. OntarioMD is partnering with the Ontario Medical Association and the eHealth Centre of Excellence to assess the value of clinical AI scribes in reducing the administrative burden.

Existing and emerging AI uses demonstrate a wide range of possibilities.

At St. Michael’s Hospital in Toronto, an AI solution called CHARTWatch Surgical uses patient data from the EMR to predict the level of support a patient will need. In Kitchener, Grand River Hospital is using the same basic technology to predict changes in patients’ care, as well as identify who’s improving and may be nearing discharge, and who’s at risk of deteriorating and may require additional care.
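
Predictive tools of this kind generally score structured EMR data against patterns learned from past patients. Below is a toy sketch of that idea using a simple logistic regression on synthetic vitals; the features, data and model are invented for illustration and are not CHARTWatch’s actual design.

```python
# Illustrative only: a toy deterioration-risk model in the spirit of tools
# that score EMR data to flag patients who may need more support. Everything
# here (features, data, model) is invented for the sketch.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "EMR snapshots": heart rate, systolic BP, O2 saturation, lactate.
X = rng.normal(loc=[85, 120, 96, 1.2], scale=[15, 20, 3, 0.8], size=(5000, 4))

# Synthetic label: deteriorated within 48 hours, crudely tied to the vitals.
risk = 0.04 * X[:, 0] - 0.03 * X[:, 1] - 0.5 * X[:, 2] + 1.5 * X[:, 3]
y = (risk + rng.normal(0, 1, 5000) > np.quantile(risk, 0.9)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# In deployment, scores like these would be surfaced to the care team, who
# decide what (if anything) to do -- the model predicts, the clinician acts.
scores = model.predict_proba(X_test)[:, 1]
print("Highest-risk patient score:", scores.max().round(3))
```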

Also at St. Michael’s, home to Canada’s largest multiple sclerosis clinic, the AI-powered MuScRAT (Multiple Sclerosis Reporting and Analytics Tool) quickly summarizes a patient’s relevant clinical history. Neurologists and other staff use MuScRAT to get a quick snapshot of new patients, prepare for appointments and plan their care.  

Other applications target patients at risk. Wounds Canada says 85 percent of amputations from diabetic foot wounds are preventable. In Toronto, the Michener Institute of Education at UHN worked with Swift Medical to integrate AI into a chiropody clinic. A smartphone or tablet app photographs a wound, measures its dimensions, and guides clinicians through an assessment and treatment protocol. The AI solution improves the accuracy, consistency and objectivity of wound measurements. 
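
The measurement step itself is conceptually simple arithmetic: once the wound is segmented in the photo and the image scale is known, area is a pixel count multiplied by a calibration factor. The sketch below shows only that calculation, with an invented mask and scale; the real app’s calibration, segmentation and depth handling are far more sophisticated.

```python
# Toy illustration of the measurement step: given a wound segmentation mask
# and a known pixel-to-millimetre scale, compute the wound's area. The mask
# and scale below are invented for the sketch.

import numpy as np

MM_PER_PIXEL = 0.3  # would come from a calibration marker in the photo

# A fake 200x200 mask where True = wound pixels (here, a filled circle).
yy, xx = np.mgrid[:200, :200]
mask = ((yy - 100) ** 2 + (xx - 100) ** 2) < 40 ** 2

# Pixel count times the squared scale gives area -- the same answer for
# every clinician, which is where the consistency gain comes from.
area_mm2 = mask.sum() * MM_PER_PIXEL ** 2
print(f"Estimated wound area: {area_mm2:.0f} mm^2")
```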

Clear Medical Imaging, a provider of radiology services in Ontario, has partnered with 16 Bit to offer an AI-powered tool called Rho. When a patient has an X-ray for any clinical indication, radiologists can use Rho to analyze the image and flag suspected low bone mineral density for follow-up, potentially transforming osteoporosis screening. 

At the University of Waterloo, researchers are exploring how AI-driven technology that analyzes MRI data might better predict if a patient with breast cancer is likely to benefit from chemotherapy before surgery. 

Meanwhile, University of Toronto researchers used AlphaFold, an AI-powered protein structure prediction system, to design and synthesize a potential drug to treat the most common type of primary liver cancer (hepatocellular carcinoma) — and they did it in just 30 days.

And in Penetanguishene, researchers at the Waypoint Centre for Mental Health Care are studying AI’s potential to develop an early warning system to predict crises. The idea is to combine historical data, real-time monitoring and AI algorithms to come up with a model that identifies small changes that may foretell a crisis up to 72 hours in advance. 
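
One way to picture such an early warning system: track a monitored signal against a patient’s own recent baseline and flag small but sustained shifts. The toy sketch below illustrates only that pattern-shift concept on invented data; the actual models under study combine far more data sources than this.

```python
# A cartoon of the "early warning" idea: watch a monitored signal for shifts
# from a patient's own baseline. The signal, window and threshold are all
# invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Simulated daily measurements (e.g., sleep hours): stable, then declining.
baseline = rng.normal(7.0, 0.5, 60)
drift = rng.normal(7.0, 0.5, 12) - np.linspace(0, 2.5, 12)
series = np.concatenate([baseline, drift])

WINDOW = 30      # days used to estimate the personal baseline
THRESHOLD = 3.0  # z-score that triggers a flag

for day in range(WINDOW, len(series)):
    history = series[day - WINDOW:day]
    z = (series[day] - history.mean()) / history.std()
    if abs(z) > THRESHOLD:
        print(f"Day {day}: flagged (z = {z:.1f}) -- review before crisis point")
        break
else:
    print("No flag raised in this window.")
```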

Another widely used AI application, ChatGPT, is “trained” on a huge volume of text, which enables it to perform natural language processing tasks.

In one study published in JAMA Internal Medicine, researchers from the University of California looked at patient questions and responses collected from the AskDocs community on Reddit. The researchers then posed the same questions to ChatGPT. Afterwards, a panel of doctors was asked to compare the quality of the answers.

ChatGPT’s answers were more detailed and rated far higher in both quality and empathy than the responses from the actual doctors. The researchers said the results illustrate the potential of ChatGPT as a physician learning tool. 
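
Mechanically, the study’s setup amounts to sending each patient question to a chat model and saving the reply for blinded comparison. Here is a sketch using the OpenAI Python client (v1.x); the model name is illustrative rather than what the study used, and no real patient data should ever be sent this way without the privacy precautions discussed later in this piece.

```python
# Sketch of the study's basic setup: pose a patient question to a chat model
# and capture the response for later comparison with a physician's answer.
# Requires the OpenAI Python client (v1.x) and an OPENAI_API_KEY in the
# environment. The model name is illustrative, not the one the study used.

from openai import OpenAI

client = OpenAI()

question = "I swallowed a toothpick by accident. Should I be worried?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[
        {"role": "system",
         "content": "You are responding to a patient's health question."},
        {"role": "user", "content": question},
    ],
)

# In the study, blinded clinicians rated responses like this one against
# physicians' Reddit replies for both quality and empathy.
print(response.choices[0].message.content)
```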

Some jurisdictions are using tools like ChatGPT and other chatbots to help with patient education, and even with triage in emergency departments. 

The range of applications in medicine is generating excitement and intrigue, but also questions about how AI should be introduced and the safeguards needed.

Fear of the unknown 

Some doctors see AI’s integration in medicine as a tool to support their role. “There’s a group that says this will help me do my job better,” says Muhammad Mamdani, Vice-President, Data Science and Advanced Analytics, Unity Health Toronto, and Director at the University of Toronto Temerty Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM). 

Others wonder if certain AI uses based on aggregated data will make health care less patient-centred. “[Doctors] don’t want this to get between them and their patient,” says Dr. Dan Lizotte (PhD), an Associate Professor in Computer Science at Western University, and senior author of a study on the impact of AI on primary health care published in BMC Medical Informatics and Decision Making.

For yet other doctors, Dr. Lizotte says, AI brings a fear of the unknown, including whether AI will replace some fundamental physician competencies. A recent study found ChatGPT performed at or near the passing threshold for the U.S. Medical Licensing Exam, without any specialized training.

“On the one hand, we should stand in awe. On the other, maybe it says more about the tests themselves than about the AI taking them,” says Dr. Gus Skorburg (PhD), Co-Academic Director of the Centre for Advancing Responsible and Ethical Artificial Intelligence at the University of Guelph, and Assistant Professor of Philosophy.

Some tasks might be automated and roles might evolve, as happens with technology. But is your job in jeopardy? Will patients of tomorrow book a visit with a Dr. Chatbot? 

“AI will not replace doctors — but doctors who use AI will be more effective,” says Dr. Bo Wang, CIFAR AI Chair at the Vector Institute for Artificial Intelligence in Toronto, and AI Lead at the Peter Munk Cardiac Centre at UHN. 

Those aren’t the only fears. One concern is the “black box effect,” where AI produces results or advice without explanation, raising questions about biased data or inappropriate modelling. And with the immense amounts of data it can hold, AI raises privacy and security worries.

“Physicians must be extremely careful to avoid including confidential patient information or research data when using public AI search tools so as not to inadvertently disclose confidential information,” says CPSO’s Dr. Chopra. “Any information entered into a public-facing application may be shared by the tool with other individuals or organizations.”

Recognizing patterns

“Forever, doctors have taken information, put it together, sought out patterns and made decisions. What’s new is AI algorithms rely on vast amounts of data to analyze, comparing individual patients to thousands or millions of others,” says Dr. Andrew Pinto, Director, Upstream Lab at St. Michael’s Hospital, and Associate Professor, Temerty Faculty of Medicine, IHPME and Dalla Lana School of Public Health, University of Toronto.  

Patients want insights into their doctor’s decision-making process, which the black box effect can hinder, acknowledges Dr. Brian Hodges, EVP Education and Chief Medical Officer at UHN. “The doctor doesn’t want to say you’re at risk or I’m worried, and I can’t tell you why.” 

No system is perfect. The human brain excels in pattern recognition yet is prone to bias and errors just like AI, explains Dr. Hodges. Combining the cognitive power of doctors with AI’s computational and learning abilities could be optimal, he suggests. “AI is doing probabilities and the exciting promise is that it might recognize patterns we don’t see.” 

According to a Leger survey for Canada Health Infoway, 60 percent of patients are comfortable with the use of AI in health care. Using AI to track epidemics, optimize workflow, detect diseases, conduct diagnostic imaging, and monitor and predict diseases rank as the most accepted uses.

Yet the survey shows patients have big worries too, namely around the loss of human interaction with health care professionals, privacy breaches and liability related to AI-influenced care decisions. 

Is current oversight adequate? 

AI software may be novel, but the core expectations on doctors remain unchanged, says Tanya Terzis, Interim Manager, Policy at the College of Physicians and Surgeons of Ontario. Whether using AI or any other tools, she says doctors must uphold the standard of care and rely on their professional judgment.  

Do current regulations suffice? “That’s the million-dollar question,” says Dr. Skorburg. He contends that questions around privacy, bias, equity, fairness, etc. are well-covered by existing requirements. Still, “With the rise of chatbots, we never had to consider other questions before, like do patients have the right to know they’re talking to AI?” 

The Canadian Medical Protective Association (CMPA) recently posted an article about the emergence of AI in health care and the associated risks. It noted that AI technologies should complement clinical care, and that physicians have to critically assess whether an AI tool suits its intended use and the nature of their practice.

“These technologies need to go through the same rigour and assessment as any other,” says Chantz Strong, Executive Director, Research and Analytics and Chief Privacy Officer at the CMPA. He says physicians remain accountable for their clinical care and for their legal and medical obligations.

Dr. Hodges agrees that for now, current regulations are enough — with a big asterisk. “All regulatory colleges may need to give some thought on guidance, education and maybe even standards on the use of this technology. It’s going to be ubiquitous and an enormous group of physicians haven’t had training in it.”  

Healthcare Excellence Canada has developed valuable principles and guidance on implementing AI, including a risk assessment tool covering data stewardship and privacy, ethics, regulatory/legal approvals, resourcing and engagement. Dr. Jennifer Zelmer, the organization’s CEO, says AI is such an umbrella term that it’s hard to make blanket statements about its use and safeguards. Every application needs to be assessed on its own. “So much depends on thoughtful implementation.”

Two years ago, the World Health Organization (WHO) issued a report outlining six guiding principles for AI’s design and use. Recently, WHO called for caution in using AI-generated large language model tools like ChatGPT. While acknowledging their potential benefits, WHO noted the risks of disseminating misleading or inaccurate information and bias, and of compounding existing inequities related to socioeconomic status, race, ethnicity, religion, gender, disability or sexual orientation.

“To get it right, we should use a risk-based lens, and keep patient care and clinical care top of mind,” says Mr. Strong. 

Potential for harm, injury 

As AI proliferates across sectors, does health care face particular risks and concerns? Protecting sensitive data is imperative in many sectors, but in health care, bioethical considerations are also paramount. What data is being used to make decisions? That’s integral to the programming of AI, says Fiona Cherryman, Head of Academic Affairs and Operations, Michener Institute of Education at UHN.

Health care’s distinctiveness arises from its potential for harm and injury, notes Dr. Carolyn McGregor, Canada Research Chair in Health Informatics, University of Ontario Institute of Technology. And if errors do cause harm, who’s accountable: doctors, the AI developer or an IT department? 

Still, health care isn’t alone in using AI where lives are at stake. Consider autonomous vehicles, where each crash prompts a debate on AI’s reliability. Machines are fallible, but so are humans — most crashes result from driver error. And human professionals across domains, including doctors, make mistakes that can have serious consequences. 

To Dr. Skorburg, the issue isn’t AI’s perfection, but whether it improves the status quo. If autonomous vehicles do indeed prevent crashes, injuries and deaths, “It would be immoral to not promote their widespread adoption,” he asserts. And if AI in health care leads to more accurate diagnoses, more timely interventions and fewer errors, would it be unethical not to use it to its full capabilities? “It’s a live question.”

AI faces both openings and obstacles on the road to wide adoption. Dr. McGregor wonders if users will have an unconscious positive bias towards AI. She points to studies showing that people who procure technology, despite having an objective evaluation process, can subjectively feel that whatever is new, complex and mysterious is simply superior.

New technology can also evoke fear. The stethoscope faced initial pushback, reminds Dr. Hodges, as many doctors doubted it would be as accurate as their ear against a chest. 

An article in the Journal of Medical Humanities looked at reactions to past technologies to shed light on contemporary debates about digital innovations. It cited an instance from 1879, when a U.K. hospital hailed a certain tool’s role in reducing the spread of infection during doctor-patient conversations. Conversely, some physicians feared this tool’s potential to overwhelm them. The tool in question was the telephone. 

One big difference: unlike instruments such as the stethoscope and the telephone, AI takes on what we view as human abilities. It thinks and learns, which can feel both threatening and compelling.

Dr. Wang remains optimistic in spite of concerns about over-reliance on AI and some of its limitations. “I’m very positive about this technology. It’s not something scary,” he says.

What’s the greatest area of opportunity? “Personalized medicine,” says Dr. Wang. “AI is a powerful analytic tool. It will really help doctors provide the most personalized and effective treatment plans.” 

Dr. McGregor agrees. “A lot of medical knowledge comes from looking at the average. AI allows you to create a much more personalized experience and gives you a lot more detail. We’re collecting, storing and analyzing more data, giving us opportunities for richer discovery. We’ll be improving outcomes.” 

That’s the litmus test for any tool used in health care, including AI. “Innovations can either promote quality and safety or put quality and safety at risk, depending on how it’s done,” says Dr. Zelmer. 

All technology expands on what people are capable of, adds Dr. Hodges. Whatever the AI application, “We only need to say, does the technology extend human abilities, enhance our cognitive function, extend our communication, or is it getting in the way?”