TORONTO - The rise of technologies such as ChatGPT has thrust artificial intelligence into the spotlight throughout 2023 — and health care is no exception.
"With the increasing availability of health-care data and the rapid progress in analytic techniques — whether machine learning, logic-based or statistical — AI tools could transform the health sector," the World Health Organization said when it launched a set of regulatory recommendations in October.
As we move into 2024, here are some key AI developments — and cautions — that will be top of mind for Canadian experts in the new year and beyond.
PERSONALIZED PATIENT CARE
One of the most exciting potential developments in health-care AI is harnessing the ability of a computer model to process and interpret "multi-modal" data about a patient, said Roxana Sultan, chief data officer and vice-president of health at the Vector Institute, a Toronto-based institute dedicated to AI research.
Right now, AI models can make a diagnosis based on one or two pieces of information, such as an X-ray, Sultan said. That's achieved by training the model on "tons and tons of X-ray images" so it learns to recognize certain diagnoses.
"That is fantastic. But that is (only) one source of information," Sultan said.
In the "near future," she said, machine learning will develop so that AI can take a "much more comprehensive look at patient health."
In addition to a patient's X-ray, for example, AI would be able to process other data, including doctor's notes, lab samples, medications the patient is taking and genetic information.
That ability will not only play a critical role in diagnosing a patient, but also in coming up with a more personalized treatment plan, Sultan said.
"When you have models that understand the complex interplay between a person's genetics and a person's medications and all the different diagnostic tests that you run on that patient, you pull those together into a picture that allows you to not only understand what's happening in the moment, but also to kind of plan ahead that, if I applied this treatment ... what is the more likely outcome for this particular person?"
Russ Greiner, who holds a fellowship with the Alberta Machine Intelligence Institute, agreed.
"The standard medical practice used to be one size fits all," said Greiner, who is also a professor of computing science at the University of Alberta.
"Now you realize that there's huge differences amongst individuals ... different genes, different metabolites, different lifestyle factors, all of which are influential (on health)," he said.
Machine learning means computers can analyze hundreds or thousands of characteristics about a patient — more than a human clinician could possibly process — and find patterns "that allow us to figure out that for this characteristic of patients, you get treatment A, not treatment B," Greiner said.
CLINICAL TRIALS
AI's ability to go through enormous amounts of data will also save “tens of thousands – probably hundreds of thousands – of human hours" for researchers analyzing the results of clinical trials, said Sue Paish, CEO of DIGITAL, one of five "global innovation clusters" across the country funded by the federal government.
“AI basically can evaluate billions of pieces of data in a fraction of a second,” said Paish, who is based in Vancouver.
That means that new medications could be evaluated for safety and efficacy much faster, she said.
IMPROVING QUALITY OF DATA
Whether AI is being used for clinical care or for health research, the results it generates can only be as good as the data it's fed, experts agree.
"Garbage in, garbage out," said Greiner.
"If I train on faulty data, the best I can do is to build a model as good as that data, which is problematic."
One of the priority areas is to make sure AI is getting data from reliable sources, rather than just indiscriminately taking publicly available information, said Sultan.
ChatGPT, for example, is a technology built to "essentially scrape the internet," she said.
"The problem with that ... is first and foremost, it's not always reliable and true," Sultan said.
"And second of all, it is riddled with biases and problematic perspectives that get reinforced when you train something that can't make those judgments. It just reads it all, absorbs it and spits it back out for you."
One way to improve the quality of the medical analyses AI generates is to train it on medical textbooks rather than the internet, Sultan said.
"I think the ChatGPTs of the world will seem very caveman, like very rudimentary (in the future)," she said.
Researchers are also developing AI algorithms to find bias in health information, including racial or gender discrimination, Sultan said.
PATIENT SELF-MANAGEMENT
Another key area where AI will grow is in developing technologies that help patients manage their own health, experts agree.
For example, wearable AI has already been developed to help patients with heart failure self-monitor, Sultan said.
AI has also been used “quite effectively” in remote areas of Canada to manage some patients' wounds when they weren't able to access care during the pandemic, said Paish.
The AI technology attaches to a patient's cellphone, takes a 3D image of a wound and assesses whether it's infected or healing well.
That information is then sent to a doctor or nurse, who can advise the patient remotely on how to care for the wound.
“I think we’re going to see more and more examples of where AI is actually supporting patient health by reducing the need for a human being to take all the steps in assessment and delivery of health-care services,” Paish said.
That will take pressure off overburdened doctors, nurses and hospitals and allow them to provide in-person care when it's most needed, she said.
ETHICS AND REGULATION
“One of the big flashing yellow lights in the application of AI is making sure that there is very thorough and thoughtful evaluations of how AI is being trained,” said Paish.
“Public policy is going to be extremely important."
Dr. Theresa Tam, Canada's chief public health officer, said it's critical to develop regulations and safeguards that address ethical issues such as patient privacy.
"I think this is a really opportune time to, you know, more systematically look at ... what governance we have to put in place in order to responsibly use AI," Tam said in a recentinterview.
Ensuring data is managed in a way that protects privacy must be "interwoven" with AI development, Sultan said, noting that other legal and ethical ramifications are "uncharted territory."
"We're all trying to figure out what makes the most sense. So issues like consent, issues like data ownership and data custodianship, those are all going to shift in terms of the paradigm that we've looked at them through in the past," she said.
This report by The Canadian Press was first published Dec. 31, 2023.
Canadian Press health coverage receives support through a partnership with the Canadian Medical Association. CP is solely responsible for this content.