Stanford Medicine has developed an artificial intelligence model that can accurately predict cancer patients’ prognoses and responses to treatment. The first of its kind to leverage multiple types of imaging and language-based data, the model has already shown promise with several forms of cancer, including lung cancer, gastroesophageal cancer, and melanoma.
Over the last few years, researchers have created a range of experimental AI models that examine imaging data for tiny signs of cancer that doctors and radiologists might easily miss. Early tests show that these models are highly effective. Sybil, a model developed by MIT and the Massachusetts General Cancer Center, can predict whether a patient will develop lung cancer within a year with 86% to 94% accuracy, while Harvard Medical School’s pancreatic cancer prediction model can forecast a patient’s three-year prognosis with 88% accuracy. Another MIT model even spots signs of the riskiest forms of breast cancer, helping shield patients from overtreatment.
Impressive as these models are, they share one essential shortcoming: each can analyze only one form of data at a time. Each model looks at MRI scans, CT scans, X-ray images, or microscopy slides, then identifies areas of concern within that dataset. Even Microsoft’s multi-diagnostic AI model, which accepts a whopping nine forms of imaging data, must examine each type of imaging separately.
Stanford Medicine’s model, MUSK (short for multimodal transformer with unified mask modeling), looks at several types of data at once. In a paper published in Nature, the researchers write that MUSK was trained on 50 million pathology images and 1 billion “text tokens” from more than 11,500 patients. While the images depict different forms of cancer across X-rays, microscopy, and CT and MRI scans, the text tokens represent language-based medical data—exam notes, communications between specialists, and so on—associated with various cancer diagnoses.
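To make the training idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration of masked multimodal pretraining, not Stanford’s published MUSK code: image-patch features and text tokens are embedded into one shared sequence, a random subset of positions is hidden behind a learned mask token, and a single transformer is trained to reconstruct what was hidden. The class name, dimensions, and all variable names here are invented for illustration.

```python
# Hypothetical sketch of "unified mask modeling" over images and text.
# Not the authors' implementation; names and sizes are illustrative only.
import torch
import torch.nn as nn

class ToyMultimodalMaskedModel(nn.Module):
    def __init__(self, dim=128, vocab_size=1000, patch_dim=768, num_layers=2):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, dim)   # text tokens -> vectors
        self.image_proj = nn.Linear(patch_dim, dim)       # image-patch features -> vectors
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))  # learned [MASK] embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.text_head = nn.Linear(dim, vocab_size)       # predict hidden text tokens
        self.image_head = nn.Linear(dim, patch_dim)       # reconstruct hidden patches

    def forward(self, text_ids, image_patches, mask_ratio=0.3):
        text = self.text_embed(text_ids)                  # (B, T, dim)
        image = self.image_proj(image_patches)            # (B, P, dim)
        tokens = torch.cat([text, image], dim=1)          # one shared multimodal sequence
        # Randomly hide ~30% of positions (text or image alike) behind the mask token.
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        hidden = self.encoder(tokens)
        T = text_ids.shape[1]
        # A training loop would compute losses only at the masked positions.
        return self.text_head(hidden[:, :T]), self.image_head(hidden[:, T:]), mask

# Example usage with random stand-in data.
model = ToyMultimodalMaskedModel()
text_ids = torch.randint(0, 1000, (2, 16))   # 2 clinical notes, 16 tokens each
patches = torch.randn(2, 32, 768)            # 2 slides, 32 patch features each
text_logits, patch_recon, mask = model(text_ids, patches)
```

The point of the single shared sequence is that the transformer can use surrounding text to fill in a hidden image patch and vice versa, which is what lets a model of this kind link imaging findings to the language in a patient’s records.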
MUSK’s ability to analyze multiple types of data simultaneously mimics how doctors weigh a person’s imaging results and health records together. It also lets MUSK help doctors predict prognoses rather than make diagnoses, the task most medical AI models focus on.
Across the 16 major types of cancer on which MUSK was trained, the model accurately predicts a patient’s disease-specific survival 75% of the time, according to a Stanford Medicine release. That’s 11 percentage points better than doctors’ average accuracy, which hovers around 64%. MUSK has also correctly identified which non-small cell lung cancer patients would benefit from immunotherapy 77% of the time (beating doctors’ 61% accuracy rate) and predicted with 83% accuracy which melanoma patients were most likely to relapse within five years of initial treatment.
“The biggest unmet clinical need is for models that physicians can use to guide patient treatment,” said senior study author and radiation oncologist Ruijiang Li. “Does this patient need this drug? Or should we instead focus on another type of therapy? If we can use artificial intelligence to assess hundreds or thousands of bits of many types of data, including tissue imaging, as well as patient demographics, medical history, past treatments, and laboratory tests gathered from clinical notes, we can much more accurately determine who might benefit.”