DOI: 10.1158/1557-3265.aimachine-a054 ISSN: 1557-3265

Abstract A054: Multimodal LLM-Driven Intervention for Precision Risk Prediction in Lung Cancer Surgery

Shubham Pandey, Bhavin Jawade, Srirangaraj Setlur, Venugopal Govindaraju, Kenneth P. Seastedt

Abstract

Introduction:

Lung cancer surgery, while potentially curative, carries a 30% risk of serious postoperative complications, such as pneumonia or respiratory failure, increasing morbidity, mortality, and healthcare costs. Traditional risk assessment tools, based on static scores or clinical judgment, lack the precision and adaptability needed to identify high-risk patients or to reassess surgical candidacy. These tools also fail to provide interpretable outputs aligned with surgical workflows, limiting their clinical utility. To address these gaps, we propose a novel multimodal deep learning framework integrating clinical variables, imaging-derived radiomic features, and large language model (LLM) insights to predict complications accurately. Our model generates editable, clinician-friendly risk summaries, enhancing transparency, trust, and personalized surgical decision-making to improve outcomes and optimize planning.

Methods:

We analyzed data from 3,440 lung cancer surgery patients, combining 17 preoperative clinical variables (e.g., age, smoking history, pulmonary function) with CT imaging from 3,205 cases. From these scans, 113 radiomic features (e.g., texture, shape) were extracted using PyRadiomics. Our model integrates three modules: (1) a clinical data encoder, (2) a radiomics module for imaging-based risk, and (3) an LLM-based interpreter (using Llama 3.3, DeepSeek-R1-Distill, OpenBioLLM, Clinical Longformer) to generate surgeon-like risk narratives from unstructured data. These narratives, linked to a binary postoperative pulmonary complication outcome, allow real-time edits that update risk predictions, reflecting surgeon expertise. The model was trained with a hybrid loss balancing predictive accuracy and usability and evaluated using AUC-ROC.
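The three-module design described above can be sketched as a late-fusion classifier. The sketch below is illustrative only: the encoder forms, embedding sizes, fusion strategy, and the exact hybrid loss are assumptions (the abstract specifies only the 17 clinical variables, the 113 radiomic features, an LLM narrative component, and a loss balancing accuracy with usability), and `summary_edit_gap` is a hypothetical stand-in for disagreement between the model's risk and a surgeon-edited summary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions mirroring the abstract: 17 clinical variables and 113
# PyRadiomics features; the LLM-narrative embedding size is assumed.
N_CLINICAL, N_RADIOMIC, N_NARRATIVE = 17, 113, 64

def encode(x, w):
    """Toy linear-tanh encoder standing in for each learned module."""
    return np.tanh(x @ w)

def fused_risk(clinical, radiomic, narrative, weights):
    """Late fusion: concatenate the three module embeddings, then a
    logistic head yields the probability of a pulmonary complication."""
    z = np.concatenate([
        encode(clinical, weights["clin"]),
        encode(radiomic, weights["rad"]),
        encode(narrative, weights["llm"]),
    ], axis=1)
    logits = z @ weights["head"]
    return 1.0 / (1.0 + np.exp(-logits))

def hybrid_loss(p, y, summary_edit_gap, alpha=0.1):
    """Illustrative hybrid loss: binary cross-entropy (accuracy term)
    plus a penalty on disagreement with the surgeon-edited summary
    (usability term). The authors' exact formulation is not given."""
    eps = 1e-9
    bce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return bce + alpha * np.mean(summary_edit_gap ** 2)

# Tiny synthetic batch of 8 patients.
clin = rng.normal(size=(8, N_CLINICAL))
rad = rng.normal(size=(8, N_RADIOMIC))
nar = rng.normal(size=(8, N_NARRATIVE))
weights = {
    "clin": rng.normal(scale=0.1, size=(N_CLINICAL, 8)),
    "rad": rng.normal(scale=0.1, size=(N_RADIOMIC, 8)),
    "llm": rng.normal(scale=0.1, size=(N_NARRATIVE, 8)),
    "head": rng.normal(scale=0.1, size=(24,)),
}
p = fused_risk(clin, rad, nar, weights)
y = rng.integers(0, 2, size=8)
loss = hybrid_loss(p, y, summary_edit_gap=p - 0.5)
```

Late fusion keeps each modality's encoder independent, which matches the modular description above and lets the narrative module be edited without retraining the clinical or radiomics paths.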

Results:

Benchmarking traditional machine learning models (e.g., logistic regression, random forests) on our dataset yielded AUC-ROC values ranging from 76.8-78.2%. Our multimodal framework achieved an AUC-ROC of 75.0%, comparable to these baselines, while additionally providing interactive, interpretable risk summaries, unavailable in the baseline models, that enable surgeons to refine predictions based on clinical expertise. By comparison, human surgeon assessments had a true positive rate of 44.9% at a false positive rate of 20.0%, underscoring the model's predictive precision. Editable risk summaries allow surgeons to adjust predictions in real time, enhancing transparency and supporting personalized surgical decisions.
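The two metrics reported above can be computed as follows; this is a minimal sketch on synthetic inputs, not the authors' evaluation code. AUC-ROC is taken here as the probability that a randomly chosen positive case is scored above a randomly chosen negative case, and TPR/FPR are computed from binary calls as for the surgeon assessments.

```python
def roc_auc(scores, labels):
    """AUC-ROC as the probability that a random positive case is
    scored above a random negative one (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def tpr_fpr(preds, labels):
    """True and false positive rates for binary predictions."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    return tp / (tp + fn), fp / (fp + tn)

# Synthetic example: 4 patients with predicted risks and outcomes.
auc = roc_auc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0])   # 0.75
tpr, fpr = tpr_fpr([1, 1, 0, 0], [1, 0, 1, 0])      # (0.5, 0.5)
```

This rank-based formulation is equivalent to integrating the ROC curve and makes explicit why AUC-ROC, unlike TPR/FPR at a single threshold, summarizes discrimination across all operating points.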

Conclusions:

Our multimodal framework transforms preoperative risk assessment in lung cancer surgery by integrating clinical data, radiomic features, and LLM-driven insights. It surpasses human judgment and competes with traditional risk tools in predictive precision while offering exceptional interpretability through editable, clinician-friendly outputs. By enabling real-time risk adjustments, the model supports dynamic surgical planning, reduces complications, and enhances patient-specific care in lung cancer, with the potential to improve resource allocation and support AI-augmented precision medicine in thoracic oncology.

Citation Format:

Shubham Pandey, Bhavin Jawade, Srirangaraj Setlur, Venugopal Govindaraju, Kenneth P. Seastedt. Multimodal LLM-Driven Intervention for Precision Risk Prediction in Lung Cancer Surgery [abstract]. In: Proceedings of the AACR Special Conference in Cancer Research: Artificial Intelligence and Machine Learning; 2025 Jul 10-12; Montreal, QC, Canada. Philadelphia (PA): AACR; Clin Cancer Res 2025;31(13_Suppl):Abstract nr A054.
