New AI models for reliable inference and predictions in high-dimensional multimodal health and biomedical data
We develop AI models for clinical and biomedical data that support meaningful downstream tasks. Naively applying existing inductive biases and architectures can lead to unreliable scientific inference and predictions. The lab therefore focuses on developing data representations, learning tasks, and model architectures that are robust to large distributional shifts, evaluated with rigorous frameworks that test their inferential capabilities.
Young Sang Choi*, Vincent Jeanselme*, Pierre Elias, Shalmali Joshi.
Adaptive AI systems and representations to improve generalizability and robustness
Enabling systems that generalize to out-of-distribution data requires adaptive AI approaches in which models can learn from their mistakes. We leverage probabilistic modeling, reinforcement learning, and deep learning to overcome the imperfections of observational health and medical data and to build adaptive multimodal AI systems.
Daksh Mittal, Yuanzhe Ma, Shalmali Joshi, and Hongseok Namkoong.
Neural Information Processing Systems (NeurIPS) 2024
New computational methods and evaluations of foundation models in health and medicine
We develop and benchmark AI models in-house and in collaboration with clinical experts and healthcare institutions. Our goal is to highlight current limitations and provide improvements so that AI models can be used as expert reasoning agents. Our current applications span psychiatry, cardiology, radiology, rheumatology, and neurocritical care.
Chao Pang, Vincent Jeanselme, Young Sang Choi, Xinzhuo Jiang, Zilin Jing, Aparajita Kashyap, Yuta Kobayashi, Yanwei Li, Florent Pollet, Karthik Natarajan, Shalmali Joshi.