UbiWell Lab at CHI 2025

Vedant and Ha attended CHI 2025 in Yokohama!

Overview

  • 3 full papers
  • 3 workshop papers

Full Papers

Le et al., CHI 2025

Ha presented her CHI paper on a multimodal μEMA system that lets users quickly report their activity at high frequency, through either a screen or a voice interaction, while achieving high compliance.

μEMAs allow participants to answer a short survey quickly with a tap on a smartwatch screen or a brief speech input. The short interaction time and low cognitive burden enable researchers to collect self-reports at high frequency (once every 5-15 minutes) while maintaining participant engagement. Systems with a single input modality, however, may carry different contextual biases that could affect compliance. We combined two input modalities to create a multimodal μEMA system, allowing participants to choose between speech or touch input to self-report. To investigate system usability, we conducted a seven-day field study where we asked 20 participants to label their posture and/or physical activity once every five minutes throughout their waking day. Despite the intense prompting interval, participants responded to 72.4% of the prompts. We found participants gravitated towards different modalities based on personal preferences and contextual states, highlighting the need to consider these factors when designing context-aware multimodal μEMA systems.

Das Swain et al., CHI 2025

Vedant presented his CHI paper on an LLM-based empathetic co-worker that helps front-office workers with emotion regulation.

Client-Service Representatives (CSRs) are vital to organizations. Frequent interactions with disgruntled clients, however, disrupt their mental well-being. To help CSRs regulate their emotions while interacting with uncivil clients, we designed Care-Pilot, an LLM-powered assistant, and evaluated its efficacy, perception, and use. Our comparative analyses between 665 human and Care-Pilot-generated support messages highlight Care-Pilot’s ability to adapt to and demonstrate empathy in various incivility incidents. Additionally, 143 CSRs assessed Care-Pilot’s empathy as more sincere and actionable than human messages. Finally, we interviewed 20 CSRs who interacted with Care-Pilot in a simulation exercise. They reported that Care-Pilot helped them avoid negative thinking, recenter thoughts, and humanize clients; showing potential for bridging gaps in coworker support. Yet, they also noted deployment challenges and emphasized the indispensability of shared experiences. We discuss future designs and societal implications of AI-mediated emotional labor, underscoring empathy as a critical function for AI assistants for worker mental health.

Wu et al., CHI 2025

Siyi (from Prof. Dakuo Wang’s group) presented our collaborative work on a multimodal system to support symptom monitoring and risk prediction of cancer treatment-induced cardiotoxicity.

Despite recent advances in cancer treatments that prolong patients’ lives, treatment-induced cardiotoxicity (i.e., the various heart damages caused by cancer treatments) emerges as one major side effect. The clinical decision-making process around cardiotoxicity is challenging: early symptoms may occur in non-clinical settings and are often too subtle to be noticed until life-threatening events occur at a later stage, and clinicians, already carrying a high workload focused on the cancer treatment itself, have no additional effort to spare on this side effect. Our project starts with a participatory design study with 11 clinicians to understand their decision-making practices and their feedback on an initial design of an AI-based decision-support system. Based on their feedback, we then propose a multimodal AI system, CardioAI, that integrates wearable data and voice assistant data to model a patient’s cardiotoxicity risk and support clinicians’ decision-making. We conclude our paper with a small-scale heuristic evaluation with four experts and a discussion of future design considerations.

Workshop Papers

Vedant and Jiachen both had posters at the Interactive Health workshop. While Jiachen could not attend in person, our collaborator, Orson Xu, presented on her behalf the poster about an AI system that helps with contextual sensemaking of passive sensing data. Vedant presented initial findings from his ongoing DYMOND study, in which he is exploring how to make passive sensing more “seamful” and human-in-the-loop.

At the Aging in Place workshop, Jiachen virtually presented another short paper on a conversational agent for older adults with mild cognitive impairment.