Designing AI that works with people, not past them.
I study how people navigate complexity and uncertainty in digital systems — and build human-centered AI that reduces that friction. My work spans voice interfaces for older adults, LLM-supported navigation agents, and AI in health contexts.
Human-centered AI · LLM systems · Cognitive load · Older adults · Accessibility · Health information
Ph.D., Computer Science
University of Illinois Chicago
M.Eng. · Korea University
B.S. Public Health · Ewha Womans Univ.
Chicago, IL
I'm an HCI researcher with a Ph.D. in Computer Science from UIC. My research sits at the intersection of human-computer interaction, LLM-supported interactive systems, and human-in-the-loop AI — with a focus on accessible interfaces for complex systems.
I investigate what happens when users hit walls: the frustration, hesitation, and behavioral signals that reveal where systems break down. Using mixed-method studies — interviews, behavioral analysis, usability evaluations — I identify these patterns, then design AI systems that address them.
Ph.D. thesis
Supporting older adults in navigating feature-rich mobile UIs with voice input
C/C++ · Mobile (Android, Kotlin Multiplatform) · Web & backend (React, FastAPI) · Machine learning · LLM integration & fine-tuning
Research areas
LLM-supported navigation
Building agents that map natural-language intent to correct interface pathways without relying on technical keywords.
Accessible interfaces for older adults
Understanding how cognitive and perceptual factors shape mobile interaction, and designing voice-based supports.
Human-in-the-loop AI
Developing context-aware AI that reduces cognitive load without replacing human judgment.
AI-assisted health information systems
Designing LLM-based systems that synthesize uncertain or sparse health data and communicate reliability through visualization — helping non-expert users interpret information and make informed decisions.
Developed a Kotlin Multiplatform app delivering food safety information for expectant mothers. Implemented an LLM-based autonomous pipeline to resolve data sparsity by retrieving and synthesizing unstructured web data. Conducted iterative UX optimizations to ensure reliable interpretation of safety recommendations by non-expert users.
LLM agents · Kotlin Multiplatform · FastAPI · UX research
Live · 2024–present
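The fallback logic behind this card — serve curated data when it exists, otherwise fill the gap with LLM-synthesized web content and flag its reliability to the user — can be sketched as below. This is a minimal illustration, not the app's actual code: the `CURATED` table, `synthesize_from_web` stub, and field names are all hypothetical stand-ins for the real pipeline.

```python
from dataclasses import dataclass

@dataclass
class SafetyAnswer:
    item: str
    guidance: str
    source: str       # "database" or "llm-synthesis"
    reliability: str  # surfaced to the user in the UI

# Hypothetical curated table; the real app's data model is not public.
CURATED = {"soft cheese": "Avoid unpasteurized varieties."}

def synthesize_from_web(item: str) -> str:
    """Stand-in for the LLM retrieval-and-synthesis step: in the real
    pipeline this would fetch and summarize unstructured web sources."""
    return f"Synthesized guidance for {item!r} from retrieved web sources."

def lookup(item: str) -> SafetyAnswer:
    """Answer from curated data when possible; otherwise fall back to
    LLM synthesis and mark the answer's reliability accordingly."""
    if item in CURATED:
        return SafetyAnswer(item, CURATED[item], "database", "high")
    return SafetyAnswer(item, synthesize_from_web(item), "llm-synthesis", "estimated")
```

The key design point is that the fallback path is never silent: every synthesized answer carries a reliability label, so non-expert users can see when guidance comes from sparse, machine-assembled data rather than a vetted source.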
Dissertation · Phase 1 · Exploratory study
Toward a Good Design for Older Users
Qualitative investigation of how older adults struggle with mobile maps — uncovering that the core issue is feature discoverability, not motor skill or comprehension.
Think-aloud · Interviews · Atlas.ti · n=17
ASSETS 2020
Dissertation · Phase 2 · Wizard-of-Oz
Designing a Voice Assistant for Mobile Navigation
Validated the voice assistant concept through a Wizard-of-Oz study — 88% of participants used it, and 77% of navigation failures were immediately resolved after intervention.
Wizard-of-Oz · Voice UI · Web app · n=15
ASSETS 2020 · CHI 2023
Dissertation · Phase 3 · Controlled experiment
Exploring Effective Visual Cue Design
A/B tested four visual cue designs with older and younger adults. Found that highlighting with context or weighted zoom equalizes task performance across age groups.
A/B testing · GLMM · Web app · n=56
CHI 2024
Dissertation · Phase 4 · System build
Nav Nudge: Voice Assistant for Mobile UI Navigation
Built a fully functional Android app that uses GPT-3 and the Universal Sentence Encoder to interpret verbal queries and highlight the matching UI features on-screen, achieving 95% query accuracy in the wild.
Android · GPT-3 · NLP · n=10
CHI 2024
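The core matching idea — embed the spoken query and each on-screen feature label, then highlight the label with the highest similarity — can be sketched as follows. This is a toy illustration only: the bag-of-words `embed` function is a deliberately simple stand-in for a real sentence encoder such as the Universal Sentence Encoder, and the labels are invented examples.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' — a stand-in for a real sentence
    encoder (e.g. the Universal Sentence Encoder used in Nav Nudge)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_query_to_feature(query: str, feature_labels: list[str]) -> str:
    """Return the on-screen feature label most similar to the spoken query."""
    q = embed(query)
    return max(feature_labels, key=lambda label: cosine(q, embed(label)))

labels = ["Satellite view", "Search nearby restaurants", "Share location"]
print(match_query_to_feature("show me the satellite map view", labels))
# prints: Satellite view
```

The point of the design is that matching happens in embedding space rather than by keyword lookup, so a user saying "the picture-from-above map" can still land on the right feature without knowing its technical name.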
Research project · 2018–2020
myCityMeter / whatIbreathe
Android app measuring PM2.5 and ambient noise — environmental risk factors for cognitive impairment — for older adults and caregivers. Built with RESTful API and multi-sensor data pipeline.
Android · IoT sensors · REST API · mHealth
UbiComp 2018 · HCII 2020
M.Eng. research · 2015–2016
Blurry (Sticky) Finger
AR interaction technique for optical see-through displays: users point at distant real-world objects using a blurred finger and proprioception, eliminating frequent focus shifts and reducing eye strain.
3D reconstruction system for indoor crime scenes using smartphone and Kinect. Enables first-person virtual scene exploration and generates textured 3D face models from 2D CCTV footage.