Overview

Built during parental leave. Deployed.

After completing my Ph.D., I took a planned family leave — and used the time to independently design, build, and deploy MammaMe: a health information platform that helps expectant mothers find reliable food safety guidance.

The project applied everything from my HCI research — problem framing, iterative UX design, user testing — to a real-world need I encountered firsthand. It also pushed me to tackle a hard engineering problem: what do you show the user when your database doesn't have the answer?

LLM · autonomous agent pipeline
HCI · full design lifecycle
Live · deployed app

1. The Problem

Pregnancy food advice is full of noise — and the stakes are high

Pregnancy food safety sits at an awkward intersection: there is an enormous amount of online content (YouTube, Instagram, health blogs), but the quality is wildly inconsistent. Misinformation is disproportionately common in this space — and the consequences of following bad advice during pregnancy can be serious.

Expectant mothers need quick, trustworthy answers to questions like "Can I eat sushi?" or "Is this cheese safe?" — but existing tools either oversimplify, over-restrict, or don't communicate how reliable their information is.

Problem Statement

How do we give expectant mothers fast, reliable food safety guidance — and help them understand how much to trust what they're seeing?

2. Data Pipeline

Crawling, extracting, and scoring a custom database

I built a custom food safety database by crawling and extracting information from multiple source types: online health articles, YouTube videos, and Instagram content. For each food item, I extracted safety guidance and assessed the reliability of the source it came from.

Crawling (Web · YouTube · Instagram) → Extraction (food items + safety guidance) → Reliability Scoring (source credibility assessment) → Database (structured food safety DB)
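
For a concrete sense of the first two stages, here is a minimal sketch of the crawl and extraction steps. The seed URL, the HTML cleanup, and the extraction stub are illustrative placeholders rather than the production crawler (the YouTube and Instagram paths work differently).

```python
# Minimal sketch of the crawling + extraction stages (web articles only).
# Seed URLs and the extraction step are illustrative placeholders.
import requests
from bs4 import BeautifulSoup

SEED_URLS = [
    "https://example-health-site.org/pregnancy/foods",  # hypothetical source
]

def fetch_article(url: str) -> str:
    """Download an article and reduce it to readable text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):  # drop page chrome
        tag.decompose()
    return soup.get_text(separator="\n", strip=True)

def extract_food_guidance(text: str) -> list[dict]:
    """Turn raw article text into (food item, guidance) records.
    The real pipeline does this with LLM assistance; stubbed out here."""
    return []  # e.g. [{"food": "soft cheese", "guidance": "avoid unpasteurized varieties"}]
```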

A key design decision was to not flatten the reliability signal — rather than presenting a single verdict, the system retains source-level credibility scores and surfaces them to the user. This respects the user's ability to make informed judgments rather than hiding uncertainty behind a false sense of authority.
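
At the data level, that decision looks roughly like the sketch below. The field names, source types, and weights are illustrative assumptions, not the production schema; the point is that each guidance record carries its own source and score all the way to the UI.

```python
# Each food entry keeps every source and its credibility score,
# instead of collapsing them into one safe/unsafe verdict.
from dataclasses import dataclass

# Illustrative base weights per source type (not the production values).
SOURCE_TYPE_WEIGHT = {
    "medical_institution": 0.9,
    "health_article": 0.6,
    "youtube": 0.4,
    "instagram": 0.3,
}

def score_source(source_type: str) -> float:
    """Simplified credibility score derived from the source type alone."""
    return SOURCE_TYPE_WEIGHT.get(source_type, 0.2)

@dataclass
class SourcedGuidance:
    food: str
    guidance: str        # e.g. "safe if made from pasteurized milk"
    source_url: str
    source_type: str     # key into SOURCE_TYPE_WEIGHT
    reliability: float   # per-source score surfaced to the user

entry = SourcedGuidance(
    food="brie",
    guidance="safe if made from pasteurized milk and eaten fresh",
    source_url="https://example-hospital.org/pregnancy-diet",  # hypothetical
    source_type="medical_institution",
    reliability=score_source("medical_institution"),
)
```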

3. LLM Features

When the database doesn't have the answer

No database covers everything — especially for niche or uncommon foods. Rather than showing a blank result, MammaMe uses an LLM-based autonomous agent pipeline to fill the gap in real time.

✦ LLM-Based Autonomous Agent Pipeline

When a user searches for a food item not in the database, the system:

1. Identifies the missing entry
2. Analyzes related ingredients and food profiles
3. Autonomously retrieves and synthesizes information from unstructured web sources
4. Verifies output reliability
5. Returns a structured, annotated response to the user

This agentic approach means the app handles the long tail of edge cases gracefully — without requiring manual database updates. I evaluated the system's outputs iteratively and optimized for accuracy based on user performance metrics from testing sessions.
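
Compressed into code, that loop looks roughly like the sketch below. I am assuming the OpenAI chat completions client here; the model name, the prompts, and the injected web_search helper are placeholders rather than the deployed pipeline.

```python
# Sketch of the fallback agent for foods missing from the database.
# Model name, prompts, and the web_search helper are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def handle_missing_food(food: str, web_search) -> dict:
    # step 2: reason about what the food is made of
    ingredients = ask(f"List the typical ingredients of {food}, one per line.")
    # step 3: retrieve unstructured evidence about the food and its ingredients
    evidence = web_search(f"{food} pregnancy safety\n{ingredients}")
    # steps 3-4: synthesize guidance and check it against the retrieved evidence
    answer = ask(
        "Using only the evidence below, say whether this food is safe during pregnancy "
        "and cite which snippet supports each claim.\n\n"
        f"Food: {food}\n\nEvidence:\n{evidence}"
    )
    # step 5: structured, annotated response for the UI
    return {"food": food, "guidance": answer, "synthesized": True, "evidence": evidence}
```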

Why this matters technically

The pipeline addresses data sparsity — a fundamental challenge in domain-specific information systems. The agent doesn't just retrieve raw text; it reasons about ingredient similarity, cross-references safety principles, and synthesizes a coherent answer with appropriate reliability signals. This is the same class of challenge I later wrote about in research contexts: bridging natural-language user intent and complex structured information.
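
Sketched below is how that fallback plugs into the API layer with FastAPI (which is in the stack); the route shape and the db_lookup / run_agent helpers are illustrative stubs, not the real service.

```python
# Database-first lookup with agentic fallback, as a FastAPI endpoint.
# The route shape and both helpers are illustrative stubs.
from fastapi import FastAPI

app = FastAPI()

def db_lookup(food: str) -> dict | None:
    """Return the curated entry with its source-level scores, or None on a miss."""
    ...

def run_agent(food: str) -> dict:
    """Invoke the autonomous agent pipeline for foods not in the database."""
    ...

@app.get("/foods/{name}")
def get_food(name: str) -> dict:
    entry = db_lookup(name)
    if entry is not None:
        return {**entry, "synthesized": False}   # curated result, per-source scores intact
    # Data sparsity path: synthesize an answer in real time and label it as such.
    return {**run_agent(name), "synthesized": True}
```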

4. Reliability Visualization

Making uncertainty visible

A central design goal was helping users interpret information, not just consume it. Pregnancy health advice is a domain where over-trust can be as harmful as no information at all.

For every food item — whether from the database or generated by the LLM agent — MammaMe visualizes:

Signal 1

Source reliability score

How credible is the source this guidance came from? Medical institution vs. personal blog vs. social media post — each carries different weight, made visible.

Signal 2

Confidence indicator

For LLM-generated results, the system signals that the answer was synthesized rather than directly sourced from the curated database.

Signal 3

Source links

Users can always trace back to the original source. Transparency over authority.

Design Principle

Don't hide uncertainty — visualize it. Help users calibrate their trust rather than deciding for them.
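
In code, mapping one result onto the three signals can be as small as the sketch below; the thresholds, labels, and field names are illustrative assumptions, not the production logic.

```python
# Map a curated or synthesized result onto the three reliability signals.
# Thresholds, labels, and field names are illustrative assumptions.
def display_signals(entry: dict) -> dict:
    score = entry.get("reliability", 0.0)
    label = "high" if score >= 0.7 else "medium" if score >= 0.4 else "low"
    return {
        "source_reliability": {"score": score, "label": label},           # Signal 1
        "confidence": ("synthesized by agent" if entry.get("synthesized")
                       else "from curated database"),                     # Signal 2
        "source_links": entry.get("source_urls", []),                     # Signal 3
    }
```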

5. UX Process

Full HCI design lifecycle

I applied the same research-informed design process I use in academic work, this time independently and on a real product.

01

Problem Statement

Identified the specific failure mode: not just bad information, but users unable to assess the quality of the information they were getting.

02

Wireframes & Prototypes

Iterated on information architecture and reliability visualization — how to surface source credibility without cluttering the interface or overwhelming users.

03

User Testing with Pregnant Women

Conducted informal usability sessions with expectant mothers in my network. Focused on whether users correctly interpreted reliability signals and felt confident in their decisions.

04

Iterative UX Refinement

Revised the information layout, visual hierarchy, and reliability indicators based on observed behavior, feedback, and task performance from the testing sessions.

05

Deployment

Launched as a live application.

6. Tech Stack

Kotlin Multiplatform · FastAPI · LLM (OpenAI) · Web crawling / scraping · Custom food safety DB · Reliability scoring pipeline · Agentic retrieval pipeline · Information visualization