Problem
Users need a single place to understand nutrition, sleep, mood, and habits — without juggling five apps or drowning in raw numbers.
Approach
- Multimodal capture — Voice, image, text, and barcode flows all normalize into one schema before hitting the AI layer.
- Wearables — Google Fit and smartwatch data provide context (sleep debt, activity level) for recommendations.
- Embeddings + LLMs — Vector search over meals and patterns keeps responses grounded in the user’s history.
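The first step above — normalizing every capture modality into one schema before the AI layer — can be sketched roughly as below. The `CaptureEvent` shape and the normalizer functions are hypothetical names for illustration, not the actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical unified schema: every modality (voice, image, text, barcode)
# is reduced to this shape before the AI layer ever sees it.
@dataclass
class CaptureEvent:
    modality: str      # "voice" | "image" | "text" | "barcode"
    kind: str          # e.g. "meal", "mood", "habit"
    payload: dict      # modality-specific fields, already parsed
    captured_at: str   # ISO-8601 UTC timestamp

def normalize_barcode(code: str) -> CaptureEvent:
    # A barcode scan resolves to a product code; nutrition lookup happens downstream.
    return CaptureEvent(
        modality="barcode",
        kind="meal",
        payload={"product_code": code},
        captured_at=datetime.now(timezone.utc).isoformat(),
    )

def normalize_text(note: str) -> CaptureEvent:
    # Free-text notes pass through raw; the AI layer extracts entities later.
    return CaptureEvent(
        modality="text",
        kind="meal",
        payload={"raw_text": note},
        captured_at=datetime.now(timezone.utc).isoformat(),
    )

event = normalize_barcode("0123456789012")
```

Because everything downstream consumes `CaptureEvent` rather than modality-specific shapes, adding a new input flow only means writing one more normalizer.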
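The vector-search grounding works by embedding the user's logged meals and retrieving the most similar entries as context for the LLM. A minimal sketch with toy 3-dimensional embeddings standing in for real model output (the helper names are illustrative, not the production code):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, history, k=2):
    # history: list of (text, embedding) pairs from the user's logged meals.
    scored = sorted(history, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]

# Toy embeddings; a real system would use a sentence-embedding model.
history = [
    ("oatmeal with berries", [0.9, 0.1, 0.0]),
    ("grilled chicken salad", [0.1, 0.9, 0.1]),
    ("late-night pizza", [0.0, 0.2, 0.9]),
]

# Retrieved entries are injected into the LLM prompt so answers
# reference the user's own history instead of generic advice.
context = top_k([0.85, 0.15, 0.05], history, k=1)
```

At scale this brute-force scan would be replaced by a vector index, but the grounding principle is the same: the prompt only ever cites meals the user actually logged.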
Outcome
- MVP shipped on an aggressive timeline with 400+ pre-registrations.
- Clear separation between ingestion, reasoning, and presentation layers — easier to extend than a monolithic prompt.