EmpathAR // Roadmap
Where We're Going.

EmpathAR is a live prototype built for the Creative Hackathon. This roadmap outlines what's shipped, what's in progress, and where the product goes next.


Shipped v1.0 — Core AR Engine
  • Real-time pose + face landmarking via MediaPipe
  • Social Battery score (posture, expression, movement)
  • Five battery states with contextual tip feed
  • Multi-person tracking with persistent labels
  • Gesture detection: clapping, frowning, yawning, head prop
  • On-device only — zero data transmitted
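
The Social Battery score above could be sketched roughly as follows. This is an illustrative TypeScript sketch, not the shipped implementation: the signal names, weights, state labels, and thresholds are all assumptions.

```typescript
// Illustrative Social Battery scoring: blend three normalised signals
// into a 0..100 score, then map the score onto five battery states.
// Weights, labels, and thresholds are invented for this sketch.

type BatteryState = "charged" | "engaged" | "steady" | "drained" | "depleted";

interface Signals {
  posture: number;    // 0..1, e.g. shoulder/spine openness from pose landmarks
  expression: number; // 0..1, e.g. smile vs. frown from face landmarks
  movement: number;   // 0..1, normalised motion energy between frames
}

function batteryScore(s: Signals): number {
  // Weighted blend of the three signals, clamped to 0..100.
  const raw = 0.4 * s.posture + 0.4 * s.expression + 0.2 * s.movement;
  return Math.round(Math.min(1, Math.max(0, raw)) * 100);
}

function batteryState(score: number): BatteryState {
  if (score >= 80) return "charged";
  if (score >= 60) return "engaged";
  if (score >= 40) return "steady";
  if (score >= 20) return "drained";
  return "depleted";
}
```

In a real frame loop the three signals would come from the MediaPipe landmark results; here they are just plain numbers so the scoring logic stays testable on its own.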

In Progress v1.1 — Signal Calibration
  • Mobile layout polish and performance optimisation
  • Improved lighting compensation for face landmarking
  • Per-user baseline calibration for frown and smile sensitivity
  • Tip relevance scoring to reduce repeated suggestions
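
Per-user baseline calibration might look something like the sketch below, which assumes a running mean and variance (Welford's algorithm) over each user's raw expression signal, with frowns flagged by deviation from that user's own baseline rather than a global threshold. The class name and the z-score approach are illustrative assumptions, not the actual design.

```typescript
// Sketch of per-user baseline calibration: track a running mean and
// variance of a raw expression signal with Welford's algorithm, so a
// frown is only flagged when the signal deviates from this user's
// own baseline.

class BaselineCalibrator {
  private n = 0;
  private mean = 0;
  private m2 = 0; // sum of squared deviations from the running mean

  update(x: number): void {
    this.n += 1;
    const delta = x - this.mean;
    this.mean += delta / this.n;
    this.m2 += delta * (x - this.mean);
  }

  // z-score of a new sample against this user's baseline so far
  zScore(x: number): number {
    if (this.n < 2) return 0;
    const std = Math.sqrt(this.m2 / (this.n - 1));
    return std === 0 ? 0 : (x - this.mean) / std;
  }
}
```

A caller would feed in each frame's raw mouth-curvature value and, for example, treat a z-score below -2 as a frown for that specific person.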

Next v1.2 — Session Memory & Beyond
  • Session summary: battery trends over time per person
  • Group dynamics mode: room-level energy heatmap
  • Integrations: Zoom, Meet, Teams overlay via browser extension
  • EmpathAR SDK for third-party applications
  • Opt-in anonymised research dataset for social signal models
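
The session summary item could, for instance, reduce a per-person score history to a simple trend. A minimal sketch, assuming a least-squares slope heuristic; the names and thresholds are invented for illustration.

```typescript
// Illustrative session summary: keep timestamped battery scores per
// person and report a coarse trend via a least-squares slope.

interface Sample { t: number; score: number }

function trend(samples: Sample[]): "rising" | "falling" | "flat" {
  if (samples.length < 2) return "flat";
  const n = samples.length;
  const mt = samples.reduce((a, s) => a + s.t, 0) / n;
  const ms = samples.reduce((a, s) => a + s.score, 0) / n;
  let num = 0;
  let den = 0;
  for (const s of samples) {
    num += (s.t - mt) * (s.score - ms);
    den += (s.t - mt) ** 2;
  }
  const slope = den === 0 ? 0 : num / den;
  if (slope > 0.5) return "rising";
  if (slope < -0.5) return "falling";
  return "flat";
}
```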

Next v1.3 — Audio Intelligence
  • Audio detection: live mic input layered alongside body language signals
  • Voice-to-text transcription of speech detected during active sessions
  • Context-aware tips personalised to what was said, not just how someone looked
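
Context-aware tips could combine the latest transcript snippet with the current battery state. A toy sketch only; the keywords, tip copy, and function name are invented for illustration, and a real transcript would come from live speech recognition rather than a string argument.

```typescript
// Toy sketch of context-aware tips: match the transcript against a
// few keywords, conditioned on the current battery state. Returns
// null when no spoken context applies, so the caller can fall back
// to the visual-only tip feed.

function contextTip(transcript: string, state: string): string | null {
  const text = transcript.toLowerCase();
  if (state === "drained" && /\b(tired|exhausted|drained)\b/.test(text)) {
    return "They mentioned being tired; consider wrapping up soon.";
  }
  if (/\bweekend\b/.test(text)) {
    return "Weekend plans came up; a follow-up question keeps energy high.";
  }
  return null;
}
```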

Next v1.4 — Animal Mode
  • Animal social battery detection for dogs and cats
  • Species-specific skeleton remapping for quadruped pose estimation
  • Tail, ear, and posture signal interpretation per species
  • Sentiment states calibrated for animal body language
  • Accuracy improvements across breeds and coat colours
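
Skeleton remapping for quadrupeds might start from a per-species lookup table from pose-model keypoint indices to joint names, so downstream signal code can stay shared across species. Purely illustrative: the indices, joint names, and function shape here are assumptions, not a real keypoint layout.

```typescript
// Illustrative species-specific remapping: a per-species table maps
// generic keypoint indices onto named quadruped joints. Indices and
// joint names are placeholders, not a real model's layout.

type Species = "dog" | "cat";

const JOINT_MAP: Record<Species, Record<number, string>> = {
  dog: { 0: "nose", 1: "leftEar", 2: "rightEar", 3: "tailBase", 4: "tailTip" },
  cat: { 0: "nose", 1: "leftEar", 2: "rightEar", 3: "tailBase", 4: "tailTip" },
};

function remap(
  species: Species,
  keypoints: number[][], // [x, y] per detected keypoint, indexed by model output order
): Record<string, number[]> {
  const out: Record<string, number[]> = {};
  for (const [idx, name] of Object.entries(JOINT_MAP[species])) {
    const kp = keypoints[Number(idx)];
    if (kp) out[name] = kp; // skip joints the model did not detect
  }
  return out;
}
```

Tail and ear signals would then be computed from the named joints (e.g. tailTip relative to tailBase) instead of raw indices, keeping the interpretation layer species-agnostic.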

[ Launch EmpathAR ]

MediaPipe + WebGL // On-Device