What's Next // Product Roadmap
Where We're Going.
EmpathAR is a live prototype built for the Creative Hackathon. This roadmap outlines what's shipped, what's in progress, and where the product goes next.
Phases

Shipped
- Real-time pose and face landmarking via MediaPipe
- Social Battery score (posture, expression, movement)
- Five battery states with a contextual tip feed
- Multi-person tracking with persistent labels
- Gesture detection: clapping, frowning, yawning, head prop
- On-device only; zero data transmitted

In Progress
- Mobile layout polish and performance optimisation
- Improved lighting compensation for face landmarking
- Per-user baseline calibration for frown and smile sensitivity
- Tip relevance scoring to reduce repeated suggestions

Next
- Session summary: battery trends over time per person
- Group dynamics mode: room-level energy heatmap
- Integrations: Zoom, Meet, and Teams overlays via browser extension
- EmpathAR SDK for third-party applications
- Opt-in anonymised research dataset for social-signal models

Audio Detection
- Live mic input layered alongside body-language signals
- Voice-to-text transcription of speech detected during active sessions
- Context-aware tips personalised to what was said, not just how someone looked

Animal Detection
- Social battery detection for dogs and cats
- Species-specific skeleton remapping for quadruped pose estimation
- Tail, ear, and posture signal interpretation per species
- Sentiment states calibrated for animal body language
- Accuracy improvements across breeds and coat colours
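To make the core mechanic concrete: the Social Battery score blends the three shipped signals (posture, expression, movement) into a single value that maps onto one of the five battery states. EmpathAR's actual weighting and thresholds are not published, so the weights, state names, and cutoffs in this sketch are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    posture: float     # 0.0 slumped .. 1.0 upright
    expression: float  # 0.0 frowning .. 1.0 smiling
    movement: float    # 0.0 motionless .. 1.0 animated

# Assumed weights; a shipped model could tune these per user.
WEIGHTS = {"posture": 0.4, "expression": 0.35, "movement": 0.25}

# Five battery states mapped from score thresholds (names and
# cutoffs are placeholders, not the product's real labels).
STATES = [(80, "charged"), (60, "steady"), (40, "dipping"),
          (20, "drained"), (0, "empty")]

def battery_score(s: Signals) -> float:
    """Weighted blend of the three body-language signals, scaled 0-100."""
    raw = (WEIGHTS["posture"] * s.posture
           + WEIGHTS["expression"] * s.expression
           + WEIGHTS["movement"] * s.movement)
    return round(100 * raw, 1)

def battery_state(score: float) -> str:
    """Map a 0-100 score onto one of the five battery states."""
    for threshold, name in STATES:
        if score >= threshold:
            return name
    return "empty"

if __name__ == "__main__":
    frame = Signals(posture=0.9, expression=0.7, movement=0.5)
    score = battery_score(frame)
    print(score, battery_state(score))
```

A per-frame score this simple would be noisy in practice; smoothing it over a sliding window before mapping to a state is the obvious next step.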
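The in-progress per-user baseline calibration could work along these lines: instead of comparing a raw expression value against a global frown/smile threshold, track each user's neutral resting value with an exponentially weighted moving average and score frames by their deviation from that baseline. The class name, the smoothing factor, and the mouth-corner example are assumptions, not EmpathAR's implementation.

```python
class ExpressionBaseline:
    """Per-user neutral-expression baseline via an exponentially
    weighted moving average (EWMA)."""

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha     # smoothing factor; smaller = slower adaptation
        self.baseline = None   # learned neutral expression value

    def update(self, value: float) -> float:
        """Fold one frame's raw expression value (e.g. a mouth-corner
        angle) into the baseline and return the deviation from it.
        Positive deviations lean smile-ward, negative frown-ward."""
        if self.baseline is None:
            self.baseline = value  # seed with the first observation
        else:
            self.baseline += self.alpha * (value - self.baseline)
        return value - self.baseline

cal = ExpressionBaseline()
cal.update(0.50)               # first frame seeds the baseline
deviation = cal.update(0.65)   # later frame, scored against the baseline
```

This is what makes frown sensitivity personal: a user whose resting face reads slightly downturned is compared against their own neutral, not a population average.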