Peekr: Browser-Based Eye Tracking
Overview
Peekr is a lightweight, browser-based webcam eye tracking system that runs entirely in the browser with no installation required and no data sent to servers. The TTF-DDG completely repackaged this system over one intensive week, transforming it into a modern, production-ready platform for online eye tracking research.
Status: ✅ Completed & Deployed
Technology: Web-based eye tracking (ONNX, MediaPipe, JavaScript)
Platform: Browser (cross-platform, no installation)
Repository: github.com/HugoFara/peekr
Live Demo: hugofara.github.io/peekr
Project Background
Original Development
Peekr was originally developed by Aryaman Taore, a Stanford PhD student and visual neuroscientist working at the Dakin Lab and Stanford Brain Development & Education Lab.
Training Data:
- 268,000+ image frames
- 264 participants recruited via Prolific
- Real-world diversity (participants' own setups)
Validated Performance (after calibration):
- Mean horizontal error: 1.53 cm (~1.75° visual angle)
- Mean vertical error: 2.20 cm (~2.52° visual angle)
- Tested with 30 participants on personal setups
- No supervision required
TTF-DDG Collaboration
The TTF-DDG identified Peekr as a promising foundation for online eye tracking research and undertook a comprehensive modernization effort to make it production-ready for research applications.
TTF-DDG Modernization (One Week Sprint)
Timeline
Duration: July 18–31, 2025
Scope: Complete repackaging and modernization
Commits: ~58 commits during the modernization period
Result: Production-ready online eye tracking platform
Major Achievements
1. Build System Modernization
Challenge: Original system used outdated Rollup configuration with manual WebAssembly integration.
Solution: Complete migration to Vite build system
Impact:
- ✅ One-command development server (npm run dev)
- ✅ Simplified WebAssembly package integration
- ✅ Modern JavaScript bundling
- ✅ Fast hot-reload during development
- ✅ Optimized production builds
- ✅ Cleaner project structure
Key Commits:
- feat(vite): moving from Rollup to Vite
- feat(builder): switch from rollup to vite
- refactor(vite): simplifies configuration further
2. Online Calibration System
Challenge: Original implementation lacked user-friendly calibration workflow.
Solution: Implemented comprehensive assisted calibration system
Features:
- Interactive 5-point calibration (4 corners + center)
- Visual feedback with animated calibration dot
- Automatic gaze data collection during calibration
- Least-squares error optimization
- Physical distance-to-screen parameter
- Real-time calibration quality assessment
Technical Approach:
- Separate calibration logic module
- Event-driven calibration state machine
- Linear regression for X/Y axis correction
- User-friendly interface with clear instructions
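The per-axis linear regression above can be sketched as an ordinary least-squares fit over calibration samples. This is an illustrative reconstruction, not Peekr's actual implementation; the function names and the example screen values are hypothetical:

```javascript
// Illustrative sketch of per-axis linear calibration (names are hypothetical;
// Peekr's actual calibration code may differ).
// Fits target = a * raw + b, minimizing squared error over the samples.
function fitLinear(raw, target) {
  const n = raw.length;
  let sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
  for (let i = 0; i < n; i++) {
    sumX += raw[i];
    sumY += target[i];
    sumXY += raw[i] * target[i];
    sumXX += raw[i] * raw[i];
  }
  const a = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
  const b = (sumY - a * sumX) / n;
  return { a, b };
}

// During calibration, raw model outputs are collected while the participant
// fixates each known dot position; one fit is computed per axis.
const fitX = fitLinear([0.1, 0.5, 0.9], [0, 960, 1920]); // raw x → screen px
const correct = (rawX) => fitX.a * rawX + fitX.b;
```

The same fit is run independently for the Y axis, which is what allows the X/Y axes to be corrected separately.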
Key Commits:
- feat(calibration): adds a new calibration environment
- feat(calibration): using 3 way, physically based calibration
- feat(calibration): use least-square error, and recompute distance to screen
- feat(calbriation): better MSE algorithm
3. Kalman Filter Integration
Challenge: Original implementation used frame-by-frame processing with no temporal smoothing, resulting in jittery gaze predictions.
Solution: Integrated Kalman filtering for temporal data dependency
Technical Implementation:
// Two 1-D Kalman filters (X and Y axes)
import KalmanFilter from 'kalmanjs';

const kalmanFilters = {
  x: new KalmanFilter(),
  y: new KalmanFilter()
};

function applyFilter(x, y) {
  return [kalmanFilters.x.filter(x), kalmanFilters.y.filter(y)];
}
Impact:
- 🎯 ~2x precision improvement with minimal code
- Smoother gaze trajectories
- Better handling of brief occlusions
- Reduced noise in gaze data
- No significant latency added
Key Commit:
feat(eye-tracking): adds Kalman filter in post-process (July 21, 2025)
4. Code Architecture Refactoring
Challenge: Monolithic code structure mixing UI, logic, and core functionality.
Solution: Clean separation into modular components
New Architecture:
src/
├── index.js # UI bindings and main entry point
├── core.js # Core eye tracking logic and filtering
├── eyetracking.js # Video input, face mesh, model communication
├── worker.js # Web worker for ONNX model (off main thread)
└── style.css # Styling
Benefits:
- Clear separation of concerns
- Easier testing and maintenance
- Reusable core components
- Better code documentation
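The worker boundary in this structure implies a small message protocol between the UI thread and worker.js. A minimal sketch of the main-thread side is shown below; the message types and field names are assumptions for illustration, not Peekr's actual protocol:

```javascript
// Hypothetical sketch of the main-thread side of the worker protocol
// (message shapes are illustrative, not Peekr's actual API).
function makeInferenceRequest(leftEye, rightEye, keypoints) {
  return { type: 'infer', leftEye, rightEye, keypoints };
}

function handleWorkerMessage(msg, callbacks) {
  switch (msg.type) {
    case 'ready':
      callbacks.onReady();          // model finished loading in the worker
      return 'ready';
    case 'gaze':
      callbacks.onGaze(msg.output); // raw [x, y] prediction from the model
      return 'gaze';
    default:
      return 'ignored';
  }
}

// In the browser this would be wired to the real worker, e.g.:
//   const worker = new Worker('worker.js');
//   worker.onmessage = (e) => handleWorkerMessage(e.data, callbacks);
```

Keeping the routing logic in plain functions like these is one reason the modular split makes unit testing easier: they can be exercised without a browser.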
Key Commits:
- refactor(index): clearly separate the DOM (index), from the code (other files)
- refactor(index): moving to core, change in the project structure
- feat(index): new code structure with UI bindings as the main entry point
5. CI/CD & Deployment
Challenge: No automated deployment pipeline.
Solution: Complete GitHub Actions CI/CD setup
Features:
- Automated builds on every push
- GitHub Pages deployment
- Build artifact management
- Version tracking
Result: Live demo automatically updated at hugofara.github.io/peekr
Key Commits:
- feat(actions): build to GH Pages
- fix(ci): broken links in GH deployement
- Multiple CI fixes and refinements
6. Testing Infrastructure
Challenge: No automated testing.
Solution: Implemented unit testing with Vitest
Coverage:
- Core functionality tests
- Calibration logic validation
- Filter behavior verification
Key Commit:
feat(tests): adds some unit tests
7. Documentation & UX Improvements
Enhancements:
- ✅ Comprehensive README with installation instructions
- ✅ Demo video added to repository
- ✅ Version number displayed in interface
- ✅ Visual feedback improvements (color-coded tracking dot)
- ✅ Cleaner UI with better logging
- ✅ Contributing guidelines
- ✅ Detailed changelog
Key Commits:
- doc(README): reorganizing docs for the final version
- doc(README.md): adds a demo video
- feat(title): adds version info to the main page
- refactor(tracker): change tracker dot from red to green
Technical Architecture
System Components
1. Face Detection & Landmark Extraction
- Technology: MediaPipe Face Mesh
- Process: Detects face and extracts 468 3D facial landmarks
- Purpose: Locate eye regions for precise tracking
2. Eye Region Processing
- Left/Right Eye Canvases: 128×128 pixel regions
- Preprocessing: Image data normalization for model input
- Key Points: Facial landmark subset for gaze estimation
3. ONNX Gaze Model
- Format: Pre-trained ONNX model (peekr.onnx)
- Execution: Web Worker (off main thread for performance)
- Runtime: onnxruntime-web
- Inputs: Left eye image, right eye image, facial keypoints
- Output: Raw gaze coordinates [x, y] in range ~[0, 1]
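As a sketch of the preprocessing that feeds such a model, the 128×128 eye-crop pixels can be flattened into a normalized Float32Array. The RGB channel layout and [0, 1] scaling here are assumptions for illustration, not the model's documented input format:

```javascript
// Hypothetical preprocessing for a 128×128 eye crop (channel layout and
// scaling are assumptions; the actual model input format may differ).
function eyeImageToTensorData(imageData) {
  // imageData mimics canvas ImageData: RGBA bytes, row-major
  const { data, width, height } = imageData;
  const out = new Float32Array(width * height * 3);
  for (let i = 0, px = 0; i < data.length; i += 4, px += 3) {
    out[px] = data[i] / 255;         // R, scaled to [0, 1]
    out[px + 1] = data[i + 1] / 255; // G
    out[px + 2] = data[i + 2] / 255; // B (alpha channel dropped)
  }
  return out;
}
```

With onnxruntime-web, an array like this would typically be wrapped in an ort.Tensor('float32', data, dims) before being passed to session.run().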
4. Post-Processing Pipeline
Filtering:
- Kalman filters (separate X and Y channels)
- Temporal smoothing for stability
Calibration:
- Linear transformation based on calibration data
- Distance-to-screen correction
- X/Y axis independent adjustment
Output:
- Screen coordinates (pixels)
- Visual feedback (tracking dot)
- Event callbacks for application integration
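Putting the pipeline together, the final step after Kalman smoothing can be sketched as a per-axis affine map from the model's ~[0, 1] output to screen pixels. The coefficient values below are hypothetical stand-ins for calibration results, not actual Peekr parameters:

```javascript
// Hypothetical post-processing: raw normalized gaze → screen coordinates.
// The slope/intercept values stand in for results of the calibration fit.
const calibration = {
  x: { slope: 1920, intercept: 0 }, // maps raw x in [0, 1] to px on a 1920-wide screen
  y: { slope: 1080, intercept: 0 }
};

function toScreenCoords(rawX, rawY) {
  return {
    x: calibration.x.slope * rawX + calibration.x.intercept,
    y: calibration.y.slope * rawY + calibration.y.intercept
  };
}
```

Because the two axes are corrected independently, a participant whose webcam sits off-center only shifts the intercepts, not the overall pipeline.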
Technology Stack
- Build System: Vite 7.0+
- Runtime: Browser (WebAssembly + Web Workers)
- ML Framework: ONNX Runtime Web 1.22.0
- Computer Vision: MediaPipe (via CDN)
- Signal Processing: kalmanjs 1.1.0
- Array Operations: ndarray + ndarray-ops
- Testing: Vitest 3.2+
- Linting: ESLint 9.x with @stylistic plugin
Key Features
For Researchers
✅ No Installation Required: Runs entirely in browser
✅ Privacy-Preserving: No data sent to servers
✅ Cross-Platform: Works on any modern browser
✅ Easy Deployment: Host anywhere (GitHub Pages, your server)
✅ Accessible: Participants need only webcam and browser
✅ Production-Ready: Tested and validated
For Developers
✅ Modern Build System: Vite for fast development
✅ Modular Architecture: Clean code separation
✅ Web Worker: Non-blocking model inference
✅ TypeScript-Ready: ES modules with modern JavaScript
✅ Testing Infrastructure: Vitest for unit tests
✅ CI/CD Pipeline: Automated builds and deployment
Technical Features
✅ Temporal Filtering: Kalman filters for smooth tracking
✅ Assisted Calibration: User-friendly 5-point calibration
✅ Real-Time Processing: Low-latency gaze prediction
✅ Visual Feedback: Color-coded tracking indicators
✅ Extensible API: Easy integration into experiments
Integration & Usage
Basic Integration
import * as Peekr from './path/to/peekr';

// Initialize eye tracking
Peekr.initEyeTracking({
  onReady: () => {
    console.log('Eye tracking ready');
    Peekr.runEyeTracking();
  },
  onGaze: (gaze) => {
    // gaze.output.cpuData = [x, y] in range ~[0, 1]
    const [x, y] = gaze.output.cpuData;
    // Your experiment logic here
  }
});
Auto-Binding for Quick Setup
Peekr.applyAutoBindings({
  buttons: {
    initBtn: document.getElementById('init'),
    startBtn: document.getElementById('start'),
    stopBtn: document.getElementById('stop'),
    calibBtn: document.getElementById('calibrate')
  },
  inputs: {
    distInput: document.getElementById('distance'),
    xInterceptInput: document.getElementById('x-intercept'),
    yInterceptInput: document.getElementById('y-intercept')
  },
  log: document.getElementById('log'),
  gazeDot: document.getElementById('gaze-dot'),
  calibrationDot: document.getElementById('calibration-dot')
});
Development Workflow
# Clone and install
git clone https://github.com/HugoFara/peekr.git
cd peekr
npm install
# Development with hot reload
npm run dev
# Production build
npm run build
# Run tests
npm run test
# Lint code
npm run lint
Performance & Validation
Accuracy Metrics
Post-Calibration Performance (30 participants, personal setups):
| Metric | Value | Visual Angle |
|---|---|---|
| Horizontal Error (mean) | 1.53 cm | ~1.75° |
| Vertical Error (mean) | 2.20 cm | ~2.52° |
Test Conditions:
- Unsupervised use on personal computers
- 5-point calibration (corners + center)
- Linear fit applied to X/Y axes
- Real-world diversity in setups
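The visual-angle figures above follow from the on-screen error and the viewing distance via the standard geometry θ = 2·atan(s / 2D). The ~50 cm viewing distance used below is an assumption chosen because it reproduces the reported numbers, not a documented study parameter:

```javascript
// Converts an on-screen error (cm) to visual angle (degrees) at a given
// viewing distance. The 50 cm distance is an assumption for illustration.
function cmToVisualAngle(errorCm, distanceCm) {
  const radians = 2 * Math.atan(errorCm / (2 * distanceCm));
  return radians * (180 / Math.PI);
}

const horizontal = cmToVisualAngle(1.53, 50); // ≈ 1.75°
const vertical = cmToVisualAngle(2.20, 50);   // ≈ 2.52°
```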
Stability Improvements
Kalman Filter Impact:
- ~2x precision improvement
- Smoother gaze trajectories
- Better temporal consistency
- Minimal computational overhead
Research Applications
Current Use Cases
- Online Experiments: Remote eye tracking studies
- Accessibility Research: Web-based gaze interaction
- Cognitive Studies: Visual attention patterns
- Reading Research: Eye movement during text processing
- UI/UX Research: Gaze-based usability testing
Future Applications
- Language Processing: Online psycholinguistics
- Visual Search: Attention allocation studies
- Scene Perception: Free viewing experiments
- Comparative Research: Cross-platform studies
Knowledge Transfer & Impact
Foundation for Great Apes Eye Tracking
The Peekr modernization served as a knowledge-building exercise for the TTF-DDG team, providing critical experience for the ongoing Eye Tracking for Great Apes project.
Transferable Skills:
- Eye tracking system architecture
- Calibration workflow design
- Temporal filtering techniques (Kalman filters)
- Real-time video processing
- Computer vision integration
- User interface for eye tracking tasks
Technical Insights:
- Web-based eye tracking feasibility
- Performance optimization strategies
- Calibration algorithm design
- Data quality assurance methods
Learn more about the Great Apes project →
Broader Impact
For NCCR Research:
- Enables online eye tracking studies across work packages
- Reduces barriers to eye tracking research
- Provides open-source foundation for custom tasks
For Research Community:
- Openly available on GitHub
- MIT licensed for broad use
- Documented and tested codebase
- Active maintenance and support
Development Statistics
Contribution Metrics
TTF-DDG Modernization (July 18 - August 6, 2025):
- Commits: 58 commits in primary development period
- Total Project Commits: 78 (TTF-DDG: 63, Original: 15)
- Author Distribution: ~80% TTF-DDG contributions
- Time Investment: ~1 week intensive development
Code Evolution
Major Version Milestone: v1.1.0 (July 30, 2025)
Key Changes:
- Complete build system rewrite (Rollup → Vite)
- New calibration system
- Kalman filter integration
- Architecture refactoring
- Testing infrastructure
- CI/CD pipeline
- Documentation overhaul
Lessons Learned
Technical Insights
- Vite Migration: Modern build systems dramatically simplify WebAssembly integration
- Kalman Filtering: Minimal code investment (~5 lines) for major quality improvement
- Web Workers: Essential for keeping UI responsive during model inference
- Modular Architecture: Clean separation enables easier testing and maintenance
- Assisted Calibration: UX improvements crucial for participant compliance
Development Process
- Rapid Iteration: Daily commits enabled quick problem-solving
- Incremental Improvements: Small, focused commits easier to debug
- Documentation Matters: Clear README reduces support burden
- CI/CD Value: Automated deployment catches issues early
- Testing Foundation: Unit tests provide confidence for refactoring
Research Implications
- Online Eye Tracking Viable: Browser-based tracking sufficient for many studies
- Privacy Benefits: Local processing addresses data concerns
- Accessibility: No installation lowers participation barriers
- Calibration Critical: User-friendly calibration essential for data quality
- Temporal Filtering: Often overlooked but high-impact improvement
Future Enhancements
Planned Improvements
Technical:
- Multi-screen support
- Additional filtering options
- Calibration quality metrics
- Export functionality for data analysis
- Integration templates for common platforms
Research Features:
- Areas of interest (AOI) definition
- Fixation detection algorithms
- Saccade analysis tools
- Heatmap generation
- Data visualization dashboard
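To illustrate one direction the planned fixation detection could take, a dispersion-threshold (I-DT) pass over gaze samples is sketched below. The thresholds, units, and sample format are assumptions, not a committed design:

```javascript
// Illustrative dispersion-threshold (I-DT) fixation detection sketch.
// samples: [{ x, y, t }] with x/y in px and t in ms (assumed units).
function detectFixations(samples, maxDispersion = 50, minDuration = 100) {
  const fixations = [];
  let start = 0;
  while (start < samples.length) {
    let end = start;
    let minX = samples[start].x, maxX = minX;
    let minY = samples[start].y, maxY = minY;
    // Grow the window while points stay within the dispersion threshold
    while (end + 1 < samples.length) {
      const p = samples[end + 1];
      const nMinX = Math.min(minX, p.x), nMaxX = Math.max(maxX, p.x);
      const nMinY = Math.min(minY, p.y), nMaxY = Math.max(maxY, p.y);
      if ((nMaxX - nMinX) + (nMaxY - nMinY) > maxDispersion) break;
      minX = nMinX; maxX = nMaxX; minY = nMinY; maxY = nMaxY;
      end++;
    }
    const duration = samples[end].t - samples[start].t;
    if (duration >= minDuration && end > start) {
      // Report the fixation as the center of the dispersion window
      fixations.push({
        x: (minX + maxX) / 2,
        y: (minY + maxY) / 2,
        start: samples[start].t,
        duration
      });
      start = end + 1;
    } else {
      start++;
    }
  }
  return fixations;
}
```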
Documentation:
- Video tutorials
- Example experiments
- Integration guides
- Troubleshooting documentation
Related Projects
Within NCCR
- Eye Tracking for Great Apes - Specialized eye tracking using Peekr knowledge
- Library of Tasks - Potential integration target
- LEAPS System - Another web-based modernization project
External Resources
Citation & Attribution
When Using Peekr
Please acknowledge:
Original Development:
- Aryaman Taore (Stanford University)
- Dakin Lab & Stanford Brain Development & Education Lab
Modernization & Deployment:
- TTF-DDG (NCCR Evolving Language)
- Hugo Fara (Technical Lead)
Funding:
- Swiss National Science Foundation (NCCR Evolving Language)
License
MIT License - Free for academic and commercial use
Support & Contact
Getting Help
Technical Issues:
Research Applications:
- Consultation on integration
- Custom feature development
- Training and workshops
Contributing:
- Pull requests welcome
- See CONTRIBUTING.md
Acknowledgments
Contributors
Original Creator: Aryaman Taore
TTF-DDG Development: Hugo Fara (HugoFara)
Testing & Feedback: NCCR research community
Technology Partners
- MediaPipe: Face mesh detection
- ONNX Runtime: Model inference
- Vite: Build system
- Prolific: Original data collection platform
Funding
This work was supported by the Swiss National Science Foundation through the NCCR Evolving Language initiative.