How Gaze Works
This page explains the internals of Gaze's facial authentication pipeline. You don't need it to use Gaze, but it helps explain why Gaze behaves the way it does.
Security warning
Gaze is currently not suitable for security-critical authentication.
It can be spoofed with a simple photo of the enrolled user, including a photo displayed on a screen.
Liveness detection, IR camera support, and other anti-spoofing protections are planned for upcoming releases.
Privacy model
- Face processing runs locally on your machine.
- No cloud account is required.
- Face embeddings are stored on disk under your local Gaze data path.
Authentication pipeline
```text
Camera frame -> Face detection -> Face alignment -> Embedding -> Similarity match
```
High level:
- Camera frame is captured from your configured /dev/video* device.
- Detector finds a face and facial landmarks.
- Face is aligned into a standard input shape.
- Recognition model creates an embedding vector.
- Embedding is compared against your enrolled profiles. If the best similarity passes the threshold, authentication succeeds.
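The final matching step above can be sketched as follows. This is a minimal illustration, not Gaze's actual implementation: the similarity metric (cosine similarity here) and the threshold value `0.6` are assumptions for the example, and `authenticate` is a hypothetical helper.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def authenticate(probe, enrolled, threshold=0.6):
    # Compare the probe embedding against every enrolled profile and
    # return the best-matching name if it clears the threshold, else None.
    best_name, best_score = None, -1.0
    for name, embedding in enrolled.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

For example, a probe embedding identical to an enrolled one scores 1.0 and matches; an orthogonal embedding scores 0.0 and is rejected.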
Why multiple captures help
Each enrollment stores multiple samples across slightly different angles.
That makes authentication more robust for:
- Small head rotation
- Minor lighting changes
- Appearance shifts (for example, glasses)
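One way to see why multiple samples help: if a profile is scored by its best-matching sample, any single enrollment capture taken at a similar angle or lighting is enough to match. This is a sketch under that assumption; `profile_score` is a hypothetical helper and cosine similarity stands in for whatever metric Gaze actually uses.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def profile_score(probe, samples):
    # A profile matches as well as its best enrollment sample, so adding
    # samples at new angles can only raise (never lower) the score.
    return max(cosine(probe, s) for s in samples)
```

Adding a second sample closer to the probe's pose raises the profile's score compared with a single-sample enrollment.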
Where data is stored
Default locations:
- User embeddings: /var/lib/gaze/users
- Model files: /var/cache/gaze
- Config file: /etc/gaze/config.toml
Components
- gazed: daemon that performs detection and recognition
- gaze: CLI client
- gaze-gui: GTK app
- PAM integration and GNOME extension for login/lock screen flow
The CLI and GUI communicate with the daemon over D-Bus (com.gundulabs.Gaze).