SocialEyes enables eye-tracking in multi-person, real-world settings using mobile eye-tracking glasses.
For a detailed description, refer to the original paper:
Shreshth Saxena, Areez Visram, Neil Lobo, Zahid Mirza, Mehak Rafi Khan, Biranugan Pirabaharan, Alexander Nguyen, and Lauren K. Fink (2025). SocialEyes: Scaling mobile eye-tracking to multi-person social settings. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery. https://doi.org/10.1145/3706598.3713910
The paper presents a hardware-agnostic system architecture for multi-person mobile eye-tracking.
This repository contains a reference implementation of the SocialEyes architecture, used in the Utility Test presented in the paper. It serves as a proof-of-concept demonstrating the viability of our system architecture in real-world contexts.
This implementation relies on the following hardware for data collection:
- Eye-tracking glasses: Pupil Labs' Neon.
- Centralview camera: any standard USB or network webcam. Compatibility with ArduCam's Camera Evaluation Kit is also provided, but it currently lacks Image Signal Processing (ISP) support from the manufacturer.
You can run the code either in a software container (using Docker, Podman, etc.) or by building it from source. Please note that the source has only been tested on Ubuntu. Below are some prerequisites you may need when running the software:
- Install Docker (or your preferred containerization tool).
- Install the latest [Python](https://www.python.org/downloads/) distribution.
To ensure dependencies are properly managed, we recommend creating a separate Python environment using either conda or virtualenv.
With Conda (Recommended)

- Install Anaconda or Miniconda.
- Create a new environment:

  ```bash
  conda create -n SocialEyes python=3.10
  ```

- Activate the environment:

  ```bash
  conda activate SocialEyes
  ```

With Virtualenv

- Install virtualenv:

  ```bash
  pip install virtualenv
  ```

- Create a new environment:

  ```bash
  virtualenv SocialEyes
  ```

- Activate the environment:
  - On Linux/macOS:

    ```bash
    source SocialEyes/bin/activate
    ```

  - On Windows:

    ```bash
    .\SocialEyes\Scripts\activate
    ```
With Docker

- Clone the repository:

  ```bash
  git clone --recurse-submodules https://github.com/beatlab-mcmaster/SocialEyes.git SocialEyes
  ```

- Navigate to the code directory:

  ```bash
  cd SocialEyes
  ```

- Build the Docker image:

  ```bash
  docker build -t socialeyes-img .
  ```

- Mount your drive and run the container in privileged mode to allow access to local machine resources (USB/camera/network):

  ```bash
  docker run --rm -it --privileged -v $(pwd):/SocialEyes socialeyes-img
  ```

  Hint: On Windows, use `${PWD}` in PowerShell instead of `$(pwd)`.
From Source

- Clone the repository:

  ```bash
  git clone --recurse-submodules https://github.com/beatlab-mcmaster/SocialEyes.git SocialEyes
  ```

- Navigate to the code directory:

  ```bash
  cd SocialEyes
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the demo script:

  ```bash
  python demo.py
  ```
The quick start opens a demo script that links to the three main operator interfaces in SocialEyes recording mode, described below:
The eye-tracking data collection in our implementation is facilitated by a Textual TUI. The TUI displays a list of static IP addresses that can be assigned in the associated config file (src/glassesET/config.json). Note that the eye-tracking devices can also be detected using a DNS search; however, in our tests we could not reliably retrieve all devices on the network with the pupil-labs-realtime-api. We therefore use static IPs to connect to each smartphone accompanying the Neon eye-trackers.
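For illustration, here is a minimal sketch (not part of this repo) of reaching a Neon device over a static IP with the pupil-labs-realtime-api simple interface; the address below is a placeholder, not a value from our config:

```python
from pupil_labs.realtime_api.simple import Device

# Hypothetical static IP; in practice this would come from the list
# maintained in src/glassesET/config.json.
device = Device(address="192.168.1.21", port="8080")
print(f"Connected to {device.phone_name}")

device.recording_start()          # start a recording on the Companion device
# ... data collection ...
device.recording_stop_and_save()  # stop and save the recording
device.close()
```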
On the TUI, use the arrow keys to move up and down through the list of devices. Press Spacebar to select a device. Selected devices are indicated by a checkbox in the first column. In addition, the TUI has the following hotkeys enabled:
- Shift + Arrow Up/Down: expand the selection up/down
- Ctrl + A: select all devices
- Exclamation mark (!): invert the selection
The bottom bar presents a list of actions that can be performed on the selected devices.
The CentralCam interface operates the centralview camera recording. By default, the CentralCam module records video and metrics from the computer's default webcam. The recording device and related parameters can be set in the config file at src/centralCam/config.json.
Recording starts upon execution of the module and can be stopped at any point with a keyboard interrupt (Ctrl/Cmd + C).
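For reference, a rough sketch of this record-until-interrupt pattern, assuming OpenCV; the device index, resolution, and output path are illustrative, not values from the config file:

```python
import cv2

cap = cv2.VideoCapture(0)  # default webcam; the module reads the device from config
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("centralview.mp4", fourcc, 30.0, (1280, 720))

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.resize(frame, (1280, 720)))
except KeyboardInterrupt:
    pass  # Ctrl/Cmd + C ends the recording cleanly
finally:
    cap.release()
    writer.release()
```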
The offline interface facilitates analysis of the recorded data. It allows downloading data from Pupil Cloud, or using locally stored data, to perform the operations of the Homography, Analysis, and Visualisation modules.
The Offline interface can be operated using the arrow keys to navigate between options and the command line to input text or file paths.
- All module implementations are located in the 'src/' directory and can be executed independently using the respective main.py script.
- Upon execution, the GlassesRecord module and the CentralCam module will spin up the TUI and the CentralCam interface, respectively. Use the interface to perform subsequent actions for the respective module.
- The Homography, Visualisation, and Analysis modules provide helper functions that can be exported to analysis environments or interfaced with the Offline Interface in src/offlineInterface (see the sketch after this list).
- We have provided example config files for each module.
- Before running a module, make sure to update its parameters in the respective 'config.json' file in the module directory.
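Below is a minimal sketch of the gaze-projection idea behind the Homography module: matched keypoints between an egocentric frame and the centralview frame yield a homography that re-projects a gaze point into the common space. The repository bundles SuperGlue for feature matching; this sketch substitutes OpenCV's ORB and brute-force matcher so it stays self-contained, and all names are illustrative:

```python
import cv2
import numpy as np

def project_gaze(ego_frame, central_frame, gaze_xy):
    # Detect and match keypoints between the two views (ORB stands in for
    # the SuperGlue matcher used in the actual module).
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(ego_frame, None)
    kp2, des2 = orb.detectAndCompute(central_frame, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    # Estimate the homography from egocentric to centralview coordinates.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Re-project the gaze point into the common space.
    pt = np.float32([[gaze_xy]])  # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]
```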
All code used to generate results and figures for the SocialEyes paper can be found in 'src/analysis/Utility test analysis code/'.
Example visualisations of 30 eye-tracking datasets recorded and analysed with this implementation are presented below for reference. The gaze of 30 wearers, recorded using the GlassesRecord module, is mapped and projected to a common space: the centralview, recorded using the CentralCam module. The projected gaze from all wearers is presented as a heatmap in the central grid cell, with each wearer's egocentric gaze and worldview recording in the surrounding cells. The gaze projection and display were performed with the Analysis, Homography, and Visualisation modules.
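As a rough illustration of the heatmap step, assuming OpenCV and NumPy (the function and parameter names are hypothetical, not repo code):

```python
import cv2
import numpy as np

def gaze_heatmap(frame, gaze_points, sigma=25):
    """Overlay a heatmap of projected gaze points on a centralview frame."""
    h, w = frame.shape[:2]
    heat = np.zeros((h, w), dtype=np.float32)

    # Accumulate the projected gaze of all wearers into a 2D density map.
    for x, y in gaze_points:
        if 0 <= int(x) < w and 0 <= int(y) < h:
            heat[int(y), int(x)] += 1.0

    # Smooth, normalise to 8-bit, colour-map, and blend over the frame.
    heat = cv2.GaussianBlur(heat, (0, 0), sigma)
    heat = cv2.normalize(heat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    colour = cv2.applyColorMap(heat, cv2.COLORMAP_JET)
    return cv2.addWeighted(frame, 0.6, colour, 0.4, 0)
```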
If you use any ideas from the paper or code from this repo, please cite the original work:
@inproceedings{SocialEyes,
author = {Shreshth Saxena and Areez Visram and Neil Lobo and Zahid Mirza and Mehak Rafi Khan and Biranugan Pirabaharan and Alexander Nguyen and Lauren K. Fink},
title = {SocialEyes: Scaling Mobile Eye-tracking to Multi-person Social Settings},
year = {2025},
booktitle = {Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25)},
location = {Yokohama, Japan},
doi = {10.1145/3706598.3713910},
publisher = {Association for Computing Machinery}
}
This software is licensed under the Non-Commercial Research License (NCRL-1.0).
Commercial use is restricted. Please see the LICENSE file for details.
This repository includes the SuperGlue software as a submodule. SuperGlue is licensed under the "Academic or Non-Profit Organization Noncommercial Research Use Only" license. Please refer to the LICENSE file within the SuperGlue submodule for more details on its terms and restrictions.
If you find this project useful, you can support it by giving this repo a star ⭐.