Infant-friendly eye-tracking calibration

Hi everyone,

I’m a developmental scientist, and I have run a few infant eye-tracking studies on Tobii systems. To date I’ve used integrations like psychopy_tobii_infant to conduct my experiments, and I’ve also published beginner tips in my DevStart guide.

I’d love to move entirely to a native PsychoPy workflow—for cleaner code and true cross-device support. While PsychoPy’s Tobii interface already lets you use custom calibration stimuli, there are a few key features that make the psychopy_tobii_infant workflow so valuable for infant work—and that I haven’t found in core PsychoPy:

  1. Real-time subject positioning: A live view of the eyes/head relative to the tracker to ensure proper centering before calibration (this could be done by streaming tracker data after connecting, but that feels like a fragile workaround; see the sketch after this list).
  2. Interactive sequence control: The ability to jump to, reorder, or manually advance individual calibration points (e.g., via number keys).
  3. Calibration feedback & reruns: Show the calibration results for each point, with the option to repeat only the points that fall below threshold (or the entire calibration).
  4. Save/load calibration data: Save calibration results to disk for reuse across sessions (nice to have, but probably the least urgent).
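For concreteness, here is a minimal sketch of what I mean by point 1, built directly on the Tobii Pro SDK (tobii_research) rather than ioHub. The window settings, dot sizes, and the mirroring of the track-box coordinates are my own illustrative choices, not anything from PsychoPy or psychopy_tobii_infant:

```python
import tobii_research as tr
from psychopy import visual, event

latest = {}  # most recent gaze sample, filled by the SDK callback

def on_gaze(gaze_data):
    latest.update(gaze_data)

tracker = tr.find_all_eyetrackers()[0]
tracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, on_gaze, as_dictionary=True)

win = visual.Window(units='norm', fullscr=False)
dots = {side: visual.Circle(win, radius=0.05, fillColor='green')
        for side in ('left', 'right')}
msg = visual.TextStim(win, text='Center the eyes, then press SPACE',
                      pos=(0, -0.8), height=0.07)

while 'space' not in event.getKeys():
    for side, dot in dots.items():
        origin = latest.get(side + '_gaze_origin_in_trackbox_coordinate_system')
        if latest.get(side + '_gaze_origin_validity'):
            # Track-box coordinates are normalized 0-1; map them onto the
            # window and mirror x so the display behaves like a mirror.
            dot.pos = ((0.5 - origin[0]) * 2, (0.5 - origin[1]) * 2)
            dot.draw()
    msg.draw()
    win.flip()

tracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, on_gaze)
win.close()
```

The z component of the same gaze-origin sample could additionally drive a distance indicator, e.g. by scaling the dots.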

My questions are:

  • Have these features already made it into PsychoPy’s Tobii interface, and have I simply overlooked them?
  • If not, is there a roadmap to add them, or would it be more practical to implement our own lightweight calibration routines in PsychoPy, bypassing ioHub? (A rough sketch of what I mean follows below.)
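To make the “lightweight calibration routine” idea concrete, here is a hedged sketch of points 2–4 built on tobii_research’s ScreenBasedCalibration, again bypassing ioHub. The point layout, key bindings, plain red target, and file name are placeholders; an infant-friendly version would swap in an animated, sound-making attention-grabber:

```python
import tobii_research as tr
from psychopy import visual, event

tracker = tr.find_all_eyetrackers()[0]
win = visual.Window(units='norm', fullscr=False)
target = visual.Circle(win, radius=0.08, fillColor='red')  # placeholder stimulus

# Calibration points in Tobii's Active Display Coordinate System (0-1, y down).
points = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.9, 0.9)]

calib = tr.ScreenBasedCalibration(tracker)
calib.enter_calibration_mode()

pending = list(range(len(points)))  # indices still to be collected
done = set()
while pending:
    i = pending[0]
    x, y = points[i]
    target.pos = ((x - 0.5) * 2, (0.5 - y) * 2)  # ADCS -> PsychoPy 'norm'
    target.draw()
    win.flip()
    keys = event.waitKeys(keyList=['space', 'escape'] +
                          [str(n + 1) for n in range(len(points))])
    if 'escape' in keys:
        break
    if 'space' in keys:
        # Collect only once the experimenter judges the infant is fixating.
        if calib.collect_data(x, y) == tr.CALIBRATION_STATUS_SUCCESS:
            done.add(i)
            pending.pop(0)
    else:
        j = int(keys[0]) - 1  # number keys jump to (or rerun) a point
        if j in done:
            calib.discard_data(*points[j])  # drop its old samples first
            done.discard(j)
        if j in pending:
            pending.remove(j)
        pending.insert(0, j)

result = calib.compute_and_apply()
calib.leave_calibration_mode()

# Point 3: result.calibration_points holds the per-point samples, so a
# results screen could plot them and offer reruns of only the weak points.
print('calibration status:', result.status)

# Point 4: persist the applied calibration for reuse across sessions;
# tracker.apply_calibration_data() would load it back in later.
with open('infant_calibration.bin', 'wb') as f:
    f.write(tracker.retrieve_calibration_data())
```

The pending/done bookkeeping is just one way to let number keys reorder or rerun points without double-counting samples in the SDK.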

As a final thought, I considered writing a Python package by forking psychopy_tobii_infant to focus on these features—though that wouldn’t be device-agnostic or leverage PsychoPy’s ioHub—so I’m trying to understand what makes the most sense.

Thanks in advance for any pointers!
