Saving eye tracker calibration data

Hi,

I’m setting up a series of experiments that will be run in one session (apart from maybe some short breaks in between) at a single computer, with eye tracking. To avoid annoying participants it would be best if calibration was done only once, unless something happens (like the participant moving their chair) that means the calibration needs to be repeated.

I searched the archives and found a thread from 2016 where the OP described a very similar situation. @Michael wrote back then:

The best strategy here is to calibrate at the beginning of the session (using the normal idea of an experimental session, referring to the overall visit for the day), and then check the validity of the calibration between trials (i.e. with multiple trials making up your session).

Only actually re-calibrate if the calibration no longer appears to be valid. Calibrating too often can itself lead to problems: only re-do it if it is actually necessary.

Ideally record the calibration checks so you have an objective record of the calibration quality throughout the session (e.g. before each trial, show some stimuli at a succession of known locations and have the subject fixate each in turn). This means that each trial of recorded data has an associated record of the calibration quality.

So it really sounds like it should be possible to save the eye tracker calibration data. But how do you actually get a hold of it and store it? After running this:

return_value = my_eye_tracker.runSetupProcedure()

return_value simply has a value of True, meaning the calibration was successful. Which is good to know, of course, but it means that runSetupProcedure doesn’t return any actual results/data from the calibration. I’ve looked at the other methods that my_eye_tracker (an ‘ioHubDeviceView’, apparently) has, but as far as I can tell none of them are related to getting the calibration data. I’ve also read the PsychoPy book’s chapter on eye tracking, but I didn’t find a description of retrieving/storing calibration data there either. If anyone can help me find the correct way to get at the calibration data I would be very grateful :slight_smile:

What I was describing above was an eye-tracker agnostic way of validating the calibration, where you record gaze data just as you would in the experimental part of a trial, but instead of doing something interesting, you just display visual targets at known locations and ask the participant to fixate them, e.g. present a target at each of four corner points and a central location, in random order. You can then simply visually inspect whether the participant’s gaze location matches the stimulus. Or you could come up with a quantitative threshold (e.g. re-run a calibration if any of the target fixations deviates from the intended location by more than x degrees).

i.e. rather than running a dedicated calibration procedure on the eye tracker, just check the quality of the data periodically, and re-run the calibration if required. Embedding recordings of actual gaze data against known target locations throughout the experiment will really strengthen your claims as to the quality of your calibration, rather than relying on the (potentially black box) calibration procedure of the tracker itself.
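
For example, a rough sketch of such a check in the PsychoPy/ioHub context might look like the following. This assumes a window in ‘pix’ units and that getLastGazePosition() returns gaze in the same pixel coordinates as the window; the function name, target positions and error threshold are just placeholders to adapt to your own setup:

from psychopy import visual, core

def check_calibration(win, tracker, max_error_pix=60, fixation_time=1.0):
    # Known target locations: centre plus four corners (window units = 'pix').
    positions = [(0, 0), (-300, 300), (300, 300), (-300, -300), (300, -300)]
    target = visual.Circle(win, radius=10, fillColor='white', units='pix')
    tracker.setRecordingState(True)
    all_ok = True
    for pos in positions:
        target.pos = pos
        target.draw()
        win.flip()
        core.wait(0.5)  # allow time for the saccade to the new target
        samples = []
        timer = core.Clock()
        while timer.getTime() < fixation_time:
            gaze = tracker.getLastGazePosition()
            # getLastGazePosition() only returns an (x, y) pair while tracking is valid
            if isinstance(gaze, (tuple, list)):
                samples.append(gaze)
            core.wait(0.01)
        if samples:
            mx = sum(s[0] for s in samples) / len(samples)
            my = sum(s[1] for s in samples) / len(samples)
            error = ((mx - pos[0]) ** 2 + (my - pos[1]) ** 2) ** 0.5
            all_ok = all_ok and error <= max_error_pix
        else:
            all_ok = False  # no valid samples counts as a failed check
    tracker.setRecordingState(False)
    win.flip()
    return all_ok

If the check fails you would re-run the tracker’s own calibration, and you could also save the per-target errors to your data file so that each trial has that objective record of calibration quality.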

Thanks, that was a bit different from what I thought you were talking about. It does seem like a good idea to check the calibration. I’ll see if I can come up with an appropriate procedure; it’s probably best to automate it as you suggest, if I’m able to.

I came to realize, however, that both n_m, who started the thread I linked, and I had misunderstood how PsychoPy/ioHub and the eye tracker interact, or really how eye trackers work in general. The misunderstanding was probably so basic that nobody thought to point it out.

For anyone who finds this thread and wonders about the same thing I did:

Once you have run a calibration, the eye tracker will remember it, at least if it’s a Tobii eye tracker. This isn’t related to PsychoPy or even to having Python running. So if you do a calibration, close the experiment/PsychoPy, and then start a new experiment, the eye tracker still ‘remembers’ the calibration.

What this means is that for any later experiments you can simply skip the calibration step, provided it’s the same participant (and a quick ‘calibration check’ at the beginning, like Michael described, shows the calibration is still fine).
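
Just as a minimal sketch of what I mean (the expInfo field name here is something you would add to your own info dialog, not anything built in):

# Hypothetical flag from the experiment info dialog: only run the tracker's
# setup/calibration procedure when you actually want to (re-)calibrate.
if expInfo.get('run calibration', 'yes') == 'yes':
    calibration_ok = my_eye_tracker.runSetupProcedure()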

I would guess that the calibration is lost if you turn the eye tracker off, but I haven’t checked this myself. If you really want to make sure that the calibration is saved, you can use the ‘tobii_research’ package directly, which is what ioHub uses under the hood for Tobii trackers. Since PsychoPy Standalone includes this package, directly after running a calibration with ioHub and closing the ioHub connection you can do something like this:

import tobii_research as tr

filename = 'calibration_save.bin'

# assumes you only have one eye tracker connected
eyetracker = tr.find_all_eyetrackers()[0]

# retrieve_calibration_data() returns the calibration currently stored on the
# tracker as bytes, or None if the tracker holds no calibration
calibration_data = eyetracker.retrieve_calibration_data()
if calibration_data is not None:
    with open(filename, "wb") as f:
        f.write(calibration_data)

You can find more details about this in Tobii’s documentation for the package, specifically this example and their page on calibration.
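
Going the other way, re-applying a previously saved calibration should look something like this, based on the same Tobii example (apply_calibration_data() expects the raw bytes you saved earlier):

import tobii_research as tr

filename = 'calibration_save.bin'
# assumes you only have one eye tracker connected
eyetracker = tr.find_all_eyetrackers()[0]

with open(filename, "rb") as f:
    calibration_data = f.read()
# only apply if the file actually contained calibration data
if len(calibration_data) > 0:
    eyetracker.apply_calibration_data(calibration_data)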

This is a great workaround for the current ioHub Tobii interface if you need the Tobii calibration results.

We should consider adding a method to the ioHub eye tracker class that returns information about the last tracker calibration (if available). The return value would be device-dependent.

We should also consider adding an ioHub eye tracker validation process that estimates the calibration accuracy in a device-independent way and can be used by any supported ioHub eye tracker, as @Michael suggests.