Hello, I have collected some eye-tracking data using the Gazepoint GP3 eye tracker, integrated into PsychoPy through the Builder.
I am trying to run an ROI analysis on the BinocularEyeSampleEvent table in the hdf5 file. However, I do not understand what units left_gaze_x, left_gaze_y, right_gaze_x and right_gaze_y are in. I assumed they were in pixels, with the center of the screen as (0,0), meaning the bottom-left corner of a 1920x1080 display would be (-960,-540), for example. However, a lot of the data falls outside these boundaries.
Could anyone explain what units are used for this data, and in relation to this, the possible minimum and maximum x and y values?
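In case it helps, this is roughly how I'm pulling the samples out with h5py; the group path is the one my iohub hdf5 file uses and the filename is just a placeholder, so both may differ in your setup:

```python
import h5py
import numpy as np

# "data.hdf5" is a placeholder filename; the group path below is what my
# iohub-saved file uses -- if yours differs, inspect it with f.visit(print).
with h5py.File("data.hdf5", "r") as f:
    samples = f["data_collection/events/eyetracker/BinocularEyeSampleEvent"][:]

# The dataset reads in as a structured array, so columns can be pulled by name.
lx, ly = samples["left_gaze_x"], samples["left_gaze_y"]
rx, ry = samples["right_gaze_x"], samples["right_gaze_y"]

# nanmin/nanmax skip any NaN samples (e.g., from blinks or track loss)
print("left  x:", np.nanmin(lx), "to", np.nanmax(lx))
print("left  y:", np.nanmin(ly), "to", np.nanmax(ly))
print("right x:", np.nanmin(rx), "to", np.nanmax(rx))
print("right y:", np.nanmin(ry), "to", np.nanmax(ry))
```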
Thanks!
I have a GP3, and when I use PsychoPy's getLastGazePos() function the values I get are in pixels with (0,0) at the center, so I expect the hdf5 samples use the same convention.
I do also get the occasional value outside the screen area, but it seems to come either from the calibration getting less precise at more extreme deflections, or from the participant legitimately looking off-screen.
The eye tracker doesn't actually depend on the screen in any way; it's just extrapolating from the calibration sequence to estimate the fixation location, and that extrapolation can extend beyond the screen boundaries.
In my case I don't usually put stimuli that close to the edge of the screen, so I just discard those samples as invalid, but if you are getting them systematically you might want to look into applying some kind of correction to values past a certain eccentricity.
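For what it's worth, the discard step can be as simple as masking anything outside the half-width/half-height bounds. A minimal sketch, assuming centered pixel coordinates and a 1920x1080 display (the example arrays are made up):

```python
import numpy as np

HALF_W, HALF_H = 960, 540  # half of an assumed 1920x1080 display

def on_screen(gx, gy):
    """Boolean mask for samples inside the screen bounds.

    gx, gy: gaze coordinates in pixels with (0, 0) at the screen center.
    NaN samples compare False, so they get dropped as well.
    """
    return (np.abs(gx) <= HALF_W) & (np.abs(gy) <= HALF_H)

# e.g., with gaze arrays like those read from the hdf5 file:
lx = np.array([12.0, -400.0, 1150.0, np.nan])
ly = np.array([-8.0, 530.0, 90.0, np.nan])
mask = on_screen(lx, ly)
lx_valid, ly_valid = lx[mask], ly[mask]  # drops the off-screen and NaN rows
```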
Thanks so much for your response; that makes complete sense. I have a feeling it is also due to higher calibration error for some of the participants. Speaking of which, how do you determine a cut-off for the average calibration error? We are using 40 pixels (as I believe that is the size of the calibration dots), but I am not sure if there is a set way to do this. Thanks again!
I don’t think there’s a set approach.
My only thought is that average calibration error doesn't account for the fact that you can have region-specific calibration errors, e.g., highly accurate on the left side of the screen and less accurate on the right. My own approach is a custom validation screen with a dot of 25px radius (so 50px diameter) centered on where the eye tracker thinks the fixation is, and a set of 10px-radius (20px-diameter) targets in a 3x3 grid. If the gaze circle overlaps the target circle at each validation point, I call it good.
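The pass/fail test at each point reduces to a circle-overlap check: the two circles overlap exactly when the distance between their centers is at most the sum of the radii (25 + 10 = 35px here). A minimal sketch; the grid spacing is invented for illustration:

```python
import math

GAZE_R, TARGET_R = 25, 10  # pixel radii from the validation setup above

def circles_overlap(gaze_pos, target_pos, r_gaze=GAZE_R, r_target=TARGET_R):
    """True iff the circles overlap, i.e. the distance between their
    centers is no more than the sum of the radii (35 px here)."""
    return math.dist(gaze_pos, target_pos) <= r_gaze + r_target

# 3x3 grid of target centers; the 300px spacing is a made-up example
targets = [(x, y) for y in (300, 0, -300) for x in (-300, 0, 300)]

# validation passes if the gaze circle overlaps each target while fixated
print(circles_overlap((310.0, -285.0), (300, -300)))  # True: ~18 px apart
```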