I am designing a simple eye-tracking experiment that uses the percentage of time spent looking at the screen while watching a video as a dependent variable, and provides feedback to the participant when they’ve been looking at the video for x% of an interval. I’ve been looking at others’ implementations of WebGazer for ideas on how to code the project.
I saw Thomas Pronk’s demo: Thomas Pronk / demo_eye_tracking2 · GitLab, but the gaze indicator doesn’t seem to detect whether the individual is looking offscreen. He edited the webgazer library to include feedback on whether the eyes are inside the validation box.
I’m more interested in whether the individual is looking at the screen or not. I assume I could just reduce the number of calibration trials and focus on the screen boundaries, correct? Some experiments, such as the one above, seem to re-center the gaze indicator at the center of the screen even when the participant isn’t looking at the screen. Does anyone have ideas on how I might simplify the code to run a simple Boolean check every 100 ms (looking at the screen or not) and calculate the percentage of True samples in real time, or could you point me toward some good references?
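To make the question concrete, here is a minimal sketch of the kind of logic I have in mind (the names `isOnScreen` and `GazeTally` are my own, and the commented-out wiring to `webgazer.getCurrentPrediction()` is illustrative, not tested):

```javascript
// Treat a null prediction (e.g. face lost) as off-screen; otherwise
// check whether the predicted gaze point falls inside the viewport.
function isOnScreen(pred, width, height) {
  return pred !== null &&
         pred.x >= 0 && pred.x <= width &&
         pred.y >= 0 && pred.y <= height;
}

// Running tally of on-screen vs. total samples, so the percentage
// can be read out at any moment during the video.
class GazeTally {
  constructor() {
    this.total = 0;
    this.onScreen = 0;
  }
  addSample(onScreen) {
    this.total += 1;
    if (onScreen) this.onScreen += 1;
  }
  percentOnScreen() {
    return this.total === 0 ? 0 : (100 * this.onScreen) / this.total;
  }
}

// In the browser this could be wired up roughly like (untested sketch;
// getCurrentPrediction may be async depending on the WebGazer version):
//
// const tally = new GazeTally();
// setInterval(async () => {
//   const pred = await webgazer.getCurrentPrediction();
//   tally.addSample(isOnScreen(pred, window.innerWidth, window.innerHeight));
//   if (tally.percentOnScreen() >= 80) { /* trigger feedback */ }
// }, 100);
```

The idea is that each 100 ms tick contributes one Boolean sample, so the real-time percentage is just a counter ratio rather than anything that needs the full gaze trace.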
Since I will be measuring the same participant on multiple occasions, I’d probably just need participants to use a unique identifier each time so I can track them across sessions. However, I will be using different videos each time. Is it possible to upload multiple versions of the experiment on Pavlovia?
Thanks in advance for any input you all may have.