I am designing a simple eye tracking experiment that will use the % of time spent looking at the screen while watching a video as a dependent variable, and will provide feedback to the participant when they have been looking at the video for x% of an interval. I’ve been looking at others’ implementations of WebGazer for ideas on how to code the project.
I saw Thomas Pronk’s demo: Thomas Pronk / demo_eye_tracking2 · GitLab, but the gaze indicator doesn’t seem to detect whether the individual is looking offscreen. He edited the webgazer library to give feedback on whether the eyes are inside the validation box.
I’m more interested in whether the individual is looking at the screen or not. I assume I could just reduce the number of calibration trials and focus on the boundaries, correct? Some experiments, such as the one above, seem to re-center the gaze indicator even when the participant isn’t looking at the screen. Does anyone have ideas on how I might simplify the code for running a simple Boolean check every 100 ms (looking at the screen or not) and then calculating the % of True values in real time, or could you point me toward some good references?
Since I will be measuring the same participant on multiple occasions, I’d probably just need to have participants use a unique identifier each time to track them. However, I will be using different videos each time. Is it possible to upload multiple versions on Pavlovia?
Nice to read that you’re enjoying my prototype! Here are some answers/reflections on the issues you raised:
About Point 1:
a. Shorter calibration by decreasing the number of trials with stimuli in the center. Sounds plausible, but I’m not experienced enough with eye tracking to make claims about this approach in advance. Just try it out? Would love to know if it works!
b. Gaze indicator is re-centered when the participant is not looking at the screen. Could be; my guess is that there are two cases. If the participant looks away from the screen a little bit, you get x and y values that are outside of the bounds of the screen. If they look away from the screen a lot, then webgazer returns some non-numeric value that is then converted to 0 in the PsychoJS algorithm. I’ll add two pointers in the code snippet below to get you started.
c. How to calculate the % of time looked at the screen? If you log raw data every 100 ms, that could be a bit much to process, but if you’re only interested in the % you could get there with two counters and logging those. I’ll add another pointer below, plus a rough sketch after the first snippet.
This code snippet comes from the “start_webgazer” code component in the “webcam_trial” routine.
// Start eye tracking
window.webgazer
    // Called on each eye tracking update
    .setGazeListener(function(data, clock) {
        // Andrew 1b & 1c. Play around a bit with what happens with data
        // For the case "participant is looking away a bit -> gaze is outside of the screen", this conversion function could help: util.to_norm. https://psychopy.github.io/psychojs/module-util.html
        // For the case "participant is looking away a lot -> no data", check whether data === null
        // Once you've got those cases covered, add two counters (looking on-screen/off-screen) and increment the one that matches the current situation
        if (data !== null) {
            // Remove first element from gazes array, add current gaze at the end
            window.xGazes.shift();
            window.xGazes.push(data.x);
            window.yGazes.shift();
            window.yGazes.push(data.y);
        }
    })
    .begin();
    //.showPredictionPoints(true);
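To make pointers 1b and 1c a bit more concrete, here is a rough, untested sketch of what you could add inside that gaze listener. The counter names and the plain pixel bounds check against window.innerWidth / window.innerHeight are my own; converting to norm units via util.to_norm would be the more PsychoJS-style way to do it.
// Somewhere before .begin(), e.g. in a "Begin Experiment" tab: initialise two counters
window.onScreenCount = 0;
window.offScreenCount = 0;

// Inside the setGazeListener callback above:
if (data === null) {
    // Looking away a lot: webgazer returns no prediction at all
    window.offScreenCount++;
} else if (
    data.x >= 0 && data.x <= window.innerWidth &&
    data.y >= 0 && data.y <= window.innerHeight
) {
    // Prediction falls inside the window bounds
    window.onScreenCount++;
} else {
    // Looking away a bit: prediction falls outside the window bounds
    window.offScreenCount++;
}
// Running % of samples that were on-screen
let total = window.onScreenCount + window.offScreenCount;
console.log(100 * window.onScreenCount / total);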
This code snippet is copy-pasted from the “tracking_code” code component in the “tracking_trial” routine.
// Update tracking square to the average of last n gazes
let x = util.sum(window.xGazes) / window.xGazes.length;
let y = util.sum(window.yGazes) / window.yGazes.length;
// Set tracking square to x and y, transformed to height units
tracking_square.setPos(
    util.to_height(
        [
            x - psychoJS.window.size[0] / 2,
            -1 * (y - psychoJS.window.size[1] / 2)
        ],
        'pix',
        psychoJS.window
    )
);
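To give an idea of what that transform does (my own numbers, and assuming height units are pixels divided by the window height): with a 1600 x 900 px window, a gaze prediction at the centre, (800, 450), ends up at (0, 0) in height units, while a prediction at the top-right corner, (1600, 0), becomes pixel offsets of (800, 450) and roughly (0.89, 0.5) in height units. Note the y-axis flip: webgazer counts y downward from the top of the page, while PsychoJS counts y upward from the centre of the window.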
Multiple versions are no problem, though since you’re working with video, please make sure the resources associated with each version of your experiment aren’t duplicated; that makes it easier on the Pavlovia server.
Thank you greatly!! I will mess around with it for the next several days and see what I get. I greatly appreciate your input and this gives me a wonderful starting place.
For the life of me, I cannot figure out why I keep getting ‘undefined’. I’ve been messing with the code for a couple of weeks to understand it better. I declared the prediction variable at the beginning of the tracking_trial routine.
I am trying to call the prediction variable as the authors had specified, so I put this at the bottom of the Each Frame tab, but I keep getting ‘undefined’.
let prediction = window.webgazer.getCurrentPrediction();
console.log(String(prediction.x + "," + prediction.y));
I’m sure it’s an issue with declaring it properly, but I’m not seeing it. Originally I tried using the getPositionFromObject function from the PsychoJS library, but it was making things overly complicated. Do I need to pass an argument directly? Any ideas?
You could ask the webgazer developers why this doesn’t work; they are quite helpful, I’ve noticed. And to be honest, I wouldn’t be able to tell you without spending a month examining their code; it’s quite advanced stuff!
Now, in my version of the demo, I don’t get the predictions via… window.webgazer.getCurrentPrediction();
But by registering an event listener that receives the most recent predictions…
window.webgazer
    // Called on each eye tracking update
    .setGazeListener(function(data, clock) {
        if (data !== null) {
            // @ayjayar; here are the predicted x and y
            window.xGazes.shift();
            window.xGazes.push(data.x);
            window.yGazes.shift();
            window.yGazes.push(data.y);
        }
    })
    .begin();
    //.showPredictionPoints(true);
Oh, no doubt it’s advanced. I was asking mostly because you had created a few extra objects and were using it specifically in PsychoPy, and I wasn’t sure whether you had come across any nuances in the code, or bugs, that might have explained it.
That’s where I had originally been pulling the data, but I was running into the issue of what to call. I was using toString, which was throwing errors. I may have to try sending the values to the console log at the end of the array shift. I’ll mess around with it some more.
This is my first attempt at programming in an object-oriented language in almost a decade, so it’s definitely a learning experience, hah!
window.webgazer
    // Called on each eye tracking update
    .setGazeListener(function(data, clock) {
        if (data !== null) {
            // Log the predicted gaze coordinates and count the sample as on-task
            console.log(String(data.x + "," + data.y));
            // add parameters later
            ++ontask;
        } else {
            console.log("Out of bounds");
            ++offtask;
        }
    })
I will obviously be fine-tuning it, but it seems to be working perfectly. Unfortunately, the tracking square is no longer updating. That isn’t the worst thing in the world, since I’ll want it turned off eventually, but I’ll see if I can figure that out later.
Happy to hear you’re making progress! About updating the tracking square: that is done via window.xGazes and window.yGazes, so if you re-add those updates inside your event-listener callback, it should update again; something like the sketch below.
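For example (untested, and reusing the variable names from your snippet), the combined callback could look like this:
window.webgazer
    // Called on each eye tracking update
    .setGazeListener(function(data, clock) {
        if (data !== null) {
            // Log the predicted gaze coordinates and count the sample as on-task
            console.log(String(data.x + "," + data.y));
            ++ontask;
            // Re-added from the demo: keep the rolling gaze buffers filled
            // so the tracking square code can keep reading them
            window.xGazes.shift();
            window.xGazes.push(data.x);
            window.yGazes.shift();
            window.yGazes.push(data.y);
        } else {
            console.log("Out of bounds");
            ++offtask;
        }
    });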
Per another user’s request, here is a current working, condensed version of this experiment. I took out the visual feedback component, but it did work; this version just checks whether you are looking at the video. I’m looking into ways to use the GazeDot from WebGazer, but it needs to be an image in order to use contains(), which is what @thomas_pronk did in demo_eye_tracker.
Currently, there is some delay with tracking square updates, but based on my pilot study, there was no significant difference between expected % time on-task and measured % time on-task. Can provide data if needed.
That’s because, in my example, the webgazer library is downloaded and initialized in a “Begin Experiment” code component. However, I see that newer releases allow importing it as an ES6 module, which would obviate the need for this window trick.
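As a sketch of that ES6 route (I haven’t tried it myself; the 'webgazer' package name and default export are assumptions on my part), it would look something like:
// Import webgazer as an ES6 module instead of attaching it to window
import webgazer from 'webgazer';

webgazer
    .setGazeListener(function(data, clock) {
        // same kind of callback as in the snippets above
    })
    .begin();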