I was wondering if there was a specific reason that you chose version 2.0.1 of webgazer instead of the latest version, 2.1.0? Coincidentally, I saw on the jsPsych website that they also don’t support version 2.1.0 of webgazer. It looks like version 2.1.0 includes some TensorFlow-based models, which are supposed to increase accuracy, I think. But does this make it difficult to work with PsychoPy? I’m happy to try to incorporate version 2.1.0 into PsychoPy myself, but I wonder if you foresee any roadblocks.
Hi! No particular reason; it just wasn’t out yet during my time working on PsychoPy. Since only the minor version number went up, I assume the API is the same, so it should work out of the box. Maybe a little bit of tweaking for that feature I added in my webgazer fork, though I think the team was planning to add that functionality anyway. Hope it goes smoothly!
To anyone following this thread: the version of @thomas_pronk’s demo that is maintained by the team has now been updated to work in PsychoPy version 2021.2.3. There were some changes between versions 2021.1.4 and 2021.2 which may mean the original version will not work in later releases (downloadResources changed to prepareResources).
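If you maintain your own fork of the demo and want it to run on both sides of that rename, one option is a small version-tolerant helper. This is only a sketch: getResourceScheduler is a hypothetical name, and it assumes the old downloadResources and the new prepareResources methods both live on the same PsychoJS ServerManager-style object.

```javascript
// Hypothetical helper: return whichever resource-scheduling method this
// PsychoJS build exposes (downloadResources before 2021.2,
// prepareResources from 2021.2 onward), bound to the manager so the
// returned function can be called directly.
function getResourceScheduler(serverManager) {
  if (typeof serverManager.prepareResources === 'function') {
    return serverManager.prepareResources.bind(serverManager);
  }
  if (typeof serverManager.downloadResources === 'function') {
    return serverManager.downloadResources.bind(serverManager);
  }
  throw new Error('ServerManager exposes no resource-scheduling method');
}
```

You would then call getResourceScheduler(psychoJS.serverManager)(resources) in place of the hard-coded method name, so the same code runs on older and newer PsychoJS versions.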
In your demo you remove the mouse listeners for the tracking trial with: window.webgazer.removeMouseEventListeners();
However, if an experiment is click based (fixation crosses, items, etc.), would keeping the mouse listeners help with accuracy over time? In other words, do you think there’s a problem with keeping the listeners “on” throughout the experiment?
Hey! I remove them so that successive clicks aren’t used to calibrate the eye tracker any further. Independently of that, you can keep using the mouse in PsychoJS to register clicks.
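If you’d rather have clicks calibrate only during a dedicated calibration phase, one approach is to toggle the listeners around it. A minimal sketch, assuming your webgazer build exposes both addMouseEventListeners and removeMouseEventListeners (the demo already uses the latter); setClickCalibration is a hypothetical name, not part of the demo:

```javascript
// Sketch: enable or disable WebGazer's mouse-based calibration.
// Passing webgazer in as an argument keeps this testable outside a
// browser; in an experiment you would pass window.webgazer.
function setClickCalibration(webgazer, enabled) {
  if (enabled) {
    webgazer.addMouseEventListeners();    // clicks/moves feed the gaze model
  } else {
    webgazer.removeMouseEventListeners(); // clicks no longer recalibrate it
  }
}
```

With the listeners removed, PsychoJS mouse components still receive clicks as usual; only WebGazer stops treating them as calibration samples.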
I noticed your demo uses averagingWindow to smooth out the motion of the tracking square by taking an average of the last n gazes. If I’m not interested in averaging, could I just set averagingWindow to 1? Or should I directly save the raw gazes using window.xGazes and window.yGazes?
In brief, I’m looking for the best way to save gaze predictions during a task and what specific variable to use for that.
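To make the trade-off concrete, here is a sketch of the kind of smoothing the demo’s averagingWindow performs: a mean over the last n samples. With a window of 1 it degenerates to the most recent raw sample, i.e. the last entries of window.xGazes and window.yGazes, so either route should give you raw data; smoothedGaze is a hypothetical name, not a function from the demo.

```javascript
// Average the last `windowSize` gaze samples; windowSize = 1 returns the
// raw (most recent) gaze coordinates unchanged.
function smoothedGaze(xGazes, yGazes, windowSize) {
  const n = Math.min(windowSize, xGazes.length);
  const mean = (samples) =>
    samples.slice(-n).reduce((sum, v) => sum + v, 0) / n;
  return { x: mean(xGazes), y: mean(yGazes) };
}
```

For saving predictions during a task, webgazer’s setGazeListener callback is another option: it delivers each raw prediction together with an elapsed-time value as it arrives, rather than you reading the buffers afterwards.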
There were some changes between version 2021.1.4 and 2021.2 which may mean the original version will not work in later releases (downloadResources changed to prepareResources)
I have found that downloadResources does not work anymore.
However, prepareResources does not work either.
How are you using prepareResources? And what error do you get when trying to use it? Could you share the code?
If you are looking to use it for eye tracking, I’d suggest looking at the code in this post. If you are looking to use it for other reasons, here’s a demo I made recently that I hope might help.
Hi, I’m starting a new experiment and was wondering if there have been any new developments with webgazer or the demos for version 2022.2.2. I was able to use the demos successfully last year, but just want to make sure I’m not missing anything since then when starting up again with the newest PsychoPy version. Thanks!
Hi, I am a PsychoPy newbie, so I’m sorry if this is a dumb question: is there any way to run eye-direction detection and SSVEP simultaneously? I need to know whether a user’s eye direction confirms the approximate location of an SSVEP target (among other SSVEP objects) on the monitor.
Thanks! I noticed in the demo (version 3) that if your head leaves the validation box during calibration, the webcam thumbnail reappears, and putting your head back in the validation box does not make the thumbnail disappear (which blocks potential calibration points). I know @thomas_pronk dealt with a similar issue in version 2, but I was wondering if you knew how to prevent this from happening during calibration…
I see version 3 has some additional lines in the calibration code (22-28) that version 2 doesn’t have, but I haven’t been able to resolve this myself. Any thoughts?