This thread about eye tracking in PsychoJS/Pavlovia has received a lot of posts. Here is a little recap:
I have examined some libraries for eye-tracking via webcams and selected webgazer as presently the most suitable one. Here is a paper examining how well a relatively old version of webgazer works in three cognitive tasks.
The experiment demo_eyetracking2 illustrates how to use webgazer with PsychoJS. This experiment includes a calibration procedure and a gaze-tracking procedure.
Demo_eyetracking2 can be freely cloned and modified. Researchers have already adapted the experiment for their own needs, with discussion about one such adaptation in another thread on this forum.
An OST colleague has made a 5-step tutorial on how to customize demo_eyetracking2, which can be found in this tweet.
Hi, you can also try to gather the data using RealEye.io - it's very easy - no coding required.
Then you can export what you need as a CSV file and process it in any way you like.
Thank you for all your work on this. Do you know of a demo or experiment that has implemented this and logs (x,y) coordinates to an output file? Have been trying to do this, but have not been able to. Thanks.
Do you think this could be used to track eye movements while reading a text? I don't need it to be particularly precise; I just want to be able to tell whether their eyes are moving across the page or if they have stopped reading.
I think it would be tricky to establish whether the eyes are moving or not, but establishing whether participants are looking at the screen or not should be doable. Actually, another researcher has been developing something like that, which we discussed in another thread.
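For what it's worth, a crude way to approximate "moving vs. stopped" would be to look at the spread of webgazer's predictions over a sliding window. A rough sketch; the window length and threshold are arbitrary placeholders that would need tuning and validation:

```javascript
// Rough heuristic: flag gaze as "stationary" when recent predictions stay
// within a small bounding box. WINDOW_MS and MAX_SPREAD_PX are placeholders.
const WINDOW_MS = 2000;
const MAX_SPREAD_PX = 80;
let samples = [];

webgazer.setGazeListener((data, t) => {
  if (data === null) { return; }
  samples.push({t: t, x: data.x, y: data.y});
  // Keep only samples from the last WINDOW_MS milliseconds
  samples = samples.filter((s) => t - s.t < WINDOW_MS);
  const xs = samples.map((s) => s.x);
  const ys = samples.map((s) => s.y);
  const spread = Math.max(
    Math.max(...xs) - Math.min(...xs),
    Math.max(...ys) - Math.min(...ys)
  );
  const stationary = spread < MAX_SPREAD_PX;
  // ... react to `stationary`, e.g. log it with the trial data
});
```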
Webcam-based implementation would be very difficult. Look into the differences between shape-based eye tracking such as this (based on Papoutsaki's dissertation, 2016) and corneal-reflection eye tracking (which is what you see in more expensive laboratory or over-the-counter eye-tracking equipment). Corneal-reflection eye tracking is more accurate. The type of eye tracking being implemented here is shape-based, meaning it identifies a contrast between the cornea and other regions of the face. It has a higher degree of error because it is more susceptible to environmental factors such as lighting and head movements.
WebGazer, which is used here, relies on "implicit" click-based calibration that keeps calibrating as you click, so its predictions are based on the last ten or so clicks. If you have the individual engage in a passive task, the predictions will lose accuracy over time (I don't have an exact number). You could include mandatory clicks at the end of each line or something, but then you'd greatly impact reading fluency, both in terms of saccades and fixations.
Depending on your research question, this may not be an issue. But for any reading passage longer than a line, the accuracy will be questionable.
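If you did go the mandatory-click route, the clicks could double as calibration samples, since webgazer lets you feed a known screen position back into its model. A sketch, assuming a hypothetical end-of-line target element; check that recordScreenPosition exists in the webgazer version you use:

```javascript
// Sketch: use an end-of-line click target as an extra calibration sample.
// 'line-end-target' is a hypothetical element id, not part of any demo.
const target = document.getElementById('line-end-target');
target.addEventListener('click', (e) => {
  // Tell webgazer that the participant was looking at the clicked location
  webgazer.recordScreenPosition(e.clientX, e.clientY, 'click');
  // ... then advance to the next line of text
});
```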
@ayjayar, do you know how WebGazer compares with GazeRecorder? Is that also shape-based? From the demo, GazeRecorder doesn't seem to use click-based calibration, but I don't know if it will also lose accuracy over time.
With the pandemic, I am trying to find something that can be used remotely. I would not need to analyze the data at the level of saccades. Basically, what I am trying to determine is whether the participant's eyes are moving across the page vs. whether they are staring into space/their eyes have stopped moving. Do you think WebGazer, or any other webcam-based tracker, is able to achieve this?
I am not trying to hijack the thread, so please feel free to message me directly if you'd prefer. I'd greatly appreciate your insight.
@dpg45 I can tell you I've messed with GazeRecorder early on, and it has many advantages and disadvantages: mainly, it requires a software download (unlike webgazer) and is all locally managed, whereas WebGazer can run on any webpage; it also automatically records video of the interaction.
I'd recommend starting a new thread, as this thread is specifically meant to stay on the topic of using eye tracking in Pavlovia, without referencing other specific software. I'll message you more if you wish.
Thank you for all your work, @thomas_pronk. What would be the best way to "turn off" WebGazer in your demo once you no longer need it? E.g., in subsequent trials.
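From the webgazer docs, pause() and end() look like the relevant calls; a minimal sketch of what I had in mind, in case it helps others:

```javascript
// Temporarily stop gaze predictions between trials, keeping the calibration:
webgazer.pause();
// ... later, if eye tracking is needed again:
webgazer.resume();

// Or shut webgazer down entirely once it is no longer needed:
webgazer.end();
```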
Does webgazer report head position? I've been looking through the webgazer docs and can't tell. Seems like this information would be necessary, but I don't see a way to access it.
Thank you in advance.
So… it's there indeed, but not in a very easy-to-use format. What you could do is post an issue in the webgazer repo to ask for what you need; the team is very approachable. To be sure we give them some good specs to work with, it can be useful to think it through a bit. What kind of position data would you like exactly?
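For instance, the raw facial landmarks can be reached through the tracker object, though not in a documented format. A sketch, with the caveat that getPositions() is an internal detail of the tracker module and may differ between webgazer versions:

```javascript
// Sketch: peek at the raw tracker landmarks, from which a head position
// could be approximated. The return format depends on the tracker module
// webgazer is configured with; verify against your webgazer version.
const tracker = webgazer.getTracker();
const positions = tracker.getPositions(); // array of facial landmark points
console.log(positions);
```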
I am a newbie to PsychoJS. I am a student trying to build a small experiment where I show an image and capture frames using the webcam. I have code in JavaScript that already does this, but I want to integrate it into PsychoJS. I have been trying to do this using the Builder tool, but I couldn't manage it. Could you please direct me to a relevant document or tutorial? Any help would be highly appreciated.
@thomas_pronk It is for eye-tracking, but the aim of using the webcam is to capture images and send them to a server for processing. I have written simple server-side code in Python that processes these frames and sends a response to the front end.
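My current JavaScript boils down to something like the sketch below; the /process_frame endpoint is just a stand-in for my Python server's route:

```javascript
// Sketch: grab webcam frames and POST them to a server for processing.
// The endpoint URL and the capture interval are placeholders.
const video = document.createElement('video');
const canvas = document.createElement('canvas');

navigator.mediaDevices.getUserMedia({video: true}).then((stream) => {
  video.srcObject = stream;
  video.play();
});

function sendFrame() {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  canvas.toBlob((blob) => {
    fetch('/process_frame', {method: 'POST', body: blob}) // placeholder endpoint
      .then((response) => response.json())
      .then((result) => console.log(result));
  }, 'image/jpeg');
}

// e.g. capture one frame per second while the image is on screen
setInterval(sendFrame, 1000);
```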
I want to create an eye-tracking experiment, and when I tried to run your demo_eye_tracking2, I got the following error message:
File "\github\demo_eye_tracking2\demo_eye_tracking2_lastrun.py", line 31, in <module>
ale
NameError: name 'ale' is not defined
##### Experiment ended. #####
I am new to Python, but I've already installed the alepython module and it does not work. Which module called ale do you refer to in this line?
Hi Esther, I'm afraid that my demo only works online (PsychoJS). For offline use, we're working on improving the way you can integrate eye-tracking, but I think it won't support webcams (only dedicated hardware like Tobii). I asked around and I'll update this post once I'm sure about that. Update: yes, the offline version will only support dedicated hardware.
I have dabbled with webgazer.js for over a month now, and as far as I understand, the triplet in the array you were mentioning corresponds to each of the following
If the goal is to draw heatmaps, we could simply use the predictions from webgazer.js and an HTML canvas to overlay the tracked points. There's an open-source library that I recently found, called simpleheat.js.
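A minimal sketch of how the two could fit together, assuming simpleheat.js is loaded and a <canvas id="heatmap"> overlays the page (the id is illustrative):

```javascript
// Sketch: accumulate webgazer predictions and render them with simpleheat.
const heat = simpleheat('heatmap'); // accepts a canvas element or its id
heat.radius(25, 15); // point radius and blur, in pixels

webgazer.setGazeListener((data, t) => {
  if (data === null) { return; }
  heat.add([data.x, data.y, 0.3]); // [x, y, intensity]
  heat.draw(); // redraw the heatmap with the new point
});
```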
I am new to Pavlovia and haven't tried integrating it into a PsychoPy experiment yet, but there's a demo I built at work hosted here.
The code can be found here
I saw your in-Pavlovia tracking example and am trying to adapt it to load the simpleheat library and do things inside the Pavlovia ecosystem.