You can also try https://github.com/szydej/GazeFlowAPI. It uses local processing.
I just noticed I’m chatting with a developer of GazeFlowAPI. Cool
With GazeFlowAPI you can access real-time gaze and head position data from the GazePointer WebCam Eye-Tracker.
How to use it:
This thread about eye tracking in PsychoJS/Pavlovia has received a lot of posts. Here is a little recap:
- I have examined some libraries for eye-tracking via webcams and selected webgazer as presently the most suitable one. Here is a paper examining how well a relatively old version of webgazer works in three cognitive tasks.
- The experiment demo_eyetracking2 illustrates how to use webgazer with PsychoJS. This experiment includes a calibration procedure and a gaze-tracking procedure.
- Demo_eyetracking2 can be freely cloned and modified. Researchers have already adapted the experiment for their own needs, with discussion about one such adaptation in another thread on this forum.
- An OST colleague has made a 5-step tutorial on how to customize demo_eyetracking2, which can be found in this tweet.
Hi, You can also try to gather the data using RealEye.io - it’s very easy - no coding required.
Then you can export what you need as a CSV file and process it in any way you like.
Thank you for all your work on this. Do you know of a demo or experiment that has implemented this and logs (x,y) coordinates to an output file? I have been trying to do this, but have not been able to. Thanks.
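In case it helps, here is a minimal sketch of how gaze samples could be collected and formatted for an output file using WebGazer's gaze listener. The `setGazeListener` call is from WebGazer's published top-level API; the buffering and CSV helpers around it are illustrative, not part of WebGazer:

```javascript
// Sketch: collect gaze samples via webgazer.setGazeListener and format
// them as CSV rows. Only the setGazeListener/begin calls are WebGazer API;
// everything else is illustrative scaffolding.

const gazeSamples = [];

// Pure helper: format one sample as a CSV row "t,x,y".
function toCsvRow(sample) {
  return sample.t.toFixed(0) + ',' + sample.x.toFixed(2) + ',' + sample.y.toFixed(2);
}

// Pure helper: whole buffer as a CSV string with a header line.
function samplesToCsv(samples) {
  return ['t,x,y'].concat(samples.map(toCsvRow)).join('\n');
}

// Browser-only wiring; skipped when webgazer is not loaded (e.g. in Node).
if (typeof webgazer !== 'undefined') {
  webgazer.setGazeListener(function (data, elapsedTime) {
    if (data === null) return; // no gaze prediction for this frame
    gazeSamples.push({ t: elapsedTime, x: data.x, y: data.y });
  }).begin();
}
```

In a PsychoJS experiment you could then attach `samplesToCsv(gazeSamples)` to the experiment data at the end of a routine, rather than triggering a separate file download.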
Do you think this could be used to track eye movements while reading a text? I don’t need it to be particularly precise; I just want to be able to tell whether their eyes are moving across the page or if they have stopped reading.
I think it would be tricky to establish whether the eyes are moving or not, but establishing whether participants are looking at the screen or not should be doable. Actually, another researcher has been developing something like that, which we discussed in another thread.
A webcam-based implementation would be very difficult. Look into the differences between shape-based eye tracking such as this one (based on Papoutsaki's 2016 dissertation) and corneal-reflection eye tracking (which is what you see in more expensive, laboratory, or over-the-counter eye-tracking equipment). Corneal-reflection eye tracking is more accurate. The type of eye tracking being implemented here is shape-based, meaning it identifies a contrast between the cornea and other regions of the face. It has a higher degree of error because it is more susceptible to environmental factors such as lighting and head movements.
WebGazer, which is used here, relies on “implicit” click-based calibration: it keeps calibrating as you click, basing its predictions on the last ten or so clicks. If you have the individual engage in a passive task, the predictions will lose accuracy over time (I don’t have an exact number). You could include mandatory clicks at the end of each line or something, but then you’d greatly impact reading fluency, both in terms of saccades and fixations.
Depending on your research question, this may not be an issue. But for any reading passages that are more than a line, the accuracy will be questionable.
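If you did want to experiment with mandatory end-of-line clicks, a rough sketch could look like the following. It assumes WebGazer's top-level `recordScreenPosition(x, y, eventType)` call for feeding the regression; the line-layout helper is hypothetical and just places a dot at the right edge of each text line:

```javascript
// Sketch: mandatory calibration clicks at the end of each text line.
// webgazer.recordScreenPosition is from WebGazer's top-level API;
// the layout geometry below is a hypothetical helper.

// Pure helper: screen position of a calibration dot after line i,
// given the text block's top-left corner, width, and line height (px).
function calibrationDotPosition(lineIndex, layout) {
  return {
    x: layout.left + layout.width,                       // right edge of the text
    y: layout.top + (lineIndex + 0.5) * layout.lineHeight // vertical line center
  };
}

// Click handler for the dot shown after a given line.
function onCalibrationDotClick(lineIndex, layout) {
  const pos = calibrationDotPosition(lineIndex, layout);
  if (typeof webgazer !== 'undefined') {
    // Tell the regression the participant was looking at (x, y) when clicking.
    webgazer.recordScreenPosition(pos.x, pos.y, 'click');
  }
  return pos;
}
```

As the post above notes, though, every forced click interrupts natural reading, so this trade-off only makes sense when accuracy matters more than reading fluency.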
Thank you both so much for your replies.
@ayjayar , do you know how WebGazer compares with GazeRecorder? Is that also shape-based? From the demo, GazeRecorder doesn’t seem to use click-based calibration, but I don’t know if it will also lose accuracy over time.
With the pandemic, I am trying to find something that can be used remotely. I would not need to analyze the data at the level of saccades. Basically, what I am trying to tell is if the participant’s eyes are moving across the page vs. if they are staring into space/eyes have stopped moving. Do you think WebGazer, or any other webcam based tracker, is able to achieve this?
I am not trying to hijack the thread, so please feel free to message me directly if you’d prefer. I’d greatly appreciate your insight.
@dpg45, I can tell you I’ve messed with GazeRecorder early on, and it has many advantages and disadvantages: mainly, unlike webgazer, it requires a software download and is all locally managed, whereas WebGazer can run on any webpage, but it also automatically records video of the interaction.
I’d recommend starting a new thread, as this thread is specifically meant to stay on-topic of just using eye tracking in Pavlovia, without referencing other specific software. I’ll message you more if you wish.
I just realized that there is actually a discussion of other ones (e.g. GazeCloud) earlier in the thread. My mistake for not reading more carefully!
Thank you for all your work, @thomas_pronk. What would be the best way to “turn off” WebGazer in your demo after you no longer need it (e.g., for subsequent trials)?
webgazer.end() would be a good one for that. See their API documentation over here: Top Level API · brownhci/WebGazer Wiki · GitHub
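To make that concrete, here is a minimal sketch of stopping the tracker once the eye-tracking portion of a session is over. `end()`, `pause()`, and `resume()` are from WebGazer's top-level API; the trial-index bookkeeping is illustrative:

```javascript
// Sketch: stop WebGazer after the last trial that needs gaze data.
// webgazer.end() is from the top-level API; the bookkeeping is illustrative.

// Pure helper: true once the current trial is past the last one
// that needs gaze data.
function shouldStopTracking(trialIndex, lastEyeTrackingTrial) {
  return trialIndex > lastEyeTrackingTrial;
}

// Call at the start of each trial, e.g. from a Begin Routine code block.
function maybeStopTracking(trialIndex, lastEyeTrackingTrial) {
  if (shouldStopTracking(trialIndex, lastEyeTrackingTrial)
      && typeof webgazer !== 'undefined') {
    webgazer.end(); // shuts down the tracker and releases the webcam
  }
}
```

If gaze might be needed again later in the same session, `webgazer.pause()` and `webgazer.resume()` avoid the cost of a full restart after `end()`.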
Does webgazer report head position? I’ve been looking through the webgazer docs and can’t tell. Seems like this information would be necessary, but I don’t see a way to access it.
Thank you in advance
I also took a look and this is what I found:
- In the wiki I found the function getPositions(): Tracker API · brownhci/WebGazer Wiki · GitHub
- I tried it out by running demo_eye_tracking2, opening the console, and running:
- This returns an array with 468 elements, each of which is an array of 3 elements. Looks like 3D coordinates?
- Finally, I see in the library that getPositions() is used to draw the face overlay: WebGazer/index.mjs at 7ff29a32b12048362750d0594ecf8375dcdd22a0 · brownhci/WebGazer · GitHub
So… it’s there indeed, but not in a very easy-to-use format. What you could do is post an issue in the webgazer repo to ask for what you need; the team is very approachable. To be sure we give them some good specs to work with, it can be useful to think it through a bit. What kind of position data would you like exactly?
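As one possible spec to think through, here is a rough sketch that reduces the 468-landmark mesh to a single 3D head-position point. Treating the mesh centroid as "head position" and reaching `getPositions()` via `webgazer.getTracker()` are both assumptions on my part, not documented usage:

```javascript
// Sketch: derive a rough head position from the 468 face-mesh landmarks
// that getPositions() returns. The centroid-as-head-position idea and the
// webgazer.getTracker() access path are assumptions, not documented usage.

// Pure helper: centroid of an array of [x, y, z] landmark triples.
function meshCentroid(positions) {
  const sum = [0, 0, 0];
  for (const p of positions) {
    sum[0] += p[0];
    sum[1] += p[1];
    sum[2] += p[2];
  }
  return sum.map(function (v) { return v / positions.length; });
}

// Browser-only: sample the current mesh and reduce it to one 3D point.
function currentHeadPosition() {
  if (typeof webgazer === 'undefined') return null;
  const mesh = webgazer.getTracker().getPositions(); // 468 x 3 array
  return mesh ? meshCentroid(mesh) : null;
}
```

Logging that centroid per frame would give a coarse head-movement trace; if you need orientation as well, that would be worth spelling out in the issue to the webgazer team.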
Is the capturing related to eye-tracking or is it about capturing video in general?