
Eye-tracking development for Pavlovia

Check out : https://medium.com/@williamwang15/integrating-gazecloudapi-a-high-accuracy-webcam-based-eye-tracking-solution-into-your-own-web-app-2d8513bb9865

Thanks for the tip @szydej! See my critique of GazeCloud in an earlier post in this thread.

You can also try https://github.com/szydej/GazeFlowAPI. It uses local processing.

I just noticed I’m chatting with a developer of GazeFlowAPI. Cool :slight_smile:

So in this repo I see I need an AppKey for it to work. Peeking into the C# and HTML5 JavaScript folders, I don’t see any code that actually processes the video, but I do see sockets being set up to connect to 127.0.0.1. Is there another repo that does the processing, then?

With GazeFlowAPI you can access real-time gaze and head position data from GazePointer, a webcam eye tracker.

How to use it:

  1. Install and start GazePointer (download: https://sourceforge.net/projects/gazepointer/).
  2. To get your AppKey, register at https://gazeflow.epizy.com/GazeFlowAPI/register/ (you can use the default AppKey for testing).
  3. Connect to GazePointer and start receiving gaze data (see the sketch below).
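
For a rough idea, a browser-side client could look something like the sketch below. The port, handshake, and JSON field names are illustrative assumptions on my part; check the HTML5/JavaScript sample in the repo for the actual protocol.

```javascript
// Minimal sketch of a GazeFlowAPI client in the browser.
// ASSUMPTIONS: the port (43333), the AppKey handshake, and the JSON field
// names are guesses; the HTML5/JavaScript sample in the GazeFlowAPI repo
// is the authoritative reference.
const socket = new WebSocket('ws://127.0.0.1:43333');

socket.onopen = () => {
  // GazePointer expects the AppKey once the connection is open (assumption).
  socket.send('YourAppKey');
};

socket.onmessage = (event) => {
  const sample = JSON.parse(event.data);
  // e.g. sample.GazeX, sample.GazeY, sample.HeadX, sample.HeadY (assumed names)
  console.log('gaze sample', sample);
};

socket.onerror = (err) => console.error('GazeFlowAPI connection failed', err);
```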

This thread about eye tracking in PsychoJS/Pavlovia has received a lot of posts. Here is a little recap:

  • I have examined some libraries for eye-tracking via webcams and selected webgazer as presently being the most suitable one. Here is a paper examining how well a relatively old version of webgazer works in three cognitive tasks.
  • The experiment demo_eyetracking2 illustrates how to use webgazer with PsychoJS. This experiment includes a calibration procedure and a gaze-tracking procedure.
  • Demo_eyetracking2 can be freely cloned and modified. Researchers have already adapted the experiment for their own needs, with discussion about one such adaptation in another thread on this forum.
  • An OST colleague has made a 5-step tutorial on how to customize demo_eyetracking2, which can be found in this tweet.
5 Likes

Hi, You can also try to gather the data using RealEye.io - it’s very easy - no coding required.
Then you can export what you need as a CSV file and process it in any way you like.

Thank you for all your work on this. Do you know of a demo or experiment that has implemented this and logs (x, y) coordinates to an output file? I have been trying to do this, but have not been able to. Thanks.
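
For reference, this is roughly what I've been attempting in a JS code component; the addData calls and the property names are my guesses from the webgazer docs, so they may well be off:

```javascript
// Attempted in a Begin Routine tab, assuming the webgazer global from
// demo_eye_tracking2 is available.
webgazer.setGazeListener((data, elapsedTime) => {
  if (data === null) {
    return;  // no prediction for this frame
  }
  // Store the predicted gaze position in the PsychoJS output.
  // data.x / data.y should be pixels relative to the viewport.
  // Note: addData keeps one value per trial row, so as written this only
  // logs the latest sample of each trial; accumulating samples in an array
  // may be needed for a full trace.
  psychoJS.experiment.addData('gaze_x', data.x);
  psychoJS.experiment.addData('gaze_y', data.y);
  psychoJS.experiment.addData('gaze_t', elapsedTime);
});
```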

Do you think this could be used to track eye movements while reading a text? I don’t need it to be particularly precise; I just want to be able to tell whether their eyes are moving across the page or if they have stopped reading.

1 Like

Hi @dpg45,

I think it would be tricky to establish whether the eyes are moving or not, but establishing whether participants are looking at the screen or not should be doable. Actually, another researcher has been developing something like that, which we discussed in another thread.
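
Just to sketch the kind of check I mean (untested; getCurrentPrediction is listed in the webgazer top-level API, but whether it returns a promise may depend on the webgazer version):

```javascript
// Rough "is the participant looking at the screen?" heuristic (untested sketch).
async function looksAtScreen() {
  // getCurrentPrediction() should resolve to null when no prediction is available.
  const prediction = await webgazer.getCurrentPrediction();
  if (prediction === null) {
    return false;  // no face detected / no prediction
  }
  // Treat predictions that fall inside the viewport as "looking at the screen".
  return prediction.x >= 0 && prediction.x <= window.innerWidth &&
         prediction.y >= 0 && prediction.y <= window.innerHeight;
}
```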

Best, Thomas

1 Like

A webcam-based implementation would be very difficult. Look into the differences between shape-based eye tracking such as this one (based on Papoutsaki’s 2016 dissertation) and corneal-reflection eye tracking (which is what you see in more expensive, laboratory, or over-the-counter eye-tracking equipment). Corneal-reflection eye tracking is more accurate. The eye tracking being implemented here is shape-based, meaning it identifies a contrast between the cornea and other regions of the face. It has a higher degree of error because it is more susceptible to environmental factors such as lighting and head movements.

WebGazer, which is used here, relies on “implicit” click-based calibration: it keeps recalibrating as you click, so its predictions are based on the last ten or so clicks. If you have the participant engage in a passive task, the predictions will lose accuracy over time (I don’t have an exact number). You could include mandatory clicks at the end of each line or something, but then you’d greatly impact reading fluency, both in terms of saccades and fixations.
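
To make the mandatory-click idea concrete, something like the sketch below could feed WebGazer an explicit calibration sample on each click. recordScreenPosition is listed in WebGazer’s top-level API, but the CSS class here is made up and I haven’t checked how strongly each sample is weighted, so treat it as illustrative only.

```javascript
// Sketch: feed webgazer an explicit calibration sample whenever the reader
// clicks a marker at the end of a line. The '.end-of-line-marker' class is
// a hypothetical placeholder for whatever clickable element you add.
document.querySelectorAll('.end-of-line-marker').forEach((marker) => {
  marker.addEventListener('click', (event) => {
    // Tell webgazer that, at this moment, the gaze was at the click location.
    webgazer.recordScreenPosition(event.clientX, event.clientY, 'click');
  });
});
```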

Depending on your research question, this may not be an issue. But for any reading passages that are more than a line, the accuracy will be questionable.

2 Likes

Thank you both so much for your replies.

@ayjayar , do you know how WebGazer compares with GazeRecorder? Is that also shape-based? From the demo, GazeRecorder doesn’t seem to use click-based calibration, but I don’t know if it will also lose accuracy over time.

With the pandemic, I am trying to find something that can be used remotely. I would not need to analyze the data at the level of saccades. Basically, what I am trying to tell is whether the participant’s eyes are moving across the page or whether they are staring into space/their eyes have stopped moving. Do you think WebGazer, or any other webcam-based tracker, is able to achieve this?

I am not trying to hijack the thread, so please feel free to message me directly if you’d prefer. I’d greatly appreciate your insight.

@dpg45 I can tell you I messed with GazeRecorder early on, and it has many advantages and disadvantages: mainly that, unlike WebGazer, it requires a software download and is all locally managed, whereas WebGazer can run on any webpage, but it also automatically records video of the interaction.

I’d recommend starting a new thread, as this thread is specifically meant to stay on-topic of just using eye tracking in Pavlovia, without referencing other specific software. I’ll message you more if you wish.

1 Like

I just realized that there is actually a discussion of other ones (e.g. GazeCloud) earlier in the thread. My mistake for not reading more carefully!

2 Likes

Thank you for all your work, @thomas_pronk. What would be the best way to “turn off” WebGazer in your demo after you no longer need it, e.g., in subsequent trials?

I think webgazer.end() would be a good one for that. See their API documentation over here: Top Level API · brownhci/WebGazer Wiki · GitHub
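
For example, in a code component right after your last eye-tracking routine. I haven’t tried this in the demo myself, and pause/resume (also in the top-level API) might be the gentler option if you need tracking again later:

```javascript
// After the last eye-tracking routine, stop webgazer so it no longer grabs
// webcam frames or produces predictions.
webgazer.end();

// If tracking is needed again later in the experiment, pausing and resuming
// may be preferable to ending and restarting:
// webgazer.pause();
// ...later...
// webgazer.resume();
```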

2 Likes

Does webgazer report head position? I’ve been looking through the webgazer docs and can’t tell. It seems like this information would be necessary, but I don’t see a way to access it.
Thank you in advance :slight_smile:

Hey @r.ward,

I also took a look and this is what I found:

  1. In the wiki I found the function getPositions(): Tracker API · brownhci/WebGazer Wiki · GitHub
  2. Which I tried out by running demo_eye_tracking2, opening the console, and running: webgazer.getTracker().getPositions()
  3. This returns an array with 468 elements, each of which is an array of 3 elements. Looks like 3D coordinates? (See the sketch after this list.)
  4. Finally, I see in the library that getPositions() is used to draw the face overlay: WebGazer/index.mjs at 7ff29a32b12048362750d0594ecf8375dcdd22a0 · brownhci/WebGazer · GitHub
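
To give an idea, a crude head-position proxy could be the centroid of those landmarks. I haven’t verified what the units or coordinate frame are, so treat the interpretation below as an assumption:

```javascript
// Crude head-position proxy: the average of all face-mesh landmarks returned
// by webgazer.getTracker().getPositions(). The units and coordinate frame of
// the landmarks are an assumption; inspect a few samples before relying on it.
function roughHeadPosition() {
  const landmarks = webgazer.getTracker().getPositions();
  if (!landmarks || landmarks.length === 0) {
    return null;  // no face detected
  }
  const sum = [0, 0, 0];
  for (const [x, y, z] of landmarks) {
    sum[0] += x;
    sum[1] += y;
    sum[2] += z;
  }
  return sum.map((total) => total / landmarks.length);  // [x, y, z] centroid
}
```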

So… it’s there indeed, but not in a very easy-to-use format :slight_smile: What you could do is post an issue in the webgazer repo to ask for what you need; the team is very approachable. To be sure we give them some good specs to work with, it can be useful to think it through a bit. What kind of position data would you like exactly?

Best, Thomas

Hello guys,

I am a newbie to PsychoJS. I am a student and I am trying to build a small experiment where I show an image and capture frames using the webcam. I have code in JavaScript that already does this, but I want to integrate it into PsychoJS. I have been trying to do this using the Builder tool but couldn’t manage it. Could you please direct me to a relevant document or tutorial? Any help would be highly appreciated.

Thanks,
SPJ

Hey @spj,

Is the capturing related to eye-tracking or is it about capturing video in general?
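
If it’s the latter, here’s a bare-bones, untested sketch of grabbing a webcam frame in the browser with the standard getUserMedia API; how you wire it into a Builder code component (and what you do with the frame afterwards) is up to you:

```javascript
// Grab a single webcam frame as a PNG data URL using standard browser APIs.
async function grabFrame() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();  // playback starts once the stream delivers frames

  // Draw the current video frame onto an off-screen canvas.
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);

  // Release the camera and return the frame (e.g. to upload or store later).
  stream.getTracks().forEach((track) => track.stop());
  return canvas.toDataURL('image/png');
}
```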