
Gaze Tracking and Heatmaps for experiments on Pavlovia

Hello All,

To start off, @thomas_pronk built an amazing demo/example showing that external libraries like webgazer.js can be included in PsychoJS (repo here).

I followed his work and took it a step further, generating heatmaps from the eye-tracker gaze positions using simpleheat.js (repo here).

Try the simple experiment here.

The next step is getting the tracking data out of the experiment. The data would be x,y gaze positions associated with a particular stimulus, rather than numerical scores like those used in the experiment.

I am proposing the use of IndexedDB for this purpose: create an object store for each stimulus/image (and then export it), and use the data at the end of the experiment to show heatmaps for each stimulus/image.
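A minimal sketch of what that could look like (the database name, store-naming convention, and record shape are assumptions on my part, not an existing implementation):

```javascript
// Sketch: one IndexedDB object store per stimulus/image.
// Normalize a stimulus/image name into a valid store name.
function storeNameFor(stimulusId) {
  return 'gaze_' + stimulusId.replace(/[^a-zA-Z0-9_-]/g, '_');
}

// Open (or create) the database with one store per stimulus.
function openGazeDb(stimulusIds, onReady) {
  const request = indexedDB.open('gazeTrackingDb', 1);
  request.onupgradeneeded = (event) => {
    const db = event.target.result;
    for (const id of stimulusIds) {
      const name = storeNameFor(id);
      if (!db.objectStoreNames.contains(name)) {
        // auto-incrementing key; each record would be {t, x, y}
        db.createObjectStore(name, { autoIncrement: true });
      }
    }
  };
  request.onsuccess = (event) => onReady(event.target.result);
}
```

Exporting would then iterate the stores and dump each one's records alongside its stimulus name.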

To the community,
Please suggest any alternative techniques that you think will work better.

I started this thread branching off of the main eye tracking thread because this one will concentrate more on using webgazer.js and other libraries within Pavlovia.


Hey Saketh,

Thanks for taking it to the next level! Below are some reflections on your proposal:

  1. How do we decide whether a gaze is associated with a stimulus?
    1a. Should the gaze be inside of it or do we allow it to be slightly outside of the stimulus?
    1b. Is a bounding box sufficient, or should we check the contours (for stars or triangles, for example)?
    1c. Can a gaze be associated with multiple stimuli at once?
  2. About the data structure
    2a. IndexedDB sounds good: it is supported by most browsers (note that Safari is almost there, but not quite yet) and capable of handling relatively large datasets (a good perk, since the list of gazes can become quite big over time).
    2b. What would we actually want to store on Pavlovia? Is there any way to reduce the data or should it be as raw as possible?
    2c. The x,y coordinates of my demo are relative to the PsychoJS canvas. If they were relative to a stimulus, what exactly would they be relative to? To the anchor points (which by default are the coordinates of the center of the stimulus)?
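To illustrate 2c: under the simplest interpretation, stimulus-relative would just mean subtracting the stimulus anchor (its center, by default) from the canvas-relative gaze. A sketch of that assumption, with hypothetical names:

```javascript
// Sketch: re-express a canvas-relative gaze sample as stimulus-relative,
// assuming the stimulus anchor is its center and both use the same units.
function toStimulusRelative(gaze, stimulus) {
  // gaze: {x, y} in canvas units; stimulus: {x, y} center in the same units
  return { x: gaze.x - stimulus.x, y: gaze.y - stimulus.y };
}
```

The open question is whether the anchor point is the right reference, and what happens when the stimulus moves or is resized mid-trial.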

Cheers, Thomas


A friendly colleague remarked that the Python version of PsychoPy already has an ROI component that goes a long way 🙂 Eye Tracker Region of Interest Component — PsychoPy v2021.2


Hello Thomas,

I believe the gaze should be screen-relative rather than stimulus-relative. (Instead of a default canvas, I think we should resize it to match the user's screen resolution, store that resolution as a key:value pair alongside the data, and record all of the gaze data.)

I do not completely understand this at the moment.

I think it should not be, at this point in time. I am only thinking of image stimuli at the moment; please advise on the same.

2a. I guess we can have a standard naming convention for the IndexedDB, and clear the database whenever we start a new experiment, or at the end of the current experiment after the export procedure.

2b. Since we don’t want to keep calling Pavlovia’s APIs to store gaze data, I am thinking of something like a batched export from IndexedDB, e.g., the gaze data associated with each stimulus exported in one go. I know my answer is vague on this; please advise.
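To make the batching idea a bit less vague, here is a sketch; the `uploadBatch` function is a placeholder for whatever endpoint would actually receive the data, not a real Pavlovia API call:

```javascript
// Sketch: split collected gaze samples into fixed-size batches
// before sending them upstream, instead of one call per sample.
function toBatches(samples, batchSize) {
  const batches = [];
  for (let i = 0; i < samples.length; i += batchSize) {
    batches.push(samples.slice(i, i + batchSize));
  }
  return batches;
}

// uploadBatch is a placeholder: e.g. one POST per batch.
async function exportGaze(samples, uploadBatch, batchSize = 500) {
  for (const batch of toBatches(samples, batchSize)) {
    await uploadBatch(batch);
  }
}
```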

2c. The plotting or tracking canvas can be resized to match the user’s resolution, and that resolution stored in one of the key:value pairs; then x,y positions can be measured relative to the screen size rather than the stimulus. I am saying so because when we draw heatmaps on a different device than the one the experiment was conducted on, it becomes a painful remapping procedure. (This is half experience, half guessing.)
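A sketch of what I mean by screen-relative storage (function names are just placeholders): normalize each gaze sample by the recording resolution, so redrawing on a different device is a simple rescale rather than a remapping.

```javascript
// Sketch: store gaze positions normalized to the recording screen
// resolution, so heatmaps can be redrawn at any target resolution.
function normalizeGaze(x, y, screenW, screenH) {
  return { nx: x / screenW, ny: y / screenH };
}

// Map a stored normalized sample back onto a (possibly different) screen.
function denormalizeGaze(nx, ny, targetW, targetH) {
  return { x: nx * targetW, y: ny * targetH };
}
```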

The ROI component is interesting. I will dig into it and see how it can be applied to webgazer.

I took a closer look at the ROI component. It defines a bunch of polygons and records when a participant starts and stops looking at each polygon. This data is recorded per trial. For example, you’ve got a column called roi.timesOn. For a particular trial (row in the data) this column can have the value [4.294582699978491, 5.244570399983786, 6.495008800004143, 8.0029707000067], which means “on trial X the participant started looking at the ROI 4 times, with onset times 4.2945… etc.” For high-level data this seems elegant.
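For instance, assuming the companion roi.timesOff column holds the matching offset times (the offset values below are made up for illustration), pairing the two gives the dwell duration of each look:

```javascript
// Onsets from the roi.timesOn example above; offsets are made-up numbers.
const timesOn  = [4.294582699978491, 5.244570399983786, 6.495008800004143, 8.0029707000067];
const timesOff = [4.8, 5.9, 7.1, 8.5];

// Pair each onset with its offset to get per-look dwell durations.
function dwellDurations(on, off) {
  return on.map((t, i) => off[i] - t);
}

const looks = timesOn.length;                    // number of looks this trial
const dwell = dwellDurations(timesOn, timesOff); // seconds per look
```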

About drawing heatmaps from gaze data… The more I think about it, the more it might make a nice library in itself, since you’ve got this low-level dataset that can become quite big. Something like:

  • A system for managing a time-series of X,Y coordinates (based on IndexedDB)
  • Data comes in by connecting it to an input device, like webgazer (or MouseView)
  • Data comes out by connecting it to a task engine (like PsychoJS)
  • I’d guess a lot of the logic would concern converting different coordinate systems. For example, for PsychoJS it’s handy to convert webgazer coordinates to PsychoJS height units. Or for the heatmap, whatever units that library needs.
  • And finally (this one could be tricky) some way to store the time-series in a database or file. Ideally in an efficient format, so that the database doesn’t get too big?
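As an example of the coordinate-conversion logic such a library would own, here is a sketch of one plausible mapping (a guess on my part, not taken from PsychoJS source): webgazer reports gaze in screen pixels with the origin at the top-left and y pointing down, while PsychoJS ‘height’ units are centered on the window and scaled by the window height, with y pointing up.

```javascript
// Sketch: convert a webgazer pixel coordinate to PsychoJS 'height' units,
// assuming origin top-left / y-down on the webgazer side and a centered,
// height-scaled, y-up coordinate system on the PsychoJS side.
function webgazerToHeightUnits(px, py, winW, winH) {
  return {
    x: (px - winW / 2) / winH,
    y: (winH / 2 - py) / winH,
  };
}
```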
  • I tried wrapping my head around the ROI component; I will make a demo and publish it soon.

  • Your data model seems sufficient, at least at this point in time, i.e., the participant looking at a position (X,Y) n number of times.

  • I think we should add more context about the library for making heatmaps from gaze data.

  • Regarding the large dataset: I have been working with time-series data for a reasonable amount of time (12-16 months on sensor data), and I think I can answer this to the best of my knowledge:

    • Yes, the database does get larger with time.
    • I can think of two ways here:
      a. Set limits on how long gaze can be recorded, i.e., the time for which each stimulus can be presented, for n stimuli.
      b. Set limits on the number of stimuli themselves.
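A third option, which is purely my own suggestion, would be to bound the dataset size directly with a capped per-stimulus sample buffer:

```javascript
// Sketch: keep at most maxSamples gaze samples per stimulus,
// dropping the oldest sample once the limit is reached.
function pushCapped(buffer, sample, maxSamples) {
  buffer.push(sample);
  if (buffer.length > maxSamples) buffer.shift();
  return buffer;
}
```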

I can see that there could be problems with either of what I wrote above.
Another policy to ponder is whether we would overwrite the existing gaze data for a subject after we offload the data from IndexedDB to Pavlovia (and also clear IndexedDB once the offloading promise has resolved).
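The clear-after-offload ordering could be sketched like this; the `store` object with its `upload`/`clear` methods is a placeholder, not a real PsychoJS or Pavlovia API:

```javascript
// Sketch: clear the local IndexedDB copy only after the offload promise
// has resolved, so a failed upload never loses data.
async function offloadThenClear(store) {
  try {
    await store.upload();  // e.g. a batched POST of the gaze data
    await store.clear();   // safe only after the upload succeeded
    return 'cleared';
  } catch (err) {
    return 'kept';         // upload failed: keep the local copy for a retry
  }
}
```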