An elaborate eye tracking experiment for infants with auditory and visual stimuli

Hi! I’d like to spread the word about an experiment I created for a research group at Karolinska Institutet, as I think it can be useful to other PsychoPy experiment developers (who may borrow stimuli and/or code from it).

In the experiment, participants (intended to be infants, though not necessarily) are shown visual stimuli and played audio snippets while their gaze and pupil dilation are tracked. I’ve now created a shareable version of the experiment, in which stimuli that can’t be freely shared have been garbled. All experiment files, including instructions and stimuli, are available on GitHub:
(if you’re not used to GitHub, just use the green ‘Code’ button -> ‘Download ZIP’)

The experiment is created using PsychoPy Builder, but includes many code snippets implementing custom functionality, such as animated (bouncing, blinking etc.) images and real-time tracking of participant gaze/distance from screen. The experiment is thoroughly documented through README/guide files as well as many comments in the code snippets. I also tried to make it easy for others to translate the experiment and/or replace the stimuli.
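To give a flavour of the kind of animation snippet involved (this is an illustrative sketch, not code taken from the experiment; the function name and the amplitude/period values are made up), a bouncing image in Builder typically boils down to updating the stimulus position from an "Each Frame" code component:

```python
import math

def bounce_y(t, base_y=0.0, amplitude=0.1, period=0.8):
    """Return a vertical offset that makes a stimulus bob up and down.

    t: time in seconds since the animation started (Builder provides `t`).
    amplitude and period are in height units/seconds and purely illustrative.
    """
    return base_y + amplitude * abs(math.sin(2 * math.pi * t / period))

# In a Builder "Each Frame" code snippet one might then write:
#   image_stim.pos = (image_stim.pos[0], bounce_y(t))
print(bounce_y(0.2))
```

The `abs()` makes the image "land" at its base position and bounce back up, rather than swing symmetrically around it.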

I spent a lot of time finding appropriate stimuli/photos (faces/geometric shapes/man-made objects/natural objects), and the ‘geometric shapes’ were tailor-made for the experiment by designer Amanda Gren. I’ve done my best to document how/by whom these stimuli may be used, and where they come from.

Of particular interest might be how eyetracking is handled. When I started using eyetracking with PsychoPy, I struggled to understand how to go about it the right way. The ‘right’ (officially recommended) way is still unclear to me, but I found a system that works quite well for this experiment, so it may serve as inspiration for others. Do note, however, that what I did isn’t really in line with what seems to be recommended in the most recent versions of PsychoPy, where eyetracking has become more integrated into the Builder itself, so you’ll likely want to reuse only pieces of what I did. Finally, I’ve added a guide on handling the output data, including the resulting HDF5 eyetracker data files, with R and Python, which might also be useful.
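As a hypothetical sketch of the Python side of that output-handling (not the guide’s actual code): the iohub HDF5 file can usually be read with pandas/pytables, after which you typically want to filter out samples where the tracker lost the eyes. The table path and the convention that a status of 0 means a fully valid sample are assumptions about a typical iohub setup, so check your own file’s layout:

```python
# Loading the samples (commented out, since it needs pandas + pytables and a
# real recording; the table path is typical for a binocular tracker in iohub):
#
# import pandas as pd
# samples = pd.read_hdf(
#     "my_recording.hdf5",
#     "data_collection/events/eyetracker/BinocularEyeSampleEvent",
# )

def drop_invalid_samples(samples, status_key="status", valid_value=0):
    """Keep only samples whose status flag marks them as valid.

    Works on any iterable of dict-like rows; in iohub a status of 0
    conventionally means the sample is usable (an assumption -- verify
    against your tracker's documentation).
    """
    return [s for s in samples if s.get(status_key, valid_value) == valid_value]

# Plain dicts standing in for rows of the HDF5 sample table:
rows = [
    {"time": 0.00, "left_gaze_x": 0.10, "status": 0},
    {"time": 0.01, "left_gaze_x": float("nan"), "status": 22},  # track loss
]
print(drop_invalid_samples(rows))
```

Filtering on the status column early saves you from NaN gaze coordinates leaking into downstream averages.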

As an add-on, I also created a ‘tkinter’ desktop app for reformatting the experiment output data, combining the HDF5 (eyetracker/iohub) and ‘regular’ experiment CSV output files from an experiment run into a single (huge) CSV. This was mostly done to help colleagues who might be uncomfortable working with HDF5 files, but the code I wrote for it might also be of interest to someone.
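The core of such a merge can be sketched as follows (again an illustration of the idea rather than the app’s actual code; the column names `trial`, `start`, `stop`, `time` and `gaze_x` are hypothetical): assign each eyetracker sample to the trial whose time window contains it, then write the flattened rows out as one CSV.

```python
import csv
import io

def merge_trials_and_samples(trials, samples):
    """Attach each eyetracker sample to the trial whose time window contains it.

    trials: rows from the 'regular' experiment CSV, reduced here to dicts with
            hypothetical 'trial', 'start' and 'stop' keys (seconds).
    samples: rows from the HDF5 eyetracker data, as dicts with a 'time' key.
    Returns one flat dict per sample -- the "single huge CSV" idea.
    """
    merged = []
    for s in samples:
        for tr in trials:
            if tr["start"] <= s["time"] < tr["stop"]:
                merged.append({**tr, **s})  # trial columns + sample columns
                break
    return merged

trials = [{"trial": 1, "start": 0.0, "stop": 1.0},
          {"trial": 2, "start": 1.0, "stop": 2.0}]
samples = [{"time": 0.5, "gaze_x": 0.1}, {"time": 1.5, "gaze_x": -0.2}]
rows = merge_trials_and_samples(trials, samples)

# Write the flat rows out as a single CSV (to a string here, for illustration).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])  # the CSV header row
```

The per-sample row duplication is what makes the resulting CSV huge, but it is also what lets colleagues analyse everything with ordinary spreadsheet/R/pandas tooling.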

I hope this is useful, and I’m happy to hear any comments (though I won’t actively develop the experiment further, due to lack of time).