
Retrieve Fixation Events using ioHub

Dear PsychoPy users,

I recently began coding an eye-tracking experiment (EyeLink 1000) and used the “simple.py” demo (PsychoPy v2021.2.0) as a starting point.

Even though I can easily access the current gaze position using “tracker.getLastGazePosition()” (which is great, btw), I’m currently struggling to retrieve only the fixation events (x, y, duration) during a trial that lasts 2 seconds.

I found the FixationEndEvent class in the documentation (SR Research — PsychoPy v2021.2), but I’m lost on how to get that information after the trial has finished.

Thanks for your help!

I have a similar issue. I tried calling eyetracker.getEvents(event_type_id=53) to get fixation start events on every window refresh (win.flip()), since the eye tracker samples at 1000 Hz while the window refreshes at 60 Hz. However, the call returns an empty list no matter which event_type_id I use. Maybe @sol would be able to help here.

The issue may be that waiting 2 seconds before retrieving any events causes the event buffer to fill up and old events to be dropped. By default only the last 1024 events are kept in memory, and this includes eye samples, so when running at 1000 Hz only the last second (ish) of data will be buffered.

Updating the simple.py demo to print start and end fixation events during the trial loop seems to be working:

# at the top of the script, with the other imports
from psychopy.iohub.constants import EventConstants

# within the 'trial' loop
fstarts = tracker.getEvents(event_type_id=EventConstants.FIXATION_START)
fends = tracker.getEvents(event_type_id=EventConstants.FIXATION_END)
for e in fstarts:
    print(e)
for e in fends:
    print(e)

This should print one line for each fixation start / end event, similar to:

FixationStartEventNT(experiment_id=0, session_id=0, device_id=0, event_id=7713, type=53, device_time=63.962, logged_time=56.450762999942526, time=56.41976299994253, confidence_interval=0.0009974001441150904, delay=0.030999999999998806, filter_id=0, eye=22, gaze_x=-33408.0, gaze_y=33280.0, gaze_z=0, angle_x=-32768.0, angle_y=-32768.0, raw_x=0, raw_y=0, pupil_measure1=32768.0, pupil_measure1_type=70, pupil_measure2=0, pupil_measure2_type=0, ppd_x=-32768.0, ppd_y=-32768.0, velocity_x=0, velocity_y=0, velocity_xy=-32768.0, status=0)
FixationEndEventNT(experiment_id=0, session_id=0, device_id=0, event_id=9706, type=54, device_time=65.96900000000001, logged_time=58.44264489994384, time=58.42664489994385, confidence_interval=0.0009652001317590475, delay=0.015999999999991132, filter_id=0, eye=22, duration=2.007000000000005, start_gaze_x=-33408.0, start_gaze_y=33280.0, start_gaze_z=0, start_angle_x=-32768.0, start_angle_y=-32768.0, start_raw_x=0, start_raw_y=0, start_pupil_measure1=32768.0, start_pupil_measure1_type=70, start_pupil_measure2=0, start_pupil_measure2_type=0, start_ppd_x=27.299999237060547, start_ppd_y=26.0, start_velocity_x=0, start_velocity_y=0, start_velocity_xy=-32768.0, end_gaze_x=-33408.0, end_gaze_y=33280.0, end_gaze_z=0, end_angle_x=-32768.0, end_angle_y=-32768.0, end_raw_x=0, end_raw_y=0, end_pupil_measure1=32768.0, end_pupil_measure1_type=70, end_pupil_measure2=0, end_pupil_measure2_type=0, end_ppd_x=27.299999237060547, end_ppd_y=26.0, end_velocity_x=0, end_velocity_y=0, end_velocity_xy=-32768.0, average_gaze_x=-469.8000030517578, average_gaze_y=99.5, average_gaze_z=0, average_angle_x=-5004.0, average_angle_y=-1059.0, average_raw_x=0, average_raw_y=0, average_pupil_measure1=1000.0, average_pupil_measure1_type=70, average_pupil_measure2=0, average_pupil_measure2_type=0, average_ppd_x=0, average_ppd_y=0, average_velocity_x=0, average_velocity_y=0, average_velocity_xy=-32768.0, peak_velocity_x=0, peak_velocity_y=0, peak_velocity_xy=-32768.0, status=0)
# .....
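
Since the original question asked for (x, y, duration) per fixation, here is a minimal sketch for pulling those fields out of the end events once the trial is over (field names are taken from the FixationEndEventNT printout above; whether the average_* fields are populated depends on the tracker):

# collect (x, y, duration) for each fixation detected during the trial
fixations = []
for e in tracker.getEvents(event_type_id=EventConstants.FIXATION_END):
    fixations.append((e.average_gaze_x, e.average_gaze_y, e.duration))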

If you need to buffer more events, you can increase the buffer length used by the ioHub device when it is configured. Currently the maximum number of events that can be buffered is 2048:

# Update the simple.py demo's eyelink config to buffer 2048 events / samples

elif TRACKER == 'eyelink':
    eyetracker_config['model_name'] = 'EYELINK 1000 DESKTOP'
    eyetracker_config['event_buffer_length'] = 2048
    eyetracker_config['runtime_settings'] = dict(sampling_rate=1000, track_eyes='RIGHT')
    devices_config['eyetracker.hw.sr_research.eyelink.EyeTracker'] = eyetracker_config

Thank you

Sol, thank you for the quick reply. Importing EventConstants did the trick for me. Also, noted on increasing the event_buffer_length.

I have one quick question regarding a forced fixation paradigm experiment. I’m presenting a stimulus for 250 ms (15 calls to win.flip() in a for loop), and this is the code inside the for loop:

gpos = self.et.getPosition()  # self.et == eyetracker instance attribute of the iohub device class
if type(gpos) in [tuple, list]:  # following example code from the demo
    fixating = self.eye.gazeOkayRegion.contains(*gpos)  # visual stim Circle instance with radius == 1 degree of visual angle
    if not fixating:
        break  # break out of the for loop and present broken-fixation text on screen

Based on the discussion thus far regarding event buffers, and given that I’m only polling the eye sample (x, y) position every ~16 ms due to the monitor refresh rate while the eye tracker samples at 1000-2000 Hz, would you recommend filtering for SACCADE_START events instead and breaking out of the for loop if a SACCADE_START event is detected:

sstarts = self.et.getEvents(event_type_id=EventConstants.SACCADE_START)
if sstarts:
    break  # break out of the loop and present a broken-fixation message on screen

or would you recommend a conditional that uses a short-circuiting or, such as:

not_fixating = False
gpos = self.et.getPosition()  # self.et == eyetracker instance attribute of the iohub device class
if type(gpos) in [tuple, list]:  # following example code from the demo
    not_fixating = not self.eye.gazeOkayRegion.contains(*gpos)

s_ends = self.et.getEvents(event_type_id=EventConstants.SACCADE_END)
saccade_landed_outside = False
if s_ends:
    out_of_region = []
    for s in s_ends:
        end_pos = (s.end_gaze_x, s.end_gaze_y)  # where the saccade landed
        out_of_region.append(not self.eye.gazeOkayRegion.contains(*end_pos))
    saccade_landed_outside = any(out_of_region)

if not_fixating or saccade_landed_outside:
    break  # break out of the stimulus presentation loop

Generally I would suggest using the et.getPosition() approach. For finer control, you can access the eye samples themselves using et.getEvents(event_type_id=EventConstants.MONOCULAR_EYE_SAMPLE) if recording from one eye, or et.getEvents(event_type_id=EventConstants.BINOCULAR_EYE_SAMPLE) if recording from both. Accessing the eye samples would allow you, for example, to only trigger a broken fixation if N samples in a row were outside of the fixation region.
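
For instance, in the binocular case the sample fields are prefixed per eye; a rough sketch (the left_/right_ field names come from the BinocularEyeSampleEvent class, worth double-checking against the docs for your version):

# sketch: check each eye's gaze position against the fixation region
samples = self.et.getEvents(event_type_id=EventConstants.BINOCULAR_EYE_SAMPLE)
for s in samples:
    left_ok = self.eye.gazeOkayRegion.contains(s.left_gaze_x, s.left_gaze_y)
    right_ok = self.eye.gazeOkayRegion.contains(s.right_gaze_x, s.right_gaze_y)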

The advantage of using sample-based data is that it will trigger as soon as the eye is outside the fixation region, regardless of the type of eye movement that put it there: if the eye drifts outside the fixation region it will still get caught, even if no saccade event was detected by the tracker.

The issue I see with using saccade start events is that you do not know where the saccade will land, which could still be within the valid fixation region.

Using saccade end events would make sense if you only wanted to trigger the broken fixation state when the eye tracker detected a saccade that also landed outside the fixation region, but not trigger one if the eye drifts outside the fixation region and no saccade event was detected.

Got it! Thanks for the advice. I like the N-samples-in-a-row-outside-the-fixation-region approach. This is the algorithm I came up with (open to better / faster versions):

import numpy as np

N = 4  # number of consecutive out-of-region samples that counts as a broken fixation
ones = np.ones(N)
for i in range(15):
    samples = et.getEvents(event_type_id=EventConstants.MONOCULAR_EYE_SAMPLE)
    eyeInRegion = True
    if len(samples) >= N:  # len(samples) is ~33 on average when sampling at 2000 Hz
        not_in_region = np.array(
            [not self.eye.gazeOkayRegion.contains(s.gaze_x, s.gaze_y) for s in samples])
        # convolving with a length-N window of ones sums each window of N flags;
        # a window sum equal to N means N consecutive samples were out of the region
        window_sums = np.convolve(not_in_region, ones, "valid")
        eyeInRegion = not np.any(window_sums == N)
    if not eyeInRegion:
        # present text indicating broken fixation
        win.flip()
        break
    # code to draw the stimuli presented for 250 ms when eyeInRegion stays True over all 15 iterations
    win.flip()

I’m just not sure whether this is the fastest approach to avoid dropped frames, because the time to loop through all the events in samples, plus the overhead of drawing stimuli before calling win.flip(), may exceed the ~16.67 ms frame budget (60 Hz) in some instances (depending on the sampling rate of the eye tracker).

You may want to monitor the samples across the whole 15-frame period instead of resetting the eyeInRegion variable on each frame. That way, if the last 2 samples from frame f-1 are out of bounds as well as the first 2 samples from frame f, eyeInRegion == False.

At start of each trial and after a fixation error is detected:

# tracks the number of consecutive samples out of bounds
sample_out_of_bounds_count = 0

Within the 15 frame presentation loop:

samples = tracker.getEvents(event_type_id=EventConstants.MONOCULAR_EYE_SAMPLE)
for s in samples:
    if gaze_ok_region.contains(s.gaze_x, s.gaze_y):
        sample_out_of_bounds_count = 0
    else:
        sample_out_of_bounds_count += 1
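
To actually trigger the broken-fixation branch you would then compare the counter against your threshold; a minimal sketch, assuming the N = 4 from your earlier snippet:

# hypothetical trigger: N consecutive out-of-bounds samples break fixation
if sample_out_of_bounds_count >= N:
    # present the broken-fixation text and abort the presentation loop
    break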

Timing wise, with the EyeLink running at 1000 Hz, the above code takes 1.3 to 1.5 msec to execute on my PC. I think you should be fine in terms of not dropping 60 Hz frames because of this extra code, but you should check this on your own hardware.
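
If you want to verify the timing on your own setup, here is a rough sketch using PsychoPy's clock (core.getTime() is a standard PsychoPy call; the rest reuses the names from the snippets above):

from psychopy import core

t0 = core.getTime()
samples = tracker.getEvents(event_type_id=EventConstants.MONOCULAR_EYE_SAMPLE)
for s in samples:
    if gaze_ok_region.contains(s.gaze_x, s.gaze_y):
        sample_out_of_bounds_count = 0
    else:
        sample_out_of_bounds_count += 1
# elapsed time for the event handling, in milliseconds
print('event handling took %.3f ms' % ((core.getTime() - t0) * 1000.0))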

Very simple approach. I like it. Thanks!