I am doing an experiment on reading. Participants need to focus on a fixation dot for 800 ms; then a word is shown in the periphery (onset of stimulus presentation = start time). Upon recognition, participants press the spacebar, the word stimulus disappears, and the participant reads the word out loud. The difference between the timepoint of pressing the spacebar and the start time is the reaction time for recognition.
I am using an EyeLink 1000 to measure the 800 ms fixation.
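For context, one trial looks roughly like this (a simplified sketch, not my actual code; draw_fixation_dot, wait_for_stable_fixation, show_word and hide_word are placeholder names for my own routines):

from psychopy import core, event

def run_trial(word):
    draw_fixation_dot()                    # placeholder: show the fixation dot
    wait_for_stable_fixation(0.8)          # placeholder: 800 ms fixation, checked with the EyeLink
    rt_clock = core.Clock()
    show_word(word)                        # placeholder: word appears in the periphery
    rt_clock.reset()                       # start time = stimulus onset
    event.waitKeys(keyList=['space'])      # participant presses the spacebar upon recognition
    reaction_time = rt_clock.getTime()     # spacebar timepoint minus start time
    hide_word()                            # word disappears; participant reads it out loud
    return reaction_time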
This is the code I use for connecting to the EyeLink 1000, in a file called input.py, which defines all input:
from psychopy.iohub import launchHubServer  # needed for the launchHubServer() call below

if self.input_type is EYETRACKER:
    # ioHub device class path for the SR Research EyeLink
    iohub_tracker_class_path = 'eyetracker.hw.sr_research.eyelink.EyeTracker'
    eyetracker_config = dict()
    eyetracker_config['name'] = 'tracker'
    eyetracker_config['model_name'] = 'EYELINK 1000 DESKTOP'
    eyetracker_config['runtime_settings'] = dict(sampling_rate=1000,
                                                 track_eyes='RIGHT')
    io = launchHubServer(**{iohub_tracker_class_path: eyetracker_config})
    # Get some iohub devices for future access.
    self.tracker = io.devices.tracker
    # Run eyetracker calibration
    r = self.tracker.runSetupProcedure()
In a separate study.py file I have my experiment routine. There I write an outfile containing condition, stimulus word and reaction time.
import datetime
import os

def exit_experiment():
    # Write one CSV row per trial: condition (test_type), stimulus word, reaction time
    filename = 'results_' + datetime.datetime.now().strftime('%Y%m%d_%H%M') + '.csv'
    with open(os.path.join(os.getcwd(), filename), 'w+') as outfile:
        outfile.write('test_type,word,reaction_time\n')
        for result in results:
            outfile.write(','.join(result) + '\n')
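For reference, results is just a list that the trial routine appends one tuple of strings to per trial, roughly like this (simplified):

# Somewhere in the trial routine (simplified): one record per trial
results.append((test_type, word, str(reaction_time)))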
However, in addition to condition, stimulus word and reaction time, I would also like to save all the x,y coordinates of a participant's gaze between the start time and the reaction time of each trial.
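To make that concrete, this is roughly what I have in mind, though I have no idea whether these are the right ioHub calls (setRecordingState(), clearEvents(), getEvents() and the time/gaze_x/gaze_y attributes are my guesses, and wait_for_spacebar is a placeholder for my response check):

def collect_gaze_for_trial(tracker, wait_for_spacebar):
    tracker.setRecordingState(True)        # start recording at word onset (start time)?
    tracker.clearEvents()                  # drop anything buffered before onset?
    wait_for_spacebar()                    # placeholder: blocks until the spacebar press
    gaze_samples = []
    for ev in tracker.getEvents():         # everything between onset and the keypress?
        if hasattr(ev, 'gaze_x'):          # keep only events that carry gaze coordinates
            gaze_samples.append((ev.time, ev.gaze_x, ev.gaze_y))
    tracker.setRecordingState(False)       # stop recording at reaction time
    return gaze_samples

Each trial's gaze_samples would then need to end up in the output file next to the corresponding word stimulus.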
So, these are my questions:
- Can ioHub save the x,y coordinates of a participant's gaze? Right now, the coordinates aren't saved at all.
- If so, how can I implement this in my code?
- If so, where can I define the location of the saved file?
- Can these x,y coordinates easily be coupled to the corresponding word stimulus?
- Is it possible to only save the x, y coordinates between stimulus onset and reaction time?
I hope these questions aren’t too silly, but it’s the first time I am using an eyetracker.
Thank you for your feedback.
Koen