
Eyetracking Tobii TX300

OS (e.g. Win10): Win10
PsychoPy version (e.g. 1.84.x): 2021.2.3
Standard Standalone? (y/n) If not then what?:
What are you trying to achieve?: make Eyetracking work, with Tobii TX300

What did you try to make it work?: make Eyetracking work, with Tobii TX300

What specifically went wrong when you tried that?: I have sound files and want to track the pupil. I tried working with the Builder to connect the eye tracker with my PsychoPy script, and everything seems to be running, but when I look at my output data file it is almost empty, or at least I don't get any pupil data. I have tried many solutions, but nothing has worked for me so far, because most of the time the changes have to be made in the Coder, and I haven't worked with Python before. I tried to work along this: GitHub - aleksandernitka/EyeTracking_Tobii_PsychoPy_IOHUB: a short guide on how to use iohub when using Tobii Eye Trackers with PsychoPy Builder. Sadly it didn't help.
Maybe someone has an idea how I can run my experiment and get the pupil data. Thanks, Timon.

Hi timon,

Do you need to access the pupil data while the experiment is running, or only after the experiment is complete (i.e. from a data file)? Both can be done, but the code you would need to add to your experiment will differ depending on what you want to do.

Thank you

@sol, do you have code to convert the hdf5 file format to something usable?

Hi @sol ,

I just need a data file after the experiment is complete, but I don't know why it doesn't work.
Thanks for the fast help.


I would like to add here that I’ve used the Titta package (GitHub - marcus-nystrom/Titta: Python and PsychoPy interface to Tobii eye trackers using Tobii Pro SDK) with a Tobii X2-30 to collect pupil data in PsychoPy and think it’s great (it works with most Tobii trackers). If interested, I can send you a template experiment I created in Builder that collects data and outputs a nice txt file with all relevant data.

I want to love the built-in ET components, but boy do I hate the hdf5 file output…

Hi jgeller112,

if you want a csv file, I guess you can find it here: GitHub - aleksandernitka/EyeTracking_Tobii_PsychoPy_IOHUB: a short guide on how to use iohub when using Tobii Eye Trackers with PsychoPy Builder.

thank you @jgeller112, that would be a great help.

When using iohub, all the eye samples are saved to a hdf5 file in the data folder. The file name should have the same name as your other results files, but with a .hdf5 extension.

There is an example Python script in the Coder examples (psychopy/demos/coder/iohub/iodatastore/) that reads all events from a selected event table and saves them to a tab-delimited text file.

HDF5 files can also be viewed using the HDFView application.

To really make use of the events in the hdf5 file, your experiment will also need to save start and stop trial info so that the events can be split into trials when you read them. This is usually done by sending experiment messages to the hdf5 file during the experiment, or using a condition variables table and saving trial start and stop times to two columns of that table.
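As a minimal sketch of that idea (plain Python, not PsychoPy API; the function and variable names here are illustrative only): once you have each sample's timestamp and a list of trial start/stop times, splitting the samples into trials is just a time-window filter:

```python
def split_into_trials(samples, trial_windows):
    """Group sample tuples into trials by timestamp.

    samples: list of tuples whose first element is a time in seconds.
    trial_windows: list of (trial_index, start_time, stop_time).
    Returns {trial_index: [samples falling inside that window]}.
    """
    by_trial = {}
    for tindex, tstart, tstop in trial_windows:
        by_trial[tindex] = [s for s in samples if tstart <= s[0] <= tstop]
    return by_trial

# Fake timestamps standing in for real eye samples:
samples = [(0.1, 'a'), (0.5, 'b'), (1.2, 'c'), (2.0, 'd')]
windows = [(1, 0.0, 1.0), (2, 1.1, 2.5)]
trials = split_into_trials(samples, windows)
# trials[1] holds the first two samples, trials[2] the last two.
```

The parsing scripts discussed in this thread do essentially this, except the windows come from 'TRIAL_START' / 'TRIAL_END' messages stored in the hdf5 file.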

If you do not like working with the hdf5 file format, you can always read the samples yourself during the experiment and save your own results file.
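For the "save your own results file" route, the file-writing half is plain Python; only the tracker-reading call is PsychoPy-specific and is omitted here. A minimal sketch (file name and column names are just examples):

```python
import csv

def save_samples_txt(path, header, rows):
    """Write buffered eye samples to a tab-delimited text file."""
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f, delimiter='\t')
        writer.writerow(header)
        writer.writerows(rows)

# During the experiment you would append one tuple per sample pulled
# from the eye tracker; here two hand-made rows stand in for real data:
rows = [(0.016, 0.1, -0.2, 3.1), (0.033, 0.1, -0.1, 3.2)]
save_samples_txt('my_samples.txt', ['time', 'gaze_x', 'gaze_y', 'pupil'], rows)
```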

Thank you


@sol Thank you for this!

I’m working on a couple of other hdf5 file reading demos that I hope will be included with the next release in January:

  1. reads eye samples and groups them into trials using experiment Messages
  2. reads eye samples and groups them into trials using the condition variables table. This version also saves the trial condition variables as extra columns for each sample.

Both will work for reading hdf5 files created using the psychopy/demos/coder/iohub/eyetracking/gcCursor/ Coder demo.

For example, reading MonocularEyeSamples and grouping them into trials based on ‘TRIAL_START’ and ‘TRIAL_END’ experiment messages:

import sys, os
from psychopy import core
import psychopy.iohub
from psychopy.iohub.datastore.util import displayDataFileSelectionDialog, ExperimentDataAccessUtility

SAVE_EVENT_TYPE = 'MonocularEyeSampleEvent'
SAVE_EVENT_FIELDS = ['time', 'gaze_x', 'gaze_y', 'pupil_measure1', 'status']

def getTime():
    return core.getTime()

if __name__ == '__main__':
    # Select the hdf5 file to process.
    data_file_path = displayDataFileSelectionDialog(psychopy.iohub.module_directory(getTime))
    if data_file_path is None:
        print("File Selection Cancelled, exiting...")
        sys.exit(0)
    data_file_path = data_file_path[0]
    dpath, dfile = os.path.split(data_file_path)

    datafile = ExperimentDataAccessUtility(dpath, dfile)

    # Create a table of [trial_index, trial_start_time, trial_end_time] for each trial by
    # getting the time of the 'TRIAL_START' and 'TRIAL_END' experiment messages.
    trial_times = []
    trial_start_msgs = datafile.getEventTable('MessageEvent').where('text == b"TRIAL_START"')
    for mix, msg in enumerate(trial_start_msgs):
        trial_times.append([mix + 1, msg['time'], 0])

    trial_end_msgs = datafile.getEventTable('MessageEvent').where('text == b"TRIAL_END"')
    for mix, msg in enumerate(trial_end_msgs):
        trial_times[mix][2] = msg['time']

    scount = 0

    # str prototype used to select samples within a trial time period
    sample_select_proto = "(time >= %f) & (time <= %f)"

    # Open a file to save the tab-delimited output to.
    output_file_name = "%s.txt" % (dfile[:-5])
    with open(output_file_name, 'w') as output_file:
        print('Writing Data to %s:\n' % (output_file_name))

        # Save header row to file
        column_names = ['TRIAL_INDEX', ] + SAVE_EVENT_FIELDS
        output_file.write('\t'.join(column_names) + '\n')

        for tindex, tstart, tstop in trial_times:
            trial_samples = datafile.getEventTable(SAVE_EVENT_TYPE).where(sample_select_proto % (tstart, tstop))
            # Save a row for each eye sample within the trial period
            for sample in trial_samples:
                sample_data = [str(sample[c]) for c in SAVE_EVENT_FIELDS]
                output_file.write('\t'.join([str(tindex)] + sample_data) + '\n')
                scount += 1
                if scount % 100 == 0:
                    # Print a '.' every 100 samples as a progress indicator.
                    print('.', end='', flush=True)

    print("\n\nWrote %d samples." % scount)

Hi @sol , so do I get it right that I first have to set up a hdf5 file with events and then run the experiment? Because I sadly still don't understand how to get my eye tracking data with the script I created in the Builder. When I run my experiment with the Builder, I get a hdf5 file, but it does not contain the data that I want. When I run the experiment you gave me, I get a text data file, which is nice, but I don't know what exactly I have to change to get the pupil data that I want. If I get it right, I can set the start and end points from the routines that I have created in the Builder? This is something I have to do, but I don't know how. My question is: can I get the pupil data just with a PsychoPy experiment made in the Builder, or do I need the Coder too? I don't have any experience in Python.
Maybe you can have a look at my experiment. I uploaded it here:
It is very long because I created a new routine in the Builder for every sound file, because I hoped that this would give me a better output for the pupil data. It is not pretty, but I just need the eye tracking results.

Thanks for your help

Hi Timon,

You should be able to run your Builder experiment and then run a python script that processes / parses the hdf5 file that was saved.

The eye samples should be automatically saved to the hdf5 file. Tobii will save BinocularEyeSample events that will be in the BinocularEyeSampleEvent hdf5 table.

To be able to split the saved sample data into trials, you will need to use a Code component in the experiment to write a message to the hdf5 file indicating when each trial starts and stops.

To save experiment messages to the hdf5 file, you need to send them from a code component in Builder. For example, at the start of a trial routine you could send:

ioServer.sendMessageEvent(text='TRIAL_START')

and at the end:

ioServer.sendMessageEvent(text='TRIAL_END')
To use the hdf5 file you currently need to have a script that can parse it, like an adapted version of the one included above.
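Adapting the parsing script posted earlier for binocular data is mostly a matter of changing the two constants at its top. A sketch of what that could look like (the field names below assume the standard iohub BinocularEyeSampleEvent columns; verify them against your own hdf5 file, e.g. in HDFView, before relying on them):

```python
# Hypothetical constants for a binocular version of the earlier parsing script.
SAVE_EVENT_TYPE = 'BinocularEyeSampleEvent'
SAVE_EVENT_FIELDS = ['time',
                     'left_gaze_x', 'left_gaze_y', 'left_pupil_measure1',
                     'right_gaze_x', 'right_gaze_y', 'right_pupil_measure1',
                     'status']
```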

Would you like me to post a version of one of the Builder demos that writes out trial start and stop messages, as well as a version of the above parsing script that works with BinocularEyeSamples?

Hi @sol,

so did I get it right that I first run my experiment, and after my experiment has created the hdf5 file, I use it as input for the script that can parse it?
I just tried using io.sendMessageEvent(text=“TRIAL_START”) and io.sendMessageEvent(text=“TRIAL_END”) at the beginning and end of every trial routine in the Coder, but sadly the experiment didn't run.

The version of the Builder demo that writes start and stop messages, as well as the parsing script, would be nice.

Thank you

Hi Timon,

Please find an example Builder project attached. I’ve added a code component, send_trial_msgs, to the trial routine that sends a start and a stop trial message to the hdf5 file. Update the project’s EyeTracking properties to set the sampling rate for your Tobii; the one I have was running at 60 Hz.

eyetracking.psyexp (20.7 KB)

After running the project there will be a hdf5 file in the data folder.

Open the attached script in Coder and run it. Select the hdf5 file that was saved by the experiment. It will save a .txt file in the same directory as the script. (2.7 KB)

The txt file will have a header row and then one row for each sample that occurred between the TRIAL_START and TRIAL_END messages.
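If you then want to work with that txt file in Python rather than in a spreadsheet, loading it and regrouping the rows by trial is a few lines. A sketch (the column name TRIAL_INDEX matches the header the parsing script writes; the demo file below is hand-made stand-in data):

```python
import csv
from collections import defaultdict

def load_by_trial(path):
    """Read a tab-delimited sample file and group rows by TRIAL_INDEX."""
    trials = defaultdict(list)
    with open(path, newline='') as f:
        for row in csv.DictReader(f, delimiter='\t'):
            trials[int(row['TRIAL_INDEX'])].append(row)
    return trials

# Tiny hand-made file standing in for the script's real output:
demo = ("TRIAL_INDEX\ttime\tpupil_measure1\n"
        "1\t0.1\t3.2\n"
        "1\t0.2\t3.3\n"
        "2\t0.5\t3.1\n")
with open('demo_samples.txt', 'w') as f:
    f.write(demo)

trials = load_by_trial('demo_samples.txt')
```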

Thank you


Hi @sol,

thank you for your help. I tried the script today and it seems that everything runs perfectly. I could not have done it without your help.


Hi @sol,

I have another question. My experiment runs perfectly thanks to your help. But I don't understand why there are always 0.5 seconds of missing data every time the experiment goes from one item to another. In the data file there are just 0.5 seconds where the eye tracker does not give any data. I ran your experiment as well, and there it was the same thing. Maybe you know what is going on.

Thank you, Timon

Hi @timon,

If you mean that there is a gap in samples between the end of one trial and the start of the next trial, that is expected, since the eye tracker recording is starting / stopping each trial and it takes a bit of time to finish one trial and start the next (stopping and starting eye tracker recording itself can take time).

Looking at the Python code generated for the Builder demo I provided, each trial ends up doing things in this order:

  1. Send “TRIAL_START” message
  2. Start eye tracker recording
  3. prepare / draw trial stim code
  4. trial logic …
  5. Stop eye tracker recording
  6. End of trial code
  7. Send “TRIAL_END” message
  8. repeat 1 - 7 for each trial

If you want to start recording after calibration and continue recording until the end of the trial block, you can use a code component to start and then another to stop eye tracker recording. Please see the attached example:

eyetracking_v2.psyexp (22.7 KB)

It starts eye tracker recording in instr->code->End Routine and stops recording after the trials have run in end_recording->code_2->Begin Routine.

You will still have some delay between finishing one trial and starting the next, but the ITI should be shorter since the eye tracker is not starting / stopping recording each trial.

Hi @sol ,
I tried your experiment today and there was no delay; that's really good. The problem is that we don't work with a loop, because we want to control the amount of time that passes between the trials and the point where one picture ends and a new one starts. So I tried to apply your experiment to ours, but it didn't work. Do you have an example of how to avoid the gap in samples between the trials if we always use a new trial in the PsychoPy Builder? Probably the way I created the experiment is not the best, but it works, except for the gaps in samples every time a new trial starts.
Eyetracker_Psychopy_experiment_10-10-21.psyexp (34.2 KB)

I uploaded an example; maybe this helps to understand my problem. Maybe you know a better way. :slight_smile:

Thanks for the help,

Hi @timon,

If each of the routines in your project is a trial, try putting a code component to start recording right after calibration and another code component to end recording at the end of the experiment.

I’ve attached a version of your project file with this added; I was not able to actually test it because I do not have the needed image or sound stim.

Eyetracker_Psychopy_experiment_10-10-21_v2.psyexp (37.2 KB)

Thank you

Hi @sol ,

I already tried that, because I wanted to create the same experiment that you did with the loop. Sadly this did not work for me. Maybe I can work with the loop as well, but I have not figured it out yet. Do you have another idea how I can avoid the gap in samples?

Thank you,