Simulating eyetrackers

Hi,

I’m developing an eyetracker experiment that will be run on a computer with a Tobii eyetracker. I’ve already put together a few other PsychoPy experiments involving a Tobii eyetracker, but this one will be a bit more involved, since the calibration procedure needs to be customized to make it more infant-friendly. I’ll also have to make it possible for the researcher to monitor participant gaze. On top of that, with the pandemic going on, I need to do as much work as possible from home, where I don’t have an eyetracker. Ideally I would therefore do most of the development without an eyetracker, so the ability to simulate one would be very useful.

When looking at the docs on ioHub’s eyetracker support, I noticed that the SR Research implementation’s device settings include a simulation_mode: [True/False] setting. The settings for Tobii and GazePoint don’t, however.

I’m wondering, is it possible to enable a ‘simulation mode’ in Tobii/GazePoint configurations as well? E.g. by editing something in the .yaml file, or by doing something in Python code after importing the configurations from the .yaml file. If this isn’t possible, I’d appreciate any tips on how I might achieve something similar. I could of course just insert a mouse component and use its x/y coordinates, but that would complicate things once I start tweaking the calibration procedure and trying to make sure it works as expected.
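
To be concrete, the kind of mouse fallback I mean is just something along these lines (purely illustrative, not tied to any particular experiment):

# Crude fallback: treat the mouse position as 'gaze' coordinates.
from psychopy import visual, event

win = visual.Window(fullscr=False, units='pix')
mouse = event.Mouse(win=win)

for _ in range(600):                 # poll for ~10 s at 60 Hz
    gaze_x, gaze_y = mouse.getPos()  # stand-in for an eyetracker gaze position
    # ... drive any gaze-contingent logic with (gaze_x, gaze_y) here ...
    win.flip()
win.close()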

I’m wondering, is it possible to enable a ‘simulation mode’ in Tobii/GazePoint configurations as well?

Not in the same way EyeLink does. The EyeLink mouse simulation mode lets you use the mouse on the EyeLink Host PC to simulate eye motion / events, which means it feeds the ‘raw’ mouse data through the EyeLink calibration. However, this also means the EyeLink Host PC hardware is required.

I’ve just started working on adding a ‘mouse’-based eye tracker to iohub that generates iohub eye samples and events from mouse data; it does not do any eye tracker specific calibration, nor does it need any eye tracker hardware to be connected. It will let you use the mouse to simulate eye movements and events during the experiment. This will be useful for general experiment design and testing when you do not have access to the eye tracking hardware, but it will not help with developing custom calibration routines like you require.
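
Once it is in place, the idea is that you would access it through the same common eye tracker interface as real hardware. A minimal sketch, assuming the device ends up being configurable via launchHubServer and named ‘tracker’ (the exact device path below is a guess and may well change):

# Sketch only: the device path / settings here are assumptions, not final.
from psychopy import visual
from psychopy.iohub import launchHubServer

iohub_config = {'eyetracker.hw.mouse.EyeTracker': {'name': 'tracker'}}
win = visual.Window(fullscr=False, units='pix')
io = launchHubServer(**iohub_config)
tracker = io.devices.tracker

tracker.setRecordingState(True)
for _ in range(300):                      # poll for ~5 s at 60 Hz
    gaze = tracker.getLastGazePosition()  # mouse-driven 'gaze' position, or None
    if gaze is not None:
        print(gaze)
    win.flip()
tracker.setRecordingState(False)
io.quit()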

I’ve just started working on adding a ‘mouse’-based eye tracker to iohub that generates iohub eye samples and events from mouse data; it does not do any eye tracker specific calibration, nor does it need any eye tracker hardware to be connected.

Cool! Actually, after creating this thread I was looking at the codebase for the eyetrackers. I thought it might be possible to create a mock eyetracker by essentially copying the code in the tobii eyetracker.py, replacing all method bodies so that methods return mocked values or mouse coordinates where appropriate. Maybe this is similar to what you’ve started working on? Anyway, I figured that since the interface would then be the same, the ‘mock tracker’ could be used with the calibration procedure described by tobiiCalibrationGraphics.py. That would mean I could make a copy of that script and create a custom calibration procedure, using the ‘mock’ eyetracker during development. But this is all very hypothetical, since I don’t know if I will have enough time in the project I’m working on to do this, or whether we’ll end up using something other than PsychoPy that already has infant-friendly calibration.
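
Just to make the idea concrete, I pictured something roughly like this (all of it made up for illustration; only the method names follow iohub’s common eye tracker interface):

# Hypothetical sketch of 'copy tobii's eyetracker.py and mock the methods'.
class MockTobiiEyeTracker:
    def __init__(self, mouse):
        self._mouse = mouse          # e.g. a psychopy event.Mouse instance
        self._recording = False

    def setConnectionState(self, enable):
        return True                  # pretend the hardware is always reachable

    def runSetupProcedure(self):
        # Would drive TobiiPsychopyCalibrationGraphics with mocked
        # calibration results instead of data from tobii_research.
        return True

    def setRecordingState(self, recording):
        self._recording = recording
        return self._recording

    def getLastGazePosition(self):
        # The mouse position stands in for the latest gaze sample.
        return self._mouse.getPos() if self._recording else None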

If I do find the time to attempt the above, do you think it’s worth a try, or do you see an obvious obstacle that would make it a waste of time?

(By the way, the documentation/comments for the eyetracker/calibration code are really helpful; that, together with the sensible method naming and separation of concerns, is a large part of why I even thought the above might be feasible.)

This is very similar to what I’ve started to work on; you can see the changes made so far here. Note that this is based on, and targeted at, the DEV branch of psychopy. It is also brand-new code right now, tested by no one but me.

One major difference is that I’m thinking the mousegaze calibration screen will just end up being a static screen saying “No calibration needed for the mouse, press space to continue…” or something. Right now it just prints something to the console and continues on. :wink:
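
Something along these lines is what I have in mind for that screen (just a sketch, not the actual implementation):

# Placeholder 'calibration' screen for the mouse tracker (illustrative only).
from psychopy import visual, event

win = visual.Window(fullscr=False, color=[0, 0, 0])
msg = visual.TextStim(
    win, text="No calibration needed for the mouse, press space to continue...")
msg.draw()
win.flip()
event.waitKeys(keyList=['space'])  # wait for the researcher to confirm
win.close()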

I can also now see the benefit of what you want to do, which is to use the mouse to drive the calibration graphics logic (and any changes you make) so you can see how it looks visually.

The issue with this idea is that it will not be easy to implement in a general-purpose way, given how iohub implements calibration graphics. Each eye tracker has separate calibration logic and also completely separate graphics code. This is mainly because each eye tracker maker ‘handshakes’ with the experiment calibration graphics in a very different way, so it was easiest / quickest to just have completely separate calibration gfx code.

This could be improved by developing a level of abstraction between the eye tracker hardware calibration handshaking and the experiment calibration graphics, one that would be used by all eye trackers in iohub. Then the iohub graphics code could be made much more modular. This is not a small job though, so I’m not sure if or when this could be changed.
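
Purely to illustrate the kind of abstraction I mean (none of these names exist in iohub today), the shared layer might look something like this:

# Hypothetical interface each tracker's calibration handshaking could implement,
# so one calibration graphics module could drive all supported eye trackers.
from abc import ABC, abstractmethod

class CalibrationHandshake(ABC):
    @abstractmethod
    def start_calibration(self, point_count):
        """Tell the tracker that a calibration with point_count targets is starting."""

    @abstractmethod
    def collect_point(self, x, y):
        """Have the tracker sample data for the target currently shown at (x, y)."""

    @abstractmethod
    def finish_calibration(self):
        """Compute / apply the calibration and return whether it succeeded."""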

By the way, the documentation/comments for the eyetracker/calibration code are really helpful,

Thank you for the positive feedback. Developer docs for the iohub common eye tracker interface are an area that needs some improvement (well, creation actually).

This is very similar to what I’ve started to work on; you can see the changes made so far here. Note that this is based on, and targeted at, the DEV branch of psychopy. It is also brand-new code right now, tested by no one but me.

That’s very quick work. I’ve only looked at the code so far (I’ll probably actually clone it and try things out next week), but it looks like you’ve already done a lot. Again, I don’t know how much time I’ll be able to spend on this, but if I get the chance I’ll try to report any issues I bump into.

The issue with this idea is that it will not be easy to implement in a general-purpose way, given how iohub implements calibration graphics. Each eye tracker has separate calibration logic and also completely separate graphics code. This is mainly because each eye tracker maker ‘handshakes’ with the experiment calibration graphics in a very different way, so it was easiest / quickest to just have completely separate calibration gfx code.

Right, that makes sense. I’d noticed that the code is structured very differently for the different eyetrackers, with tobii being the only one with a separate file for the calibration graphics, if I understood correctly.

This could be improved by developing a level of abstraction between the eye tracker hardware calibration handshaking and the experiment calibration graphics, one that would be used by all eye trackers in iohub. Then the iohub graphics code could be made much more modular. This is not a small job though, so I’m not sure if or when this could be changed.

Yes, that sounds like it would require a lot of effort and fairly intimate knowledge of all three types of eyetrackers (or really eyetracker interfaces, I guess). If I get the time, I’ll see if it’s possible to modify the mouse tracker you created into a ‘tobii-like’ tracker which could be used with the tobii calibration procedure, though unfortunately that probably wouldn’t be very useful for PsychoPy in general.

Hi again,

After spending a fair bit of time on setting things up, I finally got the eyetracker simulation going.

I was able to run all the iohub demos. The mouse eyetracker works great, and I think this is something that could really help a lot of people get going with designing eyetracker experiments in PsychoPy :slight_smile:

I had a go at creating a customized version of the mouse eyetracker that specifically mocks tobii’s interface for calibration: https://github.com/datalowe/psychopy/tree/dev/psychopy/iohub/devices/eyetracker/hw/mouse_mocktobii (I realize now that using the ‘dev’ branch rather than starting a new branch specifically for this maybe wasn’t the best choice, as it complicates doing PRs for any minor fixes I make.)

The calibration procedure runs, so at least I can use it to see what any changes will look like. It doesn’t do anything fancy like e.g. storing gaze coordinates, though. Also, there’s an issue where, if I try to ‘gaze around’ with the mouse eyetracker by keeping the right button pressed down and moving the cursor, I get this:

** WARNING: Mouse Tap Disabled due to timeout. Re-enabling…: 4294967294

I’ve spent a lot of time navigating the code base, so if you have an idea right away about what’s causing the issue then I’d be happy to hear it.

Thanks for testing it out and using it in such a smart way. :slight_smile:

Now that I see what you have done with the MOCK tobii, it would probably be useful to other Tobii users who do not have access to the eye tracker hardware. I really like how you were able to just import the existing TobiiPsychopyCalibrationGraphics and work from that instead of creating a totally new graphics class. Nice.

This looks like it is coming from the iohub macOS mouse back end. How often is it happening when you use the Dev branch?

I’m going to be changing the macOS mouse / keyboard implementation to use a callback thread on the iohub server, instead of polling as is currently done, and that will fix this issue. It may be a couple of weeks before I get to that, though.

Thanks again, very cool.

Great to hear some feedback, thanks :smiley:

I refactored ‘eyetracker.py’ so that this MOCK tobii now inherits from the default mouse tracker. I’m thinking that if what I’ve done ends up being merged into the main project, it’s best to keep duplicated code to a minimum. This does mean I’m being a bit unorthodox, relative to the iohub code’s general structure, in that I do a relative import from another eyetracker’s module. I hope this is fine, since the inheritance pattern makes sense here.
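
In case it helps, the pattern is roughly the following (the relative module path here just reflects my local layout and is only meant as an illustration):

# Rough outline: the mock Tobii tracker subclasses the default mouse tracker
# instead of duplicating its sample-generation code.
from ..mouse.eyetracker import EyeTracker as MouseEyeTracker  # path is illustrative

class EyeTracker(MouseEyeTracker):
    """Mouse-driven tracker that mimics the Tobii calibration handshake."""

    def runSetupProcedure(self):
        # Reuses TobiiPsychopyCalibrationGraphics, feeding it mocked
        # calibration results rather than data from tobii_research.
        ...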

I also made some minor updates to ‘mock_tobiiwrapper.py’, but those shouldn’t be a problem. I did remove the explicit inheritance from object for the wrapper class itself, even though the original inherits explicitly, since this is the more ‘modern’ Python way. But if it might cause backwards-compatibility issues for whoever’s still using Python 2, I can of course add the inheritance back in.

This looks like it is coming from the iohub macOS mouse back end. How often is it happening when you use the Dev branch?

It only happens when I’m running the mocked calibration procedure; I haven’t seen it otherwise. I found that the message originates from here, but I don’t know what I’m doing to cause it. Maybe there’s something about the calibration, or the calibration configuration, that leads to trouble? Specifically, I’m running the gcCursor experiment. The only thing I’ve changed (locally) is the related .yaml file, to this:

monitor_devices:
    - Display:
        name: display
        reporting_unit_type: pix
        color_space: rgb255
        device_number: 0
        physical_dimensions:
            width: 590
            height: 340
            unit_type: mm
        default_eye_distance:
            surface_center: 500
            unit_type: mm
        psychopy_monitor_name: default

    - Keyboard:
        name: keyboard

    - Mouse:
        name: mouse

    - Experiment:
        name: experimentRuntime

# MouseGaze Simulated Eye Tracker Config (uncomment below device config to use)
    - eyetracker.hw.mouse_mocktobii.EyeTracker:
        enable: True
        name: tracker
        controls:
            move: RIGHT_BUTTON
            blink: [LEFT_BUTTON, RIGHT_BUTTON]
            saccade_threshold: 0.5
        monitor_event_types: [ MonocularEyeSampleEvent, FixationStartEvent, FixationEndEvent, SaccadeStartEvent, SaccadeEndEvent, BlinkStartEvent, BlinkEndEvent]
        calibration:
            # THREE_POINTS,FIVE_POINTS,NINE_POINTS
            type: FIVE_POINTS

            # Should the target positions be randomized?
            randomize: True

            # auto_pace can be True or False. If True, the eye tracker will
            # automatically progress from one calibration point to the next.
            # If False, a manual key or button press is needed to progress to
            # the next point.
            auto_pace: True

            # pacing_speed: the number of sec.msec that a calibration point
            # should be displayed before moving onto the next point. Only
            # used when auto_pace is set to True.
            pacing_speed: 1.5

            # screen_background_color specifies the r,g,b background color to
            # set the calibration, validation, etc, screens to.
            # Each element of the color should be a value between 0 and 255.
            screen_background_color: [128,128,128]

            # The associated target attribute properties can be supplied
            # for the fixation target used during calibration.
            # Sizes are in pixels, colors in rgb255 format:
            target_attributes:
                outer_diameter: 35
                outer_stroke_width: 2
                outer_fill_color: [128,128,128]
                outer_line_color: [255,255,255]
                inner_diameter: 7
                inner_stroke_width: 1
                inner_color: [0,0,0]
                inner_fill_color: [0,0,0]
                inner_line_color: [0,0,0]
                animate:
                    enable: True
                    movement_velocity: 750.0  # 750 pix / sec
                    expansion_ratio: 3.0  # expands to 3 x the starting size
                    expansion_speed: 45.0  # expands at 45.0 pix / sec
                    contract_only: True
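
(As a side note, a quick way I sanity-check that the edited file still parses, and that the calibration settings are what I think they are, is just loading it with PyYAML; the file name here is a placeholder, adjust as needed:)

# Sanity check of the edited iohub config file.
import yaml

with open('iohub_config.yaml') as f:
    cfg = yaml.safe_load(f)

# Print the calibration block of any eye tracker device entry.
for device in cfg['monitor_devices']:
    for device_path, settings in device.items():
        if device_path.startswith('eyetracker.'):
            print(device_path, settings.get('calibration', {}))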

By the way, using h5py, pandas and seaborn, I put together a short script that I’m using in a Jupyter notebook to get a quick overview of the recorded gaze:

import h5py
import numpy as np
import pandas as pd
import seaborn as sns

fpath = '/path/to/file'  # path to the iohub .hdf5 data file

# Pull all monocular eye samples out of the iohub HDF5 data store.
with h5py.File(fpath, 'r') as h5f:
    dset = h5f['data_collection']['events']['eyetracker']['MonocularEyeSampleEvent']
    darr = np.array(dset)  # copy into memory so the file can be closed

df = pd.DataFrame(darr)

# Quick-and-dirty gaze plot, colored by sample time.
sns.scatterplot(x='gaze_x', y='gaze_y', hue='time', data=df)

It’s not pretty, but this way you can get something like this, just to see that things are working correctly. (Maybe you’re already doing something similar but more refined.)

Based mostly on these plots, the data from the actual demos (i.e. not calibration) seem to be correct, so whatever is happening with the mouse during calibration doesn’t seem to affect experiments.

We should definitely consider that, and what you have done looks good to me. :slight_smile:

It seems to be an issue I introduced into Dev. I’ll work on fixing it by the end of the month. I think it can just be ignored for now; it is not causing a crash or anything from what I can tell.

Nice.

iohub does not have any data visualization functionality built into it. Currently, the only hdf5-file-reading code included with iohub is in iohub.datastore.util.py, and it is not specific to eye tracking in any way. Frankly, looking at how easy it is for you to access the file using h5py, I’m thinking that when we do rework the file-reading code, switching to h5py for reading should be on the todo list.
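
Something as small as a thin wrapper around what you did would probably already cover the common case (sketch only; the group path is the one from your script):

# Load one iohub eye tracker event table as a structured numpy array.
import h5py
import numpy as np

def load_eyetracker_events(hdf5_path, event_type='MonocularEyeSampleEvent'):
    with h5py.File(hdf5_path, 'r') as f:
        return np.array(f['data_collection']['events']['eyetracker'][event_type])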

Take care.