Great to hear some feedback, thanks
I refactored ‘eyetracker.py’ so that the mock Tobii tracker now inherits from the default mouse tracker. My thinking is that if this ever gets merged into the main project, it’s best to keep duplicated code to a minimum. This does make the module a bit unorthodox relative to the general structure of the iohub code, in that it does a relative import from another eye tracker’s module. I hope that’s fine, since the inheritance pattern makes sense here.
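Concretely, the structure is something like this (the module path and class names are simplified from my local tree, so treat it as a sketch rather than the exact code):

# eyetracker.py in the mouse_mocktobii package. The relative import into
# the default mouse tracker's package is the unorthodox part.
from ..mouse.eyetracker import EyeTracker as MouseGazeEyeTracker

class EyeTracker(MouseGazeEyeTracker):
    """Mock Tobii tracker: reuses the MouseGaze implementation and only
    overrides the Tobii-specific parts (e.g. the calibration routine)."""
    pass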
I also made some minor updates to ‘mock_tobiiwrapper.py’, but those shouldn’t be a problem. I did remove the wrapper class’s explicit inheritance from object, even though the original has it, since the implicit form is the more ‘modern’ Python 3 way. But if that might cause backwards-compatibility issues for whoever’s still using Python 2, I can of course add the inheritance back in.
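Just to be explicit about what I mean (the class names here are only illustrative):

# In Python 3 these two declarations are equivalent; in Python 2 only the
# explicit form creates a new-style class.
class WrapperOld(object):  # original style: explicit inheritance
    pass

class WrapperNew:  # updated style: implicit, the Python 3 idiom
    pass

assert isinstance(WrapperNew(), object)  # still holds in Python 3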
This looks like it is coming from the iohub macOS mouse back end. How often is it happening when you use the Dev branch?
It only happens when I’m running the mocked calibration procedure; I haven’t seen it otherwise. I found that the message originates from here, but I don’t know what I’m doing to cause it. Maybe the calibration procedure, or its configuration, leads to trouble somehow? Specifically, I’m running the gcCursor experiment. The only thing I’ve changed (locally) is the related .yaml file, to this:
monitor_devices:
    - Display:
        name: display
        reporting_unit_type: pix
        color_space: rgb255
        device_number: 0
        physical_dimensions:
            width: 590
            height: 340
            unit_type: mm
        default_eye_distance:
            surface_center: 500
            unit_type: mm
        psychopy_monitor_name: default
    - Keyboard:
        name: keyboard
    - Mouse:
        name: mouse
    - Experiment:
        name: experimentRuntime
    # MouseGaze Simulated Eye Tracker Config (uncomment below device config to use)
    - eyetracker.hw.mouse_mocktobii.EyeTracker:
        enable: True
        name: tracker
        controls:
            move: RIGHT_BUTTON
            blink: [LEFT_BUTTON, RIGHT_BUTTON]
            saccade_threshold: 0.5
        monitor_event_types: [MonocularEyeSampleEvent, FixationStartEvent, FixationEndEvent, SaccadeStartEvent, SaccadeEndEvent, BlinkStartEvent, BlinkEndEvent]
        calibration:
            # THREE_POINTS, FIVE_POINTS, NINE_POINTS
            type: FIVE_POINTS
            # Should the target positions be randomized?
            randomize: True
            # auto_pace can be True or False. If True, the eye tracker will
            # automatically progress from one calibration point to the next.
            # If False, a manual key or button press is needed to progress
            # to the next point.
            auto_pace: True
            # pacing_speed: the number of sec.msec that a calibration point
            # should be displayed before moving on to the next point. Only
            # used when auto_pace is set to True.
            pacing_speed: 1.5
            # screen_background_color specifies the r,g,b background color
            # for the calibration, validation, etc., screens.
            # Each element of the color should be a value between 0 and 255.
            screen_background_color: [128, 128, 128]
            # The associated target attribute properties can be supplied
            # for the fixation target used during calibration.
            # Sizes are in pixels, colors in rgb255 format:
            target_attributes:
                outer_diameter: 35
                outer_stroke_width: 2
                outer_fill_color: [128, 128, 128]
                outer_line_color: [255, 255, 255]
                inner_diameter: 7
                inner_stroke_width: 1
                inner_color: [0, 0, 0]
                inner_fill_color: [0, 0, 0]
                inner_line_color: [0, 0, 0]
                animate:
                    enable: True
                    movement_velocity: 750.0  # 750 pix / sec
                    expansion_ratio: 3.0  # expands to 3 x the starting size
                    expansion_speed: 45.0  # expands at 45.0 pix / sec
                    contract_only: True
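In case it helps with reproducing: as far as I understand, the same device config can also be passed to launchHubServer directly, without going through the demo’s .yaml (the device path is the one from the config above; the rest is a minimal sketch):

# Minimal programmatic launch of iohub with the mock tracker; assumes the
# mouse_mocktobii device path from the yaml above is on iohub's search path.
from psychopy.iohub import launchHubServer

iohub_config = {
    'eyetracker.hw.mouse_mocktobii.EyeTracker': {
        'name': 'tracker',
        'calibration': {'type': 'FIVE_POINTS', 'auto_pace': True},
    }
}
io = launchHubServer(**iohub_config)
tracker = io.devices.tracker
tracker.runSetupProcedure()  # runs the (mocked) calibration
io.quit()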
By the way, using h5py, pandas and seaborn, I put together a short script (which I run in a Jupyter notebook) to get a quick overview of the recorded gaze:
import h5py
import numpy as np
import pandas as pd
import seaborn as sns

fpath = '/path/to/file'

# Read the monocular eye samples from the iohub HDF5 file into a DataFrame.
with h5py.File(fpath, 'r') as h5f:
    dset = h5f['data_collection']['events']['eyetracker']['MonocularEyeSampleEvent']
    df = pd.DataFrame(np.array(dset))

# Color the samples by time, so the direction of the gaze path is visible.
sns.scatterplot(x='gaze_x', y='gaze_y', hue='time', data=df)
It’s not pretty, but this way you get something like this, just to see that things are working correctly. (Maybe you’re already doing something similar but more refined.)
Based mostly on these plots, the data from the actual demos (i.e. not calibration) seem to be correct, so whatever is happening with the mouse during calibration, it doesn’t seem to affect the experiments themselves.