I’m looking at an experiment with several collaborators who want to record eye data to monitor vigilance, but scan time is very limited and they don’t want to do a full eye-tracking calibration. I’ve poked around the eye-tracking support in ioHub and pylink (these are EyeLink devices at all of our sites). I know the camera video must be leaving the tracker, since it is displayed on the EyeLink Host PC (which runs a proprietary OS), but I’m not sure whether that video is part of the UDP stream the device emits over the link, or how to get at it.
Is anyone <ahem @Michael > familiar with examples where the raw video has been accessed, as opposed to the calibrated gaze position? I don’t see anything in the docs, or in the notes from previous workshops.
In the pylink API docs from SR, I do see a few things that hint this should be possible: e.g. there is an IN_IMAGE_MODE flag indicating that the tracker is displaying the grayscale camera image, and you can call broadcastOpen() to “allow a third computer to listen in on a session between the eye tracker and a controlling remote machine”. But I’m not able to put it all together into actual video (as opposed to samples of gaze position).
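To make it concrete, the closest I’ve gotten conceptually is the EyeLinkCustomDisplay callback route, where pylink hands the camera image to your graphics code one row at a time via draw_image_line() during camera setup. Below is only a sketch of what I imagine, not tested code: the Host PC address, the frame-assembly details, and the idea that this is a sane way to “record” the image (rather than just display it) are all my assumptions, and it doesn’t touch broadcastOpen() at all.

```python
import numpy as np
import pylink


class CameraImageTap(pylink.EyeLinkCustomDisplay):
    """Custom display whose only job is to collect the eye-camera image
    rows that pylink passes to draw_image_line(). Calibration-target and
    keyboard callbacks are left as no-ops in this sketch."""

    def __init__(self):
        pylink.EyeLinkCustomDisplay.__init__(self)
        self.palette = None   # filled in by set_image_palette()
        self._rows = []       # accumulates rows of the current frame
        self.frames = []      # completed frames as (height, width, 3) uint8 arrays

    def setup_image_display(self, width, height):
        self._rows = []
        return 1

    def set_image_palette(self, r, g, b):
        # r, g, b are parallel lists giving the palette for the 8-bit image
        self.palette = np.column_stack([r, g, b]).astype(np.uint8)

    def draw_image_line(self, width, line, totlines, buff):
        # buff is one row of palette indices; map it through the palette
        if self.palette is None:
            return
        row = self.palette[np.asarray(buff, dtype=np.uint8)]
        self._rows.append(row)
        if line == totlines:  # last row of this frame
            self.frames.append(np.stack(self._rows))
            self._rows = []

    def exit_image_display(self):
        pass

    def get_input_key(self):
        return []  # no keyboard handling in this sketch


tracker = pylink.EyeLink("100.1.1.1")  # assumed default Host PC address
tap = CameraImageTap()
pylink.openGraphicsEx(tap)

# While the tracker is in Camera Setup / image mode, frames should accumulate
# in tap.frames. Key handling isn't implemented here, so you'd exit Camera
# Setup from the Host PC to get doTrackerSetup() to return.
tracker.doTrackerSetup()
print(f"captured {len(tap.frames)} camera frames")
```

If that callback really is the only way the image crosses the link, then presumably the frame rate and resolution are whatever the Host PC sends for the setup preview, which may or may not be good enough for vigilance monitoring.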
An alternative is to use a DVI-to-USB capture device and screen-capture the Host PC display, although getting it from the UDP stream would be preferable. Ideally this will be synced to the start of a resting-state scan, so aligning to the scanner trigger time is important (though not impossible to do with the screen-capture approach, within a reasonable margin of timing error), as in the sketch below.
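For that fallback, what I have in mind is roughly the following: grab frames from the capture device with OpenCV and log a per-frame timestamp relative to the trigger, so the video can be aligned to scan onset afterwards. The device index, filenames, recording duration, and how the trigger time is obtained are all placeholders.

```python
import csv
import time

import cv2  # OpenCV, reading from the DVI-to-USB capture device

CAPTURE_INDEX = 0  # placeholder: whichever index the capture device shows up as

cap = cv2.VideoCapture(CAPTURE_INDEX)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(
    "eyelink_screen.avi", cv2.VideoWriter_fourcc(*"MJPG"), 30.0, (width, height)
)

# Placeholder: however the scan trigger arrives (serial port, keyboard '5', etc.),
# record its wall-clock time so frames can be referenced to scan onset.
trigger_time = time.time()

with open("frame_times.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["frame", "seconds_since_trigger"])
    frame_idx = 0
    t_end = trigger_time + 10.0  # record 10 s for this sketch
    while time.time() < t_end:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
        log.writerow([frame_idx, time.time() - trigger_time])
        frame_idx += 1

cap.release()
writer.release()
```

The timing error there is basically the capture device latency plus however precisely the trigger time is grabbed, which I think is acceptable for vigilance monitoring even if it wouldn't be for gaze analysis.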
Thanks in advance for any help or suggestions! Sorry, my google-fu is weak on this one.