EyeLink - Saving raw video

Hi all,

I’m looking at an experiment with several collaborators who want to record eye data to monitor vigilance, but timing is very limited and they don’t want to do full eye-tracking calibration. I’ve poked around the eye-tracking support in ioHub and pylink (these are EyeLink devices at all our sites). I know that video must be leaving the device, since it is shown on the EyeLink Host computer (proprietary OS), but I’m not sure whether that’s part of the UDP stream the device is emitting, or how to get at it.

Is anyone <ahem @Michael :wink: > familiar with examples where the raw video has been accessed, as opposed to the calibrated gaze position? I don’t see anything in the docs, or in notes from previous workshops.

In the pylink API docs from SR, I do see a few things that hint this should be possible: e.g. the tracker mode can be IN_IMAGE_MODE when it is displaying the grayscale camera image, and you can use broadcastOpen to “allow a third computer to listen in on a session between the eye tracker and a controlling remote machine”, but I’m not able to put it all together into video (vs. samples of gaze position).
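For reference, about all I can piece together from those docs is checking the tracker mode over the link, e.g. (a rough, untested sketch; the method and constant names are as they appear in the SR docs, and the address is the usual Host PC default):

```python
import pylink

# Connect to the Host PC over the direct ethernet link.
el = pylink.EyeLink("100.1.1.1")

# getCurrentMode() returns a bitmask of mode flags.
mode = el.getCurrentMode()
if mode & pylink.IN_IMAGE_MODE:
    # The Host is displaying the grayscale camera image, but I can't
    # see any call here that hands the frames themselves back over
    # the link - only gaze samples and events seem to come through.
    print("Tracker is in camera-image mode")

el.close()
```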

An alternative is to use a DVI-to-USB device and screen-capture that, although getting it from the UDP stream would be preferable. Ideally this will be synced to the start of a resting-state scan, so aligning to the trigger time is important (though not impossible with screen capture, within a reasonable margin of timing error).
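If we go the screen-capture route, I imagine the alignment is just a matter of noting a shared clock at the scanner trigger and timestamping each frame against it, something like this (an untested sketch; the trigger key, capture-device index and duration are placeholders):

```python
import cv2
from psychopy import core, event

cap = cv2.VideoCapture(0)            # DVI-to-USB capture device (placeholder index)

# Wait for the scanner trigger, then start a clock so every captured
# frame can be aligned to the start of the resting-state scan.
event.waitKeys(keyList=['5'])        # '5' as the trigger key is a placeholder
clock = core.Clock()

frame_times = []
for _ in range(30 * 60 * 6):         # ~6 minutes at ~30 fps; numbers illustrative
    ok, frame = cap.read()
    if not ok:
        break
    frame_times.append(clock.getTime())   # seconds since the trigger

cap.release()
```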

Thanks in advance for any help or suggestions! Sorry my google-fu is weak on this one.

Hi Erik, I know this is possible with the SMI systems: they can stream individual JPEG frame images via UDP (although they can also be saved to memory and then written to disk). I did actually write an iOS test app a few years back to try that out. It’s worth sending the stream to a separate computer from the one receiving the gaze data, however.

The limitation with the DVI screen capture is that you will only be getting one frame per display screen refresh (say at 60 Hz). This will likely be a massive downsampling of the number of frames being generated by the eye tracker itself (for EyeLink, up to 2kHz).

We have used the technique of saving eye images (to disk) to analyse in an fMRI vigilance task. The frames were analysed for eye closure and so on: the images were too poor to support really reliable gaze tracking.

Great to hear this can be done - I figured it was probably possible but wasn’t sure how large a task it would be. If you can dig out the old iOS app I’d be very appreciative, but I can also give it a go and find the right API calls myself if that’s not available.

Hadn’t thought about downsampling from the screen capture; I don’t think we’d need 2 kHz, but you’re probably right that > 60 Hz would be better. I think that’s about the same level of analysis we’re planning to do.

Thanks!

That old code was for SMI, and dealt with the system directly via UDP messages rather than the now-blessed route via the API or via ioHub, so it wouldn’t be useful to you (and, now that I recall, it was limited by a packet-size constraint in iOS, so only small portions of the images came through).

Just looked through an old EyeLink PDF manual I have: it doesn’t mention anything, but it is over 8 years old…

But you might want to contact Britt Anderson at Waterloo (https://uwaterloo.ca/psychology/people-profiles/britt-anderson). Although he seems to be dealing directly with pylink rather than going via ioHub, that might send you down the right path (if he accesses this functionality): http://stackoverflow.com/questions/35071433/psychopy-and-pylink-example

Yeah, the manual isn’t very helpful about accessing video directly - looking again, it’s still hard to tell if any of the samples or measurements are actually frames of video. I’ll ping @brittAnderson and see if he’s ever done something like that; I’d be happy staying within the walled garden of ioHub, but I’d be alright with pylink for this limited functionality. Thanks Mike!

There is some discussion at the EyeLink forums about video capture and integration, but they don’t have a lot of support for it. I have an EyeLink II with a scene camera. If you look at the scene camera manual (which you can download from the SR Research support site - free registration) you will see some instructions for capturing video, splitting the screen and the like. I have been slowly playing with all of this.

I set up my display/experiment machine to talk to the EyeLink Host just by setting up the intranet from a Linux command line over ethernet. This allows me to use the pylink library commands to get the Host and Display machines talking to each other, so you can record things in data files on the Display machine and put events into the EDF file on the Host machine (roughly as in the sketch below).

Meanwhile, you split the output from the Host computer to both a monitor and an AV overlay box, which is also getting the analog signal from the scene camera. With proper calibration you can have fixed reference points on your visualization (using the IR emitters at the corners of your display monitor). Now you can overlay, in my case, the scene camera on top of the EyeLink display from the Host computer. This then gets sent to an A/D converter box, and I use FireWire to send it back to the Display machine (it doesn’t have to be the Display machine - it is just convenient at the moment). On that machine I can run dvdraw (an old Linux utility from the days when people wanted to capture video on their computers from camcorders and the like, and there were not many out-of-the-box solutions). This captures the raw video - in my case the scene camera, with EyeLink reference overlays and with the timestamps from the EyeLink Host.

Ultimately the value of this Rube Goldberg contraption is that you can use the power of the Display computer to put events in the EDF log, which lets you avoid hand-coding events, while you still have raw video to look at for display and for validation.
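Very roughly, the pylink side of getting events into the EDF from the Display machine looks something like this (a sketch from memory rather than my working script; the address, filenames and message strings are just placeholders):

```python
import pylink

# Connect to the Host PC over the direct ethernet link
# (100.1.1.1 is the usual EyeLink Host address).
el = pylink.EyeLink("100.1.1.1")

# Open an EDF file on the Host to collect samples and events.
el.openDataFile("vigil.edf")

# Record samples and events, both to file and over the link.
el.startRecording(1, 1, 1, 1)

# Stamp events from the Display machine into the Host's EDF file,
# so they don't have to be hand-coded off the video afterwards.
el.sendMessage("SCAN_TRIGGER")
el.sendMessage("TRIALID 1")

# ... run the task ...

el.stopRecording()
el.closeDataFile()

# Pull the EDF back to the Display machine for analysis.
el.receiveDataFile("vigil.edf", "vigil_local.edf")
el.close()
```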

At the moment this is a side project that gets worked on an hour here, an hour there. Ultimately the goal is to run the experiment on a secondary small LCD (I have ordered a 5" screen) that participants will hold in their hand like a phone. This will let me run high-speed, event-driven, gaze-contingent eye-tracking experiments on a small display without the need to hand-code events, while letting the head-tracking correction of the EyeLink compensate for hand and head movement. If you do something similar (and more easily) I would love to hear about it.

However, as you will notice, there is nothing very PsychoPy-specific about the above. You could do this with any setup that uses Python. We happen to like PsychoPy as a library for generating the visual stimuli, but often just use Python libraries directly for the ancillary experimental tasks.

Hope this helps. If you have a better method, please share. If you have suggestions to simplify, we are happy to hear them, or to try and test things, but our turn-around time will be rather slow. This also may sound like I am more knowledgeable than I actually am. I am just a blind pig rooting under an oak tree looking for acorns.

Cheers,

Hi Erik,

As other users have pointed out, there is no way to export video directly from the EyeLink Host machine. The video that can be accessed during camera setup is not available during recording. You can split the Host PC display image to a third-party capture device, as you were discussing, but you’ll only be able to see the eye images in the bottom-left portion of the Host display during recording.

What exactly are you trying to do with the video? If you can explain a bit more about what you’re hoping to do with the video image I may be able to help you find a solution in the EyeLink data or via the gaze data available over the link through Pylink.

Best,

The program is dvgrab, not dvdraw - sorry for any confusion. With a properly calibrated display you should still, I believe, be able to project the center of fixation onto the image, though a proof of concept will have to wait a while.

Thanks @brittAnderson and @dan; apologies for dropping the thread.

What you suggested is exactly what we ended up doing: using an A/D converter and then grabbing only a small aperture around the eye from the whole screen, directly through the Python bindings to OpenCV (instead of dvgrab).

We’re really just using this for the bare minimum analysis to make sure that subjects are awake (the “task” is simply to stare at a fixation crosshair, so there’s nothing interesting to overlay or do gaze tracking on). Another collaborator is doing the analysis, but I think it will probably boil down to something simple like blink rate / eye closure, similar to what @Michael did. We also had massive downsampling (we ended up recording at 30 fps), which was fine for what we needed.

Capture code for what we ended up with is on github in case anyone ends up following the thread.
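In rough outline it looks something like this (a simplified sketch; the device index, ROI coordinates, duration and output path are placeholders, and the real code is in the repo):

```python
import time
import cv2

cap = cv2.VideoCapture(0)               # the DVI/analog capture device
x, y, w, h = 100, 400, 320, 240         # aperture around the eye image on the Host display

t_end = time.time() + 8 * 60            # e.g. record for an 8-minute scan
i = 0
while time.time() < t_end:
    ok, frame = cap.read()
    if not ok:
        break
    eye = frame[y:y + h, x:x + w]       # crop just the eye aperture
    cv2.imwrite('frames/frame_%06d.png' % i, eye)   # assumes frames/ exists
    i += 1

cap.release()
```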

There’s only one small outstanding problem: we use a separate process (via the multiprocessing library) to write images to disk in real time, but something about that causes the PsychoPy window to lose focus, which makes the computer “thunk” or “beep” when a keypress comes in. Weirdly, event.getKeys() still grabs presses correctly from the queue, but the OS complains, the keypresses aren’t written to the .log, and some sites have to manually turn the sound off to avoid blaring the beeps at participants. I’m experimenting with when exactly that focus gets lost and when I have to reactivate the window to make sure presses are grabbed, and I’ll come back and let you know when I figure it out.
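The writer side is roughly this pattern (illustrative only; names and paths are placeholders):

```python
import multiprocessing as mp
import cv2

def frame_writer(queue, out_dir):
    # Runs in a separate process: pull frames off the queue and save
    # them to disk so the main PsychoPy loop never blocks on file I/O.
    i = 0
    while True:
        frame = queue.get()
        if frame is None:               # sentinel: we're done
            break
        cv2.imwrite('%s/frame_%06d.png' % (out_dir, i), frame)
        i += 1

if __name__ == '__main__':
    frame_queue = mp.Queue()
    writer = mp.Process(target=frame_writer, args=(frame_queue, 'frames'))
    writer.start()

    # ... main loop: grab and crop a frame, then frame_queue.put(eye) ...

    frame_queue.put(None)               # tell the writer to finish
    writer.join()
```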

EDIT: I was drawing the OpenCV frames inside a screen-refresh loop, but that meant the OpenCV window was being created inside the loop, on the first pass through it. To fix this, you can use cv2.namedWindow to initialize the OpenCV window outside the loop and then immediately refocus onto the PsychoPy window with win.winHandle.activate(), also outside the loop, making sure focus isn’t stolen.
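For concreteness, the shape of the fix (the window name and setup here are illustrative, not the exact code in the repo):

```python
import cv2
from psychopy import visual

win = visual.Window(fullscr=True)

# Create the OpenCV window once, before the screen-refresh loop, then
# hand keyboard focus straight back to the PsychoPy (pyglet) window.
cv2.namedWindow('eye_preview')
win.winHandle.activate()

# ... inside the refresh loop, cv2.imshow('eye_preview', eye) reuses
#     the existing window, so focus is never stolen mid-run ...
```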

Thanks again for the guidance and suggestions,