SMI HiSpeed eye-tracking data recording with iohub: around 5% of samples are missing

Dear all,

I have been (almost) successfully recording eye-tracking data with an SMI HiSpeed eyetracker.
I use a two-computer setup: the first (fairly old) computer manages the eyetracker, and the second computer runs PsychoPy 1.82.01 to show stimuli and to record an HDF5 file of eye-tracking samples via iohub (for some obscure reason, I couldn't get the eyetracker/experiment to work with 1.84.02, but my question here is not about that). The two computers communicate over Ethernet.

My problem is that I only get around 94-95% of the samples in my HDF5 file, whereas 100% of the samples are present when I use the iView X program to save the data held in the buffer on the eyetracker computer. I determined this by inspecting the HDF5 file: looking at the timestamps between successive samples and at the number of samples I should get given the duration between my trial start and trial end. Samples are recorded roughly every 2 ms (sampling rate = 500 Hz), but from time to time 3 or more samples in a row are missing.
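As a sketch, this is the kind of check I ran; the file name and the HDF5 table path below are just what iohub produced by default for my binocular samples, so adjust them if your setup differs:

import h5py
import numpy as np

# Default iohub hdf5 table for binocular eye samples; adjust the path if your file differs.
SAMPLE_TABLE = "/data_collection/events/eyetracker/BinocularEyeSampleEvent"

with h5py.File("events.hdf5", "r") as f:
    samples = f[SAMPLE_TABLE][:]

t = np.sort(samples["time"])              # iohub sample times, in seconds
dt = np.diff(t)

expected_dt = 1.0 / 500.0                 # 2 ms at 500 Hz
duration = t[-1] - t[0]
expected_n = int(round(duration / expected_dt)) + 1

print("samples recorded: %d" % len(t))
print("samples expected: %d" % expected_n)
print("missing: %.1f%%" % (100.0 * (1.0 - len(t) / float(expected_n))))
print("gaps larger than 3 ms: %d" % int(np.sum(dt > 0.003)))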

Other than this problem, everything is fine, I don’t get any error message when I run the experiment.

Do you have any idea what could be causing this problem?
Do you know of a test I could run to find out what is going wrong: whether it is a communication problem, a CPU/RAM problem, or a programming problem on my side that introduces latency or overwhelms the 2nd computer?

Thank you in advance!

Coco

Configuration:
Eyetracker: SMI HiSpeed @ 500 Hz (binocular)
1st computer (SMI): OS: Windows XP; CPU: Pentium D 3.4 GHz; RAM: 2 GB; Ethernet: Intel 82566DC Gigabit Network Connection; running iView X 2.8.26
2nd computer (PsychoPy): I tried 2 different ones and had the same problem:
2a) OS: Windows 7 64-bit; CPU: i5-2450M @ 2.5 GHz; RAM: 4 GB; Ethernet: Intel 82579LM Gigabit Network Connection; running PsychoPy 1.82.01
2b) OS: Windows 10 64-bit; CPU: i5-4300U @ 1.9 GHz; RAM: 8 GB; Ethernet: Intel Ethernet Connection I218-LM; running PsychoPy 1.82.01

My script is based on the gc_cursor_demo, which seems OK to me, except for a redundancy between lines 194 and 212, and also the following call:

kb.waitForPresses()

which raises the error:

AttributeError(<psychopy.iohub.client.ioHubDeviceView object at 0x0A47A430>, 'waitForPresses')

So I replaced it with the following (or something comparable):

while not kb.getEvents():
    self.hub.wait(0.2)

My educated guess (but it is just a guess) is that it's not likely to be a CPU/RAM problem with those builds (unless you saw more drops on the older Windows 7 computer), and it might not even be an issue with the program. The test for whether it's your program is to try a version that strips out all the stimuli and trial structure and JUST records fixations for a couple of minutes (depending on how often the drops show up). If the drops still show up then, it's not your program: it's either the SMI computer having issues transmitting, or a background process or something on the PsychoPy computer introducing lag.

Hello Jonathan,

I was wondering whether someone else had the same problem, but so far it doesn’t seem to be the case.

As you suggested, I will try just recording fixations and will keep you posted after the 6th of April, when I am back.

Thank you for your advice!

Coco

Hello Jonathan, hello to you all,

Following your advice, I ran the following minimal code:

from psychopy import visual,core
from psychopy.data import TrialHandler,importConditions
from psychopy.iohub import (EventConstants, ioHubExperimentRuntime, module_directory,
                            getCurrentDateTimeString)
import os

class ExperimentRuntime(ioHubExperimentRuntime):

    def run(self, *args):
        tracker = self.hub.devices.tracker
        # Start by running the eye tracker default setup procedure.
        tracker.runSetupProcedure()
        # Start recording eye data.
        tracker.setRecordingState(True)
        core.wait(5.0)
        # Stop recording eye data.
        tracker.setRecordingState(False)
        # Disconnect the eye tracking device.
        tracker.setConnectionState(False)

####### Launch the Experiment #######
runtime = ExperimentRuntime(module_directory(ExperimentRuntime.run), "experiment_config.yaml")
runtime.start()

The recorded data shows missing samples in a very consistent pattern: 1 to 3 samples are missing (most often 2; extremely rarely more than 3), then 30 to 33 samples are recorded (most often 31), and so on (1 to 3 samples missing, 30 to 33 recorded, and so forth).

Any idea what I could do next to try to fix this problem?
Should I contact the ioHub developers (@sol?) or SMI (or someone else) to report this issue?

Thank you for your attention!
Best,

Coco

Please send me an example .hdf5 file from the minimal test you ran that has the missing-sample issue; I can look at the timestamp deltas and calculated delays for each sample. Also, you can look at the event id field of each sample: if samples received by iohub from the SMI device are not being saved to the hdf5 file, there would be gaps in the event ids.
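Something along these lines (just a sketch, assuming the default iohub table path and field names for binocular samples) would show whether the event ids jump at the same places as the time gaps:

import h5py
import numpy as np

with h5py.File("events.hdf5", "r") as f:
    s = f["/data_collection/events/eyetracker/BinocularEyeSampleEvent"][:]

s = np.sort(s, order="time")
time_gaps = np.diff(s["time"]) > 0.003                 # well above the 2 ms sample interval
id_gaps = np.diff(s["event_id"].astype(np.int64)) > 1

# Note: event ids are assigned across all iohub events, so other recorded events
# (messages, keyboard, etc.) can also create small jumps within the sample table.
# If the time gaps occur without matching event id gaps, iohub never received those
# samples from the SMI callback; if the event ids jump too, the samples were received
# but not written to the hdf5 file.
print("time gaps: %d" % int(time_gaps.sum()))
print("event id gaps: %d" % int(id_gaps.sum()))
print("time gaps with no event id gap: %d" % int(np.sum(time_gaps & ~id_gaps)))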

Also please include which SMI system you are using and at what sampling rate, as well as the version of PsychoPy.

Depending on the model, some SMI systems switch to a lower sampling rate when the eye is lost, and then put it back to the intended rate when the eye is being tracked again. This results in samples with different inter-sample intervals because the eye tracker is changing its sampling rate. So it is possible that what you are seeing is the behaviour of the eye tracker hardware.

Hello Sol,

Thank you very much for your fast answer!

Please find on WeTransfer the following files (available for a week):

  1. “events.hdf5”: a .hdf5 file from the minimal test
  2. “2017-04-13-simple_test Samples.txt”: the same data exported directly from iView (as you can see, all the samples are in this file).

Sorry for sharing via WeTransfer; let me know if you prefer me to upload the files somewhere else.

You will notice that:

  • the coordinate systems are different between the two files (it is the usual screen coordinate system for the SMI file, whereas it is centered and comparatively ‘y-reversed’ for the .hdf5 file).
  • I was more or less trying to fixate the center of the screen, at least at the beginning of the recording.

Configuration information:
Eyetracker: SMI HiSpeed @ 500 Hz (binocular)
PsychoPy 1.82.01 (I haven’t been able to run my experiment on PsychoPy 1.84.02 standalone version, as explained on the PsychoPy GitHub website)
1st computer (SMI): OS: Windows XP, CPU: Pentium D 3.4GHz, RAM: 2GB; ethernet: Intel 82566DC Gigabit Network Connection; running iView X 2.8.26
2nd computer (PsychoPy): OS: Windows 7 64bits, CPU: i5-2450M @ 2.5GHz, RAM: 4GB; ethernet: Intel 82579LM Gigabit Network Connection; running PsychoPy 1.82.01

Let me know if you need anything else.

Thank you!
Best,

Coco

Thanks for the reply and files.

You will notice that:
  • the coordinate systems are different between the two files (it is the usual screen coordinate system for the SMI file, whereas it is centered and comparatively ‘y-reversed’ for the .hdf5 file).

iohub saves data in the same screen coordinate space set up and used by PsychoPy for the experiment window; the origin is always the screen center. So this is expected.

Regarding the gaps in sample time, I see what you mean in the hdf5 file, but I am not sure what is going on. iohub receives events/samples from the SMI tracker via a callback function registered with the iView C library; it is /not/ polling for samples. Since the event ids assigned by iohub are sequential even when the time delta between two samples is >= 4 msec, this suggests that for some reason iohub is not receiving the callback from the SMI library for the samples in question. Also, the logged time is the time that the callback was run for each iohub sample. The deltas between device times and between logged times of adjacent samples seem similar; i.e. if there is a 6 msec gap between two adjacent sample times, the gap between their logged times is also about 6 msec.
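For reference, a comparison like that can be done roughly as follows (a sketch using the default iohub field names; device_time is the SMI timestamp as stored by iohub, in seconds, and logged_time is the time the callback ran):

import h5py
import numpy as np

with h5py.File("events.hdf5", "r") as f:
    s = f["/data_collection/events/eyetracker/BinocularEyeSampleEvent"][:]

s = np.sort(s, order="time")
d_dev = np.diff(s["device_time"])     # deltas between the SMI device timestamps
d_log = np.diff(s["logged_time"])     # deltas between the times the iohub callback ran

gaps = np.nonzero(d_dev > 0.003)[0]   # gaps spanning more than one 2 ms interval
# If the device-time gap and the logged-time gap are about the same size, the samples
# were already missing when the callback fired, rather than being delayed inside iohub.
for i in gaps[:20]:                   # print the first 20 gaps
    print("gap after sample %d: device dt = %.1f ms, logged dt = %.1f ms"
          % (i, d_dev[i] * 1000.0, d_log[i] * 1000.0))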

What does the pupil value represent in the txt file? It does not seem to be even close to what is saved by iohub. Are different pupil measures being saved to the txt file vs. streamed to iohub? It would be good to get them the same so we can use pupil size to match up txt sample lines with hdf5 sample lines without relying on either file's timestamps.

I would also strongly suggest you get the latest PsychoPy version running, even if you have to manually apply the fix that was found for the issue you mentioned, as that in itself may fix the problem. It would also be good to ensure you are using recent versions of the SMI tracker software and client C SDK.

Thanks,

Sol

Thank you for answering, for taking the time to analyze my problem, and for your explanations.

I am not sure about the pupil information in the .txt file (you mean the “Pupil Confidence” column, right?), but on Tuesday I will check the SMI documentation and either send you a screenshot of my export parameters for the “iView Converter” (if relevant) or, more likely, re-export with the pupil size included (to have comparable files, as you suggested). I will also run the experiment on PsychoPy 1.84.02.

The SDK is the latest version, downloaded a few weeks ago from the SMI website.
Regarding the SMI tracker software, if you mean iView, unfortunately we are using version 2.8.26 and I am afraid we can't afford to upgrade if we have to pay for it, but I can check whether updates are free.

I will keep you posted.

Thank you again for your attention!
Have a nice weekend,

Coco

Regarding the SMI tracker software, if you mean iView, unfortunately we are using version 2.8.26 and I am afraid we can't afford to upgrade if we have to pay for it, but I can check whether updates are free.

I do not know whether the older iView ET application version could be the cause or not.

Out of curiosity, what happens if you run your 500 Hz system at a lower sampling rate, if possible, say 240-250 Hz? Do you still see gaps in the iohub sample stream compared to the eye sample file saved by SMI? What about recording at 500 Hz but monocular only (if possible)?

It is hard or impossible for me to really help troubleshoot this, as I do not have the SMI model that you are using. Could you block off a couple of hours sometime in the middle to end of next week with the tracker, and we could look into this more together via TeamViewer or something, assuming the display/PsychoPy PC is accessible over the net?

Thanks,


Following your advice, I ran tests yesterday. Below are the results.
I indicate the number of samples recorded to give you an idea.
The hdf5 files are available on WeTransfer for about a week.

PsychoPy 1.82.01 vs 1.84.2

Upgrading to the latest PsychoPy version doesn't fix the problem of missing samples: version 1.82.01 gets 2355 samples; version 1.84.2 gets 2354 samples, which is also wrong (it should be around 2500 samples in both cases, so only about 94% of the expected samples are recorded).
Following this result, all the recordings for the following tests were made with PsychoPy version 1.82.01.

Uncalibrated vs calibrated (don't ask me why I did this; I was just trying as many things as I could!)

For some reason, 500 Hz binocular recordings made without calibration don't have missing samples: I got 2502 samples in the uncalibrated recording (of course the values are all zeros, so this is totally useless, but you can still check that the timing is perfectly OK!).

500 Hz Binocular vs 500 Hz Monocular (left and right)

This is where things get interesting: “500 Hz Monocular left eye” gave 2495 samples; “500 Hz Monocular right eye” gave 2497 samples, which is great: essentially no samples are missing. Switching to monocular seems to fix the problem.

500 Hz Monocular vs 1250 Hz Monocular (left and right)

Finally, “1250 Hz Monocular left eye” gave 6200 samples (which is fairly good IMO); “1250 Hz Monocular right eye” gave 6235 samples. It should have been 6250 samples in both cases, so we get more than 99% of the samples (6200/6250 ≈ 99.2%, 6235/6250 ≈ 99.8%), which I think is acceptable.

Conclusion

If I understand correctly, and based on the above results, the problem seems to come from the SMI/iView C library, not from iohub. So I guess that when using our eyetracker we will just record monocular data (binocular is not crucial in our studies, whereas timing is).
It might not be necessary for you and me to meet via TeamViewer, as I consider my problem solved, but let me know if you think otherwise.
If you confirm that a solution has been found, I will close this discussion.

Thank you very much for your time and help; it has been tremendously helpful!

Nota bene

For your information, during the tests I noticed that the track_eyes setting in the yaml file wasn't being taken into account (whatever I wrote for this parameter, recording still worked perfectly; this is just to complement one of your previous posts).

Best,

Coco


The problem perhaps stems from the specifics of the UDP protocol. When transferring datagrams at a frequency of 1250 per second, multiplied by two (one per eye), the network can easily get congested, I believe. Therefore, somewhere in between, packets get dropped (or perhaps are never even sent by iView, because it decides it is already too late to send them).
So, whilst you have found a solution, it is possible that the problem itself has not been investigated thoroughly.
Anyway, saving experiment-crucial data by transferring it over the network to your custom high-level script, instead of trusting the highly optimized manufacturer-provided C library, is wrong by definition. I would never rely on that.