EGI Netstation - visual timing latencies

Hi all,
I am in the process of replacing EPrime with Psychopy in my EEG lab. The EEG software I am trying to integrate with is EGI Netstation 5.3. Using “import egi.simple as egi” I can control Netstation fine and send triggers associated with stimuli. My problem is in the variability of the timing of these triggers.

To test the accuracy of the trigger timing (i.e. the latency between an image appearing onscreen and the associated trigger code being recorded in the EEG file), I flash a white square onscreen while a photometer embeds a code in the EEG at the onset of the square. The latency of the photometer code is then compared to the latency of the trigger code sent from Psychopy. Currently the mean delay is ~47ms, which is not a problem in itself, but the variability is very large (most values more than 6ms from the mean); see the attached jpeg

Using exactly the same hardware and EPrime I can get the latency delay down to a mean of 17ms with an SD of 2ms, so fundamentally I know that the hardware is capable of doing what I need it to. My spec and code are detailed at the bottom of this message.

Given this situation I have the following specific questions:

  1. Has anyone else set up Psychopy successfully with EGI Netstation and verified the trigger latencies?
  2. Should I consider replacing graphics cards if I know that with EPrime the latencies are stable? When I run the benchmark wizard I do get a warning about refresh stability, see attached excel file. benchmarkReport.xlsx (10.2 KB)
  3. Is there anything dumb in my code that could be causing latency variability, e.g. using callOnFlip or the way I have structured my loops?
  4. Are dual screens likely to make timing problems worse? I have sometimes found this to be the case in EPrime.

Many thanks
George Stothart

########## tech spec ##############
PC - Dell Optiplex 7020, i5,3.3Ghz, 4GB RAM
Monitor - Dell LCD E2214Hb
Graphics card - Intel HD Graphics 4600
Visual timing hardware - EGI provided photometer with DIN converter

######## code used for visual timing tests ###########

```python
from __future__ import absolute_import, division
import pandas as pd  # for reading from excel files
import random, os, glob, pylab
from psychopy import visual, logging, core, event, data, sound, gui, locale_setup  # import some libraries from PsychoPy
from psychopy.constants import (NOT_STARTED, STARTED, PLAYING, PAUSED,
                                STOPPED, FINISHED, PRESSED, RELEASED, FOREVER)
import numpy as np  # whole numpy lib is available, prepend 'np.'
from numpy import (sin, cos, tan, log, log10, pi, average,
                   sqrt, std, deg2rad, rad2deg, linspace, asarray)
import os  # handy system and path functions
import sys  # to get file system encoding
import time
import csv  # useful for handling csv files

totalloops = 30
duration = 10

import egi.simple as egi
ms_localtime = egi.ms_localtime
ns = egi.Netstation()
ns.connect('10.10.10.42', 55513)  # sample address and port -- change according to your network settings
ns.initialize('10.10.10.42', 55513)
ns.BeginSession()
ns.sync()

# create a window
mywin = visual.Window(fullscr=True, monitor="testMonitor", units="deg", color='black')
mywin.setMouseVisible(False)

whiterect = visual.Rect(mywin, width=10, height=10, pos=[0, 0], fillColor='white')
blackrect = visual.Rect(mywin, width=10, height=10, pos=[0, 0], fillColor='black', lineColor='black')

ns.StartRecording()

count = 0
while count < totalloops:
    ns.sync()
    mywin.callOnFlip(ns.SendSimpleEvent, 'stm-' 'stm-', timestamp=egi.ms_localtime())

    for frameN in range(duration):
        whiterect.draw()
        mywin.flip()

    for frameN in range(duration):
        blackrect.draw()
        mywin.flip()

    count += 1

mywin.close
ns.StopRecording()
ns.EndSession()
```

Could you surround your code with triple backticks please so it gets formatted correctly?

Sure, no problem, have amended the original post

Cool, thanks.

So there’s nothing wrong with the structure of your code. You’re using the callOnFlip correctly too. There are a couple of places where you are missing brackets (e.g. myWin.close() and probably ms_localtime = egi.ms_localtime() ) but I don’t think those are affecting anything consequential.

I don’t know much about egi/pynetstation but, with the simple parallel port or LabJack communications I’ve used, we get much better results than that (triggers with effectively no delay, and lag and SD both under 1ms).

I couldn’t tell from your post the direction of the lag. Is the trigger signal delayed (in which case maybe the SendSimpleEvent call may be slow), or the stimulus (in which case the monitor may be to blame)?

I doubt that the graphics card is really to blame but if you could run the PsychoPy demo called “timeByFrames” and post the resulting figure here (you can save it as a png file) that would help rule that out.

cheers,
Jon

Hi Jon,
thanks for the quick response, I added the brackets but as you predicted they didn’t affect things.

  1. We’re slightly hamstrung by the EGI hardware which needs triggers to be sent over TCP/IP, so parallel port isn’t possible. Happy to bypass the egi/pynetstation module, if you know of any good examples of sending triggers over TCP/IP that’d be very helpful.

  2. The triggers appear before the stimulus, in every instance. I have re-run the timing test with 100 loops to demonstrate this in a bit more detail. The mean delay between the trigger and the stimulus appearing onscreen is 44ms (SD 9ms). The time between each stimulus (i.e. each loop) should be exactly 20 frames, i.e. 333ms. It is 70% of the time; the other 30% it lasts an extra frame, i.e. 350ms. I can’t see any relationship between the trigger/stimulus delay and the stimulus-stimulus time. I’ve included an excel sheet with the timing info summarised to help illustrate these relationships more clearly. Example timing info for JP.xlsx (37.0 KB)

  3. I ran the timeByFrames demo (that’s a useful tool!) and the PNG is attached.
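(In case it helps anyone experimenting with raw TCP/IP triggers per question 1: at the byte level, sending data over TCP from Python only needs the standard `socket` module. The sketch below is a generic socket send, demonstrated against a throwaway local listener; it is NOT the Netstation ECI protocol framing, which `egi.simple` implements on top of a connection like this.)

```python
import socket
import threading

def send_raw_trigger(host, port, payload):
    # Generic TCP send: connect, push raw bytes, close.
    # NOTE: this is not the Netstation ECI framing -- egi.simple
    # implements that protocol on top of a socket like this one.
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(payload)

# Demonstrate against a throwaway local listener:
received = []
server = socket.socket()
server.bind(('127.0.0.1', 0))   # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]

def accept_once():
    conn, _ = server.accept()
    received.append(conn.recv(1024))
    conn.close()

t = threading.Thread(target=accept_once)
t.start()
send_raw_trigger('127.0.0.1', port, b'stm-')
t.join()
server.close()
print(received[0])  # b'stm-'
```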


Cheers
George

Hi George,
I am wondering if you have made any progress on this issue. We’re having the same issue with very similar values: mean offset of 36.4ms and SD of 7.1ms.
If you have, could you post how you managed to reduce the delay and variability?

Thanks,
Hak

Hi Hak,
thanks for posting. Since I posted, EGI tech support have said they cannot help any further, but they have put me in contact with two other labs that have tried to integrate Psychopy and EGI hardware. The first lab has had the same problems as us and, as a last attempt, is going to try using http://www.blackboxtoolkit.com/urp.html to send triggers, i.e. bypass the TCP/IP triggers entirely. The other lab has not responded, but when/if they do I’ll post their response up here.

So it seems a few people have tried, and so far everybody has failed. The best chance probably lies in either a) developing your own TCP/IP communication protocol in python (this is beyond my skills), b) bypassing the TCP/IP setup and using a DIN/parallel port to send your triggers, or c) using an external piece of hardware, e.g. the Black Box ToolKit.

If you make any progress, post it up here; I will do the same and will ask the labs I have been speaking to to do likewise. Hopefully we can get a mini-forum going to solve this.
Many thanks
George

Hi George,
Thanks for the information and suggestions.

Just to be clear, the response pad can also send stim triggers as well as response triggers?
I guess the idea is that Psychopy sends (stim) triggers to the response pad which sends triggers to the Net Station?

As you suggested, I will post my progress.

Thanks a lot,
Hak

Hi Hak,
to be honest, I’m not entirely sure how they’re going to use the BBTK to send triggers. But they’re trying it out in the next few weeks and also using the code I posted to see if they can improve timing stability. I’ll post it up here when there’s any development.
cheers
George

Hi all,

I’m a heavy user of the pynetstation (called egi in psychopy) module and have made some edits/updates to better utilize a pretty fantastic module. One thing I see often that causes timing issues is the use of egi.ms_localtime(), so I’ll make some clarification here as to how it works.

In each call, egi.ms_localtime() takes a high-precision timestamp of the moment the function is called and returns that timestamp value.

The egi.ms_localtime() function is used throughout the egi module, working behind the scenes. For instance, when you call ns.sync(), you’re actually sending a message to NetStation that says, “Recalibrate event timing to this most recent timestamp I’m sending you.” You must call ns.sync() at the start of every trial so that any clock drift is accounted for throughout your experiment. In your sample testing code @gstothart , this is done perfectly.

On to the meat: callOnFlip()
This function in PsychoPy takes whatever function and arguments you pass it and waits to execute that function at the moment the back buffer flips to the screen (correct my terminology if I’m mistaken, Jon). While intuitively this seems like the best choice for sending timestamps, it depends on the timestamp being taken at the time of the flip. In your sample code @gstothart, you made the same mistake I made for years trying to get this to work: you called egi.ms_localtime() inside your callOnFlip() statement. Because egi.ms_localtime() is written with parentheses, the function actually runs at that moment (as opposed to when the callback runs after the flip) and returns a timestamp value then. What you want is for egi.ms_localtime() to be called when the flip occurs, not when you load callOnFlip().
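The eager-vs-deferred distinction is easy to demonstrate outside PsychoPy. In the toy sketch below, `time.time()` stands in for `egi.ms_localtime()` and a simple callback queue stands in for `callOnFlip()` and the window flip:

```python
import time

def register(callbacks, func, *args, **kwargs):
    # Stand-in for callOnFlip(): store the call now, run it at "flip" time.
    callbacks.append((func, args, kwargs))

def run_flip(callbacks):
    # Stand-in for the actual flip: execute every stored call.
    for func, args, kwargs in callbacks:
        func(*args, **kwargs)

sent = []
def send_event(label, timestamp):
    sent.append((label, timestamp))

callbacks = []
# WRONG: time.time() has parentheses, so it runs *now* -- the stored
# argument is the registration time, not the flip time.
register(callbacks, send_event, 'stm-', timestamp=time.time())
# RIGHT: wrap the call so the timestamp is taken when the callback runs.
register(callbacks, lambda: send_event('stm-', timestamp=time.time()))

time.sleep(0.05)   # simulate waiting for the next screen flip
run_flip(callbacks)

eager_ts = sent[0][1]     # stamped at registration
deferred_ts = sent[1][1]  # stamped at the simulated flip
print(deferred_ts - eager_ts)  # roughly 0.05 s
```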

To circumvent that issue, I’ve made an edited version of egi.simple with two extra functions, send_timestamped_event() and SendSimpleTimestampedEvent(), which are functionally identical to send_event() and SendSimpleEvent() with the added bonus of taking the timestamp at function run time rather than receiving it as an argument. I’m uploading a copy to this post, and I’m working on publishing this update to my GitHub account for future use of the module in PsychoPy and other software.
simple.py (34.0 KB)

In order to take advantage of this, replace the python file simple.py in your egi folder in your PsychoPy distro with the one I’ve uploaded, and replace your line:

```python
mywin.callOnFlip(ns.SendSimpleEvent, 'stm-' 'stm-', timestamp=egi.ms_localtime())
```

with

```python
mywin.callOnFlip(ns.SendSimpleTimestampedEvent, 'stm-' 'stm-')
```

(Also, why do you have two `'stm-' 'stm-'` in your event code? I’m not sure how that works with the function.)

This should cut out a majority of your “jitter” or variance in offsets.

Unfortunately, your graphics card is integrated on the motherboard, and integrated cards tend to have poor timing precision (as can be seen in the timeByFrames results). I recently upgraded one of our systems with a simple $35 Nvidia card and it runs much more reliably. That’s not to say changing your code to use my updated egi.simple won’t help, but you may still be left with some unmanageable variance as a result.

I know this was a lengthy response, but it hopefully will help many others in the future as they search the internet for offset resolutions.

Best,
Josh

P.S.: What confuses many about the function egi.ms_localtime is how it is called in the example scripts. The original programmer of the module wrote an example script containing the line ms_localtime = egi.ms_localtime, and later on, in a call to send_event(), passed ms_localtime() to the timestamp variable. This stores the function egi.ms_localtime in a variable called ms_localtime (the reuse of the name is a bit confusing), so that whenever they want to run the function egi.ms_localtime(), they call the new variable with parentheses at the end: ms_localtime(). If you were to set timestamp=ms_localtime without parentheses, the experiment would crash because you aren’t passing a value to timestamp; you’re passing a function, which the code doesn’t expect.
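A tiny illustration of that pattern, using `time.monotonic` as a stand-in for `egi.ms_localtime`:

```python
import time

# Store the function object itself in a variable -- note: no parentheses.
ms_localtime = time.monotonic

# The bare name refers to the function, not to a number...
assert callable(ms_localtime)

# ...while adding parentheses runs it, producing a fresh timestamp each call.
t1 = ms_localtime()
time.sleep(0.01)
t2 = ms_localtime()
assert t2 > t1
```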


amazing, thanks Josh, this looks incredibly helpful. I have a busy day in the lab ahead of me tomorrow! I’ll post up my progress.

Good catch Josh! I hadn’t spotted that but your analysis looks spot on to me. Many thanks.

Would be great if we could get your changes incorporated upstream on the pynetstation/egi lib. @gaelenh was maintaining the package and I don’t know if he’s still working on it.

If nothing else we should have some documentation on this on the PsychoPy docs

thanks for your work :slight_smile:

Hi Josh,
so I implemented your changes as suggested, see code at bottom of this message.

First the good news: the mean delay shortened to 17.19ms with an SD of 3.81ms. This is considerably better than before, so your changes make a big difference. I think there is more I can do to improve things, though; you mentioned a simple Nvidia card that worked well, which model was it?

The sort-of bad news: out of 1000 trials, 19 had latency delays of close to 0, i.e. the trigger and stimulus arrived simultaneously. If you remove these outliers the SD comes down to 2.2ms, which is pretty good. See the file for the raw timing data and a couple of graphs:
Josh psychopy for forum.xlsx (246.1 KB)
Do you have any thoughts on why this might be?

These tests were all done with a single screen, but FYI double screen made no difference, you get similar results.
Thanks
George

```python
from __future__ import absolute_import, division
import pandas as pd  # for reading from excel files
import random, os, glob, pylab
from psychopy import visual, logging, core, event, data, sound, gui, locale_setup  # import some libraries from PsychoPy
from psychopy.constants import (NOT_STARTED, STARTED, PLAYING, PAUSED,
                                STOPPED, FINISHED, PRESSED, RELEASED, FOREVER)
import numpy as np  # whole numpy lib is available, prepend 'np.'
from numpy import (sin, cos, tan, log, log10, pi, average,
                   sqrt, std, deg2rad, rad2deg, linspace, asarray)
from numpy.random import random, randint, normal, shuffle  # import specific elements of random, e.g. shuffle, so you can just call shuffle
import os  # handy system and path functions
import sys  # to get file system encoding
import time
import csv  # useful for handling csv files

totalloops = 1000
duration = 10

import egi.simple as egi
ms_localtime = egi.ms_localtime()
ns = egi.Netstation()
ns.connect('10.10.10.42', 55513)  # sample address and port -- change according to your network settings
#ns.initialize('10.10.10.42', 55513)
ns.BeginSession()
ns.sync()

# create a window: for a non-fullscreen window pass size=[800, 600] and remove the fullscr argument;
# units can be "deg" (visual angle) or "pix" (pixels), matching the units in the image defs below
mywin = visual.Window(fullscr=True, monitor="testMonitor", units="deg", color='black')
mywin.setMouseVisible(False)  # stops the mouse appearing on top

whiterect = visual.Rect(mywin, width=10, height=10, pos=[0, 0], fillColor='white')
blackrect = visual.Rect(mywin, width=10, height=10, pos=[0, 0], fillColor='black', lineColor='black')

ns.StartRecording()

count = 0
while count < totalloops:
    ns.sync()
    mywin.callOnFlip(ns.SendSimpleTimestampedEvent, 'stm-')

    for frameN in range(duration):
        whiterect.draw()
        mywin.flip()

    for frameN in range(duration):
        blackrect.draw()
        mywin.flip()

    count += 1

mywin.close()

# close up NS
ns.StopRecording()
ns.EndSession()
```

Hi,
as far as I know, the best way to synchronize with the EGI is to implement an NTP client which connects to the NTP server running on the Netstation machine and keeps a local clock updated. This way there’s no need to explicitly send a sync command at the beginning of each trial (which, by the way, comes at the cost of an RTT): you just query the local clock. A while ago (as part of the PsyScope project) I implemented a little C module (and a corresponding python wrapper) that does exactly that, running the NTP exchange on a thread. On a direct network connection, the precision was below a millisecond.
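For anyone curious what the NTP exchange actually computes: the client stamps its send and receive times (t0, t3) on its own clock, and the server replies with its receive and send times (t1, t2); the standard RFC 5905 arithmetic then yields the clock offset and round-trip delay. A minimal sketch of that arithmetic (the numbers are made up: a client clock 100 ms behind the server, 2 ms one-way network delay):

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Clock offset and round-trip delay from one NTP exchange.

    t0 = client transmit time (client clock)
    t1 = server receive time  (server clock)
    t2 = server transmit time (server clock)
    t3 = client receive time  (client clock)
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # how far the client clock lags the server
    delay = (t3 - t0) - (t2 - t1)            # total network round-trip time
    return offset, delay

# Client clock 100 ms behind the server, 2 ms each way on the network:
offset, delay = ntp_offset_and_delay(10.000, 10.102, 10.102, 10.004)
print(offset, delay)  # ~0.100, ~0.004 (up to float rounding)
```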

I’ve also coded a couple of C modules which implement the Netstation protocol.
If someone is interested in having a look, I’m happy to share.

Cheers,
Luca

Just in case anyone else is looking: @luca.filippin has uploaded the code to https://github.com/lucaf/NTPSync

I looked into this avenue (NTP) and found it doesn’t exist on amps prior to the 400 series. Unfortunately, I think a majority of researchers are still on 300 series amps (which is terrifying as the computers which can still run NetStation 4.x are close to 10 years old).

I’ve mentioned this elsewhere but if there is enough demand I’ll work on adding NTP to the code for 400 series users. Just let me know here or via email (joshua.e.zosky@gmail.com).

This discussion has helped me a lot in implementing sending triggers from psychopy to the EGI system. I will try my script tomorrow and see whether the amplifier receives any triggers. I assume they are sent via the ethernet cable? Or does the experiment need to run on the same laptop that records the EEG? I’m pretty new to all of this.

Regarding ns.sync(): I’m running an experiment where two images are shown one after the other, with several seconds’ pause in between. Do I need to use ns.sync() before the presentation of each image, or only before the first? The advice above said “in each trial”, but maybe that didn’t cover this case.

Also, is it possible to get a bit more explanation on how to implement the NTP thingy? I see a bunch of scripts and an empty readme, I don’t really know how to go from there. Running the setup.py file let to the error '‘Unsupported platform’, which makes me think that maybe indeed I am supposed to run psychopy from the mac that is used for recording the EEG, at least here it’s checked whether the computer runs on macos. I’m on win 11 by the way.