Regarding ‘knowing’ precisely the time of stimulus onset, at least one other forum posting refers to the .tStart attribute of your component of interest as providing this information.
I’ve had a quick look at the code and it seems that .tStart is actually recorded up to one frame duration before the actual onset, so on a 60 Hz monitor up to ~16.7 ms before the actual onset.
I’ve taken a quick look at the value of win.lastFrameT and for me this is typically ~7 µs (i.e. tiny), indicating that the actual stimulus onset (ignoring hardware lag in the video card and the monitor itself) is typically close to one frame duration after the value of .tStart.
Have I got this about right?
Are there any other quantifiable PsychoPy-related aspects I should consider if I want to get as accurate a value for stimulus onset as possible? (I need this for purposes other than getting a precise keyboard reaction time.)
Also, do keyboard/mouse reaction times run from the time of the actual screen onset, or do they relate to .tStart and are therefore possibly all ~one frame duration longer than the actual time (ignoring USB delays)?
As you’ve seen, the code for each component inevitably has to run in an inter-frame interval before the first frame on which that component becomes visible. So yes, the actual screen onset time will be approx. 16 ms after the .tStart of the component, but conversely, the offset of the stimulus will also be approx. 16 ms after the scheduled end time, so the total duration of the stimulus should be correct.
If you want to know the precise time at which the first image of a component was displayed, then you should pass some function to win.callOnFlip() to be executed on the next flip, e.g. this code, which is inserted if one selects “sync RT with screen” in Builder:
win.callOnFlip(key_resp_2.clock.reset) # t=0 on next screen flip
As above, that is optional, i.e. if you select “sync RT with screen”, then the RT begins counting from very close to the time of the actual screen refresh, rather than up to 16.7 ms earlier (from the .tStart value).
Since the ‘Sync RT with screen’ checkbox is selected by default on the keyboard component, RTs should be as accurate as we can reasonably hope.
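If (as in the original question) you want the actual onset time itself, rather than just timing RTs from it, the same callOnFlip() mechanism can be used to take a timestamp on the refresh. Here is a minimal sketch for a code component; the names stim_onset and record_onset are my own, not anything Builder generates:

# 'Begin Routine' tab: timestamp the next screen flip
stim_onset = [None]  # a list, so the callback can write into it

def record_onset():
    stim_onset[0] = core.getTime()  # core is already imported in Builder scripts; called immediately after the flip

win.callOnFlip(record_onset)

You could then store the value with something like thisExp.addData('stim_onset', stim_onset[0]) in the ‘End Routine’ tab.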
Regarding code called in the ‘Begin Routine’ part of a code component, I think it’s useful for people to note that this code is likely to execute close to one frame’s duration prior to stimulus onset (typically ~16.7 ms), as this can have important implications.
For example, parallel port triggers sent using the parallel port I/O component cause the following code to be generated:
win.callOnFlip(p_port.setData, int(1))
so that the trigger is sent close to stimulus onset (good). There are good reasons why people often use a code component to send triggers manually instead of using the parallel port component; however, doing that in ‘Begin Routine’ will likely cause the trigger to be sent ‘early’ (typically ~16 ms).
If these triggers are being sent to synchronise with EEG or eye-tracking data, or to trigger TMS/tDCS/tACS stimulation, then that delay could well be very significant.
If I’ve understood things right, then I’d suggest that anyone manually sending parallel port triggers do so by syncing with the win flip similar to how you described above.
e.g.
# put your trigger code in a function like this
def my_send_trigger():
    p_port.setData(int(1))  # your actual trigger code goes here
    print("in my_send_trigger")  # optional: just to confirm it ran
You can then, in the ‘Begin Routine’ section, have something like this:
win.callOnFlip(my_send_trigger)
… as this should synchronise much closer to the stimulus onset.
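Note also that callOnFlip() accepts arguments for the function, so if all you need is a single setData() call you can skip the wrapper function and simply mirror the line that Builder generates for the parallel port component (this assumes p_port is a psychopy.parallel.ParallelPort object you have created yourself, e.g. in ‘Begin Experiment’):

win.callOnFlip(p_port.setData, int(1))  # trigger value 1 goes out on the next flip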
Please correct me if I’ve misunderstood and you disagree or have anything to add.
An alternative is to put code in the “Each Frame” tab, but execute it only when frameN == 1, i.e. this code would execute very soon after the first refresh (when frameN == 0) has occurred on this routine (or use whatever frame count is appropriate for components that start later in a routine). That is, this code would execute during the second inter-frame interval, rather than the first, and will run immediately after the first stimulus image has been shown.
This won’t be as precise as calling a function when the window flips, but will likely be within a millisecond or two after the relevant screen refresh, as opposed to up to 16.7 ms early.
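For concreteness, a sketch of that “Each Frame” approach, again assuming a p_port parallel port object created elsewhere:

# 'Each Frame' tab of a code component
if frameN == 1:  # the first frame (frameN == 0) has just been shown
    p_port.setData(int(1))  # trigger goes out shortly after actual stimulus onset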
Short story: you’re right about the values but it should not be affecting anything unless you’re creating your own custom data files based on those values.
The tStart value does, as you say, reflect the time that the stimulus was requested, but it is not used in any analyses. It gets used internally for timing stimulus presentations. I’ll need to think about whether it would be better to time stimulus duration from this or from an updated value after the first frame flip.
The RT recorded in the csv files comes from the mouse and keyboard objects, and these have the option to ‘sync to screen refresh’, which allows you to choose whether you want the timer to start at the screen refresh associated with the visual stimulus. That box should be unticked for other stimuli, like sounds, which will aim to start as fast as possible and don’t give us any information about what actually happened.
The recorded time in data files and in log files is based on what actually happened (using logOnFlip(), which is like callOnFlip()). Thus the log file gives the time the frame actually flipped for a visual stimulus and is correct even if something like a dropped frame occurred.
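For completeness, here is a sketch of how you can add your own flip-timestamped message to the log file (logOnFlip() takes a message and a logging level; the message text here is just an example):

from psychopy import logging  # already imported in Builder-generated scripts
win.logOnFlip('my_stim onset', level=logging.EXP)  # logged at the time of the actual refresh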
Yes, to send a sync pulse that is based on a visual (not an auditory) stimulus you should do it in a callOnFlip() function.
Note that sending triggers NOT by using the callOnFlip() approach (which is used in the demo) will likely lead to triggers being sent close to one frame duration early (typically ~16.7 ms), which will give misleading timestamps for EEG, eye-tracking etc. experiments.