I am designing a visual target detection task (as part of a visual segmentation experiment). Participants have to respond to a given target stimulus from a 12-item stimulus set whenever it appears. 192 image stimuli are presented in a stream with an 800 ms stimulus duration and a 200 ms ISI. Because loading individual images takes time and would make the timing less precise (we have experienced this before with other software), the images were concatenated into a single .mp4 video stream with the given stimulus durations and ISIs (800 and 200 ms, respectively). Participants respond to the target stimulus in this stream by pressing the space bar. Responses are recorded by a keyboard component that starts and ends simultaneously with the video stream. Reaction times for each target are then calculated relative to the target's predefined onset time in the stream.
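For clarity, this is roughly how we compute the RTs offline, as a minimal sketch (the function name, response window, and example values are hypothetical; it only assumes that keypress times and target onsets share the same clock, i.e. both are relative to video start):

```javascript
// Sketch: matching keypresses to predefined target onsets (names hypothetical).
// Onsets and keypress times are in ms relative to video/keyboard start.
function computeRTs(targetOnsetsMs, keypressTimesMs, maxRtMs = 1000) {
  return targetOnsetsMs.map(onset => {
    // The first keypress falling in (onset, onset + maxRtMs] counts as the response.
    const press = keypressTimesMs.find(t => t > onset && t <= onset + maxRtMs);
    return press === undefined ? null : press - onset;
  });
}

// Example: targets at 2000 ms and 8000 ms into the stream.
console.log(computeRTs([2000, 8000], [2450, 8620]));
// → [450, 620]
```

The concern below is essentially about how much systematic and variable lag contaminates both inputs to this subtraction.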
Here is an example of these streams (without the keyboard component, just to illustrate the experiment design):
However, I am concerned about the time lags of both the .mp4 video file and the response component. I understand from the timing study that there is a lag of about 42 ms for responses and about -2.5 ms for visual stimuli on Win10 platforms. My first question is whether these figures apply to video stimuli as well. My second question is to what extent they change for longer stimuli (in this case, a 192,200 ms long stimulus). My third question is whether a lag in the video stimulus also causes a lag in the response component, given that they are in the same routine. In sum, could this method be appropriate for measuring RTs? We would like to achieve a precision of about 50 ms.
The timing study emphasises that each lab should validate its own setup. Is there any way to validate the timing precision of web-based tasks like this?
Alternatively, is there any way to log these lags?
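One thing we considered is logging playback drift ourselves from a code component, using only standard browser APIs (HTMLMediaElement.currentTime, performance.now(), requestAnimationFrame). This is just a sketch of the idea, not a verified PsychoJS solution; all names are our own, and the drift helper is separated out so the arithmetic is explicit:

```javascript
// Sketch: logging playback drift in the browser (assumes access to the <video>
// element; all names hypothetical). Compares the video's reported playback
// position against the wall-clock time elapsed since playback started.
function driftMs(videoCurrentTimeSec, playStartMs, nowMs) {
  // Positive drift = video playback is behind the wall clock (playback lag).
  return (nowMs - playStartMs) - videoCurrentTimeSec * 1000;
}

// Browser-side logging loop (only runs in a browser, not in Node):
// const video = document.querySelector('video');
// video.play().then(() => {
//   const playStart = performance.now();
//   function logDrift(now) {
//     console.log('drift (ms):', driftMs(video.currentTime, playStart, now));
//     requestAnimationFrame(logDrift);
//   }
//   requestAnimationFrame(logDrift);
// });
```

Would logging something like this alongside the keypress timestamps be a sensible way to estimate (and perhaps correct for) the video lag, or is there a built-in mechanism we should use instead?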
Thank you very much for your responses,