
Timing of images off on Pavlovia?

URL of experiment: Pavlovia

Description of the problem:
I have some image/text components that are supposed to appear at specific times during the trial. The times that they appear/disappear differs depending on the condition.

This is how I have the image components set up:

This is my excel file:
[screenshot of the conditions Excel file: Screenshot 2020-07-09 at 20.38.47]

The first image/text change should line up with the high-pitched beep in the audio file that plays during the trial. The second change should come when the beeps in the audio file stop and it goes to silence.

This all syncs up perfectly offline, but the timing is off online.
How do I fix this?

Thank you!

Unfortunately I don’t think it’s possible for sounds to be specified accurately online. How much unwanted variability are you getting?

Thank you for your reply, @wakecarter!

On trials with high-frequency beeps, the picture change is approximately 3 seconds off. On trials with medium-frequency beeps, it is about 600 ms off, and on trials with low-frequency beeps it is on time.

It’s really weird because I first run 3 practice trials, and the timing is fine on all of those. On the practice trials, I set the duration of each picture stimulus in code (see below) and have the trials run in sequential order (high-frequency trial first, then medium, then low). listenDur = duration of the first image; tapDur = duration of the second image.

if practCond == 1:      # high-frequency practice trial
    listenDur = 5       # end time (s) of the first image
    tapDur = 10         # end time (s) of the second image
elif practCond == 2:    # medium-frequency practice trial
    listenDur = 7
    tapDur = 14
elif practCond == 3:    # low-frequency practice trial
    listenDur = 8
    tapDur = 18

For the real trials, I have the picture stimulus durations in an excel file and the different trials come in random order. Depending on which sound file is picked from the excel file, the corresponding image durations are then also picked.

[screenshot of the real-trial Excel file: Screenshot 2020-07-10 at 11.44.42]

I originally had these set in a code component, similar to the practice trials, rather than an excel file.

Like this:

cond = thisTrial['condition']  # condition label for the current trial

if cond == 'high':
    listenDur = 5
    tapDur = 10
elif cond == 'med':
    listenDur = 7
    tapDur = 14
elif cond == 'low':
    listenDur = 8
    tapDur = 18
    

I’d read the condition from the excel file on each trial and set the correct image durations accordingly. But the auto-translated JS didn’t like ‘thisTrial’, so I changed it to take the durations from the excel file instead.
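As an aside (my own sketch, not part of the original experiment): the if/elif chain above can also be written as a dictionary lookup, assuming the same condition labels and the same endpoint values.

```python
# Sketch: the same condition -> duration mapping as a lookup table.
# Assumes the labels ('high', 'med', 'low') and the endpoint values
# from the if/elif version above.
durations = {
    'high': (5, 10),   # (listenDur, tapDur) endpoints in seconds
    'med':  (7, 14),
    'low':  (8, 18),
}

listenDur, tapDur = durations['high']  # e.g. for a high-frequency trial
```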

Perhaps I could either:
A - edit the listenDur and tapDur values for the real trials to account for the lag, e.g. change 5 to 2 to absorb the 3 seconds of lag. However, I wonder whether the amount of lag would be the same on every computer/internet connection?

B - remove the excel sheet, use code to randomly pick a condition, and then set the sound component and the listenDur/tapDur values based on which condition is picked. This would get around JS not liking ‘thisTrial’. I’m not 100% sure exactly how to do this. I think I’d need to put all the conditions into a list, then generate a random number on every trial and index into the list? I’d also need some code to make sure that each condition is picked an equal number of times - something like ‘if cond1 has already been picked 100 times, pick again…’. I’m not sure this would solve the image timing issue, though.
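For what it’s worth, option B doesn’t need re-picking at all: a common pattern is to build a list containing each condition an equal number of times, shuffle it once, and then step through it trial by trial. A minimal sketch (the rep count is a hypothetical value):

```python
import random

# Sketch of option B: guarantee equal counts by building the whole
# trial order up front and shuffling it, instead of re-picking.
conditions = ['high', 'med', 'low']
reps = 10  # hypothetical number of trials per condition

trial_order = conditions * reps   # 30 entries, 10 of each
random.shuffle(trial_order)       # randomise the order in place

# On trial i, the condition is simply trial_order[i].
```

This guarantees exact balance without any counting logic, which is essentially what PsychoPy’s own loop randomisation does with a conditions file.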

If it comes to it, I think this might just be something that I can live with. I am mostly interested in analysing the participants’ ability to continue pressing the space bar at a consistent pace after the sound has stopped. Therefore, even if the picture that tells them to start responding in time with the sounds is a little delayed, they should have still begun responding by the time the sound stops. So, this shouldn’t interfere with my data collection. It just means that I can’t also analyse their ability to match their responses to the timing of the sounds (as they might not have started responding yet).

Thank you for all your help!

Duration of sound files doesn’t work online. You need to have files of the correct length or use sound.stop() in code.

Thank you for your reply, @wakecarter. However, I’m not setting the duration of the sound files. I’m setting the duration of the image files, which start/stop relative to certain events in the sound file (a high-pitched beep or the beeps stopping). When I say ‘beeps stopping’ this doesn’t mean that the sound file ends. On every trial, the sound file runs from the beginning to the end of the trial, I’ve just included silence in the file from the point at which the beeps are supposed to stop until the end of the trial.

listenDur and tapDur actually set the endpoints of two images. For example, when the high-frequency sound file is selected, the listenImage lasts from the beginning of the trial until 5 seconds in (set by listenDur). After 5 seconds, the tapImage is shown, ending 10 seconds into the trial (set by tapDur). After that picture has finished, a third and final image is shown, ending when the sound file ends (which is also the end of the trial).
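To spell out that timeline (my own sketch, using hypothetical image names and a hypothetical trial length): because listenDur and tapDur are end times rather than durations, the three image windows look like this:

```python
# Sketch of the three image windows described above.
# listenDur and tapDur are END times, not durations; trial_end is the
# length of the sound file / trial (hypothetical value below).
def image_windows(listenDur, tapDur, trial_end):
    return {
        'listenImage': (0, listenDur),       # from trial start
        'tapImage':    (listenDur, tapDur),  # starts when listenImage ends
        'finalImage':  (tapDur, trial_end),  # until the sound file ends
    }

windows = image_windows(5, 10, 20)  # high-frequency trial, 20 s long
```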


Sorry for the confusion!

I’ve found the solution!

My code for the practice trials:

if practCond == 1:      # high-frequency practice trial
    listenDur = 5       # end time (s) of the first image
    tapDur = 10         # end time (s) of the second image
elif practCond == 2:    # medium-frequency practice trial
    listenDur = 7
    tapDur = 14
elif practCond == 3:    # low-frequency practice trial
    listenDur = 8
    tapDur = 18

…was leaving listenDur set at 8 seconds (so the first picture change happened 8 seconds into the trial) and tapDur at 18 (so the second picture change happened 18 seconds in). Then, on the real trials, these variables weren’t being updated - new values weren’t being drawn from the excel file for some reason. This explains why the first picture change on the real trials was off by about 3 seconds for the high-frequency trials (which should use listenDur = 5) but fine for the low-frequency trials (which use listenDur = 8 anyway).
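A minimal sketch of that stale-variable behaviour (my own illustration, not the actual experiment code): if a variable is only assigned during the practice trials and the real trials never reassign it, the value from the last practice trial silently carries over:

```python
# Minimal sketch of the bug: listenDur is assigned during the practice
# trials, and a real trial that fails to reassign it simply reuses the
# value from the last practice trial (8 s here).
listenDur = None

for practCond in (1, 2, 3):   # practice trials run in order
    if practCond == 1:
        listenDur = 5
    elif practCond == 2:
        listenDur = 7
    elif practCond == 3:
        listenDur = 8

# A real trial begins; nothing updates listenDur...
stale_value = listenDur       # still 8, from the last practice trial
```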

I removed this code from the practiceTrial routine and made it so that listenDur and tapDur are obtained from the excel file, as they are for the real trials - and it worked! Perfect timing!