Draw image at constant timing during soundstream

This is a beginner question.
I would like to draw an image on the screen while participants listen to a long soundstream.
The image should appear every 1482 ms, and the audio file is 5 minutes long.
Obviously I don’t want to add an image component hundreds of times, setting a different start time for each one.
I don’t know whether to use a loop or a code component, or both.

Both.

  • Start by creating a routine that shows an image for a fixed period, with a loop around it to update the image for the required number of iterations. I don’t know the significance of the 1482 ms display duration, but note that any such period needs to be an integer multiple of the frame duration of your computer’s display. e.g. for a 60 Hz screen, each refresh lasts 16.666… ms, so the closest duration to what you want is 89 refreshes, giving 1483.33 ms (there is a short calculation sketch for this at the end of this reply).

  • Then create and play the sound in code. That way, it will keep on playing regardless of what is happening with Builder routines (i.e. as it isn’t defined as a graphical Builder sound component, it won’t stop playing when the routine ends). To do this, insert a code component in the routine that shows the images. In its Begin Experiment tab, define the sound with something like:

from psychopy import sound  # make sure the sound module is available in this code component
long_sound = sound.Sound('some_filename.wav')

Then start it playing on the first iteration of the loop:

if your_loop_name.thisN == 0: # only on the first iteration of your loop
    long_sound.play() # will keep playing until it ends
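
As an aside, the rounding in the first point above can be worked out in a couple of lines for any target duration and refresh rate. Just a convenience sketch (the numbers reproduce the 60 Hz example):

refresh_rate = 60.0                     # Hz; use your monitor's actual rate
target_ms = 1482.0                      # the duration you would like
frame_ms = 1000.0 / refresh_rate        # one refresh lasts 16.666... ms at 60 Hz
n_frames = round(target_ms / frame_ms)  # closest whole number of refreshes -> 89
print('use %d frames = %.2f ms' % (n_frames, n_frames * frame_ms))  # -> 1483.33 ms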

Thank you very much! That helped a lot, it works perfectly.
Unfortunately, a different kind of problem came up.

I decided to use two images instead of one: a square and a triangle. They appear on the screen every 89 refreshes, while a long sound is playing. This loop works just fine.
Participants have to press a key only when they see the triangle image, and I would need to record the RT relative to the triangle image only.
If I use the keyboard component in Builder, the image stream would depend on when and whether participants press the key, and would not follow the intended timing.
I would like to record the key response and RT to the appearance of the triangle without altering the timing of the stream.
Should I add a different routine? Should it be inside the loop?

Simply:

  • uncheck the Force end of routine option on your keyboard component.
  • give the keyboard component a fixed duration of 89 refreshes as well.

The keypress will be recorded but it won’t have any other consequence.

Perfect, it works very well. Thank you very much.

Unfortunately a third, possibly more difficult problem emerged. This is related to the first question I asked.

As you suggested, I created a routine that shows an image for a fixed period, with a loop around it to update the image for the required number of iterations. My screen has a refresh rate of 60 Hz, so I used a duration of 89 frame refreshes for my images to be synchronised with the sound stream. I created and played the sound in a code component, exactly as you proposed.

Everything seems to work fine, but after the fifth or sixth image a delay in the image presentation starts to appear, and the images are no longer synchronised with the sound.
I thought the problem could depend on the refresh rate of my monitor, so I switched to a different one with a refresh rate of 85 Hz and changed the duration from 89 to 126 frame refreshes, but still, after a few well-synchronised images, they drift out of sync with the sound again.

As the time I specified is fixed, do you think the problem could reside in the processing of the loop itself? Could the loop iterations cause this progressively accumulating delay? Ultimately, is it possible to perfectly synchronise images to a single 4-minute-long sound stream without accumulating delay?

An issue here can be that it takes a finite amount of time to read an image file from disk. This time might exceed the time available within a single screen refresh, meaning that you ‘drop’ a frame.

e.g. say on the second trial you are about to display a new image, which takes 25 ms to do. There isn’t time to do that within the first 16.66 ms of the trial, so the code is not able to complete in time to update the image. You might not notice anything visible on screen, as the previous image will simply keep redrawing. But what this means is that the previous image was on screen for 90 rather than 89 frames. The same thing will happen with the current one, and these errors accumulate over time. They can become noticeable with a long, consecutive task like yours. So first, try optimising the file loading and drawing:

  • use a 60 Hz screen rather than anything faster: this gives you longer on each interval. e.g. 16.66 ms compared to 11.76 ms for an 85 Hz screen.
  • ensure that the images are already scaled to the size at which they will be displayed on screen. e.g. if the image is to be 1024 × 768 on screen, make sure the image file is that size, rather than a multi-megapixel file straight from a camera. That means the file will be smaller and won’t need so much decompression and re-scaling every time it is opened.
  • use a fast computer with a good graphics card, to speed up importing and processing of the image.
  • use a solid state flash drive rather than a spinning disk hard drive to speed up the reading time from disk.
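
By the way, if you want to confirm that dropped frames really are the cause, PsychoPy can time every refresh for you. A minimal sketch, using the window’s built-in frame-interval logging (win is the window Builder creates automatically; the 0.004 s tolerance is just an assumption you can tune):

# Begin Experiment tab of a code component: time every frame
win.recordFrameIntervals = True
# count a frame as dropped if it took noticeably longer than one 60 Hz refresh
win.refreshThreshold = 1.0 / 60 + 0.004

# End Experiment tab: report how many frames were missed in total
print('Dropped %i frames in total' % win.nDroppedFrames)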

These optimisations may or may not be sufficient to give you the performance you need. If not, you need to look at loading each image before it is needed (also known as ‘caching’). Builder provides a good way to do this if you have down time within your trial, like a fixation period. But if you have consecutive image displays with no gap in between, it really becomes necessary to use some code to do what you need, along the lines of the sketch below.
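
As a rough sketch of that kind of pre-loading (the names image_files, cached_stims and this_image are placeholders, and win is the window Builder creates for you): create every ImageStim once in the Begin Experiment tab, so the files are read from disk before timing matters, then just pick and draw the already-loaded stimulus on each iteration.

# Begin Experiment tab: load every image once, up front
from psychopy import visual  # Builder normally imports this already
image_files = ['square.png', 'triangle.png']  # placeholder file names
cached_stims = {f: visual.ImageStim(win, image=f) for f in image_files}

# Begin Routine tab: no disk access here, just pick the stimulus for this iteration
this_stim = cached_stims[this_image]  # this_image = the file name for this iteration

# Each Frame tab: draw it yourself instead of using an Image component
this_stim.draw()

With only two distinct images (square and triangle), the dictionary stays tiny and nothing has to be read from disk once the experiment has started.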

Lastly, things might work acceptably if you shift to specifying durations in terms of time rather than frames, which might stop the errors accumulating, since the onsets would no longer depend on keeping an accurate count of frames that may have been dropped. This would all need to be tested very carefully, though.
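
If you do go down that time-based route, one possibility (again only a sketch, with placeholder names, building on the long_sound and your_loop_name code above) is to run a single clock for the whole stream and compute each image’s planned onset from that clock, so that one late frame does not push all the later onsets back:

# Begin Experiment tab
from psychopy import core     # Builder normally imports this already
stream_clock = core.Clock()   # one clock for the entire sound stream
SOA = 1.482                   # intended onset-to-onset interval, in seconds

# Begin Routine tab
if your_loop_name.thisN == 0:  # first iteration: start the sound and the clock together
    long_sound.play()
    stream_clock.reset()
# the onset this image *should* have, measured from the start of the sound
planned_onset = your_loop_name.thisN * SOA

You could then use a condition like stream_clock.getTime() >= planned_onset as the start condition of the image component (and planned_onset + SOA for its stop), so that even if one routine runs a frame long, the next onset is still taken from the sound’s own timeline. As said, the resulting timing would need careful testing.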

Hello Michael,

I tried to optimise the file loading as you suggested. I used a 60 Hz screen with already-scaled images on a very fast computer with a good graphics card (I could not find a solid-state drive).
I still had more or less the same delay.
As I need to have this experiment done as soon as possible, I manually added all the images I needed (164), each with its own onset time in milliseconds.
Although it is very inelegant, the synchronisation is perfect, so the problem was indeed in the amount of time needed to read the images from disk inside the loop.

The thing is that now I don’t know how to collect and store the correct responses and the RTs relative to the images.
I mean, I have 164 image components (3/4 of them show a square picture, 1/4 of them show a triangle picture).
I want the participants to press a button only when they see a triangle, and to store this information, along with the RT.
I cannot use an Excel conditions file with both images in a column, because then only one of the two images would be read and displayed per row, and therefore I cannot use a $correct_answer column either. How could I do this?

Also, I am adding a feedback routine, so that participants could see how many triangles they were able to detect.
I’m using this code in the Begin Routine tab:

nCorr = loop1.data['key_resp.corr'].sum()  # .std(), .mean() also available
meanRt = loop1.data['key_resp.rt'].mean()
msg = "You got %i trials correct (rt=%.2f)" % (nCorr, meanRt)

It worked with the version of the experiment with the loop, but now I don’t know how to use it as I don’t have a key_resp anymore. I guess the answer to this problem resides in the answer to the previous one.

Thank you so much