Text stimuli and audio stimuli out of sync for one participant

I’m having a syncing problem with audio and text stimuli:

• I’m playing MIDI files using a MIDI player which I made.
• The Text stimulus is a TextStim
• The following is within a while loop that keeps running win.flip() until the while conditions are met.

This sets the audio playing:

if audio_stimulus.status == NOT_STARTED:

The text stimuli are timed to appear on screen at specific moments while the audio plays, for example:

if t >= 1.8 and text_stimulus.status == NOT_STARTED:
    text_stimulus.status = STARTED
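
In outline, the loop works like this (a stripped-down, self-contained sketch: the stub class and time.time() here stand in for my real TextStim, MIDI player, and win.flip() frame loop, purely to show the timing structure):

```python
import time

NOT_STARTED, STARTED = 0, 1  # stand-ins for psychopy.constants

class StubStim(object):
    """Hypothetical stand-in for the real TextStim / MIDI player."""
    def __init__(self):
        self.status = NOT_STARTED
        self.onset = None  # records when the stimulus was started

audio_stimulus = StubStim()
text_stimulus = StubStim()

clock_start = time.time()
trial_over = False
while not trial_over:
    t = time.time() - clock_start
    # In the real experiment, win.flip() runs here every frame.
    if audio_stimulus.status == NOT_STARTED:
        audio_stimulus.status = STARTED   # real code issues .play() here
        audio_stimulus.onset = t
    if t >= 1.8 and text_stimulus.status == NOT_STARTED:
        text_stimulus.status = STARTED    # real code starts drawing the text here
        text_stimulus.onset = t
    if t >= 2.0:
        trial_over = True

print(audio_stimulus.onset, text_stimulus.onset)
```

The point is that both onsets come off the same clock, so if the commands are issued on time (as they seem to be, below), any desynchronisation must be happening after the command is given.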

When I beta tested the code it worked fine on other computers (although only the MacOS tester got as far as the part of the experiment relevant to this issue), and it works fine on my own computer. However, a participant (based in Italy) has found that the stimuli are appearing out of sync. Either:
• The text is appearing early
• The audio is playing late

Checking the timing of commands on his computer vs mine, I found that:

For ‘if audio_stimulus.status == NOT_STARTED’
• This happens for me at t = 0.15s
• This happens for him at t = 0.08s
So a small difference really

And, for ‘if t >= 1.8 and text_stimulus.status == NOT_STARTED’
• This happens for him at 0.01s after 1.8s
• This happens for me at 0.02s after 1.8s
So again a small difference really

My next thought was that maybe psychopy is giving the instruction to play the audio at the right time, but the audio isn’t actually playing at that right time on his computer. I thought, perhaps incorrectly, that testing when audio_stimulus.busy() returns ‘1’ would show this. It returned 1 at exactly the same time as ‘if audio_stimulus.status == NOT_STARTED’, so that didn’t help. Is there some other way I might be able to check this?
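
One thing I could perhaps try is polling pygame.mixer.music.get_busy() in a tight loop right after issuing play(), and logging the clock time at which it first returns True. A sketch of the idea (the play and busy checks are passed in as plain callables, so the fakes below stand in for the real pygame calls):

```python
import time

def measure_onset_latency(play, is_busy, timeout=2.0):
    """Call play(), then poll is_busy() until it reports playback.

    Returns the delay in seconds between issuing play() and the first
    moment is_busy() is true, or None if playback never starts. In the
    real experiment, play would be my midiPlayer's play method and
    is_busy would be pygame.mixer.music.get_busy.
    """
    t0 = time.time()
    play()
    while time.time() - t0 < timeout:
        if is_busy():
            return time.time() - t0
    return None

# Demo with fakes: "playback" starts 0.05 s after play() is issued.
state = {"t0": None}

def fake_play():
    state["t0"] = time.time()

def fake_busy():
    return state["t0"] is not None and time.time() - state["t0"] >= 0.05

latency = measure_onset_latency(fake_play, fake_busy)
print(latency)
```

Though I suspect get_busy() only reports when the mixer *thinks* playback has started, not when sound actually leaves the sound card, so this would only give a lower bound on the true latency.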

This is his system:
Microsoft Windows 10 Home
Version 10.0.14393 build 14393
Processor: Intel® Core™ i5 CPU M 520 @ 2.40GHz, 2400 MHz, 2 cores, 4 logical processors
PC model: Lenovo ThinkPad X201

This is mine:
macOS Sierra v10.12.6
MacBook Pro (13-inch, Mid 2012)
Processor: 2.5 GHz Intel Core i5

The experiment is written in PsychoPy v1.90.2

N.B. he gets a message when PsychoPy opens that I don’t:
pyo version 0.8.7 (uses single precision)

Any thoughts?

(With apologies for it being written in a now old version of PsychoPy. I started writing the code back when it was new, I promise… it’s taken me a long time to write it. Currently the code is out and being used by participants, so I do appreciate any help that can be offered. Thanks :slight_smile: )

Hey guys. Sorry to bump this up the list, but I’m struggling a bit with this. Would really appreciate some help with it if at all possible?



Hi, I don’t know if I can be of much help but at least I can ask a few things:

  • Do I understand correctly that your participants are downloading your .py files and running them on their own computers using their own installation of psychopy?
  • Do they use PsychoPy standalone, or do they install psychopy from the command line/using a package manager or something similar?
  • If users do not use PsychoPy standalone, how do you make sure that they use the same Python version as you do?

Just googling the message you described, I noticed that pyo 0.8.7 is “available for python 2.7, 3.5 and 3.6”. If the participant were, e.g., using 2.7 while you’re on 3.5 or 3.6, that would be bound to cause some issues.
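
A quick way to check would be to have both machines run a small snippet and compare the output (the pyo import is wrapped in a try, and PYO_VERSION looked up defensively, so it also runs where pyo isn’t importable):

```python
import sys
import platform

# Print enough detail to compare the two installations side by side.
py_version = sys.version.split()[0]
print("Python:", py_version)
print("Platform:", platform.platform())
try:
    import pyo  # the backend the "pyo version 0.8.7" message comes from
    print("pyo:", getattr(pyo, "PYO_VERSION", "installed, version unknown"))
except ImportError:
    print("pyo: not installed")
```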

I know you said the experiment has been out there for a while now, but if you’re going to continue running it for a considerable time with participants in different parts of the world, you might want to see whether you could move to online testing. If that turned out to work well (I haven’t used the online functionality much personally), you’d have a bit more control over what’s happening, and wouldn’t have to walk participants through instructions that might be complicated for them.

Thanks Arboc!

In answer to your questions:
• Yes
• They’re using PsychoPy standalone (v1.90.2), which I’ve included in the experiment folder for them to install. I’ve included installers for both Mac OS and Windows. My own system is MacOS and this particular participant is using Windows
• As above. However my own copy of PsychoPy is telling me it’s using Python 2.7.12, whereas his is telling him he’s using Python 2.7.11. Not sure why they’re (admittedly very subtly…) different…

Thanks for doing the google. We’re both on Python 2.7. That said, the pyo version is probably relevant given it’s a sound timing issue… I wonder why he’s getting the “pyo version 0.8.7 (uses single precision)” message…

You’re obviously dead right about online experiments moving forward but yes, it’s out and in use so that’s not really an option now. Also it uses several sound files which (I’m guessing… I haven’t looked into online experiments yet…) wouldn’t be feasible to upload to an online experiment in terms of memory available.

Great, using standalone PsychoPy and including it in the bundle sent to participants makes sense.

I looked up the issue a bit more and it seems more like a pygame question than a PsychoPy-specific one. This old thread is probably the most relevant thing I stumbled upon: https://groups.google.com/forum/#!topic/pygame-mirror-on-google-groups/mP2P3QfSoV4 Otherwise I don’t think I know much about what can be done, as I’ve only used the standard PsychoPy sound components. From the previous thread you linked to, from '18, it sounded like Jon’s opinion was that timing discrepancies are pretty much to be expected with pygame’s sound capabilities.

Also it uses several sound files which (I’m guessing… I haven’t looked into online experiments yet…) wouldn’t be feasible to upload to an online experiment in terms of memory available.

I don’t think that in and of itself would be an issue, since I’ve seen online PsychoPy experiments which use quite a lot of images, which I can’t imagine would be bigger than your .midi files unless there is a massive amount of them or they’re unusually long/hi-res.

Sorry I couldn’t help out more, good luck :slight_smile:


Thanks for this, much appreciated!

The pygame timing thing… it seems to be a full 0.6 seconds out of sync, which is quite massive, isn’t it? I’d always presumed that when people talked about precise timing they meant far smaller units…

People on that thread seem to be talking about the buffer size. Perhaps that would help? I’m a bit clueless when it comes to this…

It’s currently set to:

freq = 44100
bitsize = -16
channels = 2
buffer = 1024

Perhaps I should try some alternative settings… I’ll look into it. Thanks again :slight_smile:

OK, so having read this I thought I’d ask him to reduce the buffer size (it was 1024, so I had him try: 512, 256, 128, 64, 32, 16, 8, 4, 2, and 1).

Apparently he took it all the way down to 1 and didn’t notice any appreciable difference in the latency.
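
In hindsight that result makes sense, I think: the buffer only holds buffer/freq seconds of audio, so even at the original setting it could only ever have added a couple of hundredths of a second:

```python
# Rough upper bound on the delay the mixer buffer itself can add
# (assumption: buffer latency is about buffer length in samples / sample rate).
freq = 44100          # samples per second
buffer_samples = 1024
latency_s = float(buffer_samples) / freq
print("%.1f ms" % (latency_s * 1000))  # roughly 23 ms, nowhere near 0.6 s
```

So whatever is causing a 0.6 s offset, it presumably isn’t the buffer size.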

Any other ideas?

This is all based upon a class that I made ages ago for playing MIDI using pygame:

import pygame
from psychopy.constants import NOT_STARTED, STARTED, FINISHED

class midiPlayer(object):
    status = NOT_STARTED
    setVolume = 1
    def __init__(self, Sound):
        self.Sound = Sound
    def play(self):
        freq = 44100     # sampling rate, Hz
        bitsize = -16    # signed 16-bit samples
        channels = 2     # stereo
        buffer = 1024    # buffer size, in samples
        pygame.mixer.init(freq, bitsize, channels, buffer)
        pygame.mixer.music.load(self.Sound)  # load and actually start the file
        pygame.mixer.music.play()
        self.status = STARTED
    def stop(self):
        pygame.mixer.music.stop()
        self.status = FINISHED
    def setSound(self, music_file):
        self.Sound = music_file
    def busy(self):
        return pygame.mixer.music.get_busy()

The participant has found that by taking:

pygame.mixer.init(44100, -16, 2, 1024)

out of the class, and having that bit set up earlier in the code, the 0.3s latency is removed. So…:
a) what consequences could that have on the class? Should that be fine to do? The player still seems to be playing the files for him, so I guess it’s fine? However, it’s so long since I built the class (and it’s the only class I’ve ever built) that I can’t remember why I had it in there in the first place…
b) is there anything you can think to do to lose the remaining 0.3s latency?
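
For clarity, the participant’s change amounts to something like the following shape (a hedged sketch rather than my actual code: the prepare() method and the injectable mixer argument are my additions here, there so the “init once, up front” structure can be checked with a stub instead of real audio hardware):

```python
# Stand-ins for psychopy.constants, to keep the sketch self-contained:
NOT_STARTED, STARTED, FINISHED = 0, 1, 2

class MidiPlayer(object):
    """Like midiPlayer, but mixer setup happens once, before any trial,
    rather than inside every play() call."""
    def __init__(self, sound, mixer):
        # mixer is passed in (pygame.mixer in the real experiment) so the
        # sketch can be exercised without audio hardware.
        self.mixer = mixer
        self.sound = sound
        self.status = NOT_STARTED

    def prepare(self, freq=44100, bitsize=-16, channels=2, buffer=1024):
        # One-time, potentially slow setup: run at experiment start.
        self.mixer.init(freq, bitsize, channels, buffer)
        self.mixer.music.load(self.sound)

    def play(self):
        # The per-trial call is now lightweight: no init, just playback.
        self.mixer.music.play()
        self.status = STARTED

    def stop(self):
        self.mixer.music.stop()
        self.status = FINISHED

# Stub mixer that records calls, standing in for pygame.mixer:
class _StubMusic(object):
    def __init__(self):
        self.calls = []
    def load(self, f):
        self.calls.append(("load", f))
    def play(self):
        self.calls.append(("play",))
    def stop(self):
        self.calls.append(("stop",))

class _StubMixer(object):
    def __init__(self):
        self.init_calls = 0
        self.music = _StubMusic()
    def init(self, *args):
        self.init_calls += 1

mixer = _StubMixer()
player = MidiPlayer("trial1.mid", mixer)
player.prepare()   # heavy setup, once, before trials start
player.play()      # inside the trial loop: cheap
print(mixer.init_calls, player.status)
```

If this structure holds, init runs once per session and each trial only pays for load/play, which seems to match what he’s seeing.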