Repeating running experiment through coder leads to increasing delays

Hi,

We’ve been trying to run an experiment that consists of 8 runs of a script through Coder (with the only difference between runs being the Excel file providing the trials).

After each run has completed, the screen returns to the Coder view, but there is a delay during which PsychoPy becomes unresponsive. This delays us loading the next run, and the delay appears to be cumulative: each time we run the experiment the delay gets longer, up to 3 minutes in some cases.

To be clear, there are no HDF5 files being saved, no interaction with other complex hardware, etc. It’s a simple experiment running on a PC (or Mac), displaying stimuli on a mirrored screen and collecting button responses.

This is PsychoPy 2024.1.2 and the same behaviour is observed on Windows and Mac (we thought it might be due to older hardware - apparently not!).

Any suggestions for what this might be due to, or how to fix it would be much appreciated.

Best wishes,
Jon

This is a tough one without more diagnostic information. If this is the same across 2 computers (especially of different OS), then obvious things such as disk/SSD degradation, overheating, etc. are probably ‘off the table’. When your experiment is in the unresponsive state (if I read correctly it is after a run), can you go into the terminal in OSX and run ‘top’ and look for memory or CPU pigs related to Python and other applications? I ran into something similar years ago when people would double-click instead of single-click a launcher icon, so I had 2 instances of the same experiment competing to hog CPU; looking at ‘top’ showed me both. Are you appending to one data file across runs or creating a new one each time? Can you look for a memory leak in PsychoPy? This idea comes from the logic that the PsychoPy version is the same in both cases.
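If you want to chase the memory-leak idea from inside Python, the stdlib `tracemalloc` module can compare snapshots taken between runs. A minimal sketch of the idea (the `run_once` stub and its deliberately leaky `history` list are made up, standing in for one pass of your experiment):

```python
import tracemalloc

def run_once(history):
    # stand-in for one run of the experiment; 'history' is a
    # deliberately leaky cache so the leak shows up in the report
    history.append([0] * 50_000)

tracemalloc.start()
history = []

run_once(history)
first = tracemalloc.take_snapshot()

for _ in range(5):
    run_once(history)
last = tracemalloc.take_snapshot()

# allocations that keep growing between runs float to the top here;
# in a real session you would snapshot around each run from Coder
for stat in last.compare_to(first, "lineno")[:3]:
    print(stat)
```

If the per-run delay really is a leak, the same source line should keep growing in `size_diff` after every run.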

These are all just guesses from experience, and you will have to chase this down by eliminating the possible causes one at a time. If you can set up a virtual env and use a newer PsychoPy for comparison, that might help isolate the cause too. I know very well that just getting everything to run stably is quite a feat, and moving to a newer PsychoPy and/or Python version may not be trivial.
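For the virtual-env comparison, the stdlib alone is enough to build a throwaway environment without touching your working installation; a sketch (the script name is yours to fill in, and you'd pick whatever PsychoPy version you want to compare against):

```python
import os
import subprocess
import sys
import tempfile

# create a throwaway env in a temp directory so the existing
# PsychoPy installation is left alone
env_dir = os.path.join(tempfile.mkdtemp(), "psychopy-test")
subprocess.run([sys.executable, "-m", "venv", env_dir], check=True)

# venv layout differs between Windows and Unix
bin_dir = os.path.join(env_dir, "Scripts" if os.name == "nt" else "bin")
print(f"{os.path.join(bin_dir, 'pip')} install psychopy")
print(f"{os.path.join(bin_dir, 'python')} your_experiment.py")
```

Running the same 8-run sequence from the new env and timing the between-run delay would tell you quickly whether the problem is version-specific.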

The ‘brute force’ method is to reduce your experiment to basics (a small number of trials, auto-generated responses instead of user responses, minimal screen draws, etc.) and see if the problem continues. If not, slowly add your other parts back in until it breaks. Just to be clear, there is NO network involvement or other peripherals? (It's always good to turn networking off, unplug printers, prevent auto-updates, and make sure there are no browsers running, though having the same problem on 2 different machines makes these less likely.) Poke around and report… oh, and is there by chance a large logfile being generated, i.e. serious disk/SSD writing? These can sometimes be thousands of buffered lines eating memory and slowing things down when flushed.
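On the logfile point, a quick check is to look at the log's size after each run. This sketch just simulates a run appending a lot of buffered lines to show the measurement; the filename is made up, so point it at whatever `.log` file your script actually writes:

```python
import os
import tempfile

# hypothetical log path; substitute your experiment's real log file
log_path = os.path.join(tempfile.mkdtemp(), "lastrun.log")

# simulate one run appending verbose per-trial log lines
with open(log_path, "a") as f:
    for i in range(10_000):
        f.write(f"trial {i}: stimulus drawn, response logged\n")

size_mb = os.path.getsize(log_path) / 1e6
print(f"log size after run: {size_mb:.2f} MB")
# a log that grows by tens of MB per run would point at the
# end-of-run flush as the source of the stall
```

If the file is ballooning, raising the logging threshold (so per-frame detail is not recorded) is the usual fix.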

I had the same problem occur as well. From my testing it has something to do with how the PsychoPy Runner accesses disk space (or RAM) when you run the same experiment with a different Excel file (specifically when you have custom code as part of the experiment).
The only way I found to fix this problem is to just exit out of PsychoPy after the experiment ends and then open up a new instance of it. It's probably faster to do that than to wait out the delay after every run.
You could also chain your experiment into different blocks (each grabbing a different Excel file) if each run is otherwise the same, then include a break routine in between each iteration.
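A sketch of that chaining idea. Here `run_block` and `show_break` are hypothetical stand-ins for your existing per-run code and a break routine, and the condition filenames are assumptions; inside `run_block` you would load the given Excel file (e.g. via PsychoPy's condition-import machinery), build the trial loop, and collect responses as you do now:

```python
# eight condition files, one per run; names are made up
condition_files = [f"conditions_run{i}.xlsx" for i in range(1, 9)]

def run_block(conditions_path):
    # stand-in: load conditions_path, build the trial loop,
    # present stimuli, collect button responses
    print(f"running block: {conditions_path}")

def show_break():
    # stand-in: draw a break screen and wait for a keypress
    print("--- break ---")

for i, path in enumerate(condition_files):
    run_block(path)
    if i < len(condition_files) - 1:
        show_break()
```

Since everything happens in one Python process with one window, you avoid the end-of-run teardown in Coder entirely, which is where the cumulative delay seems to occur.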

Issac