Repeatedly running an experiment through Coder leads to increasing delays

This is a tough one without more diagnostic information. If the same thing happens on two computers (especially with different operating systems), then obvious causes such as disk/SSD degradation, overheating, etc. are probably ‘off the table’. When your experiment is in the unresponsive state (if I read correctly, it is after a run), can you open a terminal on OSX, run `top`, and look for memory or CPU hogs related to Python and other applications? I ran into something similar years ago when people would double-click instead of single-click a launcher icon, so there were two instances of the same experiment competing for CPU; `top` showed me both. Are you appending to a data file on each run or creating a new one? Can you check for a memory leak in PsychoPy? That idea follows from the PsychoPy version being the same in both cases.
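
If you want a quick way to watch for a leak from inside the script itself, something like the sketch below (assuming `psutil` is available, which it usually is alongside PsychoPy) prints the process’s resident memory at a fixed point in each run so you can see whether it keeps creeping upward:

```python
# A minimal sketch, assuming psutil is installed (it usually comes with
# PsychoPy). Call report_memory() at the same point in every run and watch
# whether resident memory keeps growing across repetitions.
import os
import psutil

_proc = psutil.Process(os.getpid())

def report_memory(label=''):
    """Print the current resident memory of this Python process in MB."""
    rss_mb = _proc.memory_info().rss / (1024 ** 2)
    print(f'{label} resident memory: {rss_mb:.1f} MB')

# e.g. at the end of each run of your trial loop:
report_memory('after run')
```

If the number climbs steadily even though each run does the same work, something is accumulating (stimuli, data handlers, log buffers, etc.).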

These are all just guesses from experience, and you will have to chase this down by eliminating possible causes one at a time. If you can set up a virtual environment and use a newer PsychoPy for comparison, that might help isolate the problem too. I know very well that just getting everything to run stably is quite a feat, and moving to a newer PsychoPy and/or Python version may not be trivial.
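
If you do go the virtual-environment route, a quick sanity check like this (just a sketch) run from inside the new environment confirms you are really comparing against the newer PsychoPy and Python rather than your old install:

```python
# Run this inside the new virtual environment to confirm which interpreter
# and PsychoPy version the comparison is actually using.
import sys
import psychopy

print('Python     :', sys.version.split()[0])
print('Interpreter:', sys.executable)
print('PsychoPy   :', psychopy.__version__)
```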

The ‘brute force’ method is to reduce your experiment to its basics (a small number of trials, auto-generated responses instead of user responses, minimal screen draws, etc.) and see if the problem continues. If not, slowly add your other parts back in until it breaks. Just to be clear, there is NO network involvement or other peripherals? (It is always good to turn networking off, unplug printers, prevent auto-updates, make sure no browsers are running, etc., though having the same problem on two different machines makes these less likely.) Poke around and report back… Oh, is there by chance a large log file being generated, i.e. serious disk/SSD writing? Sometimes these can be thousands of buffered lines eating memory and slowing things down when flushed.
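
If the log file does turn out to be large, one thing to try (a sketch; the file name and levels are just examples) is raising the logging level and flushing during quiet moments such as the ITI, so PsychoPy isn’t holding thousands of lines and dumping them all at once:

```python
# Sketch of throttling log output. Raising the level keeps EXP/DATA detail
# out of the log file, and flushing during the ITI spreads the disk writes
# out instead of one big dump at the end of the run.
from psychopy import logging

logging.console.setLevel(logging.WARNING)                   # quieter stdout
log_file = logging.LogFile('run.log', level=logging.WARNING)

# ... then inside your trial loop, during a quiet moment (e.g. the ITI):
logging.flush()    # write out whatever has been buffered so far
```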