Hi all,
I am running a study with 8 separate runs, each a 6-minute experiment (exactly the same structure, but different stimuli). The first few runs are usually fine; however, from around run 4-5 onwards, it takes increasingly long to start the next run after the previous one has finished, up to about 2 minutes of waiting after the last run. PsychoPy shows the "loading" cursor instead of the mouse pointer and will not let me move on.
This always happens after the experiment has ended: full-screen mode is exited and the Runner window shows, so it is not a case of the experiment failing to finish correctly. All data are also saved correctly afterwards.
While running the experiment, I have no other programs or tabs open, and any Excel files that PsychoPy reads in are also closed.
The most relevant discussion I have found so far is this one: Experiment crashes when it runs for too long - #6 by Michael . Following that thread, I monitored RAM usage while in pilot mode: it does not seem to increase over time. CPU usage does increase while the computer is "buffering" after the experiment.
I have also tried both (1) opening all 8 .py files at the beginning, running the relevant one, closing it, and then running the next one, and (2) opening the .py files one at a time, closing each when finished, and then opening the next one. It does not make a difference.
Has this happened to anyone else, and are there any known ways to reduce this waiting time after the experiment exits?
Some additional info:
OS: Windows 11 Enterprise
PsychoPy version: 2024.1.4, Py 3.8
Standard Standalone Installation?: yes
Do you want it to also run online?: no
This design only has 2 custom code features:
- A screen at the beginning and at the end of each run that waits for a trigger from the fMRI scanner; once the trigger arrives, it moves on to the first trial / exits the experiment, logging the info message "experiment over" (a simplified sketch is below, after this list).
- The ISIs have variable lengths (read in from an input file). When I specified the length as a variable from the input file, the automatic non-slip timing was not effective and led to significant delays, so I applied this solution: Non-slip timing for variable trial length information from input file - #2 by Michael , which is why I am running the .py file rather than the .psyexp (see the second sketch below).
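For reference, here is a simplified sketch of the trigger-wait logic. The names and the trigger key are illustrative, not my exact Builder-generated code, but the structure is the same:

```python
from psychopy import core, event, logging, visual

# Simplified sketch of the two trigger-wait screens (names and the trigger key
# are illustrative; the real code is Builder-generated plus a code component).
win = visual.Window(fullscr=True)
wait_text = visual.TextStim(win, text='Waiting for scanner...')
TRIGGER_KEY = '5'  # assumed: the scanner pulse arrives as a keyboard '5'

def wait_for_trigger():
    """Draw the waiting screen until a trigger keypress arrives."""
    event.clearEvents()
    while not event.getKeys(keyList=[TRIGGER_KEY]):
        wait_text.draw()
        win.flip()

wait_for_trigger()               # checkpoint 1: hold until the first trigger, then start trial 1
# ... the ~6 minutes of trials run here ...
wait_for_trigger()               # checkpoint 2: wait for the final trigger
logging.info('experiment over')  # the end-of-run info message
win.close()
core.quit()
```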
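And here is a conceptual sketch of the variable-ISI non-slip timing, i.e. anchoring each ISI's end time to one running clock so that timing errors do not accumulate. Again, the names and values are illustrative; my actual change follows Michael's linked post:

```python
from psychopy import core, visual

# Conceptual sketch only: variable ISIs scheduled against one cumulative clock.
win = visual.Window(fullscr=True)
fixation = visual.TextStim(win, text='+')

run_clock = core.Clock()         # started at the first scanner trigger
isi_durations = [1.5, 2.0, 1.0]  # in the real script these come from the input file

next_end_time = 0.0
for isi in isi_durations:
    next_end_time += isi         # cumulative target time, so slip does not accumulate
    while run_clock.getTime() < next_end_time:
        fixation.draw()
        win.flip()
```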
Initially, I thought it might be due to a high volume of information being saved/buffered during the task. However, the task just presents 4 words on the screen and records button presses and reaction times from the participants, so it should not be very demanding for the software or my computer. As it is an fMRI study, I also have two "checkpoints", one at the beginning and one at the end of the experiment, to make sure the experiment stays in sync with the timing of the fMRI scanner. I need data logging at the "info" level to confirm that this synchronisation has happened correctly (a sketch of the logging setup is below). However, I do not count or log the triggers during the course of the experiment, so this long wait should not be due to logging, as the amount of information being logged is not very high.
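For completeness, the logging setup amounts to something like this (the filename is illustrative):

```python
from psychopy import logging

# The file log runs at 'info' level so the two trigger checkpoints are recorded;
# individual scanner pulses during the run are not counted or logged.
log_file = logging.LogFile('sub-01_run-01.log', level=logging.INFO, filemode='w')
logging.console.setLevel(logging.WARNING)  # keep console output minimal
```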