Split a big dataset to speed up the loading of the experiment

Hello,

I’m writing this post because I’m currently developing an online experiment on Pavlovia, and I’ve run into an issue I don’t understand.

Here’s how my experiment works:
Four images are displayed to the participant, and they must select one. An audio file is played to tell them which image to choose.

Each of the four images (Stim1 to Stim4) and the audio file are set as trial parameters, pulled from a conditions file.
There are four difficulty levels, each with its own stimulus database. This results in several hundred, possibly over a thousand, media files in total.

I’m concerned that loading such a large resource base might be too demanding, which could negatively affect the performance of the experiment.
However, in a single session, only 20 trials will be run, meaning that no more than around 100 files are actually needed.

Each of my stimulus sets is created from an Excel file.
So I was wondering:
Would it be possible, at the launch of the experiment, to generate a smaller file that randomly selects 5 combinations per difficulty level, and then have the experiment use only this reduced file?

Is that feasible in Pavlovia/online mode, or would I need to rethink my approach?
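
To make the idea concrete, here is a minimal sketch of what I have in mind, for example in a Begin Experiment code component when running locally. The file names (level1.xlsx to level4.xlsx), the column layout, and the use of pandas are just assumptions for illustration; an online run would need a JavaScript equivalent, since pandas is not available in the auto-translated PsychoJS code.

```python
# Rough sketch (assumed file names): at experiment start, sample 5 rows
# from each difficulty level's Excel file and write one small conditions
# file that the trial loop then uses.
import pandas as pd

sampled = []
for level in range(1, 5):
    full = pd.read_excel(f'level{level}.xlsx')  # full stimulus database for this level
    sampled.append(full.sample(n=5))            # 5 random combinations per level

reduced = pd.concat(sampled).sample(frac=1)     # shuffle the 20 selected trials
reduced.to_csv('session_conditions.csv', index=False)
# The loop's Conditions field would then point at 'session_conditions.csv',
# so only the media files referenced in these 20 rows are needed for the session.
```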

Thanks in advance, and have a great day!

Hello @Magicdjez

Would this help: Wakefield's Daily Tips - #54 by wakecarter?

Best wishes Jens