
Looped Routine only displays one component

URL of experiment:

https://run.pavlovia.org/andrewlampi/emotion_recognition_test1/html/?__pilotToken=c81e728d9d4c2f636f067f89cc14862c&__oauthToken=c6fd7d088b43f9d117de28aac457241e658df2262b0554c3db1dd9beefa1196d

Description of the problem:
Hi there,
I’ve been working on uploading a trial paradigm in preparation for an actual emotion recognition/categorization study. Participants first go through training routines (hard-coded, not randomized in a loop), each with three components: an image stimulus, a keyboard response, and a text legend showing the response options (e.g. Happy = S, Sad = D, etc.). The actual trials then use the exact same components, nested inside a random loop.

The training routines work exactly as intended.

But once the experiment enters the loop, only the image is presented. The text legend is absent, and no keyboard response is accepted, so the experiment never advances to the next image. This is only an issue when running the study online; it works as intended on my local device (MacBook Air running Catalina 10.15.2).

Any advice on how to avoid this problem?

Thanks so much–

The general advice for debugging is to simplify.

I took a look at your study, but there are loads of routines that are almost identical, and working out exactly what differs between them becomes hard. Start with a tiny version of your study containing just the routine you need (“recog_trial”) and see if that works. Then put it into a loop where the only difference is the thing you’re changing in your conditions file. Does it still work then?

If it does, then loops aren’t the issue, but rather a subtle difference between your many routines as they stand. If it breaks once it’s in the loop then, yes, this might be an issue with loops (although I can’t imagine what would cause that). At that point you’ll have what we call a minimal working example of the problem: there are only a few things for us to look at to work out what’s wrong, so you can share that and we can debug it for you much more effectively. We can’t easily help debug an experiment with the complexity of what you shared.
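For the loop in that stripped-down version, the conditions file can be equally minimal: a header row naming the variables, then one row per trial. The file name and column names below are assumptions for illustration; in Builder you would reference a column as, e.g., $imageFile in the image component's Image field:

```
imageFile,corrAns
faces/happy_01.png,s
faces/sad_01.png,d
```

With only one varying column (plus an optional correct-answer column for the keyboard component), any remaining failure inside the loop is much easier to isolate.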