Description of the problem: Hi all, I have just created a new experiment, but unfortunately it treats my loops as routines and reports them as undefined. The problem is in the recognition phase; encoding has no problem. So if you could check my recognition condition file and code, I would be grateful. An example: in the recognition_cold loop I have three nested loops, trials, trials_2 and trials_3, for the familiarity routine, recollection routine and new routine respectively. In the JavaScript the error says “trialsroutinebegin” is not defined, because trials is a loop around the familiarity routine. familiarityReps, recollectionReps and newReps refer to my code components, which select only the next routine according to the participant’s response (f/r/n). So the loops themselves do not contain any trials; only the routines contain scales.
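For reference, the branching logic described above (setting each inner loop’s nReps from the participant’s key press) can be sketched in plain Python. This is a minimal sketch, not the actual code components: the function name and the tuple return are illustrative assumptions; in Builder you would simply set three variables in an End Routine code component and put `$familiarityReps` etc. in the nReps field of each inner loop.

```python
# Minimal sketch of response-dependent loop repetitions, as done in a
# PsychoPy "End Routine" code component. A loop whose nReps evaluates to 0
# runs zero times, which is the usual Builder idiom for skipping a routine.
def reps_for_response(resp_key):
    """Map the participant's key (f/r/n) to nReps for the three inner
    loops: (familiarityReps, recollectionReps, newReps)."""
    familiarityReps = 1 if resp_key == 'f' else 0
    recollectionReps = 1 if resp_key == 'r' else 0
    newReps = 1 if resp_key == 'n' else 0
    return familiarityReps, recollectionReps, newReps
```

Only the loop matching the pressed key then runs; any other key leaves all three loops at zero repetitions.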
Can you please be more specific about your question?
I checked your experiment, and it has hundreds of repeating routines and loops, making it difficult to understand.
What exactly is the problem you are facing? Which error do you receive?
Creating another experiment with the same problem but a simpler structure would be a good idea: just a few routines and one loop to demonstrate your problem.
Hi wakecarter, yes, I actually use 2023.1.3, but when I try to sync with 2023.1.3 the software and version appear as UNKNOWN. When I use 2023.1.0, I can sync it, because we made some edits in that version, so it automatically records 2023.1.0 as the original version. So, as you said, there is some confusion about routines and loops: it perceives my loops as if they were routines. I do not know how to explain it more clearly. How can I use 2023.1.3 directly?
As @Chen wrote, the way you constructed your experiment makes it very difficult to debug. You have programmed the experiment against the design principles of PsychoPy/Pavlovia, e.g. introducing countless routines with the same setup instead of reusing routines.
The fact that Pavlovia reports an unknown platform version indicates that it cannot recognise the software version in which the experiment was programmed. So there is probably a bug in the code. Does Pavlovia at least recognise the software platform?
I ran your experiment on Pavlovia in Pilot mode using version 2022.2.5 (I have not yet updated). The experiment ran without any problems. It will certainly run in Running mode.
I noticed that your last routine, debrief support, has no timeout. To end this experiment, participants must press Escape. Pressing Escape is equivalent to aborting your experiment. If you do not use the Save incomplete data option, no data will be saved. Display the debriefing support for a fixed amount of time or ask your participants to press a key to end the routine.
This is really good news! However, I cannot run it in Pilot mode. How can I use version 2022.2.5 in Pavlovia? By the way, while you were running it, did the scales for familiarity, recollection and new change according to your response? For example, when you press “f” you should see only the familiarity scale, not all of them.
You are not reusing the same routine multiple times. You have, for instance, one routine called enc_trash and another routine called enc_justice. They have the same setup, but they are two different routines. It would be better to have just one routine and determine the stimuli to be displayed in a different way.
All the different encoding routines (40?) could be condensed into just one routine. The same holds for the recognition routines. Also, the condition file you are using, recognition.xlsx, has a different number of parameters in various loops, e.g. loop recognition_sweet has 11 parameters, loop recognition_trash 13 parameters. I guess that is because you edited the condition file recognition.xlsx after setting up the loop recognition_sweet. At least this does not seem to cause a syntax problem.
Keep in mind that each routine you add will add a column to your result file.
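To illustrate the idea of condensing routines, a single conditions file could drive one reusable routine, with the routine’s stimulus component set to a variable such as `$word`. The column names below are hypothetical, not taken from your actual recognition.xlsx:

```
word,category
trash,encoding
justice,encoding
sweet,encoding
```

One loop pointing at this file then presents each row in turn, and the category column lets you tell conditions apart in the data file.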
Yes, actually this is a behavioural pilot for an fMRI experiment, so I designed the experiment according to the fMRI protocol. If I use all encoding items in the same routine, I am not sure whether we can detect the order of the stimuli in the fMRI data.
Your experiment ran for about 50 minutes, I hardly pressed a button. I just let it run. But yes, when I pressed f, a familiarity rating came up. As others and I have said before, your setup makes debugging the experiment for syntactic and semantic errors really difficult.
You could try to set the PsychoPy version to 2022.2.5 via Use PsychoPy version in the Experiment Settings menu.
There is no way to set the version directly in Pavlovia. You have to set it via PsychoPy and sync.
Simply by using a column that codes the condition. PsychoPy/Pavlovia will automatically save the presented stimulus.
The following toy example shows numbers (1-5), small letters, numbers (10-14) and capital letters in blocks. Each block is randomised. This seems similar to your encoding phase. This experiment uses one loop and two routines, one to display the stimuli and one to select the stimuli. The first two columns of the data file contain the presented stimulus and the condition to which it belongs.
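The block structure of that toy example can be sketched roughly as follows. This is a hedged sketch, not the actual demo code: the function name and the exact lists are made up to match the description above (numbers 1–5, small letters, numbers 10–14, capital letters, each block shuffled internally).

```python
import random

def build_blocks():
    """Return one flat stimulus list: four blocks in fixed order,
    each block shuffled internally."""
    blocks = [
        [str(n) for n in range(1, 6)],      # numbers 1-5
        list('abcde'),                      # small letters
        [str(n) for n in range(10, 15)],    # numbers 10-14
        list('ABCDE'),                      # capital letters
    ]
    stimuli = []
    for block in blocks:
        random.shuffle(block)               # randomise within each block
        stimuli.extend(block)               # keep the block order fixed
    return stimuli
```

In Builder the same effect is achieved with a loop per block (or a selection routine), but the ordering logic is the same: fixed block order, randomised items within each block.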
I tried to sync your demos. Unfortunately, they do not fit my experiment. I need a branched experiment that continues according to the participants’ responses, and the code you used does not let me use different scales for different responses. Thanks for your support.
In this way, can I obtain separate response data for familiarity, recollection and new? I need to see these responses separately in my output. Actually, you are right: we used a 4-point scale for recollection before, which is why I designed them separately, but my supervisors then changed the decision to 3 points for all of them, so I can use just one scale. By the way, the slider responses now appear as separate trials, independent of the stimuli. How can I adjust this? I need to see the slider response for each stimulus in my data.
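One common way to keep each rating on the same data row as its stimulus is to write extra columns from an End Routine code component with PsychoPy’s `thisExp.addData(key, value)`. The helper below is only a self-contained sketch of that idea; the names `stim`, `resp_key` and `rating` are assumptions, not components from your experiment.

```python
# Sketch: tag each rating with its stimulus and response type so that one
# slider can serve all three judgements. In PsychoPy, each entry of the
# returned dict would be written with thisExp.addData(key, value) inside
# an End Routine code component, keeping everything on the stimulus row.
def rating_row(stim, resp_key, rating):
    labels = {'f': 'familiarity', 'r': 'recollection', 'n': 'new'}
    return {
        'stimulus': stim,
        'response_type': labels.get(resp_key, 'unknown'),
        'rating': rating,
    }
```

With a single slider and a response_type column like this, you can filter the output file by response type instead of needing three separate scales.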