Experiment stops in Pavlovia at the beginning of a loop - different PsychoPy version?

URL of experiment:
https://run.pavlovia.org/TheaIonescu/sh-replication2025_ro/?__pilotToken=c9f0f895fb98ab9159f51fd0297e236d&__oauthToken=7d38f03e37fecee23bcbc8a4c66317ca6bad9382a261efde6fd660c56de225a8

Description of the problem:
In short, I have an experiment that includes informed consent, some demographics and then 4 separate tasks that must be presented in random order. The experiment runs fine in Builder, but when I start it in Pavlovia, it stops at a gray screen after the demographics routine, where the loop that randomizes the tasks should start.

What is weird is that this experiment is a copy of a previous one that we used in Pavlovia to assess about 200 participants in 2022. The copy was made because we had to remove some routines that were no longer necessary. However, none of the removed routines were part of the tasks, so the structure of the experiment is the same.

I found in some other threads that code components might be an issue. I have some code components that I use to show feedback to participants and to compute and save correct answers for open-ended items in the training and test loops. Is it possible that these code components work differently in the latest version of PsychoPy vs. the one from 2022 (PsychoPy 2022.1.1, Python 3.8.10)?

Hello

Pilot tokens expire after one hour. :grinning:

You can always revert to a previous PsychoPy version by setting the version in the Experiment tab (Use PsychoPy version).

Best wishes Jens

Sorry, I forgot about link expiration :woman_facepalming:.

I tried to set the PsychoPy version to 2022.1.1 from the Experiment tab and synchronized with Pavlovia. However, now the screen stays at "Initializing the experiment…" and I found the following error in the Inspect section of the site.

[screenshot: error in the browser console]

The legacy PsychoPy file is in the repository, but there is no .js file that matches the name of this file, as there is for the original one. I don’t know how to generate the .js file for this legacy task.

[screenshot: project repository file list]

I didn’t manage to fix this problem, and it is getting a bit time-sensitive.
I don’t have the other authors’ agreement to share the task file - is there another way I could get some help?

I leave below some pictures of one of the code components. I use it to show feedback to the participant based on their response (they have to type the solution to an equation) and to save the correct response in the data file. Is it possible that something is wrong here? Or in the loops?

This is the routine structure (the smallest loop is for the training trials, the middle loop is for the entire math task, and the outermost loop is the task randomization one)

This is the inside of the Math_feedback routine (where the code component is)

This is the code itself (it sets a $feedback_trial variable that I then show inside the text component)


Hello @doris.rogo

It is difficult to help you if you cannot provide details of your programming. You can always create a toy version of the experiment that still throws the error and share that. Otherwise everybody will just be guessing.

Why do you think the feedback routine is throwing the error? What is the purpose of `'"' +` and `+ '"'`? What does a solution to the equation look like?

Do you import any Python libraries? Do you have more code components? Do you have code components in a Before Experiment tab?

You can try the experiment offline in a browser. If it fails to compile into PsychoJS, the runner should tell you the line where it went wrong.

Best wishes Jens

Hello!

I am not sure that the code components are the problem. The symptom is that the experiment stops at a gray screen in Pavlovia when the task loops should start. The first 4 routines (which are not embedded in a loop) run normally, but when the tasks should start, I only see a gray screen. I read on the forum that similar problems can be caused by code components (here: Experiment running in builder but stops after 1 trial in pavlovia). This is why I started checking them. However, I am not a programmer and I am not familiar enough with coding to tell whether the problem is really there or whether I should check some other aspect of the task.

Regarding the questions, I have 4 different tasks that participants complete in a random order. For 3 of the tasks I use code components to show feedback to participants after each response they input and offer them the correct solution if their answer is wrong.

  • I do not use Python libraries.
  • I only have code components inside routines that are part of a trial loop (training and test), and the code is in the Begin Routine tab. Each code component is accompanied by a Text component that shows the feedback text.
  • The code components have 2 roles: (1) show feedback and/or (2) save the scored answer in the data file (1 if correct, 0 if incorrect)

I will give an example from the Math task:

The participant is shown an equation on the screen - say 1+2+3. They hit a button on the screen when they are finished calculating, type their result into a textbox (e.g., 6) and hit a submit-answer button. At this point, the experiment shows them feedback based on their typed answer - ‘Correct’ or ‘Incorrect. The correct answer is …’, with the correct answer for the 1+2+3 equation filled in.

This is the structure of the training routine - show equation > input answer > show feedback

I have a conditions file with all the equations and their correct answers in separate columns (the correct answers were initially entered as numbers, but then changed to strings, see below). I wanted to use this last column to determine the feedback, show the correct answer, and save the score (1/0) for each equation in the data file. To do this, I made a code component that first determines whether the answer is correct and, based on that, shows the feedback - the code I showed in the last post (if the input answer equals the correct answer from the conditions file, show ‘correct’ feedback; else show ‘incorrect’ feedback plus the correct answer).

However, because I needed to add the correct answer to the string “Incorrect. The correct answer is…”, PsychoPy wouldn’t concatenate the correct answer unless it had quotation marks around it in the Excel file (the error said it couldn’t concatenate string + integer). So all correct answers are written as “6” (with literal quotes) in the Excel file, and I wrapped the participant’s typed answer in quote characters so I could compare it with the value from the Excel file.
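
To make this concrete, this is roughly what the Begin Routine code does (a simplified sketch from memory; answer_box is a placeholder for my actual textbox component name):

# Begin Routine tab of the code component (simplified sketch;
# answer_box stands in for the actual textbox component name).
# correct_answer comes from the conditions file, where the values
# are stored with literal quotes, e.g. "6"
if '"' + answer_box.text + '"' == correct_answer:
    feedback_trial = 'Correct'
else:
    feedback_trial = 'Incorrect. The correct answer is ' + correct_answer + '.'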

Is it likely that this is the problem? What else could it be?

Thank you and sorry for the long text.

Hello @doris.rogo

Ok, a number read from Excel is usually of integer type if you just write the number without enclosing it in quotation marks ("). However, the answer given by the participant is of string type, so you need to convert both to the same type.

# compare the typed answer (a string) with the correct answer,
# converting the latter to a string in case it was read as a number
if textbox.text == str(corr_answer):
    feedbackText = "correct"
    correctAns_tr = 1
else:
    feedbackText = "incorrect. The correct result is " + str(corr_answer) + "."
    correctAns_tr = 0

# save the 1/0 score to the data file for this trial
Math_tr_trials.addData('training math correct', correctAns_tr)

should do the trick.

Did you initialize your variables in a Begin Experiment tab?

feedbackText = "Oops, that went wrong"

As I understand it, your experiment never reached the feedback routine, so I doubt the error is there.

Do you have any empty rows in your Excel file by accident? Check the number of conditions reported in the loop dialog: do you have as many conditions, i.e. rows, as you expect? Is the conditions file stored in the same directory as the experiment? And make sure no absolute path is specified for the conditions file.
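
You can also load the file yourself with a few lines of Python to see exactly what PsychoPy will read (the file name here is just an example; use your own):

from psychopy import data

# read the conditions file the same way a Builder loop does
conditions = data.importConditions('math_conditions.xlsx')
print('number of rows:', len(conditions))   # should match what you expect
print('first row:', conditions[0])          # a dict of column name -> value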

Are you using variable condition files? If so, have you included them in Online Resources under Experiment Settings?

You can disable a routine in the routine settings to determine where the experiment stops working.

As you can see, there are several places where things can go wrong. Not providing enough information, for example in the form of a toy experiment, or not giving access to the experiment, will only prolong the time it takes to find a solution. It also indicates mistrust of people who volunteer to help.

Best wishes Jens

Would it be ok if I gave you access to the project in pavlovia instead of posting the link to the experiment here?

Hello @doris.rogo

Giving access to the repository would allow me to fork the experiment and test-run it here. You can remove access to the repository afterwards.

Best wishes Jens

Thank you, I added you as a Member in gitlab. Can you access it?

Hello @doris.rogo

I can see the project, but I cannot fork it.

But anyway: when I searched the console for errors, there were none, and a new display appeared. I did not quite catch what it showed because it disappeared when I clicked in the window. Some time later a second display appeared, something like “Etapa 3” (“Stage 3”) and so on. Then a third display appeared.

Could it simply be that you misdefined some duration? PsychoPy uses seconds for durations.

Best wishes Jens

Hello @JensBoelte !

So, it seems you are right and the problem was with the durations of the routines. I set them correctly in seconds, BUT, in order to set the duration of the routine, I set the same duration for all components inside the routine instead of using the Duration field in the Routine settings button (I don’t remember whether that button was added in a newer version of PsychoPy or was already included in 2022.1.1).

The way I set the durations meant that the components would stop showing on the screen after the click (i.e., after the component that was meant to end the routine), but the routine itself would only stop when all the components had timed out, including the click - so the screen stayed gray for the remaining duration.

So the problem was fixed by setting the duration of each routine in the Routine settings button and removing the time limits of the components inside the routine where they weren’t necessary for the trial.
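
As far as I understand, an alternative would have been a small code component in the Each Frame tab that ends the routine as soon as the button is clicked (the component names below are just examples, not my actual ones):

# Each Frame tab: end the routine when the submit button is clicked
# (mouse_resp and submit_button are example component names)
if mouse_resp.isPressedIn(submit_button):
    continueRoutine = False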

Thank you very much for your help and sorry if I caused you frustration :grimacing: :sweat_smile: