Scoring expressions now broken (work in preview mode, but not in official 'run' or data output)

I am having trouble with the scoring expressions in a survey that used to score properly. The numbers crunch fine in preview mode, but when I burn a testing credit and ‘run’ for real, the scoring fails. While the individual items in the survey still appear to output the proper values, all of my scoring expressions (some more complicated than others) seem to be broken. After some preliminary support from the Pavlovia team, we suspect this has to do with the somewhat recent ‘blocking’ structure introduced in Pavlovia. I’m wondering whether others are coming across this issue and whether there’s an easy fix. I ran this survey on ~120 people in July and was hoping to follow up and re-administer it later this month, but can’t get it working.

Have you tried running it with the older link?

Ah ha, thanks for the idea.

Will give it a try tomorrow. Thanks!

Update: When I revert to an older version of the survey, the scoring works as it should, so it would indeed be possible to re-administer the survey exactly as it ran in the summer of 2024. However, because I want to add a few new questions when I administer it this time around, I sought a more thorough fix. The Pavlovia support crew had suggested that, because the underlying structure of a Pavlovia survey has changed since I designed the original survey, I should add the block name to every item referenced in my scoring expressions. For example, an expression that used to read “{PANAS.1} + {PANAS.2} + {PANAS.3}” would be changed to “{block_1/PANAS.1} + {block_1/PANAS.2} + {block_1/PANAS.3}”. This was also the change I needed to make so that my dependent logic worked and a question appeared when the right conditions were met (e.g. someone answered yes to a previous item). Thanks to @Becca and @wakecarter for the ongoing support in troubleshooting this one!
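
In case it helps anyone else hitting the same thing, here is roughly what the change looks like. Note that `block_1` is just a placeholder for whatever your block is actually called in the new survey structure, and the dependent-logic line at the end is only a sketch of the same idea applied to a show/hide condition (the item name and answer value there are illustrative, not from my survey):

```
Scoring expression, before (pre-blocking structure):
  {PANAS.1} + {PANAS.2} + {PANAS.3}

Scoring expression, after (block name prepended to each item reference):
  {block_1/PANAS.1} + {block_1/PANAS.2} + {block_1/PANAS.3}

Dependent logic needs the same prefix, e.g. a condition along the lines of:
  {block_1/previous_item} = 'Yes'
```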