I have an info sheet / consent survey. It has five yes/no questions on the first page, followed by two HTML questions on the second page. One of the HTML questions is displayed if not all five answers were yes; the other is displayed if all five were yes. Previously this worked as intended.
Today is the first time in a while that I've re-tested this survey, and it's broken.
I've re-done the visible-if logic with the GUI, changed logic gates, checked whether ChatGPT thinks I've done something daft, etc. It's not working, and it's not even failing predictably: changing things that shouldn't affect visibility logic (like adding or changing the logic on the second question) affects whether the first question is shown.
I've also got a debrief survey that used to work; it uses different syntax from the kind produced by the GUI. The visibility logic there is also broken. The GUI seems to read the old syntax as intended, but the survey does not display the right questions.
If you share the edited link, then that is what participants will see.
There are currently four different survey “interpreters” on Pavlovia, all of which will run any survey in your library. However, 2024.2.0 or 2024.3.0 is needed for multi-block designs.
{q1} = true and {q2} = true and {q3} = true and {q4} = true and {q5} = true
This works in preview, but not when the survey is actually run.
{block_1/q1} = true and {block_1/q2} = true and {block_1/q3} = true and {block_1/q4} = true and {block_1/q5} = true
This works in neither preview nor a real run.
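For context, here is a stripped-down sketch of the structure I'd expect in the survey JSON. The boolean question type, titles, HTML content, and element names are placeholders/assumptions on my part; the first visibleIf string is the one quoted above, and the second is one way of writing the "not all yes" condition.

{
  "pages": [
    {
      "name": "page1",
      "elements": [
        { "type": "boolean", "name": "q1", "title": "Consent item 1" },
        { "type": "boolean", "name": "q2", "title": "Consent item 2" },
        { "type": "boolean", "name": "q3", "title": "Consent item 3" },
        { "type": "boolean", "name": "q4", "title": "Consent item 4" },
        { "type": "boolean", "name": "q5", "title": "Consent item 5" }
      ]
    },
    {
      "name": "page2",
      "elements": [
        {
          "type": "html",
          "name": "all_yes_message",
          "visibleIf": "{q1} = true and {q2} = true and {q3} = true and {q4} = true and {q5} = true",
          "html": "<p>Thank you. Please continue to the study.</p>"
        },
        {
          "type": "html",
          "name": "not_all_yes_message",
          "visibleIf": "{q1} <> true or {q2} <> true or {q3} <> true or {q4} <> true or {q5} <> true",
          "html": "<p>You have not agreed to all items, so you cannot take part.</p>"
        }
      ]
    }
  ]
}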
Incidentally, I used single digits because these questions inherit details from the name of the survey in the experiment, so you end up with names like consent/consent1, which just looks ugly. If this is a potential problem in general, the default interaction between Builder and surveys is going to push people toward using things like single digits.
Same behaviour: it works in 2024.1.0, and in 2024.2.0 it only works in preview. Whatever is going on seems pretty fundamental and not easily fixable by an end user amending a survey.