Task crashed or froze for 30% of online participants

just thought I’d bump this up…

any thoughts?

Hi @Anthony, that error looks like you are setting the text of a text component using an actual text component, e.g., instructions_1.setText(instructions). instructions_1 and instructions are both text components, which is why you get the [object Object] text. If you change this, you can go back to using the conditions file.

Hi @dvbridges,

I’m not sure I follow your meaning. I have a loop that looks like this:

[screenshot of the loop on the flow]

and is defined as this:

[screenshot of the loop properties]

I have one text component called instructions_1 which looks like this:

[screenshot of the text component properties]

and a conditions file that looks like this:

[screenshot of the conditions file]

This setup has no code associated with it and produces [object Object].

As an aside, do you know if the \n in the text within the conditions file will be interpreted as a new line?

Thanks,
Anthony

You also have a text component called “instructions” created in Builder. I’m not sure which routine, but it is there. So what is happening is that your ‘instructions’ variable from your conditions file is being overwritten by the text component called “instructions”, so when it prints, it prints the text representation of the text object.
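If it helps, here is a minimal plain-JavaScript sketch of why the screen shows [object Object]. The object below is just a stand-in for the TextStim that Builder generates, not actual PsychoJS code:

```javascript
// Stand-in for the TextStim object that a Builder component named
// "instructions" creates; it shadows the conditions-file variable
// of the same name.
const instructions = { name: "instructions" };

// When setText() receives an object rather than a string, the object
// gets stringified, and the default Object#toString() yields
// "[object Object]".
const shownText = String(instructions);
console.log(shownText); // -> "[object Object]"
```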

Fancy that, you’re right of course @dvbridges, and renaming the conditions variable has solved that problem.

Any idea how I can include line breaks in text brought in from a conditions file?

The answer to this is to use ALT+ENTER within the Excel cell to create a new line.
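Alternatively, if you’d rather keep a literal \n in the spreadsheet cell, you could convert it to a real newline in a code component before the text is shown. A rough sketch of the idea in plain JavaScript (the raw string is just an example of what Excel hands over):

```javascript
// The cell contains the two characters "\" and "n", not a real line break,
// so replace the literal sequence with an actual newline before display.
const raw = "First line\\nSecond line";   // example value from the conditions file
const fixed = raw.replace(/\\n/g, "\n");  // now a real line break
console.log(fixed);
// First line
// Second line
```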

I’ll be in touch when I’ve finished optimising! (or the next error I can’t solve, whichever happens first!)

Thanks!


Hi @dvbridges,

It’s been a while but I’ve now finished optimising the task. I’ve completely restructured the flow, cutting the number of routines from 26 down to 14. Hopefully this will do the trick.

If the offer to take a look is still open, that would be very much appreciated.

Since we don’t really know what was causing the problems when running it for real through Prolific last time (I was unable to replicate any of the reported errors on my machine), can you think of a way I can test it to minimise the risk of it crashing for so many people?

Thanks again,
Anthony

Hi @Anthony,

Did optimising the routines resolve this issue? I’m having a similar problem running my experiment through Prolific - some participants went through the whole experiment fine, and others got stuck on “Initialising the experiment” at the start. They were also using Chrome or Edge, but I’ve successfully run through the whole task on Windows 10 with Chrome many times.

Would love to hear if you were able to figure out what was happening!

Thanks,
Hilary

Hi @Hilary,

Optimising definitely helped to a degree, but ultimately I had to split my task into four parts. My task was ~55 minutes long, and even when it was fully optimised there were problems. I dug deep into the log files to see what was happening and optimised where I could, but ultimately it seems to me that long, complex tasks are not handled all that well on Pavlovia.

I now have four instances of the task on Pavlovia so that the data can be saved intermittently, with participants automatically forwarded to the next part. I tried splitting it into two or three longer instances first, but the only way it seemed to be stable was with four. A little frustrating, as it adds a bit of time to the overall length of an already long task, but at least participants can make it through without it crashing.

Hope that helps and good luck!
Anthony


This still seems to be an issue with 2020.1.3, but I’m not sure about 2020.2.3, which should have better memory stability.

Oh really? Interesting… Unfortunately my code breaks when I sync it to Pavlovia from 2020.2.3. I think it’s something to do with a different way it handles .setText in the code components. Either that or the changes to how it handles breaking out of trial loops.

To be honest, though, since I’ve finally got a working version in 2020.1.3 (which took forever), I haven’t been bothered to dig into it too deeply, because I’ve already sunk enough hours into designing this task!

Future experiments will be coded from scratch in the latest version though.

Hi @Anthony, for splitting your task into 4 parts, does the participant need to re-enter their ID for each part, or were you able to pass it through automatically? Also, does having 4 instances mean that you need to pay separately for each part (i.e. instead of paying .20 per participant for the whole experiment, you’re paying .20 per participant for each of the 4 sections, totalling .80)? My experiment will be around half an hour and I’m trying to plan ahead a bit to protect against potential missing data. Thanks!

Hi @sawal,

No, they don’t need to enter the ID multiple times. If you go to the Online tab of Experiment Settings, you can enter the URL participants get redirected to on completion of the task under ‘Completion Link’. I give an example below, where you can see they will be redirected to the Pavlovia address of the next part, and the bit after the ‘?’ takes the participant ID field of the current experiment and populates the same field in the next.

```
$"https://run.pavlovia.org/AccountName/TaskName/html?participant=" + expInfo['participant']
```

For this same reason, participants only need paying once: they’ll only be redirected (via the completion link again) to Prolific after the final run of the task.
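As an aside, in case it’s useful to see what happens under the hood: the receiving experiment reads the ID out of the query string. PsychoJS does the equivalent of this automatically for expInfo fields, so you shouldn’t need this code yourself; a sketch with a made-up URL:

```javascript
// Sketch: extracting the forwarded participant ID from a (hypothetical) URL.
const url = new URL("https://run.pavlovia.org/AccountName/TaskName/html?participant=12345");
const participant = url.searchParams.get("participant");
console.log(participant); // -> "12345"
```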


Awesome, thanks Anthony! I’ve seen the redirect but wasn’t sure whether it would still count as a separate experiment within Pavlovia or not. It’s been easier for me to debug experiment chunks separately, so I’d prefer to keep the 3 or 4 chunks separate but linked if there aren’t any drawbacks.
Thanks for the tip re: carrying the participant ID through!

It will count as a separate experiment for Pavlovia, but not Prolific.

Hi @sawal,

As @wakecarter states, it will be charged as separate experiments on Pavlovia. In your previous message you were asking specifically about Prolific, and the costs on Pavlovia didn’t occur to me.

Hi @Anthony, yeah, I realize I wasn’t clear in my original question re: Pavlovia. It makes sense, since you are technically starting a new experiment for each Pavlovia redirect (while leaving and returning to Prolific only once), but it is unfortunate for experiments that have a few different parts.

Hi, I have tried to split my task into 3 parts as @Anthony suggested. It said that I have to assign credit to access my second part. So I assigned 1 credit to my first part, because you said that participants only need paying once, but it said again that my second part needs a credit. If I add one to each of my second and third parts, is it going to use all 3 credits?
Also, is there any option to show the ending message (“Thank you for your patience…”) only at the “real end” (after the third part)?

Hi @debarras,

Yes, when I said they only need paying once I was referring to Prolific, not Pavlovia. Sorry for the confusion.
With regard to your second question, I normally put a text component in a final routine saying something like “Thanks for completing this part of the task. Please click OK after your data has been saved and you will be redirected to the next part of the task.”


I am experiencing this issue with my task, in which participants are presented with 30 separate pairs of images and rate each pair. If I fly through the task, everything works great. If I spend a few seconds on each pair, or leave a pair up for an extended period of time, the experiment crashes and goes to the screen you depicted. I also see the same error in the JS console. Is there a solution to this besides optimising the script? The image files are quite small as it is, and it is a fairly short task (approximately 10 min).

Thanks.