
Allowing others to pilot your experiment? (on Pavlovia or other platforms)

Previously titled:
“Piloting experiments from Pavlovia’s Explore page only sometimes works”

So I checked the Pavlovia site for more info, and it seems that piloting others’ experiments isn’t supposed to work at all. So the issue (for the Pavlovia developers, at least) is that it sometimes DOES work, rather than that it sometimes doesn’t. I’m leaving the original post below, since the inconsistency probably still needs to be addressed, but my main point was that I’d like to find a way to let others demo my experiments (without running them for data, since I don’t need the data) so I can build a portfolio. Pavlovia now seems to be off the table… so does anyone have experience running PsychoJS experiments on another platform, or on a website built from scratch? Thanks!

Hi, I’m trying to figure out why clicking “run” on an Explore-page experiment that is set to piloting sometimes lets you run it successfully, while other times it gives this error:

[experiment name] is currently in PILOTING mode but the pilot token is missing from the URL.

If you are the experiment designer, you can pilot it by pressing the pilot button on your experiment page.

Otherwise please contact the experiment designer to let him or her know that the experiment status must be changed to RUNNING for participants to be able to run it.

For example, here are a few cases that run and some that don’t, all of which are set to piloting. Note: this is as of 05/05/2021, and these experiments could be deleted or change status; I tried to choose examples that looked like they’d be available for a while.

Experiment: Julia Sadka / Lexical Decision Task · GitLab
URL when you click “run”:

Experiment: Visual Cognition Lab / FaceDiscriminationTaskPilot04 · GitLab
URL when you click “run”:

Experiment: gishi / MentalRotationTask · GitLab
URL when you click “run”:

Experiment: Divya Chari / Digit Span · GitLab
URL when you click “run”:

I provided the run links so you can see that there’s no pilot token in the URLs of the first two, yet they run anyway, so I don’t understand how the problem with the second two could be a missing token, as the error claims. I also noticed that the two that do run save data to GitLab when I partially run them; I don’t understand why that happens when they’re set to piloting rather than running. Does anyone have any idea what the difference is?

If there’s no reliable way to let anyone other than the experimenter pilot an experiment on Pavlovia, I’d like to find another way to do this so I can start building a UX portfolio with my experiments (meaning I don’t even need to save any data from them). If anyone has other ideas, please let me know, including hosting them on another platform or getting them to run on a website from scratch!

A pilot link is only valid for about an hour.

What I do is set the experiment to running, add a small number of credits, turn off save incomplete data and add a final routine which doesn’t end (sometimes containing feedback).
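The “final routine which doesn’t end” trick works because a Builder routine only advances once every component in it has finished, so a component with a blank stop time keeps the routine alive forever. Here is a toy sketch of that frame loop; the names (`runRoutine`, `isActive`) are invented for illustration and are not the actual PsychoJS API:

```javascript
// Toy model: a routine loops until all of its components have finished
// (all names here are made up for the demo, not real PsychoJS internals).
function runRoutine(components, maxFrames) {
  let frame = 0;
  // The routine keeps looping while any component is still active.
  while (components.some((c) => c.isActive(frame)) && frame < maxFrames) {
    frame++;
  }
  return frame;
}

// A text component with a set duration finishes after its stop frame...
const timedText = { isActive: (f) => f < 60 };
// ...but a component with a blank duration never finishes on its own.
const endlessText = { isActive: () => true };

console.log(runRoutine([timedText], 100000));   // 60: routine ends normally
console.log(runRoutine([endlessText], 100000)); // 100000: still "running" at the cap
```

So with an endless final routine, the participant sees the last screen indefinitely and the run is never marked complete.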

You can try out my demos here


Oh! This is a great idea, thank you!

Instead of having a routine that runs indefinitely, I added a line of code that quits automatically while still marking the run as ‘incomplete.’ I created a new routine at the end containing a code component and a blank text component; the code is:

Begin Routine:

// quitPsychoJS(message, isCompleted)
// message (str): text that displays on exit dialogue box
// isCompleted (bool): consider the run complete (true) or incomplete (false)

quitPsychoJS('', false);

The duration of the text is arbitrary (and it could even be set indefinitely), but it needs to be long enough that the experiment can successfully exit without continuing to run more code.

More explanation about this that is probably unnecessary but here for anyone who’s curious:

Note: This all assumes the save-incomplete-data setting is off, so whether or not the data downloads after fully piloting an experiment indicates whether the run was marked complete or incomplete.

In the JS code, quitPsychoJS() is already called in two scenarios:

  1. quitPsychoJS('', true) when a participant fully runs the experiment, so the data is complete.
  2. quitPsychoJS('The [Escape] key was pressed. Goodbye!', false) when they exit the experiment early, so the data is incomplete.

I changed the message in my call so it would be distinguishable from the first. When I tried inserting this code in a few different places, the other calls still seemed to run after mine, because I got some odd results:

  • When code is added to the already existing final routine instead of a new one:
    • Message displayed: Varies… sometimes blank (#1), sometimes correct, sometimes it switches from one to the other.
    • Data downloads: Yes, so it was incorrectly marked as complete.
    • What I think happened: The code in #1 is supposed to occur immediately after the end of the last routine, so it seems like my manual call to quitPsychoJS didn’t stop more code from being executed, meaning my code and #1 were both executed.
  • When code is added to its own routine without the text component:
    • Message displayed: Mostly #2, but I’ve seen it switch from correct to #2 also.
    • Data downloads: No! It was marked as incomplete.
    • What I think happened: #2 was still executed. The condition for that is if the escape button is pressed OR if psychoJS.experiment.experimentEnded, so I assume that registers as true after my code is executed, meaning my code and #2 are both executed.

So, if the routine is extended for whatever length of time by adding another component that has a set duration, it keeps that other code from also executing and confusing the parameters.
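The double execution described above can be reproduced in miniature: once any call to quitPsychoJS() marks the experiment as ended, a later generated check along the lines of `if (escapePressed || psychoJS.experiment.experimentEnded)` also fires. This is a toy model with invented names, not the real PsychoJS internals:

```javascript
// Toy model of the double-call problem (all names invented for the demo).
const experiment = { ended: false };
const calls = [];

function quitPsychoJS(message, isCompleted) {
  experiment.ended = true;          // quitting marks the experiment as ended
  calls.push({ message, isCompleted });
}

// 1. The manual call in the code component fires first:
quitPsychoJS('', false);

// 2. Later, a generated handler runs a check resembling
//    `if (escapePressed || psychoJS.experiment.experimentEnded)`...
const escapePressed = false;
if (escapePressed || experiment.ended) {
  // ...so it calls quitPsychoJS again with its own arguments.
  quitPsychoJS('The [Escape] key was pressed. Goodbye!', false);
}

console.log(calls.length); // 2: both calls ran, and the later one wins
```

This matches the observed behaviour: the buffer provided by a text component with a set duration lets the first quit finish cleanly before any later call can overwrite its message or completion flag.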

P.S. Sorry for the deleted post, I somehow posted it prematurely and panic-deleted instead of editing.
