Important: policy change on Pavlovia for partial data

We recently sent out an email (copied below) letting people know of an important change in policy on Pavlovia: you can now choose what to do with partial data (store it and consume a credit, or don’t store it). Sadly it looks like that email was flagged as spam by Google, so many people didn’t see it.

The key is:

  • You can now choose to store partial data from participants, but if you do so a credit will be consumed (unless you are running under a site license).
  • By default this is turned on, but you can turn it off on the dashboard for each project.

So you might now be consuming credits when participants quit the study, and your participants might need warning that aborting the study does not delete their data.

best wishes,
Jon

Hi there,

We are writing to inform you about an upcoming change to how we handle experiment results when participants fail to complete a study (‘incomplete runs’).

TL;DR: We have introduced a new setting on the Pavlovia project pages that determines whether data from incomplete runs are stored (and whether a credit is consumed in that case), taking effect on Tuesday, June 30th.

Up until now, when a participant pressed the Esc key to interrupt a study, the partial data were saved (without a credit being consumed) whereas when a participant closed the browser tab we treated that action as consent withdrawal and, consequently, did not save the data (and did not consume a credit).

We have identified three potential downsides to this policy:

  1. Did the participant intend to leave the experiment, or did they merely close the window by mistake?
  2. Is it appropriate to keep the data from an incomplete run? We opened a discussion on the forum a month ago (Discussion: What to do with partial data when a participant quits? - #4 by wakecarter) and have come to the conclusion that this would best be left up to the researcher. We urge you to think carefully about whether a participant expects their data to be deleted when they withdraw, and what your ethics approval/information/consent form says about this scenario. If you intend to keep the partial data when your participants exit, we believe that, morally, it would be best to inform them that you do. Legally, it will depend on your local rules and laws.
  3. Should credits be consumed for partial data? Those among you working with Participant Credits rather than a Site License may or may not want to spend a credit for incomplete data.

We will be implementing the following solution:

  1. We will ask the browser to warn participants before they leave the page that data are about to be lost. You might have seen such warnings before when leaving a web page with a partially filled form. That technology is somewhat limited: it’s browser-specific, we can’t change the text included in the warning, and if a browser doesn’t permit the message at all (now or in the future) we can’t do anything about that. As you may know, browsers are constantly evolving and changing what is or is not permitted, to make sure that programmers don’t abuse the functionality to prevent people from leaving their malicious site. At first, this will apply to the latest versions of the library (from 2020.1 onward). We will propagate this behaviour back to older versions in the coming weeks. (A sketch of the browser mechanism involved follows after this list.)
    2 & 3. To make it possible for you to decide how to handle partial data, you will find a new control in the Dashboard, on the project page, in the ‘Saving Results’ section, asking you whether to save incomplete results. You can then decide to:
  • save incomplete results: a credit will be consumed for those of you not covered by a license
  • not save incomplete results: no credit will be consumed
  That new control will take effect on Tuesday, June 30th.
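
For those curious about point 1, here is a minimal, generic sketch of the browser mechanism involved (the `beforeunload` event). This is not Pavlovia’s actual implementation, and the `experimentRunning` flag is purely illustrative:

```javascript
// Illustrative flag standing in for "the participant has not finished yet";
// it is not part of the PsychoJS API.
let experimentRunning = true;

// Registering a 'beforeunload' handler asks the browser to show its own
// built-in confirmation dialog before the page is closed or navigated away.
window.addEventListener('beforeunload', (event) => {
  if (experimentRunning) {
    event.preventDefault();  // standard way to request the dialog
    event.returnValue = '';  // legacy requirement in some browsers
  }
});

// Once the data have been saved, clear the flag so the participant can
// leave without being warned, e.g.:
// experimentRunning = false;
```

Note that modern browsers ignore any custom message text and display their own generic wording, which is why the warning text cannot be changed, as mentioned above.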

Best wishes,

The Pavlovia team.

Hi,

I tried using this feature, as it seems a good idea for my experiment, which apparently is challenging for the system and tends to crash. However, all I get from aborted (but reportedly nearly finished) runs are empty CSVs. Is this the expected behaviour? If not, is there something in the way I log data that might cause this?

As my experiment is a long test which cannot be repeated, it’s getting to be quite a problem to lose so much data.

I can’t make the repository public, as the study is currently collecting data and that should be private, but I attach here the .psyexp file for the experiment, if you wish to examine the code.
eventsegmentation.psyexp (22.6 KB)