Pavlovia: Session file variables not transferred to log file for recent participants (first observed Sept 2, 2019)

URL of experiment:
https://run.pavlovia.org/kogpsy/dfg_ens_de_003/html/
and
https://run.pavlovia.org/kogpsy/dfg_ens_us_001/html/

Description of the problem:
The experiment was running perfectly and we collected valid data in mid-to-late August. When re-collecting some participants at the beginning of September with the very same experiment, data was suddenly missing from the log files generated on Pavlovia. The log files still include the variables created by the experiment, but they no longer contain the data transferred from the session files used for the trial loop.

Example of lines from a correct log file:

5408,probe_diameter_at_test_start,antwort_groesse,trials.thisRepN,trials.thisTrialN,trials.thisN,trials.thisIndex,trials.ran,trials.order,trialnummer,rot1_groesse,rot1_xloc,rot1_yloc,rot2_groesse,rot2_xloc,rot2_yloc,rot3_groesse,rot3_xloc,rot3_yloc,blau1_groesse,blau1_xloc,blau1_yloc,blau2_groesse,blau2_xloc,blau2_yloc,blau3_groesse,blau3_xloc,blau3_yloc,gruen1_groesse,gruen1_xloc,gruen1_yloc,gruen2_groesse,gruen2_xloc,gruen2_yloc,gruen3_groesse,gruen3_xloc,gruen3_yloc,probe_xloc,probe_yloc,participant,date,expName,psychopyVersion,frameRate
,44,44,1,0,30,0,1,29,4,51.62768358,-144,52,56.79876052,-51,49,51.30158552,257,140,39.09265065,44,-142,37.99933842,256,-58,48.0677359,-144,-157,47.08880931,254,-155,48.98064732,-156,-46,50.50008863,-147,148,-51,49,15KF57,2019-08-15_09h15.18.614,experiment,3.1.2,60
,67,67,1,0,30,1,1,29,14,37.127822,-55,142,47.11021144,-157,153,41.36627356,-240,-150,61.13070583,54,-50,85.52061295,46,152,87.07777725,-41,-155,84.85695295,244,-45,77.28523486,-249,-52,98.42412362,51,46,-41,-155,15KF57,2019-08-15_09h15.18.614,experiment,3.1.2,60
,88,88,1,0,30,2,1,29,22,26.8098682,242,-43,55.62624008,249,160,45.97597117,251,56,20.55368169,-248,40,19.08521902,-52,53,40.69083059,156,140,47.85675197,-43,-158,59.63812839,-248,-142,59.56995501,46,-43,242,-43,15KF57,2019-08-15_09h15.18.614,experiment,3.1.2,60

Example of lines from the problematic log file created in the last days:

6179,probe_diameter_at_test_start,antwort_groesse,participant,date,expName,psychopyVersion,frameRate
,26,89,17CJ76,2019-09-02_12h36.11.502,experiment,3.1.2,60
,77,40,17CJ76,2019-09-02_12h36.11.502,experiment,3.1.2,60
,40,101,17CJ76,2019-09-02_12h36.11.502,experiment,3.1.2,60

Important detail:
If we run the experiment by hand, the log files are still created correctly. Only participants sent from MTurk to the experiment result in corrupt log files. You could check your server log files to see what additional parameters MTurk adds to the URL and whether they might cause the issue.

Further details:
Both experiments are hosted on Pavlovia (created with Builder 3.1.2) and participants are recruited through MTurk, with participant codes set through a URL parameter, i.e. something like: https://run.pavlovia.org/kogpsy/dfg_ens_us_001/html/?participant={participant_code}, where {participant_code} is set via MTurk batches to actual participant codes, such as 1, 2, etc. (Note: actual participant codes include some letters so that they are harder to guess, and the experiment will not run properly if you do not enter a correct participant code. @pavlovia team: if you need a valid participant code for testing, please send me a PM and I will provide one.)
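Side note, for anyone wanting to inspect the extra parameters client-side rather than in the server logs: a minimal diagnostic sketch, assuming it runs in a code component in the browser and relying only on the standard URLSearchParams API (the url_ column prefix is a purely illustrative naming choice), could be:

// Diagnostic sketch: record every URL query parameter into the data file,
// revealing what MTurk appends to the experiment URL in addition to the
// participant code set by the batch.
const params = new URLSearchParams(window.location.search);
for (const [key, value] of params.entries()) {
    psychoJS.experiment.addData('url_' + key, value);
}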

Potentially important:
We also bought some Pavlovia credits on September 2, as we did not know that it would stay free for one additional month. I am not sure, but the logging errors might have started only after we bought the credits, so the issue might also be related to the credits being in our account.

I hope you can identify and resolve the issue soon, because we have tried to re-collect the participants multiple times, always with the same error.

I don’t think this is related to credits, but we’ll look into the issue right away :frowning:

I was just getting my experiment ready for testing online and I have the same issue.

Variables supplied to PsychoJS for trials are not being recorded. Other variables remain unaffected (keyboard responses, reaction times, anything recorded in the expInfo object, and any manually recorded variables put in the JS code).

I used PsychoPy Builder 3.1.5 (along with custom code) for the project.

As a temporary workaround, you should be able to add custom JS at the end of the trial's routine for each trial variable, to ensure the variables are recorded in the output spreadsheet:
psychoJS.experiment.addData('variable_from_trial', variable_from_trial);

Also, to clarify, I encountered this just when piloting my study. After the pilot session, a .csv with the experiment results was downloaded, but the trial variables were missing. I've updated my experiment's JavaScript with the solution above, but won't be able to properly test it until my Pavlovia experiment updates itself with the new code from GitLab; in my experience this tends to take a couple of hours or so.

Update:
I just tested this and it works as a temporary fix. You need to add the code to the end of a routine in your trial loop. Some example code:
psychoJS.experiment.addData("stim_id", trials.trialList[trials.thisIndex]["stim_id"]);

where trials is the name of your TrialHandler object and stim_id is whatever column you want to record from your conditions file for your trials.
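A more general version of this workaround, sketched here under the assumption that trials is your loop's TrialHandler object (as above), copies every column of the current conditions-file row in one go:

// Sketch: copy every column of the current conditions-file row into the
// output, so all trial variables survive even when the library fails to
// transfer them automatically.
const currentTrial = trials.trialList[trials.thisIndex];
for (const column in currentTrial) {
    psychoJS.experiment.addData(column, currentTrial[column]);
}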


Dear @jon, do you have any new insights into what might have introduced this bug to Pavlovia and when it is going to be fixed? We would like to complete data collection for the two studies. Thanks!


Hello @frank.papenmeier,

I am sorry to read that your experiment stopped working. Thank you for providing such a detailed explanation!

The issue here does not have to do with MTurk but, rather, with the fact that your experiment is using the generic, latest version of the library (i.e. core.js, data.js, etc.) and that we recently made changes to it in order to better handle certain loop scenarios.

As you may know, Pavlovia and PsychoJS are still very much under active development. We try to make changes to the library and to the back-end as transparent as possible to experiment designers and participants, but we regularly need to make deep changes, which sometimes also require the JavaScript code to be generated in a different way.
What happened here is that we altered the way PsychoJS handles loops, in order to accommodate more scenarios. To do so, we had to change both the library and the code generation. Unfortunately, because you are using the generic, latest version of the PsychoJS library instead of a specific version, you only got half of it: your experiment with the old code now uses the new library. Hence the problem.

The easiest way to deal with your issue is to modify the head of your experiment.js file and use the 3.1.0 version of the library, i.e.:

import { PsychoJS } from 'https://pavlovia.org/lib/core-3.1.0.js';
import * as core from 'https://pavlovia.org/lib/core-3.1.0.js';
import { TrialHandler } from 'https://pavlovia.org/lib/data-3.1.0.js';
import { Scheduler } from 'https://pavlovia.org/lib/util-3.1.0.js';
import * as util from 'https://pavlovia.org/lib/util-3.1.0.js';
import * as visual from 'https://pavlovia.org/lib/visual-3.1.0.js';
import { Sound } from 'https://pavlovia.org/lib/sound-3.1.0.js';

I have tested it on my end and it is working like a charm. This will also protect you against future changes.

Alternatively, you could regenerate your experiment code with the latest version of PsychoPy. That should also work.

I completely understand that you could not possibly have guessed that we made deep changes, and I apologise for the mishap. @jon and I have been thinking about ways to clearly communicate such situations to experiment designers. We should have a solution in place in the coming weeks, most probably using the message section of the pavlovia.org dashboard, emails, and a warning to designers using a generic version of the library before they change their experiment status to RUNNING.

Certainly, the take-home message is that once you are satisfied with a given version of the library and with your experiment code, it is probably a good idea to "lock in" the library by using a specific version, rather than the latest, generic version, which is susceptible to change.

Alain


Hello @kevinhroberts,

I believe your problem is of the same nature as that of @frank.papenmeier, to whom I have just replied (see above). I would encourage you to use the 3.1.0 version of the library, or to regenerate the code using the latest version of PsychoPy.
Let me know if the issue persists!
Cheers,

Alain

Dear @apitiot,

Thanks for this detailed information. I will try it soon and report back whether it worked.

As I generated the experiment using Builder, I suggest that Builder should "lock in" the library instead of using the latest generic version. That way, one would get reproducible results when using a specific version of PsychoPy/Builder.

All the best,

Frank.

Hi @frank.papenmeier, you can select which version of PsychoPy you wish to use in Builder from the Experiment Settings > Use version drop-down menu. This will enable PsychoPy to write the required version into the JavaScript, and also to compile the code using that particular version of PsychoPy.

Hi @dvbridges,
just tried setting the version the way you described, but it sets the lib versions as follows, which looks wrong to me (the version number should not be quoted, should it?):
import { PsychoJS } from 'https://pavlovia.org/lib/core-'3.1.0'.js';
import * as core from 'https://pavlovia.org/lib/core-'3.1.0'.js';
import { TrialHandler } from 'https://pavlovia.org/lib/data-'3.1.0'.js';
import { Scheduler } from 'https://pavlovia.org/lib/util-'3.1.0'.js';
import * as util from 'https://pavlovia.org/lib/util-'3.1.0'.js';
import * as visual from 'https://pavlovia.org/lib/visual-'3.1.0'.js';
import { Sound } from 'https://pavlovia.org/lib/sound-'3.1.0'.js';
Thus, I went with editing the experiment.js by hand as suggested by @apitiot and this seems to work so far.
All the best,
Frank.

Thanks @frank.papenmeier, looks like a bug in version 3.1.0. If instead you choose 3.1, or 3.1.1, the output should be correct. That would save having to edit the JS files each time.

Dear @apitiot
I have been experiencing a similar problem with my log files (see Incomplete Log Files in Pavlovia - #2 by jon) and would like to see if your suggested solution works. Where exactly do I implement this change? Is this something that I do in the code of my project on GitLab? Or should I do it in Builder and re-upload the experiment to Pavlovia?

Hi,

I am running into the same problem, even after updating the URLs in the import section of the script. Any idea what could solve the issue?

OK, I emptied the cache and that solved the issue – phew!

I also wanted to underline how important it would be to set up some sort of communication channel about upcoming changes, so as to prevent unwanted mishaps.

Thanks!


The options that Alain is pointing to are to either:

  • recompile your script from the latest version of PsychoPy
  • compile your script with a fixed version (in the experiment settings select useVersion 3.1.0 or similar)

The general recommendation is that once an experiment is working and ready to start data collection, you should fix its version at the version you have been developing on. So you would work on the latest version during development, but then set Use version when you run the study, to stop it changing any further. You won't benefit from updates and new features, but your study won't break or change behaviour.


Hi @jon, thanks for the summary. But please note that using 3.1.0 in the experiment settings might be a bad idea, given the bug that I reported above (message #9 in this thread).

Yes, I didn't mean to suggest that people should use 3.1.0, just that whatever version you were using when you made the experiment work is the version you should then fix with this mechanism. The concept applies to the Python interface as well: it allows you to prevent future versions from changing your experiment (for better or worse) by running your script against a specific target version.

Hello @rob-linguistics13,

Might I gather your thoughts on the matter of communication? We are planning to use the message section of the dashboard, which was implemented for that specific purpose, and emails (but only for important communications). How does that sound to you?

Alain


That's good. Maybe it would also be a good idea to put a short notice on the GitHub homepage of PsychoJS and/or on the API documentation page (perhaps with a link to the dashboard page where more information is provided)? This may be especially useful for those who work on the JS files directly rather than going through Builder.

Thanks!!

I've also run into this problem and am implementing the fixes now. I had collected a couple of datasets which have no actual data in them.

Oddly, both outputs (those with the info logged and those without) say they were produced using 3.1.0. I guess this is the Builder version used rather than the version of PsychoJS?

It would be good to log the version of PsychoJS used in the csv; it would make finding the root of these things a bit easier.
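Until the library records this automatically, a minimal manual sketch, assuming you have pinned the library version in your import URLs (the constant name is purely illustrative and must be kept in sync with those URLs by hand):

// Hypothetical workaround: record the library version pinned in the import
// URLs at the top of experiment.js, so every CSV row documents which
// PsychoJS build produced it.
const PINNED_PSYCHOJS_VERSION = '3.1.0';  // keep in sync with the lib URLs
psychoJS.experiment.addData('psychojsVersion', PINNED_PSYCHOJS_VERSION);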

From a user perspective, shouldn't the default behaviour be to compile with fixed versions of all dependencies? The documentation for the online features doesn't mention that selecting a fixed version is recommended, or that this affects how the exported HTML/JS is compiled (maybe a drop-down in the 'Online' tab of the settings would make sense).

The issue of how/when to fix the version number is tricky. We want people to be using the latest improvements to the code, but we also want them to be able to prevent any further updates.

a) My feeling had been that we should offer an 'unversioned' version that is updated constantly, but also allow users to fix their study to a particular version, which is the model we take for the PsychoPy Python lib. The optimal approach would then be to develop your study in the unversioned (roughly latest) version so that you benefit from improvements, but then, when you start running the experiment for real, fix it at the version you were on at that point to provide a fair degree of future-proofing (nothing is ever truly future-proofed, even in a 'containerised' study). The problem is that people aren't aware of that and probably wouldn't notice it in the documentation anyway!

b) The alternative, and I think we'll start doing this as of PsychoPy 3.3, is to always compile using a fixed version, but to update that according to whatever version is currently installed. People can still select a particular version if they want, so they can still compile a 3.2 script from 3.3, but the default will be fixed at the installed version.

Using (b) has the advantage that people won't end up with conflicts between their script and the library version, but the disadvantage that people will more often run old versions of the lib (because they don't update PsychoPy very often). I was talking with @alain and @dvbridges and we agreed this was probably worth it, but I'm happy to hear other people's views.