
Pavlovia: Session file variables not transferred to log file for recent participants (first observed Sept 2, 2019)

URL of experiment:

Description of the problem:
The experiment was running perfectly and we collected valid data in mid-to-late August. When we re-collected some participants at the beginning of September with the very same experiment, data was suddenly missing from the log files generated on Pavlovia. The log files still include the variables created by the experiment, but they no longer transfer the data from the session files used for the trial loop.

Example of lines from a correct log file:


Example of lines from the problematic log file created in the last days:


Important detail:
If we run the experiment by hand, the log files are still created correctly. Only participants sent from MTurk to the experiment result in the corrupt log files. You could check in your server log files which additional parameters MTurk adds to the URL and whether they might be causing the issue.

Further details:
Both experiments are hosted on Pavlovia (created with Builder 3.1.2), and participants are recruited through MTurk with participant codes set through the URL parameter, i.e. something like {participant_code}, with {participant_code} being set by the MTurk batches to actual participant codes such as 1, 2, etc. (Note: actual participant codes include some letters so that they are harder to guess, and the experiment will not run properly unless you enter a correct participant code. @pavlovia team: if you need a valid participant code for testing, please send me a PM and I will provide one.)

Potentially important:
We also bought some Pavlovia credits on September 2, as we did not know that it would stay free for one additional month. I am not sure, but the logging errors might have started to occur only after we bought the credits, so the issue might also be related to the credits being in our account.

I hope you can identify and resolve the issue soon, because we have tried to re-collect the participants multiple times, always with the same error.

I don’t think this is related to credits, but we’ll look into the issue right away :frowning:

I was just getting my experiment ready for testing online and I have the same issue.

Variables supplied to PsychoJS for trials are not being recorded. Other variables remain unaffected (keyboard responses, reaction times, anything recorded in the expInfo object, and any variables recorded manually in the JS code).

I used PsychoPy Builder 3.1.5 (along with custom code) for the project.

As a temporary hack, you should be able to add custom JS at the end of the trial's routine for each trial variable, to ensure the variables are recorded in the output spreadsheet:
psychoJS.experiment.addData('variable_from_trial', variable_from_trial);

Also, to clarify, I encountered this just when piloting my study. After the pilot session, a .csv with the experiment results downloaded, but the trial variables were missing. I've updated my experiment's JavaScript with the solution above, but won't be able to properly test it until my Pavlovia experiment updates itself with the new code from GitLab; in my experience this tends to take a couple of hours or so.

I just tested this and it works as a temporary fix. You need to add code to the end of routine in your trial loop. Some example code:
psychoJS.experiment.addData("stim_id", trials.trialList[trials.thisIndex]["stim_id"]);

where trials is the name of your TrialHandler object and stim_id is whatever you want to record from your condition file for your trials.


Dear @jon, do you have any new insights on what might have introduced this bug to Pavlovia and when it is going to be fixed? We would like to complete data collection for the two studies. Thanks!


Hello @frank.papenmeier,

I am sorry to read that your experiment stopped working. Thank you for providing such a detailed explanation!

The issue here does not have to do with MTurk but, rather, with the fact that your experiment is using the generic, latest version of the library (i.e. core.js, data.js, etc.) and that we recently made changes to it in order to better handle certain loop scenarios.

As you may know, Pavlovia and PsychoJS are still very much under active development. We try to make changes to the library and to the back-end as transparent as possible to experiment designers and participants, but we regularly need to make deep changes, which sometimes also require the JavaScript code to be generated in a different way.
What happened here is that we altered the way PsychoJS handles loops, in order to accommodate more scenarios. To do so, we had to change both the library and the code generation. Unfortunately, because you are using the generic, latest version of the PsychoJS library instead of a specific version, you only got half of the change: your experiment's old code now runs against the new library. Hence the problem.

The easiest way to deal with your issue is to modify the head of your experiment.js file and use the 3.1.0 version of the library, i.e.:

import { PsychoJS } from '';
import * as core from '';
import { TrialHandler } from '';
import { Scheduler } from '';
import * as util from '';
import * as visual from '';
import { Sound } from '';

I have tested it on my end and it is working like a charm. This will also protect you against future changes.

Alternatively, you could regenerate your experiment code with the latest version of PsychoPy. That should also work.

I completely understand that you could not possibly have guessed that we made deep changes, and I apologise for the mishap. @jon and I have been thinking about ways to clearly communicate these situations to experiment designers. We should have a solution in place in the coming weeks, most probably using the message section of the dashboard and emails, as well as warning designers who are using a generic version before they change the experiment status to RUNNING.

Certainly, the take-home message is that once you are satisfied with a given version of the library and with your experiment code, it is probably a good idea to "lock in" the library by using a specific version, rather than the latest, generic version, which is subject to change.



Hello @kevinhroberts,

I believe your problem is of the same nature as that of @frank.papenmeier, to whom I have just replied (see above). I would encourage you to use the 3.1.0 version of the library, or to regenerate the code using the latest version of PsychoPy.
Let me know if the issue persists!


Dear @apitiot,

thanks for this detailed information. I will try it soon and report back whether it worked.

As I generated the experiment using Builder, I suggest that Builder should "lock in" the library instead of using the latest generic version. That way, one would get reproducible results when using a specific version of PsychoPy/Builder.

All the best,


Hi @frank.papenmeier, you can select which version of PsychoPy you wish to use in Builder from the Experiment Settings > Use Version drop-down menu. This will enable PsychoPy to write the required version into the JavaScript, and also to compile the code using that particular version of PsychoPy.

Hi @dvbridges,
I just tried setting the version the way you described, but it sets the lib versions as follows, which looks wrong to me (the version number should not be quoted, should it?):
import { PsychoJS } from ‘‘3.1.0’.js’;
import * as core from ‘‘3.1.0’.js’;
import { TrialHandler } from ‘‘3.1.0’.js’;
import { Scheduler } from ‘‘3.1.0’.js’;
import * as util from ‘‘3.1.0’.js’;
import * as visual from ‘‘3.1.0’.js’;
import { Sound } from ‘‘3.1.0’.js’;
Thus, I went with editing experiment.js by hand, as suggested by @apitiot, and this seems to work so far.
All the best,

Thanks @frank.papenmeier, looks like a bug in version 3.1.0. If instead you choose 3.1, or 3.1.1, the output should be correct. That would save having to edit the JS files each time.

Dear @apitiot
I have been experiencing a similar problem with my log files (see Incomplete Log Files in Pavlovia) and would like to try your suggested solution. Where exactly do I implement this change? Is this something I do in the code of my project on GitLab, or should I do it in Builder and re-upload the experiment to Pavlovia?


I am running into the same problem, even after updating the URLs in the module section of the script. Any idea what could solve the issue?

OK, I emptied the cache and that solved the issue – phew!

I also wanted to underline how important it would be to set up some sort of communication channel about upcoming changes, so as to prevent unwanted mishaps.



The options that Alain is pointing to are to either:

  • recompile your script from the latest version of PsychoPy
  • compile your script with a fixed version (in the experiment settings select useVersion 3.1.0 or similar)

The general recommendation is that once an experiment is working and ready for data collection, you should fix its version at the version you have been developing on. So you would work on the latest version during development, but then set Use Version when you start running the study to stop it changing any further. You won't benefit from updates and new features, but your study won't break or change behaviour.


Hi @jon, thanks for the summary. But please note that using 3.1.0 in the experiment settings might be a bad idea given the bug that I reported above (message #9 in this thread)

Yes, I didn’t mean to suggest that people should use 3.1.0, just that whatever version you were using when you made the experiment work is the version you should then fix on with this mechanism. The concept applies to the Python interface as well. This allows you to prevent future versions from changing your experiment (either for better or worse) by allowing your script to run in a specific target version.

Hello @rob-linguistics13,

Might I gather your thoughts on the matter of communication? We are planning to use the message section of the dashboard, which was implemented for that specific purpose, and emails (but only for important communications). How does that sound to you?



That's good. It might also be a good idea to put a short notice on the GitHub homepage of PsychoJS and/or on the API documentation page (perhaps with a link to the dashboard page where more information is provided). This would be useful especially for those who work on the JS files directly rather than going through Builder.


I've also run into this problem and am implementing the fixes now. I had collected a couple of datasets which have no actual data in them.

Oddly, both outputs (those with the info logged and those without) say they were produced using 3.1.0. I guess this is the Builder version used rather than the version of PsychoJS?

It would be good to log the version of PsychoJS used into the CSV; it would make finding the root of these things a bit easier.

From a user perspective, shouldn't the default behaviour be to compile with fixed versions of all dependencies? The documentation for the online stuff doesn't mention that selecting a fixed version is recommended, or that this affects how the exported HTML/JS is compiled (maybe a drop-down in the 'Online' tab of the settings would make sense).

The issue of how and when to fix the version number is tricky. We want people to use the latest improvements to the code, but we also want them to be able to prevent any further updates.

a) My feeling had been that we should offer an 'unversioned' version that is updated constantly, but also allow users to fix their study to a particular version, which is the model we take for the PsychoPy Python lib. The optimal approach would then be to develop your study in the unversioned (roughly latest) version so that you benefit from improvements, but, when you start running the experiment for real, fix it at the version you were on at that point to provide a fair degree of future-proofing (nothing is ever truly future-proof, even in a 'containerised' study). The problem is that people aren't aware of that and probably wouldn't notice it in the documentation anyway!

b) The alternative, and I think we'll start doing this as of PsychoPy 3.3, is to always compile using a fixed version, but update that according to whatever version is currently installed. People can still select a particular version if they want (so they can still compile a 3.2 script from 3.3), but the default will be fixed at the installed version.

Using (b) has the advantage that people won't end up with conflicts between their script and the library version, but the disadvantage that people will more often run old versions of the lib (because they don't update PsychoPy very often). I was talking with @alain and @dvbridges and we agreed this was probably worth it, but I'm happy to hear other people's views.