Pavlovia: Session file variables not transferred to log file for last participants (first observed Sept 2, 2019)

Dear @apitiot,

Thanks for this detailed information. I will try it soon and report back whether it worked.

As I generated the experiment using the Builder, I suggest that the Builder should “lock in” the library version instead of using the latest generic version. That way, one would get reproducible results when using a specific version of PsychoPy/Builder.

All the best,

Frank.

Hi @frank.papenmeier, you can select which version of PsychoPy you wish to use in Builder from the Experiment Settings > Use Version drop-down menu. This tells PsychoPy to write the required version into the JavaScript, and also to compile the code using that particular version of PsychoPy.

Hi @dvbridges,
just tried setting the version the way you described, but it sets the lib versions as follows, which looks wrong to me (the version number should not be quoted, should it?):
import { PsychoJS } from 'https://pavlovia.org/lib/core-'3.1.0'.js';
import * as core from 'https://pavlovia.org/lib/core-'3.1.0'.js';
import { TrialHandler } from 'https://pavlovia.org/lib/data-'3.1.0'.js';
import { Scheduler } from 'https://pavlovia.org/lib/util-'3.1.0'.js';
import * as util from 'https://pavlovia.org/lib/util-'3.1.0'.js';
import * as visual from 'https://pavlovia.org/lib/visual-'3.1.0'.js';
import { Sound } from 'https://pavlovia.org/lib/sound-'3.1.0'.js';
Thus, I went with editing the experiment.js by hand as suggested by @apitiot and this seems to work so far.
All the best,
Frank.

Thanks @frank.papenmeier, looks like a bug in version 3.1.0. If instead you choose 3.1, or 3.1.1, the output should be correct. That would save having to edit the JS files each time.
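For anyone who does end up hand-editing a compiled experiment.js (as in the workaround above), the stray quotes around the version number can also be stripped programmatically. A minimal Node sketch, assuming the 3.1.0 quoting bug shown earlier (`fixImportLine` is a hypothetical helper, not part of PsychoPy or PsychoJS):

```javascript
// Strip the stray quotes that the 3.1.0 Builder bug wrote around the
// version number in compiled PsychoJS import URLs, e.g.
//   https://pavlovia.org/lib/core-'3.1.0'.js -> https://pavlovia.org/lib/core-3.1.0.js
function fixImportLine(line) {
  // capture the bare version number between the quotes and re-emit it unquoted
  return line.replace(/-'(\d+\.\d+(?:\.\d+)?)'\.js/g, '-$1.js');
}

const broken = "import { PsychoJS } from 'https://pavlovia.org/lib/core-'3.1.0'.js';";
console.log(fixImportLine(broken));
// -> import { PsychoJS } from 'https://pavlovia.org/lib/core-3.1.0.js';
```

Applied line by line over the head of the file, this leaves already-correct import URLs untouched.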

Dear @apitiot
I have been experiencing a similar problem with my log files (see Incomplete Log Files in Pavlovia - #2 by jon) and would like to see whether your suggested solution works. Where exactly do I implement this change? Is this something that I do in the code on my project in GitLab? Or should I do this in Builder and re-upload the experiment to Pavlovia?

Hi,

I am running into the same problem, even after updating the URLs in the module section of the script. Any idea what could solve the issue?

OK, I emptied the cache and that solved the issue – phew!

I also wanted to underline how important it would be to set up some sort of communication channel about upcoming changes, so as to prevent unwanted mishaps.

Thanks!

The options that Alain is pointing to are to either:

  • recompile your script from the latest version of PsychoPy
  • compile your script with a fixed version (in the experiment settings select useVersion 3.1.0 or similar)

The general recommendation is that once an experiment is working and ready to start data collection, you should fix its version at the version you have been developing on. So you would work on the latest version during development, but then set Use Version when you run, to stop it changing any further. You won’t benefit from updates and features, but your study won’t break or change behaviour.

Hi @jon, thanks for the summary. But please note that using 3.1.0 in the experiment settings might be a bad idea, given the bug that I reported above (message #9 in this thread).

Yes, I didn’t mean to suggest that people should use 3.1.0, just that whatever version you were using when you made the experiment work is the version you should then fix on with this mechanism. The concept applies to the Python interface as well. This allows you to prevent future versions from changing your experiment (either for better or worse) by allowing your script to run in a specific target version.

Hello @rob-linguistics13,

Might I gather your thoughts on the matter of communication? We are planning to use the message section of the dashboard, which was implemented for that specific purpose, and email (but only for important communications). How does that sound to you?

Alain

That’s good. Maybe a good idea would also be to put a short notice on the GitHub homepage of PsychoJS and/or on the API documentation page (maybe with a link to the dashboard page where more information is provided)? This may be especially useful for those who are working on the JS files directly rather than going through the Builder.

Thanks!!

I’ve also run into this problem – implementing the fixes now. I had collected a couple of data-sets which have no actual data in them.

Oddly, both outputs (those with the info logged and those without) say they were produced using 3.1.0 – I guess this is the Builder version used rather than the version of PsychoJS?

It would be good to log the version of PsychoJS used into the csv; it would make finding the root of these things a bit easier.

From a user perspective, shouldn’t the default behavior be to compile with fixed versions of all dependencies? The documentation for the online material doesn’t mention at all that selecting a fixed version is recommended, or that this affects how the exported HTML/JS is compiled (maybe having a drop-down in the ‘Online’ tab of the settings would make sense).

The issue of how/when to fix the version number is tricky. We want people to be using the latest improvements to the code, but we also want people to be able to prevent any further updates.

a) My feeling had been that we should allow an ‘unversioned’ version and that should be updated constantly, but allow users to fix their study to a particular version as well, which is the model we take for the PsychoPy Python lib. Then the optimal approach would be to develop your study in the unversioned (roughly latest) version so that you benefit from improvements, but then, when you start running the experiment for real, you could fix at the version you were on at that point to provide a fair degree of future proofing (nothing is ever truly future-proofed even in a ‘containerised study’). The problem is that people aren’t aware of that and probably wouldn’t notice it in the documentation anyway!

b) The alternative, and I think we’ll start doing this as of PsychoPy 3.3, is always to compile using a fixed version, but to update that according to whatever version is currently installed. People can still select a particular version if they want, so they can still compile a 3.2 script from 3.3, but the default will be fixed at the installed version.

Using (b) has the advantage that people won’t end up with conflicts between their script and the library version, but the disadvantage that people will more often run old versions of the lib (because they don’t update PsychoPy very often). I was talking with @alain and @dvbridges and we agreed this was probably worth it, but happy to hear other people’s views.
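Option (b) above amounts to a very small compile-time rule: pin to the explicitly requested version if one was set, otherwise pin to the version of the app doing the compiling (never “unversioned”). A sketch only, with illustrative names (`resolveLibVersion` is not an actual PsychoPy function):

```javascript
// Decide which PsychoJS lib version a compiled script should import:
// an explicit "Use Version" setting wins; otherwise pin at the version
// of the PsychoPy app that is doing the compiling.
function resolveLibVersion(installedVersion, useVersion) {
  return useVersion || installedVersion;
}

// e.g. compiling a 3.2 script from a 3.3 app:
console.log(resolveLibVersion('3.3.0', '3.2.3')); // -> 3.2.3
// no explicit setting: pin at the installed app version
console.log(resolveLibVersion('3.3.0', null));    // -> 3.3.0
```

The point of the fallback is that the output is always pinned to *some* concrete version, so the script and the library can never drift apart after compilation.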

Hey Jon,

I was using the same experiment link for 2 months with no new updates: https://run.pavlovia.org/sparmar34/group_3/html/ . I did update the PsychoPy version on my desktop, but I never synced my previous experiments again from Builder to PsychoPy or vice versa. All the data collected in the last 12 days does not have any trial information. Is there a way to recover that somehow? I was using experts for my study and I know they are not going to participate again for an hour. Any help would be really appreciated.

Thanks.

Hi Jon,
I am in a similar situation and will be interested in knowing if there is any way to recover these data.
Thank you,

Firstly, thanks for all the information provided. Like @Sweta_Parmar, I would be interested to know whether there is any data on the server from which the collected (but, in the first half of September, wrongly saved) data could be restored.

@jon, as I currently also face issues with my older code (from 3.1.5) on newer versions (3.2.X), independently of Pavlovia, option (b) of your description sounds like the better solution to me!

Furthermore, I have a suggestion regarding the piloting status. It would be a great help to be able to collect 1 to 3 complete data sets while in piloting status, to check whether the data looks the way it should. I think I understand why that is not possible at the moment, but in the longer run a small number of “piloting trials” with data collection would be an amazing feature. I could also imagine it would convince more people to buy credits afterwards.

No, unfortunately we really can’t retrieve data from studies that suffered from this loss. Very sorry.

Yes, we have gone with option b) of fixing the version of the PsychoJS lib to the version of the PsychoPy app that compiled it. You can still use the Settings > Use Version setting to customise exactly which version is used in the compilation (but avoid versions 3.2.0 to 3.2.2, because you’ll be in the same boat as before!).

Further measures we’ve taken to prevent any further loss:

  1. We’ve added further code so that the conditions data are captured even when the lib/app were not correctly paired, as above. Right now, any further data collection in your studies won’t suffer this issue, even if you haven’t re-synced your code.
  2. We’re working on adding log files right now, similar to those in PsychoPy itself, so that there is a second record of what happened in the experiment (belt and braces to keep the trousers up!). In this instance a log file would have allowed us to fix the csv file, but it hadn’t yet been programmed to output.

best wishes and our apologies again,
Jon

Sorry, what does “modify the head” of your experiment.js file mean? I can see the file under the html folder, but what is the “head”?

I believe it’s the beginning of the file, as described by @apitiot in this thread.