
Pavlovia experiment changes are delayed in "run" version

URL of experiment:
run: https://pavlovia.org/run/djangraw/mmi-3level/html/
code: https://gitlab.pavlovia.org/djangraw/mmi-3level

Description of the problem:
I have made changes to the code that are reflected in the GitLab code, but the “run” version has not updated (e.g., compare line 1409 here with line 1409 here). I have to wait some unknown time (minutes to hours) for the “run” version to update, and this makes debugging take a very long time. Is there any way to force an update of the “run” version, or to test the JavaScript while I wait for Pavlovia’s “run” version to update?

Update: it seems to be updating quickly now, but this has happened before, so any suggestions would be welcome.


As to why the updates aren’t very fast, I must admit I don’t know. They are independent files that have to be copied over, so it will take a finite time, but I would expect it to be seconds at most if it was just a change to the text (JS) files. I wonder if @alain has any insights.

As for the second point, that this would be largely resolved by being able to test the JS/HTML locally: that is indeed something we’re working on with high priority. We agree it’s painful to keep pushing back and forth to check each change, and we’ll have a local debug option very soon.

cheers,
Jon


It has happened to me a number of times that the ‘run’ version has not updated, despite the update having been recorded on the gitlab page when I click the ‘View code’ button. I don’t know what causes it to happen. It has resulted in many wasted debugging hours, not realising that Pavlovia is silently ignoring all the changes I make.

What I now do when this happens is copy the .psyexp file to a new folder and create a new Pavlovia project from it. The file has to go into a new folder to trick Pavlovia into creating a new project when I press sync. Although Pavlovia doesn’t require the name of the .psyexp file to be changed, PsychoPy does, so I also rename it. If I don’t, then Experiment Runner doesn’t list it as a new program (so the old one gets run), PsychoPy fails to create a new ‘_lastrun.py’ file, and any updates are not implemented in PsychoPy (even though they are in Pavlovia).

As far as I can tell, Pavlovia and PsychoPy each suffer separately from bouts of update insensitivity, of unknown duration (I have never attempted to wait for them to resolve themselves). PsychoPy’s bouts might be caused by uncaught coding errors, or by moving the project folder (https://discourse.psychopy.org/t/uncaught-error-in-keyboard-allowed-keys/12819/2), or on other occasions by no cause that I can discern. The problem is always fixed by renaming the .psyexp file, with no need to copy to a new folder. I have no idea what triggers Pavlovia’s episodes, but I recently returned to two projects that I had not worked on for several weeks and both of them were moribund. This problem is always fixed by moving the .psyexp file to a new folder.

You’re saying that opening 2 files with the same name but in different folders doesn’t allow them to be distinguished in Runner? That sounds like a genuine bug that needs to be tracked on our issues list (it should be a quick fix):
https://github.com/psychopy/psychopy/issues

This is very curious. If you can send a minimal example of an experiment that doesn’t compile its script but doesn’t raise an error then we should be able to track down why. Most likely this has been caused by the introduction of the Runner as well.

Thanks for your reply.

Yes, I am saying that opening 2 files with the same name but different folders doesn’t allow them to be distinguished in Runner.

The best example that I have of an experiment that doesn’t compile its script but doesn’t raise an error is the ‘IAT.psyexp’ file attached. However, I don’t think it will behave in the same way on a different PC. On my PC, the program saves changes that I make to it (the save date and time are typically updated, although in one instance of the problem this was not occurring), but does not create a ‘_lastrun.py’ file and none of the changes show up when it runs. (Bizarrely, it continues to run even when removed from the folder that contains its last ‘_lastrun.py’ file and the various images and Excel conditions files that it requires. In this respect, the program runs as if it was still located in the old folder.) But when I copy the .psyexp file to a different PC and run it there, it creates a new ‘_lastrun.py’ file and throws an error because the files it requires are not present. (I can’t get any further on this other PC because it lacks a suitable graphics card for PsychoPy, but the program seems to behave normally up to this point). If it is any help, I can also attach a zipped folder containing the images and conditions files that the program normally requires, but I suspect you’ll just see a program working normally.

Although I don’t know what caused the program to start behaving this way in PsychoPy v2020.1.3 (i.e. what initially prevented the ‘_lastrun.py’ file from updating), I think Experiment Runner is somehow linked to the persistence of the problem, as the ‘solution’ to the problem is different on v2020.1.3 and v3.2.4. When I was using v3.2.4 and the program failed to generate a ‘_lastrun.py’ file or to show changes, I was able to identify the cause as an uncaught coding error and once the error was resolved the program started behaving normally again. Also, the program ceased to run if it was copied to a new folder without also copying its last ‘_lastrun.py’ file. In contrast, when using v2020.1.3, the cause was not a coding error because the program worked normally when re-named. And the program did not cease to run when put into a new folder without its last ‘_lastrun.py’ file. I don’t know how Experiment Runner works, but it looks as if something was helping the .psyexp program to keep track of where its ‘_lastrun.py’ file and various image and Excel files were located and making sure that it found the old ones no matter where it was moved to.

As I’ve said, the initial cause of the problem in v2020.1.3 is difficult to determine. In one case it started when I retrieved a program folder from the Windows recycle bin. On another occasion, it started after I had saved a change to the code, but reverting to the old code did not resolve the problem (as had been the case in v3.2.4) and the only solution was to rename the program. At other times, it has been impossible even to guess what might have triggered it because I did not discover the problem until I had been failing to understand for some time why debugging attempts were not working. I should admit that my PC is prone to crashing (it freezes and requires a forced reboot, on average once per day), but this has not corrupted programs or documents I’ve been writing in any other software.

IAT.psyexp (158.2 KB)

Of course, if I’m cataloguing the causes of a program failing to update the ‘_lastrun.py’ file and to show changes when run, I shouldn’t leave out the occasions on which this was certainly caused by copying the .psyexp file to a new folder and not renaming it. Time having passed, I can’t rule out the possibility that cases where I had no idea of the cause were due to this.

I also have a run file that differs from its GitLab source for several hours now. Any other suggestions on how to work around this? Creating a new project with a new name is not very convenient (there are several other people working on this experiment).

Also, I noticed that the run file says:
expInfo['psychopyVersion'] = '2020.1.3';
while the GitLab file says:
expInfo['psychopyVersion'] = '2020.1.2';
which is presumably because one of the team members has a newer version of PsychoPy than I do.

I see that 2020.1.3 is the latest version, but pip does not know that. I’m not sure if this has anything to do with why the run file won’t update (it’s reasonable for Pavlovia to assume that the file with the later version is newer, though it is not), but can that be fixed?
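As an aside, version strings like these compare misleadingly when treated as plain text, which may add to the confusion about which file looks "newer". A minimal sketch of the pitfall (my own illustration of string-vs-numeric version comparison, not how Pavlovia actually decides anything):

```python
# Sketch: comparing dotted version strings numerically rather than
# lexicographically. Illustrative only; not Pavlovia's actual logic.

def version_tuple(version: str) -> tuple:
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in version.split("."))

# Lexicographic string comparison mishandles multi-digit components:
print("2020.1.3" > "2020.1.10")                                 # True (wrong)
print(version_tuple("2020.1.3") > version_tuple("2020.1.10"))   # False (right)
```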

After trying various options, what worked for me was simply to mark the experiment in Pavlovia as Inactive, which removed it from the run server, and then set it back to Piloting, which triggered it to reupload with the latest files from GitLab. All good now.

I’m having the same issue today - I didn’t last week - but refreshing the experiment with the inactive-pilot toggle isn’t working for me. I also tried clearing the browser cache but no luck. Adding this to the thread to further describe the issue.

I am also having this problem. This is my first time running an experiment on pavlovia so I don’t know if there is something with my set-up that could be causing it.

I think there are two separate issues. One is a caching issue where the sync happens but testing uses a previous version of the code. The solution to this issue is described in my crib sheet.

A second possible issue is that the sync fails (recognisable either by not being asked to name the changes or a red circle rather than green in PsychoPy). I’m currently getting this because I can’t work out how to push to a branch rather than the protected master. I also had it yesterday because PsychoPy is confused between the version I’m working on and a previous version I deleted from Pavlovia to try and solve syncing issues.
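For the protected-master problem above, the usual Git workaround is to commit on a feature branch and push that instead. A rough sketch of the commands (the throwaway repo and the branch name "sync-fix" are illustrative; in practice you would run the checkout and push in your local clone of the Pavlovia project):

```shell
# Sketch: working on a feature branch instead of the protected master.
# The temporary repo below only demonstrates the commands.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" config user.email "you@example.com"
git -C "$repo" config user.name "Example"
touch "$repo/experiment.psyexp"
git -C "$repo" add experiment.psyexp
git -C "$repo" commit -qm "initial commit"
git -C "$repo" checkout -qb sync-fix      # create and switch to a branch
git -C "$repo" branch --show-current      # prints: sync-fix
# In a real clone, publish the branch with:
#   git push -u origin sync-fix
rm -rf "$repo"
```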

Are there any cases where the syncing works (asks for name plus green circle) and yet the code is not updated in the repository?


I’ve just been working online in GitLab rather than on the desktop and syncing with PsychoPy. It’s working fine today, so I wonder whether there might be intermittent server issues of some kind?

This issue has now spontaneously resolved itself, though I am about to start editing the experiment again so I’ll provide updates if anything changes.


I’m having troubles again today - changes to the code editing online in Pavlovia aren’t being recognised when I pilot, and toggling the experiment between ‘inactive’ and ‘piloting’ doesn’t change this.

Clearing the browser cache worked for me.

The way I did this was a bit haphazard because I had an old copy of @wakecarter’s crib sheet, which said to disable the cache, and this didn’t work. The solution in the most recent version of his crib sheet is to clear the cache by pressing CTRL-F5, which sounds like a simpler procedure than the one I ended up using. I’ll try it next time the problem crops up.

Here’s what I did on this occasion…

  1. Piloted my experiment in Chrome

  2. On the initial dialogue screen, pressed F12 to open Developer Tools

  3. Held down the Refresh button (if you hover over it with the Developer Tools open, it says ‘Reload this page, hold to see more options’)

  4. Selected ‘Empty Cache and Hard Reload’ from the drop-down list

This might only work in Chrome (at any rate, the Refresh button doesn’t offer these options in Edge), but having done it once in Chrome, my experiment ran normally when I subsequently piloted it in Edge.


Thanks @rdkirkden

I wasn’t able to figure out your Step 3* but I right-clicked on the initial input dialogue box (i.e., enter participant details) and selected ‘Reload’ (in Chrome by the way) and this refreshed it.

* When you say ‘Held down the Refresh button’ - is this a key on the keyboard or something you can click? I wasn’t able to see any tooltips or options when hovering over any of the elements of the display within Chrome.

There are actually 3 possible issues here:

  1. First, check for a sync issue: check GitLab online to ensure that your changes actually synced successfully to Pavlovia by looking at the updated dates of the files included in the sync. If they did not update, then the Runner probably has some details as to why. If you have this issue, then you are probably on the wrong post; search the forum for sync problems instead.
  2. Second, check for a client-side caching issue: if your browser is caching the old version of an experiment, then you can try refreshing (Ctrl+R), force-refreshing (Ctrl+Shift+R), clearing your cache, opening a private window (this usually ignores the cache), trying a different browser (one that does not have the experiment cached), a command-line tool such as wget if you have access to one, or a free CGI proxy service if you don’t. Again, if you have this issue, then you are probably on the wrong post; search the forum for caching issues instead.
  3. Third, check for a run server issue: if you can confirm that a file on GitLab (gitlab.pavlovia.org) is different from the same file on the run server (run.pavlovia.org), then you have the issue as originally described in this post. In this case, try my solution. When setting your experiment to Inactive, confirm that the file is removed from the run server. If it is not removed, then you probably don’t have this issue; see the previous options instead.
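The comparison in the third check can be scripted. A rough sketch below fetches both copies and compares short digests; the `/raw/` path layout and the JS filename in the example are assumptions, so adjust them for your own project:

```python
# Sketch: checking whether the copy of a file on the run server
# (run.pavlovia.org) differs from the copy on GitLab
# (gitlab.pavlovia.org). URLs in the example are illustrative.
import hashlib
import urllib.request

def content_digest(data: bytes) -> str:
    """Short SHA-256 digest, convenient for eyeballing differences."""
    return hashlib.sha256(data).hexdigest()[:12]

def fetch(url: str) -> bytes:
    """Fetch a URL with a no-cache request header to bypass intermediaries."""
    req = urllib.request.Request(url, headers={"Cache-Control": "no-cache"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def same_content(run_url: str, git_url: str) -> bool:
    """Return True if the two served files are byte-identical."""
    return content_digest(fetch(run_url)) == content_digest(fetch(git_url))

# Example usage (hypothetical paths; requires network access):
# same = same_content(
#     "https://run.pavlovia.org/djangraw/mmi-3level/html/mmi-3level.js",
#     "https://gitlab.pavlovia.org/djangraw/mmi-3level/-/raw/master/html/mmi-3level.js",
# )
```

If the digests differ, you are looking at the run-server issue described in this thread rather than a browser cache problem.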

Hi @NicBadcock

I’m sorry the instructions weren’t very clear. I meant the browser refresh button, circled in the image:

I was following instructions that I found here: https://stackoverflow.com/questions/5690269/disabling-chrome-cache-for-website-development (the second post, with the green tick). I think there are multiple ways to do this though.

Marking the experiment as Inactive and then back to Piloting didn’t work for me. As other people have suggested, there are probably several different issues being described in this discussion. In my case, the project always syncs OK, and both the JS file in the html folder and the record of updates on the project’s GitLab page show that the project has been updated, but the version of the project that actually runs is an old one.
