Lessons from 3 frustrating days with PsychoPy Online


I’ve spent the past 3 days pulling my hair out trying to get experiments to run online through PsychoPy, and I thought I would share some of my experiences so that the time doesn’t feel like such a waste.


  • Do not try to create multiple experiments from the same .psyexp file. I tried this, and even after I removed the project from GitLab to try to start again, old projects were somehow recreated and caused 403 FORBIDDEN errors when I tried to access new projects.
  • Do not create a project until the experiment is completely finished. I was not able to update experiments after creating a project, and it was impossible to create a new experiment from the same file (see the issue above).
  • Don’t run experiments until the ‘colon issue’ in output files is fixed. Not sure whether this applies to others, actually, but I ran my experiment around 20 times and was then unable to sync, because the output file names contained a colon. In order to proceed at all, I had to go back and manually delete each file, which wasted even more time than I had already lost.
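For anyone hitting the same wall, the manual cleanup can be scripted rather than done file by file. This is a minimal sketch, not PsychoPy’s actual output scheme — the folder name and filename here are made up for illustration — showing how to swap colons for dashes so git can sync the files on Windows:

```shell
# Sketch: rename any data files whose names contain a colon, replacing
# ':' with '-'. The folder and filename below are illustrative only.
mkdir -p data_demo
touch "data_demo/subj01_2018-07-12_14:03:22.csv"   # an offending name
for f in data_demo/*:*; do
  mv -- "$f" "${f//:/-}"   # bash substitution: every ':' becomes '-'
done
```

Deleting the files instead (as I ended up doing) would just be `rm` in place of `mv`, but renaming keeps the data.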

THINGS TO DO (basically the opposite of above)

  • Build an experiment
  • Double and triple check to ensure it is completed
  • Upload
  • Don’t try and change
  • Don’t run it (lol)

Not sure if this will be helpful for anyone. But, I feel much better :blush:



You know this is a beta feature, right? That means that when running it, you are voluntarily being a beta tester, not a user of the final product. Many other people who would like to use this feature, but who don’t have the time or energy to deal with it as a work-in-progress, will wisely be waiting until it is deemed an official stable release. No one would recommend using PsychoPy’s online features to gather actual experimental data at this point.

So as a beta tester, providing useful and reproducible bug reports is really valuable in getting the feature to a state where it is usable for experiments in production. Listing things that people “should” and “shouldn’t” do, given the temporary teething issues of the current state of development, is not useful.

A constructive contribution, to the people working freely on your behalf to develop and improve this software, would be to provide usable bug-report information. This is a community of fellow researchers you’re being sarcastic towards, not some faceless corporation.


Hi Michael,

I have posted detailed accounts of all of the issues I encountered in numerous other threads. Many of them are not yet resolved, so I thought I would save others the trouble of going through long frustrating processes only to encounter the same unresolved issues that I did.

Sorry if you detected any sarcasm in the tone of my writing. It wasn’t intentional. I put in a lot of work over my long weekend trying to get experiments running using this technology in time to include in my master thesis. All of this, with the sole intention of promoting this platform, because I think it is game changing.

Thanks for the tips though.


I also just encountered ‘the colon issue’ when using git pull on Windows. If you have Bash on Ubuntu on Windows (or whatever ridiculous name it has these days), you can get around this by doing all the gitting on there.
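For the curious, the reason this works can be shown without Pavlovia at all: git itself stores colon-containing filenames happily, and it is only the checkout onto a Windows (NTFS) filesystem that chokes. A minimal local reproduction — repo and file names are made up, and no real remote is involved — which succeeds under WSL or any Linux shell:

```shell
# Create a throwaway source repo containing a colon-named file, then
# clone it, much as `git pull` under WSL would. Names are illustrative.
mkdir -p colon_src && cd colon_src
git init -q
git config user.email demo@example.com   # local identity for the demo
git config user.name Demo
touch "data_14:03:22.csv"                # legal for git, illegal on NTFS
git add . && git commit -qm "data file with a colon in its name"
cd ..
git clone -q colon_src colon_clone       # checkout succeeds on a Linux FS
```

The same clone onto an NTFS path would fail at checkout, which is why doing the git operations inside the Linux environment sidesteps the problem.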

Incidentally, I don’t think there’s anywhere on this Discourse appropriate to post this… but this morning I had a bunch of students in a lecture participating in a demo online reading-RT experiment, using their own laptops. Using the above Unix shell solution, I successfully pulled in all the resulting CSV files as soon as they were all done. The only stage that failed was my R script that was meant to spit out some charts from the data (I think some of the data came back with duplicated fields, but I haven’t fully checked yet).

I’m absolutely buzzing; the online component of PsychoPy is incredibly exciting.


Some explanations

A project on Pavlovia is a folder (that has to be the case because of git). If you want multiple experiments in different projects, they will need a folder each.
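So the folder-per-experiment layout can be set up in advance. A sketch — directory names are made up for illustration — with an independent git repository in each experiment folder, each of which can then sync to its own Pavlovia project:

```shell
# One folder per experiment, one git repo per folder (names illustrative).
mkdir -p studies/stroop studies/priming
for d in studies/stroop studies/priming; do
  git -C "$d" init -q    # independent repository for each experiment
done
```

Because the repos are independent, syncing one experiment can never clobber or recreate another, which avoids the cross-contamination described at the top of the thread.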

This was a bug on Pavlovia that we believe to be fixed (for some experiments the change wasn’t being detected correctly). Updating a study should now push the changes through correctly. If not, you can also now deactivate and reactivate your study, forcing a fresh copy on the “run” path.

Similarly, this was a bug that is fixed in the upcoming beta 11.

@cwnorton is correct about an additional issue with duplicated fields, which is also fixed in beta 11.

As Mike says, bear with us. This is very much a beta period. It’s all very new and while we test lots of things, there will always be scenarios that we haven’t checked yet.


Hi Jon,

I really appreciate you taking the time to address those questions/concerns/frustrations.

I didn’t mean to seem impatient. I know this is a difficult process. I was just feeling pressure from my thesis supervisor to administer my experiments online this month, which may not have been a realistic expectation given that the platform is still in beta testing.

I hope to be more helpful moving forward.


I was planning to use Pavlovia for my online masked-priming experiments, but now I am getting doubtful about it, especially after reading that

So, I was wondering if I could be provided with some sort of short, general report on the stability of the online platform. I have been running lab-based experiments with PsychoPy for the last two years, and using the same scripts for online experiments would be great to ensure comparability across experiments. However, if you think the online platform is still in beta, I’ll look for a feasible alternative.


Hi @rob-linguistics13,

The block quote from Michael was true at the time, 6 months ago, when PsychoPy 3 was still in its beta stages, but version 3.0.0 marks the first of the stable PsychoPy 3 releases. See the changelog for further details on bug fixes etc. It is still important to check that your experiment is doing what you want it to do, as you would with any software.

Timing performance of PsychoPy 3 will be published within the next year in a peer-reviewed journal. For now, if you plan on moving to online data collection, I would stick with one platform or the other for any given task, rather than mixing data collection between desktop Python and online JavaScript tasks, due to the possible timing variation between online browsers and desktop apps.

If you are having problems uploading and running your study, it would be great if you could post your errors so we can figure out what is happening.

Thanks for the update. I have actually been using desktop PsychoPy 3 for other experiments in the last few months, and it worked pretty well. What I am concerned about is PsychoPy 3 online, which the block quote was referring to. If what you are saying is that people have been using PsychoPy online for actual data collection, I am totally on board and I’ll start working on the JavaScript tasks. But I thought I’d ask first before putting any effort into a platform that is in beta.

Yes, since the stable release, people have successfully been using PsychoPy 3 and Pavlovia for online data collection. Jon has been tracking the usage, and the number of completed studies is substantial and growing :). Let us know if you need any more info, or help with getting your task running.
