Description of the problem:
I am running a live experiment where participants are hitting an error when uploading their results: “Unfortunately we encountered the following error when uploading participant’s results for experiment ellenhan/rating_video_set9_final: 504 Gateway Time-out (nginx). Try to run the experiment again. If the error persists, contact the experiment designer.” Please help ASAP.
Is the server down? Help is really needed, since the study is live and I am potentially losing many participants’ results.
The same is true for me. I’m running a study and think that I may be losing results because of the error. I replied to this post where it seems something similar happened a few months ago: Sign In issue - Pavlovia - #3 by Daniele_G
I had to pause it, but I am extremely worried about data loss; there are many results at stake.
It seems to be working for me now. I don’t think that there’s any data loss
As far as I can tell, the server is live. Still having issues @YT_HAN?
Please see my messages above: it was definitely down before, as others have also noted. I can provide all the complaints from Prolific participants if needed. We are covered by a license, so no credit issues are involved.
And I don’t know whether it’s working now, since I had to stop the study after all the error reports. Can you confirm that it’s working properly now? I don’t want to restart the experiment without being sure.
My apologies for the inconvenience. I can say that I’ve been developing and running an experiment for the past 30 mins without any problems. I think the error was only a temporary glitch.
Ok, thanks for letting me know; I certainly hope so. Is there a more direct way to reach out to the Prolific team and get a quicker reply in an emergency like this one, once I restart the study?
I’d ask Prolific about that. I would guess though that if you automatically send participants back to Prolific after completing a study, and they can’t complete the study, you won’t get charged for it. Timed out submission status – Prolific
Sorry for the typo: I meant the PsychoPy team, not the Prolific team.
No. We’re not big enough to offer something like that, nor is any player in our market. Online services that do offer something like that, such as Qualtrics, a multi-million venture, tend to do so as part of relatively expensive “premium” packages with which they finance dedicated help-desks. Would be cool if we could do that too, but it might take a while to get there.
Ok, thanks. As a suggestion, Qualtrics has an application status page (Application Status | Qualtrics), which is very helpful; hopefully that is something PsychoPy can consider.
Ah, a “canary-in-the-coalmine”. That’s indeed a great idea. Thanks!
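For anyone who wants a stopgap while no official status page exists, the idea boils down to periodically probing the service and mapping the HTTP response to a health state. Here is a minimal sketch in Python; the URL, thresholds, and state names are my own assumptions, not an official Pavlovia endpoint or API:

```python
# Hypothetical "canary" probe for a web service such as Pavlovia.
# The URL and the up/degraded/down categories are illustrative assumptions.
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

def classify(status_code: int) -> str:
    """Map an HTTP status code to a coarse health state."""
    if 200 <= status_code < 300:
        return "up"
    if status_code in (502, 503, 504):  # gateway errors, as in the 504 reported above
        return "down"
    return "degraded"

def probe(url: str, timeout: float = 10.0) -> str:
    """Fetch the URL once and classify the result."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except HTTPError as err:
        return classify(err.code)       # server answered with an error status
    except URLError:
        return "down"                   # DNS failure, refused connection, timeout

# Example (assumed URL): probe("https://pavlovia.org") before launching a study.
```

Running something like this from cron every few minutes before and during a live rollout would at least flag a 504 outage early, though it cannot distinguish a general outage from a problem specific to one experiment.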
I am sorry to read that you experienced some difficulty yesterday.
I just checked out our extensive logs and could not find anything of note. As far as I can tell, your experiment ran just fine all the way until 16h34 UK (09h05 local time). Then there was a hiatus, probably when you decided to pull the plug, until 17h48 UK (09h48 local time) at which point you ran a test, I believe. And then all was well again thereafter.
Minutes before 16h34, between 16h34 and 09h48, and after 09h48, dozens upon dozens of experiments ran without issue, with plenty of data being uploaded to the server. So I am puzzled.
Could you send me the Prolific messages in a private chat? I would really like to check what happened, to make sure that it was just a temporary connection glitch and not something likely to happen again.
Hi @apitiot, thanks for your explanation; what you have noted is basically correct. I will follow up with more info through private chat. The study was paused because I needed to verify first whether data were saved for the participants who got the error messages. Luckily, the csv files were saved, though the log files were not (which is fine for me). At this point I do believe it was a glitch, though if the data had not been saved, it would have been an expensive one. I would like to understand what happened as well, but more importantly I need reassurance that the server is working properly, because I have another big rollout tomorrow morning and don’t want to run into the same issue again.