Browser, device, and screen compatibility (Builder mode 3.2.4)

URL of experiment: No URL at the moment (currently piloting), but it will look something like this when I start running it:
https://run.pavlovia.org/jber3175/drmlistexpa/html

Description of the problem: I have put together an online experiment that works in Google Chrome on Mac and Windows. I was also able to run it in Google Chrome on Android phones. However, I could not run it in Safari on Mac or iPhone (and the same issue applies to Google Chrome on iPhone). There appears to be a compatibility issue between my experiment and iOS, because the experiment info dialog appears for a split second and then disappears. How can I solve this?

It might be that this is already fixed in the 2020.1 version, as @apitiot has worked a lot on cross-browser compatibility issues lately. If it isn’t already fixed, he’ll be best placed to work out why it fails.

Thanks for your reply @jon. I would be happy to provide any more information you or @apitiot might need.

Hi @jon, I have uninstalled the previous version that I was using and installed 2020.1. Should I simply sync the experiment I created with the previous version using 2020.1, and will that fix it?

At first I tried to change “Use PsychoPy version” (under Experiment Settings) to 2020.1, but the compatibility issue persisted.

Having used 2020.1 to rebuild my experiment from scratch, I am now unable to run online a type of nested loop that used to work with 3.2.4. It still runs fine locally from Builder, but the way I got my inner and outer loops working before no longer works online.

What I am trying to achieve:
(1). For each subject, present the 6 lists (15 words per list) in random order.
(2). Within each list, present the 15 words one by one, sequentially, for 2 seconds each (with accompanying audio).

How I used to get it to work:
(1). For presenting lists randomly, I used an outer loop to randomise the order of the lists.
a. To get this to work, I create a ‘master’ .xlsx file, which is also specified in the outer loop’s properties.
b. Under one parameter (column) in the ‘master’ .xlsx file, I enter the 6 conditions with the filepath to each list to be presented (linking to the .xlsx files in the designated ‘study’ folder).
c. I create the designated ‘study’ folder with an .xlsx file for each list.
d. In each list file, I enter 15 conditions (one per word) with 2 parameters (1: the text, and 2: the audio filepath for that word).

(2). For presenting the words in each list sequentially, I used an inner loop.
a. In the inner loop’s properties, I specify the parameter from (1)b in the ‘Conditions’ field (a minimal script equivalent of this nested design is sketched below).
b. TEXT component: use the text parameter specified in each list (set every repeat).
c. AUDIO component: use the sound parameter specified in each list (set every repeat).
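For reference, here is a minimal local-script sketch of the design above. The file and column names (‘master.xlsx’, a ‘listFile’ column pointing at each list file, and ‘wordText’/‘wordAudio’ columns inside each list file) are placeholders rather than my actual names, and Builder’s generated code will of course differ:

```python
# Sketch only: 'master.xlsx', 'listFile', 'wordText' and 'wordAudio'
# are assumed placeholder names, not the real files/columns.
from psychopy import core, data, sound, visual

win = visual.Window()

# Outer loop: one row per list, presented in random order
lists = data.TrialHandler(
    trialList=data.importConditions('master.xlsx'),
    nReps=1, method='random')

for thisList in lists:
    # Inner loop: the 15 words of this list, presented sequentially
    words = data.TrialHandler(
        trialList=data.importConditions(thisList['listFile']),
        nReps=1, method='sequential')
    for thisWord in words:
        text = visual.TextStim(win, text=thisWord['wordText'])
        audio = sound.Sound(thisWord['wordAudio'])
        audio.play()
        text.draw()
        win.flip()
        core.wait(2.0)  # each word stays on screen for 2 seconds

win.close()
```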

My problem now:

I am under the impression that this looping method is technically not supposed to work… but somehow it was fine with 3.2.4. It no longer works with 2020.1: the generated code now appears not to register the parameters for the TEXT and AUDIO components (and I am not sure whether it did before either, since I did not look into the code while it was working).

Hi @jon and @apitiot, I think I found the main issue with my experiment. I could really use some help with getting this routine to work again with a fail-safe method.

Hello @Joshua_B,

If you give me the full path of your experiment, I’ll happily look into it for you.
https://run.pavlovia.org/jber3175/drmlistexpa/html does not seem to be there any longer.
Cheers,

Alain

UPDATE: It still works on Windows and Android phones. So far I have also got it running on Mac (Safari/Chrome), iPhone 6 (Safari), and iPhone XS (Safari). I have yet to test the models between the 6 and the X (and those before the 6). I have also tested it on an iPhone 11 and an 11 Pro, but these have an issue downloading the .wav files (the error message is, e.g., “unable to download resource: audsfolds/thorn.wav (4)”).
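(A possible workaround, not confirmed in this thread: iOS Safari can be fussy about .wav resources, so batch-converting the stimuli to .mp3 might be worth trying. Below is a sketch using pydub, which requires ffmpeg to be installed; the folder name ‘audsfolds’ is taken from the error message above.)

```python
# Hypothetical workaround sketch: convert .wav stimuli to .mp3 with pydub
# (requires ffmpeg). 'audsfolds' is the folder named in the error above.
from pathlib import Path
from pydub import AudioSegment

for wav_path in Path('audsfolds').glob('*.wav'):
    mp3_path = wav_path.with_suffix('.mp3')
    AudioSegment.from_wav(wav_path).export(mp3_path, format='mp3')
    print(f'converted {wav_path} -> {mp3_path}')
```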

Hi @apitiot,
I was able to pilot my experiment successfully in the end. I think it came down to getting my audio sampling rate right for PTB (I switched to it from sounddevice, since an Alert in the new Runner window recommended it; a minimal sketch of that switch is below). I am, however, unable to check whether it works in Safari/Chrome on Mac/iPhone, as my friends are asleep now. I would appreciate it if you could help verify whether it works on these other devices and browsers. There are some messages in the browser console (F12) like “WARN unknown” when I initialise and then run my experiment in Pilot mode, but for now they seem harmless, given that the experiment runs all the way through on my end.
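For anyone finding this later, the local-script equivalent of that backend switch looks roughly like this (a sketch assuming default settings otherwise; the tone ‘A’ is just a stand-in for my actual .wav stimuli):

```python
# Sketch: select the PTB (Psychtoolbox) audio backend instead of sounddevice.
# The preference must be set before the sound module is first imported.
from psychopy import prefs
prefs.hardware['audioLib'] = ['PTB']

from psychopy import sound  # imported only after setting the preference
beep = sound.Sound('A', secs=0.5)  # stand-in for the experiment's .wav files
beep.play()
```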

By the way, I have deleted the old path; it should now be:
https://run.pavlovia.org/jber3175/2020drmtest/html

Thanks

Hi @jon and @apitiot,

Indeed, 2020.1 was the solution. It seems to work on iPhones except the 11s (which have trouble downloading the .wav files), and it is definitely not an issue for my project that it does not work on those models. Thank you again for your help. :star_struck:

Hello @Joshua_B,

I am very glad to read that all is now well! I am gradually back-porting recent fixes to earlier versions of the library, but this is a tricky process and not always possible, so using the most recent version whenever you can is always a good idea.
Happy experimenting!

Alain

Dear @apitiot and @jon,

I have created two experiments for a before-and-after measurement. Their structure is the same, but their contents differ. The idea is that participants in my study run the first assessment (“drma”) and then, one hour later, the second assessment (“drmb”). We have come across two (possibly more) issues that need troubleshooting.

  1. At first, one participant came across this issue. Initially, I thought it was because I had left my experiments’ statuses on “RUNNING” before the server maintenance, so I released the reserved credits and switched the experiments to “INACTIVE” and then back to “RUNNING” (though I am not sure whether that is a fix). Another possibility is that I had briefly (for a minute) switched the “Saving Format” to “Database” before reverting it to “CSV” soon after. But the issue persisted after I had switched it back to “CSV”: hours later, two other participants could not run the “drma” experiment online.

  2. On the other hand, I had one participant (Participant “1101”) who only successfully consumed her credit for the second assessment. Her “CONSUMED” credit for “drmb” was at 09:19:59. She should also have had a “CONSUMED” credit for the first assessment (“drma”); maybe she closed the experiment prematurely during the end screen (which I set to last only 2 seconds; perhaps I should set this to 0.5 s or 1 s, or even remove the end-screen routine altogether?). Is there any way to access the .csv files or data of the “RESERVED” credits to verify this? I am guessing that her attempt at the first assessment was the “RESERVED” credit ID at either the 08:10:04 or the 08:11:11 timestamp, whereas the unsuccessful attempts of other participants fall in between (and should not contain any significant data). It would be great to know which “RESERVED” credit IDs were for participant 1101, and whether her data for the first assessment can still be salvaged.

  3. I also noticed that the label for “Platform Version” has changed to “b’2020.1” since the update, though I am not sure whether this has anything to do with the current issues (my experiments had successfully run on Android and iPhone in Google Chrome and Safari in “RUNNING” mode during previous piloting attempts).

In the meantime, I have switched these experiments to “INACTIVE”. Please let me know what can be done about this. Thanks again in advance for your help; it is really appreciated, as I am counting on this working for my dissertation. :pray: :persevere:

Hello @Joshua_B,

The good news is that error 1 has already been fixed. It occurred when we moved to the new server.
Error 2 is most likely linked to error 1. I’ll happily refund you the credit consumed while the issue was active, of course.
Error 3 was also fixed early this morning, but it is a minor thing that had no impact on performance.

In other words, all should be well and you are welcome to change your experiment back to RUNNING.
Allow me to apologise for the mishap.
Best wishes,

Alain

Hi @apitiot,

Thank you for clarifying. I had overlooked the time difference as well (we are at GMT+8), and I simply assumed it was good to go, since the maintenance screen I saw yesterday was already gone when I checked this morning.

Regards,
Josh

That is no problem at all. We are all working together :slight_smile:

Alain

EDIT: It’s ok now!
EDIT 2: The problem is back…

Hi again @apitiot @jon,

I have encountered a new error: my experiments have suddenly disappeared from my Pavlovia Dashboard (please see the screenshot below). I am in the middle of my data collection, so I urgently need help! :pray: :pray:


Regards,
Josh

Hey, the same thing happened to me. You can still access your files by typing in the URL of the study,

e.g., I typed in https://pavlovia.org/Hunter/learning and it still allowed me to work on it.

cheers,
Tom

Edit: I am actually still able to get to my experiment page by clicking the credit ID string under the ‘Credits’ tab of the Dashboard. I also tested, with a full run of my experiment, whether the data is still saved at the end, and yes, it is! So it is not a big issue for data collection, but hopefully the interface bug gets fixed soon!

Hi @TomH @apitiot @jon

The issue has resurfaced. Since I can still access the experiments (thanks @TomH for pointing this out), I was wondering whether the saving of participants’ data will be affected in any way, even though I cannot fetch the experiments from the Pavlovia Dashboard myself. Just to be safe, I will tell all my participants not to run my experiment until I hear back from you. Thanks again in advance!

Regards,
Josh

Hello,

I have fixed the issue again, and I am tracking down the root of the problem.

Alain

Hi @apitiot

Thanks for fixing it. Let us know when you find the root of the problem.

Josh

That should not take long now.

The good news is that this did not impact your participants: the experiments were all running nominally. Only access to the Dashboard and the Explore tab was compromised.
