
Pavlovia doesn't correctly emulate the experiment

URL of experiment:

Description of the problem:

When I run the experiment on my laptop, there’s no problem with the face images. But if I run the experiment online, there are some errors. First, the first face image doesn’t show up. Second, in some trials the other face image pops up briefly before the actual face image shows up. Finally, there seems to be an error with the keyboard input, since the system makes a blip sound whenever I type.

I would be grateful if someone could answer this question.

How are you setting the face image on every trial? Can you share the experiment files from Builder?

It looks like what’s happening is that it’s first loading the face for the previous trial and then immediately the one for the current trial, and on the first trial it’s not loading anything (probably because there’s no previous trial to load from). That feels like some kind of late updating to the index it’s pulling from when it identifies which image to load, but I’m not sure why or how.

I don’t quite get what you’re saying.
If there’s something wrong with PsychoPy Builder, then the problem should also occur when I run it offline.
But when I run the experiment offline using Builder, the problem doesn’t occur.
I suspect the reason is the language difference between the offline and online programs: as far as I understand, PsychoPy uses Python but Pavlovia uses JavaScript. I’m not completely sure about this.

Yes, you’re almost certainly right that it’s an issue with the language conversion, but I’m trying to figure out what part of the conversion (or what specific part of the code that is being converted) is causing this behavior, and that’s difficult to do without seeing the experiment itself.

Can you share a link to the Pavlovia gitlab repository for your experiment?

Sure, I can share.

I hope this helps.

OK, fixed it. In Builder, on the “image” line of the image stimulus, change it from “set every frame” to “set every repeat”. I cloned your experiment and tested this fix; it should work, but you will need to make the change yourself in your own experiment (my copy is not connected to yours). The rest of this is technical stuff.

@jon and @Michael, I want to flag why this happened because it strikes me as a problem. When a stimulus is configured to set every frame, you get this:

if (image.status === PsychoJS.Status.STARTED){ // only update if being drawn

This isn’t custom code, it’s just how PsychoPy translated the trial into JavaScript, but if I’m reading it right, it means the image doesn’t actually update until after it’s already been presented. That would explain the behavior Young was seeing: it presents the image, then very rapidly updates it according to the list of stimuli. However, it holds whatever the previous trial’s image was until after the first frame, so it essentially flashes the previous trial’s image at the start of the next one.
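To illustrate the mechanism, here’s a toy sketch (not actual PsychoJS code — `runTrials`, the status values, and the one-frame-per-trial loop are all simplifications I’ve made up for the example) of how a status-guarded “set every frame” update lags one trial behind:

```javascript
// Simplified stand-ins for PsychoJS status constants.
const STARTED = 'STARTED';
const NOT_STARTED = 'NOT_STARTED';

// Simulate a loop of trials where the image is only updated while it is
// being drawn (status === STARTED), as in the guard above. We record the
// image file shown on the FIRST frame of each trial.
function runTrials(imageFiles) {
  const firstFrameShown = [];
  const image = { status: NOT_STARTED, file: undefined };
  for (const file of imageFiles) {
    image.status = NOT_STARTED;     // new trial: stimulus not yet drawn
    image.status = STARTED;         // frame 1: stimulus starts and is drawn
    firstFrameShown.push(image.file); // ...still holding the OLD file
    if (image.status === STARTED) { // only update if being drawn
      image.file = file;            // update fires one frame too late
    }
  }
  return firstFrameShown;
}

runTrials(['faceA.png', 'faceB.png', 'faceC.png']);
// trial 1 draws nothing; trials 2 and 3 briefly show the previous face
```

Under this guard, the first drawn frame of each trial still holds the previous trial’s image, which matches both the blank first trial and the brief flash of the wrong face that Young described.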

The problem? This is different in JS versus Python. It’s going to be an issue with any experiment that uses a “set every frame” setting. I don’t want to touch code translation right now because it’s a very carefully engineered thing, but this seems like something to keep an eye on.


Thank you very much! Really appreciate it!
If it’s okay, can I ask one more question?
In the online experiment, whenever I give input the system makes a sound (that blip sound), while it runs just fine when I run it offline.

That I have no explanation for. My computer did not do this as far as I noticed, and there’s nothing in the PsychoPy code that should be doing this. What kind of laptop are you using, and what browser?

I’m using a 2019 MacBook Air with Safari. I guess it’s my laptop’s problem, then. FYI, this laptop also has a problem with Psychtoolbox: its KbCheck function, which is supposed to collect keyboard input, doesn’t work on my laptop for some reason, so I can’t give input when I use Psychtoolbox.

Interesting, I’m on a 2017 MacBook Pro, but I’m using Chrome. I’m also running MacOS High Sierra, which is probably a few versions behind what you’re using. I would try it in Chrome and see if that takes the sound away, but otherwise I have no idea.

I ran it using Chrome. Turns out it’s a problem with Safari.

The blip on keypress is a puzzle. We had seen that in an earlier release, but I’m not sure why it’s come back (damn Safari and its non-standard behaviour!). @apitiot will look into it very soon.

As for the updating code - that logic is the same in the Python-generated code: things set to update every frame don’t update if they aren’t being drawn. Maybe the issue is different - maybe the stimulus is somehow updating to the next trial’s value. I’ll need to look into that to see what can be done so that this works for more cases. But certainly setting the image to update on every repeat, not on every frame, is the right thing to do anyway!

Hi, I’m trying to run a study and am experiencing the same issue, even when I have it set to ‘every repeat’

Which issue?

What version are you using?