As an entry-level psychology student, I looked into using PsychoPy (or PsychoJS, for an online version) for my first experiment assignment.
After reading the documentation and playing with the software and demos locally, I'm having a very hard time getting even close to the desired scenario. The documentation encouragingly suggests that the Builder can be used for most things, but I got completely stuck.
Is there any tutorial or an existing demo which would do the following?
a) Simple demographic questions. I played with the BigFive demo, but the layout and controls were awkward (text not scaled to screen, reversed scrolling from my system settings, the “continue” button having no label etc).
b) A video played.
c) Several response options as images (in a row or grid, scaled to screen).
d) Time taken recorded.
e) Ideally b) and c) could be taken from a CSV (URL to video, URLs to images) and presented in a random order.
f) All presented online (via browser) for easy accessibility.
Sorry if the question is too broad; I'm just trying to gauge whether PsychoPy/JS is a feasible solution for this simple course task.
a) Simple demographic questions. I played with the BigFive demo, but the layout and controls were awkward (text not scaled to screen, reversed scrolling from my system settings, the “continue” button having no label etc).
The Form component is still in beta, so it can be tricky online. How many questions do you have? If only a few, I suggest gathering demographics in the startup GUI: click the cog icon and, in the Experiment info section, use the + icon to add a field. You can add one for age, then one for gender, and so on. If you want a dropdown (e.g. for handedness), you can give the field a list of options; in recent PsychoPy versions a list default appears as a dropdown in the startup dialog.
b) A video played.
This should be simple: add a Movie component from the Stimuli section of the Components panel and add the path to your movie. Is there something you tried that didn't work?
c) Several response options as images (in a row or grid, scaled to screen).
Add several image components and then add a mouse component. In the mouse component, make sure "End Routine on press" is set to "valid click", and list the names of your image components in the "Clickable stimuli" field.
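Under the hood, a "valid click" just checks whether the click position falls inside one of the clickable stimuli. A minimal plain-Python sketch of that logic (no PsychoPy needed; the names `im1`–`im3` and the positions/sizes are illustrative):

```python
# Plain-Python sketch of the "valid click" logic: the routine only ends
# when the click lands inside one of the clickable stimuli.

def contains(click, center, size):
    """Rectangle hit-test: is `click` inside a box at `center` of `size`?"""
    cx, cy = center
    w, h = size
    x, y = click
    return abs(x - cx) <= w / 2 and abs(y - cy) <= h / 2

# Hypothetical image components: name -> (center, size) in "norm" units
images = {
    "im1": ((-0.5, -0.3), (0.4, 0.4)),
    "im2": (( 0.0, -0.3), (0.4, 0.4)),
    "im3": (( 0.5, -0.3), (0.4, 0.4)),
}

def clicked_image(click):
    """Return the name of the image under the click, or None."""
    for name, (center, size) in images.items():
        if contains(click, center, size):
            return name
    return None
```

In Builder you never write this yourself; listing the components in "Clickable stimuli" does the equivalent, and the mouse component can record the clicked stimulus's name in your data file.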
d) Time taken recorded.
In your mouse component, in the Data tab, set "Save mouse state" to "on click"; that will save the mouse click times to your data file.
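The saved click time is simply the elapsed time since the routine started. A stdlib sketch of that idea (a minimal stand-in for the routine timer; PsychoPy's own `core.Clock` works along these lines with a monotonic timer):

```python
import time

class Clock:
    """Minimal stand-in for a routine timer used to compute reaction times."""
    def __init__(self):
        self.reset()

    def reset(self):
        # Monotonic time is immune to system-clock adjustments.
        self._t0 = time.monotonic()

    def getTime(self):
        return time.monotonic() - self._t0

rt_clock = Clock()       # reset at routine onset
# ... participant watches the video, then clicks ...
rt = rt_clock.getTime()  # reaction time in seconds, saved on click
```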
e) Ideally b) and c) could be taken from a CSV (URL to video, URLs to images) and presented in a random order.
You need a conditions spreadsheet (e.g. a CSV) with one column per stimulus and one row per trial, like this:

im1, im2, im3, movie
path to image 1 for this trial, path to image 2 for this trial, path to image 3 for this trial, path to movie for this trial
path to image 1 for this trial, path to image 2 for this trial, path to image 3 for this trial, path to movie for this trial
path to image 1 for this trial, path to image 2 for this trial, path to image 3 for this trial, path to movie for this trial
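A sketch of what the loop does with that file: read one row per trial and visit the rows in shuffled order. This is plain Python with made-up `stim/…` paths; in Builder, a loop with loopType "random" pointed at the file does this for you.

```python
import csv
import io
import random

# Inline stand-in for the conditions file (in practice, a .csv on disk;
# the stim/... paths are placeholders).
conditions_csv = """im1,im2,im3,movie
stim/a1.png,stim/a2.png,stim/a3.png,stim/a.mp4
stim/b1.png,stim/b2.png,stim/b3.png,stim/b.mp4
stim/c1.png,stim/c2.png,stim/c3.png,stim/c.mp4
"""

trials = list(csv.DictReader(io.StringIO(conditions_csv)))
random.shuffle(trials)  # random trial order, like loopType "random"

for trial in trials:
    # In Builder these become the variables $im1, $im2, $im3, $movie.
    movie_path = trial["movie"]
    image_paths = [trial["im1"], trial["im2"], trial["im3"]]
```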
Then, in your movie component, set the movie field to $movie. Do the same for your image components, using $im1, $im2 and $im3 (note that the field is set every repeat, meaning it updates every trial).
f) All presented online (via browser) for easy accessibility.
The above would all work online. Here's a tutorial to help get you started.
Thank you again for the great tips; I managed to get the basics working.
There was some weirdness where selecting a spreadsheet did not let me use variables matching the column names afterwards, but removing the spreadsheet, adding the variables, and re-adding the spreadsheet fixed it.
Regarding the image ordering, is there a way (in the builder) to randomize the image order?
That is, each video would still have its own set of images, but the image order would vary randomly.
What I do is keep a separate list of the three positions in a code component and shuffle it each trial. If the image positions are then set every repeat, I can use the mouse response to know which image was clicked without knowing where it appeared.
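A sketch of that code component (the slot coordinates and variable names are illustrative). The shuffle would go in the "Begin Routine" tab, with each image's position field set to the matching variable and updating every repeat:

```python
import random

# Three slots for the response images, (x, y) in "norm" units (illustrative).
slots = [(-0.5, -0.3), (0.0, -0.3), (0.5, -0.3)]
random.shuffle(slots)              # new left/centre/right assignment each trial
pos_im1, pos_im2, pos_im3 = slots  # use $pos_im1 etc. in the position fields
```

Because each image keeps its identity (im1 is always im1, wherever it lands), the mouse component's record of the clicked stimulus still tells you which image was chosen, independent of its on-screen position.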