Hi there, I have concatenated two experiments together: first I test participants’ screen parameters, and then I run the formal experiment, which was built with PsychoPy3.
By editing the HTML document (adding the first part’s code to the PsychoPy3-generated “index.html”), I can now successfully run the whole experiment on pavlovia.org.
But I want to customize the location of the stimulus based on the screen parameters measured in the first part. Since each participant’s screen size, resolution, and viewing distance may differ, it is not possible to present stimuli in “height” or “norm” units while keeping the visual angle consistent across participants. I think it should be possible to present my stimuli in “pix” units, with the values (how many pixels) calculated from those parameters.
Description of the problem:
That means I need access to the stimulus parameters, such as the presentation locations (imported from conditions.xlsx), in “main.js”, and then revise them according to the parameters measured beforehand. How can I do that?
Thank you very much!!!
May I have your help? @jon @wakecarter
I would recommend that you use height units (or norm) and then calculate the x and y scaling factors in the first part of your experiment.
Visual angle is going to be pretty difficult to work with since you don’t have control over viewing distance or the size of pixels. However, you may find my credit card screen scale experiment useful: https://pavlovia.org/Wake/screenscale.
Best wishes,
Wakefield
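P.S. To illustrate the credit-card idea, here is a minimal sketch (the names are illustrative, not taken from the actual screenscale code): the participant resizes an on-screen rectangle until it matches a physical bank card, which gives a pixels-per-centimetre estimate.

const CARD_WIDTH_CM = 8.56; // standard ID-1 bank card width (85.60 mm)

// cardWidthPx: on-screen width of the rectangle once the participant
// has resized it to match their physical card
function pixelsPerCm(cardWidthPx) {
    return cardWidthPx / CARD_WIDTH_CM;
}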
Thank you for your answer!
Actually, I applied a method called the “virtual chinrest” (see doi.org/10.1038/s41598-019-57204-1) to measure the viewing distance. Its basic assumption is that the angular position of our blind spot is consistent (about 13.5 degrees). By asking participants to respond to the disappearance of a ball moving horizontally on the screen (disappearance means the ball has entered the blind spot), we can measure the on-screen length corresponding to 13.5 degrees, and from that we can derive the viewing distance.
btw: the screen size is of vital importance, so the first step of the “virtual chinrest” is a card measurement task similar to your screen scale experiment, while the second part is the blind-spot task.
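In terms of the geometry, the calculation is roughly as follows (a sketch, not the paper’s actual code; pxPerCm comes from the card task):

const BLIND_SPOT_DEG = 13.5; // assumed angular offset of the blind spot

// fixationToBallPx: horizontal distance in pixels between the fixation
// point and where the ball disappeared
function viewingDistanceCm(fixationToBallPx, pxPerCm) {
    const offsetCm = fixationToBallPx / pxPerCm;
    // the on-screen offset is the side opposite the blind-spot angle,
    // so distance = opposite / tan(angle)
    return offsetCm / Math.tan(BLIND_SPOT_DEG * Math.PI / 180);
}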
Very nice. Please could you make that part of the experiment (virtual chinrest with/without card measure) public so I can link to it from my crib sheet?
Once you have the screen size and the viewing distance, it should be possible to calculate the correct scaling factor.
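For example, something along these lines (a sketch, assuming you have pixels per cm from the card task and the viewing distance in cm from the blind-spot task):

// pixels subtended by one degree of visual angle at this distance
function pxPerDeg(pxPerCm, viewingDistCm) {
    const cmPerDeg = 2 * viewingDistCm * Math.tan(0.5 * Math.PI / 180);
    return cmPerDeg * pxPerCm;
}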
Of course. Please refer to https://github.com/QishengLi/virtual_chinrest/
The author provides an example at src/example.html
Did you use their js code unaltered or did you recode it into PsychoJS?
I think it’s unaltered.
I just copied their code to Pavlovia and it works: https://pavlovia.org/LMH/virtual_chinrest
Hi, so how can I alter the stimulus parameters (presentation locations) in main.js? Thanks!
You either need to end the virtual chinrest with a redirection to your main experiment, passing the scaling factors in the URL / expInfo, or you need to incorporate it into a PsychoPy routine. What variables does it come up with? I imagine you should end up with pixels per degree in the x and y directions. If so, you could try building your main experiment in pixels and multiplying the desired x and y positions and sizes (in degrees) by the x and y pixels per degree.
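For example (a sketch; xPosDeg and yPosDeg would come from your conditions file, and pxPerDegX / pxPerDegY from the chinrest measurements):

// convert a position specified in degrees into pixels
const xPosPix = xPosDeg * pxPerDegX;
const yPosPix = yPosDeg * pxPerDegY;
// then use [xPosPix, yPosPix] as the stimulus position in 'pix' units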
Yep, I have already done the redirection and got the pixels per degree in the x and y directions. What confuses me is how to write the JavaScript code: which variable should be manipulated, what the JavaScript syntax is…?
Shown below is the JS code generated by PsychoPy. I have no idea where to start.
// loop handler generated by PsychoPy; trial parameters come from conditions.xlsx
trial_loop = new TrialHandler({
  psychoJS: psychoJS,
  nReps: 1, method: TrialHandler.Method.SEQUENTIAL,
  extraInfo: expInfo, originPath: undefined,
  trialList: 'conditions.xlsx',
  seed: undefined, name: 'trial_loop'
});
I would write Python code in auto-translate code blocks at the top of your routines. Just try writing things like xPosScaled = XPos * XScale, where XPos comes from your Excel file and xPosScaled gets used in your stimulus.
I finally got it! In the function “xxxRoutineBegin”, the code that sets the stimulus position is quite clear, so I can change it easily.
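For anyone reading later, the generated code there looks roughly like this (component and variable names depend on your own experiment; these are illustrative), so the scaling can be applied in place:

// inside the generated trialRoutineBegin() function
// was: stim.setPos([xPos, yPos]);
stim.setPos([xPos * pxPerDegX, yPos * pxPerDegY]);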
And thank you again for your help, I appreciate your patience very much. @wakecarter