This is my first time creating an experiment with PsychoPy (my previous experience is with E-Prime). I am attempting to create an experimental design where a different number of stimuli (playing cards) is presented on each trial. Additionally, as the number of cards for each trial changes, I would like the relative positions of the cards to change as well. For example, if four cards are presented I would like them arranged in a square array, but if three cards are presented I would like them arranged in a linear array.
My problems are two-fold:
I’ve created a column in my Excel file for position. However, when I run the experiment, I receive the error "ValueError: could not convert string to float: (-.25, .50)". I assume my problem is that PsychoPy is not recognizing the column as a position, and I’m not certain how to code my way out of that issue.
When the number of stimuli changes (e.g., from 4 to 2), a white blank spot appears on the screen in the relative position of where the other stimuli would be. How can I ensure that only the desired number of stimuli appear on the screen and nothing else?
I’ve attached the Excel file I’m using for my conditions as well as the PsychoPy experiment I’ve built thus far. Below are the specs.
Hi @David_Marra, in response to your first error: you can rectify it by adding the following to your position argument: $eval(StimImage1Position1) - see also here for an explanation of the error and another way to avoid it.
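As a rough illustration of what is going on (not taken from your files): the value in the Excel cell reaches PsychoPy as a text string rather than a pair of numbers, and eval() turns that string back into something usable as a position:

# the conditions file supplies the position as a string, e.g. "(-.25, .50)"
pos_string = "(-.25, .50)"
pos = eval(pos_string)  # -> (-0.25, 0.5), a tuple PsychoPy can use as a position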
From your current upload, it is not clear to me how you determine the number of stimuli that appear on the screen.
A quick and dirty solution to this would be to supply a position for the extra stimuli that would place them slightly outside the viewable boundaries of the display, e.g. [1.25, 0] would be beyond the right-hand edge.
Alternatively, you can specify opacity values for each stimulus: 1 for visible, 0 for invisible. But the invisible ones will still need a valid position to be specified.
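As a sketch of that second approach (the column and component names here are just assumptions, not taken from your files): you could add columns such as Opacity1 to Opacity4 to the conditions file, each holding 1 or 0, and either type $Opacity1 etc. into each image component's opacity field, or set them all in a Begin Routine code component:

# Begin Routine: hide any stimulus not needed on this trial
# Opacity1..Opacity4 are assumed conditions-file columns (1 = show, 0 = hide)
# StimImage1..StimImage4 are assumed to be the names of the image components
for stim, opac in zip([StimImage1, StimImage2, StimImage3, StimImage4],
                      [Opacity1, Opacity2, Opacity3, Opacity4]):
    stim.opacity = opac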
Thank you so much! The first problem was solved perfectly by wrapping the position variable in $eval().
I realize that the contingency table I provided was incomplete. The revised contingency table demonstrates how I have attempted to alter the number of stimuli per trial: I simply omit the stimulus name from the column. As mentioned earlier, this leaves a white blank spot on the screen in the relative position of where the missing stimulus would be.
Thank you for your help thus far. I have been able to fix the two issues above; unfortunately, two other problems have arisen that I am unable to resolve.
After the end of the testing trial, we would like to have the total number of correct responses as well as the average RT for each individual trial printed at the end of the experiment (as opposed to opening up the data file and calculating it ourselves). The problems are:
We are unable to get PsychoPy to recognize that more than one visual stimulus may be correct. We have used the CorrAns format provided in a previous blog post, i.e. [‘stim1’, ‘stim2’]. Unfortunately, this is not working when the stimuli are jpg images (vs. keyboard presses).
Once we are able to resolve issue #1, the next problem is getting PsychoPy to keep track of RT and the number of correct answers and to have them printed in the very last Routine. We have used the code below, provided in another blog post:
for stimulus in [DscmImg1, DscmImg2, DscmImg3, DscmImg4, DscmImg5, DscmImg6, DscmImg7, DscmImg8]:
    # check if the mouse is pressed within the current one:
    if mouse.isPressedIn(stimulus):
        # Yes, so store the reaction time in the data:
        thisExp.addData('RT', t)
        # check if the stimulus' image filename matches the correct answer:
        if stimulus.image == corrAns:
            thisExp.addData('correct', 'True')
        else:
            thisExp.addData('correct', 'False')
        # end the trial once done here:
        continueRoutine = False
        # stop any further checking:
        break
The data output file is recording every selection as “False”, even when the ‘corrAns’ stimulus is clicked.
Again, the issue is getting PsychoPy to print this onscreen vs. going through the data output files to calculate it ourselves.
Is this something that PsychoPy is capable of? I have attached an updated contingency file for you to review.
I can’t really work out the details here, but if you have a list of possible correct values, then you should change the == in your comparison to in, e.g. something roughly like this (assuming corrAns already holds a list of the acceptable image filenames for the trial):
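# corrAns is assumed to be a list of acceptable image filenames for this trial
if stimulus.image in corrAns:
    thisExp.addData('correct', 'True')
else:
    thisExp.addData('correct', 'False')

Bear in mind that stimulus.image should contain whatever string was put in the image field (e.g. the full filename, including any path and extension), so the entries in corrAns would need to match that exactly.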