Windows 7, PsychoPy 1.83.03
Hi folks, I’m working on an experiment involving a temporal discrimination task - i.e. short (100 ms) and long (500 ms) visual and auditory stimuli whose duration the participants have to judge.
The visual stimuli are what’s causing me problems here.
Responses are measured, as usual, starting from stimulus onset. Thus, if a subject recognises the long stimulus as such before 500 ms have elapsed, they should respond - however, the stimulus is to be presented for its full 500 ms.
Since the response-stimulus interval (RSI) must remain constant (600 ms) - i.e. the interval between the participant’s keypress and the presentation of the next visual stimulus - I need a blank screen of varying duration between two visual stimuli:
If the subject responds early, the visual stimulus will remain on screen for the majority of the 600 ms interval; if they respond later, the old stimulus will take up some of the time while the remainder of the 600 ms is a blank screen, then the next stimulus appears.
This isn’t a problem for auditory stimuli, since the sound.Sound() function allows me to set a fixed duration, and also I can just cut my audio files to the desired length in Audacity.
The visual.Rect() function, in contrast, does not let me set a fixed duration; to my knowledge, I can only draw the stimulus, flip the window, and then use core.wait() for either 100 or 500 ms.
I thought about using core.wait() before and after the subject’s response: record the time until the reaction, subtract it from the total stimulus duration (500 ms), and then subtract that remaining stimulus duration from the maximum possible blank-screen interval (600 ms). I could then draw the stimulus again and core.wait() for the remaining stimulus duration, then flip the screen and leave it blank for the remaining “blank duration”.
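To make sure I have the arithmetic right, here is a minimal sketch of that calculation on its own (all names are illustrative, durations in seconds):

```python
# Sketch of the timing arithmetic described above; names are illustrative.
STIM_DUR = 0.5  # long stimulus: 500 ms
RSI = 0.6       # fixed response-stimulus interval: 600 ms

def intervals_after_response(rt):
    """Given the reaction time from stimulus onset, return how long the
    stimulus should stay on screen after the keypress and how long the
    subsequent blank screen should last."""
    remaining_stim = max(STIM_DUR - rt, 0.0)  # 0 if the response came late
    blank = RSI - remaining_stim              # rest of the RSI is blank
    return remaining_stim, blank
```

For example, a response 300 ms after onset would leave 200 ms of stimulus plus a 400 ms blank, so the next stimulus still starts 600 ms after the keypress.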
However, I am not sure how much latency this will create with such calculations going on in the background.
Especially since, during the response-stimulus interval, the experimenter needs to type in the accuracy of the vocal responses given by the participant. I’m using the event.getKeys() function for this - so PsychoPy can’t just core.wait() and do nothing during the period between the response keypress and the onset of the next stimulus. Hence, the following still needs to happen while the stimulus and/or blank screen are being presented for a varying duration:
```python
core.wait(RSI)
exp_resp = event.getKeys(keyList=["num_1", "num_3", "escape"])
if not exp_resp:                 # no key pressed during the interval
    resp = None
elif exp_resp[0] == "num_1":     # getKeys() returns a list of key names
    resp = 'left'
elif exp_resp[0] == "num_3":
    resp = 'right'
elif exp_resp[0] == "escape":
    core.quit()
```
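One way around the blocking core.wait() might be to poll in a short loop instead, so the keyboard is checked repeatedly while the interval elapses. A minimal sketch, assuming a core.Clock (here `rsi_clock`, an illustrative name) that is reset at the participant’s keypress:

```python
from psychopy import core, event

RSI = 0.6
rsi_clock = core.Clock()  # assumed to be reset at the participant's keypress

# Collect the experimenter's coding keys while the RSI elapses,
# instead of blocking in core.wait(RSI).
resp = None
while rsi_clock.getTime() < RSI:
    for key in event.getKeys(keyList=["num_1", "num_3", "escape"]):
        if key == "escape":
            core.quit()
        elif key == "num_1":
            resp = 'left'
        elif key == "num_3":
            resp = 'right'
```

The same polling pattern could also run while the stimulus is still on screen, since nothing in the loop blocks for longer than one getKeys() call.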
So is there any way to either:
- Give a visual stimulus a fixed duration stored as part of the stimulus object itself, or
- Use core.wait() and the calculations I described above without creating latency or preventing the coding of vocal responses as described above?
I also thought about using a while loop, but then I have the problem of not being able to get out of that loop when the subject responds or the experimenter does the coding.
The issue is that the stimulus needs to be presented both before and after the subject responds, in both cases for a varying duration (depending on when the participant responds).
I’ve also found this method of going frame-by-frame - though I am not sure whether this is going to result in latency as well:
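For comparison, here is a rough sketch of what a frame-based trial could look like in my case. It is not the linked method; the window setup, key names, and the 60 Hz refresh rate are all illustrative assumptions, and it assumes the response arrives while the stimulus is still on screen:

```python
from psychopy import visual, core, event

FRAME_RATE = 60            # assumed monitor refresh rate in Hz
STIM_DUR, RSI = 0.5, 0.6   # seconds

win = visual.Window(fullscr=True)
stim = visual.Rect(win, width=0.5, height=0.5)

trial_clock = core.Clock()
rt = None

# Present the stimulus for exactly its duration in frames, polling for the
# participant's response on every refresh ("left"/"right" are placeholders).
for frame in range(int(round(STIM_DUR * FRAME_RATE))):
    stim.draw()
    win.flip()  # blocks until the next screen refresh
    keys = event.getKeys(keyList=["left", "right"], timeStamped=trial_clock)
    if keys and rt is None:
        rt = keys[0][1]  # reaction time from stimulus onset

# Assuming the response arrived during the stimulus, the blank lasts for
# whatever is left of the RSI after the residual stimulus time.
blank_dur = RSI - max(STIM_DUR - rt, 0.0) if rt is not None else RSI
for frame in range(int(round(blank_dur * FRAME_RATE))):
    win.flip()  # blank frames; the experimenter's keys could be polled here
```

Since win.flip() waits for the vertical refresh, each loop iteration lasts one frame (about 16.7 ms at 60 Hz), so the timing granularity is one refresh rather than whatever core.wait() plus the surrounding calculations would give.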