Failing to present images briefly

If this template helps then use it. If not then just delete and start from scratch.

OS (e.g. Win10): OSX 10.11.6
PsychoPy version (e.g. 1.84.x): 1.83.04
Standard Standalone? (y/n) If not then what?:
What are you trying to achieve?: I want to present an image of a human face for a period that is short enough to prevent recognition of the face.

What did you try to make it work?: I used Builder with various settings, specifying the duration both by frame rate and in seconds (in multiples of 0.0167 s).

What specifically went wrong when you tried that?: The picture was flashed for more than long enough to get a clear view of what it was, so I estimate a duration of several hundred milliseconds.
Include pasted full error message if possible. “That didn’t work” is not enough information.

[Screenshot attached: Screen Shot 2016-08-23 at 3.43.44 PM]
4.1711 We overshot the intended duration of ISI by 0.0134 sec. The intervening code took too long to execute.

Original title: Cannot get iMac 27" i7, 8 GB RAM, AMD Radeon R9 M395X 4096 MB to present a 49 KB image for less than several hundred milliseconds - image is easily visible - set Builder to one frame, 60 Hz frame rate

Hard to know what you’re doing and where it’s going wrong from your post. The most likely issue is that you’re trying to load the image and also display it instantly. Or possibly you’re trying to present it during a static period. Or possibly you’re setting it to display for a certain number of seconds rather than for N frames. Could you show us a screenshot of your routine layout and of your image properties?

The following example is some code I used in the past to see exactly what my different screens were doing, and it may be useful for debugging this moving forward. The code was taken from existing PsychoPy examples/demos, so due credit to the original authors.

# hacked together from some provided demo 
# code to attempt to show a face for only 1 frame 
# and allow measurement of screen rise/fall with
# a photodiode

from psychopy import visual, logging, core, event
visual.useFBO = True  # if available (try without for comparison)

import matplotlib
import pylab

#create a window to draw in
myWin = visual.Window([600, 600], fullscr=True, allowGUI=False, waitBlanking=True)

#keep track of frame rate
myWin.recordFrameIntervals = True

#set up stim
face = visual.ImageStim(myWin, image='face1.jpg')
message = visual.TextStim(myWin,pos=(0.0,-0.75),text='Hit esc / q to quit')

# lazy way to set up a loop we can safely quit from
go = 1
frame = 0

while go == 1:
    frame+=1 # keep track of where we are
    face.draw() # draw face for 1 frame
    message.draw()
    myWin.logOnFlip(msg='frame=%i' %frame, level=logging.EXP)
    myWin.flip() #show it

    #for the next 59 frames, show nothing (assume 60Hz monitor)
    for i in range(59):
        frame+=1 # keep track of where we are
        myWin.logOnFlip(msg='frame=%i' %frame, level=logging.EXP)
        message.draw()
        myWin.flip()
    # allow safe quit
    if event.getKeys(keyList=['escape', 'q']):
        print(myWin.fps())
        myWin.close()
        go = 0


#calculate some values and plot
intervalsMS = pylab.array(myWin.frameIntervals)*1000
m=pylab.mean(intervalsMS)
sd=pylab.std(intervalsMS)
# se=sd/pylab.sqrt(len(intervalsMS)) # for CI of the mean
distString= "Mean=%.1fms, s.d.=%.2f, 99%%CI(frame)=%.2f-%.2f" %(m,sd,m-2.58*sd,m+2.58*sd)
nTotal=len(intervalsMS)
nDropped=sum(intervalsMS>(1.5*m))
droppedString = "Dropped/Frames = %i/%i = %.3f%%" %(nDropped,nTotal, 100*nDropped/float(nTotal))

#plot the frameintervals
pylab.figure(figsize=[12,8])
pylab.subplot(1,2,1)
pylab.plot(intervalsMS, '-')
pylab.ylabel('t (ms)')
pylab.xlabel('frame N')
pylab.title(droppedString)
#
pylab.subplot(1,2,2)
pylab.hist(intervalsMS, 50, histtype='stepfilled')
pylab.xlabel('t (ms)')
pylab.ylabel('n frames')
pylab.title(distString)
pylab.show()

Hi Jon,

Thanks very much for your prompt reply.

I am completely new to Python and PsychoPy. Therefore I elected to use the builder to create an image flasher program. When I opened the Builder, I added an image component. In the Image Properties Dialog, I used a number of different settings. The code was saved and run. Either I saw nothing or I saw an image that was presented for an interval that seemed too long, e.g. several hundred milliseconds. Also the image was stretched in its horizontal dimension.

Here are the screen shots for the latest experiment.

Hi @William_Cleveland. (Not THE data visualisation William Cleveland? If so, very cool. If not, still cool.)

You shouldn’t specify timing values in seconds if you want precise control like this. PsychoPy synchronises stimulus presentation with the discrete hardware screen refresh cycle, and it is unlikely that a time specified in seconds (like 0.167) will line up with a screen refresh onset exactly. Timing by counting frames is very precise, but software timers can’t be used to synchronise precisely with that fixed hardware cycle.

I’m guessing what you actually want is to specify that the image should come on at a start of ‘frame N’ = 10 (as 0.167 s is approximately 10 × 1/60 s) and have a ‘duration (frames)’ of 1 (approx. 16.7 ms). With the current specification, it is possible that the image could be on for a variable period of up to three frames (50 ms).
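
For reference, here is a minimal Coder-style sketch of frame counting (the file name, window settings and frame counts are just placeholders); in Builder the equivalent is setting the start to ‘frame N’ = 10 and the duration to ‘duration (frames)’ = 1:

from psychopy import visual

win = visual.Window(fullscr=True, allowGUI=False)
face = visual.ImageStim(win, image='face1.jpg')  # placeholder file name

# show a blank screen for the first 10 refreshes (approx. 167 ms at 60 Hz)
for frameN in range(10):
    win.flip()

# draw the face on exactly one refresh (approx. 16.7 ms at 60 Hz)
face.draw()
win.flip()

# the next flip clears it again
win.flip()

win.close()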

You have specified that the image file is constant. This means that it will be loaded from disk before the experiment even begins, and so there shouldn’t be any issues with it being ready in time to draw to the screen for just a short presentation period. i.e., being pre-loaded, it can be drawn to the screen instantly and should be able to be displayed for just one frame without issue. Timing issues can arise when loading new images on every trial: this does take finite time to achieve, and so some care is needed to import it during a suitable period (like when a pre-trial fixation cross is being displayed), so that no delays are apparent.
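
If you later do need a different image on each trial, one common pattern (just a sketch, with hypothetical file names and timings) is to call setImage() while the fixation cross is on screen, so that the disk access happens during a period where timing is not critical:

from psychopy import visual, core

win = visual.Window(fullscr=True, allowGUI=False)
fixation = visual.TextStim(win, text='+')
face = visual.ImageStim(win)                  # no image yet; set one per trial

trialImages = ['face1.jpg', 'face2.jpg']      # hypothetical file names

for imageFile in trialImages:
    fixation.draw()
    win.flip()                  # fixation cross comes on
    face.setImage(imageFile)    # load from disk while the fixation is visible
    core.wait(0.5)              # remainder of the fixation period

    face.draw()
    win.flip()                  # face appears on the next refresh
    win.flip()                  # and is cleared one frame later

win.close()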

Your image is appearing stretched because you are specifying its size as [0.5, 0.5], which is likely in ‘norm’ units (normalised to the screen dimensions), i.e. the image will be sized as half the screen width and half the screen height. If the screen’s aspect ratio doesn’t match the image’s aspect ratio, this will cause distortion. Check http://www.psychopy.org/general/units.html for the other units options available to you, which allow you to scale the image in absolute terms, or relative to just one dimension of the screen (height).
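
For example (a sketch only, with a made-up file name and aspect ratio): leaving the size unset draws the image at its native pixel resolution, or you can compute a size that preserves the image’s aspect ratio yourself in ‘height’ units:

from psychopy import visual

win = visual.Window(fullscr=True)

# Option 1: no size given, so the image keeps its native pixel dimensions
face = visual.ImageStim(win, image='face1.jpg')

# Option 2: scale it explicitly, preserving a known aspect ratio (assumed 3:4 here)
aspect = 3.0 / 4.0  # image width divided by image height
faceScaled = visual.ImageStim(win, image='face1.jpg',
                              units='height', size=(0.8 * aspect, 0.8))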

Lastly, the fact that the image needs to be scaled can also be an issue. It is unwise to display multi-megapixel images when you want real-time control: they take much longer to import and decompress from disk, and they take up much more memory. For speed and memory efficiency, it is best to pre-scale your images to match the resolution at which they will be shown on screen. e.g. if your screen is just 1024 × 768 and you want to display an image taken from a camera full-screen, scale it to 1024 × 768 before feeding it to PsychoPy. The extra megapixels won’t be visible on screen at all, yet they can still cause substantial performance and memory problems.
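
That pre-scaling only needs to be done once, outside PsychoPy, e.g. with a small Pillow (PIL) script along these lines (file names are placeholders):

from PIL import Image

im = Image.open('face_original.jpg')   # e.g. a multi-megapixel camera image
im = im.resize((1024, 768))            # match the screen resolution it will fill
im.save('face_1024x768.jpg')           # use this smaller file in PsychoPy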

Regards,

Michael

@William_Cleveland, was your problem solved? If yes, please tick “solved”. If not, let us know where you’re stuck!

Hello
First I have to admit that my reply will not answer the question above or improve this topic.

I have a question related to this topic. I am a complete novice in programming, but at the same time I am forced to create my first experiment. I think it is quite similar to William_Cleveland's experiment: I just want to present images of human faces for some period of time, about 20-30 faces, one by one, with some interval (5 s) between them. (It is easy, I know.) Maybe you have some guidelines for a person who has just bought a book on Python?

Regards

Thanks for your inquiry. My problem was not solved. I also tried the same thing in Pharo Smalltalk 5, with which I have a bit more familiarity. The flash seemed a bit shorter, but still nowhere near the subliminal perception that I was seeking. One problem is that the apparent persistence may have been due to an after image effect. My plan was to follow the face with the presentation of another image that would leave an irrelevant after image, but I didn’t get the chance to implement it yet. Any advice on how to do this would be appreciated.

Thanks,

Lou Cleveland

Hello,

One problem is that the apparent persistence may have been due to an after image effect. My plan was to follow the face with the presentation of another image that would leave an irrelevant after image, but I didn’t get the chance to implement it yet. Any advice on how to do this would be appreciated.

It could be visual persistence/after-image effects that make stimuli appear to last longer than they should; I’ve seen that before in experiments with exposure intervals as short as ~1 ms. If your experiment allows, you could present a noise patch or similar immediately afterwards to disrupt processing of the after-image. Noise patches are very easy to generate using NumPy and ImageStim. I use PsychoPy exclusively as a Python library (Coder), so I’m not familiar with generating these patches through Builder.
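
As a rough Coder-style sketch of what I mean (the array size and stimulus size here are arbitrary):

import numpy as np
from psychopy import visual

win = visual.Window(fullscr=True)

# random greyscale values in the range -1 to 1, which is what ImageStim expects
noiseArray = np.random.random((256, 256)) * 2 - 1
noisePatch = visual.ImageStim(win, image=noiseArray, size=(0.5, 0.5))

noisePatch.draw()
win.flip()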

To all,

I recently got the following code to present two images in rapid succession so that only the last was seen. The code, which was typed into the Coder, is as follows:

import numpy
from psychopy import core, visual

win = visual.Window([2738, 1875])  # exact size of both images, so no rescaling

imageStimulus1 = visual.ImageStim(win=win, image='/users/williamcleveland/Desktop/BDD/BDDExp1/JHC1.jpg')
imageStimulus2 = visual.ImageStim(win=win, image='/users/williamcleveland/Desktop/BDD/BDDExp1/Mask1.jpg')

listOfImageStimuli = []
listOfImageStimuli.append(imageStimulus1)
listOfImageStimuli.append(imageStimulus2)

# draw each image on its own frame; the second immediately masks the first
for imageStimulus in listOfImageStimuli:
    imageStimulus.draw()
    win.flip()

I found that the window size had to be matched to the image size as Michael suggested, otherwise the first image was visible, evidently because the second image appeared too late to mask the first.

I would like to present a series of image pairs, in which the second acts as a mask of the first. As I understand it, the existing code displays each image for one frame. I would like the first image of a pair to be presented for a variable number of frames, say 1 to 5. The second image of a pair should also be presented for a few frames, but since this image functions as a mask, its duration is not as critical as that of the first. I would also like to add a variable delay between the presentations of adjacent image pairs. Any help on how to do this would be greatly appreciated.
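
Something along these lines is roughly what I have in mind, though I have not tested it (file names, frame counts and the delay are just placeholders):

from psychopy import visual

win = visual.Window([2738, 1875])  # still matched to the image size

# each entry is (target image, mask image, number of frames to show the target)
pairs = [
    ('JHC1.jpg', 'Mask1.jpg', 1),
    ('JHC2.jpg', 'Mask2.jpg', 3),
]
maskFrames = 5      # mask duration is not critical
delayFrames = 60    # blank interval between pairs (1 s at 60 Hz)

for targetFile, maskFile, targetFrames in pairs:
    target = visual.ImageStim(win=win, image=targetFile)
    mask = visual.ImageStim(win=win, image=maskFile)

    for frameN in range(targetFrames):   # target for a variable number of frames
        target.draw()
        win.flip()
    for frameN in range(maskFrames):     # then the mask
        mask.draw()
        win.flip()
    for frameN in range(delayFrames):    # then a blank inter-pair interval
        win.flip()

win.close()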

Incidentally, I have an AMD Radeon R9 M395X 4096 MB video card. I assume that video card memory will limit how many imageStimuli I can put in the listOfImageStimuli.

I also tried to create a monitor for my iMac27 using the Monitor dialog. I hit the save button. However, when I specified monitor = iMac27 in the above line of code for win, the monitor could not be found. Should I set up iMac27 monitor in the Coder?

Also, is there any way of saving the code typed into the Coder so that it can be printed with formatting? I like to read my code when I am not at a computer.

Thanks for your help.