
Colour ImageStim inconsistent timing

Colour stimuli slower than greyscale stimuli?
Do colour images make loading slow?

I’m trying to present colour images for 250 ms each, all different images of the same size (400 px × 500 px), and I used code for preloading.
However, I’ve not been able to get consistent timing. But when the image stimuli are changed to greyscale, the timing is consistent.

Is that because of the colour? Is there some way I can get consistent timing with colour images?

Or is it because of the image file size? Would compressing the files help?

Any information you can provide me would be greatly appreciated.

Without knowing the details of the source image files, we can’t offer much useful guidance.

But as general advice, the source image files should be 400 × 500 pixels too, i.e. the source images should match the displayed resolution. Otherwise your images will be gobbling up unnecessary memory and time.

Thanks @Michael. Yes, all images are scaled to 400 × 500 px (cropped with GIMP), JPG format, 87 to 314 KB, natural photos of people (downloaded from Google). I attached one of the colour images. The greyscale images are 20 to 25 KB, a lot smaller, so I guessed the file size might be making the difference.

OK, good to know. No, the file size shouldn’t matter much here. Files which store either greyscale or full colour information still get displayed on screen in RGB format: i.e. it takes just as much information (three bytes, one per RGB channel) to display a red pixel (e.g. #AA0000) as it does a grey pixel (e.g. #AAAAAA). So although the duplicated channel information in a greyscale image results in a smaller file on disk, it takes just as much video memory to display as a colour image.

Similarly, compressing the files doesn’t help: the same amount of information still needs to be decompressed to be displayed on screen (i.e. three bytes per pixel, plus perhaps a fourth for transparency).
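To make that concrete, here is a back-of-the-envelope sketch (plain Python, no PsychoPy needed; the `display_bytes` helper is just for illustration): the decoded, on-screen cost depends only on pixel count and channel count, not on the compressed size on disk.

```python
# Video-memory footprint of a decoded image: width x height x channels.
# Both greyscale and colour JPEGs get expanded to RGB for display, so they
# cost the same once decoded, whatever their compressed size on disk.
def display_bytes(width, height, channels=3):
    return width * height * channels

colour_cost = display_bytes(400, 500)     # a 400 x 500 colour image
greyscale_cost = display_bytes(400, 500)  # greyscale source, same dimensions
print(colour_cost)                    # 600000 bytes, roughly 0.6 MB
print(colour_cost == greyscale_cost)  # True: identical once decoded
```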

The issue is therefore likely elsewhere (unless it is something obvious, like the greyscale images being on a fast local drive and the colour ones on a slow network drive). So please show us the code you are using to create and preload your stimuli, and the results of your timing tests.
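If it helps when assembling those timing tests, here is a minimal PsychoPy-free sketch of one way to time a block of code with the standard library (`time_block` is a made-up helper, not a PsychoPy function; in a real experiment you would record the window's frame intervals instead):

```python
import time

def time_block(fn):
    """Return how long (in seconds) calling fn() took."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

# Simulate a 50 ms "presentation" and inspect the measured duration.
elapsed = time_block(lambda: time.sleep(0.05))
print(elapsed >= 0.05)  # True: sleep lasts at least the requested time
```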

Here is some code I used, and I’ve attached an Excel file with the timing results:

imgList = []
path = os.getcwd()
for infile in glob.glob(os.path.join(path, '*.jpg')):
    imgList.append(infile)  # collect the image file names
pictures = [visual.ImageStim(win, img, ori=0, pos=[0, 0]) for img in imgList]

# update component parameters for each repeat
for img in pictures:
    if img.image == os.path.join(path, str(ImageFile1)):
        pass  # (per-trial code elided from this excerpt)
    if ImageFile1 is None:
        pass  # (elided)

I really need a precise 250 ms, but as you can see many of them are not even close to 250 ms. :’(
Would it be easier to help with this if I sent you all the code, @Michael?
colour img time result.xlsx (15.2 KB)

Your preloading is great: you cache a list of ImageStims ahead of time. But you then undo all of that good work by updating those stimuli again later with a .setImage() call. Each time you do that, you go back to the disk and open and decompress the image again, meaning that effectively you didn’t cache the images at all. The only effect this has is to cause timing difficulties, as that work might not be completed within a single screen refresh.

Instead, simply loop through all of your pre-prepared stimuli like this, drawing each for 15 frames (= 250 ms @ 60 Hz):

for picture in pictures:
    for frame in range(15):
        picture.draw()
        win.flip()
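As a quick sanity check on the frame arithmetic (a generic helper, not part of PsychoPy): stimulus durations are always a whole number of screen refreshes, so 250 ms divides evenly only on some refresh rates.

```python
# Convert a desired duration into a whole number of screen refreshes.
def duration_to_frames(duration_s, refresh_hz):
    return round(duration_s * refresh_hz)

print(duration_to_frames(0.250, 60))   # 15 frames on a 60 Hz display
print(duration_to_frames(0.250, 144))  # 36 frames on a 144 Hz display
print(duration_to_frames(0.250, 75))   # 19 frames, i.e. ~253 ms, not 250 ms
```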


Thanks Michael for the code.
However, when I added this the experiment doesn’t work as I intended (2 blocks). It just keeps running, doesn’t respond to [Esc], so I’m unable to get the timing results.

Here is my Builder file, the simplest version; I haven’t modified this one. I have just been trying to work things out in a code version, to which I added preloading and some extra code.

Hi, yes, that suggested code wouldn’t have been compatible with a Builder-generated experiment (I’d assumed you were working with an experiment built from scratch, where you were in control of the drawing loop).

But it should be relatively straightforward to integrate the code you need into your Builder file, rather than editing the .py file. I suggest you do the following:

(1) Remove the Image stimulus components from your Trial_1 and Trial_2 routines. We will replace them with the list of ImageStims you created in code, and draw them in code.

(2) You will still need some sort of dummy stimulus just to control the duration of the trial. e.g. put a small Polygon stimulus in the centre of the screen, set to last for 15 frames. Don’t worry, we’ll draw the image over it, so it won’t be visible.

(3) Insert a Code Component in Trial_1. Put it below the Polygon stimulus, so what it draws will erase the polygon. In its Begin Experiment tab, insert your caching code:

imgList = []
path = os.getcwd()
for infile in glob.glob(os.path.join(path, '*.jpg')):
    imgList.append(infile)  # gather the image file names
pictures = [visual.ImageStim(win, img, ori=0, pos=[0, 0]) for img in imgList]

(4) Disconnect your inner loops from their .xlsx conditions files. We aren’t getting the stimulus names from there anymore. (But keep them connected if there is some other info in those files that you are using).

(5) Put this in the Begin routine tab, so that the name of the current image gets saved in the data for the current trial. i.e. we figure out which image to use based on the trial number of this loop (i.e. Block_loop_1.thisN), and access its .image property, which gives the filename that was used to create it:

thisExp.addData('image_name', pictures[Block_loop_1.thisN].image)

(6) In the Every frame tab, put this to actually draw your image:

pictures[Block_loop_1.thisN].draw() # gets repeated 15 times

i.e. the first image in your list (i.e. pictures[0]) will be drawn for 15 frames. Then on the next trial, pictures[1] will be drawn for 15 frames, and so on. The names of those images will be saved in the data at the start of each trial.
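In case that drawing pattern is unclear, here is a self-contained sketch of the same logic with a stand-in stimulus class (FakeStim and the plain counter are illustrative only; in Builder the counter is Block_loop_1.thisN and the stimuli are real ImageStims):

```python
# A minimal stand-in for an ImageStim, just to trace the draw calls.
class FakeStim:
    def __init__(self, image):
        self.image = image
    def draw(self):
        return self.image  # a real ImageStim would render to the window

pictures = [FakeStim('img%d.jpg' % i) for i in range(3)]  # preloaded once
drawn = []
for thisN in range(len(pictures)):   # one pass per trial
    for frame in range(15):          # 15 refreshes = 250 ms at 60 Hz
        drawn.append(pictures[thisN].draw())

print(len(drawn))           # 45 draw calls: 3 trials x 15 frames each
print(drawn[0], drawn[15])  # img0.jpg on trial 0, img1.jpg on trial 1
```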

Hopefully you can see how to do the same thing in Trial_2, but in the code component there, don’t repeat the code in the Begin Experiment tab. That only needs to be specified once. [Unless you have a different set of images for that part of the experiment? I wasn’t sure. If so, cache those too, but in a different list (say, pictures_2).]

Make sense?


Hi Michael,
I’d like to say it worked, but I can’t. Whatever the reason, I kept getting error messages. I sort of fixed the problems, but it doesn’t run properly: it shows only a grey screen and still doesn’t respond to the escape key.
I cleaned all the preference files and appdata and started fresh, but it’s still not working. Any thoughts?

Very hard to give useful suggestions without details…

Thanks Michael,

I meant: is there any information on what to do when the program doesn’t stop? I have to shut it down.

I started with Builder and the test run was fine, but when I added the images in the Coder, no image is shown and the experiment doesn’t stop until I close the whole PsychoPy program.
I mentioned last time that when I changed the code as above, the experiment kept running, didn’t stop after all the routines, and went back to the first trial, and you said ‘it’s not compatible’. What do you mean by that? The PsychoPy version I have is 1.84.2; is some of the code not working in this version?

Any number of things can go wrong if you edit the script in the Coder view. All of the suggested code I gave in the last detailed message was designed to be used within Builder, in code components. That way, you know the code is being inserted in the right place. I’m not sure why you would have gone back to editing the script in the Coder view?


Omg, it’s working great, Michael, thanks!!!
It works great on a lab computer, though the polygon stimulus is still showing.

I still couldn’t figure out why it doesn’t work on my own computer, but anyway, the most important thing is that it works on the lab computer.

Thanks again!