
Resizing images for degree of visual angle


I created a group of image stimuli at 60 pixels (height and width) so that on the following monitor they appeared at 1.75 degrees of visual angle (monitor: 1366x768 px, 57 cm viewing distance, width = 340 mm, height = 245 mm).

This worked out fine, but now that I am using a new monitor (1024x768, width = 410 mm, height = 300 mm) those same stimuli appear larger and overlap. I now need my stims at around 37 pixels (width and height). Rather than making new ones, is there a way to have PsychoPy resize my old stimuli?
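For reference, the matching pixel size on the new monitor can be worked out directly from the monitor geometries quoted above. A quick sketch (numbers taken from the question; the viewing distance on the new monitor is assumed to be the same 57 cm):

```python
import math

def visual_angle_deg(size_px, screen_px, screen_mm, dist_mm):
    """Visual angle (degrees) subtended by a stimulus of size_px pixels."""
    size_mm = size_px * screen_mm / screen_px          # stimulus size on screen
    return math.degrees(2 * math.atan(size_mm / (2 * dist_mm)))

def px_for_angle(angle_deg, screen_px, screen_mm, dist_mm):
    """Pixels needed to subtend angle_deg on the given monitor."""
    size_mm = 2 * dist_mm * math.tan(math.radians(angle_deg) / 2)
    return size_mm * screen_px / screen_mm

# Old monitor: 1366 px across 340 mm, viewed at 570 mm
angle = visual_angle_deg(60, screen_px=1366, screen_mm=340, dist_mm=570)

# New monitor: 1024 px across 410 mm, same viewing distance (an assumption)
new_px = px_for_angle(angle, screen_px=1024, screen_mm=410, dist_mm=570)
print(round(new_px))  # → 37, matching the figure in the question
```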



Assuming you’re using code:
When creating your ImageStim object, you can specify the units used and the X/Y size of the image in those units.

myImage = visual.ImageStim(win, units='deg', size=[1.75, 1.75])

Degrees of visual angle are supported and calculated automatically from the information you enter in the Monitor Center.
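Under the hood, the 'deg' unit uses a small-angle linear approximation: roughly, size_cm = degrees × distance_cm × tan(1°), then converted to pixels using the screen width you entered. A rough pure-Python sketch of that conversion (assuming the formula used by psychopy.tools.monitorunittools.deg2pix; check that module for the authoritative version):

```python
import math

def deg2pix(degrees, dist_cm, screen_width_cm, screen_width_px):
    # Small-angle approximation: 1 deg subtends about dist * tan(1 deg) cm
    cm_size = degrees * dist_cm * math.tan(math.radians(1))
    return cm_size * screen_width_px / screen_width_cm

# 1.75 deg on the original monitor (1366 px, 34 cm wide, viewed at 57 cm)
px = deg2pix(1.75, dist_cm=57, screen_width_cm=34, screen_width_px=1366)
print(round(px))  # → 70
```

Note that this comes out slightly larger than the 60 px originals, so PsychoPy would gently upscale them, which relates to the stretching question below.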

Thanks, that’s very helpful. Is it just a case of making sure my images are large enough that they are never being stretched? Scaling down isn’t a problem, is it?

In terms of appearance, it will depend on the size and contents of the source image, and the target size it gets scaled to. I’ve never had any issues with it, but images with hard edges might sometimes cause artifacts.

So the best option is to test it, and if you run into visual distortion then resample the images externally. You could do this easily by batch scaling all of them together (e.g. with actions in Photoshop).
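If you do end up resampling externally, the batch job is also a few lines of Python. A sketch using Pillow (the directory names and 37 px target are placeholders for your own):

```python
from pathlib import Path

from PIL import Image  # pip install pillow

def batch_resize(src_dir, dst_dir, size=(37, 37)):
    """Resize every PNG in src_dir to `size`, writing copies to dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        with Image.open(path) as img:
            # LANCZOS resampling gives good quality when downscaling
            img.resize(size, Image.LANCZOS).save(dst / path.name)

# e.g. batch_resize("stimuli", "stimuli_small", size=(37, 37))
```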


As a final note, there’s the possibility it could negatively impact performance. If it were me, I’d resize them to the target size once, rather than have PsychoPy rescale them every time the ImageStim is created or changed.

Actually, the only performance hit comes from using images substantially too large, because of the wasted memory and the extra time taken to upload them to the graphics card. The rendering algorithm is the same whether stimuli are roughly the right size or exactly the right size, and graphics cards are very well optimized for the rescaling transforms required. Just don’t use an 8-megapixel image for a 200x300 pixel stimulus and you’ll be fine.
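To put rough numbers on the memory point (assuming 4 bytes per pixel for an uncompressed RGBA texture, which is typical once an image is uploaded to the GPU):

```python
bytes_per_px = 4  # RGBA, 8 bits per channel

oversized = 8_000_000 * bytes_per_px   # 8-megapixel source image
target = 200 * 300 * bytes_per_px      # texture at the actual stimulus size

print(oversized / 1e6, "MB vs", target / 1e3, "KB")  # → 32.0 MB vs 240.0 KB
```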


Great to know, thanks! Think I misinterpreted the docs.