
Getting interpolated texture of image


I’m having difficulty finding the best way to generate a stimulus set. I have a larger picture that I show zoomed-in slices of. Because the resolution of the slices is small, they are scaled up, which is no issue. However, as a control I would also like to display these same slices with their pixels shuffled. The problem is that shuffling the slice textures shuffles them before they are scaled, and then the shuffled slices get interpolated. Is there a way of obtaining the scaled textures that PsychoPy actually displays?

One workaround I have been using is drawing the scaled, unshuffled slice to the back buffer, grabbing it with win._getRegionOfFrame(buffer='back'), and then shuffling that image, but this doesn’t seem ideal. Any help would be appreciated.


I think you’re doing this the best way possible. The scaling and interpolation take place in OpenGL, i.e. on the graphics card, so the result isn’t readily available unless you draw it. I looked into the code in visual.ImageStim and visual.TextureMixin._createTexture and couldn’t find any variable holding that matrix - only commands to the graphics card.

If you time it, my guess is that your solution is reasonably fast (a few milliseconds). Try doing:

# Set things up somewhere in the beginning of your script
from psychopy import core
clock = core.Clock()

# At the appropriate location in your script:
clock.reset()  # reset timer
# your code here
print(clock.getTime())  # elapsed time in seconds

Can you just set interpolate=False for the stimulus? That will switch to using nearest-neighbour instead of linear interpolation.


Hi, thanks for the comments. I do want the images interpolated when they’re zoomed in, which is why I need to somehow get that interpolated image in order to shuffle the pixels.

I timed my method, and creating the stim instance and drawing then pulling only takes about 20 ms. This isn’t too bad since I do all the generation ahead of time. Glad to know there’s not an obvious better way.

On a related note, my method of shuffling the pixels is very slow, about 200 ms, which adds up quickly with many images.

# draw and get screencap
image = win._getRegionOfFrame(buffer='back')

# scale rgb from -1 to 1
cap = np.asarray(image) / 255.0 * 2 - 1

# shuffle
np.random.shuffle(cap.reshape(-1, cap.shape[-1]))  # shuffles whole pixels in place

# push texture

Is there a better way of shuffling images? A search online and through SO didn’t bring up much. I tried various ways of flattening and transforming without success.
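One approach that is often faster than np.random.shuffle on a 2-D view is to build a random permutation of pixel indices and apply it with fancy indexing, which copies all pixels in a single vectorised pass. A minimal sketch in plain NumPy (the array shape and values here are just stand-ins for the screen capture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the captured image, already scaled to [-1, 1]
cap = rng.uniform(-1, 1, size=(480, 640, 3))

# Flatten to a list of pixels (rows of length 3), index with a random
# permutation, and reshape back. Whole RGB triplets stay intact, as in
# the shuffle above, but the copy happens in one vectorised operation.
flat = cap.reshape(-1, cap.shape[-1])
shuffled = flat[rng.permutation(flat.shape[0])].reshape(cap.shape)
```

Whether this beats the in-place shuffle depends on array size and NumPy version, so it’s worth timing both on your actual images.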


Then it sounds like you’re doing a pretty efficient job already. Pixel operations are always slow when they have to be done off the graphics card. You could do something like this with an OpenGL shader program, but you’ll probably spend more time writing it than you’ll save running your study.


Regarding the interpolation question, I have found scipy.ndimage.interpolation.zoom to be useful.

For example:

zoom = 4
z_img = scipy.ndimage.interpolation.zoom(img, zoom=[zoom, zoom, 1], order=0)
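Note that order=0 is nearest-neighbour resampling, so for an integer zoom factor the same upscale can be done in plain NumPy by repeating each pixel along both spatial axes. A small sketch with a made-up 2×2 RGB image:

```python
import numpy as np

img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)  # tiny 2x2 RGB image
zoom = 4

# Nearest-neighbour upscaling by an integer factor: repeat each pixel
# 'zoom' times along the row and column axes, leaving the colour axis alone.
z_img = np.repeat(np.repeat(img, zoom, axis=0), zoom, axis=1)
# z_img.shape is (8, 8, 3)
```

For non-integer factors or higher interpolation orders (e.g. order=1 for linear), scipy.ndimage’s zoom remains the more flexible tool.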