Dynamic blurring/scrambling images

Hello

I’ve been trying to implement this without much success. I’d like to present image files and then either (1) add a blurring effect, or (2) scramble them. I would then like to reduce this effect over the course of the trial such that the stimuli begin extremely blurred/scrambled, and get increasingly clearer towards the end of the trial.

Is this possible in PsychoPy, and if so can somebody point me in the direction of how to achieve this? I do own the new book but can’t see how to achieve it.

I’m not sure you can do that in PsychoPy, but the “easy” solution would be to manually blur/scramble your images beforehand with an image editor (e.g. GIMP), and then present them in place of your original images in the program.

Ok. That would be way too much work, since I need the blurring to be continuous and to stop when a response is made. Maybe I could import an image processing library in Python and create a loop that reduces the blur on every frame?
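Something along these lines is what I have in mind. This is only a rough, untested sketch using Pillow’s GaussianBlur (the file path, duration and blur radius are placeholder values), and whether re-blurring and re-uploading the image on every frame can keep up with the refresh rate is exactly what I’m unsure about:

from psychopy import visual, core, event
from PIL import Image, ImageFilter

win = visual.Window([800, 600])
stim = visual.ImageStim(win)
original = Image.open("/path/to/image/file.png")

trialDuration = 5.0  # seconds over which the blur is reduced (made-up value)
maxBlur = 20.0       # starting blur radius in pixels (made-up value)

clock = core.Clock()
while clock.getTime() < trialDuration and not event.getKeys():
    t = clock.getTime()
    # blur radius shrinks linearly from maxBlur to 0 over the trial
    radius = maxBlur * (1.0 - t / trialDuration)
    stim.image = original.filter(ImageFilter.GaussianBlur(radius))  # re-blur and re-upload each frame
    stim.draw()
    win.flip()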

Hi, I am not sure about the ‘blurring’ effect, but I know you can manipulate images as a function of time. So for example, within a trial you can change an image’s orientation, position and/or opacity as a function of time. This is done using the lowercase variable ‘t’, which represents time, in seconds, since the start of the routine.

For example, if you have a simple image stimulus and you want it to move from the centre of the screen towards the right-hand side, then entering (t, 0) in the ‘position’ parameter will make the stimulus start at the centre of the screen and move to the right-hand edge (i.e., [1, 0]) in 1 second. Set the parameter to ‘set every frame’ for this to work (i.e., so it updates on every screen refresh). This is a very useful trick for creating dynamic stimuli, and almost every parameter can be manipulated this way.

Now, of course, this isn’t what you need specifically. But you can also change colours and other parameters as a function of time (‘t’). So if you can work out how to blur first, you can use ‘t’ to progressively blur images (or ‘unblur’ them) over the trial.
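To make that concrete, here is a rough, untested sketch of the same idea written out from the Coder side (the window size, image path and 1-second duration are just placeholders, and it assumes normalised units as in the example above):

from psychopy import visual, core

win = visual.Window([800, 600], units='norm')
stim = visual.ImageStim(win, image="/path/to/image/file.png")

clock = core.Clock()
while clock.getTime() < 1.0:
    t = clock.getTime()
    stim.pos = (t, 0)       # drifts from the centre to [1, 0] over 1 second
    stim.opacity = 1.0 - t  # another time-based parameter: fades out over the same second
    stim.draw()
    win.flip()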

I hope this sheds a little light on the problem.

Foyzul.

Hi Foyzul, thanks. I’m aware of the t variable and expected that I would need to make use of it. It’s just as you said: I need to figure out how to blur the image first.

Using an image processing library would be the easiest way to do this with the vanilla PsychoPy API (and the approach least likely to draw complaints from a reviewer, since the blur algorithm will be well defined). However, it might not be fast enough for real-time (per-frame) rendering without doing something clever, like caching a blurred copy of the image for each frame in video memory ahead of time. This can be done from the Coder side of things with some OpenGL calls, and you may be able to get Builder to run the same code from a code component. The example below shows the general approach; however, it’s up to you to make it work within your experiment.

Before we draw anything, we load the image, blur it at several levels using whatever library we like, and upload each blurred version to video memory.

from PIL import Image
import ctypes
import numpy as np
import pyglet.gl as GL
from psychopy import visual

# load image
imageFile = "/path/to/image/file.png"
im = Image.open(imageFile)
im = im.transpose(Image.FLIP_TOP_BOTTOM)
im = im.convert("RGBA")
pixelData = np.array(im)  # convert to numpy array

imageCache = []  # texture IDs for the blurred versions of a single image
blurLevels = [0.0, 1.0, ... ]  # blur levels to pre-compute, or whatever you need

# image stimulus, set up as usual
stim = visual.ImageStim(win, .....)

for blurLevel in blurLevels:
    # run whatever blur function here on a copy of the image array
    imageBlurred = blurFunction(pixelData.copy(), blurLevel)

    # load the image to video memory for fast access, this is from
    # https://discourse.psychopy.org/t/imagestim-texture-handle-swapping/906
    thisID = GL.GLuint()
    GL.glGenTextures(1, ctypes.byref(thisID))  # just creates the texture ID
    # create the actual texture from the blurred pixel data
    stim._createTexture(
        imageBlurred,
        id=thisID,
        stim=stim,
        pixFormat=GL.GL_RGBA,
        dataType=GL.GL_FLOAT,
        forcePOW2=False)
    imageCache.append(thisID)
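blurFunction above is just a placeholder for whatever returns a blurred copy of the pixel array. As one possible (untested) sketch, assuming scipy is available, you could use a Gaussian blur where blurLevel acts as the sigma in pixels:

from scipy.ndimage import gaussian_filter

def blurFunction(pixels, blurLevel):
    # blur only the two spatial axes; leave the colour/alpha channels untouched
    return gaussian_filter(pixels, sigma=(blurLevel, blurLevel, 0))

Depending on what _createTexture expects, you may also need to convert the blurred array to floats in the appropriate range.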

Within the main loop of your experiment, we tell the ImageStim to reference the image data already stored in video memory. The texture IDs were stored in the imageCache list, so on each frame we pick an index imageToUseThisFrame into that list, which gives the ID of a version of the image with a different amount of blur.

# change the image displayed by the ImageStim; this swap is very fast
texId = imageCache[imageToUseThisFrame]
stim._texID = texId
stim._needUpdate = True  # so the stimulus is updated during draw
stim.draw()
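How you compute imageToUseThisFrame is up to you. As one assumed example (t and trialDuration are placeholders for however you track time in your experiment), you could map elapsed time onto the cache so the image starts fully blurred and ends sharp:

# assuming imageCache is ordered from least to most blurred
nLevels = len(imageCache)
frac = min(t / trialDuration, 1.0)  # 0 at trial start, 1 at the end
imageToUseThisFrame = int(round((1.0 - frac) * (nLevels - 1)))  # most blurred first, sharpest last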

When done, call the following to free the video memory. You might also need to call it during your experiment if you are running out of video memory, which will require you to reload and blur the images between trials. All the IDs in imageCache are invalidated by this call.

for i in imageCache:
    GL.glDeleteTextures(1, i)

# clear the array since the IDs are now invalid and will cause errors if used
imageCache = []

Keep in mind video memory can quickly run out if you have lots of images.
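For a rough sense of scale (assuming the textures really are stored as 32-bit float RGBA, as the GL_FLOAT data type above suggests): a 1024 × 768 image is about 1024 × 768 × 4 channels × 4 bytes ≈ 12.6 MB per texture, so caching 60 blur levels of a single image already takes on the order of 750 MB of video memory.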