Selective dynamic unblur of an image based on mouse movements

Hi Folks,

I’m working on a new experiment and wondering if I can do a particular thing in PsychoPy.

I am effectively implementing an information board/process tracing study.

What I would like to do is present participants with a blurred image – but have a small area of the image that is not blurred – and this area moves around with the mouse – effectively revealing the unblurred image dynamically as the participant hovers over different elements of the image. I would then record mouse movements and be able to see the order and duration of the elements that participants examine.

I thought I might present two versions of the image overlaid – with the blurred image on top – and then use something like the aperture component to punch through the blurred image, revealing the underlying (in focus) image – but it would appear that the aperture element just clobbers all images present on the screen. Hmmmm… Is there any way of getting the aperture to only apply to ONE image – and ignore another?

Can anyone think of – in broad brushstrokes – a way to do this?

Thanks for any tips/ideas/pointers.

(I think the mouse.getPos() trick should be straightforward, and I have had a look at this thread – but it’s not ‘quite’ what I need to do: Johanna Knecht / dynamic_selective_inspect · GitLab

I also note that a similar question has been asked before, but not answered: Dynamically deblur a part of an image based on mouse location (a proxy for eyetracking)).

Thanks folks.


Ok. I have progressed a little with this - and now have the aperture following mouse movements, but I still cannot get one image (the underlying blurred image) to escape the effects of the aperture.

This thread seems to suggest that this is do-able - but I cannot work out how. It’s discussed here: Eye-Tracking and Gaze-Contingent Paradigms

And specifically: “you create two images (or text stimuli): one with the actual text, the other with the mask (e.g. lots of XXX). On every frame, you draw the XXX mask, then position the aperture stimulus to the current gaze position, and then draw the text stimulus through it so that it erases the underlying mask just at that location.”
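For what it’s worth, that recipe seems to translate to images fairly directly. Here’s a minimal sketch, assuming a reasonably recent PsychoPy and using ‘blurred.png’ and ‘sharp.png’ as placeholder filenames – the key points being that the Window needs allowStencil=True for Aperture to work at all, and that the aperture only clips stimuli drawn while it is enabled:

```python
# Sketch only: 'blurred.png' and 'sharp.png' are placeholder filenames,
# and this assumes a recent-ish PsychoPy with a display attached.
def main():
    from psychopy import visual, event

    # Aperture needs a stencil buffer, hence allowStencil=True
    win = visual.Window([800, 600], units='pix', allowStencil=True)
    blurred = visual.ImageStim(win, image='blurred.png')
    sharp = visual.ImageStim(win, image='sharp.png')
    aperture = visual.Aperture(win, size=150)  # circular, 150 px across
    mouse = event.Mouse(win=win)

    while not event.getKeys(['escape']):
        # 1. draw the blurred image everywhere, with clipping OFF
        aperture.enabled = False
        blurred.draw()
        # 2. move the aperture to the mouse, switch clipping ON, and
        #    draw the sharp image - it only shows through the aperture
        aperture.pos = mouse.getPos()
        aperture.enabled = True
        sharp.draw()
        win.flip()
    win.close()

# main()  # uncomment to run (needs PsychoPy and a display)
```

So the blurred image is drawn with clipping off, the sharp one with clipping on, and only the sharp image is affected by the aperture.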

Is this possibly to do with the whole ‘write an image to memory and then programmatically deploy it’ thing? I haven’t used this trick yet so could do with some pointers / documentation on it.
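On the ‘write an image to memory’ question: as far as I know, ImageStim accepts a NumPy array directly (greyscale values scaled to the range [-1, 1]), so an image can be built or manipulated programmatically without ever touching disk. A small sketch (the ImageStim line is left as a comment since it needs an open Window):

```python
import numpy as np

# Build a 256x256 greyscale gradient entirely in memory,
# scaled to the [-1, 1] range that ImageStim expects.
gradient = np.linspace(-1, 1, 256)   # one row: left-to-right ramp
arr = np.tile(gradient, (256, 1))    # stack rows into a square image

assert arr.shape == (256, 256)
assert arr.min() >= -1 and arr.max() <= 1

# With a PsychoPy window already open, the array can be used directly:
# stim = visual.ImageStim(win, image=arr, size=(256, 256), units='pix')
```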

Thanks folks.


Alternatively, an approach I could also envisage would be to overlay the base image with a series of shapes (or images) - which blur the underlying image, and then as the cursor enters each shape - render that object invisible - revealing the full details of the base image.
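A rough sketch of that overlay idea, assuming PsychoPy and a hypothetical base image file ‘base.png’: tile the image with opaque patches, and on every frame set a patch’s opacity to 0 while the mouse is inside it (ShapeStim-family stimuli have a contains() method that accepts a Mouse):

```python
# Sketch only: 'base.png' is a placeholder filename; assumes PsychoPy
# with a display attached.
def main():
    from psychopy import visual, event

    win = visual.Window([800, 600], units='pix')
    base = visual.ImageStim(win, image='base.png')
    # tile the screen area with opaque grey patches
    patches = [visual.Rect(win, width=100, height=100, pos=(x, y),
                           fillColor='grey', lineColor=None)
               for x in range(-350, 400, 100)
               for y in range(-250, 300, 100)]
    mouse = event.Mouse(win=win)

    while not event.getKeys(['escape']):
        base.draw()
        for patch in patches:
            # hide the patch while the cursor is inside it
            patch.opacity = 0.0 if patch.contains(mouse) else 1.0
            patch.draw()
        win.flip()
    win.close()

# main()  # uncomment to run (needs PsychoPy and a display)
```

Note this hides rather than blurs – for actual blurring, each patch would need to be an image of the pre-blurred version of that region, rather than a plain grey rectangle.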

But… this would require some form of blur or distortion from one shape to affect the base image. Is there a way this could be done? I have been playing around with generating gaussian noise images and then overlaying them on the base image with reduced opacity - but this isn’t quite working.
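For the blurring itself: as far as I know no PsychoPy stimulus blurs whatever is drawn beneath it in real time, so one workaround is to pre-compute a blurred copy of the image and display that as the top layer. A NumPy-only sketch of a separable box blur (a few passes approximate a Gaussian; in practice Pillow’s ImageFilter.GaussianBlur gives the same effect in one call):

```python
import numpy as np

def box_blur(img, k=9, passes=3):
    """Blur a 2-D greyscale array with a separable box filter.

    k must be odd; several passes approximate a Gaussian blur.
    """
    assert k % 2 == 1, "kernel size must be odd"
    kernel = np.ones(k) / k
    pad = k // 2
    out = img.astype(float)
    for _ in range(passes):
        for axis in (0, 1):  # blur rows, then columns
            out = np.apply_along_axis(
                lambda row: np.convolve(np.pad(row, pad, mode='edge'),
                                        kernel, mode='valid'),
                axis, out)
    return out

# A flat image is unchanged; a noisy one is smoothed (variance drops)
flat = np.full((32, 32), 0.5)
noisy = np.random.default_rng(0).random((32, 32))
assert np.allclose(box_blur(flat), 0.5)
assert box_blur(noisy).var() < noisy.var()
```

The blurred array (rescaled to [-1, 1]) can then be fed straight to an ImageStim as the blurred layer.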

Is there an object that can visually interact with the base image in such a way to blur it?


Thanks again for any thoughts on this.


Hi Dan! I’m the OP of the topic you linked here. I happened to find the prototype scripts that I wrote as a result of that discussion, so I’m leaving them here on the off chance they may help: (2.0 KB) (911 Bytes)

They’re old and messy scripts for what is now a 6-year-old (!) version of PsychoPy, but hopefully they are still helpful. Conceptually, your experiment seems very similar to the one I ended up building.

Hi @mario_psyl !!!

Thanks so much!!!

I’ll have a look at these this evening.