Mouse Contingent Window for Online Attention

This is a continuation of the previous thread. I am (temporarily, according to Discourse) unable to post more than 3 replies to a thread. So, I’ve started this one to post a further reply to Wakefield. Wakefield said on 11/27:

I now have a simple solution for you.

All you need is a noise png with a transparent aperture which you put in front of the images and move with the mouse. This also works if you have a solid noise image behind a translucent image, which might have the advantage of being able to vary the visibility of the aperture rather than being a hard line.

I could mock this up for you if you like.

Best wishes


Thanks so much @wakecarter Wakefield! Our research group (@karipayne) appreciates your help! I have a question for you. Do you think it would be possible to have two image versions, 1) the original, and 2) a low-pass filtered version of the original, rather than a random noise image? If so, that is what we’re really hoping to be able to do. We know how to do this in PsychoPy, and we know how to create a gaze-contingent mouse window using HTML and JS using the following code:
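Since the code itself didn't carry over into this new thread, here is a minimal sketch of the general approach (a masking overlay whose CSS mask is updated on every mousemove). The element names, sizes, and radius are illustrative only, not our actual code:

```javascript
// Build the CSS mask that makes the overlay transparent in a circle
// around (x, y); everything outside the circle stays opaque.
// Used as mask-image: alpha of the gradient controls visibility,
// so the transparent centre is the "aperture" onto the image beneath.
function apertureMask(x, y, radius) {
  return `radial-gradient(circle ${radius}px at ${x}px ${y}px, ` +
         `transparent 0, transparent ${radius}px, black ${radius}px)`;
}

// DOM wiring (browser only): `overlay` is a blurred or noise image
// positioned exactly on top of the target image.
function attachAperture(overlay, radius) {
  overlay.addEventListener('mousemove', (e) => {
    const rect = overlay.getBoundingClientRect();
    const mask = apertureMask(e.clientX - rect.left,
                              e.clientY - rect.top, radius);
    overlay.style.maskImage = mask;
    overlay.style.webkitMaskImage = mask;  // older WebKit browsers
  });
}
```

The overlay can be the low-pass filtered image rather than noise; the masking logic is the same either way.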

However, we do not know how to get a version from PsychoPy to Pavlovia using the automatic translation function.

Best wishes,


Please could you email me a copy of a minimal PsychoPy experiment that does what you want locally so I can think about whether I’d be able to make it work online? My email is

Unfortunately my simple solution doesn’t work because the mask moves with the aperture.

Have a look at and try noise type 7.

I’ve also added mouse contingency to noise type 1 but the processing power required is too great to have small pixels of noise.

I’m not sure how to get the JavaScript working in PsychoPy.

It seems to involve SVG (to render the display as vector graphics) and the feGaussianBlur filter primitive (to blur an image). It then overlays two images with masks, but I can’t quite interpret how. I think it’s one image, masked everywhere apart from the circle, in front of another image which has been blurred.
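If that interpretation is right, the trick can be boiled down to a blurred copy of the image underneath a sharp copy clipped to a circle. A hypothetical minimal version, here as a function that builds the SVG markup (the ids, sizes, and blur amount are made up for illustration, and may differ from what the page in question actually does):

```javascript
// Sketch of the suspected technique: one <image> blurred by an SVG
// feGaussianBlur filter fills the background, and a second, sharp
// <image> is clipped to a circle that can be moved with the mouse.
function blurredApertureSVG(href, width, height, cx, cy, r, sigma) {
  return `
<svg width="${width}" height="${height}" xmlns="http://www.w3.org/2000/svg">
  <defs>
    <filter id="lowpass">
      <feGaussianBlur stdDeviation="${sigma}"/>
    </filter>
    <clipPath id="aperture">
      <circle id="hole" cx="${cx}" cy="${cy}" r="${r}"/>
    </clipPath>
  </defs>
  <image href="${href}" width="${width}" height="${height}"
         filter="url(#lowpass)"/>
  <image href="${href}" width="${width}" height="${height}"
         clip-path="url(#aperture)"/>
</svg>`;
}

// In the browser, a mousemove handler would then move the aperture:
//   document.getElementById('hole').setAttribute('cx', mouseX);
//   document.getElementById('hole').setAttribute('cy', mouseY);
```

Because the blur is a real low-pass filter applied by the browser, this would give the filtered-background/sharp-aperture combination directly, with no noise image needed.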

Thanks, Wakefield! I’ll ask Kari (@karipayne) to send you her PsychoPy code so you can take a look at it.

The noise mask with an aperture is fascinating! When I leave the aperture at one location for a few seconds, and look at the surrounding noise, I perceive it as just noise (i.e., a great mask of the target). But if I move the aperture around quickly (i.e., rapidly jittering the aperture location), and look at the surrounding noise, it seems to be translucent, allowing me to see the entire underlying target image “through the noise.” I can’t tell whether that is a purely perceptual effect, due to perceptually segregating the percepts from the two images and perceiving them both simultaneously, or a software effect, in which the noise mask is actually more translucent for a brief moment after the aperture moves.

However, if one can overlay a noise image on top of the target image, it seems to me that one could instead overlay a low-pass filtered version of the target image on top of the target. If so, then that is what we’re shooting for. However, if the noise image is being regenerated on the fly, after each mouse movement, based on certain parameter values, and using the same random seed, then I can see that it would be different from doing the same thing with low-pass filtering (which would take more processing power).

Are you referring to noise type 1 or 7?

In noise type 1 I’m creating thousands of polygons (of random greyness and preset opacity) and setting their AutoDraw to False if they are too close to the mouse position. To do this with an image, the filtered image would need to be divided into a similar number of squares which could be placed and displayed separately. However, I don’t know how to do this automatically – I think this may be a case where np.array is needed which won’t work online.
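For what it’s worth, the bookkeeping for that squares idea can be written as a small helper: given the tile grid and the mouse position, return which tiles should stop drawing (AutoDraw = False, in PsychoPy terms). A sketch in JS, with arbitrary grid and radius values:

```javascript
// For an image divided into an nCols x nRows grid of square tiles,
// return the flat indices of tiles whose centres fall within `radius`
// pixels of the mouse - these are the ones to hide so the sharp
// image shows through the aperture.
function tilesToHide(nCols, nRows, tileSize, mouseX, mouseY, radius) {
  const hidden = [];
  for (let row = 0; row < nRows; row++) {
    for (let col = 0; col < nCols; col++) {
      const cx = (col + 0.5) * tileSize;  // tile centre
      const cy = (row + 0.5) * tileSize;
      const dx = cx - mouseX, dy = cy - mouseY;
      if (dx * dx + dy * dy <= radius * radius) {
        hidden.push(row * nCols + col);
      }
    }
  }
  return hidden;
}
```

On each frame, only tiles whose hidden/shown state actually changed would need updating, which might keep the per-frame cost manageable even with thousands of tiles.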

I’m not sure I’m seeing the visual effect you describe so that may be related to processing power.

Hi Wakefield,

I was referring to noise type 7, in which, as you mentioned, the noise moves with the aperture. In that case, if I jitter the aperture and look at the noise, rather than the aperture, I can see the target image behind the noise. Trying it again, and thinking about it a bit more, I think it is actually a perceptual phenomenon, in which the two luminance patterns, one from the target image, and one from the moving noise mask, become increasingly perceptually segregated by their decorrelation via movement of the noise mask, allowing increasingly better recognition of the target image over time.

I just now tried noise type 1. I see that, with that noise type, the noise does not move with the aperture, so there is no such perceptual segregation of the mask from the target image; thus, the mask is very effective. And I really like how you made the noise level easily modifiable, so one can see how it looks at different levels. I think that somewhere between 2048 and 4096 would provide both a) enough noise to encourage viewers to move the mouse to see objects of interest better, and b) enough peripheral resolution in the target image to allow the viewer’s attentional guidance system to intelligently choose where next to move the eyes, and subsequently the mouse-window.

Your suggestion that one could simply show the low-pass filtered image through the same matrix of squares sounds good. And your earlier suggestion that one might use concentric rings of decreasing opacity to eliminate the hard edge sounds good too. However, that leaves the issue of how to do it automatically.

Thanks again for your continued interest and efforts. As I’ve said, I know that there is a lot of interest among many eye movement labs around the world in using the mouse-window. We know that it is possible to create a system involving low-pass filtered images that will work on Pavlovia, as shown in this example. However, that uses a lot of custom coding in HTML. We also know that it is possible to create a system that will work on a generic webserver using HTML, CSS, or JS, as in this example.

But IF we can create a module within PsychoPy for the mouse-window, which can 1) be plugged into any PsychoPy experiment, and 2) be directly translated into workable JS in Pavlovia, THEN it will make this methodology available to a wide range of researchers who use both PsychoPy and Pavlovia. Indeed, it may bring more researchers over to PsychoPy and Pavlovia who have not used either before (e.g., my lab members, and several others I’ve spoken to). I think that will be valuable to a considerable proportion of the eye movement research community. (In saying that, I’m not assuming a facile equivalence between the results of the mouse-window and eyetracking; but I am assuming a reasonably close approximation of the latter by the former, given an acknowledgment of the obvious differences.) So, any effort put into cracking this nut is likely to be very valuable for a lot of researchers.

Hi @wakecarter Wakefield,

I just wanted to check in with you. I’ve started a working group of people interested in this problem (i.e., making a mouse-window for on-line studies, using blur if at all possible, also workable for videos, and also within PsychoPy + Pavlovia if possible). Currently it is me and several of my graduate students, plus a grad student in Engineering from U. Waterloo who has come up with his own online mouse-window for video using transparency, but would like to get it to work with blurring. We’ve set up a Slack Group to facilitate collaboration, as well as a Google Drive folder, and shared documents there. So, if you’re interested in joining our larger group on this project, let me know. I would foresee you as being our member with the closest ties to the PsychoPy + Pavlovia part of the problem space.

Best wishes,