Hi Wakefield,
I was referring to noise type 7, in which, as you mentioned, the noise moves with the aperture. In that case, if I jitter the aperture and attend to the noise rather than the aperture, I can see the target image behind the noise. Trying it again, and thinking about it a bit more, I believe it is actually a perceptual phenomenon: the two luminance patterns, one from the target image and one from the moving noise mask, become decorrelated by the mask’s movement and thus increasingly segregated perceptually, allowing progressively better recognition of the target image over time.
I just now tried noise type 1. That noise type does not move with the aperture, so there is no such perceptual segregation of the noise mask from the target image, and the mask is very effective. I also really like how you made the noise level easily modifiable, so one can see how it looks at different levels. I think that somewhere between 2048 and 4096 would provide both a) enough noise to encourage viewers to move the mouse to see objects of interest better, and b) enough peripheral resolution in the target image for the viewer’s attentional guidance system to intelligently choose where to move the eyes next, and subsequently the mouse-window.
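To make that level comparison easy to play with, here is a minimal sketch of how the noise density could be parameterized. I’m guessing at what the numbers mean: I’m assuming the level is the count of opaque squares in a 64×64 matrix (4096 cells in total), so the 2048–4096 range would correspond to 50–100% coverage; the function name and grid size are mine, not from your implementation:

```python
import numpy as np

# Hypothetical sketch: a square-matrix noise mask with an adjustable "level".
# Assumes a 64x64 grid (4096 cells), so level = 2048 makes half the squares
# opaque and level = 4096 gives full coverage. PsychoPy mask arrays run from
# -1 (fully transparent) to +1 (fully opaque).
def square_noise_mask(level=3072, grid=64, seed=None):
    rng = np.random.default_rng(seed)
    cells = grid * grid
    mask = -np.ones(cells)                       # start fully transparent
    opaque = rng.choice(cells, size=min(level, cells), replace=False)
    mask[opaque] = 1.0                           # make 'level' random squares opaque
    return mask.reshape(grid, grid)
```

The returned array could be assigned as the mask of a full-field stimulus drawn over the target image, and regenerated with level = 2048, 3072, 4096, etc. to compare the trade-off directly.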
Your suggestion that one could simply show the low-pass filtered image through the same matrix of squares sounds good. And your earlier suggestion of using concentric rings of decreasing opacity to eliminate the hard edge sounds good too. However, that leaves the question of how to generate those rings automatically.
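On the automation question: the rings could be generated numerically rather than drawn by hand. Below is a sketch (my own, not from the thread) using a raised-cosine fall-off, which is just the continuous limit of concentric rings of decreasing opacity:

```python
import numpy as np

# Sketch of an automatically generated soft-edged window: fully opaque out to
# fade_start (as a fraction of the radius), then a raised-cosine fade to fully
# transparent at the edge. Values follow PsychoPy's mask convention:
# +1 = opaque, -1 = transparent.
def soft_window_mask(size=256, fade_start=0.6):
    y, x = np.ogrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.sqrt(x**2 + y**2)
    alpha = np.ones((size, size))
    fade = (r > fade_start) & (r <= 1.0)
    alpha[fade] = 0.5 * (1 + np.cos(np.pi * (r[fade] - fade_start)
                                    / (1.0 - fade_start)))
    alpha[r > 1.0] = 0.0
    return alpha * 2.0 - 1.0   # rescale 0..1 -> -1..+1
```

That array can be assigned directly to a stimulus’s mask parameter, so the soft edge is produced automatically at whatever size and fall-off is wanted, with no hand-made rings.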
Thanks again for your continued interest and efforts. As I’ve said, I know that there is a lot of interest among many eye movement labs around the world in using the mouse-window. And we know that it is possible to create a system involving low-pass filtered images that will work on Pavlovia, as shown in this example https://run.pavlovia.org/syahn/web-mouseblur/html/. However, that uses a lot of custom coding in HTML. We also know that it is possible to create a system that will work on a generic webserver using HTML, CSS, and JS, as in this example https://stackoverflow.com/questions/22049177/how-to-reveal-part-of-blurred-image-where-mouse-is-hovered?noredirect=1&lq=1.

But IF we can create a module within PsychoPy for the mouse-window, which can 1) be plugged into any PsychoPy experiment, and 2) be directly translated into workable JS in Pavlovia, THEN this methodology will become available to the wide range of researchers who use both PsychoPy and Pavlovia. Indeed, it may bring to PsychoPy and Pavlovia researchers who have not used either before (e.g., my lab members, and several others I’ve spoken to). I think that will be valuable to a considerable proportion of the eye movement research community. (And, in saying that, I’m not assuming a facile equivalence between the results of the mouse-window and eyetracking; I am assuming a reasonably close approximation of the latter by the former, given acknowledgment of the obvious differences.) So, any effort put into cracking this nut is likely to be very valuable for a lot of researchers.
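For concreteness, here is a bare-bones Python sketch of the mouse-window loop such a module would wrap. The file names and sizes are placeholders, and I’m using visual.Aperture for brevity, which still has the hard edge discussed above and whose PsychoJS translation would need checking, so treat this as a skeleton rather than the module itself:

```python
from psychopy import visual, event, core

# Minimal mouse-window sketch. Assumes pre-made blurred and sharp versions of
# the same scene ('scene_blur.png' / 'scene_sharp.png' are placeholder names).
win = visual.Window(size=(1024, 768), units='pix',
                    allowStencil=True)   # stencil buffer is required by Aperture
mouse = event.Mouse(win=win)

blurred = visual.ImageStim(win, image='scene_blur.png', size=(1024, 768))
sharp = visual.ImageStim(win, image='scene_sharp.png', size=(1024, 768))
window = visual.Aperture(win, size=150, shape='circle')  # the mouse-window
window.enabled = False

while not event.getKeys(['escape']):
    blurred.draw()                # low-pass image everywhere
    window.pos = mouse.getPos()   # window follows the mouse
    window.enabled = True
    sharp.draw()                  # full-resolution image inside the window only
    window.enabled = False
    win.flip()

win.close()
core.quit()
```

Swapping the Aperture for a stimulus carrying the soft-edge mask sketched above would remove the hard edge, and might also sidestep the question of whether Aperture translates cleanly to JS.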