I have been working on implementing custom code in a PsychoPy experiment to measure attention to a picture with mouse movements: specifically, images are blurred, with a clear circular region of high resolution that moves with the subject's mouse position. I have this working in PsychoPy, but not in a way that is usable on Pavlovia. I am looking for a solution because mouse-based attention tracking in online studies is in high demand in visual cognition labs: the pandemic has interrupted in-person eyetracking data collection, and online webcam-based eyetracking is not yet sufficiently accurate for many researchers' (e.g., ours and several colleagues') purposes. A PsychoPy-based solution would make this methodology available as a module that psychology researchers without coding backgrounds could use in their own studies.
The current implementation runs on a frame-by-frame basis using the Python Imaging Library (PIL): it opens each image as an RGB array, draws a circle at the mouse location, saves that as an alpha layer, and adds the alpha layer to the RGB array before saving it back as an image. The result is an image with high-resolution content only inside the circle and a transparent surround, which is then placed on top of a blurred version of the image in the PsychoPy Builder view. Mouse location is saved to the output file. I can include the full code if anyone would like to take a look at it.
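In the meantime, here is a minimal sketch of the core logic (simplified from the actual experiment: for brevity it composites the sharp patch straight onto a blurred copy rather than producing a separate transparent overlay, and the path, radius, and blur amount are placeholder values):

```python
# Minimal sketch of the mouse-window idea, not the experiment's exact code.
from PIL import Image, ImageDraw, ImageFilter

def windowed_image(path, mouse_xy, radius=60, blur=8):
    sharp = Image.open(path).convert("RGB")
    blurred = sharp.filter(ImageFilter.GaussianBlur(blur))
    # Build an alpha mask: opaque (255) inside the circle, transparent outside.
    mask = Image.new("L", sharp.size, 0)
    draw = ImageDraw.Draw(mask)
    x, y = mouse_xy  # mouse position in image pixel coordinates
    draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill=255)
    # Paste the sharp content into the blurred image through the mask.
    out = blurred.copy()
    out.paste(sharp, (0, 0), mask)
    return out
```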
This works in PsychoPy, but because it calls Python libraries outside of PsychoPy, it does not work on Pavlovia. I have yet to find a JavaScript equivalent that can match this idea in custom code that Pavlovia would accept. Any insight in this area would be immensely valuable. I have found JavaScript code that blurs images in real time, examples here and here, but it works on HTML images, not on ImageStim objects drawn on the PsychoJS window. Is there any feasible way to get such code working on those objects, within JS custom code in a PsychoPy experiment? It would be additionally impactful if the solution could also work with video stimuli, which is a future implementation we are working towards, and something I know a lot of labs are interested in too. Thank you in advance for your time and assistance!
Sincerely,
Kari Payne
Kansas State University
Les Loschky’s Visual Cognition Lab
This is a really cool idea, and I can see the application for simulating search in visual field loss, so thank you for putting the time in to develop this. We actually discussed this in a meeting the other day (likely inspired by yourselves!), as it is quite similar to https://gitlab.pavlovia.org/demos/dynamic_selective_inspect but requires the Aperture component to be implemented in PsychoJS. This particular translation is beyond my JS capabilities, but I wanted to assure you that it is on the radar of the JS members of our team, @thomas_pronk and @sotiri, and I am tagging them here so they have reference to the information you have already shared.
I look forward to following updates on this,
Becca
I'm currently working on something similar, but the only way I could think of to degrade the image was to cover it with thousands of dots with random positions and grey levels (currently at .9 opacity). I could fairly easily suppress dots within a given distance from the mouse pointer.
Basically, I don't know how to read the grey level at a given position in the image in order to average it with the surrounding pixels.
My current method has a minimum dot size of 3 pixels (relative to a 512x512 pixel photo); below that, it falls over when trying to create enough polygons to totally cover the image.
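Roughly, the logic I have in mind, sketched in Python (all the numbers are arbitrary, and the mouse position would come from a Mouse component each frame):

```python
# Rough sketch of the dots approach: scatter grey dots over the image and
# hide those near the mouse each frame. All values are arbitrary choices.
import numpy as np

n_dots = 5000
rng = np.random.default_rng()
xys = rng.uniform(-256, 256, size=(n_dots, 2))  # positions over a 512x512 image, pix units
greys = rng.uniform(-1, 1, size=n_dots)         # grey levels in PsychoPy's -1..1 colour range
aperture_radius = 60                            # pixels around the mouse to leave clear

def dot_opacities(mouse_xy, base_opacity=0.9):
    """Per-dot opacity for this frame: 0 inside the aperture, 0.9 elsewhere."""
    dists = np.linalg.norm(xys - np.asarray(mouse_xy), axis=1)
    return np.where(dists < aperture_radius, 0.0, base_opacity)
```

These opacities could be fed to an ElementArrayStim's opacities each frame, which should cope with far more elements than thousands of individual polygon components.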
I've just had an idea (though this might not work for the aperture): perhaps I can create blur by simply overlaying multiple partially transparent copies of the image, each with a slightly jittered position.
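Untested, but something along these lines (the offsets, layer count, and image file are guesses, and alpha compositing only approximates a true average of the layers, so it would need checking by eye):

```python
# Standalone demo of the jittered-overlay blur idea; inside a Builder code
# component you would use the existing win rather than creating one.
from psychopy import visual

win = visual.Window(size=(800, 600), units="pix")
offsets = [(dx, dy) for dx in (-2, 0, 2) for dy in (-2, 0, 2)]  # 9 jittered positions
layers = [
    visual.ImageStim(win, image="face.png",  # placeholder image file
                     pos=off, opacity=1.0 / len(offsets))
    for off in offsets
]
for stim in layers:
    stim.draw()
win.flip()
```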
Becca, thank you for your input! I am looking forward to seeing the thoughts of those with more JS fluency than myself!
Wakecarter, these are both interesting ideas, thank you! Were you able to implement the dots method in JS custom code, or in some other form that Pavlovia would accept? I would also be interested to see how user performance differs between a Gaussian-blurred periphery and one with a multiple-dot filter as you described. I know that performance differs between low-resolution and no-resolution peripheries (like that in the dynamic_selective_inspect demo Becca linked), but as far as blurring methods go, I'd have to check whether any studies compare task performance across blur types against performance using one's actual visual periphery. I will see what I can find on this, as I would like the blur here to be as similar to peripheral vision as we can get (while still being possible to implement online!).
Hi everybody! Thanks for your suggestions! I was wondering if @thomas_pronk or @sotiri have any suggestions for us; Jonathan mentioned that you could probably be of help, and we would greatly appreciate it. I think a successful implementation of the 'mouse-window' as a PsychoPy experiment module, usable in any number of PsychoPy experiments and easily ported to Pavlovia, will be of great interest to anyone looking for a rough approximation of eye movement measurement in web-based experiments. (That is on the assumption, which I think is true at present, that webcam-based eyetracking is not yet ready for many of the uses eyetracking researchers need.) Based on conversations I've had with a number of colleagues over the last several months of the pandemic, I believe that describes a large number of researchers. So, all of that is to say: I think this is a high-value nut to crack.
Thanks for your suggestions! Those are very helpful. The idea of overlapping dots, or partially transparent images, would absolutely create degradation of the image, and that may be enough for our purposes. Ideally, however, we would like a method that varies only the image resolution, for example using low-pass filtering, without reducing the global luminance minima and maxima. Such methods have been used extensively in studies of gaze-contingent multi-resolution displays. The challenge is to implement them in a way that can be ported from PsychoPy to Pavlovia. Interestingly, we can implement them in Python, and we can implement them in JavaScript; making code in PsychoPy that can be translated to JS on Pavlovia is our current stumbling block. @sotiri @thomas_pronk
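To illustrate what I mean, here is a rough Python sketch (the linear rescaling is just one way of restoring the global range, not an established method, and the greyscale conversion and sigma are arbitrary simplifications):

```python
# Low-pass filter an image, then stretch the result back to the original
# global luminance min/max so (approximately) only resolution is reduced.
import numpy as np
from PIL import Image, ImageFilter

def lowpass_keep_range(path, sigma=6):
    sharp = Image.open(path).convert("L")  # greyscale for simplicity
    orig = np.asarray(sharp, dtype=float)
    blurred = np.asarray(
        sharp.filter(ImageFilter.GaussianBlur(sigma)), dtype=float
    )
    # Blurring pulls pixel values towards the mean, compressing the range;
    # linearly stretch it back to the original global min/max.
    lo, hi = orig.min(), orig.max()
    blo, bhi = blurred.min(), blurred.max()
    out = (blurred - blo) / max(bhi - blo, 1e-9) * (hi - lo) + lo
    return Image.fromarray(out.astype(np.uint8))
```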
All you need is a noise PNG with a transparent aperture, which you put in front of the images and move with the mouse. This also works with a solid noise image behind a translucent image, which might have the advantage of letting you vary the visibility of the aperture rather than having a hard edge.
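Generating that overlay is a one-off, e.g. (sizes are arbitrary; in practice the noise image needs to be larger than the photo so its edges stay covered as the overlay moves):

```python
# One-off generation of a greyscale-noise PNG with a transparent circular
# hole in the middle, to be moved with the mouse over the target picture.
import numpy as np
from PIL import Image

size, radius = 1024, 60
noise = np.random.randint(0, 256, (size, size), dtype=np.uint8)
yy, xx = np.mgrid[0:size, 0:size]
dist = np.hypot(xx - size / 2, yy - size / 2)
alpha = np.where(dist < radius, 0, 255).astype(np.uint8)  # hole at the centre
rgba = np.dstack([noise, noise, noise, alpha])
Image.fromarray(rgba, mode="RGBA").save("noise_aperture.png")
```

For a soft edge rather than a hard line, the alpha could ramp with distance from the centre instead of switching abruptly at the radius.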