I am attempting to create an element array where the individual elements are oriented ‘compass needle’ shaped bars. I was hoping this would be possible by passing a ShapeStim with the desired custom vertices to an ‘element’ keyword of the ElementArrayStim, but no such keyword seems to exist in the source, and passing the ShapeStim object to ‘elementMask’ doesn’t work either.
Is there something I’m missing, or is it not possible to create custom elements for the ElementArrayStim…? I understand that an image file can be used, but this isn’t ideal. It seems intuitive that a set of vertices should be acceptable as the definition of an element’s shape.
Here’s my code for a fairly custom polygon from Pavlovia (a simplified sketch of the idea follows below). I use colour names as variables so that the code works offline as well as online, so here grey = [-.1, -.1, -.1] in Python and grey = new util.Color([(- 0.1), (- 0.1), (- 0.1)]); in JS.
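For illustration, something along these lines (the vertices here are placeholders, not the actual stimulus):
from psychopy import visual

win = visual.Window(color="black", units="height")

# colour name as a variable, so the same value is usable offline and online
grey = [-.1, -.1, -.1]

# a custom polygon defined by its own vertices (placeholder shape)
poly = visual.ShapeStim(
    win,
    vertices=[(-0.05, -0.2), (0.05, -0.2), (0.02, 0.2), (-0.02, 0.2)],
    fillColor=grey,
    lineColor=grey,
)
poly.draw()
win.flip()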
If I follow, this is just creating your own ‘array’ of individual ShapeStims, but I was under the impression that the purpose and benefit of the ElementArrayStim was the speed with which it can produce these arrays. I need to rapidly update the orientation of each element in the array with precise timing - so I’m not sure whether this would become an issue with the method you’ve suggested?
Honestly, I don’t know much (anything) about the ElementArrayStim since I tend to write experiments which also work online. The main thing I do is update parameters of existing objects rather than create new objects when I need a change, because object creation takes most of the time (see the sketch below). How many elements do you need to change?
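As a rough illustration of updating rather than recreating (the shape and loop here are placeholders):
from psychopy import visual

win = visual.Window(color="black", units="height")

# create the object once, outside the frame loop
bar = visual.Rect(win, width=0.02, height=0.2, fillColor="white")

for frameN in range(120):
    # updating an attribute of an existing object is cheap...
    bar.ori += 3
    bar.draw()
    win.flip()
# ...whereas constructing a new visual.Rect on every frame is where
# most of the time would go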
~120-180, with some additional intermittent stimuli (although those would be separate from the bilateral arrays). The ideal target ISI would then allow 50 ms for any updating (position from anchor, and orientation) between presentations.
I suppose I could just use opacity to hide the arrays rather than redrawing them, so any lag would mostly occur for the first presentation only… unless another method exists short of modifying the source code for the ElementArrayStim.
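i.e. something like this (assuming an existing ElementArrayStim called arr):
arr.opacities = 0  # hide the whole array without destroying it
# ...later, when the next presentation is due:
arr.opacities = 1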
Regardless, providing custom vertices for array elements seems like a feature one would expect to be included. There is also the possibility of using gratings for the elements in ElementArrayStim, which could give the impression of an orientation; it just wouldn’t be consistent with our other experiments.
The problem is that ElementArray doesn’t accept a visual stimulus as an input - the reason it’s faster than just drawing hundreds of the same stimulus is that it accepts raw textures/masks instead of a full PsychoPy stimulus. You can use a stimulus to create a mask like so:
from psychopy import visual
import numpy as np
# Create a window whose size is the same size you want the texture to be (let's say 100x100 pix for example) with a black background
texWin = visual.Window(size=(100, 100), color="black")
# Create your stimulus
comp = visual.ShapeStim(
    win=texWin,
    fillColor="white",
    # [[STIMULUS PARAMS HERE]]
)
# Draw your stimulus to the window and flip it to the front
comp.draw()
texWin.flip()
# Screenshot the window to get it as a PIL image
img = texWin.screenshot
# Convert it to a numpy array and flip upside down (texture y coordinates are inverse to window y coordinates)
tex = np.flipud(np.array(img))
# Close the texture window so it doesn't interfere with your experiment
texWin.close()
You can then use elementTex=tex, elementMask=tex in your ElementArray to make an array of that shape in whatever colors are specified by colors. It’ll mean a window briefly pops up on each run though; if this is a problem, you could instead do all this in a separate script and, rather than converting img to a numpy array, save it as a .png like so:
img.save("[[SOME PATH]]/tex.png")
and use the filename in your ElementArray rather than the numpy array directly.
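Putting that together, a minimal sketch of the array itself might look like this (the element count, positions and sizes are placeholders, and tex is the numpy array from the snippet above):
from psychopy import visual
import numpy as np

win = visual.Window(size=(800, 600), color="black", units="pix")

n = 150  # placeholder element count
xys = np.random.uniform(-250, 250, size=(n, 2))  # placeholder positions
oris = np.random.uniform(0, 360, size=n)         # placeholder orientations

arr = visual.ElementArrayStim(
    win,
    nElements=n,
    xys=xys,
    oris=oris,
    sizes=50,
    colors="white",
    elementTex=tex,   # the screenshot texture (or the saved .png filename)
    elementMask=tex,
)

arr.draw()
win.flip()

# per-element parameters can then be updated cheaply between flips
arr.oris = arr.oris + 5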
Thanks for your response - I can see how this would function as a workaround. I suppose I’m more interested in why there isn’t a built-in preset that accepts vertices for custom elements, or rather whether I could adapt the source code to add one.
Looking into the source code a bit (psychopy.visual.basevisual), it seems that you can pass an NxN numpy array to the mask:
def mask(self, value):
    """The alpha mask (forming the shape of the image).

    This can be one of various options:
        * 'circle', 'gauss', 'raisedCos', 'cross'
        * **None** (resets to default)
        * the name of an image file (most formats supported)
        * a numpy array (1xN or NxN) ranging -1:1
    """
so I suppose the question becomes how to convert a set of vertices (ranging 0:1) into a closed, filled shape (as ShapeStim does), and from there into an NxN numpy array of -1’s and 1’s, where N corresponds to the number of pixels in the element at its given size. The available presets (found in psychopy.tools.arraytools.createLumPattern) generate these NxN arrays directly, for the most part using np.mgrid - but how to reproduce an arbitrary shape that way is a bit above my level of understanding (otherwise I would just add a new preset for the desired shape and call it a day - though the flexibility of vertices is probably more desirable anyway).
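For reference, one possible way to rasterise vertices into such an array, sketched here with matplotlib’s Path (the helper name and the -1:1 coordinate convention are my own assumptions, not anything built into PsychoPy):
import numpy as np
from matplotlib.path import Path

def verticesToMask(vertices, res=128):
    # grid of pixel-centre coordinates spanning -1:1
    xs, ys = np.meshgrid(np.linspace(-1, 1, res), np.linspace(-1, 1, res))
    points = np.column_stack([xs.ravel(), ys.ravel()])
    # True for grid points falling inside the polygon
    inside = Path(vertices).contains_points(points)
    # -1 outside, 1 inside, reshaped to an NxN mask
    return np.where(inside, 1.0, -1.0).reshape(res, res)

# e.g. a thin ‘compass needle’ bar as a quadrilateral
needle = [(-0.1, -1), (0.1, -1), (0.1, 1), (-0.1, 1)]
tex = verticesToMask(needle)
# tex could then be passed as elementMask (and elementTex) to ElementArrayStim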
I’ll employ your workaround for now and then see if I can solve the problem above without the need for screenshots or img conversions.
It’s feasible that we could add something like that - what happens with named shapes like cross etc. is that we use some numpy functions to create gratings, then boolean-ise them into shapes with defined borders. What we could do instead is use vertices as indices into boolean arrays, or even use OpenGL to write directly to an array rather than to a window. But this would be a fairly significant time investment for a relatively niche use case with a fairly good workaround already available, so it’s something we’d need to discuss when planning the next major release.