
Clicking on an element in ElementArrayStim? Implementing a “contains” function for ElementArrayStim

Is there a “contains()” function or some equivalent for shapes in an ElementArrayStim? I ask because I’m looking for an efficient implementation to check if one of many circles in an ElementArrayStim has been clicked without having to set up a lookup table of possible areas of the screen to check for.
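For circles specifically, a vectorized distance test can sidestep polygon checks altogether: compare the click position against all element centres in one NumPy operation. A minimal sketch (the names `clicked_circle`, `xys`, `radii` and `click` are illustrative, and the click is assumed to be in the same units as the element positions):

```python
import numpy as np

def clicked_circle(xys, radii, click):
    """Return the index of the first circle containing the click, or None.

    xys:   (N, 2) array of circle centres
    radii: scalar or (N,) array of radii, same units as xys
    click: (x, y) click position, same units as xys
    """
    offsets = np.asarray(xys, dtype=float) - np.asarray(click, dtype=float)
    dists = np.hypot(offsets[:, 0], offsets[:, 1])  # distance to every centre
    hits = np.flatnonzero(dists <= radii)
    return int(hits[0]) if hits.size else None

xys = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
print(clicked_circle(xys, 20.0, (95.0, 5.0)))  # 1 (the circle at (100, 0))
```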

I should also mention that I’m trying to do this with a customMouse object, which I learned does not have an “isPressedIn” function…
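For what it’s worth, `isPressedIn` is essentially just a button-state check combined with a `contains` test, so it can be emulated with a small helper. A sketch, using stub objects in place of a real stimulus and CustomMouse for illustration:

```python
def is_pressed_in(stim, mouse, buttons=(0, 1, 2)):
    """Emulate Mouse.isPressedIn for a mouse class that lacks it:
    True when any of the given buttons is down and the pointer is in stim."""
    pressed = mouse.getPressed()
    return any(pressed[b] for b in buttons) and stim.contains(mouse.getPos())

# Stub objects standing in for a real stimulus and CustomMouse:
class DemoStim:
    def contains(self, pos):
        x, y = pos
        return abs(x) < 50 and abs(y) < 50  # a 100 x 100 box at the origin

class DemoMouse:
    def getPressed(self):
        return [1, 0, 0]  # left button down

    def getPos(self):
        return (10.0, -10.0)

print(is_pressed_in(DemoStim(), DemoMouse()))  # True
```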

I think I figured it out! Can anyone think of a faster way to do this? I don’t like having to iterate over the array in the final step…

from psychopy import visual, monitors, core, event, data, gui, misc, logging
from psychopy.tools.monitorunittools import convertToPix
from psychopy.visual.helpers import pointInPolygon
from random import shuffle
import os, time, math, random
import numpy as np

try:
	import matplotlib
	if matplotlib.__version__ > '1.2':
		from matplotlib.path import Path as mplPath
	else:
		from matplotlib import nxutils  # removed in matplotlib >= 1.3
	haveMatplotlib = True
except Exception:
	haveMatplotlib = False

## Porting psychopy's contains function to elementArrayStim
def contains(thisElementArrayStim, x, y=None, units=None):
	"""Returns True if a point x,y is inside the stimulus' border.

	Can accept variety of input options:
		+ two separate args, x and y
		+ one arg (list, tuple or array) containing two vals (x,y)
		+ an object with a getPos() method that returns x,y, such
			as a :class:`~psychopy.event.Mouse`.

	Returns `True` if the point is within the area defined either by its
	`border` attribute (if one defined), or its `vertices` attribute if
	there is no .border. This method handles
	complex shapes, including concavities and self-crossings.

	Note that, if your stimulus uses a mask (such as a Gaussian) then
	this is not accounted for by the `contains` method; the extent of the
	stimulus is determined purely by the size, position (pos), and
	orientation (ori) settings (and by the vertices for shape stimuli).

	See Coder demos:
	"""
	# get the object in pixels
	if hasattr(x, 'border'):
		xy = x._borderPix  # access only once - this is a property
		units = 'pix'  # we can forget about the units
	elif hasattr(x, 'verticesPix'):
		# access only once - this is a property (slower to access)
		xy = x.verticesPix
		units = 'pix'  # we can forget about the units
	elif hasattr(x, 'getPos'):
		xy = x.getPos()
		units = x.units
	elif type(x) in [list, tuple, np.ndarray]:
		xy = np.array(x)
	else:
		xy = np.array((x, y))
	# try to work out what units x,y has
	if units is None:
		if hasattr(xy, 'units'):
			units = xy.units
		else:
			units = thisElementArrayStim.units
	if units != 'pix':
		xy = convertToPix(xy, pos=(0, 0), units=units,
			win=thisElementArrayStim.win)
	# ourself in pixels
	if hasattr(thisElementArrayStim, 'border'):
		poly = thisElementArrayStim._borderPix  # e.g., outline vertices
	else:
		poly = thisElementArrayStim.verticesPix[:, :, 0:2]  # e.g., tesselated vertices

	# NB: theseTargetIndices (the subset of element polygons to test) must be
	# defined by the caller
	return any(np.fromiter((pointInPolygon(xy[0], xy[1], thisPoly)
		for thisPoly in poly[theseTargetIndices]), bool))
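For reference, the point-in-polygon test used above is a standard ray-casting algorithm; a dependency-free sketch of the same idea (the function name is illustrative, not PsychoPy’s actual implementation):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a sequence of (x, y)
    vertices. Handles concave polygons; points exactly on an edge are
    borderline cases."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # count crossings of a horizontal ray from (x, y) with each edge
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

square = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True
print(point_in_polygon(2.0, 0.0, square))  # False
```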

To me (and someone else might know better) this looks about as good as one can get. The issue of iterating is exactly why I’d never implemented such a thing. But maybe you could time it and see how long the calculation takes for a few array sizes (like 10, 100, 1000). If it isn’t awful maybe we should add your code to the lib :)
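A rough way to run the suggested timing over a few array sizes: this sketch compares a per-element Python loop against a single vectorized NumPy test, using a simple circle hit-test as the per-element check (all names, sizes and units are illustrative):

```python
import timeit
import numpy as np

for n in (10, 100, 1000):
    xys = np.random.uniform(-400, 400, size=(n, 2))  # n element positions
    click = np.array([0.0, 0.0])
    radius = 25.0

    # option 1: loop over elements, testing one at a time
    loop = timeit.timeit(
        lambda: any(np.hypot(x - click[0], y - click[1]) < radius
                    for x, y in xys),
        number=200)

    # option 2: one vectorized test over the whole array
    vec = timeit.timeit(
        lambda: bool((np.hypot(*(xys - click).T) < radius).any()),
        number=200)

    print(f"n={n:5d}  loop: {loop:.4f}s  vectorized: {vec:.4f}s")
```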

I personally prefer this because I like to separate the detection of presses from the detection of locations (e.g. the code to detect hovers and clicks then uses the same functions).
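To illustrate that separation: one location-lookup function can serve both hover feedback and click handling. A hypothetical sketch (`element_under` and `on_mouse` are made-up names, using a vectorized circle test):

```python
import numpy as np

def element_under(xys, radii, pos):
    """Index of the element whose circle contains pos, or None (vectorized)."""
    offsets = np.asarray(xys, dtype=float) - np.asarray(pos, dtype=float)
    dists = np.hypot(offsets[:, 0], offsets[:, 1])
    hits = np.flatnonzero(dists <= radii)
    return int(hits[0]) if hits.size else None

def on_mouse(xys, radii, pos, pressed):
    """One location lookup drives both hover feedback and click handling."""
    idx = element_under(xys, radii, pos)
    if idx is None:
        return None, None
    return ('click', idx) if pressed else ('hover', idx)

xys = [(0.0, 0.0), (100.0, 0.0)]
print(on_mouse(xys, 20.0, (98.0, 4.0), pressed=False))  # ('hover', 1)
print(on_mouse(xys, 20.0, (98.0, 4.0), pressed=True))   # ('click', 1)
```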
