
Get RGB (or HSV) of pixel given x,y coordinates

I have an experiment where subjects identify the color they see by clicking with a custom mouse on a palette of colors (a visual.RadialStim, a color wheel). It would be good if I could directly get the RGB or HSV (or whatever) of the pixel that the subject clicks on. I couldn’t find anything in PsychoPy, but about 10+ different solutions to this problem using other packages. Given x,y coordinates, what is an easy, PsychoPy-compatible way to get the color at that pixel?

Thanks a whole bunch,
Bill P

I think it’s easier to work out what colour you intended at that location than to decode what colour was actually drawn there. That said, your way might be possible using a hidden function, as below:

pixels = win._getRegionOfFrame(rect=(-1, 1, 1, -1), buffer='front')


  • rect needs to be in “norm” units, whereas the mouse coords will be in the window’s units (so they need converting)
  • pixels will be an N×M×4 (RGBA) array, and I don’t know what range of values will be in there, so you’ll have to play with that
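For what it’s worth, that units conversion can be sketched as a small standalone helper (the function name is mine; it assumes the mouse is in “pix” units, with the window centre at (0, 0)):

```python
# Hypothetical helper: convert a mouse position in 'pix' units to 'norm'
# units, where norm runs from -1 to +1 across the window in each dimension.
def pix_to_norm(pos, win_size):
    return (2.0 * pos[0] / win_size[0], 2.0 * pos[1] / win_size[1])

# e.g. the right edge of a 1920x1080 window:
# pix_to_norm((960, 0), (1920, 1080)) -> (1.0, 0.0)
```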

Jon and All,
I wish I could figure out how to get the color of a pixel. What I have is a color palette that subjects respond to by clicking on a spot. It’s written with Builder, but with a lot of code segments. The palette is made with a gradient, so I know where the subject clicked (reported as degrees on a circle) but not the exact color.
You suggested putting in:
pixels = win._getRegionOfFrame(rect=(-1, 1, 1, -1), buffer='front')
I don’t know how to read pixels, so I put print(pixels) and got:
<PIL.Image.Image image mode=RGBA size=1920x1080 at 0x11858AF
After trying several things I looked up PIL. A few commands from there, like rgb = pixels.getpixel(xy), give something reasonable, but alas they always give the color of the screen background, not the color of the pixel I clicked. In the attached, the screen is gray (0,0,0) but I overlaid it with a pink rectangle. It still prints out 128, 128, 128, 255 (the last value being alpha), which is what you’d expect for the gray background, not the pink!
The parts that have something to do with getting pixel color are marked “Jon Peirce” (two spots). Sorry, all, for my liberal mixing of Builder and Coder; it makes it hard to find the critical parts. Either look at the code and search for Jon Peirce, or look in Builder at trial > convert2DegCode (code component) > End Routine.

Hi Bill, going back to Jon’s earlier comment, if you created the palette, don’t you already know what the colour is supposed to be at the point where the person clicked? i.e. the colour has been calculated for a given location, so given that location, you can calculate what the colour value must be?
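To illustrate that route: if the wheel maps click angle directly to hue, the clicked colour can be computed rather than sampled from the screen. A minimal sketch (the 0–360° → full hue circle mapping is an assumption about how the palette was generated; adjust to match the actual gradient):

```python
import colorsys

def angle_to_rgb(angle_deg):
    # Assumes the colour wheel maps 0-360 degrees onto the full hue circle
    # at full saturation and value.
    hue = (angle_deg % 360) / 360.0
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # floats in the range 0-1

# e.g. angle_to_rgb(0)   -> (1.0, 0.0, 0.0)  red
#      angle_to_rgb(120) -> (0.0, 1.0, 0.0)  green
```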

Otherwise, as you don’t give the code you’re actually using, it is hard to give much help. But what you are describing (getting a grey value) sounds like you might be sampling from the wrong buffer (depending on when this code runs). Have you tried sampling from the back buffer instead?

Have you solved this problem? Could you share your program with me if you have solved it? Thanks!

@Ice_Lee, I have a solution that takes advantage of some other win functions. For now, I am not sure why _getRegionOfFrame only returns the initial state of the window, without any stimulus drawn. However, there is another function, used for saving frames, called _getFrame, and this can be used instead. You will need a mouse component (called mouse) and a shape stim.

First, some things to bear in mind. You need to set your experiment units to pixels, so that the mouse and window return pixel values that align with your numpy pixel array. Also, unless you reshape the array, remember that your x,y mouse positions need to be reversed when indexing the array (it is indexed [row, column], i.e. [y, x]). Finally, the mouse treats the centre of the window as zero, whereas the top left of the image is the zero point for the array, so you need to account for that when indexing (this is done using win.size in the example below).
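That index arithmetic can be pulled out into a standalone helper (the name is mine; it assumes the frame array has row 0 at the top of the window and that the mouse is in pix units with y increasing upward):

```python
def mouse_to_index(mouse_pos, win_size):
    # mouse_pos: (x, y) in pix units, window centre = (0, 0), y increases upward
    # win_size:  (width, height) in pixels
    w, h = win_size
    col = int(mouse_pos[0] + w / 2)   # shift x so column 0 is the left edge
    row = int(h / 2 - mouse_pos[1])   # flip y, since row 0 is the top of the frame
    return row, col

# e.g. mouse at the centre of a 1920x1080 window:
# mouse_to_index((0, 0), (1920, 1080)) -> (540, 960)
```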

In a code component, add the following to the “Each Frame” tab:

mousePos = tuple(mouse.getPos())  # Get mouse coordinates (pix units, centre = 0,0)
pixels = win._getFrame()          # Gets image of window
pixels = np.array(pixels)         # Convert image to numpy array; it is indexed [row, col], i.e. [y, x]
x, y = win.size                   # Size of win, for resetting the mouse x,y zero point
rgb = pixels[int(y/2 - mousePos[1]), int(mousePos[0] + x/2)]  # Gets pixel at mouse position; y is flipped because row 0 is the top of the window

Hi There,

Sorry to resurrect this - but I’m trying to do something similar using these commands. I’m trying to make sure that the total luminance of my window and stimuli are not varying as I randomise the phase of a blurred grating. So I just need the RGB values of all pixels in the frame, not specific x,y coords.

When I use the above code, I only get the RGB values of the grey background (128, 128, 128), with no values for my grating. I’ve attached some simplified code below which is giving me the same issue. Would you be happy to let me know if I’m doing something obviously wrong here?

Edit: I’m on psychopy 3.0.7

from psychopy import visual, core, data, event, gui, monitors, sound

import numpy as np

win = visual.Window(fullscr=True, screen=0,
    allowGUI=True, allowStencil=True,
    monitor='HP_Elitebook', color=[0,0,0], colorSpace='rgb', useFBO=True, units='pix')

stim = visual.PatchStim(win, units='norm', size=(0.2, 0.5))
mouse = event.Mouse(win=win, visible = True) # set up mouse object

pixels = win._getFrame()  # Gets image of window
pixels = np.array(pixels)   # Convert image to numpy array, but shape suggests x, y are reversed into y, x
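One thing to check in the snippet above: nothing has been drawn when _getFrame() is called, so the sampled frame only contains the grey background — the grating has to be rendered into the buffer being sampled first (as Michael suggested earlier about buffers). A sketch of the order of operations, plus a small helper for the luminance check (the helper name is mine):

```python
import numpy as np

def mean_rgb(frame):
    # frame: H x W x 3 (or H x W x 4) uint8 array from np.array(win._getFrame())
    return frame[..., :3].mean()  # average over the RGB channels only

# Order of operations in the experiment script (sketch):
#   stim.draw()                                     # render into the back buffer
#   frame = np.array(win._getFrame(buffer='back'))  # sample before win.flip()
#   print(mean_rgb(frame))
#   win.flip()                                      # then present the frame
```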