Fade an image to black instead of inverting colors

Hi everyone,

I’m trying to dynamically change the color of an image on the screen. To accomplish this, I’m using visual.ImageStim and changing the colors with stimulus.color = [r, g, b]. This works great for any color brighter than neutral grey: light colors fade out to grey. However, it does not work for any colors darker than neutral grey. When I try to make the whole image black, for example (stimulus.color = [-1, -1, -1]), the colors in the image are inverted instead of displaying as black.

The math of the color maps makes sense here: multiplying by a negative number inverts the color. However, the same outcome occurs regardless of whether I use the rgb or the rgb255 colormap.
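To illustrate what I mean, here is a plain-NumPy sketch of the behavior I'm seeing (this is my mental model of the color math, not PsychoPy's actual code):

```python
import numpy as np

# PsychoPy's signed color space: -1 is black, 0 is neutral grey, +1 is white.
pixels = np.array([-1.0, 0.0, 1.0])  # a black, a grey, and a white pixel

# Setting stimulus.color effectively multiplies each pixel by the color value.
print(pixels * 1.0)   # color = white:      [-1.  0.  1.]   image unchanged
print(pixels * 0.5)   # color = light grey: [-0.5 0.  0.5]  faded toward grey
print(pixels * -1.0)  # color = black:      [ 1.  0. -1.]   image inverted!
```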

Do you have any suggestions for how to dynamically recolor an image to a dark color?

I’m including a minimal working example here.

Thanks!
geoff

from psychopy import visual, core
import numpy as np

CS = 'rgb'  # ColorSpace
WHITE = [1, 1, 1]
LIGHT_GREY = [0.5, 0.5, 0.5]
GREY = [0, 0, 0]
BLACK = [-1, -1, -1]

## ---- Uncomment this section to try a different colorspace
# CS = 'rgb255'  # ColorSpace
# WHITE = [255, 255, 255]
# LIGHT_GREY = [200, 200, 200]
# GREY = [128, 128, 128]
# BLACK = [0, 0, 0]

win = visual.Window([800, 800], monitor='testMonitor',
                    color=LIGHT_GREY, colorSpace=CS,
                    units='pix')

img = np.array([[-1, 0], [0, 1]]) # Image bitmap

stimulus = visual.ImageStim(win=win, image=img,
                     colorSpace=CS,
                     size=(100, 100),
                     units='pix')

# Show the normal stimulus
stimulus.color = WHITE
stimulus.draw()
win.flip()
core.wait(1.0)

# I want to show the stimulus faded to black.
# Instead, this inverts the stimulus, showing
# black areas as white and vice versa, leaving
# neutral grey areas unchanged.
stimulus.color = BLACK
stimulus.draw()
win.flip()
core.wait(1.0)

# This strategy does work for fading to neutral grey, however.
stimulus.color = GREY
stimulus.draw()
win.flip()
core.wait(1.0)

Just realized I forgot to include my system info!

I’m running PsychoPy 1.90.2 on Windows 10.

thanks,
geoff

Hi, a quick hack would be to fade your image by performing operations on your numpy array, using np.clip to ensure all values stay within range (i.e., -1 to 1), and setting the image on every frame. There is probably a less computationally expensive way than setting the image on every refresh.

Adding the following to your code…

for i in range(200):
    img = img - 0.01          # darken every channel slightly
    np.clip(img, -1, 1, img)  # keep values within the valid range
    stimulus.setImage(img)
    stimulus.draw()
    win.flip()

Hi there,

Thanks for the suggestion – it sounds like that could work in this case! However, I’m hoping for a generalizable solution that will work for any kind of stimulus – Circle or TextStim in addition to ImageStim.

The ideal solution would be the ability to do exactly what the current code does, but in a way that’s mathematically consistent with a 0–255 color map instead of a -1–1 colormap. Is it possible to change the colormap at a lower level? It’s perplexing that the color computations act as if they’re using a -1–1 colormap even when you specify 0–255.
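My understanding (please correct me if this is wrong) is that rgb255 input is converted into the signed space before any color math happens, which would explain why both spaces behave identically. A sketch of the conversion I suspect is going on:

```python
# Hypothetical sketch: rgb255 values get mapped into the signed -1..1
# space first, so the later color multiplication is always zero-centered.
def rgb255_to_signed(rgb255):
    return [c / 127.5 - 1.0 for c in rgb255]

print(rgb255_to_signed([0, 255]))  # -> [-1.0, 1.0]: black and white
```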

Thanks again!

You could draw a black object behind and then dynamically update the foreground object’s opacity. For example:

from psychopy import visual, core
import numpy as np

CS = 'rgb'  # ColorSpace
WHITE = [1, 1, 1]
LIGHT_GREY = [0.5, 0.5, 0.5]
GREY = [0, 0, 0]
BLACK = [-1, -1, -1]

## ---- Uncomment this section to try a different colorspace
# CS = 'rgb255'  # ColorSpace
# WHITE = [255, 255, 255]
# LIGHT_GREY = [200, 200, 200]
# GREY = [128, 128, 128]
# BLACK = [0, 0, 0]

win = visual.Window([800, 800], monitor='testMonitor',
                    color=LIGHT_GREY, colorSpace=CS,
                    units='pix')

img = np.array([[-1, 0], [0, 1]]) # Image bitmap

stimulus = visual.ImageStim(win=win, image=img,
                     colorSpace=CS,
                     size=(100, 100),
                     units='pix')

bg = visual.ImageStim(win=win, image=np.ones(img.shape) * -1.0,
                     colorSpace=CS,
                     size=(100, 100),
                     units='pix')

# Show the normal stimulus
stimulus.color = WHITE
stimulus.draw()
win.flip()
core.wait(1.0)

for fade_factor in np.linspace(0.0, 1.0, 60):
    bg.draw()
    stimulus.opacity = 1.0 - fade_factor
    stimulus.draw()
    win.flip()

core.wait(1.0)

Thanks for the suggestion!

Unfortunately, after playing around with this, I see that it doesn’t work. I need to fade the color independently in each color channel.

For example, on one frame I might need white areas of the image to have the color [254, 10, 10], black areas to have the color [0, 0, 0], and grey areas to have the color halfway between the two [127, 5, 5]. This can’t be accomplished by adjusting the foreground object’s opacity.
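In a 0–255 map, the fade I need is just a per-channel linear interpolation, something like this sketch (the function and its names are only illustrative):

```python
import numpy as np

def fade_toward(grey_level, target_white, target_black=(0, 0, 0)):
    """Map a grey level in [0, 1] onto the line between two 0-255 colors."""
    white = np.asarray(target_white, dtype=float)
    black = np.asarray(target_black, dtype=float)
    return black + grey_level * (white - black)

print(fade_toward(1.0, (254, 10, 10)))  # white areas -> [254.  10.  10.]
print(fade_toward(0.5, (254, 10, 10)))  # grey areas  -> [127.   5.   5.]
print(fade_toward(0.0, (254, 10, 10)))  # black areas -> [0. 0. 0.]
```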

If we could adjust the image colors assuming a colormap that varied from 0 to 255 instead of from -1 to 1, this would be trivial. Does anyone know if it’s possible to do that? (or know of another way to accomplish the same thing?)

thanks,
geoff

I just finished a version where I perform operations directly on a numpy array, and reset the image on every frame. Unfortunately, this is too slow. I have 8 stimuli, and it’s not capable of running stimulus.setImage(img) on each of those stimuli within the time it takes to refresh a frame.

I can’t quite follow what your stimulus requirements are. Can you either expand on what a ‘working’ stimulus would be like, or post an example of slow-but-working code?

Sure thing – apologies for the confusion. You can think of it like putting a colored piece of glass in front of the computer screen. I need to filter my stimuli through color filters like you’d put on a camera lens. Here’s a working example of what I need.

from __future__ import division
from psychopy import core, visual
import numpy as np

CS = 'rgb'  # ColorSpace
WHITE = [1, 1, 1]
LIGHT_GREY = [0.5, 0.5, 0.5]
GREY = [0, 0, 0]
BLACK = [-1, -1, -1]
RED = [1, -1, -1]
BLUE = [-1, -1, 1]
GREEN = [-1, 1, -1]

def color_filter(img, rgb_filter):
    """ Filter a greyscale image through an RGB color filter

    img: a 2D numpy array
    rgb_filter: a 3-number RGB sequence in the range -1 to 1
    """
    img_rgb = np.empty([img.shape[0], img.shape[1], 3])
    rgb_filter = rescale_zero_min(rgb_filter)
    img = rescale_zero_min(img)
    for n_channel in range(3): # For each color channel
        img_rgb[:,:,n_channel] = img * rgb_filter[n_channel]
    return img_rgb

def rescale_zero_min(x):
    """ Rescale a color from (-1)-(1) to 0-1.
    """
    return (np.array(x) + 1) / 2

img = np.array([[-1, 0], [0, 1]]) # Image bitmap

win = visual.Window([400, 400], monitor='testMonitor',
                    color=LIGHT_GREY, colorSpace=CS,
                    units='pix')

stimulus = visual.ImageStim(win=win, image=img,
                     colorSpace=CS,
                     size=(100, 100),
                     units='pix')

# Show the example stimuli
for color in (WHITE, LIGHT_GREY, GREY, BLACK, RED, BLUE, GREEN):
    filtered_image = color_filter(img, color)
    stimulus.setImage(filtered_image) ## <---- This is too slow
    stimulus.draw()
    win.flip()
    core.wait(1.0)

I’ve been looking through the PsychoPy source code, and I think I’ve found a couple of places that prevent me from getting this behavior by setting the color attribute of visual.ImageStim objects. The class psychopy.visual.basevisual.ColorMixin has a _getDesiredRGB method with the comment: “Ensure that we work on 0-centered color (to make negative contrast values work)”. This is exactly what I want to avoid, but overriding that method doesn’t seem to solve the problem.

I also looked at the image method (an attributeSetter) of the visual.ImageStim class, and found that this method specifies what looks like an OpenGL colormap. When creating the texture (line 285 of psychopy.visual.image.py), self._createTexture takes the argument pixFormat=GL.GL_RGB. Maybe I could solve the problem by changing that argument, but I haven’t been able to make sense of the OpenGL documentation.

Thank you for your help!

My suggestion is to draw the stimulus, change the blend mode, draw the filter, and then reset the blend mode. If you change the blend mode to GL_DST_COLOR, GL_ZERO, then the filter will be multiplied with the stimulus to produce the output (see this great blend mode helper site).
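The per-channel arithmetic of that blend mode can be checked in NumPy (a sketch of what the GPU computes, not actual GL code):

```python
import numpy as np

# glBlendFunc(GL_DST_COLOR, GL_ZERO) computes, per channel in 0-1 space:
#   out = src * dst + dst * 0  =  src * dst
dst = np.array([1.0, 0.5, 0.0])  # stimulus pixels already in the framebuffer
src = np.array([1.0, 0.0, 0.0])  # a pure red filter drawn on top
print(src * dst)  # -> [1. 0. 0.]: red passes through, other channels blocked
```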

Here is an example:

from __future__ import division
from psychopy import core, visual, event
import numpy as np

import pyglet.gl

CS = 'rgb'  # ColorSpace
WHITE = [1, 1, 1]
LIGHT_GREY = [0.5, 0.5, 0.5]
GREY = [0, 0, 0]
BLACK = [-1, -1, -1]
RED = [1, -1, -1]
BLUE = [-1, -1, 1]
GREEN = [-1, 1, -1]

def color_filter(img, rgb_filter):
    """ Filter a greyscale image through an RGB color filter

    img: a 2D numpy array
    rgb_filter: a 3-number RGB sequence in the range -1 to 1
    """
    img_rgb = np.empty([img.shape[0], img.shape[1], 3])
    rgb_filter = rescale_zero_min(rgb_filter)
    img = rescale_zero_min(img)
    for n_channel in range(3): # For each color channel
        img_rgb[:,:,n_channel] = img * rgb_filter[n_channel]
    return img_rgb

def rescale_zero_min(x):
    """ Rescale a color from (-1)-(1) to 0-1.
    """
    return (np.array(x) + 1) / 2

img = np.array([[-1, 0], [0, 1]]) # Image bitmap

win = visual.Window([400, 400], monitor='testMonitor',
                    color=LIGHT_GREY, colorSpace=CS,
                    useFBO=True,
                    units='pix')

stimulus = visual.ImageStim(win=win, image=img,
                     colorSpace=CS,
                     pos=(0, -100),
                     size=(100, 100),
                     units='pix')

new_stimulus = visual.ImageStim(win=win, image=img,
                     colorSpace=CS,
                     pos=(0, 100),
                     size=(100, 100),
                     units='pix')

img_filter = visual.ImageStim(win=win, image=np.ones((100, 100)),
                     colorSpace=CS,
                     pos=(0, 100),
                     size=(100, 100),
                     units='pix')

# Show the example stimuli
for color in (WHITE, LIGHT_GREY, GREY, BLACK, RED, BLUE, GREEN):
    filtered_image = color_filter(img, color)
    stimulus.setImage(filtered_image) ## <---- This is too slow

    stimulus.draw()

    new_stimulus.draw()

    pyglet.gl.glBlendFunc(pyglet.gl.GL_DST_COLOR, pyglet.gl.GL_ZERO)

    img_filter.color = color
    img_filter.draw()

    # reset the blend mode
    pyglet.gl.glBlendFunc(pyglet.gl.GL_SRC_ALPHA, pyglet.gl.GL_ONE_MINUS_SRC_ALPHA)

    win.flip()

    frame = np.array(win.getMovieFrame())

    assert np.all(frame[:200, :] == frame[200:, :])

    event.waitKeys()

Fantastic! This solves the problem, with one caveat. I’m trying to use circular alpha-masks on my stimuli, using the mask='circle' argument when initializing visual.ImageStim. But changing the GL blend function appears to disable the alpha mask. Is it possible to choose a GL blend function that preserves those masks?

Thank you!

I fixed the broken mask by adding a stimulus object on top of these that is just a mask with an aperture in the middle (after resetting the GL blend mode). Everything is working great now.
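For anyone finding this later, here is a sketch of how I built that occluder's mask array (illustrative only; it assumes PsychoPy's convention that +1 in a mask means opaque/visible and -1 means transparent):

```python
import numpy as np

# Build an n x n mask that is transparent (-1) inside a central circular
# aperture and opaque (+1) everywhere else. Applied to a background-colored
# ImageStim drawn last (after resetting the blend mode), it restores the
# circular aperture that the blend trick disabled.
def aperture_occluder_mask(n, radius):
    y, x = np.mgrid[0:n, 0:n]
    center = (n - 1) / 2.0
    dist = np.hypot(x - center, y - center)
    return np.where(dist <= radius, -1.0, 1.0)

mask = aperture_occluder_mask(64, 20)
# The corners belong to the opaque occluder; the center is the see-through hole.
print(mask[0, 0], mask[32, 32])  # -> 1.0 -1.0
```

The occluder itself would then be something like `visual.ImageStim(win, image=np.ones((64, 64)) * 0.5, mask=mask, ...)`, with the image value matching the window's background color (0.5 here matches LIGHT_GREY; adjust as needed).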

Thank you for all your help!