Dithering to measure contrast sensitivity

I am planning to measure contrast sensitivity and wonder whether anyone has implemented in PsychoPy a dithering method such as this one:

Allard, R., & Faubert, J. (2008). The noisy-bit method for digital displays: Converting a 256 luminance resolution into a continuous resolution. Behavior Research Methods, 40(3), 735-743.

I thought that it might be possible through the ‘mask’ parameter in GratingStim, but I want to measure contrast sensitivity for moving gratings and the masks are static.

Hi There,

I have been working on a version of noisy bit; my code is below. Before taking a look, let me note some of the details:

  1. This code works by taking an image with four channels (RGB + alpha). I have attached the image I use in this example ("BlackSquare.png"). Set the path to your image via the "File" variable.

  2. The code then takes a float "value" (anything between 0 and 255). In this example I set the value to 100.5.

  3. This is where it currently gets messy - I need to properly sit down and optimise this. The code goes through each pixel (…this is fun…) and randomly sets its ALPHA CHANNEL to either 100 or 101. The numbers of pixels set to the higher and lower integers are based on the "getPixelPercent" function, so overall the image will (in this case) be 50% 101 and 50% 100. I measured this using a photometer and seemed to get a nice continuous scale of luminance. See my plot below.
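The split described in step 3 can be sketched in a few standalone lines (the variable names here are illustrative, not the ones used in the script below): the fractional part of the target value decides what share of pixels get the upper integer.

```python
# Illustrative sketch of the upper/lower split in step 3.
value = 100.5                       # target level, may be fractional
lower_val = int(value)              # 100
upper_val = lower_val + 1           # 101
percent_upper = (value % 1) * 100   # share of pixels set to upper_val
percent_lower = 100 - percent_upper
# mean pixel value = lower_val + percent_upper / 100 = 100.5
```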

from __future__ import division  # so that 1/3=0.333 instead of 1/3=0
from psychopy import visual, core
import numpy
import math
from PIL import Image
##---------------INPUT--------------
File = '/Users/rebeccahirst/Downloads/BlackSquare.png'  # add the path to your image; it should have an alpha channel
value = 100.5  # what contrast level do you want your stimulus set at? (between 0 and 255)
##----------------------------------

#window
win = visual.Window(fullscr=False, screen=0, allowGUI=True, allowStencil=False,
    monitor='testMonitor', color=[0,0,0], colorSpace='rgb',
    blendMode='avg', useFBO=True,
    )
#------------------------functions for noisy bit------------
#Functions to set the opacity level of the images
##The function below takes "value" as input: any number between 0 and 255, float or integer, allowing a continuous scale of luminance.
##The output is what percentage of the pixels should get the lower integer value and what percentage the upper.
def getPixelPercent(value):
    # the fractional part of "value" directly gives the split, e.g. 100.5
    # means 50% of pixels at the upper integer and 50% at the lower
    Percent_Lower_Pixels = 100 - ((value % 1) * 100)
    Percent_Upper_Pixels = 100 - Percent_Lower_Pixels
    return (Percent_Lower_Pixels, Percent_Upper_Pixels)
##The function below counts how many (near-)opaque pixels the image contains, based on the alpha channel.
##It then creates a list of pixel values of the same length, containing both the lower and the upper integer:
##if you want to set a stimulus to 200.5, half the list will be 200 and half 201. That way half of the pixels
##get the lower value and half the upper, so the overall luminance/opacity lands on an intermediate step.
def getNumberofPixels(Pixels, im):
    # NB: relies on the globals Percent_Upper_Pixels, Percent_Lower_Pixels,
    # UpperVal and LowerVal, which are set below under "Implement code"
    arr = numpy.copy(numpy.asarray(im))  # get the numpy array from the image
    arr_T = arr.T  # transpose so the channels can be unpacked
    r, g, b, a = arr_T  # extract each of the channels
    for i in range(im.size[0]):  # for the size of the image
        for j in range(im.size[1]):
            if a[i, j] >= 200:  # count the opaque pixels (same criterion as setPixelVals, so the counts match)
                Pixels = Pixels + 1
    # int() casts so the list multiplications below get integers, not floats
    Upper_Pixels = int(numpy.rint((Percent_Upper_Pixels / 100) * Pixels))
    Lower_Pixels = int(numpy.rint((Percent_Lower_Pixels / 100) * Pixels))
    # rounding each half separately can leave us one pixel short or one over
    if (Upper_Pixels + Lower_Pixels) == Pixels - 1:
        Lower_Pixels = Lower_Pixels + 1
    elif (Upper_Pixels + Lower_Pixels) == Pixels + 1:
        Upper_Pixels = Upper_Pixels - 1
    Pixel_Vals = []
    Pixel_Vals.extend([UpperVal] * Upper_Pixels)
    Pixel_Vals.extend([LowerVal] * Lower_Pixels)
    return Pixel_Vals
##The function below goes through each pixel in the image and isolates the red, green, blue, and alpha channels.
##It identifies whether the pixel is (near-)fully opaque (i.e. its alpha is 200 or above - a fairly arbitrary criterion
##I set, since fully opaque would be 255). If the pixel is opaque it randomly selects either the upper or lower value
##from the list of "Pixel_Vals", assigns it to the alpha channel, and removes that item from the list so the overall split is preserved.
 
def setPixelVals(im, pix, Pixel_Vals):
    # rather than looping over the Image object directly, we convert it
    # into a numpy array to speed up processing
    arr = numpy.copy(numpy.asarray(im))  # get the numpy array from the image
    arr_T = arr.T  # transpose so the channels can be unpacked
    r, g, b, a = arr_T  # extract each of the channels (views into arr)
    for i in range(im.size[0]):  # for the size of the image
        for j in range(im.size[1]):
            if a[i, j] >= 200:  # if the current pixel is opaque, set it to one of our two levels
                e = numpy.random.choice(Pixel_Vals)
                a[i, j] = e
                Pixel_Vals.remove(e)
    im = Image.fromarray(arr)  # convert the array back into an image (once, after the loop)
    Pic = visual.ImageStim(win, image=im, flipHoriz=False, pos=(0, 0))
    return Pic, im

##Implement code
Percent_Lower_Pixels, Percent_Upper_Pixels=getPixelPercent(value)
UpperVal = int(math.ceil(value))  # the two nearest integers we can use to compose an image with a mean luminance of 'value'
LowerVal = int(math.floor(value))
im=Image.open(File)
pix = im.load()
Pixels=0
Pixel_Vals=getNumberofPixels(Pixels, im)
Pic, im1=setPixelVals(im, pix, Pixel_Vals)

Pic.draw()
win.flip()
core.wait(2)
core.quit()
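As a quick standalone sanity check of the idea (not using the image or the functions above): a list that is half 100s and half 101s should average back to the fractional target.

```python
import numpy as np

# Standalone sanity check: a half-and-half mix of the two integer
# levels should average back to the fractional target (100.5 here).
value = 100.5
n_pixels = 10000
n_upper = int(round((value % 1) * n_pixels))           # 5000 pixels at 101
vals = np.array([101] * n_upper + [100] * (n_pixels - n_upper))
np.random.shuffle(vals)                                # order doesn't change the mean
print(vals.mean())                                     # 100.5
```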

[Attachment: BlackSquare.png]

[Plot: continuous luminance scale measured between 0 and 20]

I understand that this is still a serious work in progress. The code isn’t efficient at processing large images (I suspect I need to improve my vectorising skills), so any feedback would be great!

I am afraid that I am not sure if this is useful for moving stimuli. If you were able to adapt this at all I would love to hear how you get on.

Good Luck!

Becca


Nice work Becca! I love to see some good problem-solving coding going on!

There’s one very noticeable place where you can indeed vectorise and get a substantial speed-up: the nested per-pixel loop is the bit that’s doing the most damage.

Almost any time you do that, it can be done faster with a numpy array. E.g. you could do something like this (NB this is not accounting for what colour space your values are in - it’s just illustrating the idea by adding/subtracting an arbitrary value to a certain fraction of pixels):

fraction = 0.3
increment = 1
decrement = -1
imageArray = np.array(im)  # np.array gives a writeable copy (np.asarray can return a read-only array from a PIL image)
noise = np.random.random(imageArray.shape[:2])  # random array matching the image's height x width
imageArray[noise < fraction] += increment  # add increment to ~30% of our image pixels
imageArray[noise >= fraction] += decrement  # add decrement to the other ~70%
stim.setImage(imageArray)

This is much faster in that the call to change the pixels is just two python calls, whereas yours is at least as many calls as pixels in the array.
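To make the speed-up concrete, here is a standalone sketch (array names are mine, not from the code above) that replaces a per-pixel dithering loop over an alpha channel with a single masked assignment:

```python
import numpy as np

# Vectorised version of the per-pixel dithering loop: bump a random
# fraction of the alpha values up by 1 in one masked assignment.
value = 100.5
frac = value % 1                                  # fraction of pixels to round up
alpha = np.full((256, 256), int(value), dtype=np.int32)
noise = np.random.random(alpha.shape)
alpha[noise < frac] += 1                          # ~50% of pixels become 101
# alpha.mean() is now approximately 100.5
```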

To reiterate though, nice work in getting a working solution. 🙂


Thank you for the feedback/advice! I need to stop relying on for loops so much!

This makes good sense, so I can try implementing it later!

Thanks Jon! 🙂

I was also looking for something similar earlier this year and here is my solution:

import numpy as np
from psychopy import visual
from PIL import Image


class NoisyImageStim(visual.ImageStim):

    def setImageArray(self, image):
        # map the image from PsychoPy's [-1, 1] range to [0, 255]
        imag255 = 255*(image + 1)/2
        # fractional part of each pixel: the probability of rounding up
        p = imag255 - np.floor(imag255)
        # round up with probability p (rand < p is True with probability p)
        noise_bit = np.random.rand(*imag255.shape) < p
        self.image = Image.fromarray(
            (np.floor(imag255) + noise_bit).astype(np.uint8))

Use the NoisyImageStim class just like a regular ImageStim, except that you can set an array to be rendered using noisy bit via the setImageArray method. I have so far tried it with both grayscale (i.e. len(image.shape) == 2) and color images (i.e. len(image.shape) == 3 and image.shape[2] == 3). No photometric measurements yet.
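A standalone check of the stochastic-rounding arithmetic (no PsychoPy window needed), assuming each pixel is rounded up with probability equal to its fractional part:

```python
import numpy as np

# Check the noisy-bit arithmetic on a uniform image: a value of 0.5
# in PsychoPy's [-1, 1] range maps to 191.25 in 0-255, so the
# dithered pixels (each 191 or 192) should average roughly 191.25.
image = np.full((64, 64), 0.5)
imag255 = 255 * (image + 1) / 2                   # 191.25 everywhere
p = imag255 - np.floor(imag255)                   # fractional part (0.25)
noise_bit = np.random.rand(*imag255.shape) < p    # True with probability p
dithered = np.floor(imag255) + noise_bit          # 191 or 192 per pixel
# dithered.mean() is approximately 191.25
```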


Here’s my implementation of this, using numpy so it should be pretty quick.

"""
Test implementation of noisy-bit method.

Allard, R., & Faubert, J. (2008). The noisy-bit method for digital displays: Converting a 256 luminance resolution
into a continuous resolution. Behavior Research Methods, 40(3), 735-743.
"""
import math

from psychopy import visual
import numpy as np

win = visual.Window(units='pix', color=[1, 1, 1])

value = 100.25
im_size = (256, 256)

# use a black image, and modulate the alpha values
# so need RGBA array
im_ar = np.full(im_size + (4,), -1, dtype=np.float64)

# get fraction and percent to change
frac, value = math.modf(value)

# set alpha to value
im_ar[:,:,3] = value / 255

# get random mask
mask = np.random.random(im_size)

# round the alpha up with probability equal to the fractional part
# (mask < frac is True for a fraction frac of the pixels)
im_ar[mask < frac, 3] += 1 / 255

stim = visual.GratingStim(win,
                          mask=None,
                          size=im_size,
                          tex=im_ar)

for i in range(120):
    stim.draw()
    win.flip()
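A standalone check of the alpha arithmetic (no window needed), assuming the alpha channel is incremented with probability equal to the fractional part of the target value:

```python
import math
import numpy as np

# Check that the dithered alpha channel averages out to value / 255.
value = 100.25
im_size = (256, 256)
frac, base = math.modf(value)              # 0.25 and 100.0
alpha = np.full(im_size, base / 255)
mask = np.random.random(im_size)
alpha[mask < frac] += 1 / 255              # ~25% of pixels get the higher step
# alpha.mean() * 255 is approximately 100.25
```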