Apodize function in the sound source code of PsychoPy

Hi everyone. There is a function called apodize which applies ramps at the start and end of the sine waves used in the sound routine.
Here is the function:

import copy
import numpy

def apodize(soundArray, sampleRate):
    """Apply a Hanning window (5 ms) to reduce a sound's 'click' onset / offset."""
    hwSize = int(min(sampleRate // 200, len(soundArray) // 15))
    hanningWindow = numpy.hanning(2 * hwSize + 1)
    soundArray = copy.copy(soundArray)
    soundArray[:hwSize] *= hanningWindow[:hwSize]
    soundArray[-hwSize:] *= hanningWindow[hwSize + 1:]
    return soundArray

It says that it applies a 5 ms ramp to sounds. How can I increase or decrease this value (5 ms)? As I still hear the clicks it is supposed to eliminate, I want to change this value.
Thank you for your guidance.

It looks like the 5 ms value is baked into the function, via the // 200: dividing the sample rate by 200 gives the number of samples in 1/200 s, i.e. 5 ms. So if you want to alter that value, you will need to override that function with your own customised version (this is called "monkey patching" and is one of the handy features of Python being a dynamic language).
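For reference, here is one way you could do that for apodize() itself: a copy of the function with the ramp duration pulled out as a parameter (rampSecs=0.005 reproduces the original behaviour, since sampleRate // 200 is just the number of samples in 5 ms). The module path in the commented-out patch lines is an assumption - check where apodize actually lives in your PsychoPy version:

```python
import copy

import numpy


def apodize_custom(soundArray, sampleRate, rampSecs=0.010):
    """Like PsychoPy's apodize(), but with an adjustable ramp duration."""
    # number of samples in the ramp, capped at a 15th of the sound
    hwSize = int(min(sampleRate * rampSecs, len(soundArray) // 15))
    hanningWindow = numpy.hanning(2 * hwSize + 1)
    soundArray = copy.copy(soundArray)  # don't modify the caller's array
    soundArray[:hwSize] *= hanningWindow[:hwSize]
    soundArray[-hwSize:] *= hanningWindow[hwSize + 1:]
    return soundArray

# Monkey patch -- the module path here is a guess, check your version:
# from psychopy.sound import _base
# _base.apodize = apodize_custom
```

Any sound code that calls the module-level apodize() would then pick up the longer ramp.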

But I'm not sure the apodize() function is what you are after, though. With the SoundDevice backend, for example, the setSound() function applies its own Hamming window.

So you could perhaps override that entire function, changing nothing other than the 5 ms value, like this:

    # customised setSound() with a 10 ms window:
    def setSound_10(self, value, secs=0.5, octave=4, hamming=None, log=True):
        """Set the sound to be played.

        Often this is not needed by the user - it is called implicitly during
        initialisation.

            value: can be a number, string or an array:

                * If it's a number between 37 and 32767 then a tone will
                  be generated at that frequency in Hz.
                * It could be a string for a note ('A', 'Bfl', 'B', 'C',
                  'Csh'. ...). Then you may want to specify which octave.
                * Or a string could represent a filename in the current
                  location, or mediaLocation, or a full path combo
                * Or by giving an Nx2 numpy array of floats (-1:1) you can
                  specify the sound yourself as a waveform

            secs: duration (only relevant if the value is a note name or
                a frequency value)

            octave: is only relevant if the value is a note name.
                Middle octave of a piano is 4. Most computers won't
                output sounds in the bottom octave (1) and the top
                octave (8) is generally painful
        """
        # start with the base class method
        _SoundBase.setSound(self, value, secs, octave, hamming, log)
        # then check we have an appropriate stream open
        try:
            label, s = streams.getStream(sampleRate=self.sampleRate,
                                         channels=self.channels,
                                         blockSize=self.blockSize)
        except SoundFormatError as err:
            # try to use something similar (e.g. mono->stereo)
            altern = streams._getSimilar(sampleRate=self.sampleRate,
                                         channels=-1,
                                         blockSize=-1)
            if altern is None:
                raise err
            else:  # safe to extract data
                label, s = altern
            # update self in case it changed to fit the stream
            self.sampleRate = s.sampleRate
            self.channels = s.channels
            self.blockSize = s.blockSize
        self.streamLabel = label

        if hamming is None:
            hamming = self.hamming
        else:
            self.hamming = hamming
        if hamming:
            # originally 5 ms or a 15th of the stimulus (for short sounds)
            hammDur = min(0.010,  # 10 ms, changed from the original 0.005
                          self.secs / 15.0)  # 15th of stim
            self._hammingWindow = HammingWindow(winSecs=hammDur,
                                                soundSecs=self.secs,
                                                sampleRate=self.sampleRate)
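Note what the hammDur line does: the window only gets the full requested duration for sounds longer than 15 times the window; shorter sounds get a 15th of their duration instead. A quick standalone check of that logic (plain Python, no PsychoPy needed):

```python
def hamm_dur(secs, win_secs=0.010):
    # mirrors the hammDur calculation above: the requested window duration,
    # capped at a 15th of the stimulus duration for short sounds
    return min(win_secs, secs / 15.0)

assert hamm_dur(1.0) == 0.010          # long sound: full 10 ms window
assert hamm_dur(0.06) == 0.06 / 15.0   # 60 ms sound: capped at ~4 ms
```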

and then monkey patch like this:

# override the standard function with an amended version:
sound.Sound.setSound = setSound_10

But I don't really understand all the different sound backends - you would need to override the setSound() function specific to the backend you are using.
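One detail of monkey patching worth knowing, whichever backend you end up patching: because Python looks the method up on the class at call time, the patch also affects objects created before the patch was applied. A minimal illustration with a stand-in class (FakeSound is hypothetical, not part of PsychoPy):

```python
class FakeSound:
    def setSound(self, value):
        return ("original", value)

snd = FakeSound()  # instance created *before* the patch

def setSound_10(self, value):
    return ("patched-10ms", value)

# the monkey patch: rebind the method on the class itself
FakeSound.setSound = setSound_10

print(snd.setSound(440))  # ('patched-10ms', 440) -- existing instance sees it too
```

Even so, it is safest to apply the patch once, at the top of your script, before any sounds are played.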