Smooth tone sweeps generated procedurally

I have an experiment where I need to play smooth tone sweeps. For example, the frequency may start at 1024 Hz and stop at 2048 Hz. The speed of the sweep and its end point are conditioned on something the participant is doing on the screen, so I can’t generate an audio file in advance; instead, I need to change the frequency on demand.

In other languages this is not a trivial problem, since you need to match the phase of the sinusoids as you change the frequency, otherwise you get unpleasant auditory artifacts (clicks). Is it possible to generate these smooth sweeps in PsychoPy or PsychoJS?
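To illustrate what I mean by matching phase: a minimal sketch (function and variable names are mine, not from any library) that accumulates phase sample by sample, so the waveform stays continuous even when the frequency jumps, instead of evaluating sin(2*pi*f*t) directly, which clicks whenever f changes:

```javascript
// Phase-continuous synthesis sketch: advance a running phase by each
// sample's instantaneous frequency, rather than recomputing sin(2*pi*f*t).
function makeSweepSamples(freqs, sampleRate) {
  const out = new Float32Array(freqs.length);
  let phase = 0; // running phase in radians
  for (let i = 0; i < freqs.length; i++) {
    out[i] = Math.sin(phase);
    // advance phase by this sample's instantaneous frequency
    phase += (2 * Math.PI * freqs[i]) / sampleRate;
    phase %= 2 * Math.PI; // keep the accumulator bounded
  }
  return out;
}
```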

If it’s not immediately clear how to do this in PsychoJS, I think my best option would be to use an external JS library (there are lots of these), have it initialised in the Begin Routine of a code block, and then in the Each Frame section have the PsychoJS code update a global parameter that the other library uses to set its frequency. Any thoughts on this approach?
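Concretely, I imagine the Each Frame code would just nudge a global toward a target frequency, something like this (a sketch; the names sweepFreq/targetFreq and the rate constant are placeholders, and the audio library reading the global is not shown):

```javascript
// Each Frame sketch: move the current frequency a fixed fraction of the
// remaining distance toward the target, so the sweep eases in smoothly.
function stepFreq(current, target, ratePerFrame) {
  return current + (target - current) * ratePerFrame;
}

// Each Frame tab (sketch):
// window.sweepFreq = stepFreq(window.sweepFreq, window.targetFreq, 0.05);
```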



Hi Peter,

Have a look at my Music Box and Music Player online demos for my attempts to play notes programmatically online.

I believe that they don’t sound great because they are square waves. I’d be happy to work with you to try to use an external JS library if you think this can be improved.

Okay, that’s an interesting approach! Using square waves certainly seems to make the phase-locking problem easier to solve, but at the cost of sound quality, as you note.

I think I will push ahead with using an external library, once I figure out how they work. Is there a way of manually setting imports and requirements (i.e., editing the index.html file) within the PsychoPy Builder? I’d like to keep all my changes within Builder to make the code easier to share later on.

Try something like the following in a JS only code component in the Before Experiment tab.

import PsychoPolyFill from '';
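If an ES import like that gives you trouble, another option (a sketch, not an official PsychoJS mechanism; the URL below is a placeholder) is to inject a script tag at runtime from a Before Experiment code component:

```javascript
// Load an external library by appending a <script> tag; resolves once
// the browser has fetched and executed it.
function loadScript(url) {
  return new Promise((resolve, reject) => {
    const s = document.createElement('script');
    s.src = url;
    s.onload = () => resolve(url);
    s.onerror = () => reject(new Error('failed to load ' + url));
    document.head.appendChild(s);
  });
}

// Usage (sketch): await loadScript('https://example.com/tone-lib.js');
```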


I’ll post updates here as I make (hopefully) progress!



I’m back!

I decided to go back to basics and use the Web Audio API, in particular the AudioParam interface (see here). It’s taken a little bit of time to figure out how to use it properly, but I’m now good to go.

It’s very powerful and supposedly quite performant (perhaps the PsychoJS devs use this as the backend for their audio nodes anyway?), so I’m reasonably confident using it for psychophysics. Once I have the full task written I’ll share a link to the repo so you can see how I’m using it, if you like.
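The core of it looks something like this (a sketch; the function name and the f0/f1/durSec parameters are mine, but setValueAtTime and exponentialRampToValueAtTime are the standard AudioParam automation methods):

```javascript
// Schedule a click-free sweep on an AudioParam (e.g. osc.frequency):
// pin the start value, then ramp to the target. The browser's audio
// thread interpolates sample-accurately, so there are no phase jumps.
function scheduleSweep(freqParam, now, f0, f1, durSec) {
  freqParam.setValueAtTime(f0, now);
  // an exponential ramp sounds perceptually even for pitch sweeps
  freqParam.exponentialRampToValueAtTime(f1, now + durSec);
}

// In the browser, roughly:
// const ctx = new AudioContext();
// const osc = ctx.createOscillator();
// osc.type = 'sine';
// osc.connect(ctx.destination);
// osc.start();
// scheduleSweep(osc.frequency, ctx.currentTime, 1024, 2048, 2.0);
```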

Thanks Peter. I look forward to hearing from you when you have a full task up and running. I’m not sure what PsychoJS has as the backend. Perhaps this might help answer that question.