I have an experiment where I need to play smooth tone sweeps. For example, the frequency might start at 1024 Hz and stop at 2048 Hz. The speed of the sweep and the end point are conditioned on something the participant is doing on the screen, so I can't generate an audio file in advance; I need to be changing the frequency on demand.
In other languages this is not a trivial problem: you need to match the phase of the sinusoids as you change the frequency, otherwise you get unpleasant auditory artifacts. Is it possible to generate these smooth sweeps in PsychoPy or PsychoJS?
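To illustrate what I mean by matching the phase: the usual trick in other languages is a phase accumulator. Rather than computing sin(2πft) from scratch each sample (which jumps in phase whenever f changes), you advance a running phase by 2πf/sampleRate per sample, so the waveform stays continuous however often the frequency is updated. A minimal sketch in plain JavaScript (the names are just illustrative):

```javascript
// Generate samples via a phase accumulator so the sine stays
// continuous even when the frequency changes mid-stream.
function makeSweepGenerator(sampleRate) {
  let phase = 0; // running phase in radians
  return function nextSample(freq) {
    const s = Math.sin(phase);
    phase += (2 * Math.PI * freq) / sampleRate;
    if (phase > 2 * Math.PI) phase -= 2 * Math.PI; // keep phase bounded
    return s;
  };
}

// Example: sweep from 1024 Hz to 2048 Hz over one second at 48 kHz.
const sampleRate = 48000;
const next = makeSweepGenerator(sampleRate);
const samples = [];
for (let i = 0; i < sampleRate; i++) {
  const freq = 1024 + (2048 - 1024) * (i / sampleRate); // linear sweep
  samples.push(next(freq));
}
```

Because consecutive samples differ in phase by at most 2π·2048/48000 ≈ 0.27 radians, there are no jumps in the output, which is exactly the click-free behaviour I'm after.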
If it's not immediately clear how to do this in PsychoJS, I think my best option would be to use an external JS audio library (there are lots of these): initialise it in the Begin Routine section of a code component, and then in the Each Frame section have the PsychoJS code update some global parameter that the library uses to set its frequency. Any thoughts on this approach?
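The glue code for that approach could be very small: the Each Frame code only ever writes a number, and the library reads it. A hypothetical sketch, with a plain object standing in for whichever library ends up being used (any library exposing something like a `setFrequency()` method would slot in the same way):

```javascript
// Stand-in for an external synth library; purely illustrative.
const synth = {
  freq: 1024,
  setFrequency(f) { this.freq = f; },
};

// Each Frame (Builder code component): map some on-screen quantity
// (here a normalised 0..1 position, standing in for e.g. mouse x)
// onto the 1024-2048 Hz range and push it to the library.
function eachFrame(normPos) {
  const f = 1024 + (2048 - 1024) * normPos;
  synth.setFrequency(f);
  return f;
}
```

The point is that the Builder side never touches audio directly; it just publishes the target frequency once per frame and lets the library worry about phase.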
Okay, that's an interesting approach! Using square waves certainly seems to make the phase-locking problem easier to solve, but at the cost of sound quality, as you note.
I think I will push ahead with using an external library, once I figure out how they work. Is there a way of manually setting imports and requirements (i.e., editing the index.html file) within the PsychoPy Builder? I'd like to keep all the changes I make within Builder, to make it easier to share the code later on.
I decided to go back to basics and use the Web Audio API, in particular the AudioParam interface (see here). It's taken a little bit of time to figure out how to use it properly, but I'm now good to go.
It's very powerful and supposedly quite performant (perhaps the PsychoJS devs use this as the backend for their audio nodes anyway?), so I'm reasonably confident using it for psychophysics. Once I have the full task written and so on, I'll share a link to the repo so you can see how I'm using it, if you like.
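In the meantime, here's roughly the pattern, in case anyone finds this thread later. An OscillatorNode's `frequency` is an AudioParam, so you can retarget it every frame with `setTargetAtTime`, which smooths towards the new value and avoids clicks (the helper name and the 1024-2048 Hz mapping are from my task, not anything PsychoJS-specific; treat this as a sketch rather than the definitive way):

```javascript
// Map a normalised screen position (0..1, clamped) onto the sweep range.
function positionToFreq(pos, fMin = 1024, fMax = 2048) {
  const p = Math.min(Math.max(pos, 0), 1);
  return fMin + (fMax - fMin) * p;
}

// Browser-only part, guarded so the pure helper above can also be
// exercised outside a browser.
if (typeof AudioContext !== "undefined") {
  // Begin Routine: create the context and a running oscillator.
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  gain.gain.value = 0.1;        // keep the level modest
  osc.type = "sine";
  osc.frequency.value = 1024;   // start of the sweep
  osc.connect(gain).connect(ctx.destination);
  osc.start();

  // Each Frame: retarget the frequency. setTargetAtTime approaches the
  // new value exponentially (20 ms time constant here), so successive
  // per-frame updates blend into a smooth, click-free sweep.
  window.updateSweep = function (screenPos) {
    osc.frequency.setTargetAtTime(
      positionToFreq(screenPos), ctx.currentTime, 0.02
    );
  };
}
```

The nice part is that all the phase bookkeeping from my first post happens inside the browser's audio engine; the per-frame code just declares the target frequency.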