Any performance improvement is for now purely hypothetical: I currently don’t have the means to measure latency. Nevertheless, low latency and predictable performance are both high-priority goals.
The main idea of the rtmixer module is that its callback function is implemented in C: it never touches Python’s GIL and is not affected by the garbage collector. It should therefore be possible, at least in theory, to reduce the block size further than is possible with a callback function written in Python. Again, I don’t have measurements to back that up.
On the Python side, there is still some code that has to be executed in the interpreter, and some memory has to be allocated for a so-called “action struct”. But once that “action struct” is placed in the “action queue”, the Python interpreter is not involved anymore.
When starting a stream in rtmixer, the sample rate, the number of channels and all other parameters have to be known in advance and cannot change during the runtime of the stream.
However, the given number of channels is an upper bound, and on each individual “play” or “record” action, the affected channels can be specified independently. For example, there is currently a function with this signature:
Mixer.play_buffer(self, buffer, channels, start=0, allow_belated=True) -> action
The channels argument allows specifying a list of arbitrary channel numbers on which the channels of buffer will be played back. The same option is available for recording.
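To make the channel mapping concrete, here is a sketch of preparing a two-channel buffer and routing it to specific output channels. The NumPy data preparation runs as shown; the actual rtmixer call is only indicated in comments because it needs audio hardware, and details such as the required dtype and the channel numbering convention are assumptions on my part, not confirmed API facts.

```python
import numpy as np

samplerate = 44100
t = np.arange(int(0.5 * samplerate)) / samplerate  # 0.5 seconds of time axis

# A two-channel float32 buffer: a 440 Hz tone on the first channel,
# a 660 Hz tone on the second (float32 layout is an assumption here).
buffer = np.column_stack([
    0.2 * np.sin(2 * np.pi * 440 * t),
    0.2 * np.sin(2 * np.pi * 660 * t),
]).astype('float32')

# Hypothetical playback, not executed here (requires audio hardware):
# import rtmixer
# with rtmixer.Mixer(channels=4, samplerate=samplerate) as mixer:
#     # route the buffer's two channels to output channels 3 and 4:
#     action = mixer.play_buffer(buffer, channels=[3, 4])
```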
The default setting start=0 means that the sound is played back as soon as possible. If reduced jitter is desired, one can instead give a start time in the future (using Mixer.time as reference), and the sound will be played at exactly that time. If the chosen time is not far enough in the future, the allow_belated option decides whether the sound will still be played back even though the requested time has already passed.
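The scheduling semantics as I understand them can be illustrated with a tiny pure function; this helper is not part of rtmixer, just a sketch of the described behavior, and the commented-out mixer call shows the assumed usage pattern.

```python
def resolve_start(now, start, allow_belated):
    """Illustrative helper (NOT part of rtmixer): decide what happens to
    an action scheduled for stream time `start` when the current stream
    time is `now`, mimicking the described allow_belated semantics."""
    if start == 0:
        return 'play now'          # default: as soon as possible
    if start <= now:
        # the requested time has already passed
        return 'play now' if allow_belated else 'dropped'
    return 'play at start'         # scheduled for a future stream time

# Hypothetical usage with a real mixer (not executed here):
# action = mixer.play_buffer(buffer, channels=[1, 2],
#                            start=mixer.time + 0.5)  # half a second ahead
```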
I think it’s not a problem to force the user to decide a priori on a maximum number of channels that should be used, but I’m not sure about different sample rates.
However, I consider any sample rate conversions out of scope for rtmixer (as stated here: https://github.com/mgeier/python-rtmixer). This is something that could be done separately on your side.
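Doing the conversion on your side could be as simple as resampling the signal before handing it to the mixer. The following is a deliberately naive linear-interpolation sketch just to show the idea; for real use you would want a proper resampler (for example scipy.signal.resample_poly or a dedicated library), since linear interpolation degrades audio quality.

```python
import numpy as np

def resample_linear(signal, src_rate, dst_rate):
    """Naive linear-interpolation resampler (illustration only)."""
    n_out = int(round(len(signal) * dst_rate / src_rate))
    # positions of the output samples on the input's sample grid:
    src_positions = np.arange(n_out) * src_rate / dst_rate
    return np.interp(src_positions, np.arange(len(signal)), signal)

# Convert 0.1 s of a 440 Hz tone from 48 kHz to 44.1 kHz:
mono_48k = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
mono_44k = resample_linear(mono_48k, 48000, 44100)
```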
Duplicating a mono signal to multiple channels is currently not supported by the channels argument, but I can probably add that feature; I think it would make sense. For now, the audio data has to be duplicated manually.
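The manual duplication is a one-liner with NumPy. The array manipulation below runs as shown; the playback call is only sketched in a comment, and the need for a C-contiguous buffer is my assumption, not a documented requirement.

```python
import numpy as np

# Manually duplicate a mono signal onto two output channels,
# as currently required (the channels argument does not do this yet).
mono = (0.1 * np.sin(2 * np.pi * 440 * np.arange(1024) / 44100)).astype('float32')
stereo = np.repeat(mono[:, np.newaxis], 2, axis=1)  # shape (1024, 2)

# Hypothetical playback on channels 1 and 2 (not executed here):
# mixer.play_buffer(np.ascontiguousarray(stereo), channels=[1, 2])
```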
BTW, you should be careful when opening a mono stream, since some host APIs duplicate that stream on their own to create a stereo signal, others (I think JACK and ASIO) don’t. For that reason, it’s probably best to choose stereo as a default.
The rtmixer module is quite new and not really tested at all, but on the other hand, since it is that new, its API can still be shaped to better fit the use in PsychoPy (while still staying a more general-purpose library).
If rtmixer isn’t enough for your needs, you could still use it as an example of how to implement your own, very specialized C callback function for PsychoPy. After all, I initially intended it merely as a code example …