Does PsychoPy have a voice-recognition-based instruction system that reduces mouse and keyboard input during experiments?

Can a voice-based interface like this improve operational efficiency during experiments and meet accessibility design requirements?

Locally, you can try the Whisper plugin. Here's a demo:

whisper.psyexp (27.3 KB)
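As a rough sketch of the idea: once a speech-to-text backend (such as the Whisper plugin) returns a transcript, you can map spoken phrases to the actions your experiment would otherwise collect from the keyboard or mouse. The `match_command` helper and its command table below are hypothetical illustrations, not part of PsychoPy or the plugin:

```python
from typing import Optional

# Hypothetical command table: canonical action -> spoken synonyms.
# In a real experiment you would define these to suit your task.
COMMANDS = {
    "next": ("next", "continue", "proceed"),
    "repeat": ("repeat", "again", "replay"),
    "quit": ("quit", "stop", "exit"),
}

def match_command(transcript: str) -> Optional[str]:
    """Return the canonical command spoken in a transcript, or None.

    Assumes `transcript` came from a speech-to-text backend; matching is
    a simple word lookup after lowercasing and stripping punctuation.
    """
    words = transcript.lower().split()
    for command, synonyms in COMMANDS.items():
        if any(word.strip(".,!?") in synonyms for word in words):
            return command
    return None
```

In a Builder experiment, you could call a helper like this in a code component's "Each Frame" tab and end the routine when it returns `"next"`, so participants advance trials by voice instead of a key press.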