Online multitasking experiment with continuous mouse tracking

Dear All,
We are in the process of developing a new paradigm and would like your opinion on whether PsychoPy/Pavlovia would be a good choice for it.

We would like to create a multitasking study running online with the following tasks running in parallel:

  1. Mouse tracking: A box moves randomly on the screen, and the participant has to track it with their mouse. We need to record mouse data continuously, e.g. to determine the distance to the center of the box and whether the mouse is inside or outside the box. Ideally, we can also change the cursor (e.g. green if inside the box, red if outside); a minimal sketch of this task follows the list.

  2. A speeded choice-response task: At pseudo-random intervals, a sound is presented and participants have to press buttons as quickly as possible (e.g. press X for a low-pitched beep, C for a high-pitched beep).

  3. A monitoring task: Pre-determined texts are shown on the screen, changing every 1-2 seconds, and participants have to press a key (e.g. the space bar) when a certain target text is shown (of course, not overlapping in time with task 2).
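
For concreteness, here is a minimal local-Python sketch of what task 1 might look like in PsychoPy (untested; all names such as track_box and cursor_dot are placeholders, and a simple bouncing box stands in for the random motion):

```python
from psychopy import visual, core, event
import numpy as np

win = visual.Window([1024, 768], units='pix', fullscr=False)
win.mouseVisible = False                 # hide the system cursor; draw our own dot instead
track_box = visual.Rect(win, width=80, height=80, lineColor='white')
cursor_dot = visual.Circle(win, radius=6, fillColor='green')

mouse = event.Mouse(win=win)
clock = core.Clock()
samples = []                             # (time, mouse_x, mouse_y, inside_box) per frame

pos = np.array([0.0, 0.0])
velocity = np.array([120.0, 90.0])       # px/s; bounces at the window edges

while clock.getTime() < 10.0:            # 10 s demo; a real block would run longer
    dt = 1 / 60.0                        # assume ~60 Hz refresh for this sketch
    pos += velocity * dt
    for axis, limit in enumerate((512 - 40, 384 - 40)):
        if abs(pos[axis]) > limit:       # bounce off the window edge
            velocity[axis] *= -1
    track_box.pos = pos

    mx, my = mouse.getPos()
    inside = track_box.contains(mx, my)
    cursor_dot.pos = (mx, my)
    cursor_dot.fillColor = 'green' if inside else 'red'
    samples.append((clock.getTime(), mx, my, inside))

    track_box.draw()
    cursor_dot.draw()
    win.flip()                           # one mouse sample per screen refresh

win.close()
```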

So far I have programmed similar tasks in Presentation (in PCL), where I basically have full control over what to present when and over reading out the keyboard and mouse.

The challenge of such tasks is their ‘multitasking’ nature, i.e. being able to continuously read the mouse and keyboard, potentially provide error feedback, and also change the screen, play audio,… Note that I don’t need millisecond precision in recording RTs and mouse positions, as long as events are not missed.
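
To illustrate the kind of single frame loop I have in mind for running the tasks in parallel, here is another rough sketch (again untested; probe_text, beep, kb and all timings are made up, and online this would need to be translated to JavaScript/PsychoJS):

```python
from psychopy import visual, sound, core
from psychopy.hardware import keyboard

win = visual.Window([1024, 768], units='pix')
probe_text = visual.TextStim(win, text='', height=30, pos=(0, 300))
beep = sound.Sound('A', secs=0.2)         # placeholder tone; the real task needs two pitches
kb = keyboard.Keyboard()

clock = core.Clock()
texts = ['FOO', 'BAR', 'TARGET', 'BAZ']   # placeholder monitoring texts
text_idx = 0
next_text_change = 0.0
next_beep = 3.0                           # a pseudo-random schedule would be drawn in advance

while clock.getTime() < 20.0:
    t = clock.getTime()

    # Task 3: monitoring task, text changes every 1-2 s (fixed 1.5 s in this sketch)
    if t >= next_text_change:
        probe_text.text = texts[text_idx % len(texts)]
        text_idx += 1
        next_text_change = t + 1.5

    # Task 2: speeded choice-response, trigger the sound on schedule
    if t >= next_beep:
        beep.play()
        kb.clock.reset()                  # key RTs are then measured from sound onset
        next_beep = t + 5.0

    # Poll the keyboard once per frame without blocking the loop
    for key in kb.getKeys(['x', 'c', 'space'], waitRelease=False):
        print(key.name, key.rt)           # score / log the response here

    # Task 1 (mouse tracking) would update its box and cursor here as well

    probe_text.draw()
    win.flip()

win.close()
```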

Before spending a lot of time learning how to program in PsychoPy (I’ve never done Python), it would be of great help if you could let me know:

  1. Do you think this is possible in PsychoPy/Pavlovia?
  2. Do you think it’s easy and straightforward, or will it be quite tricky and challenging, with hurdles to overcome, etc.?
  3. What would be the best approach to start with, e.g. doing it in the Builder, as JavaScript code, etc.?

Many thanks &
Kind Regards,
Andre

Hi Andre, quick responses:

  1. Yes
  2. Very challenging in general. I am particularly worried about the sound task.
  3. I suggest starting with the Builder and then exporting it online.

tandy

Thanks for the reply.
When you say ‘particularly worried about the sound task’, do you mean because it uses sound (we may be able to change it to a visual task instead), or because it’s a speeded choice-response task running in parallel with the other tasks?

I had a lot of problems implementing something similar online quite a while ago. In the meantime there have been a few updates to PsychoPy, but I still see a lot of threads on the forum about problems with sound. In the end I switched to a visual stimulus.

To expand on what I wrote yesterday: I would recommend starting with the Builder. Start by building only one task (the monitoring task seems to be the easiest one) and test whether it works locally and online. After that, add the second task and then the third.

tandy

Andre,

Your bullet points 1 and 2 are direct replications of my work, first published publicly in January 2021.
In the manuscript you can find details of how we implemented tasks 1 and 2 exactly as you describe them in your post.

@Andre, did you persist with programming this multitasking paradigm in PsychoPy? If so, I’d appreciate any tips you have for anyone else attempting to do this!