
Needed: Real-time Intervention/control of experiment by User

Hi Devs,

Is there any real-time second-window support planned for PsychoPy? Currently the overall design concept is: start the experiment and, after a relatively slow initialization, hope that things go well. However, to use PsychoPy in ‘real world’ psychophysics, particularly in neurophysiology, you have to have some real-time control of the experiment. Some ideas, if they are not already implemented:

  1. I’d be happy if, after running from the Builder, there were an option for a standard output window on a second monitor showing simple text output, so you at least know which trial # you are currently on, and perhaps allowing simple pauses and text input so the user, rather than a prewritten algorithm, can change the trial logic.

  2. A second Python process bound to another CPU core (à la ioHub) could run a Qt window or other window/button/slider/textbox manager displayed on a second monitor and interact with the main PsychoPy process.
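For what it’s worth, the inter-process plumbing for idea 2 can be sketched with the standard library alone. Everything here is illustrative: the stub control process stands in for a real Qt event loop with buttons and sliders, and the command names are made up.

```python
# Sketch of idea 2: a control process talking to the experiment process
# over a multiprocessing Pipe. The Qt window is replaced by a stub that
# just sends two commands, so the plumbing itself runs anywhere.
import multiprocessing as mp

ctx = mp.get_context('fork')  # 'fork' keeps this example self-contained on Unix

def control_process(conn):
    # A real version would run a Qt/Tkinter event loop and send a
    # command whenever the experimenter presses a button.
    conn.send('pause')
    conn.send('quit')
    conn.close()

def experiment_loop(conn):
    handled = []
    while True:
        # Poll without blocking so the stimulus loop keeps its timing.
        if conn.poll(timeout=0.001):
            cmd = conn.recv()
            handled.append(cmd)
            if cmd == 'quit':
                break
        # ... draw stimuli / win.flip() would go here ...
    return handled

if __name__ == '__main__':
    exp_end, ctrl_end = ctx.Pipe()
    worker = ctx.Process(target=control_process, args=(ctrl_end,))
    worker.start()
    print(experiment_loop(exp_end))  # ['pause', 'quit']
    worker.join()
```

The non-blocking `poll()` is the important part: the experiment checks for commands once per frame rather than waiting on them, so the GUI can’t stall stimulus presentation.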

If this is already solved/implemented, apologies for not having found it, but currently I don’t see any options for stdout on the PsychoPy Data outputs documentation page.


PsychoPy can control windows on two monitors (I’ve been doing that for years), although admittedly Builder doesn’t provide for that option (and I think trying to control one via a code component from within Builder might be problematic, as the win.flip() pauses for each window might conflict and interfere with timing).
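As a rough illustration of that two-window approach (a sketch only: it assumes two attached monitors, and the `screen` indices depend on your OS display arrangement):

```python
# Sketch: one full-screen window for the participant, one small status
# window for the operator. Screen numbering depends on the OS setup.
from psychopy import core, visual

stim_win = visual.Window(screen=1, fullscr=True)       # participant
ctrl_win = visual.Window(screen=0, size=(400, 200))    # operator

fixation = visual.TextStim(stim_win, text='+')
status = visual.TextStim(ctrl_win, text='')

n_trials = 10
for trial in range(n_trials):
    fixation.draw()
    stim_win.flip()          # each flip waits on that window's own vsync
    status.text = 'trial %d of %d' % (trial + 1, n_trials)
    status.draw()
    ctrl_win.flip()
    core.wait(0.5)

core.quit()
```

Note the point made above about timing: each `flip()` blocks on its own window’s vertical sync, so flipping the operator window every frame can cost you frame time on the participant window; updating the operator window only at trial boundaries (as here) avoids most of that.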

On the other issue, real-time control is exactly what PsychoPy is for, so perhaps you need to better explain exactly what issues you might be facing? All software necessarily takes time for initialisation. When you need to start an experiment at a precise time (e.g. synchronising with an MRI), one starts the experiment in advance, and inserts a pause which waits for some sort of trigger to initiate the experiment proper. That would be the same approach used in all experimental control software I imagine.
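That trigger-wait pattern is a few lines in Coder. A sketch only: ‘t’ as the trigger key is just a common convention for scanners that emulate a keyboard, and a serial or parallel-port trigger would be read differently.

```python
# Sketch: hold on a "get ready" screen until the scanner trigger
# arrives (here assumed to come in as the keypress 't'), then zero
# the experiment clock and begin the experiment proper.
from psychopy import core, event, visual

win = visual.Window(fullscr=True)
msg = visual.TextStim(win, text='Waiting for scanner...')
msg.draw()
win.flip()

event.waitKeys(keyList=['t'])   # blocks until the trigger key arrives
exp_clock = core.Clock()        # t = 0 is now locked to the trigger
# ... run the experiment proper ...
```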

Thanks for the insights, Michael.

For example, if you run your paradigm from the Coder, there is a Python shell where any print("hello") commands written in the code will be displayed, and if the Coder is on the 2nd monitor, you get pretty close to real-time output of these prints (I’m not clear on when the stdout.flush() type of commands are run in PsychoPy, but it seems to be quite close to the frame rate of the monitors). There is no equivalent shell when running a psyexp file from the Builder; this was what I was thinking of in my question 1.
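On when output actually appears: by default Python’s stdout is line-buffered when attached to a terminal but block-buffered when redirected, so if immediate operator feedback matters it is safest to flush explicitly. A small stdlib-only sketch (the `report` helper and its format are just illustrative):

```python
# Sketch: unbuffered trial-by-trial output, so an operator watching the
# terminal sees each line as soon as it is printed, not whenever the
# output buffer happens to fill.
import time

def report(trial, n_trials, correct):
    pct = 100.0 * correct / max(trial, 1)
    print('trial %d/%d  %%correct=%.1f' % (trial, n_trials, pct),
          flush=True)   # force the write out now; or sys.stdout.flush()

for t in range(1, 4):
    report(t, 3, t)     # pretend every trial so far was correct
    time.sleep(0.01)
```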

The second question has to do with neurophysiology and the ebb and flow of data acquisition in that type of environment. There are times when we want to stop/start/restart the psychophysics paradigm without significant delay and without displaying any visual artifacts on the screen for the subject (e.g. we wish to avoid rendering mouse cursors, window backgrounds, task bars, resolution switching, quick flashes/color changes in the background, etc.), or perhaps change modes of the paradigm (e.g. manually activate/deactivate certain routines to balance trial types, etc.).

A separate graphical user interface in a second window (not a rendered OpenGL window, but simply a PyQt interactive GUI with some paradigm on/off buttons, trial number counters, %correct counters, etc.) would give the experimenter better control of, and knowledge about, the success of the current paradigm. This type of thing exists in the NIH/LSR REX/VEX visual presentation/neural acquisition control system (written in C), which is devoid of any of these interactive delays, and in MonkeyLogic (which runs in MATLAB). So far I have found that this type of feature is not explicitly coded for in PsychoPy yet.

This sort of control can easily be achieved when writing your own experiments, by drawing to windows on separate screens, one for the operator and one for the participant.

You’re right though, in that this isn’t easily achievable via Builder. Builder was originally envisaged as a teaching platform, with the idea that it would just be a gateway to introduce people to programming for “serious” experiments. Over time it’s become capable enough for doing real work, but there are still times when the flexibility of coding an experiment from scratch is needed.

I do something like this in PyHab, which is designed for infant looking-time experiments where you need similar information available and a similar ability to intervene. In particular, PyHab is designed for studies where presentation duration is dependent on infant looking behavior, so you might be able to adapt it to your needs with relatively minimal work. It’s all Coder, though; no Builder functionality whatsoever.

I’m going to be doing a huge upgrade to PyHab, probably in the next week, but the current version creates two windows: a presentation window on screen 1 and an output window for the experimenter on screen 0. Trial presentation basically depends on which keys the experimenter is holding down and on the presentation settings. The version on GitHub uses movie files as stimuli, but I’ve already created versions for my own studies that use PsychoPy’s stimulus generation instead. If you decide to try to adapt it to your needs and have any questions about how it works, let me know.

Hello Liner,

You might be able to open two processes (PsychoPy experiment / control GUI) and use Python’s socket library to communicate between the two. Such a setup will allow you to supervise experiments locally or from another PC.
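A minimal stdlib-only sketch of that socket idea: the experiment side listens on a local TCP port and a control client (a plain function standing in for the GUI’s button callbacks) sends newline-terminated commands. The host, port handling, and command names are all illustrative, and a thread stands in for the second process so the example stays self-contained.

```python
# Sketch: the experiment side listens on a local TCP socket; a control
# GUI (here a plain function standing in for its button callbacks)
# connects and sends newline-terminated commands like "pause"/"quit".
import socket
import threading

HOST = '127.0.0.1'
addr = {}                    # filled in once the listener has a port
ready = threading.Event()

def experiment_listener(commands):
    # Runs on the experiment side; collects commands until 'quit'.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, 0))                 # 0 = let the OS pick a free port
        addr['port'] = srv.getsockname()[1]
        srv.listen(1)
        ready.set()                         # tell the client we're up
        conn, _ = srv.accept()
        with conn, conn.makefile('r') as lines:
            for line in lines:
                cmd = line.strip()
                commands.append(cmd)
                if cmd == 'quit':
                    break

def send_commands(cmds):
    # What the control GUI would do; could equally run on another PC.
    ready.wait()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, addr['port']))
        cli.sendall(('\n'.join(cmds) + '\n').encode())

received = []
listener = threading.Thread(target=experiment_listener, args=(received,))
listener.start()
send_commands(['pause', 'quit'])
listener.join()
print(received)   # ['pause', 'quit']
```

In a real setup the listener would be polled non-blockingly (e.g. via `select` or a background thread feeding a queue) so the stimulus loop is never held up waiting for the GUI.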

You can use Tkinter for the GUI, it’s quite decent nowadays.