
How to get (output) psychopy experiment screen as numpy array in real-time?

Hello PsychoPy developers.
I am developing an experiment in which two participants take part, using PsychoPy.

In my experiment, I need to change Participant B's screen according to Participant A's responses and inputs while A is looking at his own screen. Both screens are identical at first, but B's screen gradually changes according to A's responses and inputs.

When changing B's screen, I would like to use NumPy to perform the complex calculations over a large number of pixels. In other words, I want to obtain the screen that A sees as a NumPy array (an ndarray, like the result of cv2.imread()), and then edit it to reconstruct the screen that B sees in real time. The screen seen by B does not have to be a PsychoPy window.

I attached an image that shows my experiment; the parts I particularly want to know about are marked in red.

Does anyone know how to get (output) a PsychoPy experiment screen as a NumPy array?



In general, it’s easier to just change the parameters of the PsychoPy components which make up the image, e.g. changing the fillColor of a Polygon component. However, if you already have the array-morphing function, you can use win.screenshot to get the current screen as a PIL image, which you can pass to numpy.asarray() to get it as an array.
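A minimal sketch of that capture-and-convert approach (win.screenshot is as suggested above and assumes a recent PsychoPy; on older versions, win.getMovieFrame(buffer='front') followed by win.movieFrames[-1] serves the same purpose):

```python
import numpy as np

def frame_to_array(frame):
    """Convert a captured frame (e.g. a PIL image) to a NumPy ndarray."""
    return np.asarray(frame)

def capture_demo():
    # Deferred import so frame_to_array can be used without a display.
    from psychopy import visual
    win = visual.Window(size=(800, 600), units="pix")
    win.flip()
    # As suggested above; older PsychoPy versions would instead use
    # win.getMovieFrame(buffer='front') and then win.movieFrames[-1].
    pil_img = win.screenshot
    arr = frame_to_array(pil_img)  # shape (height, width, channels)
    win.close()
    return arr
```

From there, arr can be fed straight into the morphing function and the result displayed however suits Participant B's side.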

Thank you very much for your advice.
I feel that using win.screenshot or saveMovieFrames is certainly possible. However, this may compromise real-time performance.
For example, while A is holding down a button, the objects on the screen that B is looking at need to change in real time. Capturing a single frame and converting it will take some time, so real-time performance will likely be lost.
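One way to check that concern concretely is to time the conversion step against the frame budget (about 16.7 ms at 60 Hz). This sketch times a full copy of a synthetic full-HD frame as a stand-in for the PIL-to-ndarray conversion; the capture itself (reading pixels back from the GPU) is usually the slower part, so this only gives a lower bound:

```python
import time
import numpy as np

FRAME_BUDGET_S = 1 / 60  # ~16.7 ms per frame at 60 Hz

def time_conversion(width=1920, height=1080, repeats=20):
    """Time a full frame copy on a synthetic full-HD RGB frame."""
    frame = np.zeros((height, width, 3), dtype=np.uint8)  # stand-in for a captured frame
    start = time.perf_counter()
    for _ in range(repeats):
        arr = np.array(frame)  # forces a copy, like converting a fresh PIL image
    elapsed = (time.perf_counter() - start) / repeats
    return elapsed, arr.shape

elapsed, shape = time_conversion()
print(f"mean conversion time: {elapsed * 1e3:.3f} ms "
      f"({'within' if elapsed < FRAME_BUDGET_S else 'over'} the 60 Hz budget)")
```

If the measured time plus the capture and morphing steps exceeds the budget, dropping the resolution or updating B's screen every second or third frame are possible compromises.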

I am sorry, I forgot to write “in real time, with as little delay as possible”. I’ll add it.
In order to synchronize the changes to Screen B with Screen A, I think we need to output it as an ndarray in real time. What do you think?
In addition, I already have the morphing function written with NumPy.



What does the morphing algorithm do to the image? If it changes the colours and dimensions of shapes, then this is achievable using Polygon components rather than an image, and it would solve the speed problem as it would require far less processing. I can see why it would be frustrating having to rewrite a morphing algorithm when there’s one already available, but if speed is of the essence then I think it’s your best bet.
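For reference, a sketch of that parameter-driven approach. The interpolation helper is plain Python; the demo loop is a hypothetical example (component names and colours are my own choices) and needs a display to run:

```python
def lerp_color(c0, c1, t):
    """Linearly interpolate between two colours in PsychoPy's rgb space (-1..1)."""
    return [a + (b - a) * t for a, b in zip(c0, c1)]

def run_demo(n_frames=120):
    # Deferred import so lerp_color can be used without a display.
    from psychopy import visual
    win = visual.Window(size=(800, 600), units="pix")
    poly = visual.Polygon(win, edges=5, radius=100, fillColor=[1, -1, -1])
    for i in range(n_frames):
        t = i / (n_frames - 1)
        poly.fillColor = lerp_color([1, -1, -1], [-1, -1, 1], t)  # red -> blue
        poly.draw()
        win.flip()
    win.close()
```

Because only a handful of shape parameters change per frame, this stays well inside the frame budget, unlike per-pixel processing of a full screenshot.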

My morphing algorithm changes colours gradually and dynamically. It also morphs one shape into another dynamically, like a bubble. They are like generative art, I think. However, I can’t say anything about the details because this is part of one of my research projects.
I know Polygon components have many properties for changing their appearance. However, my experiment program for Participant A, based on PsychoPy, transfers its data to Participant B’s screen and to an Arduino program that controls the physical actuators through OSC. To control many actuators, I need to send the data as an array in my case.

So, if PsychoPy’s screen can’t be output as an ndarray, then I will think about other approaches.