VPIXX hardware class?

I’m hoping to switch our lab computer over from PTB to PsychoPy. We use a VPIXX Inc. VIEWPixx 3D monitor. As previously discussed, it should be relatively simple to integrate PsychoPy code and the VPIXX Python API (to control screen properties) within the same script by just importing both.

What I’d like to know from the developers is whether it would make sense to create a new ViewPixx class within psychopy.hardware, similar to the Bits++ class. This might be nice for unifying the API across hardware, but alternatively it might place an unnecessary extra maintenance burden on top of the above solution (separately integrating the toolboxes within one script). I’m also not sure how much of the code in the Bits++ class would need porting to the ViewPixx (some of it seems specific to setting up the Bits++ box properly, but I haven’t checked in detail).

We will be looking into this more in January, and we’re happy to work towards a pull request into PsychoPy based on our development. What I’m seeking here are opinions from the developers about useful approaches, or any warnings about things to take particular care of.

This sounds great.

Yes, I think it would be handy to have a hardware class for the box, even though that might be just a wrapper around their API. I suspect that their API isn’t very “Pythonic” in its use of classes etc. and so creating a class would be helpful.

The bits that look complex could be copied over (or possibly sub-classed) from the BitsSharp class. e.g. I think the ViewPixx has a pixel-packing system identical to the mode that CRS calls Color++, so you should be able to do exactly the same thing as that code.

Some of the BitsSharp code is about verifying that it has worked (e.g. there’s a system to try and work out what the “correct” identity lookup table is by repeatedly checking the values we get back from the screen). I imagine a variant of that will be useful too.

But, basically, if I were you I would take the approach of starting on the things I needed and seeing what got in the way!

Happy to help with ideas (and explanations of the existing code) if you need them.

Jon

I’ve got some time to look into this over the next few days. Hopefully I make some progress (if only to realise that the scope of the challenge is beyond me).

I have a couple of requests from my reading so far:

A script showing the current canonical use of a Bits#?

Looking through the API, there are a few deprecated methods. For example, the visual.Window class contains a deprecated bitsMode, whose entry now says

DEPRECATED in 1.80.02. Use BitsSharp class from pycrsltd instead.

However, I can’t see the pycrsltd library being loaded anywhere. Instead, there’s the hardware.crs.BitsSharp class within psychopy/hardware/crs/bits.py. The example for this class shows

        from psychopy import visual
        from psychopy.hardware import crs

        # we need to be rendering to framebuffer
        win = visual.Window([1024,768], useFBO=True)
        bits = crs.BitsSharp(win, mode='mono++')
        # You can continue using your window as normal and OpenGL shaders
        # will convert the output as needed

Is that the current canonical method for using Bits#?

Ultimately I’d like to emulate something like this for the VIEWPixx, but this week I think I’d be content to get a high-bitdepth greyscale out.

The VPIXX python library

The VPIXX software tools, available from here (may require a user account), contain a Python interface library, pypixxlib. This is currently at version 1.5 and seems quite full-featured in terms of hardware interface. The documentation for this is here.

Using their class-oriented API allows you to do things like this:

from pypixxlib.viewpixx import VIEWPixx3D
# viewpixx and VIEWPixx3D would need to be replaced by the appropriate devices.
my_device = VIEWPixx3D()  # Opens and initiates the device
my_device.setVideoMode('M16')  # Set the right video mode
my_device.updateRegisterCache()  # Update the device

which enables high-bitdepth greyscale mode on a VIEWPixx 3D. VPIXX write:

If you create an object that instantiates a class for a device you have connected, all possible functions attached to the object are guaranteed to work and to be applied on the correct device, assuming the operations are permitted.

That seems simple, but I expect it would then be difficult to connect this to PsychoPy’s Window class in a way it would understand (e.g. getting the LUT and gamma tables correct, and passing bits correctly in the special modes). I haven’t checked this yet (not in the lab right now) but I should have more info tomorrow.

There is also a lower-level wrapper module _libdpx. VPIXX write:

If you are programming for something more permissible or if you do not know which device will be used, you can use the _libdpx Wrapper Module, which does not guarantee the functions will work, but does guarantee all the functions will be called on the correct devices if you provide the appropriate commands.

These decisions aside, it seems to me that using the VPIXX pypixxlib is probably the best way to interface to the hardware. Then we can rely on VPIXX for maintaining low-level functionality.

A question here: how should I include a dependency to pypixxlib within PsychoPy? Obviously we don’t want this to be required for all users, but rather we would like the system to check for it if the VPIXX hardware is used. How should I implement that in an import statement?

MonitorCenter

Could someone check my high-level summary of how MonitorCenter works? My understanding is that the MonitorCenter GUI wraps the functionality in calibTools.py. A monitor object is created and saved, which can contain calibration data if a calibration is run. When an experiment script is run with a monitor name specified, the calibration data and other information are loaded for that monitor and gamma correction is applied from that.

Does this description differ in any important way for the Bits#? I have read a bit about the difficulty of finding the identity LUT but haven’t examined that carefully yet.

I’m basically hoping to start with a really simple calibration script that would allow me to test high-bitdepth modes, but I want to keep the monitorcenter in mind as an eventual goal for integration.

Hopefully more info later this week.

Looking through the API, there are a few deprecated methods. For example, the visual.Window class contains a deprecated bitsMode, whose entry now says

DEPRECATED in 1.80.02. Use BitsSharp class from pycrsltd instead.

However, I can’t see the pycrsltd library being loaded anywhere.

So, yes, the way this works had to change to make it possible to support other hardware. That means that, rather than the Window itself needing to know about all the boxes that might interact with it, Window just does its own thing but allows hooks (methods) to be overridden by other objects. For instance, with

win = Window( ... )
bits = BitsSharp(win, ... )

the idea is that bits goes and changes the Window so that it will operate correctly with this device. In the init of BitsSharp, it takes the Window method _prepareFBOrender and replaces it with its own (so that it can render the framebuffer using Mono++ mode etc.) and replaces win.gammaRamp with a linear ramp so that it can take over gamma control itself. Does that make sense?

So the Window no longer knows about CRS, it’s that CRS changes the necessary parts of the Window.

So you don’t really pass anything from either object to the other explicitly.
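That hook-replacement pattern can be sketched in miniature like this (Window and Device here are bare stand-ins, just to illustrate the monkey-patching; _prepareFBOrender is the real hook name from visual.Window, everything else is made up):

```python
class Window:
    """Stand-in for psychopy.visual.Window, reduced to the render hook."""
    def _prepareFBOrender(self):
        return "default FBO shader"

    def flip(self):
        # the hook is looked up on the instance each frame,
        # so a device can replace it after construction
        return self._prepareFBOrender()


class Device:
    """Stand-in for a hardware class like BitsSharp."""
    def __init__(self, win):
        self.win = win
        # take over the Window's hook with our own bound method
        win._prepareFBOrender = self._prepareFBOrender

    def _prepareFBOrender(self):
        return "device-specific shader (e.g. Mono++ packing)"


win = Window()
dev = Device(win)
print(win.flip())  # → device-specific shader (e.g. Mono++ packing)
```

The Window never imports or references the device; creating the device object is what rewires the Window.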

I expect that you’d most naturally create a ViewPixx class in PsychoPy that is a subclass of the one they provide (just as the BitsSharp class is a sub-class of SerialDevice) and where some methods need overriding so that they also trigger some action on the visual.Window as needed.

You’ll need to use settings from the ViewPixx to set things in the Window, and vice-versa. e.g. the choice of what shader program you apply when rendering the FBO into the actual back buffer will depend on what mode the ViewPixx is set to. So you could override the function setVideoMode() like this:

    def setVideoMode(self, videoMode):
        super(ViewPixx, self).setVideoMode(videoMode)  # set the mode on the ViewPixx

        # then also set up the Window to use this mode (changed FBO shader)
Conversely, some methods (like setBacklightIntensity) won’t need overriding because the window doesn’t need to behave differently.

A question here: how should I include a dependency to pypixxlib within PsychoPy? Obviously we don’t want this to be required for all users, but rather we would like the system to check for it if the VPIXX hardware is used. How should I implement that in an import statement?

Your device will be in psychopy/hardware/pypixx and pypixxlib will just be imported if the user does

from psychopy.hardware import pypixx

Thanks for the reply.

Yes, I think so. I think I will start by just mirroring the Bits code and seeing what breaks.

In terms of MonitorCenter: there’s just a single checkbox for Bits++. I guess you’d set up a different “monitor” for each mode you want to calibrate. Can the mode be specified in MonitorCenter?

OK, so in that case I’d just be relying on the user having correctly installed VPIXX’s pypixxlib? I could of course check for it in a try/except block.

The issue around the monitor needing a tick box for Bits++ was that, while running the calibration scripts, you want the same hardware configuration as when running experiments. When Bits++ was the only device we supported this was no problem, but I think we need to rethink how this is done now that we have more potential options. The issue is akin to the one in the code: having a useBits argument was only fine while there were very few options.

Yes, although we can add it to the Standalone so that most beginner users will have it. You could have a message that tells people how to install it though (is it available on pip?)

Not available on pip; installed through a .tar.gz file available on the VPIXX support website. There’s also no license file in the directory, so I’m not sure whether VPIXX would actually allow it to be wrapped up with Standalone.

Oh, I guess we should chat to them about that.

Hi Both - Where did you get to with vpixx support?

Hi, as you can probably tell this fell off my plate. No progress from my end, but I do know that VPIXX have been doing some development on the Python API of their library. One issue I had, last I looked into it, was to figure out what made most sense to implement in PsychoPy given that VPIXX are actively developing better Python support from their end (thus potentially making any work I did on it quickly obsolete). Perhaps the situation has stabilised now – it might be worth checking into again.