Use Wacom graphics tablet in experiment: pen initialization and getSamples

Hello,

(1) How can I correctly set up a pen device (e.g. with psychopy.iohub.launchHubServer, or is there another way that is less tied to HDF5 files)?

(2) What does pen.getSamples() return?

Background-Info:

I have a running experiment in which subjects are presented with images. I want to allow subjects to draw on the images with a Wacom tablet, and I'd like to save the coordinates of the drawing in a list or matrix.

(A) I was able to implement drawing with the mouse, where the drawing starts once the mouse button is clicked. With the pen and tablet I can move the mouse pointer within this code, but not draw.

(B) I am aware of this wintab demo code; it works on my laptop, but I can't figure out how to transfer the relevant parts to my experiment.

So far I have:

(1) For the setup

  import time
  from psychopy.iohub import launchHubServer

  def setup_iohub(win, sess_code=None):

      exp_code = 'wintab_evts_test'

      if sess_code is None:
          sess_code = 'S_{0}'.format(int(time.mktime(time.localtime())))
      
      # iohub configuration for the Pen device
      kwargs = {'experiment_code': exp_code,
              'session_code': sess_code,
              'wintab.Wintab': {'name': 'pen',
                              'mouse_simulation': {'enable': False,
                                                      'leave_region_timeout': 2.0
                                                      }
                              }
              }
  
      # Launch iohub server with the specified configuration
      return launchHubServer(window=win, **kwargs)
  
  io = setup_iohub(win)
  keyboard = io.devices.keyboard
  mouse = io.devices.mouse
  pen = io.devices.pen

(2) For the drawing

while continue_drawing:
    
    # Redraw the background image and the instruction text
    image_visual.draw()  
    instruction_visual.draw()  

    # Fetch the latest samples from the pen
    # Fetch the latest samples from the pen
    is_reporting = pen.reporting
    wtab_evts = pen.getSamples()
    if wtab_evts and is_reporting:
        pen_trace.updateFromEvents(wtab_evts)
        last_evt = wtab_evts[-1]
        pen_pos.updateFromEvent(last_evt)
    
    # Draw the current pen position and traces
    pen_pos.draw()
    pen_trace.draw()

    for evt in wtab_evts:
        if evt.pressure > 0:  # Only record when the pen is touching the tablet
            x, y = evt.x, evt.y
            pen_data.append((x, y))

    win.flip()  # Refresh the screen
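
For what it's worth, the pressure check in the loop above can be factored into a plain-Python helper that is easy to test without a tablet. `x`, `y` and `pressure` are the wintab sample attributes already used above; the helper name `record_touching` is my own:

```python
from collections import namedtuple

def record_touching(samples, pen_data):
    """Append (x, y) of every sample whose pressure > 0 to pen_data.

    `samples` is any iterable of objects exposing .x, .y and .pressure,
    mirroring the wintab sample events returned by pen.getSamples().
    """
    for evt in samples:
        if evt.pressure > 0:  # only record while the pen touches the tablet
            pen_data.append((evt.x, evt.y))
    return pen_data

# Stand-in for real wintab samples, just for a quick check
Sample = namedtuple('Sample', 'x y pressure')
trace = record_touching([Sample(0.1, 0.2, 512), Sample(0.3, 0.4, 0)], [])
print(trace)  # [(0.1, 0.2)]
```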

Hi Noracre,
I also want to record data using the tablet, the pen and iohub, and I have similar problems with point (B) of your background info. May I ask whether you have solved it? Thanks!

Hi, we decided not to use the wintab demo code.

We are now working with this, since it works for our purposes:

import numpy as np
from psychopy import visual, event, core

def pen_drawing(win, image_visual, instruction_visual1, instruction_visual2):
    """
    Allows a participant to draw on the screen over an image with real-time visual feedback.
    Returns the list of drawn coordinates.

    Args:
        win (visual.Window): The window in which the drawing is done.
        image_visual (visual.ImageStim): The background image over which drawing is done.
        instruction_visual1 (visual.TextStim): First instruction text, redrawn each frame.
        instruction_visual2 (visual.TextStim): Second instruction text, redrawn each frame.

    Returns:
        List of tuples: A list containing (x, y) coordinates of the drawing;
        (nan, nan) entries mark frames where the pen was lifted.
    """

    # Initialize pen data storage
    pen_data = []

    # Initially set a dummy vertex to avoid the empty-array issue.
    # closeShape=False draws an open path (ShapeStim closes the shape by default).
    pen_path = visual.ShapeStim(win, lineWidth=3, lineColor='red',
                                vertices=[(0, 0)], closeShape=False)  # TODO adapt lineWidth if necessary

    # Initialize a mouse object for drawing (the pen is treated as a mouse here)
    mouse = event.Mouse(win=win)
    mouse.setVisible(False)

    # Initialize cursor
    cursor = visual.Circle(win, radius=0.002, fillColor='red', lineColor='red')  # TODO adapt radius if necessary

    # Allow drawing until spacebar (or escape) is pressed
    continue_drawing = True
    while continue_drawing:

        # Redraw the background image and the instruction text
        image_visual.draw()
        instruction_visual1.draw()
        instruction_visual2.draw()

        # Get the mouse position and button press status to draw
        buttons, times = mouse.getPressed(getTime=True)
        x, y = mouse.getPos()
        if buttons[0]:  # Left mouse button simulating pen contact
            pen_data.append((x, y))  # Store the pen data
        else:
            pen_data.append((np.nan, np.nan))  # Gap marker: pen lifted

        # Update cursor position
        cursor.pos = (x, y)
        cursor.draw()

        pen_path.vertices = pen_data  # Update path vertices
        pen_path.draw()  # Draw the current path

        win.flip()  # Refresh the screen

        # Check for key presses
        keys = event.getKeys(keyList=['space', 'escape', 'x'])
        if 'space' in keys:
            continue_drawing = False
        elif 'escape' in keys:
            core.quit()  # TODO nothing is saved here; should save the drawing first
        if 'x' in keys:
            # Delete the drawing but allow the participant to continue drawing
            pen_data = []
            pen_path.vertices = [(0, 0)]
            pen_path.draw()
            win.flip()

    return pen_data
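
Since the loop stores (np.nan, np.nan) whenever the button is up, the returned pen_data can later be split back into individual strokes. A minimal sketch; the helper name `split_strokes` is my own, not part of the function above:

```python
import math

def split_strokes(pen_data):
    """Split a flat (x, y) list into strokes, using (nan, nan) rows as separators."""
    strokes, current = [], []
    for x, y in pen_data:
        if isinstance(x, float) and math.isnan(x):
            if current:          # close the stroke in progress
                strokes.append(current)
                current = []
        else:
            current.append((x, y))
    if current:                  # flush the final stroke
        strokes.append(current)
    return strokes

pts = [(0, 0), (1, 1), (float('nan'), float('nan')), (2, 2)]
print(split_strokes(pts))  # [[(0, 0), (1, 1)], [(2, 2)]]
```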

Thanks for sharing, this is fascinating. Glad you solved the problem!
Unfortunately, I need the pressure data, so I still need to figure out how to make PsychoPy recognise the pen as a pen instead of a mouse.