
Animation of a long text

I am programming a reading task in which I have to present a long text to an observer. How can I make a program that animates the text from the bottom to the top of a screen region, moving it smoothly (like a text reader)? The speed of the animation must also be adjustable by the observer on the fly.

I am thinking of using TextStim because I have to use a non-monospace font. I cannot put all of the text into one long TextStim and move its position, because the text is too long (several book pages). I could use one TextStim per line of text and recycle the stimuli on the fly as they fall off the screen region, but updating a TextStim's text takes 20-30 ms, which is long enough that I can notice a jump in the animation.

Could you please advise me on the best approach to program this?

You probably want to have three text stimuli. Each has about a screen-full of text on it. Prepare the first two in advance. The third gets its text assigned when the second one comes into view. Once the first is scrolled off the top of the screen, it becomes the “third” one, ready to scroll in from the bottom.
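The rotation logic described above can be sketched independently of PsychoPy (plain Python; `paginate` and `PagePool` are illustrative names, with the pool slots standing in for the three TextStim objects):

```python
import textwrap

def paginate(text, chars_per_row=40, rows_per_screen=20):
    """Split a long text into screen-sized pages of wrapped lines."""
    lines = textwrap.wrap(text, width=chars_per_row)
    return ['\n'.join(lines[i:i + rows_per_screen])
            for i in range(0, len(lines), rows_per_screen)]

class PagePool:
    """Rotate a fixed pool of slots (stand-ins for TextStim objects)
    through the pages: when a slot scrolls off the top, it is refilled
    with the next unseen page and becomes the one waiting at the bottom."""
    def __init__(self, pages, pool_size=3):
        self.pages = pages
        self.next_page = pool_size  # first page not yet assigned to a slot
        self.slots = list(range(min(pool_size, len(pages))))

    def recycle(self, slot):
        """Slot `slot` scrolled off screen; give it the next page (or '')."""
        if self.next_page < len(self.pages):
            self.slots[slot] = self.next_page
            self.next_page += 1
            return self.pages[self.slots[slot]]
        return ''
```

With `pool_size=3` only one slot ever needs new text per page turn, so the (slow) text update happens while that slot is still off-screen.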

Updating text stimuli is slower than updating other stimuli, but I don't think it should take 20-30 ms.

Are you pre-creating the text stimuli and just updating their contents using .setText()? If it is taking this long, I wonder whether you are creating the stimulus entirely from scratch rather than just updating its content.
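A minimal way to check where the time goes is to time the two operations separately. This is a plain-Python sketch (no PsychoPy dependency); in a real session you would pass, for example, `lambda: stim.setText('x' * 40)` versus `lambda: visual.TextStim(win, text='x' * 40)`:

```python
import time

def mean_call_time(fn, repeats=50):
    """Mean wall-clock seconds per call of fn()."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - t0) / repeats
```

If creating from scratch is an order of magnitude slower than `.setText()`, that would explain the 20-30 ms figure.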

Thank you for the reply, Michael. My program works as you describe, except that it operates line by line. I create the text stimuli only once and then just update their content. I have found that with pyglet it takes approximately 20 ms to update a text stimulus of 40 characters, but when I switch to pygame it takes 1 ms or less. My computer has an i7 CPU and a GeForce GTX 960M.

This is the part of code that does the text animation:

#-*- coding: utf-8 -*-
from __future__ import division
from psychopy import visual
from psychopy import core, event, monitors
import textwrap

class Param:
    pass

params = Param()
params.FONT = 'Georgia'
params.ROW_HEIGHT = 0.4       # cm
params.ROW_WIDTH = 10         # cm
params.FG_COLOR = '#FF0000'
params.BG_COLOR = '#000000'
params.NUM_CHARS_ROW = 40
params.NUM_ROWS = 5
params.SPEED = 0.07           # cm per second
params.SPEED_STEP = 0.03      # cm per second

def prepare_text():
    text = 'Hello, this is a sample text. ' * 100
    return textwrap.wrap(text, width=params.NUM_CHARS_ROW)

def create_rows():
    rows = []
    for i in range(params.NUM_ROWS + 1):
        row = visual.TextStim(win, color=params.FG_COLOR, font=params.FONT,
                              antialias=True, height=params.ROW_HEIGHT,
                              wrapWidth=params.ROW_WIDTH, alignHoriz='left',
                              text='')
        row.pos = (-params.ROW_WIDTH / 2,
                   (params.NUM_ROWS + 1) * params.ROW_HEIGHT / 2 - i * params.ROW_HEIGHT)
        rows.append(row)
    return rows

def set_aperture():
    # rectangular aperture spanning the full monitor width and NUM_ROWS rows
    half_w = mon.getWidth() / 2
    half_h = params.NUM_ROWS * params.ROW_HEIGHT / 2
    vertices = [(-half_w, half_h), (half_w, half_h),
                (half_w, -half_h), (-half_w, -half_h)]
    return visual.Aperture(win, shape=vertices)
    
def move_text(ts, delta_t):
    global current_row

    for i, t in enumerate(ts):
        t.pos += (0, delta_t * params.SPEED)

        # once a row has scrolled past the top edge, recycle it to the bottom
        if t.pos[1] > params.NUM_ROWS * params.ROW_HEIGHT / 2 + params.ROW_HEIGHT / 2:
            if current_row < len(split_text):
                c1 = clock.getTime()    # measure time
                t.text = split_text[current_row]
                print(clock.getTime() - c1)  # measure time
                current_row += 1
            else:
                t.text = ''
            # place the recycled row one line below the row preceding it
            # (for i == 0, ts[i - 1] is ts[-1], i.e. the last row)
            t.pos = (-params.ROW_WIDTH / 2, ts[i - 1].pos[1] - params.ROW_HEIGHT)

########################################################

split_text = prepare_text()

mon = monitors.Monitor('testMonitor')
win = visual.Window([1920, 1080], monitor='testMonitor', color=params.BG_COLOR, allowStencil=True, winType='pyglet', units='cm')

rows = create_rows()
current_row = 0
aperture = set_aperture()

clock = core.Clock()

done = False
t_sim = t_sim_old = win.flip()
while not done:
    move_text(rows, t_sim - t_sim_old)

    for r in rows:
        r.draw()
        
    t_sim_old = t_sim
    t_sim = win.flip()
    
    for thisKey in event.getKeys():
        if thisKey == 'right':
            params.SPEED += params.SPEED_STEP
        elif thisKey == 'left':
            params.SPEED -= params.SPEED_STEP
            if params.SPEED <= 0:
                params.SPEED += params.SPEED_STEP
        elif thisKey in ['q', 'escape']:
            done = True
    event.clearEvents()

win.close()
core.quit()

So, the problem can be solved by using pygame. The disadvantage is that I cannot centre the left-justified text on the screen, because boundingBox only returns a bounding box when I use pyglet.
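Under pyglet, where boundingBox is available, centring the text block reduces to simple arithmetic: give every row the same left edge, chosen so the widest row is centred. This is a hypothetical helper, assuming the bounding-box widths are in the same units as the window:

```python
def centered_left_edge(line_widths, center_x=0.0):
    """x position for the shared left edge of left-aligned rows so that
    the widest row is horizontally centred on center_x."""
    return center_x - max(line_widths) / 2.0
```

Each row's `TextStim.boundingBox[0]` would supply a width; the rows then stay left-justified relative to each other while the block as a whole is centred.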

Another problem, which I cannot solve, is that when the animation speed is low (the example's default speed) some "wave" artefacts are visible. You can speed up or slow down the animation with the right/left arrow keys. At higher speeds there is no noticeable "wave" artefact.
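One plausible cause of a low-speed "wave" is sub-pixel vertical positions interacting with antialiased glyph rendering: at low speed, each frame rasterises the text at a slightly different sub-pixel phase. A sketch of snapping the y coordinate to whole pixels before drawing (an assumption to test, not a confirmed fix; `px_per_cm` would come from the monitor calibration):

```python
def snap_to_pixel(y_cm, px_per_cm):
    """Round a vertical position given in cm to the nearest whole pixel,
    returning it in cm, so glyphs are rasterised at a constant phase."""
    return round(y_cm * px_per_cm) / px_per_cm
```

The smooth position would still advance by `delta_t * SPEED` each frame; only the drawn position is snapped.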

@jon just happens to be working on a new, much improved text stimulus, which will reduce the reliance on pyglet (including being much faster).

In the interim, you could probably create all of your multiple text stimuli in advance and then draw and move them as required, with no appreciable delay. You could also render your text stimuli to images using win.getMovieFrame() and win.saveMovieFrames() (http://www.psychopy.org/api/visual/window.html#psychopy.visual.Window).

Those images saved to disk can be used instead of text stimuli, and as long as the image stimuli are created before they need to be displayed, might get around whatever this “wave” artefact is.
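If the pages are pre-rendered to image files, the remaining work is bookkeeping: deciding which page images must be loaded for a given scroll offset. A sketch of that logic (illustrative names; `scroll_y` and `page_height` in the same units):

```python
def pages_in_view(scroll_y, page_height, num_pages, margin=1):
    """Indices of pre-rendered page images needed at this scroll offset,
    including `margin` extra pages on each side so the next image is
    already loaded before it scrolls into view."""
    first = int(scroll_y // page_height)     # topmost page touching the view
    lo = max(0, first - margin)
    hi = min(num_pages, first + margin + 2)  # +2: the view may straddle two pages
    return list(range(lo, hi))
```

Loading an ImageStim for each index in this window, and releasing the ones that leave it, keeps only a handful of images in memory regardless of text length.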