Find screen location of a word in textbox2 stimulus

Dear all,

I display pages of a text to my participants using textbox2 and collect their gaze position using an eyetracker. Because I want to present another stimulus whenever a participant looks at a specific word on each page (the word will be different on each page), I need to find that word’s location on the screen.

Assuming that the text is stored in a string variable, I thought I could simply sum the widths of the individual words as rendered on the screen until I reach the target word (accounting for line breaks). But I can’t seem to find a way to get the rendered width of a word. Another idea was to use a monospace font and simply multiply the number of letters and spaces before the target word by the rendered width of a single letter. But again, I can’t seem to find out the rendered width of a letter.
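The monospace idea can be sketched in plain Python. This is only a sketch under stated assumptions: a truly fixed-width font, simple word wrapping, and hypothetical `char_w`, `line_h`, and `chars_per_line` values that you would measure once for your own font and size.

```python
# Sketch of the monospace idea: with a fixed-width font every glyph has the
# same rendered width, so a word's on-screen position follows from counting
# characters. char_w, line_h, and chars_per_line are hypothetical values
# you would measure once for your font/size.

def word_position(text, target_index, char_w=12, line_h=40, chars_per_line=60):
    """Return (x, y) in pixels of the target word's left edge, relative to
    the text's top-left corner, assuming wrapping at chars_per_line."""
    words = text.split()
    col = 0  # current column, in characters
    row = 0  # current line number
    for i, word in enumerate(words):
        if col and col + len(word) > chars_per_line:
            col = 0        # word doesn't fit: wrap to the next line
            row += 1
        if i == target_index:
            return (col * char_w, row * line_h)
        col += len(word) + 1  # advance past the word and one space
    raise ValueError("target_index out of range")
```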

Anyone having a hint for me?


Try using JavaScript to dynamically measure word positions: create hidden elements with the text’s font properties and use `getBoundingClientRect()` to get their dimensions. This method can help synchronize participant gaze with specific words on each page effectively.

Hi Sebastian,

Thanks for your suggestion. So, do you mean I should convert everything to an online study using JavaScript? Or is it possible to include JavaScript elements in a local study?



Is this something you think could work in PsychoPy? Eyetracking is generally done locally.

Personally I would use a fixed width font and calculate the position – I’ve done this previously for a dictation task.

I actually managed to create a demo by summing the bounding boxes of individual words until I reach the target string (I take line breaks into account by checking whether the horizontal sum exceeds the wrap width). The problem is that I only managed to do this using visual.TextStim, as it has the convenient boundingBox property. However, I need double line spacing for the experiment, and if I’m not mistaken I need visual.TextBox2 for this. But I don’t know how to obtain bounding boxes of visual.TextBox2 stimuli.
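The wrap-aware summation described above can be sketched independently of the stimulus class if the per-word width measurement is passed in as a callable. This is a sketch, not the poster's actual demo; in PsychoPy you might supply something like `measure = lambda w: visual.TextStim(win, text=w).boundingBox[0]` (hypothetical, adapt font and units to your setup), and any other width source works the same way.

```python
# Wrap-aware summation of per-word widths. `measure` is any callable that
# returns a word's rendered width in pixels; `space_w` is the rendered
# width of one space, `line_h` the vertical distance between lines.

def locate_word(words, target_index, measure, wrap_width, space_w, line_h):
    """Return (x, y) of the target word's left edge, relative to the
    text's top-left corner."""
    x, y = 0.0, 0.0
    for i, word in enumerate(words):
        w = measure(word)
        if x > 0 and x + w > wrap_width:
            x, y = 0.0, y + line_h   # horizontal sum exceeded wrap width: break line
        if i == target_index:
            return (x, y)
        x += w + space_w
    raise ValueError("target_index out of range")
```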

Yes, I think this could work in PsychoPy.

You’re welcome! You can do either: convert everything to an online study using JavaScript, or include JavaScript elements in a local study by running the files on your computer through a web browser. Let me know if you need more details!

In the end, I chose to stick with a Python implementation using visual.TextStim to display the text. Because I needed double spacing between lines, I implemented a function that estimates where the line breaks would fall based on the bounding boxes of the individual words (plus whitespace) and inserts ‘\n\n’ at those locations. The function also sums up the bounding boxes up to the target phrase for the gaze-contingent part and returns that position as well.
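A minimal sketch of that double-spacing step might look as follows. It is not the poster's code: `measure` is again a hypothetical callable returning a word's rendered width in pixels, and the function assumes simple greedy word wrapping.

```python
def double_space(text, measure, wrap_width, space_w):
    """Insert '\n\n' where the text would wrap, so that rendering the
    result without further wrapping yields double-spaced lines.
    `measure` returns a word's rendered width in pixels."""
    lines, current, x = [], [], 0.0
    for word in text.split():
        w = measure(word)
        if current and x + w > wrap_width:
            lines.append(' '.join(current))  # close the current line
            current, x = [], 0.0
        current.append(word)
        x += w + space_w
    if current:
        lines.append(' '.join(current))
    return '\n\n'.join(lines)
```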

This might not be the smartest implementation, but it seems to work so far. If anyone is interested, please DM me.