
Are ROI's "looks" gazes?

OS (e.g. Win10): Windows 10
PsychoPy version (e.g. 1.84.x): v2021.2.3
Standard Standalone? (y/n) If not then what?: y
What are you trying to achieve?: I’m trying to run an eye-tracking experiment using Builder. I’m at the point where data is coming out, which is great. However, I’m perplexed: what are “looks”? The ROI routine outputs a number of looks, a time indicating the start of each look, and a time indicating its end. I would like to know what those looks are. They seem to last longer than a frame, so a look is not just one data point from the eye tracker. Is it calculated based on velocity? Because that is what I would actually want.

What did you try to make it work?: Not knowing what these looks were, I tried adding my own velocity calculation in a code component and applying a threshold to it: if the velocity is low enough, the sample is part of a gaze; otherwise, it is not. When it is, I record the data points I want in a list, such as the location of the gaze and the character that was looked at, and then add the list to the data files with addData(). So I do my own parsing of the data into gazes, without even knowing whether “looks” is already what I’m looking for.
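For reference, the approach described here could be sketched roughly as follows. This is a toy illustration, not the poster’s actual code: the function names, the input format, and the threshold value are all placeholders.

```python
# Toy sketch of velocity-threshold gaze classification (I-VT style).
# Assumes gaze positions are already in degrees of visual angle and
# sampled at a fixed interval; these are illustrative assumptions.
import math

VELOCITY_THRESHOLD = 100.0  # deg/s; a commonly cited saccade threshold


def sample_velocity(p0, p1, dt):
    """Velocity between two consecutive gaze positions (deg) dt seconds apart."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / dt


def classify(samples, dt):
    """samples: list of (x, y) gaze positions in degrees, one every dt seconds.
    Returns a parallel list of booleans: True = slow sample (part of a gaze)."""
    labels = [True]  # first sample has no predecessor; assume it is a gaze
    for p0, p1 in zip(samples, samples[1:]):
        labels.append(sample_velocity(p0, p1, dt) < VELOCITY_THRESHOLD)
    return labels
```

With a 250 Hz tracker, dt would be 1/250 s; a 0.1-degree step between samples is then 25 deg/s (below threshold), while a 5-degree jump is well above it.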

What specifically went wrong when you tried that?:
It’s working, in the sense that it adds the data to the file. However, I’m not sure my velocity calculation makes sense: every velocity calculated my way comes out between 0 and 1.5, while the recommended threshold for detecting a gaze that I’ve read in an article is 100. I know that if I want help with my code and calculations, I need to give more information. My question here is more basic, though: do I need to keep struggling with my spaghetti code, or is there already a solution, in Builder or elsewhere, for recording gazes in the data files? Is that solution already implemented as the ROI’s “looks”, and I’ve been too much of a novice to notice?

Somebody with more expertise than me (an absolute novice at this) could probably offer a better explanation, but checking the ROI component code, it seems that looks are counted from continuous time spent looking at the ROI: for each 100 milliseconds of uninterrupted looking, a look is added.

When you have more than a single look, you can see that the start-time and end-time lists contain the same number of entries (those are the timestamps at which gaze started or stopped being inside the ROI).

Hopefully this comment helps clarify what looks are.



That is correct. The only thing I would add is that the “Min Look Time” setting is what defines a look’s duration: if “Min Look Time” were 0.25 seconds, then a look would be added for each 250 milliseconds of uninterrupted looking.
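One plausible reading of the rule described in these two replies can be sketched as a toy function. To be clear, this is not the actual ROI source code, just an illustration of counting looks from per-frame in/out samples, where a run of frames inside the ROI counts as a look once it lasts at least the minimum look time:

```python
# Toy sketch of look counting, assuming one gaze-inside-ROI boolean per
# screen refresh. Not the real ROI implementation.
def count_looks(inside_per_frame, frame_dt, min_look_time=0.1):
    """inside_per_frame: list of booleans, one per frame.
    frame_dt: seconds per frame (e.g. 1/60 at 60 Hz).
    Returns (numLooks, timesOn, timesOff), times in seconds."""
    times_on, times_off = [], []
    run_start = None
    for i, inside in enumerate(inside_per_frame):
        t = i * frame_dt
        if inside and run_start is None:
            run_start = t  # gaze just entered the ROI
        elif not inside and run_start is not None:
            if t - run_start >= min_look_time:  # long enough to count
                times_on.append(run_start)
                times_off.append(t)
            run_start = None
    if run_start is not None:  # run still open when the routine ends
        end = len(inside_per_frame) * frame_dt
        if end - run_start >= min_look_time:
            times_on.append(run_start)
            times_off.append(end)
    return len(times_on), times_on, times_off
```

Note that this version also reproduces the point made earlier in the thread: the timesOn and timesOff lists always come out with the same number of entries as there are looks.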


To do calculations based on sample velocity, continue accessing the eye samples using custom code in your experiment.

The ROI component only reads the eye position once per screen refresh (so likely at 60 Hz), so it cannot be used to access every eye sample the way you need.


Now I know a little more about what looks are, and I know that they are not what I need.

The code I started writing to calculate velocity was inserted in the isLook/wasLook loop, so it must be changed completely. I will need help creating code that calculates velocity while the experiment is running, but outside of the frame loop, because, if I understand correctly, that loop runs once per frame at 60 Hz, while my eye tracker samples at 250 Hz.

But that is for a later question that I will formulate more clearly before posting it.

For now, thank you both for your help.