Holding down mouse runs through all Ratings


I’m working on a multiple-rating paradigm, to be run online. For each stimulus, participants answer four successive rating routines, each using the Rating component in Builder. The problem is that if a participant holds down the mouse button, they can run through all the ratings without properly answering any of them.

It seems like a really basic issue to come across - I’m surprised I didn’t see more discussion about it. I know one could turn on “Show Accept” in the Rating settings, but that slows down the pacing of the experiment.
I’ve found other forum posts where similar problems are fixed for Mouse components with the “New clicks only” checkbox and for Keyboard components with the “Discard previous” checkbox, but there seems to be no Builder solution for Sliders and Ratings.

The other solution would be to use code (e.g. the Button Up solution), but I haven’t been able to get a single piece of code from the forum to work (I’m completely new to Python). I imagine the solution should be a simple one, but if anyone with some patience could spell it out, that would be much appreciated.


Hi There,

Does the participant need to respond with the mouse, or can they use the keyboard? If they can use the keyboard, one solution is to check whether the keypress has a measured duration (indicating the key was lifted).
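If it helps, the core of that check can be sketched in plain Python. In a Builder code component you would get the same effect with `keys = kb.getKeys(waitRelease=True)`, where each returned key carries a `.duration`; the helper and event format below are just illustrative:

```python
# Sketch of the "key lift" check. In PsychoPy, keys returned by
# kb.getKeys(waitRelease=True) only appear once released, and each has
# a .duration; here we model events as plain dicts for illustration.

def released_keys(events):
    """Keep only events whose duration is known, i.e. the key was lifted."""
    return [e for e in events if e.get("duration") is not None]

# Example: "space" is still held down (no duration yet), "1" was released.
events = [{"name": "space", "duration": None},
          {"name": "1", "duration": 0.21}]
print(released_keys(events))  # only the released "1" survives
```

In a code component you would then end the routine only once this list is non-empty, so a held-down key carried over from a previous routine can't trigger a response.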

There is some more info on that in this post: Spacebar Release Duration Issue


Hey, thanks for the reply.

I’ll keep it in mind in case things change.
But we would really like to use the mouse. There should be a way to work around this problem for the Rating component, since the other Builder components have matching workarounds.

Please could you upload the .psyexp file of the experiment you are trying to run (with the corresponding .csv files)? I can take a closer look at the issue :slight_smile:

Sure, here’s a link to a .zip file with a small version of the whole directory.

You can see that while rating, if you hold the mouse button down and hover over a potential answer, it gets auto-submitted.


Thanks for sharing this. Whilst it might not be quite what you want, one solution could be to add a Mouse component to each routine (i.e. SAM1, SAM2, SAM3), untick “Force end of Routine” in the advanced options of your rating scale component, and have the Mouse component end the routine on “any click”. That way the Mouse component is only looking for new clicks. I would check that the data output is as you expect from that, though :slight_smile:
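For reference, the “new clicks only” behaviour is just edge detection on the button state: a click counts only when the button goes from up to down. A plain-Python sketch (the class name is made up; in a code component you would feed it `mouse.getPressed()[0]` on each frame):

```python
# Edge-detecting "new clicks only" logic. In a Builder code component,
# is_down would come from mouse.getPressed()[0] each frame.

class FreshClickDetector:
    def __init__(self):
        # Assume the button is held at routine start, so a click carried
        # over from the previous routine does not count as a new one.
        self.was_down = True

    def update(self, is_down):
        """Return True only on a transition from up to down."""
        fresh = is_down and not self.was_down
        self.was_down = is_down
        return fresh
```

Starting `was_down` as True is the key design choice: a participant who never releases the button can never generate a “fresh” click, which is exactly the behaviour you want here.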

Let me know how you get on,

Ah, that’s clever! It works, but unfortunately the data output starts having problems with the slider response: I get “None” as the rating response in the output files.

I figured the mouse component might be taking temporal priority over the slider component, so I tried “sync timing with screen refresh” to get more reliable output, but it didn’t help.

Might not be able to use a slider after all.
If any other thoughts come up, let me know.

Hmmm that is a strange one.
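One thing you could try before giving up on the slider: record the rating yourself in a code component’s End Routine tab, falling back to the marker position when `getRating()` comes back empty. Treat this as an untested sketch (`SAM1` and `thisExp` are the names Builder would generate in your experiment):

```python
# Fallback for the missing rating: if the routine ends via the mouse
# component before the slider commits a rating, record whatever value
# is under the marker instead. In a Builder End Routine tab this might
# read (untested sketch, names assumed):
#     rating = SAM1.getRating()
#     if rating is None:
#         rating = SAM1.markerPos
#     thisExp.addData('SAM1_rating', rating)

def resolve_rating(rating, marker_pos):
    """Prefer the committed rating; fall back to the marker position."""
    return rating if rating is not None else marker_pos
```

Note the explicit `is not None` check: a legitimate rating of 0 must not be treated as missing.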

An alternative workaround for the moment would be to present 5 image stimuli and make them clickable with the mouse. If you do that for your “SAM” routines and keep the rating scale for your final rating, I think that should work fairly smoothly.
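If you go the image route, PsychoPy’s `mouse.isPressedIn(stim)` does the click test for you. Purely to show what that test amounts to, here is a plain-Python sketch of the containment check for a centred rectangle (function name made up):

```python
# Minimal hit-test for a "clickable image": is the click position inside
# a rectangle of the given size centred on `centre`? In PsychoPy you
# would just call mouse.isPressedIn(image_stim) instead.

def clicked_in(pos, centre, size):
    x, y = pos
    cx, cy = centre
    w, h = size
    return abs(x - cx) <= w / 2 and abs(y - cy) <= h / 2
```

Pairing this with the “new clicks only” behaviour of the Mouse component means a held-down button still can’t sweep through the five images.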

Let me know how you get on,

Alright, I’ll see. Thanks again.