
Trial/loop error: impossible human responses recorded in output

URL of experiment: Pavlovia

Description of the problem: I have a loop that is set to run for 1 minute and then break. During this loop, there are two routines: one where participants use a mouse to click on valid text options to match a target, and another routine that shows feedback for 2 seconds. I have the total number of repetitions for the loop set to 180, since participants would have to respond ridiculously fast to go through all the trials (and testing never came close to that number). However, one participant, on the last run of this loop, seemingly completed about 10 trials normally but then somehow completed all possible trials for that loop. There is no mouse click data for which option was clicked, whether they were correct, etc. The times between trials are on the scale of tenths of a second, so it seems the feedback routine wasn’t even displayed.

The four previous iterations of this loop all look normal, as does the rest of the participant’s data. This makes me think there was some kind of issue on that last loop or with the mouse response. I have data from 23 participants and only noticed this in one data set. Has anyone else had this problem or something similar? Is there a solution, or does this seem like a one-time glitch?

Hey @kylie,

I haven’t heard of cases like these, so if it’s a one-off, I’m inclined to assume it’s a fluke. However, if it happens again, feel free to give a shout!

Best, Thomas

Now that I’ve had a chance to collect a little more data, I’ve noticed it in at least one other participant (I haven’t checked through all the data yet). But I haven’t had time to really look at the issue yet. I’ll be sure to update here if I find additional information or figure out what happened.

Thanks for replying!

In that case, I’d like to take a peek at your experiment if that’s OK. Could you share your GitLab project with me? (My username is tpronk.)

@thomas_pronk Done. The issue comes up during the experimental Stroop portion (which I abbreviated as str), and so far the two participants I noticed the issue in were both run in group B. I haven’t been able to sort through the code and/or data files to determine whether there is a coding error, whether the fact that both are in group B is mere coincidence, whether there is a bug, or whether something else is happening.

That’s quite a complicated experiment you’ve got there! I took a look at the code components in the str routines. I notice some timers (strTimeLeft and conTimer) that, if they reach a certain limit, trigger routines to finish, etc. Here is a guess: something makes those timers fire when they shouldn’t (maybe participants are doing nothing for a long time at some point), causing a whole bunch of routines to finish on their first frame, which is then registered as those routines finishing without any response being given?
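To make that guess concrete, here’s a minimal hypothetical sketch (the function name and signature are mine, not from the experiment) of the kind of timer-gated check an Each Frame code component typically performs:

```javascript
// Hypothetical sketch (not the actual experiment code) of a check driven by a
// timer like strTimeLeft: once the remaining time is used up, the routine ends
// immediately. If the remaining-time variable were ever corrupted to 0 or
// below, every subsequent routine would end on its first frame, with no
// response recorded.
function shouldContinueRoutine(timeLeft, t) {
  // timeLeft: seconds remaining in the block; t: seconds elapsed in this routine
  return t < timeLeft;
}
```

With a corrupted `timeLeft` of 0, even the very first frame (t ≈ 0.016 s) fails the check, which matches the pattern of trials ending in tenths of a second.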

@thomas_pronk that seems in line with the information I’m seeing. Is there a potential fix for that? Group B completes this portion self-paced, so I don’t have a specific trial time out for them. But I do need the Stroop task to alternate with the MIST task (the number one), ending with the MIST, for no more than 10 minutes.

Group A does have a trial time out time.

Any idea what the routines ending with no participant input would look like? Or is there a way to replicate it?

Not sure what routines ending without input would look like. I guess if you set those timers really short, you might be able to reproduce it? And if I may speculate a bit more: perhaps there is some spot in your experiment, like a feedback routine, where your participants might have decided to go make a sandwich, and upon returning, they go to the next routine and trigger all kinds of timeouts? If this could be the case, tracking how long each individual routine took could help (you’d have some routine that took a couple of minutes).
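As a sketch of that tracking idea (the names `makeRoutineTimer` and `nowSeconds` are illustrative assumptions, not PsychoJS API; `experiment` stands in for any object with an `addData(name, value)` method):

```javascript
// Illustrative sketch of per-routine duration logging, so an unusually long
// routine (e.g. a sandwich break) would stand out in the data file.
function makeRoutineTimer(nowSeconds) {
  let start = nowSeconds();
  return {
    // Call at the start of each routine
    reset() { start = nowSeconds(); },
    // Call at the end of each routine to record how long it took
    logDuration(experiment, routineName) {
      experiment.addData(routineName + "Duration", nowSeconds() - start);
    },
  };
}
```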

@thomas_pronk I could try, but the times are incredibly short.

I thought about participants walking away, so I did include an internal timer for my study that outputs the end of every routine; I think it’s called “expTime” or something similar. When I look at one of the participants, there doesn’t seem to be any sandwich making (or other long breaks): the longest routine was 6.49 seconds and was not immediately followed by the incredibly short routines. There was at least one trial after each of the longest routines for that block.

I did notice that the routine preceding the short routines had a duration of more than 1 second but less than 2, and this is the fastest non-impossible routine for the block from that participant. Perhaps this fast RT is interacting with how I have the feedback routine set up? I’ll have to check after work, but I think the feedback routine is set to 2 seconds for Stroop…

Generally speaking, we’ve got a tricky case here: the logic/code is too complex for me to analyze simply by looking at it, and the issue only occurs occasionally. Another thing that could help is interviewing the participants who reported the error, to see if there is some pattern there. I wrote a little guide on that: https://thomaspronk.com/pdf/Tech_Support_and_Bug_Report_Guidelines%20v1.3.pdf

@thomas_pronk Yes, it is rather complex. However, I am running this completely online using mTurk, and none of my participants have reported any errors due to this. I’ve seen the guide before but wasn’t sure when to use it; I’m guessing now would be the time to replicate the issue and send what it says?

Before I do that, though: I was able to replicate the issue simply by clicking incredibly fast during that section. But, on the participant side, nothing looks wrong. I think I have narrowed it down to this portion of code here:

    if ((expInfo["group"] === "B")) {
        strTr = (strTr - (t + 2));
        if ((strTr < 0)) {
            strTr = 0;
            StrFd = 0;
        }
        psychoJS.experiment.addData("strTr", strTr);
        strFBmsg = "Response Recorded";
    }

And for some reason, when t is less than 2, it’s causing the issue. I’m not entirely sure why, since strTr is a longer time period (say, up to 58 seconds) and the code is supposed to subtract out the current trial so the next trial (if not responded to) will time out when it is supposed to.

Am I missing some kind of very specific timing related math/code issue? For instance, do I need to use “2.0” instead of “2” because it doesn’t typecast the variables correctly and that’s what is causing the error?

Thanks!

Hmmm… I tried something out in the browser console; the expression below gives the result 36, which is a bit weird:

    58 - ("2" + 2)

This is what happens:

    "2" + 2   // evaluates to the string "22" (string concatenation)
    58 - "22" // the string is coerced back to a number, evaluating to 36

So there could be some typing issue going on? One workaround could be:

    58 - (Number("2") + 2)

I don’t know why, but the solution ended up being:

    strTr = (strTr - (Number(t) + Number(2)))

So both had to be typecast (?) to numbers for it to work.
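An aside on that fix (hedged; this assumes `t` was arriving as a string): in JavaScript, only the string operand actually needs the cast, since `Number(2)` on a numeric literal is a no-op, so `Number(t)` alone would suffice:

```javascript
// Demonstration of the coercion behind the bug, with a hypothetical
// string-valued t (an assumption for illustration). Only Number(t) is
// strictly required; Number(2) changes nothing.
const strTr = 58;
const t = "1.5";

// Buggy: "1.5" + 2 concatenates to "1.52", then 58 - 1.52 = 56.48
const buggy = strTr - (t + 2);
// Fixed: 1.5 + 2 = 3.5, then 58 - 3.5 = 54.5
const fixed = strTr - (Number(t) + 2);
```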

Also, the error was caused by participants rapidly clicking through; thorough testing confirmed that was the cause, though I’m still not sure why the feedback routine was then skipped entirely. While the fix doesn’t prevent participants from clicking through, it DOES force them to see the feedback routine and significantly decreases the number of trials per block. Since the program is acting as expected now, I’m calling this bug fixed.
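A minimal sketch of the kind of gate described (the 2-second duration comes from the thread; the function itself is a hypothetical stand-in for an Each Frame code component, not code from the experiment):

```javascript
// Keep the feedback routine on screen for a minimum duration, so rapid
// clicking cannot skip it. t is the time elapsed in the routine (seconds).
const FEEDBACK_SECONDS = 2;
function feedbackShouldContinue(t) {
  return t < FEEDBACK_SECONDS; // ignore any attempt to end the routine early
}
```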
