Criterion-based learning with item drop-off

OS:
Ubuntu

PsychoPy version:
v1.83.04

What are you trying to achieve?:
I am trying to figure out how to create a criterion-based test-feedback phase for my experiment, in which participants are asked to report the right-hand members of previously studied word pairs when cued with the corresponding left-hand members.
The criterion would be just 1 ‘hit’, so that after the first correct response an item is considered learned.
Learned items should then be progressively dropped from the trial list as the participant works through the test-feedback phase, so that only non-learned items are presented again.
To be clear, non-learned items should not be re-presented immediately after a wrong answer, but only on subsequent cycles of the same trial list.
Moreover, the stimuli should be presented in sequential order.
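In rough outline, the logic I am after looks something like this (a minimal plain-Python sketch, not PsychoPy Builder code; the word pairs are placeholders and get_response() is a stand-in for the actual test routine):

```python
def get_response(cue):
    # Stand-in for the PsychoPy routine: show the cue and collect a typed answer.
    return input(f"{cue} - ? ").strip()

pairs = [("cueA", "targetA"), ("cueB", "targetB"), ("cueC", "targetC")]
criterion = 1  # one correct recall marks a pair as learned

remaining = list(pairs)               # pairs still to be learned
scores = {cue: 0 for cue, _ in pairs}

while remaining:
    next_cycle = []
    for cue, target in remaining:     # sequential order within each cycle
        if get_response(cue) == target:
            scores[cue] += 1
        if scores[cue] < criterion:
            next_cycle.append((cue, target))  # re-present on the next cycle only
    remaining = next_cycle            # learned items are dropped here
```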

What did you try to make it work?:
I tried to use the strategies and scripts proposed in these quite insightful threads:

https://discourse.psychopy.org/t/criterion-learning-conditional-mapping

https://discourse.psychopy.org/t/dropping-items-from-list-based-on-performance

In particular, I downloaded and tested ‘criterion2.psyexp’ as attached in the first thread above, except that I halved the number of stimuli (from six to three) for the sake of simplicity. The resulting trial list was therefore composed of just three word pairs: ‘skirt - blade’, ‘ant - phone’, and ‘graph - rain’. I kept the learning criterion at two and maintained the feedback routine.
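For reference, the reduced trial list amounts to something like this (the column names here are only my guess at a typical cue/target layout; the actual headers are whatever is in innerConditions.xlsx):

```
cue     target
skirt   blade
ant     phone
graph   rain
```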

What specifically went wrong when you tried that?:
When I ran the script myself, the following happened: I responded correctly to the first two word pairs in the first two rounds of practice (thus reaching the learning criterion for those pairs) and always gave wrong answers to the third word pair. Nonetheless, the experiment kept presenting the first two word pairs on the next cycle of the trial list. Moreover, during these additional presentations of the already learned word pairs, the feedback warned me that the expected response was now the one belonging to the third word pair (i.e., the last in the trial list). I did not change anything in the script except the number of items in the trial list, which doesn't seem to be hardcoded anywhere, and I could not figure out why the approach used in that script did not work for me when it worked fine for other users.

Great, thanks for starting a new thread. One more thing, please: could you post the criterion2 experiment and the conditions file(s) you're using, so we know we're working on the same thing?

Right, so I uploaded the experiment/conditions files, as well as the results (.csv) from my last simulation.
The feedback's behavior becomes quite erratic during trials already marked as ‘LEARNED’, although this cannot be seen in the output because the ‘LEARNED’ label overrides the detected accuracy.

_criterion2_2017_Mar_03_1703.csv (3.6 KB)
criterion2.psyexp (11.7 KB)
innerConditions.xlsx (4.8 KB)

Well, I remember not liking the way keyboard input was handled in that experiment, so I combined this with something else I mocked up for another question last week. In the process of modifying and cleaning up that code, the problem you were having seems to have gone away, but please do check my work. Note that the code is very different now, but I think it's better, so hopefully it works for you.

There are some extra features you may or may not need: in particular, this allows the experimenter to change the “keyboard layout”, allows for accented characters, and gives you the choice of how a key name is displayed. For example, on my keyboard “space” is the name of the key, so if I wanted to type “Hey there” using the version of the experiment first uploaded here, it would show as “Heyspacethere”.
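Roughly, the key-name handling works like this (a simplified sketch, not the exact code in criterion3.psyexp; the dictionary contents and the helper are my own illustration, and shift/capitalisation handling is left out):

```python
# PsychoPy reports key names such as 'space', 'comma' or 'a'; without a
# mapping, typed text ends up as "Heyspacethere" instead of "Hey there".
display_name = {
    'space': ' ',
    'comma': ',',
    'period': '.',
    'apostrophe': "'",
    # add entries here for accented characters or a different keyboard layout
}

def append_key(typed, key):
    """Append one key press to the string shown on screen."""
    if key == 'backspace':
        return typed[:-1]
    if key == 'return':
        return typed                     # ends the response, handled elsewhere
    return typed + display_name.get(key, key)

# example: the key sequence for "hey there"
text = ''
for k in ['h', 'e', 'y', 'space', 't', 'h', 'e', 'r', 'e']:
    text = append_key(text, k)
print(text)  # -> "hey there"
```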

I assume you might speak Italian? (Sorry if I'm wrong.) If so, maybe you will find this useful. Also keep in mind that the participant info window is now home-made. Take a careful look at all of this and make sure it fits your needs.

criterion3.psyexp (16.7 KB)
innerConditions.xlsx (4.8 KB)

Thank you very much for taking the time to help me with my experiment!

I tried the latest version of the program you provided, and at first it behaved exactly like the previous one (i.e., the same problem with additional presentations of LEARNED items, followed by erratic feedback).

After some web searching and exploration in Coder view, I managed to make it behave as expected. Basically, I moved the bit of code that checks whether the score in the dictionary has reached the minCorrect criterion (and, if so, stops the routine) from the Begin Routine section to the top of the Each Frame section. I felt this was necessary for the routine to actually stop, because the “continueRoutine = True” line immediately after the “#-------Start Routine “enterText”-------” line (as seen in Coder view) was overriding the stop command given in the Begin Routine section.
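In other words, the check now sits at the top of Each Frame rather than in Begin Routine, roughly like this (a sketch of the idea only; scores, currentCue and minCorrect stand in for whatever names the code component actually uses):

```python
# --- Begin Routine ---
# (the criterion check used to live here, but in this PsychoPy version the
#  compiled script sets continueRoutine = True again just before the frame
#  loop, so setting it to False here never took effect)

# --- Each Frame ---
if scores[currentCue] >= minCorrect:   # item already learned
    continueRoutine = False            # end the routine on its first frame
```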

The program (attached) now does exactly what I wanted in the first place, even though checking against a fixed value on every frame does not feel like an elegant or economical solution. It does the trick for me, though, and I can live with potentially sub-optimal timing or a minimal slowdown in that phase of my experiment.

On a side note, I also experienced an issue with full-screen presentation of the experiment, which got stuck at the custom participant info window (without closing to desktop). To circumvent the issue, I reverted from the custom info window to the standard dialog.

I also removed the custom keyboard bit. I am indeed Italian, but I won't need those special characters for now. I might still want to use them in the future, though, so that part of the script was also quite useful to look at.

I can't thank you enough for helping me with this program, and I also learned quite a lot in the process. As a newcomer to PsychoPy and Python, it would have taken me a very long time to figure this out on my own.

– Davide S.

criterion3dfs.psyexp (13.0 KB)
innerConditions.xlsx (4.8 KB)

Well, now I do wonder whether the version makes a difference. I vaguely remember reading something on here about continueRoutine behavior changing in a newer version, which would explain why I wasn't getting the behavior you describe. But if you found a solution, great! If it ain't broke, don't fix it.

EDIT: Yep, this is where I saw it: Stop a Routine from starting? So the code in the experiment I gave you should work with current versions.
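In other words, according to that thread, with current versions a routine can be skipped directly from Begin Routine, something like this (again just a sketch, with the same illustrative variable names as above):

```python
# --- Begin Routine ---
# In more recent PsychoPy versions, continueRoutine is reset before the
# Begin Routine code runs, so the routine can be skipped right here:
if scores[currentCue] >= minCorrect:
    continueRoutine = False   # skip this routine entirely for learned items
```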

Very glad it all worked out. If you get a chance please mark as solved, and have a great day!
