Mean RT and Feedback (Reward)

OS (e.g. Win10):
PsychoPy version (e.g.2020.1.3):
Standard Standalone? (y)
What are you trying to achieve?:

  • I rebuilt the extended Stroop experiment with feedback from the demo pack.

  • I want to add a code component that gives participants feedback on their statistics per block, i.e. mean RT, percentage correct, and reward earned.

  • Additionally, I want to apply the reward blockwise: the participants’ mean RT per block is compared to a set criterion RT, and they receive a reward if they beat it.

  • For those reasons I suppose I need to add variables in the “Begin Experiment” tab. I have put them in the “Feedback” code component, but I am not sure whether this is correct (probably not).

mean_RT = 0
Ref_RT = 1000
sum_RT = 0.0
sum_corr = 0.0
counter = 0

  • In the “Begin Routine” tab

if response.corr:  # stored on the last-run routine
    msg = "Correct! RT=%.3f" % response.rt
else:
    msg = "Oops! That was wrong"

  • In the “End Routine” tab

if response.corr:
    counter = counter + 1

  • I can conceptualize what I want to implement, but I do not know the built-in functions, or in which tabs to put them.

sum_RT = sum_RT + RT of the latest response   # pseudocode
mean_RT = sum_RT / counter

if mean_RT < Ref_RT and counter > 28: give reward   # pseudocode
(There are 32 trials per block in total, so the 28 refers to those.)
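Something along these lines is what I have in mind (just a rough sketch, not tested; it assumes the Keyboard component is called response and stores correctness and RT, and n_trials, percent_correct, reward_msg and block_msg are names I made up for illustration). One thing I am unsure about: response.rt is in seconds, so Ref_RT = 1000 would presumably have to become 1.0 if it is meant as 1000 ms.

# "End Routine" tab of the trial routine
if response.keys:                        # a key was pressed on this trial
    sum_RT = sum_RT + response.rt        # accumulate RT over the block
    if response.corr:
        counter = counter + 1            # count correct responses

# "Begin Routine" tab of the block-feedback routine
n_trials = 32                            # trials per block
if counter > 0:
    mean_RT = sum_RT / counter
else:
    mean_RT = 0
percent_correct = 100.0 * counter / n_trials
if mean_RT < Ref_RT and counter > 28:    # faster than criterion and more than 28/32 correct
    reward_msg = "Reward earned!"
else:
    reward_msg = "No reward this block."
block_msg = "Mean RT: %.3f s\nCorrect: %.0f%%\n%s" % (mean_RT, percent_correct, reward_msg)
# reset the accumulators for the next block
sum_RT = 0.0
counter = 0

A Text component on the feedback routine with its text set to $block_msg could then show this (with the code component placed above the Text component, or the Text set to “set each frame”, so that block_msg is updated first).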

I hope the information is sufficient. Any help is appreciated.

Best regards
leif.

So you seem to have something in place to construct msg according to participants’ responses? From there, presenting this info should be simple - just make a Text component with the value of “text” set to $msg. An important consideration with this is that msg will need to exist from the very beginning, so you would need to add msg = "" to the Begin Experiment tab.

You also seem to be counting correct responses - to show this as feedback, it’s the same process: Make a Text component with “text” set to a variable name (after a $) and assign the value you want to show to that variable.
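For example (just a sketch; acc_msg is a made-up variable name, and response and counter are the names from your post):

# Begin Experiment tab of the feedback code component
msg = ""
acc_msg = ""
counter = 0

# End Routine tab of the trial routine
if response.corr:
    counter = counter + 1
    msg = "Correct! RT=%.3f" % response.rt
else:
    msg = "Oops! That was wrong"
acc_msg = "Correct so far: %i" % counter

A Text component on the feedback routine with its text field set to $msg (or $acc_msg) will then display it.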

I tried to elaborate on the feedback, which works in a simple Stroop task but not in a dual-task setting, where two responses are given in two separate trials and the order of the two responses to the stimuli may vary. That’s why I tried to add if-elif clauses for both types of sequences.

“Begin Experiment”

if not key_DT_resp1.keys:
    msg = "zu langsam"  # translates to: too slow

if not key_DT_resp2.keys:
    msg = "zu langsam"  # translates to: too slow

if key_DT_resp1.keys == rsb_num and key_DT_resp2.keys == rsb_word:
    msg = ""

elif key_DT_resp1.keys == rsb_word and key_DT_resp2.keys == rsb_num:
    msg = ""

else:
    msg = "falsch"  # translates to: wrong

key_DT_resp1 = 1st keyboard component
key_DT_resp2 = 2nd keyboard component

rsb_num refers to the column “rsb_num” in my Excel spreadsheet containing the correct response keys for the “number task”.

rsb_word refers to the column “rsb_word” in my Excel spreadsheet containing the correct response keys for the “word task”.

The problem is that the feedback is always “wrong”, and if I disable the line in the code component containing “wrong”, it always gives the feedback “too slow”, even though the correct responses were made.

Thank you for your time, and feel free to ask if I missed something in order to understand what I posted.

Best regards
Leif

I think the problem is that your Text component is set to “set every repeat”, which will set its value before the Begin Routine code is executed. You could fix this by either moving the code to the End Routine tab of the previous routine, or by setting the Text component to “set each frame”.
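For the first option, the same logic could go in the End Routine tab of the dual-task routine, for example (a sketch using your component names; I have combined the checks so that a later branch cannot overwrite the “too slow” message):

# End Routine tab of the dual-task (trial) routine
if not key_DT_resp1.keys or not key_DT_resp2.keys:
    msg = "zu langsam"   # too slow: at least one response is missing
elif (key_DT_resp1.keys == rsb_num and key_DT_resp2.keys == rsb_word) or \
     (key_DT_resp1.keys == rsb_word and key_DT_resp2.keys == rsb_num):
    msg = ""             # both responses correct, in either order
else:
    msg = "falsch"       # wrong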

Thank you for your remarks!

I fixed it using a nested if statement (if that is the right term):

if key_DT_resp1.keys and key_DT_resp2.keys:
    if key_DT_resp1.keys == rsb_num and key_DT_resp2.keys == rsb_word:
        msg = ""

    elif key_DT_resp1.keys == rsb_word and key_DT_resp2.keys == rsb_num:
        msg = ""

    else:
        msg = "falsch"

Anyhow, a new problem has arisen. I added a further stimulus onset asynchrony (SOA) of 900 ms. In this case, stimulus 1 is presented and response 1 is given while stimulus 2 has not yet been presented. Sometimes stimulus 2 then overlaps with the feedback given when response 1 was wrong or too slow.

I am thinking about a code component containing something like:

if target_1 and target_2 have finished (I do not know the command yet) and resp1.keys and resp2.keys:
    finish the routine

Alternatively, I am checking the temporal parameters of the response buttons etc.
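Maybe something like this in the “Each Frame” tab would do it (just a sketch, not tested; it uses the component names from above, and I am assuming the continueRoutine variable and the FINISHED status constant that Builder normally provides in code components):

# "Each Frame" tab of the dual-task routine
if (target_1.status == FINISHED and target_2.status == FINISHED
        and key_DT_resp1.keys and key_DT_resp2.keys):
    continueRoutine = False  # end the routine once both stimuli are done and both responses were given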

Thankful for any advice.
best regards
leif