Feedback every 6 trials

Hello,

I am working on an experiment in which participants complete several tasks, and on each task they receive a different type of feedback reinforcement: immediate feedback; one-back feedback (similar to an n-back task, receiving feedback one trial behind); delayed feedback (a 6 sec delay before receiving feedback); and a deferred feedback type (complete 6 trials, then receive all of the correct feedback first and the incorrect feedback second, so if they responded correctly on 4 trials and incorrectly on 2, the feedback would say “Correct +4 points, Incorrect -2 points. Total this block = 2”).

I have code working for all reinforcement types except for the 6-trial blocks. Because I will eventually want to pseudo-randomize the task order (never the same feedback type twice in a row), I was trying to set up the block of 6 trials similarly to how I have the one-back task (see below). However, I have not been successful in getting the feedback to work every 6 trials. Is it possible to do this in a similar way to the one-back task below? Or is there something else I should be doing? Sorry for such a vague question. I am still trying to learn PsychoPy and familiarize myself with Python and JS. I recently purchased the book on PsychoPy and am slowly working my way through it.


Hi Brooke,

You could record the history of correct/incorrect responses and use multiple if-statements to determine the feedback that is given, like this:

### Begin experiment
corr_history = []
feedback = ' '

### End routine of your trial or Begin routine of a feedback routine

# append the last accuracy value
corr_history.append(train_resp.corr)

# immediate feedback
if feedback_type == 'immediate':
    if corr_history[-1] == 1:
        feedback = 'Correct'
    else:
        feedback = 'Incorrect'

# delayed feedback
if feedback_type == 'delayed':
    feedback_onset = 6
    if corr_history[-1] == 1:
        feedback = 'Correct'
    else:
        feedback = 'Incorrect'

# one back feedback
if feedback_type == 'one_back':
    if corr_history[-2] == 1:
        feedback = 'Correct'
    else:
        feedback = 'Incorrect'

# last six feedback
if feedback_type == 'last_six':
    n_correct = sum(corr_history[-6:])  # note: the slice [-6:-1] would miss the most recent trial
    feedback = ('Correct +' + str(n_correct) + ' points, Incorrect -' + str(6 - n_correct)
                + ' points. Total this block = ' + str(2 * n_correct - 6) + ' points.')

  • feedback_onset can be used as the onset time of the text component giving the feedback, but it has to be set back to 0 after the feedback.

For your second question, regarding when each kind of feedback is given, the approach depends on two things:

  • Is the number of trials known and constant across respondents (i.e., all participants complete exactly 60 trials)?
  • Should the rate of each feedback type be constant across participants (i.e., each feedback type is supposed to be given in exactly 15 trials)?

Depending on this you could either pre-generate a pseudo-random trial list specifying the feedback type for each trial or randomly draw the feedback type in each trial.

Thank you so much for your reply!
This is extremely helpful.

So there is no set number of trials, unfortunately. Participants change tasks based on accuracy: once a participant achieves 80% correct across the last 30 trials, they move on to the next task. I have done this for other experiments; that code is below, and I assumed I could incorporate it into this experiment once I change the variable names. However, if this becomes overly difficult I could change it so it is 60 trials per task.
I’m not quite sure I understand your second question, my apologies. Each participant will receive 2 of each feedback type, in a pseudo-random order (just never repeating the same feedback type twice in a row). So not all participants should receive the feedback types in the same order. I’m not sure if that answers your question.


Each participant will receive 2 of each feedback type

Across the whole experiment? Or in one task? Does that mean that there are trials without any feedback, even most of them?

The entire experiment is 8 tasks, and every participant will complete all 8 tasks. In 2 of those tasks they receive feedback on every trial, in 2 of the tasks they receive feedback delayed by 6 seconds, in 2 of the tasks they receive feedback one trial back from the original trial (trial 1 no feedback, complete trial 2 and then receive feedback for trial 1, then complete trial 3 and receive feedback for trial 2), and then 2 of those tasks have deferred feedback, which is feedback after completing 6 trials (so complete 6 trials, then receive positive and negative feedback for those 6 trials, then complete the next 6 trials and then receive feedback for those 6 trials). The order in which these tasks are presented needs to be randomized.
So in ALL tasks they are receiving feedback, but not always immediately after the trial they just completed.

Ah ok, I thought the randomization of feedback types was taking place on the level of trials.

You can use this code to generate a list of your feedback types, shuffle it until it has no repeats, and then go through it task by task.

# Begin experiment
import random
from itertools import groupby

feedback_type_list = ["one_back","one_back", "immediate", "immediate", "last_six", "last_six", "delayed", "delayed"]

# reshuffle until no feedback type occurs twice in a row
while any(sum(1 for _ in g) > 1 for _, g in groupby(feedback_type_list)):
    random.shuffle(feedback_type_list)


# Begin routine of a trial
feedback_type = feedback_type_list[task - 1]

This code expects that there is a variable task that counts up from 1 to 8.
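If you don’t have that counter yet, a minimal sketch (assuming every task begins with its own instruction routine inside the task loop):

# Begin experiment
task = 0

# Begin routine of the first routine in each task (e.g., the instructions)
task = task + 1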

The last piece that is missing is to prevent the feedback for the first trial in the “one back” condition and for every trial that does not complete a block of 6 in the “last six” condition.
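For example, a sketch for the Begin Routine tab of the feedback routine (assuming a trial counter trial_num that starts at 1):

# Begin routine of the feedback routine
if feedback_type == 'one_back' and trial_num == 1:
    continueRoutine = False  # no previous trial to give feedback on yet
if feedback_type == 'last_six' and trial_num % 6 != 0:
    continueRoutine = False  # deferred feedback only after every sixth trial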

Thank you so much, this is incredibly helpful.
I think I have this very close to working.
At the last second, my advisor has requested a bunch of changes. One of them concerns the last 6 (deferred) feedback condition. Instead of displaying all of the correct feedback and then the negative feedback, they want it to show the correct/incorrect responses in the same order in which they occurred. So they complete the 6 trials, and then, if they got the first wrong and the last 5 correct, the feedback would show: Incorrect -1 (clear screen) Correct +1 (clear screen) Correct +1 (clear screen) Correct +1 (clear screen) etc…
I assume I can use the same corr_history list that you suggested, and instead of summing it, tell it to show Correct +1 for a 1 and Incorrect -1 for a 0, in the order they appear in the list. I’m just not quite sure how to make it look through the corr_history list to do that? If you have any suggestions I would greatly appreciate it.

Hi! Sure that’s doable. Is the feedback presented for a fixed time or until a response occurs?

If it is the former, there are multiple ways of doing this. I would suggest building a separate routine for this purpose. There you can have six text components that display the appropriate feedback in the correct order and with spaces in between.

To access the corresponding value in the history, you can use something like corr_history[-6] (for the sixth-to-last value) and set the right feedback text for the first text that appears using an if statement in Begin Routine. Do this for all six texts.

Put this routine next to the regular feedback routine, and put code in Begin Routine to skip the routine if the feedback condition is / is not “last_six”.
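For instance, something along these lines in Begin Routine (a sketch; def1 … def6 are placeholder names for the variables your six text components read their text from):

# Begin routine of the sequential feedback routine
if feedback_type != 'last_six':
    continueRoutine = False  # this routine only runs in the deferred condition
else:
    texts = []
    for acc in corr_history[-6:]:  # the six most recent trials, in their original order
        texts.append('Correct +1' if acc == 1 else 'Incorrect -1')
    def1, def2, def3, def4, def5, def6 = texts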

Hope that helps to get you started. I am glad to help you further, but will only be able to do so next Tuesday. Good luck until then :wink:

I am so close to having everything working, thanks to your help!
I am having a bit of trouble getting the last 6 condition to only show feedback every 6 trials. Currently, I complete trials 1, 2, 3, 4, 5, 6 and then receive feedback for all 6 at the same time, like I need, but then the following trials show feedback immediately, instead of waiting 6 trials. I have been trying to use the % (modulo) operator in Python to make this work: the idea was an if statement with trial number % 6 == 0, then show feedback; else, skip feedback. However, it’s only working for the first 6 trials. I have copied my code below; I am sure it is not the most efficient code, as I’m still learning. I am also not sure whether the % operator will even work when I put the experiment online?

"""
# deferred feedback        
if Feedback_Type == 'Deferred':
    conditional = True
    show_stim = 'pink.png'
    feedback_onset = 0    
    if trial_num < 6:
        response = ' '
        timeout = 0.1
    if trial_num >= 6:
        if trial_num % 6 == 0:
            def_texts = []
            # walk through the six most recent trials in their original order
            for acc in corr_history[-6:]:
                if acc == 1:
                    score = score + 1
                    def_texts.append('Correct +1 point. Total Points: ' + str(score))
                    timeout = 1.0
                else:
                    score = score - 1
                    def_texts.append('Incorrect -1 point. Total Points: ' + str(score))
                    timeout = 4.0
            # note: timeout is overwritten on every pass through the loop,
            # so it ends up reflecting only the most recent trial
            def1, def2, def3, def4, def5, def6 = def_texts
        else:
            response = ' '
            timeout = 0.1
"""

Hi Brooke, looks good! The code so far only tells me that you change the feedback (i.e., the values of the def variables) every six trials, but there is nothing saying that the feedback should only be shown every six trials. How do you use the conditional variable?

If this code is in the “Begin routine” tab of the feedback routine itself, you could say if trial_num % 6 == 0: ... else: continueRoutine = False. In this way, the routine is skipped if the condition does not hold.
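Spelled out with your variable names, a sketch:

# Begin routine of the feedback routine
if Feedback_Type == 'Deferred' and trial_num % 6 != 0:
    continueRoutine = False  # skip feedback on all trials that do not end a block of 6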

When putting your experiment online, you will export it as JavaScript (PsychoJS). You will then see if the auto-translation of your Python code works as intended. If not, you have to manually go in and fix some code. But since there is a direct JS analog of %, I am confident that this will work, or that you can easily make it work.

The conditional variable is a True/False variable. In the loop I have around my 6 deferred feedback text routines, I have int(conditional) in the number of reps. So if it’s a deferred task, conditional = True; otherwise it is False and the loop is skipped. All of the feedback conditions are working really well thanks to your help. I will be putting it online today!
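In other words, roughly like this (a sketch; the nReps field of that loop is set to $int(conditional)):

# Begin routine, before the loop that wraps the deferred feedback routine
conditional = (Feedback_Type == 'Deferred')
# the loop's nReps field holds $int(conditional):
# int(True) == 1 runs the feedback routine once, int(False) == 0 skips it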
I was wondering if it’s possible to make my feedback text display on top of my feedback image instead of appearing after the image? For one of the tasks (one-back), during feedback it shows them the image from 2 trials ago and then says whether it was correct or not. Ideally the correct/incorrect feedback would appear with/on the image, but instead it is showing the image and then the text separately. Can I change that? For my text component, I have the start time as the variable $feedback_onset (for the one-back task it is 0), and it ends at the $timeout variable (1.5 sec if correct, 4 if incorrect). The image component starts at 0.0 sec and ends at $feedback_img, which is set to 1.5 sec. So I would expect them to overlap for 1.5 sec, since they both start at 0 sec and are onscreen for at least 1.5 sec, but that is not the case.

Cool! Don’t be discouraged in case something does not immediately work when you upload the experiment to Pavlovia. For me, there are usually one or two things that need to be addressed before the auto-translation works fine. Or have you already tried that?

From how you describe the timing of your picture and the text, I would expect the same. I would have to see it in detail. Could you upload the experiment somewhere so I can have a look?

hu4in1_test.zip (7.5 MB)

Yes! Folder is attached. This is a shortened version of the full experiment that I’m using to test stuff out. I have already tried to upload it to Pavlovia to see what I might need to change before finishing the full version. And I can’t even get to the stage of testing, because I’m getting error messages after having to create an SSH key, which I’ve never needed before.

hu4in1_on.psyexp (75.5 KB)

There were a few things that led to some misalignments. I hope that by “fixing” them I did not change anything that was intended to be that way :smiley:

  • I swapped the image and the text component so the image is always in the background and the text in the foreground
  • I changed the onset field for the text from “frames” to “time”
  • In the code for the one-back condition, I added a missing line that sets an onset time to 4.
  • I set the onset time of the image to 0 and the duration to $feedback_img_onset + timeout, so the image is constantly displayed.

Some things I noticed from the “participant” perspective were that

  • in the first task the feedback seems to be wrong; it says “Correct! -1 point”
  • the deferred feedback is the only one where the colored background changes to white for the feedback. Maybe you could have the colored img in the background there as well.
  • The one-back feedback now shows the stimulus some time before the feedback text, because I changed it so that the colored images are always there (I found the change to white quite irritating, but maybe that was intended :smiley: ). It would be nice to have a “blank” grey image displayed before the actual stimulus is shown in sync with the text.
  • For the deferred feedback, it would be nice to have a little “ISI” between the feedbacks. Otherwise, it’s hard to notice where one feedback ends and the next begins.

Let me know if you need clarification on any changes!

For the SSH key issue, I have no idea why you have to do this. Normally, there is no need for this. Have you uploaded experiments before, where it went fine?


Yes, the white background was driving me crazy! I had tried creating a variable to change the background rgb values for each feedback type but failed. So I started working on adding the colored images in, but then the text feedback was still showing in white! So thank you for that! I’m not sure what “ISI” between feedback means?

I just tested out the version you sent, and now I’m only seeing the feedback text on the immediate feedback task? *** Edit *** I ran it a second time and it is working! Not sure what happened the first time. Thanks so much!

I have uploaded and run more than 40 experiments on Pavlovia and have never been asked to create an SSH key. So I’m not sure why I need it now?

Perfect! Maybe you did not compile a new py file before running it the first time? Because what you describe is exactly what I experienced running the version you sent :smiley:

Sorry, ISI is “inter-stimulus interval”. So basically just a pause of some milliseconds that shows that one feedback has ended and the next begun.
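For instance, you could compute staggered onsets in a code component (a sketch; the duration and gap values are just placeholders):

# Begin routine: onsets for the six feedback texts, with a short blank gap between them
feedback_dur = 1.0  # how long each feedback text is shown (placeholder value)
isi = 0.2           # blank gap between consecutive feedbacks (placeholder value)
onsets = [i * (feedback_dur + isi) for i in range(6)]
# use $onsets[0] ... $onsets[5] as the start times of the six text components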

Wow, 40 experiments. Sorry if my earlier encouragement regarding the upload to Pavlovia seemed patronizing. I thought you were just starting out. But apparently you are already quite experienced with that :+1:

No idea about the key. Sounds like the next issue to post in the forum. Does it only happen for this specific experiment? Or is it the same when you try to upload any other experiment at the moment?


No worries at all! I have uploaded some much easier to program tasks in the past that didn’t require a lot of custom code. I am definitely just starting out with learning python and JavaScript!
I posted a new topic with the issue a little bit ago! I’ll try uploading some other experiments to see if it’s an experiment specific problem.

Thanks for all of your help with this!

Is there a way to make it so that, in the last 6 task, the task does not end in the middle of a block and instead waits to move on until the block is finished? For example, if they meet criterion at trial 34, it will just move to the next task on trial 34 instead of waiting for trial 36, showing feedback, and then moving on.
This is currently what I have in my End Routine tab of my deferred feedback routine. trials_5 is my task loop and trials_7 is to end the task list loop once they’ve completed 4 tasks.

last30 = corr_history[-30:]  # accuracy over the most recent 30 trials
avg_acc = sum(last30) / 30
# only allow the task to end on a block boundary (trial_num divisible by 6)
if avg_acc >= 0.8 and trial_num >= 30 and trial_num % 6 == 0:
    trials_5.finished = True
else:
    trials_5.finished = False
if task_ctr == 5:
    trials_7.finished = True

I have the feedback routine for the other 3 tasks before the deferred feedback loop, and it contains the code below in End Routine. When I disable that code, the deferred blocks always go through the full block, so it’s definitely an issue with that earlier routine.

last30 = corr_history[-10:]
avg_acc = (sum(last30))/10
if Feedback_Type == 'Immediate' or 'Delayed' or 'Oneback':
    if(avg_acc >=0.8 and trial_num >= 10):
        task_ctr = task_ctr + 1
        trials_5.finished = True
        if task_ctr == 5:
            trials_7.finished = True

Yes,

this line

if Feedback_Type == 'Immediate' or 'Delayed' or 'Oneback':

has to be

if Feedback_Type == 'Immediate' or Feedback_Type == 'Delayed' or Feedback_Type == 'Oneback':
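In Python, any non-empty string such as 'Delayed' is truthy on its own, so the original condition evaluated to True on every trial regardless of Feedback_Type. A more compact way to write the fixed line, for what it’s worth:

if Feedback_Type in ('Immediate', 'Delayed', 'Oneback'):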

Hello,

I’m having an issue with my corr_history list. I am using it to hold the scores (0 or 1) on the last 30 trials, and then dividing by 30 to see if they were at .80 over the last 30 trials. If they were, they move on; if not, they continue. I am clearing my corr_history list every time the criterion is met by resetting corr_history = []. But in my data file I am seeing that after my last_6 tasks the list isn’t emptying. My code for that is also below. So participants have been moving on much faster than they are supposed to, because the average is including the scores from the previous task. Any ideas on what I might be doing wrong here?

last30 = corr_history[-30:]
avg_acc = (sum(last30))/30
if Feedback_Type == 'Immediate' or Feedback_Type == 'Delayed' or Feedback_Type == 'Oneback':
    if(avg_acc >=0.8 and trial_num >= 24):
        trials.finished = True
        corr_history = []

Last 6 code

last30 = corr_history[-30:]
avg_acc = (sum(last30))/30
if trial_num % 6 == 0:
    if(avg_acc >=0.8 and trial_num >= 24):
        trials.finished = True
        corr_history = []
if trial_num % 6 > 0:
    trials.finished = False

*Edit
I am also noticing that even when the list is emptying, participants are still able to move on much faster than they are supposed to. I have one participant who completed 30 trials in the last 6 condition, but their score was only .66 instead of .8. So I’m not sure what else could be wrong.