Issue with not all data saving even though experiment runs properly


OS: Apple Sonoma 14.4.1
PsychoPy version: 2022.2.4

I’ve created the following experiment in PsychoPy Builder to collect continuous and discrete ratings while participants listen to audio clips. It has been synced to, and is run through, Pavlovia. The experiment seems to run properly and collects data, but in some of the data files not all the continuous ratings are saved for each trial, and I’m not sure why, because everything works when I pilot it. So while some of the 20 trials have a full set of continuous ratings, others do not. I should note that the experiment is run on a variety of computers. I’m not sure whether it’s an issue with the experiment design, so I’m asking here. If anything is confusing, please let me know; I appreciate your assistance.
EDMUrgeToMove.psyexp (749.6 KB)

I would not recommend trying to save a rating using thisExp.addData every frame. Are you also using thisExp.nextEntry()? For continuous ratings I would check every frame whether the rating has changed since the last frame and save the time and rating only if it’s different.

Thank you so much for your reply - I appreciate it. I’m not using thisExp.nextEntry(). Why is it not recommended to use thisExp.addData, and how would it need to be modified to instead save the time and rating only if it’s different?

I’ve just looked at your code and you aren’t using thisExp.addData every frame, but you do have:

rating.append(sliderThroughTheLandNoBeatDrop.getMarkerPos())
print(sliderThroughTheLandNoBeatDrop.getMarkerPos())

timestamp.append(t)

The print statement every frame is definitely an issue. However, I would recommend putting oldRating = -1 in Begin Routine and then using:

newRating = sliderThroughTheLandNoBeatDrop.getMarkerPos()
if newRating != oldRating:
    rating.append(newRating)
    print(newRating)  # Remove this when you have finished debugging
    timestamp.append(t)
    oldRating = newRating
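To make the pattern concrete, here is a minimal, self-contained sketch of how the pieces fit together across the Builder tabs. The slider class below is only a stand-in for the real `sliderThroughTheLandNoBeatDrop` object, and the `thisExp.addData` line is shown as a comment because it only exists inside a running experiment:

```python
# Stand-in for a PsychoPy Slider: replays a scripted sequence of marker
# positions, one per call, so the save-on-change logic can be exercised.
class FakeSlider:
    def __init__(self, positions):
        self.positions = positions
        self.frame = -1

    def getMarkerPos(self):
        self.frame += 1
        return self.positions[self.frame]

# Begin Routine
rating = []
timestamp = []
oldRating = -1

slider = FakeSlider([3, 3, 3, 5, 5, 4, 4, 2])

# Each Frame (in PsychoPy, t comes from the routine clock at ~60 Hz)
for frameN in range(8):
    t = frameN / 60.0
    newRating = slider.getMarkerPos()
    if newRating != oldRating:       # only record when the rating changes
        rating.append(newRating)
        timestamp.append(t)
        oldRating = newRating

# End Routine: save each list once per routine instead of writing every frame
# thisExp.addData('rating', rating)
# thisExp.addData('timestamp', timestamp)
print(rating)
```

With the scripted positions above, only the change-points (frames 0, 3, 5 and 7) are stored, so `rating` ends up as `[3, 5, 4, 2]` rather than eight entries.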

Thank you so much for sharing that. I will make the change to the code and see how it runs.

Thank you again for your suggestion on changing my code each frame. Do I need to modify anything in the end routine code to account for the change in each frame? I’m thinking not, but just wanted to confirm.

Hi @wakecarter, I’ve changed the code (modified what you shared, as I need data recorded even when the marker doesn’t move) and when I pilot it on Pavlovia everything runs smoothly. However, when I get someone else to test it in “Run” mode on a different computer, the data still isn’t always saving fully. For example, in one test, while most of the data saved fully, one clip showed only 11 seconds’ worth of data (when there should have been 45 seconds). Does anybody know why this may be? Here is the link to the PsychoPy task again:
EDM_ContinuousRatings.psyexp (750.7 KB)

For the experiment, I have all the files in the same folder and I think I’ve designed everything correctly, so I have no clue why this is happening.

Why? If the marker hasn’t moved, then you know the reading is the same as the last recorded value.

You have a print statement every frame as well, which is likely to cause an overload.

Thank you so much for your reply. I guess that may work, but for graphing the progression of the ratings as a line graph in R, we’re thinking it’s easier to have a data point at each timestamp, even if the rating doesn’t change, since we want to see how the rating progresses over time.
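For what it’s worth, a value for every regular timestamp can still be reconstructed after the fact from change-only logs by carrying the last recorded rating forward. This is only an illustrative sketch (the function name and sample data are made up, not part of the experiment):

```python
# Forward-fill change-only ratings onto a regular time grid, so that every
# sample time gets a value even when the marker did not move at that moment.
def forward_fill(change_times, change_values, sample_times):
    out = []
    i = 0
    current = None  # no rating exists before the first recorded change
    for t in sample_times:
        # consume every change that happened at or before this sample time
        while i < len(change_times) and change_times[i] <= t:
            current = change_values[i]
            i += 1
        out.append(current)
    return out

# Example: changes logged at 0 s, 2.5 s and 4 s; resample at 1 Hz
times = [0.0, 2.5, 4.0]
values = [3, 5, 4]
print(forward_fill(times, values, [0, 1, 2, 3, 4, 5]))
# → [3, 3, 3, 5, 4, 4]
```

The same idea is available in R (e.g. a last-observation-carried-forward fill) before plotting, so logging only the changes online does not have to cost anything at the analysis stage.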

Could the overload explain why it isn’t saving the data properly? Would removing the print statement solve the issue?

It’s worth a try. The print statement is more likely to be the issue than the append.

You could reduce the frequency of the append if you wanted by using if frameN % 2 == 0: (to append every other frame)
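As a quick illustration of the throttling idea (the loop below just simulates the Each Frame tab, where PsychoPy supplies `frameN` and `t` for you; the appended values stand in for `getMarkerPos()`):

```python
# At a nominal 60 Hz refresh, appending only when frameN % 2 == 0 halves
# the sampling rate to roughly 30 Hz, cutting the per-frame workload.
rating = []
timestamp = []

for frameN in range(6):          # PsychoPy increments frameN each frame
    t = frameN / 60.0            # and t comes from the routine clock
    if frameN % 2 == 0:          # append on every other frame only
        rating.append(frameN)    # stand-in for slider.getMarkerPos()
        timestamp.append(t)

print(len(rating))               # 3 samples recorded from 6 frames
```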

Thanks so much for your help. I’ll try removing the print statement and see if that fixes it and circle back once I do.

You’re welcome.

You also have a large number of routines. It looks like you could have just six (or possibly even four) by reusing them, which would make the code more efficient. However, my understanding is that this is more likely to create an issue when editing/saving the experiment than when running a participant.