OS: Apple Sonoma 14.4.1
PsychoPy version: 2022.2.4
I’ve created the following experiment in PsychoPy Builder, designed to collect continuous and discrete ratings as participants listen to audio clips. It has been synced to and is run through Pavlovia. The experiment seems to run properly and collect data, but when I look at some of the data files, not all of the continuous ratings have saved for each trial, and I’m really not sure why, because everything works when I pilot it. So while some of the 20 trials have a full set of continuous ratings, others do not. I should note that the experiment is run on a variety of computers. I’m not sure whether it’s an issue with the experiment design, so I’m asking here. If anything is confusing, please let me know, and I appreciate your assistance. EDMUrgeToMove.psyexp (749.6 KB)
I would not recommend trying to save a rating using thisExp.addData every frame. Are you also using thisExp.nextEntry()? For continuous ratings I would check every frame whether the rating has changed since the last frame and save the time and rating only if it’s different.
Thank you so much for your reply - I appreciate it. I’m not using thisExp.nextEntry(). Why is it not recommended to use thisExp.addData, and how would it need to be modified to instead save the time and rating only if it’s different?
The print statement every frame is definitely an issue. However, I would recommend putting oldRating = -1 in Begin Routine and then using:
newRating = sliderThroughTheLandNoBeatDrop.getMarkerPos()
if newRating != oldRating:
    rating.append(newRating)
    print(newRating)  # Remove this when you have finished debugging
    timestamp.append(t)
    oldRating = newRating
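If it helps, here is a minimal sketch of how the surrounding tabs could look, assuming the rating and timestamp lists are created in Begin Routine and written out once per trial in End Routine (the column names 'ratings' and 'timestamps' are just placeholders; your experiment may already do something equivalent):
# Begin Routine
rating = []      # marker positions recorded this trial
timestamp = []   # times (in s) at which each rating was recorded
oldRating = -1   # sentinel so the first frame always records a value
# End Routine - write the whole lists to one row of the data file
thisExp.addData('ratings', rating)
thisExp.addData('timestamps', timestamp)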
Thank you again for your suggestion on changing my Each Frame code. Do I need to modify anything in the End Routine code to account for the change in Each Frame? I’m thinking not, but just wanted to confirm.
Hi @wakecarter , I’ve changed the code (modified what you shared, as I need data recorded even when the marker doesn’t move), and when I pilot it on Pavlovia everything runs smoothly. However, when I get someone else to test it out with the “Run” setting on a different computer, the data still isn’t always saving fully. For example, in one test, while most of the data saved fully, there was one clip that only showed 11 seconds’ worth of data (when there should have been 45 seconds). Does anybody know why this may be? Here is the link to the PsychoPy task again: EDM_ContinuousRatings.psyexp (750.7 KB)
For the experiment, I have all the files in the same folder and I think I’ve designed everything correctly, so I have no clue why this is happening.
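For reference, the Each Frame code now does roughly the following (just a sketch of what “record even when the marker doesn’t move” could look like; the exact code in the attached file may differ):
# Each Frame - record on every frame, whether or not the marker has moved
rating.append(sliderThroughTheLandNoBeatDrop.getMarkerPos())
timestamp.append(t)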
Thank you so much for your reply. I guess that may work, but for graphing the progression of the ratings as a line graph in R, we’re thinking it’s easier to have a data point at each timestamp, even if the rating doesn’t change, since we want to see how the rating progresses over time.
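One possible middle ground (a sketch, not taken from the attached experiment; sampleInterval and lastSample are made-up names) is to sample at a fixed interval, which still gives regularly spaced timestamps for the R line graph without appending on literally every frame:
# Begin Routine
sampleInterval = 0.1   # seconds between samples (illustrative value)
lastSample = -1        # time of the most recent sample
# Each Frame - record at most once per sampleInterval
if t - lastSample >= sampleInterval:
    rating.append(sliderThroughTheLandNoBeatDrop.getMarkerPos())
    timestamp.append(t)
    lastSample = t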
Could the overload explain why it isn’t saving the data properly? Would removing the print statement solve the issue?
You also have a large number of routines. It looks like you could have just six (or possibly even four) by reusing them, which would make the code more efficient. However, my understanding is that this is more likely to create an issue when editing/saving the experiment than when running a participant.
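For what it’s worth, reusing a routine normally just means putting it inside a loop whose conditions file lists the per-trial differences - a sketch, with made-up file and column names:
audioFile,clipDuration
stimuli/clip01.mp3,72
stimuli/clip02.mp3,45
The Sound component’s file parameter would then be set to $audioFile with “set every repeat”, so one routine can play all of the clips.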
Just providing an update - I removed the print statement and piloted the task with 4 individuals (so 88 trials total - 22 per participant). 87/88 trials saved the data fully - only one did not. I’m still trying to figure out why that one trial didn’t save fully.
Which trial had an issue? Does it have any data saved? Was it the first one or the last one? Is there anything odd about that trial (e.g. in the other columns)? What line of code failed to execute?
It was the 7th trial (out of 22). The total time is supposed to be 72 seconds, but it only saved the ratings and timestamps for the first 45 seconds, followed by a single extra data point with a timestamp of 416 seconds. There doesn’t seem to be anything odd about that specific trial (and this only occurred for 1 of the 4 pilots). Where would I look to see if a line of code failed to execute?
Is it possible that the participant switched to a different window during that trial, effectively pausing the experiment? If I understand correctly, you seem to be saying that a 72 second trial froze for 6 minutes before continuing normally. Have a look at the commit timestamp for that data file and compare it with the start time, and the same values for your other participants.
I spoke to the participant and they mentioned that they never switched windows. The data would suggest that it froze, but the actual task didn’t freeze at all: the participant said it ran smoothly, and all trials following this one saved properly. I compared the data to the other participants for this specific trial: it saves the same up until the 45-second timestamp and then just stops, whereas for the other 3 participants, whose data saved properly, it continues until the 72-second timestamp. However, when I look at the log file (not the data file) of the problem participant, there is an odd jump in timestamp from 632 to 1013 (which is during the problem trial). The participants without data issues don’t show this jump. So while the jump occurred in the data recorded on the back end, it was not reflected on the front end while the participant was completing the task.
So there is a 6 minute jump in the experiment clock. Have you looked at the commit times to see if that participant took six minutes longer than the others?
(my mistake - I accidentally deleted my reply and can’t seem to undo it) Just had a look, and no, that participant didn’t finish anywhere near 6 minutes after the others (they finished 13 seconds before one of the other participants and 150 seconds after another). I wonder why such a jump occurred with this participant and this trial, and not the others.
While the glitch may have been caused by the large number of routines in your experiment, I think that an external factor (such as an Internet glitch or the participant switching focus) is still the most likely explanation.