
Why Do Datafiles Log Different Information in PsychoPy versus Pavlovia

I’m piloting an RT study on Pavlovia and find that the datafiles produced when running the study on my machine locally through PsychoPy vs online through Pavlovia contain very different information.

Description of the problem:

It is extremely useful for me to know when a stimulus starts and when it stops in the flow of the experiment, as well as the participant’s RT and accuracy on any given trial. The PsychoPy Excel datafile contains this information, but the Pavlovia-generated datafile does not, even though the “Save onset/offset times” option in the Data tab of the stimulus component’s properties is checked.

Does anybody know how this information can be logged in the Pavlovia-generated datafile? My experiment varies stimulus-onset asynchronies and since I’ll be running this on-line this information can be used to determine what specific SOAs the participants actually experienced over the course of the study (as a kind of manipulation check). Since timing is a dodgy thing when running online studies, this seems like crucial information to have.
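For readers wanting to run this kind of manipulation check once onset columns are in the datafile, the obtained SOAs can be computed offline from the per-trial timestamps. A minimal sketch in Python, where the column names `cue_onset` and `target_onset` are hypothetical placeholders for whatever columns your experiment actually writes:

```python
import csv
import io

# Hypothetical datafile excerpt: per-trial cue and target onsets on the
# same (experiment) clock, in seconds. Column names are placeholders.
datafile = io.StringIO(
    "trial,cue_onset,target_onset\n"
    "1,1.000,1.205\n"
    "2,3.500,4.110\n"
)

# Obtained SOA on each trial = target onset minus cue onset
obtained_soas = []
for row in csv.DictReader(datafile):
    soa = float(row["target_onset"]) - float(row["cue_onset"])
    obtained_soas.append(round(soa, 3))

print(obtained_soas)  # [0.205, 0.61]
```

Comparing these obtained values against the intended SOA for each trial type then gives the manipulation check described above.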

Thank you
Andrew Delamater

Hi There,

Stimulus onset and offset times are not logged automatically on Pavlovia, but they can be logged using a method similar to the one explained here: Save onset offset time 2020.2.10 online - #2 by Becca

Note though that this is still only the time at which your computer “thinks” the stimulus was presented. As you highlight, timing is somewhat unknown “in the wild”, and we cannot call it the actual onset time without an external hardware measurement. So that is a limitation to recognise when running timing-based studies online.

Hope this helps,


Hello @Becca,

Thank you for your feedback. I am trying to work out the details of your reply to my post. First, I’m running PsychoPy v2021.2.3 and found that I had to add the following code to the Begin Experiment tab (in the first Routine):


Beyond that, however, I managed to get a column of values in my Pavlovia output file corresponding to what I think may be cue onset times, by using the following code in the End Routine tab (following your suggestion):

// save the onset times of stimuli
thisExp.addData('Cue.tStart', Cue.tStart);

The numbers showing up in the datafile are all very small (e.g., 0.0002). Can I assume that these reflect the time from the start of the Routine to when the stimulus came on (or at least, when Pavlovia told the local computer to put it on)?

Third, I cannot understand the final part of your initial (linked) post. In my case, I am telling PsychoPy (or should I say Pavlovia?) to turn the stimulus on for 0.2 s and then turn it off. Your suggested code seems to evaluate when a stimulus has been moved off screen, but that doesn’t apply in my case: I simply want PsychoPy (Pavlovia) to turn the stimulus off and to tell me when it actually did that. I just don’t know what to check for in the IF statement. Do you have a suggestion? When a stimulus has been turned off, does one of its values change?
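For what it’s worth, PsychoPy components do change an attribute when they stop: each component has a `.status` that is set to `FINISHED` once its stop time is reached (locally this constant comes from `psychopy.constants`; in the generated JS the equivalent is `PsychoJS.Status.FINISHED`), so that is one thing an Each Frame IF statement could check. A standalone sketch of that logic with the experiment objects stubbed out; in a real code component, `Cue`, `thisExp`, and `t` already exist and none of these stubs would be needed:

```python
FINISHED = "finished"  # stand-in value; real code components get FINISHED from psychopy.constants

class StubExp:
    """Minimal stand-in for the Builder's thisExp object."""
    def __init__(self):
        self.data = {}
    def addData(self, key, value):
        self.data[key] = value

class StubStim:
    """Minimal stand-in for a visual component with a .status attribute."""
    def __init__(self):
        self.status = None

thisExp = StubExp()
Cue = StubStim()
cue_offset_logged = False  # would be initialised in Begin Routine

# Simulate a few frames; PsychoPy flips status to FINISHED at the stop time (0.2 s here)
for t in (0.0, 0.1, 0.2, 0.3):
    if t >= 0.2:
        Cue.status = FINISHED
    # --- Each Frame logic: log the first frame on which the cue is off ---
    if Cue.status == FINISHED and not cue_offset_logged:
        thisExp.addData("Cue.tEnd", t)
        cue_offset_logged = True

print(thisExp.data)  # {'Cue.tEnd': 0.2}
```

As with the onset times, this timestamp is only as good as the frame on which the check runs, so it can be off by up to one screen refresh.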

Finally, in my experiment I am doing the following within a single Routine: present the Cue for 0.2 s, then wait for either 0, 0.2, 0.6, or 1.4 s (across trial types), before moving on to the next Routine, in which a Target stimulus is presented to which the participant must respond by pressing one of two buttons. I would like to know, as accurately as possible, (1) how much time the Cue is actually on the screen, (2) how long the actual Wait period was, and (3) how much time elapsed from Cue offset to Target onset (which occurs in the next Routine).

The output datafile from PsychoPy seamlessly provides this information by giving an experiment-clock stamp for each change of event. I’m unclear how Pavlovia might accomplish these across-Routine timings without such an experiment-clock stamp.
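One hedged approach to across-Routine intervals is to stamp events against the experiment-wide clock rather than each Routine’s own clock (Builder’s Python output defines a `globalClock` for this; whether the equivalent is in scope in your generated JS is worth checking before relying on it). The arithmetic on such stamps is straightforward; a standalone sketch with the clock readings stubbed as plain numbers:

```python
# Hypothetical global-clock stamps (seconds since experiment start), of the
# kind a code component could record with thisExp.addData(..., globalClock.getTime())
cue_onset_g = 12.500     # stamped on the frame the cue first drew
cue_offset_g = 12.700    # stamped on the frame the cue finished
target_onset_g = 13.300  # stamped in the NEXT routine, when the target drew

# Because all three stamps share one clock, across-Routine intervals are just differences
cue_duration = round(cue_offset_g - cue_onset_g, 3)         # (1) time cue was on screen
offset_to_target = round(target_onset_g - cue_offset_g, 3)  # (3) cue offset to target onset
soa = round(target_onset_g - cue_onset_g, 3)                # obtained SOA

print(cue_duration, offset_to_target, soa)  # 0.2 0.6 0.8
```

The advantage of a shared clock is exactly the point raised above: per-Routine times reset at each Routine boundary, so only a common reference makes the cross-Routine gap computable.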

Any further help you could provide would be greatly appreciated!


Hello Andrew,

Without a photodiode and an oscilloscope you won’t be able to know for certain that your programmed timing is correct. Software can only register when it “told” the operating system to perform a certain action; drivers, other programs (e.g. virus scanners, cloud services, network traffic and so on), and the OS itself might delay the actual execution. Sound timing is especially bad, sometimes with onset delays of up to 200 ms. Modern graphics drivers and monitors might alter the screen refresh rate. So performance measured by software alone offers a false sense of accuracy. See the timing mega-study and other publications on the accuracy of stimulus presentation to see what you can expect.

So, to (nearly) always get the timing you intend, you have to run the experiment on a local computer whose performance limits you know.

The good thing, though, is that with enough participants and trials, and given your planned ISIs, small errors in presentation times will probably not matter. And it is easier to recruit participants online than offline.

Best wishes, Jens

Hello @JensBoelte,

Thanks for your thoughts on this. Yes, I realize that there are uncontrolled sources of timing variability, especially in online studies. At the very least it would be useful, I think, to know when the code actually sends a Turn Stimulus On and Turn Stimulus Off signal. If those values don’t match the intended times, then there are real problems. My experience with running PsychoPy locally using nFrames as the basis of timing is that it can produce obtained durations, based on PsychoPy’s Start/Stop values in the datafile, that are not close to what they should be. That’s why I’m thinking that having these Start/Stop stamps in the Pavlovia datafile would be helpful.

Now I am really beginning to question just what to take as meaningful information in the datafiles. For example, when I run the experiment locally on my machine through PsychoPy I get a frame rate of ~250, but when I run the same experiment through Pavlovia the datafile says my frame rate is ~60. In addition, when I compute the elapsed time from stimulus onset to offset in my PsychoPy datafile, and since I can set nFrames to a specific value, I can calculate the obtained frame rate. That value is not the same as the one given by PsychoPy. Thus, I don’t know which of these numbers is accurate.
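The arithmetic behind that cross-check is simple enough to run either way round: a requested number of frames implies a duration at the nominal rate, and a measured duration implies an obtained rate. A sketch, with the measured duration as a hypothetical value read off the datafile’s onset/offset stamps:

```python
# duration = n_frames / frame_rate, so duration and rate are two sides of one equation
n_frames = 12            # requested stimulus duration in frames
nominal_rate_hz = 60.0   # frame rate reported in the datafile header

# Expected duration if the nominal rate is accurate
expected_duration = n_frames / nominal_rate_hz
print(round(expected_duration, 4))  # 0.2

# Conversely, the obtained frame rate implied by a measured duration
measured_duration = 0.205  # hypothetical onset-to-offset time from the datafile
obtained_rate = n_frames / measured_duration
print(round(obtained_rate, 1))  # 58.5
```

A mismatch between the obtained and reported rates, as described above, is exactly what this comparison would surface.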

I sure wish that the developers could help us better understand these issues.

Other odd things also seem to continually plague this platform, such as files synced to Pavlovia not always being the version deployed when you attempt to run them (even after doing a “hard refresh” following @wakecarter). There appears to be a caching problem: when switching to a new browser, the most recently synced program gets run, but running in a browser already used to test the program several times results in an older, not-recently-synced version being run. This is just odd, and one would hope that with all the iterations this program has now gone through, some of these most basic functions would have been corrected.

Finally, I’m still trying to figure out how to record a stimulus end time, and suspect it is probably very easy, but I’m just not a very sophisticated coder. For example, following @Becca’s suggestion, the following works for stimulus onset times but not for offset times (I’ve placed this code in the End Routine tab of the code component):

thisExp.addData('Cue.tStart', Cue.tStart);
thisExp.addData('Cue.tEnd', Cue.t);

Is there a tEnd attribute that is the equivalent of tStart, or is there something else I should use?

Sorry for jumping around from topic to topic. This is getting pretty frustrating…


Hi There,

I am pleased to hear you managed to get onsets stored to the csv. For the offset: in the linked example a stimulus was moved offscreen, and we check every frame whether the stimulus is now offscreen and store that time. That timestamp won’t be perfectly precise, because the check is only polled on screen refreshes (so it may be off by up to a single frame, about 16.7 ms on most 60 Hz monitors).

Since it sounds like you want this info for sanity-checking timing, what I would actually suggest is checking the log file. The log file stores timestamps for all events (within the constraints discussed, in that we can’t know the actual times without external hardware measurements). Each run should also save a log file with a timestamp column for every event. Have you had a look at that?


@Becca: Well, when in Pilot mode only a csv file seems to be generated by Pavlovia. I have some .log files from a previous study I ran on Pavlovia, and the files look like they would be very hard to work with. They contain a long list of things that are not easy to decipher, and are certainly not as straightforward as the PsychoPy datafile, which shows clearly what the timestamp is for stimulus onsets/offsets.

Alternatively, do you know if there is simple code that could be used to timestamp, using the experiment clock, the beginning and end of a stimulus presentation (e.g., the beginning and end of a Routine)?

Thanks again for your help.

There is certainly a learnt art to reading log files; I only started making use of them recently. But I would recommend this as the most “accurate” (within the constraints discussed) approach, since the log timestamps are not limited to polling on the frame refresh.

To help, I’ve made a Python script that can be used to extract the onset and offset of a visual stimulus from a log file. You should in theory be able to open it, press run, select the log file using the GUI, and type the name of the visual stimulus component you are checking. It will write the estimated onset, offset, and duration of the visual stimulus to a csv file for easier reading. (attached, 1.9 KB)
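For readers without the attachment, here is a minimal sketch of the kind of parsing such a script can do. PsychoPy `.log` files are tab-separated (timestamp, logging level, message), and visual stimuli typically leave `autoDraw = True` / `autoDraw = False` messages; the exact message wording can vary between versions, so treat the matched strings below as assumptions to check against your own log:

```python
import io

# Hypothetical excerpt of a PsychoPy .log file (tab-separated columns:
# time, logging level, message). Real message wording may differ by version.
log_text = (
    "10.0012 \tEXP \tCue: autoDraw = True\n"
    "10.2015 \tEXP \tCue: autoDraw = False\n"
    "10.8020 \tEXP \tTarget: autoDraw = True\n"
)

stim = "Cue"  # name of the component to look for
onset = offset = None
for line in io.StringIO(log_text):
    parts = [p.strip() for p in line.split("\t")]
    if len(parts) != 3:
        continue  # skip malformed lines
    t, level, msg = parts
    if msg == f"{stim}: autoDraw = True" and onset is None:
        onset = float(t)
    elif msg == f"{stim}: autoDraw = False" and offset is None:
        offset = float(t)

print(onset, offset, round(offset - onset, 4))  # 10.0012 10.2015 0.2003
```

A real script would read the file path from a GUI, handle multiple trials (collecting every onset/offset pair rather than the first), and write the results out as csv rows.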

I hope that this helps,


I do appreciate your efforts at helping with this. Unfortunately, it took some time for me to realize that you intended this code to be run in the PsychoPy Coder view; I only realized it after first downloading the latest version of Python onto my machine and running your code there. I finally got your code to run within PsychoPy, but it didn’t produce a .csv output file anywhere I could detect. Furthermore, since the individual stimuli in the experiment vary (e.g., A.png, B.png, etc.), there is no apparent way in the .log file of distinguishing between these different stimulus conditions: the log file does not include the stimulus file names, only the variable that appears in the PsychoPy code (e.g., the variable “cue” to which stimulus A, B, C, etc. could be assigned).

Were there any error messages when you ran it? When the GUI popped up, did you change “target_image” to the name of the visual component you are trying to detect (it takes one value, not a list)? If you have one component named “cue”, you will just want “cue”.


Hello Becca,

This is what shows up in the PsychoPy Runner window:

########### Running: /Users/macbook_pro_16/Desktop/ ############

2022-01-20 12:17:48.686 python[44721:4301698] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/c4/b2w24qj56l33cbj7zhffg4xr0000gn/T/org.opensciencetools.psychopy.savedState

Experiment ended.

I loaded the code you sent into PsychoPy and then ran it. A window pops up looking for something; when I select the code file, I then get the screen for entering the name of the stimulus. In my case the variable name in the Excel conditions file is “Cues” (without the quotes). Then the program runs, the Runner screen gives me the message above, and I don’t see anything else.


OK, I should clarify how this works:

  1. Open the PsychoPy Coder view.
  2. Open the script in the Coder view and press run.
  3. Select the log file you want to analyse.
  4. Enter the name of the component you are checking, e.g. cue.

The csv file will appear in the same folder as the .py file.


I’m afraid that no csv file is produced. I load your code into the Coder view and run it. I then select the log file (i.e., the .log file, NOT the .log.gz file) that I have placed on the Desktop. It asks for the name of the component and I enter its name, Cue. Then the program stops, and there is no csv file to be found anywhere on the Desktop.

I’m using a Mac running Monterey but I don’t see how that would matter…