
Weird (extremely long) reaction times in the output file

Dear all,
I’ve just created a simple experiment (visual stimuli, keyboard responses) using the Builder of version 1.83.01, under Windows 7.
Everything runs smoothly, but in the output file I see a couple of weird RTs, like 160 billion ms… This happens mostly in the blocks’ first trials but not only.
I tried to look into it in the topics here and also elsewhere on the internet, but haven’t succeeded so I still have no idea what happens. Is there a tiny detail I’m not noticing? Is it simply an Excel problem? I’ve attached a couple of these files.
I hope you can help me out. Thanks a lot!

sbj1.xlsx (12.9 KB)
sbj2.xlsx (12.6 KB)

Not sure immediately, but I notice the refresh rate is also showing up as a ridiculous value (it should be 59 or 60). My guess is that this has something to do with the formatting of the numbers rather than their recording. For example, in English-speaking locales a dot indicates a decimal, whereas in many other locales a comma is used for this, so I imagine some piece of software has misinterpreted the value.
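To illustrate the suspected mechanism (a hypothetical sketch in Python, not anything PsychoPy itself does): if software configured for a comma-decimal locale treats the dot as a thousands separator, an ordinary RT in seconds balloons into the billions.

```python
def parse_comma_decimal_locale(s: str) -> float:
    """Parse a number the way a comma-decimal locale would:
    '.' groups thousands, ',' marks the decimal point."""
    return float(s.replace(".", "").replace(",", "."))

rt = "1.6045893212"  # an RT in seconds, written with '.' as the decimal point
print(parse_comma_decimal_locale(rt))  # the dot vanishes: 16045893212.0, i.e. ~16 billion
```

Which would fit the pattern in the attached files: only values that happen to contain a dot get inflated.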

You’ve sent xlsx files, but could you send the corresponding psydat, csv and log files for one of them, so we can see what happened? Did the xlsx file come directly from PsychoPy, or has it been (re-)saved by you from Excel?

Hi Jon, I’m attaching all three files from another trial run. I think the xlsx file I sent before was originally a csv from PsychoPy; I usually save my data that way. When I noticed the problem, I tried saving it in different ways as well and kept getting weird results.

In the meantime someone suggested that I change event.clearEvents(eventType='keyboard') to event.clearEvents() in the response updates section of the Builder’s code, and that seems to work and produce normal RTs for the time being. But of course it would be better to understand what is actually happening here. If this misinterpretation is the cause, it will happen later on as well. I have the same problem on both my laptop and my office computer…


2_number_decision task_2017_Jan_12_1358.csv (4.29 KB)

OK, yes, when I open that csv file it looks fine, and when I save it I get the following xlsx file
2_number_decision task_2017_Jan_12_1358.xlsx (30.5 KB)
which all looks sensible. I think the issue is that Excel on your machine is incorrectly interpreting some of the numbers as being huge.

I don’t know quite how to fix that. You might be able to change the import settings in Excel, or use a different package for the conversion (libre office?). Potentially we could find/write a conversion program to switch the csv to xlsx without this problem.
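One hedged sketch of such a conversion, assuming pandas is available (the tiny csv text below just stands in for a real PsychoPy output file): pandas lets you declare the decimal separator explicitly when reading, so the numbers are parsed correctly regardless of the system locale.

```python
import io
import pandas as pd

# A tiny stand-in for a PsychoPy output csv (dot as decimal separator)
csv_text = "trial,rt\n1,1.6046\n2,0.9123\n"

# decimal="." makes the parse explicit, independent of the system locale
df = pd.read_csv(io.StringIO(csv_text), decimal=".")
print(df["rt"].max())  # 1.6046 -- seconds, not billions
```

From there, df.to_excel("out.xlsx") (which needs an xlsx engine such as openpyxl installed) would store the values as genuine numbers, so Excel no longer has to guess which character is the decimal separator.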

Ideally we need a way to write the csv file that indicates to Excel that “.” is the decimal separator for this file, but I don’t know how to do that yet.

I think this aspect is unrelated. I think Excel misinterpreted the reaction times only when the RT was actually >= 1.0, and I suspect it’s just chance whether that ever occurred in a given file.

@sebastiaan do you know any way to provide meta-data in a csv file so that Excel will know whether to interpret numbers as “International” or “Continental”? I’ve googled plenty and can’t find a way to do it. As far as I can see, users have to import the file from within Excel (rather than double-click) and specify the decoding parameters on a per-file basis, or change the locale settings for their entire system. It seems amazing that nobody has ever added a header line to csv files like <encoding="utf-8", delimiter=",", decimal=".", EOL="LF">

I’m pretty sure that’s not possible, and also falls somewhat out of the scope of the csv format: The format cares only about separating data into rows, columns, and cells, but not about what cells contain.

Looking at the csv, I would say that it’s better to double-quote all fields, and escape the contents. Then the ambiguity is gone and most software will interpret it correctly. It’s a valid csv file as it is, don’t get me wrong—but it’s also asking for trouble.
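As a hypothetical sketch of that suggestion, Python’s csv module can quote every field when writing:

```python
import csv
import io

buf = io.StringIO()
# QUOTE_ALL wraps every field, numeric or not, in double quotes
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerow(["trial", "rt"])
writer.writerow([1, 1.6046])
print(buf.getvalue())
# "trial","rt"
# "1","1.6046"
```

Whether a given spreadsheet program then keeps the quoted values as text or still tries to convert them numerically is up to that program, so this reduces the ambiguity rather than guaranteeing a fix.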

Hi Jon, Sebastiaan and Noémi,

I am sorry for reviving a somewhat old topic, but I had the same issue with my data and could solve it through Excel (Windows system, Office 2013 package), doing what Jon was looking for:

  • When importing data from text sources into a new sheet (selecting a csv file as the source), the last step of the import wizard lets the user set the format of each column.

  • Then, by clicking the ‘Advanced’ button, you can choose which decimal and thousands separators to use.

(I am sorry that the screenshot is in Spanish, I can upload it then in English if it is necessary)



Hi Fer,

Just passed by to say that your solution worked perfectly! I was having the same problem, but with my system in Brazilian Portuguese. Thanks very much for sharing.
