
Issue pairing reaction time responses with target times, controlling for false alarms etc

Hi everyone

To preface: OS is Windows 8.1 / 10, running PsychoPy 1.85.2, and I built my study using Coder.

To give some context, I have recently finished collecting data on a psychological experiment. The specifics are largely irrelevant; all that really needs to be known here is that subjects complete a set number of trials (e.g. 30) in which they have to respond to a target whenever it happens to appear.

My CSV results file includes a list of target times and a separate list of reaction times, which is fine and dandy. However, because I am silly and did not have the foresight to fix this up pre-experiment, I now have to find a way to combine these lists so that reaction times are paired with their respective target times; this is so I can analyse and calculate values such as d prime and overall reaction time.

The issue is, of course, that I can't simply zip the two lists together, due to misses, false alarms, etc. (which in some cases are extremely high); a naive pairing would end up being completely inaccurate, e.g.

stimulus_time = ['1.000', '2.000', '3.000', '4.000', '5.000', ...]
reaction_time = ['1.500', '3.500', '9.000', '27.000', ...]
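To make the goal concrete, here is a sketch of the kind of pairing I am after: match each response to the most recent target within some response window, and treat unmatched responses as false alarms. The 1.0 s window and the function name are just assumptions for illustration, not anything built into PsychoPy.

```python
RESPONSE_WINDOW = 1.0  # seconds a response may lag its target (assumed; tune to the task)

def pair_responses(stimulus_times, reaction_times, window=RESPONSE_WINDOW):
    """Return (hits, misses, false_alarms), where hits maps target time -> RT."""
    hits = {}
    false_alarms = []
    for rt in reaction_times:
        # candidate targets that precede this response within the window
        # and have not already been claimed by an earlier response
        candidates = [st for st in stimulus_times
                      if 0 <= rt - st <= window and st not in hits]
        if candidates:
            hits[max(candidates)] = rt  # closest preceding target
        else:
            false_alarms.append(rt)     # no target nearby: false alarm
    misses = [st for st in stimulus_times if st not in hits]
    return hits, misses, false_alarms
```

With the example data above, this would pair 1.500 with the 1.000 target and 3.500 with the 3.000 target, and flag 9.000 and 27.000 as false alarms.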

Considering reaction-time data is used so frequently, I was wondering whether anyone has faced a similar issue and could provide any insight into a solution. I would happily do this manually, but I think that would take a good 60+ hours to calculate d prime, accuracy, and reaction times for all subjects across all trials.
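For reference, the d prime I am after is the standard z(hit rate) − z(false-alarm rate). A sketch using only the standard library; the 1/(2N) adjustment for rates of exactly 0 or 1 is one common convention, my assumption here:

```python
from statistics import NormalDist  # Python 3.8+

def d_prime(n_hits, n_misses, n_false_alarms, n_correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a 1/(2N) clamp
    so perfect (0 or 1) rates stay finite."""
    z = NormalDist().inv_cdf
    n_signal = n_hits + n_misses
    n_noise = n_false_alarms + n_correct_rejections
    hit_rate = min(max(n_hits / n_signal, 1 / (2 * n_signal)),
                   1 - 1 / (2 * n_signal))
    fa_rate = min(max(n_false_alarms / n_noise, 1 / (2 * n_noise)),
                  1 - 1 / (2 * n_noise))
    return z(hit_rate) - z(fa_rate)
```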

Apologies if this question is perhaps inappropriate here (it is a post-experiment coding question, and therefore perhaps one of general Python coding). But nevertheless, any help whatsoever would be hugely appreciated!

Best,
Bmau

Just to be clear, are you getting separate CSV files for ST and RT? I assume it’s not just one line per trial or this would be trivial.

I guess the critical thing is, what is the information in your RT list that allows you to associate it with a particular trial in the first place?

Hi Jonathon,

All information is placed into a single CSV file. Trials are split by rows, with ST and RT lists in separate columns, so all RTs are located within the row of their respective trial. It is trivial in the sense that it can be eyeballed, but again that would take multiple days (I am happy to do this as a worst-case scenario).

Oh, that’s not too difficult then. There are a few different ways to go about it.

If you aren’t committed to using Python to analyze your data, R will pretty much take care of this for you: if you read in the CSV as a data frame, it will leave the blanks in place as NA and simply ignore them when computing the relevant results.
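The same data-frame behaviour is available in Python via pandas, if you'd rather not switch languages. A minimal sketch; the column names and the inline CSV are assumptions standing in for your real file, so substitute whatever headers you actually have:

```python
import io
import pandas as pd

# Stand-in for pd.read_csv("your_results.csv"); note the blank RT on trial 2.
csv_text = """target_time,reaction_time
1.0,1.5
2.0,
3.0,3.5
"""
df = pd.read_csv(io.StringIO(csv_text))

# Blank cells come in as NaN and are skipped by aggregations,
# so missed trials need no special handling:
mean_rt = df["reaction_time"].mean()              # NaN row ignored
n_misses = int(df["reaction_time"].isna().sum())  # trials with no response
```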

If you’re committed to doing it in Python, I’d have it read in each trial as its own list. The trials with blanks will have an empty index (so, [trialnum, ST, , whatever]), and you can check whether that index holds a value greater than zero; if not, either ignore the trial or replace the value with some kind of nonsense code to flag it. Alternatively, you can go into Excel and find/replace the blanks with nonsense codes like -99 or ‘N/A’ so your lists are the right length, and just tell your code to skip a trial whenever the index matches the nonsense code.
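A sketch of that blank/nonsense-code approach with the standard csv module. The column layout, the inline CSV (standing in for your real file), and the sentinel values are all assumptions to adapt:

```python
import csv
import io

# Stand-in for open("your_results.csv"); trial 2 has a blank RT.
csv_text = """trial,target_time,reaction_time
1,1.0,1.5
2,2.0,
3,3.0,3.5
"""

valid_rts = []
for row in csv.DictReader(io.StringIO(csv_text)):
    rt = row["reaction_time"]
    if rt in ("", "-99", "N/A"):  # blank or nonsense-coded trial
        continue                  # skip it
    valid_rts.append(float(rt))

mean_rt = sum(valid_rts) / len(valid_rts)
```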
