
Psycholinguistic experiment


#1

Hello everyone!
I am very new to PsychoPy and I’m trying to build a priming experiment.
My design is structured as follows:

I have 4 grammatical constructions, and for each construction there are 10 sentences (so 40 stimuli). I’m using these sentences as primes.

As target words, I have 3 different types of stimuli: non-words, words that pertain to the grammatical construction (I’ll call them words_construction), and words that pertain to the specific sentence (I’ll call them words_sentence).

My spreadsheet therefore has 4 columns:

  1. prime (sentence)
  2. target1 (word_sentence)
  3. target2 (word_construction)
  4. non_word

I would like to show the sentence and then show either one of the two target words (which are associated with the sentence and therefore cannot be randomised) OR a non-word (which can of course appear in random order). I want to record reaction times to see whether the lexical decision is faster with target1 or with target2.

Is it doable? Do you have any suggestion on how to do it? Thank you in advance :smile:


#2

Hi Lucia,

If you are happy that the choice between the three types of stimuli is random from trial to trial (and hence likely slightly unbalanced within each subject), then you could insert a code component in Builder and put this code in the “Begin routine” tab, so that a stimulus type is selected and recorded at the start of each trial:

# choose a representation of one of the variable names:
options = ['word_sentence', 'word_construction', 'non_word']
shuffle(options) # randomise the list
stimulus_type = options[0] # select the first entry

# record the choice in the data:
thisExp.addData('stimulus_type', stimulus_type) # record which type was chosen
stimulus_text = eval(stimulus_type) # now get its actual contents
thisExp.addData('stimulus_text', stimulus_text) # store the text too, even though
# it occurs in one of the other columns (a single
# column makes analysis easier).

You now have a variable called stimulus_text which can be used as required, e.g. put $stimulus_text in a text component, set to update every repeat.

Make sure that the code component is above the text component, so that the variable is chosen before the text component needs to refer to it.

Some of this stuff is actually a little bit tricky if you are not familiar with programming, and Python in particular. i.e. here we are using string representations of variable names (e.g. 'word_sentence') for certain purposes (e.g. storing in the data file). But we then need to convert that string of characters into the actual variable name (word_sentence without the quotes). That is done using the eval() function.

i.e. stimulus_type is a variable that contains just a literal string of characters like 'word_sentence'. When we “evaluate” that string of characters, we turn it into the actual variable name word_sentence (no quotes), that actually points to your real sentence content ('Hello world' or whatever). This distinction may or may not make sense to you, but it is one of the flexible things about Python that makes it easy to achieve some things like this.
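The distinction can be seen in a tiny standalone sketch (outside Builder, so the variable here is just a placeholder standing in for a value that would normally come from your conditions file):

```python
# Placeholder standing in for a column read from the conditions file:
word_sentence = 'dog'

stimulus_type = 'word_sentence'      # a string: the *name* of a variable
stimulus_text = eval(stimulus_type)  # the *contents* of that variable

print(stimulus_type)  # the literal characters: word_sentence
print(stimulus_text)  # the value the variable points to: dog
```

So the string is what gets stored in the data file, while eval() turns it back into the variable it names when you need the actual text to display.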


#3

Hi Michael,
thank you SO MUCH for the answer! I’ll try this right away! I’m unfortunately not familiar with Python (but it’s on my to-do list) and I absolutely don’t have a “programmer’s mind”, but I know R quite well, so I get the hang of what you said :slight_smile:
I’ll try using your suggestions, thanks again!


#4

Hi Lucia,

I’m not sure I completely understand your design, but would it perhaps be possible to pair sentence and target type in advance (and randomise conditions in the input file)? Something along these lines:

[attached example conditions file: sentences]

(I’m not sure if sentences will actually be repeated. If this is not the case, you might want to use an approach similar to the one that Michael described.)

Jan


#5

Hi Jan,
thank you! Yes, I thought of doing that in the input file in Excel, but I was trying to see if there was a way to do this in PsychoPy directly. The method suggested by Michael turned out to work just perfectly! I still have to check the balance of the stimuli, though, but I will ask my supervisor about it.


#6

Jan’s (@jderrfuss) approach is definitely superior if proper counter-balancing is needed at the within-subject level, rather than relying on random sampling to take care of it across a large number of subjects. Plus it is simpler.
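If sentences are not repeated, a middle ground is to counter-balance the assignment in advance: give each prime exactly one target type, with the three types spread as evenly as possible across the list. A minimal sketch, assuming 40 primes and the three type labels used above:

```python
import random
from collections import Counter

n_primes = 40
types = ['word_sentence', 'word_construction', 'non_word']

# Repeat the type labels until they cover all primes, truncate to the
# exact number needed, then shuffle so the pairing with primes is random
# while the counts stay (almost) balanced.
assignment = (types * (n_primes // len(types) + 1))[:n_primes]
random.shuffle(assignment)

print(Counter(assignment))  # counts differ by at most one across types
```

The i-th entry of assignment would then be paired with the i-th prime when writing the conditions file, giving per-subject balance rather than relying on trial-by-trial random sampling.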