If isTrials is checked then the data file will have a separate row for each iteration of the loop. Programmatically, the command thisExp.nextEntry() is processed at the end of each iteration of the loop.
If isTrials is unchecked then any data saved during the loop will replace existing data rather than appearing on the next row. If you accidentally uncheck it you will therefore only get data from your final trial and will need to process the log files if you want to recover lost data. Online, this was implemented in 2022.2.0.
If you want to choose when data is saved to a new row then isTrials should be unchecked and thisExp.nextEntry() should be added accordingly.
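For example, in a Python code component (the variable names here are hypothetical):

thisExp.addData('word', currentWord)  # currentWord is a hypothetical variable
thisExp.addData('rt', responseTime)   # responseTime is a hypothetical variable
thisExp.nextEntry()  # start a new row in the data file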
In Python colours are simply defined by a list of three numbers, representing red, green and blue on a scale of -1 to +1.
Online you have to define a colour object using these three numbers. Builder components create these colour objects automatically, as do text.setColor(newColor) or polygon.setFillColor(newColor) in code components (where newColor can be either a colour name or a list of three numbers).
Polygon border colours can also be set via the component. On the other hand, if you want to set a polygon border colour in code then you need to change the JavaScript side from polygon.setLineColor(newColor) to polygon.setLineColor(new util.Color(newColor)).
To avoid this manual translation, I create colour objects in a Both code component in my setup routine.
white = new util.Color("white");
grey = new util.Color([0, 0, 0]);
transparent = null;
Then polygon.setLineColor(white) or text.setColor(transparent) will work in an Auto translated code component. Do not use polygon.setBorderColor(newColor), since this will fail online.
If you want to split the text from an editable textbox into words (for example in a free recall task) online, there are two issues that need to be overcome.
The Python function .split(), which splits on any whitespace, is auto translated into .split(" "), which only splits on spaces.
The text in the data file has no new line breaks at all. It appears that the newline characters in an editable textbox are stripped from the text when the routine ends.
To solve the first issue, define a split function in Before Experiment.
Python
def split(item):
    return item.split()
JavaScript
function split(item) {
    return item.split(/[\s\n]+/);
}
To solve the second issue, the split function must be called in Each Frame code at the point the routine is ending. To do this, put the code component at the bottom of the routine.
if continueRoutine == False:
    words = split(textbox.text)
This is the equivalent of the Use PsychoPy version setting in PsychoPy Builder. Changing the survey version does not change the survey JSON file. The only change is the URL that you use to recruit participants, which determines which interpreter turns your JSON file into a survey to be viewed by your participants.
2024.1.0 can only cope with a single block and is therefore not suitable for multi-block surveys imported from Qualtrics. Variables do not include the block name, which makes logic and data files easier to work with. Used in embedded surveys using PsychoPy 2024.1.x.
2024.2.0 allows for multi-block designs but has some difficulties with accessing variables. Used in embedded surveys using PsychoPy 2024.2.x.
2024.3.0 allows researchers with a Pavlovia licence to save data at the end of each page, instead of data only being saved at the end of the whole survey. Whether you can use this data will depend on research ethics approval. Used in embedded surveys using PsychoPy 2025.1.x.
When you return to the survey page you will see the selection back at 2024.2.0 rather than the version you most recently selected. The version seen by your participants will depend on the URL you give them. If you chose to save on each page, this choice will be remembered, even though you cannot change it unless you reselect 2024.3.0.
When you make changes to a Pavlovia Survey, they aren't saved immediately. Make sure you press the save button (if it is green) before you try to pilot or run it.
Online, if an attribute of a visual stimulus is updated Each Frame then the first frame of presentation will use default values (e.g. position [0,0]), even if the stimulus starts after the routine has started. To solve this issue, set suitable default values in Begin Routine.
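For example, if your Each Frame code moves a stimulus as a function of t, give it the value that formula produces at t = 0 (a sketch; the stimulus name and position are hypothetical):

stim.pos = [-0.5, 0]  # Begin Routine: the position the Each Frame formula gives at t = 0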
Replace the local copy of the same file with the downloaded version.
Open the old file in Builder and resync.
There is no need to replace old Python, JavaScript or index.html files since they can be recreated.
Another possibility is to revert in GitLab directly, but I got an error message when I attempted this route.
Sorry, we cannot revert this commit automatically. This commit may already have been reverted, or a more recent commit may have updated some of its content.
If you try to run an experiment online and you get a blank screen with the word initialising… in the centre, you need to investigate further before asking for help.
This indicates a launch error, which is usually a syntax error in your JavaScript file that wasn't caught by the checks during synchronisation, and may be in a Before Experiment code component tab. The most common cause of this error is trying to import a Python library, such as random or numpy, which don't exist in JavaScript. Use Developer Tools to look for more information and then search for that error on the forum.
If you post Python code, the indentation is essential, both for forum readers and for PsychoPy itself. To maintain it, you need to use preformatted text, which you can select from the cog menu.
You can also manually surround your code with one or three backticks.
When posting code, please clarify which tab it comes from: Before Experiment, Begin Experiment, Begin Routine, Each Frame, End Routine or End Experiment.
Please do not post code from Python or JavaScript files generated by Builder unless you have a question directly related to the compilation process.
Please do not post JavaScript code unless you are pasting the lines related to an error message in developer tools, using a JavaScript only code component or have modified the autogenerated code by using a Both code component (in which case you should also show your Python code and highlight the changes to the auto generated code you made).
If you are using Coder, please try to only post the code relevant to your issue.
PsychoPy Builder components can only present to a single screen. However, locally you can create a window on another screen and present to it using code components.
Select which screen is used by your main components in Experiment Settings / Screen
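Create the second window, and any stimuli you will draw to it, in a Begin Experiment code component. A minimal sketch (the window size and screen number are assumptions):

win2 = visual.Window(size=[800, 600], screen=1, fullscr=False)
message = visual.TextStim(win2, text="")  # a stimulus attached to the second window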
Use object.draw() and win2.flip() commands whenever you want to update the objects.
message.text = "Press space to start"
message.draw()
win2.flip()
Close the window in End Experiment
win2.close()
Note that there may be some inconsistencies in screen numbering. When I set both win and win2 to screen 2, they appear on different screens. I think this is because the code component starts numbering from 0, and Experiment Settings start from 1.
Also, while in principle it should be possible to make the second window full screen using fullscr=True, when I just tested it, they both appeared on the same monitor (i.e. with only one of the two windows visible). It might be worth trying to create both windows in code.
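An untested sketch of that idea (the screen numbers are assumptions):

win1 = visual.Window(screen=0, fullscr=True)  # intended to take the place of Builder's main window
win2 = visual.Window(screen=1, fullscr=True)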
If you have a lot of resources then you might not want to download them all at the start of the experiment. If they aren't too big, then you might even be able to download them during an ITI before each trial.
If your loop points at your conditions spreadsheet and your spreadsheet contains the full paths of your resources, then they may get downloaded automatically. You can stop this either by changing the spreadsheet name into a variable or by removing the folder names and/or file extensions from the resources.
To download one file at a time during an ITI, put the following code in Begin Routine in a JavaScript code component.
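A sketch of that code, based on the note below and the Each Frame check further down (imageFile and the images folder are taken from the surrounding text):

psychoJS.serverManager.prepareResources([
    {name: "images/" + imageFile, path: "images/" + imageFile}
]);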
Note that the name and path are identical strings. In this case imageFile is the column name in my spreadsheet and the images are in a folder called 'images'.
To ensure that the ITI doesn't end too early (before the resource is available) but usually lasts for the intended ITI duration, put the following in Each Frame:
if ((t > .25) && (typeof psychoJS.serverManager.getResource("images/" + imageFile) !== "undefined")) {
    continueRoutine = false;  // Move to the next routine
}
The ITI routine will need a component with a blank duration. To run the same experiment locally, change the code component type to Both and put the following on the Python side:
if t > .25:
    continueRoutine = False
If you want to check which resources are currently available, you can inspect the server manager in the browser console.
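For example (a sketch; the _resources map mentioned in the comment is a private PsychoJS attribute, so treat that as an assumption about internals):

console.log(psychoJS.serverManager);  // expand _resources in the console to see each resource and its status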
To download a list of resources at the same time (for example while the participant is reading instructions or doing hard coded practice trials) create a list of {name: resourcePath, path: resourcePath} dictionaries and use that list in prepareResources.
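A sketch, where fileList is a hypothetical array of resource paths gathered from your spreadsheet:

let resourceList = [];
for (const thisFile of fileList) {  // fileList is hypothetical
    resourceList.push({name: thisFile, path: thisFile});
}
psychoJS.serverManager.prepareResources(resourceList);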
Showing different trials to a returning participant
Today's tip is a little different. With the help of ChatGPT, I have written a solution which would allow participants to press escape during a local experiment and then be shown different trials when they restart. This will only work locally, not online.
Put the following code into the Begin Experiment tab of a Python code component.
The code counts the number of rows in a conditions file and then looks for values of trials.thisIndex in data files starting with the current participant number, creating useRows as a list of indices that haven't yet been seen.
import os
import pandas as pd
from glob import glob

# Set variables
conditions_file = "conditions.xlsx"  # Can be .xlsx or .csv
loop_name = "trials"

# Determine file type and read data
if conditions_file.endswith('.xlsx'):
    df_conditions = pd.read_excel(conditions_file)
elif conditions_file.endswith('.csv'):
    df_conditions = pd.read_csv(conditions_file)
else:
    raise ValueError("Unsupported file format. Use .xlsx or .csv.")

# Get the total number of rows
total_rows = len(df_conditions)

# Get all CSV files starting with participant ID
csv_files = glob(os.path.join("data", f"{expInfo['participant']}*.csv"))

# Extract trials.thisIndex values
trial_indices = set()  # Using a set for fast lookup
for file in csv_files:
    try:
        df = pd.read_csv(file)
        if loop_name + '.thisIndex' in df.columns:
            trial_indices.update(df[loop_name + '.thisIndex'].dropna().astype(int))
    except Exception as e:
        print(f"Error reading {file}: {e}")

# Generate the full range of expected indices
all_indices = set(range(total_rows))

# Find missing indices
useRows = sorted(all_indices - trial_indices)

# Output results
print('useRows', useRows)
Then set the Selected rows field of the loop to $useRows.
Sometimes I want to load the stimuli before the main loop starts, so that I can, for example, insert prospective memory targets or N-Back trials, or apply constraints to the randomisation. The easiest way to do this is to add a trial handler in a code component. This solution is taken from my PsychoPy Code Component Snippets - Google Docs.
In Python, the code for a trial handler is:
myData = data.TrialHandler(
    nReps=1,
    method='sequential',  # or 'random'
    extraInfo=expInfo,
    originPath=-1,
    trialList=data.importConditions('conditions.xlsx'),  # Add ", selection=useRows" inside the brackets for selected rows
    seed=None,
    name='myData')
# Add the following for random
sequenceIndices = []
for Idx in myData.sequenceIndices:
    sequenceIndices.append(Idx[0])
In JavaScript:
myData = new TrialHandler({
    psychoJS: psychoJS,
    nReps: 1,
    method: TrialHandler.Method.SEQUENTIAL,  // or .RANDOM
    extraInfo: expInfo,
    originPath: undefined,
    trialList: 'conditions.xlsx',
    // or trialList: TrialHandler.importConditions(psychoJS.serverManager, 'conditions.xlsx', useRows),
    seed: undefined,
    name: 'myData'});
// Add the following for random
sequenceIndices = myData._trialSequence[0];
Access individual values using: aValue = myData.trialList[Idx]['variableName'] for sequential and aValue = myData.trialList[sequenceIndices[Idx]]['variableName'] for random.
The spreadsheet can be replaced with a list of dictionaries, e.g.
conditions = []
for Idx in range(10):
    conditions.append({"cue_ori": 10 * Idx, "target_x": 300 * (Idx % 2)})
Please do not share with every designer, unless you have a particularly useful survey to share. Instead, give individual Pavlovia users Write and/or Read access by entering their username next to your own.
If you get the username wrong, you should get the following error:
If you use visual components, then they are all created during Begin Experiment and then updated as you change the parameters, whether set every repeat or every frame.
If you create visual objects in code, you should mimic this behaviour. Create the objects in a Begin Experiment code component tab, and then make changes in code. If you create visual objects in Begin Routine within a loop then the memory load will increase with the total number of objects created, which can eventually cause PsychoPy to crash.
Thank you to @dgrtz for identifying this issue. While the memory overload may only occur online, I would also recommend this tip for local experiments as good practice.
There may be exceptions, however, if you do not know how many simultaneous objects you will need at Begin Experiment. In that case, ensure that the total number of new objects is kept to a minimum.
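A hedged sketch of the pattern, assuming at most ten simultaneous text stimuli (the pool size and the words list are hypothetical):

# Begin Experiment: create the pool once
stims = [visual.TextStim(win, text="") for Idx in range(10)]

# Begin Routine: reuse the pool by updating parameters instead of creating objects
words = ["alpha", "beta"]  # hypothetical word list for this routine
for Idx, stim in enumerate(stims):
    stim.text = words[Idx] if Idx < len(words) else ""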
People often want help with runtime errors on the forum, but either post a pilot link or a link to their project page. In the former case the pilot link has often expired before it gets used and in the latter the project page is not visible to other users unless the project is set to public. While you could set the project to public, this would mean that the person trying to help you would need to make a fork and run their own copy. They would also have access to your data (if there is any).
If your experiment isn't running properly then you can safely set it to 'running', add some credits (if you don't have a licence) and share the run link. Turn off 'save incomplete results'. If you want to make sure that the experiment isn't finished, add a routine at the end (or at any point after the place you want help with) with a text component and no ending condition. This approach is also suitable for piloting a working experiment, apart from checking that the correct data are being saved.
If the error doesn't occur near the start, either add a copy of the routine that is failing to the beginning of the flow or disable some of the intervening routines. This is especially important if you have embedded surveys with required responses. If you can't do this, then try to replicate the error in a smaller version of your experiment.
The bottom line: make it as easy as possible for other forum users to recreate your error, especially if you haven't managed to identify which line of code is causing it via the developer tools.
There are three main levels of randomisation that typically exist within an experiment.
Random order of trials
Random order of conditions
Random assignment to groups
The order of trials is set by making the loop sequential, random or fullRandom. When the loop is sequential, the trials themselves could be in a specific order (e.g. of increasing difficulty), a pre-randomised order or a shuffled order of selected rows. Having a random order has the advantage of being easier to set up, since you don't need to worry about randomising your spreadsheet. On the other hand, it does add noise to your data: some participants may get longer runs of similar trials than others. If you have four trial types then there is about a 25% chance of a run of at least five in a row of the same type in a block of 100 trials.
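If you want to check this sort of figure for your own design, a quick simulation is straightforward (a sketch; 10,000 repetitions is an arbitrary choice):

import random

reps = 10000
hits = 0
for rep in range(reps):
    seq = [0, 1, 2, 3] * 25  # four trial types, 25 of each
    random.shuffle(seq)      # like a loop set to random
    run = 1
    for i in range(1, len(seq)):
        run = run + 1 if seq[i] == seq[i - 1] else 1
        if run >= 5:  # found a run of at least five identical types
            hits += 1
            break
print(hits / reps)  # should land near the ~25% quoted above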
If all your participants get the same random order, this will reduce the noise in your data, but you risk having artefacts caused by the specific random order you chose.
Randomising the order of your conditions is a far bigger issue. Later conditions may have worse performance due to fatigue or improved performance due to practice. If you have a small number of different conditions (two or three) and a large number of participants, the easiest option is to randomise the order and record the order chosen as a variable you can include in your analysis. With a larger number of conditions you may need to select a subset rather than allowing all possible permutations if you want to include the order as a variable. Alternatively, you may decide to fix the order, especially if your conditions are different tasks so practice is less likely to be important.
If you think order is likely to affect your results and you have tens rather than hundreds of participants, the best option is probably to counterbalance the order to ensure that similar numbers of participants are assigned to each order. This advice also goes for other between-participants variables. If you are assigning 100 participants to four groups then there is a 95% probability that each group will contain between 17 and 33 participants, but that still means that your largest group could be nearly twice as big as your smallest. If this is too much potential variation then you need to counterbalance.
I won't go into the details of how to counterbalance in this tip, but there are three basic methods.
Assign each participant a consecutive participant number and allocate based on participant number % (modulo) the number of groups (see the sketch after this list).
Assign each participant to a group using a counterbalance routine or app.
As above but also take into account non-finishers so that if a participant doesnāt complete your experiment their group assignment can be reassigned to a later participant.
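A minimal sketch of the first method, assuming numeric participant IDs and four groups (both assumptions):

nGroups = 4  # assumed number of groups
groupIdx = int(expInfo['participant']) % nGroups  # gives a group index from 0 to nGroups - 1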
PsychoPy is a set of open-source Python libraries and an application which is free to use. Online, the companion PsychoJS libraries are also open-source, and Pavlovia offers free server space on GitLab. Experiments hosted on Pavlovia do, however, require an institutional licence (£1800 + VAT per year for unlimited users and responses) or cost credits (24p + VAT per session) to run, but this is considerably cheaper than comparable offerings. Gorilla charges 99p per session or between £4580 and £18600 for a departmental licence, depending on the number of users and respondents. My last Qualtrics renewal was £15855 for 600 users.
Why is Pavlovia so much cheaper? There are many reasons, but the one I'm going to focus on here is support. Open Science Tools does not have the revenue to run a customer support team or offer free email support to its users. I am a half-time Science Officer at OST and my job is to support consultancy clients and help test the releases. This forum is a community support forum and the support I give here is primarily as a PsychoPy superuser rather than as a Science Officer. I try to help as many people as I can, but we rely on our community to help each other. Users who have shown themselves to be particularly helpful can be identified by a blue pentagon 'flair'. Be kind to them. It's not their job to help you and, to a large extent, it's not mine either.
If you can't get the support you're looking for from the community, there is another option. You can help support PsychoPy by engaging the consultancy services of the science team to build or debug your experiments. Science team costs are £70 + VAT per hour for academic and charitable institutions, and you start with a free scoping chat, which is sometimes all that's needed.
In addition to our basic licence, we now also offer two upgrades. For £2000 + VAT per year you also get three one-hour virtual workshops, and for £5000 + VAT per year (comparable with Gorilla's cheapest departmental offering) the workshops are supplemented by 40 one-hour support clinics.