A new experiment builder language with PsychoPy backend

Hi,

I’m working on a new language for experiment specification. It is conceptually similar to Builder, but text-based. I chose PsychoPy as the first fully supported run-time target because I find it the most feature-complete option, and it is Python-based, which makes things a lot easier.

I would really appreciate any feedback on both language and the generated code.
I’m far from being an expert in PsychoPy so I’m sure that generated code can be greatly improved.

To get a quick overview I recorded a set of videos available here:

The first video is a short (~3 min), fast-paced overview of the language, while the other two are a getting-started guide and a Posner Cueing Task implementation inspired by Jon’s and Becca’s previous PsychoPy tutorials.

The project is free and open-source:

Any feedback is greatly appreciated.

Thanks!
Igor


Are you planning to add JavaScript output?

Nice idea to have a DSL like this. Another good export target would actually be the XML format of Builder .psyexp files from your DSL specification (this static format would be relatively straightforward compared to generating executable code). Then Builder itself could be used to generate either Python or JavaScript from it. The .psyexp format is discussed here:

@Michael Thanks for the feedback. Yes, we have already thought about generating PsychoPy Builder XML. That would be a nice way to reuse Builder’s existing code generation capabilities (and, @Uli, get JavaScript support for free). The Builder XML seems relatively simple, so it shouldn’t be hard to do.
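As a rough illustration of why this export path is attractive, here is a sketch of emitting Builder-style XML with Python’s standard library. The element and attribute names below are simplified placeholders; the real .psyexp schema has many more required attributes and settings.

```python
import xml.etree.ElementTree as ET

def make_psyexp(routine_names):
    """Build a simplified, .psyexp-like XML document (illustrative only;
    NOT the full Builder schema)."""
    root = ET.Element("PsychoPy2experiment", version="2020.1")
    ET.SubElement(root, "Settings")  # real files carry many settings here
    routines = ET.SubElement(root, "Routines")
    for name in routine_names:
        ET.SubElement(routines, "Routine", name=name)
    # Builder also records the order of routines in a Flow element
    flow = ET.SubElement(root, "Flow")
    for name in routine_names:
        ET.SubElement(flow, "Routine", name=name)
    return ET.tostring(root, encoding="unicode")

xml_str = make_psyexp(["fixation", "cue", "target"])
```

Because the format is a static data description rather than executable code, a generator like this only has to serialize the DSL model, not reason about run-time behaviour.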

I just worked my way through your second video. Judging from a few snippets, the generated Python code seems to be modelled very closely on what Builder generates. If so, although it tends to be very verbose, it does incorporate best-practice techniques for stimulus timing.

To get buy-in, I guess it would be useful to run tests comparing performance measures of PsychoPy Builder-generated code for a given experiment specification against equivalent code generated from pyFlies.

A question I have would be about the extensibility of the DSL itself. e.g. in your third video, you give an example of specifying a range of possible ISI values with an expression like 500..900 choose. In PsychoPy Builder, this would require using a snippet of custom Python or JavaScript code. In that case, we would also need to specify the granularity of values between those limits (e.g. using 50 ms steps so that the ISI is an integer number of 60 Hz screen refreshes). But there could be any number of other constraints that could be applied (e.g. using a gaussian distribution of values rather than a rectangular one, selecting from a fixed set with or without replacement, using a non-ageing decay function to avoid predictability and so on). That is all possible because the user has access to a complete programming language to add that level of functionality. And this is not just due to access to Python itself, but the functions provided by libraries like numpy and so on.
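To make those constraints concrete, here is a plain Python/NumPy sketch (not pyFlies or Builder code) of a few of the ISI-sampling schemes mentioned above:

```python
import numpy as np

rng = np.random.default_rng(42)
FRAME_MS = 1000 / 60  # one screen refresh at 60 Hz, ~16.7 ms

def quantize(ms):
    """Round an interval to a whole number of screen refreshes."""
    return round(ms / FRAME_MS) * FRAME_MS

# Rectangular distribution in 50 ms steps (each step = 3 refreshes at 60 Hz)
isi_uniform = rng.choice(np.arange(500, 901, 50))

# Gaussian around 700 ms, clipped to 500-900 ms, then frame-quantized
isi_gauss = quantize(float(np.clip(rng.normal(700, 100), 500, 900)))

# Fixed set sampled without replacement across a block of trials
isi_block = rng.choice([500, 600, 700, 800, 900], size=5, replace=False)

# Non-ageing (exponential) interval, so elapsed time doesn't predict onset
isi_exp = 500 + float(rng.exponential(scale=150))
```

Each of these is a one-liner once a full numerical library is available, which is exactly the extensibility question being raised for the DSL.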

Similar needs would arise for implementing custom experimental designs, particularly given the many ways in which randomisation and selection of conditions can be specified. How much flexibility does the pyflies DSL provide for arbitrary design/procedure specifications like that?

pyFlies is built with textX, a Python library for developing DSLs that enables easy evolution of the language, so the language should be straightforward to extend and improve. DSLs are by nature meant to evolve as we gather new insights about the domain (my background is in computer science and software engineering, so I’m still not aware of all the corner cases in psychological experiment design) and about how users would like to use the language. That’s one of the reasons I’m here :slightly_smiling_face:.

Currently, there is no direct access to the underlying target platform from pyFlies expressions, but it would indeed be a good idea to provide it in a controlled manner. For example, operations could be mapped to target-specific functions in the target configuration part of the experiment.

Something like:

...
500..900 gaussian_rnd
...
target PsychoPy {
    gaussian_rnd = some.module.gaussian
}

The generator would then emit an import statement for the given function along with its usage. Although this would be platform-specific, it would still give the experimenter a clear signal that the experiment contains platform-specific mappings, and that suitable substitutions must be provided if the experiment is to be ported/generated to a different platform.
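A minimal sketch of how such a mapping could drive code generation (all names here are hypothetical illustrations, not actual pyFlies internals):

```python
# Hypothetical mapping table, as it might be parsed from a
# `target PsychoPy { ... }` block in the experiment specification.
target_map = {
    "gaussian_rnd": "numpy.random.normal",
}

def generate_call(op_name, *args):
    """Emit an import line and a call expression for a mapped operation."""
    dotted = target_map[op_name]
    module, func = dotted.rsplit(".", 1)
    import_line = f"from {module} import {func}"
    call = f"{func}({', '.join(map(str, args))})"
    return import_line, call

imp, call = generate_call("gaussian_rnd", 700, 100)
# imp  -> "from numpy.random import normal"
# call -> "normal(700, 100)"
```

The generator collects the import lines into the script header and splices the call expressions wherever the mapped operation appears in the DSL.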

One of the current constraints, to be addressed in the next version, is that all expressions are evaluated in Python during compilation. This makes developing generators easier, as they receive concrete values, but it is problematic, especially for generating random values. If, instead, we translated all pyFlies types and operations to the target platform and evaluated them at runtime, it would be straightforward to call arbitrary platform functions.

It is also possible to embed platform-specific code inside pyFlies, e.g. custom Python snippets. The VS Code editor extension can be instructed to provide editor services (syntax highlighting, code completion, etc.) for the embedded languages. But that would tie the experiment even more closely to the target platform, so in my opinion that technique should be used only as a last resort.