Light version of experiment run for external use

Hi!

I would like to know if there is a way of running an experiment without a PsychoPy installation (just Python, maybe?)
I need to send it to several respondents and would love to save them the hassle of installing PsychoPy just for a one-time experiment.
Sadly, I can't use Pavlovia, as my design relies on critically small time offsets that just don't work correctly online.

Thanks in advance!
Ann

Hi Ann,

PsychoPy needs to be installed in some way in order for its scripts to run. Python is just a general purpose language: to do the things that PsychoPy does requires a bunch of other libraries to be available to Python (not just the PsychoPy modules themselves, but all of the other external dependencies that PsychoPy requires, to do things like displaying stimuli, playing sounds, connecting to hardware, and so on).

And “just Python” is not a given either: Python exists in Linux, and an old version ships with Mac OS, but it doesn’t come as standard with Windows. And as you won’t know what version the users will have, if they have one at all, they’ll probably need to install a specific Python anyway. And then you still need the PsychoPy related stuff, and installing this into a Python installation manually is not trivial for people who aren’t used to this sort of thing (and even for those who are).

Given that, you might as well install the PsychoPy standalone application (which bundles its own version of Python, as well as all the other dependencies). It’s a one-step process that saves you from all of these hassles.

If you can’t run your experiment online, there just isn’t a practical alternative to installing PsychoPy locally, whether stand-alone or otherwise.

Thank you!

It would be useful to know what issues you found here: although the timing online won’t be as precise as when running locally, it is supposed to be good enough for most purposes. If you can report your specific issues with as much detail as possible, either:

  • the developers might be able to incorporate fixes for them,
  • we might be able to spot issues with your implementation that could be addressed, or
  • if it can’t be fixed, at least other people would become aware of the limitation.

Would love to!
I built my experiment in Builder. It presents visual stimuli from PNG files and an auditory tone, with time offsets ranging from 0 ms to 300 ms (in 50 ms steps), and collects the participant's responses via keyboard.
The way I dealt with the offsets is not an internal component in Builder but just timestamps in the corresponding Excel file (i.e. columns dictating the picture start and sound start times).
When I try to run it through Pavlovia, the offsets seem visibly different from what was planned.
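For anyone wanting to reproduce this setup, here is a minimal sketch of generating such a conditions file in plain Python. The column names `pic_start` and `sound_start` and the 0.5 s baseline onset are assumptions for illustration, not Ann's actual values; in Builder you would point each component's "start (s)" field at the matching column (e.g. `$sound_start`).

```python
import csv

# Assumed baseline onset of the picture, in seconds into the routine.
PIC_START = 0.5

# One row per condition: sound onset lags the picture by 0-300 ms in 50 ms steps.
rows = [
    {"pic_start": PIC_START,
     "sound_start": round(PIC_START + offset_ms / 1000.0, 3)}
    for offset_ms in range(0, 301, 50)
]

# Write a CSV that a Builder loop can load as its conditions file.
with open("conditions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["pic_start", "sound_start"])
    writer.writeheader()
    writer.writerows(rows)
```

This yields seven conditions (offsets 0, 50, 100, ..., 300 ms), which a Builder loop can then randomise across trials.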