Hi. So I am soon (a few weeks?) about to do an alpha release of a Python-based package which will allow people to run Bayesian Adaptive Design on their experiments from within PsychoPy. Think along the lines of staircases, but Bayesian, which takes you to QUEST / Psi-Marginal etc., but better, and then add the ability to simultaneously generate multiple design variables (aka stimulus properties) so as to maximise information gain per experimental trial. The primary set of experiments will be based around delayed and risky choice tasks, but the approach can be extended to any 2-choice task, such as yes/no or 2AFC.
I was hoping to get some advice or pointers on whether my general approach is sound or needs some tweaks. The plan is to make it as easy as possible, and to have it work on Mac, PC, MS Surface, etc.:
Install Anaconda Python 3.
Download my toolbox code containing the PsychoPy experiments and my Python code.
Run one of the PsychoPy experiments.
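To reduce first-run headaches, I'm thinking of shipping a little sanity-check script alongside the experiments. This is just my own sketch, not part of the toolbox yet; it checks the assumptions the experiments make (Python 3.6+ and the core scientific packages):

```python
# Sanity check for a fresh Anaconda install (a sketch, not part of the
# toolbox): verify Python version and that the core packages import.
import sys

assert sys.version_info >= (3, 6), "the toolbox assumes Python 3.6+"

missing = []
for name in ("numpy", "scipy", "pandas"):
    try:
        __import__(name)  # import by name without binding it
    except ImportError:
        missing.append(name)

if missing:
    print("please conda install: " + ", ".join(missing))
else:
    print("environment looks good")
```

Something like this could run at the top of each experiment and fail early with a readable message rather than a traceback mid-session.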
Question 1: Am I right in going for Python 3 here? I’ve managed to get PsychoPy 1.90.1 and Python 3.6 working on both Mac and PC, but I have had some issues making it work on a Mac with Python 3.7. So as far as I’m concerned it works, but I’m not clear whether I’m making life harder for myself by going for Python 3.
Question 2: My objective is simplicity for the experimenter. I want to maximise the probability that this will work first time with no headaches. But if I wanted to use other Python packages, is it best to:
Figure out some way of pip- or conda-installing these packages, then figure out how to make PsychoPy aware of them?
Go old school and just copy/paste other people’s packages into mine? This seems easiest, but it’s potentially flawed in a way I’m not aware of yet.
Or… just really don’t, because there is no reliable method?
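For Option 1, the low-tech end of what I had in mind (a sketch, untested from inside PsychoPy itself) is to ship pure-Python dependencies in a folder next to the experiment and put that folder on the module search path at the top of the experiment script; "packages" is a hypothetical folder name here:

```python
# Make the interpreter that runs the experiment aware of bundled packages
# (a sketch): prepend a "packages" folder, shipped next to the experiment,
# to sys.path before any other imports.
import os
import sys

# assume this runs at the top of the experiment script, where the current
# working directory is the experiment's folder
extra = os.path.join(os.getcwd(), "packages")
if extra not in sys.path:
    sys.path.insert(0, extra)

# from here on, `import somepackage` also searches ./packages/
```

No idea yet whether this fights with how Standalone PsychoPy sets up its own paths, hence the question.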
I haven’t personally tried anything on Python 3.7 yet. If there are fixes we need to make to PsychoPy (keeping it compatible with previous versions if possible) then do let us know (or a fix is even better!).
Can I suggest Option 4? For standalone installs I’d probably be willing to add your package and dependencies into PsychoPy, and you can just tell your users “install psychopy and you’re good to go” (e.g. we do this with psychopy_ext by Jonas Kubilius). Caveat is that if any dependencies are large or hard to compile I reserve the right to whinge about it a bit.
For non-Standalone users (and to make my life easier), yes, you should release DARC as a Python package on PyPI and set up your dependencies using the standard pip methods.
Try not to copy/paste other people’s code into yours. It means you don’t benefit from upstream bug fixes, and soon you become incompatible.
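Declaring dependencies the pip way is only a few lines. A minimal setup.py sketch (the package name, version, and dependency list here are placeholders; adjust to whatever DARC actually needs):

```python
# setup.py -- minimal sketch; name, version, and pins are placeholders
from setuptools import setup, find_packages

setup(
    name="darc-experiments",   # hypothetical PyPI name
    version="0.1.0a1",         # alpha release
    packages=find_packages(),
    python_requires=">=3.6",   # matches the Python 3.6 decision above
    install_requires=[         # pip installs these automatically
        "numpy",
        "scipy",
        "pandas",
    ],
)
```

With that in place, `pip install darc-experiments` pulls the dependencies for free, and we can reuse the same metadata if we later bundle it into Standalone.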
I’m not necessarily after the cutting-edge features of Python, so I’ll stick with 3.6 to make life easier. I’m happy to help with compatibility, although I’m still pretty new to Python, and I don’t know how much of it is just editing Python code as compared to knowing about app development. If it’s relatively easy but just needs some person-hours, I’d be willing to try. But you’ve certainly not warned me away from 3.6, so I’ll stick with it for the moment.
I’ll bear in mind your offer of Option 4, bundling it into PsychoPy. That sounds pretty awesome, but it’s also something that will require my code to be much more polished and stable. So I’ll get back to you on this further down the line, after I get some folk to test it, find bugs, and solidify the API. Sounds like an excellent possibility. Would that work as a Git submodule?
I am trying to keep things simple by just using core packages like scipy distributions, numpy, pandas, etc., but if I’m forced into using other packages then I’ll explore the PyPI option. I’m still fairly green with Python (although rapidly picking it up), so it’s just a matter of learning packaging and so on.
I think the approach I’ll use is to proceed stepwise: get the core functionality and API down, then maybe bundle it with PsychoPy, then make incremental new features available over time. Does that sound like a reasonable approach? You have way more experience at software development than me.
I started this work way back in 2014, as an implementation of Psi-Marginal. But I met Tom Rainforth at the Machine Learning Summer School in Tübingen in 2015, and we decided to use his expertise to make it more efficient.
Quest+ was published while our code was in development, and in many ways it is very similar. Both can deal with multiple stimulus parameters. Ours is restricted to binary responses; I can’t quite tell if Quest+ is similarly restricted, and would have to dig into it again to check.
The bad package provides domain-general Bayesian Adaptive Design.
The darc package uses bad and implements various Delayed And Risky Choice tasks and models.
In the future there could also be a psycho package, which would also use bad to implement various psychometric tasks. At the moment it is not clear whether that is worthwhile, given that it would overlap heavily with Quest+. But the point is, you could.
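To make that layering concrete, here is a hypothetical sketch. The class names and signatures are illustrative, not the real bad/darc API; the point is that the domain-general layer only needs each model to expose a choice probability for a 2-choice task:

```python
# Hypothetical sketch of the bad/darc layering -- names and signatures are
# illustrative, not the actual API.
import math

class TwoChoiceModel:
    """Domain-general layer (bad-like): any model exposing
    P(choose A | design, parameters) can be run adaptively."""
    def p_choose_A(self, design, params):
        raise NotImplementedError

class HyperbolicDiscounting(TwoChoiceModel):
    """Domain-specific layer (darc-like): a delayed-choice model with
    hyperbolically discounted value V = reward / (1 + k * delay)."""
    def p_choose_A(self, design, params):
        reward_A, delay_A, reward_B, delay_B = design
        k, alpha = params  # discount rate, choice sensitivity (assumed form)
        v_A = reward_A / (1.0 + k * delay_A)
        v_B = reward_B / (1.0 + k * delay_B)
        # softmax/logistic choice rule on the value difference
        return 1.0 / (1.0 + math.exp(-alpha * (v_A - v_B)))

model = HyperbolicDiscounting()
# immediate £100 vs £100 in 30 days: the sooner option should be preferred
p = model.p_choose_A(design=(100, 0, 100, 30), params=(0.05, 1.0))
```

A psycho package would then just be another subclass family (psychometric-function models) plugged into the same domain-general machinery.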
The paper on Quest+ is quite light on implementation details. But one area where we make advances is efficiency: we use a particle-based representation of the posterior (as opposed to a grid approximation), and our method is particularly efficient when it comes to combining the inference and design-optimisation steps. We’ve got more detail in our pre-print:
Vincent, B. T., & Rainforth, T. (2018, May 11). The DARC Toolbox: automated, flexible, and efficient delayed and risky choice experiments using Bayesian adaptive design. OSF
It is likely to emerge in the form of TWO papers, though, with fairly major updates. Using PsychoPy as the framework was a core part of improving it and dealing with reviewer feedback. I’m abandoning the Matlab implementation and forging ahead with the Python/PsychoPy implementation.
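For anyone curious what "particle-based posterior plus design optimisation" means in practice, here is a toy sketch of the general idea. It is illustrative only: the logistic response model, prior, and design grid are all assumptions of mine for the example, and this is not the DARC code.

```python
# Toy sketch of Bayesian adaptive design with particles (illustrative only):
# the posterior over a parameter is a set of weighted samples, not a grid;
# each trial's design is chosen to maximise expected information gain,
# estimated from those same particles.
import numpy as np

rng = np.random.default_rng(1)
EPS = 1e-12  # guard against log(0)

def p_choose(theta, x):
    """Assumed toy response model: P(response=1 | design x, parameter theta)."""
    return 1.0 / (1.0 + np.exp(-(x - theta)))

def entropy(p):
    """Entropy of a Bernoulli(p) response."""
    return -(p * np.log(p + EPS) + (1 - p) * np.log(1 - p + EPS))

# particle approximation of the posterior: prior samples, equal weights
particles = rng.normal(0.0, 5.0, size=2000)
weights = np.full(particles.shape, 1.0 / particles.size)

def expected_info_gain(x):
    """Mutual information between the next response and theta, via particles."""
    p1 = p_choose(particles, x)
    marginal = np.sum(weights * p1)                 # P(response=1) under posterior
    return entropy(marginal) - np.sum(weights * entropy(p1))

designs = np.linspace(-10.0, 10.0, 41)
true_theta = 2.0  # simulated observer
for trial in range(20):
    # design optimisation: pick the most informative design on this trial
    x = designs[np.argmax([expected_info_gain(d) for d in designs])]
    # simulate the observer's binary choice
    response = rng.random() < p_choose(true_theta, x)
    # inference: Bayes update by reweighting the particles
    like = p_choose(particles, x) if response else 1.0 - p_choose(particles, x)
    weights = weights * like
    weights = weights / weights.sum()

estimate = float(np.sum(weights * particles))        # posterior mean of theta
```

The efficiency gains described in the pre-print come from doing the inference and design-optimisation steps jointly rather than naively as above, but the toy version shows the moving parts.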