R (Shiny App) Workflow to Monitor Pavlovia Experiments - Blog Series

Hi Community,

I just wanted to share a small blog post series (Parts 1 and 2 are done, Part 3 is WIP) on working with the GitLab API and R (Shiny). Using the GitLab API and token-based access to your repository, you can build your own enhanced online experiment dashboard (e.g. with plots).
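To give a flavour of the approach, here is a minimal sketch of talking to the Pavlovia GitLab API from R with a personal access token. The project path "yourname/stroop", the token variable, and the use of the httr package are illustrative choices for this sketch, not necessarily what the blog series itself uses:

```r
# Minimal sketch: list the files in a Pavlovia project's data/ folder
# via the GitLab API. The project path "yourname/stroop" and the
# token environment variable are placeholders.
library(httr)

token   <- Sys.getenv("PAVLOVIA_TOKEN")                  # personal access token
project <- URLencode("yourname/stroop", reserved = TRUE) # URL-encoded project path
base    <- "https://gitlab.pavlovia.org/api/v4"

resp <- GET(
  paste0(base, "/projects/", project, "/repository/tree"),
  query = list(path = "data", per_page = 100),
  add_headers("PRIVATE-TOKEN" = token)
)
stop_for_status(resp)

files <- content(resp, as = "parsed")
vapply(files, function(f) f$name, character(1))          # file names in data/
```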

Blog-Series

In Part 3 I plan to release a VESPR-Study-Portal-like Shiny app to run balanced mixed designs with Pavlovia (e.g. "check whether condition A has run successfully 6 times"), plus enhanced dashboard capabilities.
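A balancing check like that could look roughly like the sketch below; the column names (condition, completed), the target of 6 runs, and the CSV layout are assumptions for illustration, not the app's actual schema:

```r
# Sketch of the balancing check: count completed runs per condition
# across downloaded result files. The "condition" and "completed"
# columns and the data/ folder layout are illustrative assumptions.
library(dplyr)

files <- list.files("data", pattern = "\\.csv$", full.names = TRUE)
runs  <- bind_rows(lapply(files, read.csv))

runs %>%
  filter(completed) %>%                  # keep only successfully finished runs
  count(condition, name = "n_runs") %>%
  mutate(needs_more = n_runs < 6)        # flag conditions below the target of 6
```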

Right now, I am looking for collaborators on the project to beta test / further develop the last part before releasing it. PM me if you are familiar with Shiny app development.

(Here is a live demo of the Part 2 app hosted on shinyapps.io: Live-Demo (password: example), for demos / stroop · GitLab.)

Best,
Luke


This is very cool. We are all about trying to create tools that empower users to go further, and this is a fantastic demonstration of that in action. Kudos @Luke :slightly_smiling_face: :muscle:


Thank you, Jon!

I will continue to use and integrate R/Shiny with Pavlovia. You can imagine using Shiny apps to give participants direct feedback on the data they have just generated, to produce automatic reports, and so on…
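As a toy illustration of that feedback idea, here is a minimal Shiny app that reads a participant's uploaded results file and reports their accuracy back to them. The 0/1 "correct" column and the upload-based workflow are assumptions for the sketch:

```r
# Toy sketch: a Shiny app that reads a participant's uploaded results
# file and shows their accuracy back to them. The 0/1 "correct"
# column and the upload workflow are assumptions for illustration.
library(shiny)

ui <- fluidPage(
  fileInput("datafile", "Upload your results file (.csv)"),
  textOutput("feedback")
)

server <- function(input, output, session) {
  output$feedback <- renderText({
    req(input$datafile)                   # wait for an upload
    d <- read.csv(input$datafile$datapath)
    sprintf("Your average accuracy was %.1f%%.",
            mean(d$correct, na.rm = TRUE) * 100)
  })
}

shinyApp(ui, server)
```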

If any R / R Shiny developer or advanced user is interested: I would like to create an R package for working with Pavlovia and the underlying GitLab, and for putting together simple dashboards.

Part 3 of the Blog will be released in 1-2 weeks.

Here is a sneak preview of the new Part 3 of the blog series:

You can now generate your access tokens directly in the Shiny app, so inexperienced users only need their username and password to get access to all their projects and data.
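One way such in-app token generation could work is a sketch like the following, assuming the Pavlovia GitLab instance permits GitLab's OAuth "password" grant; the credentials here are placeholders:

```r
# Sketch: exchanging a username/password for an access token via
# GitLab's OAuth "password" grant, assuming the Pavlovia instance
# allows it. Credentials are placeholders; never hard-code real ones.
library(httr)

resp <- POST(
  "https://gitlab.pavlovia.org/oauth/token",
  body = list(
    grant_type = "password",
    username   = "your_username",
    password   = "your_password"
  ),
  encode = "form"
)
stop_for_status(resp)

token <- content(resp)$access_token
# Subsequent API calls would then send it as a Bearer header:
# GET(url, add_headers(Authorization = paste("Bearer", token)))
```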

A Universal Pavlovia Shiny Dashboard:
(Screenshot: UPS-Dashboard)

Please be careful with your access tokens!

The last part of the series has been released:


Thanks for the props in your article. You’re completely correct that my study portal has no sight of the quality of the actual data. In theory I could add a flag for data quality, but it would have to be created/calculated within the PsychoPy experiment (e.g. average accuracy above x) and then simply passed to the portal debrief page.

Thank you for reading!

I agree, using live calculation of e.g. "average accuracy" right in the experiment is a more practical way of dealing with problematic participants.

In theory, this would allow you to simply abort the experiment and send those participants to an "incomplete" URL.

The Shiny study portal in its current state has solved the problem (using flags and live calculation would have, too), and in combination with shinyapps.io and Google Docs it may offer a good open-source option for some people. But I admit the whole Shiny-redirection concept is not completely thought through.

In particular, experiments that involve giving participants insights/reports about their own generated data would benefit from a Shiny-based redirection app.
Using Shiny only as a data dashboard application (not as a GET-redirector app that depends on a database) is probably the better approach. But maybe some people will find more use cases for both apps (and their combination); a sketch of the redirector idea follows below.
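For concreteness, here is a rough sketch of what such a GET-redirector could look like in Shiny. The query parameter names and the target URL are illustrative assumptions, not the portal's actual interface:

```r
# Rough sketch of a GET-redirector in Shiny: read the query string
# (?participant=...&condition=...), log it, and forward the browser
# to a completion URL. Parameter names and target URL are illustrative.
library(shiny)

ui <- fluidPage(uiOutput("redirect"))

server <- function(input, output, session) {
  output$redirect <- renderUI({
    q <- parseQueryString(session$clientData$url_search)
    req(q$participant)                          # no redirect without an ID
    # ... record q$participant / q$condition in your database here ...
    target <- paste0("https://example.org/complete?participant=", q$participant)
    tags$script(sprintf("window.location.replace('%s');", target))
  })
}

shinyApp(ui, server)
```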