How to use mTurk for recruiting

Hi all,
Just wondering whether there is a step-by-step guide or some instructions on how to link Pavlovia with mTurk. I read that normally mTurk needs to pass certain parameters to the external site to ensure payment can be processed etc.
How does this work in practice?

The general instructions are here:
with specific instructions for Prolific Academic and Sona there as well. I’m afraid I don’t have an MTurk account to test or create screenshots, but I believe the concepts are the same as above.


I’ve run 3 PsychoPy experiments on MTurk in the past couple of weeks. It’s as simple as selecting the Survey Link option and pasting the URL into the link box. It seemed to run smoothly for me.

Hope it works out!


Hi @unagi_pie. Sorry about the belated response. I only had a chance to try this now.
Silly question: I have some code that generates a UUID that I can use to identify subjects and pay them. Subjects then have to copy that and paste it into mTurk to get paid.
But how can I make it copyable? In my experiment, when the UUID is shown on the screen, I cannot select the text.

@unagi_pie: How do you administer payment for your turkers? Usual practice is to have a randomly generated ID that they copy and paste to show they have completed the experiment.
But you cannot select or copy text displayed by PsychoJS.

For most systems, you can use the script itself to forward a variable as part of the on-completion URL, as described in my instructions linked above.
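As a rough sketch of that idea (the base URL and the `cc` parameter name are placeholders, not any system’s actual endpoint), the completion URL can carry the code as a query parameter:

```javascript
// Sketch: forward a completion code via the on-completion URL.
// 'base' and the 'cc' parameter name are placeholders - substitute
// whatever your recruitment system actually expects.
function completionUrl(base, surveyCode) {
  return base + '?cc=' + encodeURIComponent(surveyCode);
}

// e.g. redirect at the end of the experiment:
// window.location.href = completionUrl('https://example.com/complete', myCode);
```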

MTurk seems to come in 2 different flavours: The MTurk UI, and the MTurk API - the latter usually made available through 3rd-party services such as TurkPrime or Prolific Academic, though if you have the technical skills, then you could develop an application yourself.

Either way, the process is similar:

  1. Create a project on MTurk and include the URL to your Pavlovia experiment. On the MTurk UI, this is done in Create > New Project > Survey Link > Create Project > Design Layout.
  2. In PsychoPy, go to Experiment Settings > Basic > Experiment info, and add a field called “workerId” to record the participant’s MTurk Worker ID. All other fields can be removed if not used.
  3. In PsychoPy, add a routine at the end of the experiment to provide participants with a survey code.
  4. Set your Pavlovia experiment to Running, assign sufficient credits to it, then save and publish the MTurk project when ready.

Collecting MTurk Worker IDs automatically in MTurk UI:

The MTurk API automatically provides the MTurk Worker ID in a URL parameter called workerId. It also provides hitId (the HIT) and assignmentId (the worker+HIT assignment). PsychoJS automatically captures all URL parameters and includes them in output data files.
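For illustration, the same three parameters can be read with the standard `URLSearchParams` web API (this is plain JavaScript, not a PsychoJS call - PsychoJS does this capture for you):

```javascript
// Read MTurk's standard URL parameters from a query string.
function mturkParams(search) {
  const p = new URLSearchParams(search);
  return {
    workerId: p.get('workerId'),         // the worker
    hitId: p.get('hitId'),               // the HIT
    assignmentId: p.get('assignmentId'), // the worker+HIT assignment
  };
}
```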

The MTurk UI on the other hand, does not provide the Worker ID by default. One way to deal with this is to ask participants to enter it manually using a field in the experiment’s info dialog called “Please copy+paste your MTurk Worker ID here:”. However, it is also possible (if you wish) to hack the MTurk UI to provide the same information as the API automatically:

  1. On MTurk, go to Create > Edit the project > Design Layout > Source.
  2. Paste one line of JavaScript (jQuery) code - the one containing ‘dont-break-out’ - at the end of the source, so that it looks like this:
    // end expand/collapse
    $('.dont-break-out').attr('href',function(i,href){ return href + window.location.search; });
    </script><!-- Close internal javascript -->

This code appends all URL parameters to the experiment URL much like the API so that participants do not need to enter it manually.
Note: If your experiment URL already contains parameters, then you will need to modify the above code accordingly.
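One possible modification for that case (a sketch, not tested against the MTurk UI) is to switch the leading ‘?’ of the appended query string to ‘&’ when the link already has parameters:

```javascript
// Append MTurk's query string (e.g. '?workerId=...') to an experiment URL,
// using '&' instead of '?' if the URL already contains parameters.
function appendParams(href, search) {
  if (!search) return href;
  return href + (href.includes('?') ? '&' + search.slice(1) : search);
}
```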

Note: You may choose not to collect Worker IDs in order to preserve participant anonymity, and rely on survey codes for assignment validation instead. This is the default in the MTurk UI, but is not the default for the MTurk API, and typically needs to be turned off if you want to go this route. Also note that this only hides the Worker IDs from the Pavlovia data output files - Worker IDs are still listed in MTurk when approving assignments. Since Worker IDs themselves are already fairly anonymous, this route has limited value, and anonymity can also be addressed by removing the Worker IDs from data output files after assignments are validated.

Providing the survey code:

The MTurk UI “Survey Link” project template requires workers to submit a survey code after they complete the task.

For this, just add a routine at the end of the experiment, with a text component instructing participants to copy an arbitrary code into the MTurk HIT: “Please use the following survey code when submitting the HIT: 23764” (use whatever number you like). Note that workers must enter this survey code manually - I have not found a way to automatically send this to MTurk as Jon suggests above.

When approving assignments, ensure that Worker IDs appear in Pavlovia’s data output files. If Pavlovia is set to Save incomplete results, then you may also want to check if each worker actually completed the experiment. Unfortunately, this process is tedious when there are many participants. It can presumably be automated with the help of the MTurk API.
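A minimal sketch of that cross-referencing step (all names hypothetical; one Worker ID list would come from the MTurk approval page, the other from Pavlovia’s data output files):

```javascript
// Split submitted assignments into those confirmed by the data files
// (approve) and those needing manual review.
function splitAssignments(submittedOnMTurk, completedInData) {
  const done = new Set(completedInData);
  return {
    approve: submittedOnMTurk.filter(id => done.has(id)),
    review: submittedOnMTurk.filter(id => !done.has(id)),
  };
}
```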

Note: If you do not collect MTurk Worker IDs, then survey codes should be generated randomly and recorded in data output files instead. If you do collect Worker IDs as above, then the submitted survey codes can actually be disregarded - use the Worker IDs to validate assignments. It is also possible (if you wish) to hack the MTurk UI to remove the survey code requirement, using the following changes to the same source as above:

    // end expand/collapse
    $('.dont-break-out').attr('href',function(i,href){ return href + window.location.search; });
    $('.form-group label').html('Click the Submit button when done.  Task completion is recorded during the experiment.');
    $('.form-group input').attr('type','hidden').val('NA');
    </script><!-- Close internal javascript -->
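If you take the no-Worker-ID route, a random survey code could be generated along these lines (a sketch; record the code in your data output so it can be matched against submissions later):

```javascript
// Generate a random survey code, avoiding ambiguous characters (0/O, 1/I/L).
function makeSurveyCode(len = 8) {
  const chars = 'ABCDEFGHJKLMNPQRSTUVWXYZ23456789';
  let code = '';
  for (let i = 0; i < len; i++) {
    code += chars[Math.floor(Math.random() * chars.length)];
  }
  return code;
}
```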

Hey @arnon_weinberg thanks for your fantastic response.

I have a question for you about the completion code: if I’ve set up my experiment in the MTurk UI “Survey Link” format, can the completion code that I set up in PsychoPy be a static code (say, just a string or a list of numbers) that isn’t randomized for each HIT?

My PsychoPy experiment also has participants enter their MTurk Worker ID on the load-in page, and I assume this is written to the output csv file upon completion. Since participants enter their Worker ID into both MTurk and PsychoPy, can I verify that the experiment was properly completed by cross-referencing the Worker ID in the output csv file with the Worker ID that shows up in MTurk when they submit the correct survey code? Is this how it works? Any advice would be greatly appreciated :slight_smile:

@melonhead Yes, I’m pretty sure that’s how I described it above.