Yes, in principle, though it will depend a little on exactly how you're doing recruitment. You can pass information to Pavlovia through query strings from a recruitment website or pre-task survey, and use that to determine which set of stimuli each participant sees.
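As a rough sketch, in a JS code component at the start of the experiment you could read the query string and use it to pick a conditions file. The `group` parameter and the filenames here are hypothetical, just whatever your recruitment site appends to the run URL; in the real experiment you'd read `window.location.search` rather than a hard-coded example URL:

```javascript
// In a live PsychoJS experiment you'd parse window.location.search;
// an example URL is used here so the snippet is self-contained.
const exampleUrl = 'https://run.pavlovia.org/yourUser/yourTask/?participant=p01&group=B';
const params = new URL(exampleUrl).searchParams;

// 'group' is a hypothetical parameter set by the recruitment site.
const group = params.get('group') || 'A';  // fall back to 'A' if missing

// Map each group to its own conditions file (hypothetical filenames).
const conditionsFile = { A: 'stimuli_setA.csv', B: 'stimuli_setB.csv' }[group];
console.log(group, conditionsFile);
```

Also worth knowing: if a query parameter name matches one of the fields in Experiment Settings → Experiment info, PsychoPy/Pavlovia will (as I understand it) pre-fill that field automatically, so for simple cases you may not need any code at all.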
I believe this is recorded by default, and if it isn't, it's pretty easy to add.
See Emotion detection software in online experiment
There is a demo for using Google's face API linked in that thread. I haven't tried it myself recently, so I don't know how well it currently works, but it should be possible.