How to minimize the asynchrony (bias) of audiovisual stimuli

Hi all, I am planning to conduct a multisensory (audiovisual) study on Pavlovia, and I have learnt that some audiovisual asynchrony is inevitable when doing so ( — an article I referred to).
Since all my stimuli will be about 5 seconds long and this is a sophomore-level project, I do not need millisecond-level precision, but I would like to know what I could consider and improve: hardware, coding tips, or any other potential solutions.
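One general software-side mitigation (not specific to Pavlovia, and separate from whatever compensation PsychoPy/PsychoJS already applies) is to lock the visual onset to a frame flip and start the audio early by your measured audio output latency, so both arrive at the participant together. A minimal sketch of the arithmetic, assuming a 60 Hz monitor and a latency value you have measured yourself (e.g. with a photodiode/microphone rig); `schedule_av` is a hypothetical helper, not a PsychoPy function:

```python
import math

FRAME = 1 / 60.0  # assumed 60 Hz monitor refresh period (seconds)

def next_flip(t, frame=FRAME):
    """Earliest frame boundary at or after time t (seconds)."""
    return math.ceil(t / frame) * frame

def schedule_av(requested_onset, audio_latency, frame=FRAME):
    """Snap the visual onset to the next frame flip, then start the
    audio early by the measured audio output latency so sound and
    image coincide at the participant."""
    visual_onset = next_flip(requested_onset, frame)
    audio_start = visual_onset - audio_latency
    return visual_onset, audio_start

# e.g. a trial requested at t = 1.003 s with a 20 ms measured audio latency
v, a = schedule_av(requested_onset=1.003, audio_latency=0.020)
```

In PsychoPy itself the usual idiom for the visual half of this is to trigger the sound on the same flip as the stimulus (e.g. via `win.callOnFlip`); the latency-compensation offset above is only worthwhile once you have actually measured your setup's latency.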

Also, does anyone know what the “nan” means here? Is it a lack of data?
[Screenshot, 2020-06-05 11:50 AM: results output showing “nan” values]

Any help, suggestions, or comments will be appreciated :slight_smile:

NaN means “not a number”: the value could not be calculated, which usually does indicate missing data for that trial (e.g. no response was recorded).
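To illustrate in Python (the same applies to the values PsychoPy writes into your data file): NaN is a float that never compares equal to anything, including itself, so you test for it with `math.isnan` and filter it out before averaging. The reaction-time numbers below are made up for the example:

```python
import math

rt = float("nan")  # e.g. a missing reaction time in the results file

# NaN is "contagious" in comparisons: it is not equal even to itself
rt == rt            # False
math.isnan(rt)      # True

# A safe mean that skips missing trials
rts = [0.45, float("nan"), 0.52]
valid = [x for x in rts if not math.isnan(x)]
mean_rt = sum(valid) / len(valid)  # 0.485
```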