Most likely, they just want you to present the RT +/- a confidence interval or the standard error (of the mean). Response times are very accurate on the software side in PsychoPy, to below 0.1 ms. But on many keyboards there's a buffer which causes up to 30 ms of jitter, and this would apply to all stimulus software.
Note, though, that reaction times are not normally distributed, so confidence intervals and standard errors are really poor models since they assume a normal distribution. Draw a histogram and see for yourself. And go show your teacher - I'm betting that he/she hasn't thought about it. RT distributions are often close to log-normal, i.e. log(RT) is approximately normal.
So the median and interquartile range are usually the best way to represent reaction times.
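As a quick sanity check, here's a small simulation sketch (assuming NumPy; the log-normal parameters are made up, just chosen to look RT-like) showing how the skew pulls the mean away from the median:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated "reaction times" in seconds: log-normal, right-skewed,
# roughly RT-like (hypothetical parameters, not from real data)
rts = rng.lognormal(mean=-0.7, sigma=0.4, size=1000)

# Mean +/- standard error (assumes normality)
mean_rt = rts.mean()
se_rt = rts.std(ddof=1) / np.sqrt(len(rts))

# Median and interquartile range (no distributional assumption)
median_rt = np.median(rts)
q1, q3 = np.percentile(rts, [25, 75])

print(f"mean = {mean_rt:.3f} s, SE = {se_rt:.3f} s")
print(f"median = {median_rt:.3f} s, IQR = [{q1:.3f}, {q3:.3f}] s")
# Right skew: the mean sits above the median
print(f"mean - median = {mean_rt - median_rt:.3f} s")
```

Plot a histogram of `rts` (e.g. with matplotlib) and you'll see the long right tail that makes the mean a misleading summary.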