I happen to be playing around a bit with webgazer, and looking around on the forum, I saw your message. I wouldn’t call it straightforward. For context: I’m quite an experienced programmer. It took me about two hours to get a shabby prototype working. It will easily take me a day or more to have something nice. On the upside, I like to document things carefully, so once I’ve got something, it could be a lot easier for you to adopt. Don’t wait for it though, since the moment I need to do higher-priority stuff for the PsychoJS team, I’ll put the webgazer stuff on hold :).
On the Facebook group PsychMAP, I’ve been chatting with someone about webgazer. Below I quote some info that could be useful.
What I noticed: (1) you need to keep your head quite still, (2) it was very heavy on the CPU of my laptop (even though I’ve got quite a powerful one), and (3) accuracy was good enough for something like an eye-tracking VPT, but not much more. Reliability is the main issue here; many of your participants won’t have sufficient discipline or sufficiently powerful equipment, so your data will be very noisy. https://www.sciencedirect.com/science/article/pii/S0376871615010431?casa_token=FZo6EBfbbwwAAAAA:UG5gGjJjEP6Ozp_EFb_4CchUh4VMtXXhcSYiqQEAhiR9StU4fRn_doRy2OSnmdGTF5QRXiKI
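To give an idea of what you can do against that noise: webgazer hands you raw (x, y) gaze samples, and even a simple moving average takes some of the edge off. A rough sketch below; the sample format ({x, y} objects) and window size are my assumptions, not anything prescribed by webgazer.

```javascript
// Smooth a stream of gaze samples with a trailing moving average.
// samples: array of {x, y} points as webgazer's gaze listener reports them.
// windowSize: how many recent samples to average (an assumption; tune it).
function smoothGaze(samples, windowSize = 5) {
  return samples.map((_, i) => {
    const start = Math.max(0, i - windowSize + 1);
    const win = samples.slice(start, i + 1);
    const mean = (key) => win.reduce((sum, p) => sum + p[key], 0) / win.length;
    return { x: mean('x'), y: mean('y') };
  });
}
```

This won’t fix a participant who moves their head around, but it does make fixation-style measures a bit less jittery.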
I looked at what Labvanced and Gorilla use. Labvanced is open-source, so I could establish that they also use the webgazer library. Gorilla is closed-source (and their eye-tracking is in closed beta), so I cannot establish what tech they use. However, if I look at their reference documentation, it really looks like webgazer too. This means that on the level of technology, they are the same. Where they could distinguish themselves is in how good their calibration tasks are (i.e. look here, now look there). When I’ve got something ready, I’ll try to build it in such a way that you’ll have a lot of control over calibration. It does mean it will be a bit more work to set up, but given how iffy eye-tracking via webcam is, that’s probably worth it.
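To make “control over calibration” concrete, here’s the kind of thing I mean: generate your own grid of on-screen targets and decide yourself how many points, where, and in what order. The function below is my own sketch (names and parameters are mine, not webgazer’s); the only webgazer call mentioned, setGazeListener(...).begin(), is its documented entry point.

```javascript
// Generate an n-by-n grid of calibration targets ("look here, now look there"),
// spaced evenly and inset from the screen edges.
// n: points per row/column; width, height: screen size in pixels.
function calibrationGrid(n, width, height) {
  const points = [];
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < n; col++) {
      points.push({
        x: ((col + 1) / (n + 1)) * width,
        y: ((row + 1) / (n + 1)) * height,
      });
    }
  }
  return points;
}

// In the browser you would show each target, have the participant click it,
// and let webgazer learn from those clicks (its default ridge regression
// trains on click and cursor events). Starting the tracker looks like:
//   webgazer.setGazeListener((data, ts) => { /* data.x, data.y */ }).begin();
```

With the grid in your own hands, you can also repeat the worst points or interleave validation targets to measure accuracy per participant.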