Miika Toivanen, Kristian Lukander, Kai Puolamäki: “Do-it-yourself open source gaze tracking system — presentation, demonstration, and announcement”
We have built an open-source mobile gaze tracking system. The physical part consists of 3D-printed frames, three USB cameras, infrared LEDs, and simple electronic components. The software utilizes a physical model of the eye, advanced Bayesian tracking models, and free software libraries. With visual markers, the system can interact with the environment, enabling, e.g., novel user interfaces. The presentation contains a brief description of the system, as well as a live demonstration and the “official” announcement of our first public release.
Kristian Lukander, Miika Toivanen, Kai Puolamäki: “Novel experimental setup for inferring user intent and activity in a game-based task”
Gaze-based inference of user intent and activity has so far concentrated on isolated naturalistic tasks, such as tea and sandwich making in kitchen environments, and a range of image tasks presented on 2D screens, such as visual search. While successful inference rates have been reported, these contributions can only be related in a speculative manner, and are hard to generalize or apply to real-world tasks. Our objective here is to study and demonstrate a classifier for user intent and activity across a continuum of real-world tasks. To this end, we have developed a novel experimental setup composed of tasks motivated by basic cognitive activities: visual search, reading, simple and complex object manipulation, sorting, ordering, model copying, and problem-solving. As such, each task represents one or more components of everyday tasks performed in work environments. The tasks are performed in a game-like environment with physical game pieces. The subjects perform the tasks while wearing our custom-built gaze tracker, which supplies video-based gaze tracking and a view of the subjects’ point of view. The problem-solving task is additionally monitored with an external overhead camera. As an intermediate result, the experimental setup and the gaze data will be published for others to replicate and iterate on.