Gaze tracking in psychological, cognitive, and user interaction studies has recently evolved toward mobile solutions, as they make it possible to directly assess users’ visual attention in natural environments. In addition to attention, gaze can provide information about users’ actions and intentions: what the user is doing and what she will do next.
Gaze behavior in natural, unstructured task environments is quite complex and cannot be satisfactorily explained by the “simple” behavioral models induced through typical stimulus–response testing in controlled laboratory settings. To advance the inference of gaze–action behavior in natural environments, a more holistic approach is needed, bringing together disciplines ranging from the cognitive sciences to machine learning.
We call for 2-4 page papers in the SIGCHI Extended Abstracts format on topics related, but not limited, to the following:
Technical solutions for mobile gaze tracking
These include any hardware or software solutions for developing mobile gaze tracking devices and making them more accessible. Examples include do-it-yourself approaches to building mobile gaze trackers, algorithms for computing the gaze location, and methods for improving the accuracy and precision of mobile gaze tracking systems.
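To give a flavor of what "algorithms for computing the gaze location" can mean at the lowest level, the sketch below estimates a pupil center from a grayscale eye image as the centroid of its darkest pixels. This is an illustrative toy example only (the threshold value and synthetic frame are our own assumptions), not a method prescribed by the workshop; real systems add corneal-reflection tracking, calibration, and robust ellipse fitting.

```python
import numpy as np

def pupil_center(eye_image, threshold=40):
    """Estimate the pupil center as the centroid of dark pixels.

    eye_image: 2-D array of grayscale intensities (0-255).
    threshold: intensities below this count as pupil (assumed value).
    Returns (row, col) of the centroid, or None if no dark pixels found.
    """
    mask = eye_image < threshold           # the pupil is the darkest region
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))

# Synthetic example: a dark "pupil" blob in an otherwise bright frame
frame = np.full((60, 80), 200, dtype=np.uint8)
frame[20:30, 35:45] = 10                   # dark square centered at (24.5, 39.5)
print(pupil_center(frame))                 # -> (24.5, 39.5)
```

The centroid would then be mapped to a scene-camera coordinate via a user-specific calibration, which is where much of the accuracy and precision work mentioned above takes place.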
Computational methods for using the gaze data for inferring the user action
These include various (real-time) approaches for applying machine learning algorithms to infer user action from gaze data.
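As a minimal sketch of this kind of inference, the example below classifies a gaze segment into one of two hypothetical actions ("reading" vs. "searching") by nearest-centroid matching on two common gaze features, mean fixation duration and mean saccade amplitude. The feature values and class labels are invented for illustration; a real system would use many more features, proper feature scaling, and a stronger classifier.

```python
import numpy as np

# Hypothetical training data: each row is [mean fixation duration (ms),
# mean saccade amplitude (deg)] for one labelled gaze segment.
features = np.array([
    [230.0, 2.0], [250.0, 1.8], [240.0, 2.2],   # "reading": long fixations, small saccades
    [150.0, 7.5], [140.0, 8.0], [160.0, 7.0],   # "searching": short fixations, large saccades
])
labels = np.array([0, 0, 0, 1, 1, 1])           # 0 = reading, 1 = searching
names = ["reading", "searching"]

def train_centroids(X, y):
    """Nearest-centroid model: one mean feature vector per action class."""
    return np.array([X[y == c].mean(axis=0) for c in np.unique(y)])

def infer_action(centroids, segment):
    """Assign a gaze segment to the action class with the closest centroid."""
    dists = np.linalg.norm(centroids - segment, axis=1)
    return int(np.argmin(dists))

centroids = train_centroids(features, labels)
print(names[infer_action(centroids, np.array([235.0, 2.1]))])  # -> reading
print(names[infer_action(centroids, np.array([145.0, 7.8]))])  # -> searching
```

Because inference here is a single distance computation, such a model can run in real time on streaming gaze data, which is the setting the workshop topic emphasizes.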
Applications and potential use of gaze data
These include various psychological and application aspects of gaze–action coordination; e.g., in which kinds of tasks gaze can be a useful predictor of user action, and how it can be utilized effectively. We are also interested in the use of gaze in virtual and augmented reality applications and user interfaces.
Cognitive aspects of mobile gaze tracking
These refer to understanding tasks and human action from eye movements, for example, by identifying the characteristics or phases of different tasks. Here, the focus is on understanding what eye movements can reveal about human action and cognitive processing.
We especially encourage hands-on demonstrations and interactive material to facilitate a lively conversation at the workshop. We also accept position papers describing researchers’ interests and/or previous work related to the workshop topic.
Papers should be submitted through https://cmt3.research.microsoft.com/GAZE2016 and will be peer-reviewed by the program committee. At least one author of each accepted paper must register for both the workshop and the conference itself. Accepted workshop papers will be included in the ACM Digital Library as part of the MobileHCI 2016 Adjunct Proceedings.
The deadline for submissions is June 5th 2016, and notifications of acceptance will be sent by June 20th 2016. Camera-ready versions of accepted papers must be provided to the workshop chair by June 29th 2016 at the latest.
A PDF version of the call for papers can be found here.