EyeFormer: Predicting Personalized Scanpaths with Transformer-Guided Reinforcement Learning

Yue Jiang*1   Zixin Guo*1   Hamed R. Tavakoli3   Luis A. Leiva2   Antti Oulasvirta1   (*Equal Contribution)
Proceedings of ACM Symposium on User Interface Software and Technology (UIST 2024), Pittsburgh, USA.

Download Video (MP4, 57.7 MB)

Abstract

From a visual-perception perspective, modern graphical user interfaces (GUIs) comprise a complex graphics-rich two-dimensional visuospatial arrangement of text, images, and interactive objects such as buttons and menus. While existing models can accurately predict regions and objects that are likely to attract attention "on average", no scanpath model has been capable of predicting scanpaths for an individual. To close this gap, we introduce EyeFormer, which utilizes a Transformer architecture as a policy network to guide a deep reinforcement learning algorithm that predicts gaze locations. Our model offers the unique capability of producing personalized predictions when given a few user scanpath samples. It can predict full scanpath information, including fixation positions and durations, across individuals and various stimulus types. Additionally, we demonstrate applications in GUI layout optimization driven by our model.

Citation

@inproceedings{jiang2024eyeformer,
	author = {Jiang, Yue and Guo, Zixin and Rezazadegan Tavakoli, Hamed and Leiva, Luis A. and Oulasvirta, Antti},
	title = {EyeFormer: Predicting Personalized Scanpaths with Transformer-Guided Reinforcement Learning},
	year = {2024},
	isbn = {9798400706288},
	publisher = {Association for Computing Machinery},
	address = {New York, NY, USA},
	url = {https://doi.org/10.1145/3654777.3676436},
	doi = {10.1145/3654777.3676436},
	abstract = {From a visual-perception perspective, modern graphical user interfaces (GUIs) comprise a complex graphics-rich two-dimensional visuospatial arrangement of text, images, and interactive objects such as buttons and menus. While existing models can accurately predict regions and objects that are likely to attract attention “on average”, no scanpath model has been capable of predicting scanpaths for an individual. To close this gap, we introduce EyeFormer, which utilizes a Transformer architecture as a policy network to guide a deep reinforcement learning algorithm that predicts gaze locations. Our model offers the unique capability of producing personalized predictions when given a few user scanpath samples. It can predict full scanpath information, including fixation positions and durations, across individuals and various stimulus types. Additionally, we demonstrate applications in GUI layout optimization driven by our model.},
	booktitle = {Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology},
	articleno = {47},
	numpages = {15},
	location = {Pittsburgh, PA, USA},
	series = {UIST '24}
}

Contact

For questions and clarifications, please get in touch with:
Yue Jiang yuenj.jiang@gmail.com