Temporal continuity of visual attention for future gaze prediction in immersive virtual reality
1. Department of Computer Science and Technology, School of Electronic Engineering and Computer Science, Peking University, Beijing 100871, China
2. Beijing Engineering Technology Research Center of Virtual Simulation and Visualization, Peking University, Beijing 100871, China
Abstract
Keywords: Temporal continuity; Visual attention; Autocorrelation analysis; Gaze prediction; Virtual reality
Content





