Optoelectronics Letters, Volume 16, Issue 1, 45 (2020)

Visual focus of attention estimation based on improved hybrid incremental dynamic Bayesian network

Yuan LUO1, Xue-feng CHEN1,*, Yi ZHANG2, Xu CHEN2, Xing-yao LIU1 and Ting-kai FAN1
Author Affiliations
  • 1Institute of Photoelectric Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
  • 2Engineering Research Center for Information Accessibility and Service Robots, Chongqing University of Posts and Telecommunications, Chongqing 400065, China

    In this paper, a visual focus of attention (VFOA) detection method based on an improved hybrid incremental dynamic Bayesian network (IHIDBN), constructed by fusing head, gaze and prediction sub-models, is proposed to address the complexity and uncertainty of dynamic scenes. First, the gaze detection sub-model is improved upon the traditional human eye model to raise the recognition rate and robustness across different detected subjects. Second, the related sub-models are described, and conditional probabilities are used to establish their respective regression models. In addition, an incremental learning method dynamically updates the parameters to improve the adaptability of the model. The method has been evaluated on two public datasets and in daily-life experiments. The results show that the proposed method can effectively estimate the user's VFOA, and that it is robust to free deflection of the head and to changes in distance.
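The abstract's pipeline — per-target sub-model likelihoods updated incrementally, then fused into a posterior over VFOA targets — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the class and function names (`IncrementalGaussianSubmodel`, `fuse_vfoa_posterior`) are hypothetical, Welford's online algorithm stands in for the paper's incremental parameter updates, and the fusion is a naive product of sub-model likelihoods with a prediction prior.

```python
import numpy as np

class IncrementalGaussianSubmodel:
    """One Gaussian observation sub-model (e.g. head pose or gaze direction
    for a given target), with parameters refined one observation at a time.
    Hypothetical sketch; not the paper's actual sub-model."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros(dim)  # running sum of squared deviations (Welford)

    def update(self, x):
        # Welford's online update: incrementally refine mean and variance
        # without storing past observations.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.M2 += delta * (x - self.mean)

    def variance(self):
        return self.M2 / max(self.n - 1, 1)

    def log_likelihood(self, x):
        # Diagonal-Gaussian log-likelihood; small floor avoids division by zero.
        var = self.variance() + 1e-6
        return float(-0.5 * np.sum((x - self.mean) ** 2 / var
                                   + np.log(2 * np.pi * var)))

def fuse_vfoa_posterior(prior, head_ll, gaze_ll):
    """Naive fusion sketch: combine per-target log-likelihoods from the head
    and gaze sub-models with a prediction prior, then normalize."""
    log_post = np.log(prior) + head_ll + gaze_ll
    log_post -= log_post.max()          # numerical stabilization
    post = np.exp(log_post)
    return post / post.sum()
```

A usage pattern under these assumptions: keep one sub-model per candidate target, call `update` as labeled observations arrive, and at each frame evaluate `log_likelihood` per target and fuse with the prior to pick the most probable focus of attention.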

    LUO Yuan, CHEN Xue-feng, ZHANG Yi, CHEN Xu, LIU Xing-yao, FAN Ting-kai. Visual focus of attention estimation based on improved hybrid incremental dynamic Bayesian network[J]. Optoelectronics Letters, 2020, 16(1): 45

    Paper Information

    Received: Feb. 20, 2019

    Accepted: Apr. 19, 2019

    Published Online: Dec. 25, 2020

    The Author Email: Xue-feng CHEN (17783195443@163.com)

    DOI:10.1007/s11801-020-9026-0
