Zeyu Xiong, Jiahao Wang, Wangkai Jin, Junyu Liu, Yicun Duan, Zilin Song, and Xiangjun Peng
Published in HCI 2022!
Human-Computer Interaction
A comprehensive roadmap to deliver user-friendly, low-cost and effective alternatives for extracting drivers’ statistics.
Raw video streams (facial-expression frames contain many noisy pixels)
Pre-processing input images to retain only the facial region, which enhances the performance of Face2Statistics (see the sketch below)
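A minimal pre-processing sketch, not the authors' exact pipeline: it assumes OpenCV's bundled Haar cascade for frontal-face detection and simply crops and resizes the largest detected face so that the downstream predictor sees only facial pixels.

```python
# Illustrative face-cropping step (assumption: OpenCV Haar cascade is adequate
# for frontal driver-facing footage; the paper's own pipeline may differ).
import cv2

def crop_face(frame_bgr, size=(128, 128)):
    """Return the largest detected face, resized to a fixed shape, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest box
    return cv2.resize(frame_bgr[y:y + h, x:x + w], size)
```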
Exploring different deep neural network-driven predictors
First Attempt: Convolutional Neural Networks (CNNs)
Second Attempt: Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN)
Third Attempt: Bidirectional Long Short-Term Memory (BiLSTM) Recurrent Neural Network (RNN); a sketch of this predictor follows the list
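A sketch of the third attempt, a BiLSTM predictor, written in PyTorch for illustration only. The feature dimension, hidden size, and layer count are placeholder assumptions, not the paper's reported architecture; it maps a sequence of per-frame facial features to a single driver statistic such as heart rate.

```python
# Illustrative BiLSTM predictor (a sketch under assumed hyperparameters,
# not the authors' exact model).
import torch
import torch.nn as nn

class BiLSTMPredictor(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, out_dim=1):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, out_dim)  # 2x for both directions

    def forward(self, x):              # x: (batch, time, feat_dim)
        out, _ = self.lstm(x)          # (batch, time, 2 * hidden)
        return self.head(out[:, -1])   # predict from the last time step

# Usage with dummy data: 8 clips, 30 frames each, 128 features per frame.
preds = BiLSTMPredictor()(torch.randn(8, 30, 128))
```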
Visualizing predicted results
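One simple way to visualize predicted results, shown here as an illustrative matplotlib sketch rather than the project's actual plotting code: overlay the predicted and ground-truth traces of a driver statistic over time.

```python
# Illustrative visualization of predicted vs. ground-truth driver statistics.
import matplotlib.pyplot as plt

def plot_prediction(predicted, ground_truth, label="Heart rate (bpm)"):
    plt.plot(ground_truth, label="ground truth")
    plt.plot(predicted, label="predicted", linestyle="--")
    plt.xlabel("Frame index")
    plt.ylabel(label)
    plt.legend()
    plt.show()
```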
(1) We utilize the HSV color space instead of the RGB color space to reduce the variance in illumination across pixels.
(2) We apply personalized parameters, derived from Pearson correlation coefficients, to a Conditional Random Field (CRF) to customize predictions for each driver (see the sketch below).
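A brief sketch of these two optimizations, illustrative rather than the authors' code: (1) converting frames to HSV with OpenCV, and (2) computing a per-driver Pearson correlation coefficient between a facial-feature trace and the measured statistic, which could then weight a downstream CRF; the CRF itself is omitted here.

```python
# Illustrative HSV conversion and per-driver Pearson weighting (the CRF that
# consumes these weights is not shown; its exact formulation is the paper's).
import cv2
import numpy as np
from scipy.stats import pearsonr

def to_hsv(frame_bgr):
    # HSV separates chroma from brightness, reducing sensitivity to illumination.
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

def driver_weight(feature_series, target_series):
    # Personalized weight: Pearson correlation between one driver's facial
    # feature trace and the measured statistic (e.g., heart rate).
    r, _ = pearsonr(np.asarray(feature_series), np.asarray(target_series))
    return r
```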
The training and validation accuracy results for DenseNet, LSTM, and BiLSTM are shown below.
The training and validation accuracy results for the RGB and HSV color spaces are shown below.
The comparative validation accuracy of BiLSTM with and without CRF support, for four different drivers, is shown below.
Zeyu Xiong, Jiahao Wang, Wangkai Jin, Junyu Liu, Yicun Duan, Zilin Song, and Xiangjun Peng. 2022. Face2Statistics: User-Friendly, Low-Cost and Effective Alternative to In-Vehicle Sensors/Monitors for Drivers. In Proceedings of the 24th International Conference on Human-Computer Interaction (HCI'22).