From the RWC database, 17 gesture patterns (shown in Figure 5) performed by 6 persons were used in the following recognition experiment. First, each image in a sequence was reduced to half or a quarter of its original size. The first- and second-order PARCOR images were then computed for each frame using online estimation of autocorrelations. The PARCOR images were further reduced to half size, and HLAC features were extracted from them, yielding a 70-dimensional feature vector for each frame. The sequence of feature vectors was fed into an HMM-based recognizer.
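As a rough illustration of the feature-extraction step, the sketch below computes grayscale HLAC features of a single PARCOR image by summing products of shifted copies of the image over 3x3 displacement masks. Only a handful of masks are listed here; the full standard second-order set has 35 masks, which over the two PARCOR images would account for the 70 dimensions mentioned above. The mask list, function name, and use of NumPy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# A minimal sketch of grayscale HLAC feature extraction (up to 2nd order,
# 3x3 neighbourhood).  Each feature is the sum over all pixels r of
# f(r) * f(r + a1) * ... * f(r + aN) for a fixed set of displacements.
# Only a few representative masks are spelled out; the standard set has 35.
MASKS = [
    [],                   # 0th order: sum of f(r)
    [(0, 1)],             # 1st order: f(r) * f(r + (0, 1))
    [(1, 0)],
    [(1, 1)],
    [(0, 1), (0, -1)],    # 2nd-order examples
    [(1, 0), (-1, 0)],
]

def hlac_features(img, masks=MASKS):
    """Compute HLAC features of one 2-D grayscale (e.g. PARCOR) image."""
    img = img.astype(np.float64)
    h, w = img.shape
    feats = []
    for mask in masks:
        # keep a 1-pixel border so every shifted slice has the same shape
        prod = img[1:h - 1, 1:w - 1].copy()
        for dy, dx in mask:
            prod *= img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        feats.append(prod.sum())
    return np.array(feats)

# Per frame, the feature vector would be the concatenation of the HLAC
# features of the first- and second-order PARCOR images.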
In the first experiment, the parameters of the 7-state HMM shown in Figure 3 were estimated from all the samples, and recognition rates were then evaluated on the same data. The results are shown in Table 3 as ``closed''. In the second experiment, the samples from the first three repetitions were used to train the HMM, and recognition rates were evaluated on the fourth repetition. The results are shown in Table 3 as ``open''. These rates are lower than expected, probably because the number of training samples is too small to reliably estimate the parameters of such a high-dimensional HMM.
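The recognizer is not specified beyond the 7-state HMM of Figure 3, so the following sketch shows one common way to realise such a recognizer: one HMM per gesture class, trained on that class's feature-vector sequences, with classification by maximum log-likelihood. The use of hmmlearn's GaussianHMM, the diagonal covariances, and the training settings are assumptions, not the paper's exact setup.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(train_seqs, n_states=7):
    """train_seqs: dict mapping class label -> list of (T_i, D) feature arrays."""
    models = {}
    for label, seqs in train_seqs.items():
        X = np.vstack(seqs)                 # hmmlearn expects stacked sequences
        lengths = [len(s) for s in seqs]    # plus the length of each sequence
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def recognize(models, seq):
    """Return the class whose HMM assigns the highest log-likelihood to seq."""
    return max(models, key=lambda label: models[label].score(seq))
```

A closed test would score the training sequences themselves, while an open test would score sequences from the held-out fourth repetition.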
To reduce the dimension of the feature vectors and to obtain uncorrelated features, we applied Principal Component Analysis (PCA) to the HLAC feature vectors. New features $y_i$ are obtained as linear combinations of the HLAC features $x_j$ with weights $C = [c_{ij}]$ as

\[
y_i = \sum_{j} c_{ij}\, x_j \qquad (10)
\]
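A minimal sketch of this PCA step is given below, assuming scikit-learn's PCA; the rows of the estimated weight matrix C correspond to the principal axes, and the number of retained components (30 here) is an illustrative choice, since this excerpt does not state how many components were kept.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_project(train_feats, test_feats, n_components=30):
    """train_feats, test_feats: (N, 70) arrays of HLAC feature vectors."""
    pca = PCA(n_components=n_components)
    pca.fit(train_feats)                  # rows of pca.components_ play the role of C
    # transform() centers the data and applies C, i.e. y = C (x - mean)
    return pca.transform(train_feats), pca.transform(test_feats)
```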
Table 3: Recognition rates.

Test         | Size of PARCOR images | Rate (%)
closed       | -                     | 61.76
closed       | -                     | 71.08
open         | -                     | 40.69
open         | -                     | 61.03
closed (PCA) | -                     | 79.66
closed (PCA) | -                     | 91.18
open (PCA)   | -                     | 66.67
open (PCA)   | -                     | 82.11