Title
Identifying Human Actions Via Long-Term Recurrent Convolutional Network
Abstract |
Automatic identification of human actions from videos has seen significant advances. Typically, Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks are employed separately for this purpose: a CNN, usually built on a pre-trained model, extracts visual features from individual video frames, and an LSTM then uses these features to predict the action. However, integrating the CNN and LSTM layers into a unified architecture known as the Long-term Recurrent Convolutional Network (LRCN) yields superior performance. Our study shows that a unified LRCN model achieves higher accuracy than using CNN and LSTM models separately.
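As a rough illustration of the unified architecture described in the abstract, the following minimal Keras sketch wraps a small CNN in TimeDistributed layers so the same convolutional feature extractor runs on every frame, then feeds the per-frame feature vectors to an LSTM that outputs the action class. All layer sizes, the 20-frame input length, the 64x64 frame size, and the four action classes are illustrative assumptions, not the paper's actual configuration.

    # Minimal LRCN sketch (assumed configuration, not the paper's): a small
    # per-frame CNN followed by an LSTM over the resulting feature sequence.
    from tensorflow.keras import layers, models

    NUM_CLASSES = 4                 # assumed number of action classes
    SEQ_LEN, H, W, C = 20, 64, 64, 3  # assumed frames per clip and frame size

    model = models.Sequential([
        # CNN applied to every frame via TimeDistributed wrappers
        layers.TimeDistributed(layers.Conv2D(16, (3, 3), activation="relu"),
                               input_shape=(SEQ_LEN, H, W, C)),
        layers.TimeDistributed(layers.MaxPooling2D((4, 4))),
        layers.TimeDistributed(layers.Conv2D(32, (3, 3), activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D((4, 4))),
        layers.TimeDistributed(layers.Flatten()),
        # LSTM models the temporal order of the per-frame feature vectors
        layers.LSTM(32),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()

Because the CNN and LSTM sit in one model, the whole pipeline is trained end to end, in contrast to extracting CNN features first and training an LSTM on them afterwards.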
Keywords |
Machine learning, LSTM, LRCN, CNN, Human Action Identification. |
Research Area
Engineering |
Research Paper
AIJFR2404001 - V2 I4 - 1-12.pdf |
Author(s) |
K R Vignesh, J D Gowthm, Dr. K S Sivle |
Country |
India |