
Baby physical safety monitoring in smart home using action recognition system

Summary: Researchers from the University of Cincinnati recently studied advances in action recognition. Humans can intuitively deduce the actions that occurred between two observed states because the brain operates as a bidirectional communication model, which greatly improves the accuracy of recognition and prediction by drawing on features connected to previous experiences. Over the past decade, deep learning models for action recognition have likewise improved significantly.

Research Challenge: Deep neural networks typically struggle when only a small dataset is available for a specific Action Recognition (AR) task. As with most action recognition problems, the ambiguity of accurately describing activities in spatio-temporal data is a drawback that can be overcome by curating suitable datasets, including careful annotation and preprocessing of video data for the recognition tasks being analyzed.

Findings: The researchers presented a novel lightweight framework that combines transfer learning with a Conv2D LSTM layer to extract features from an I3D model pre-trained on the Kinetics dataset for a new AR task (Smart Baby Care) that requires a smaller dataset and fewer computational resources. They also developed a benchmark dataset and an automated model that couples convolutional LSTM with I3D (ConvLSTM-I3D) to recognize and predict baby activities in a smart baby room. By applying video augmentation to improve performance on the smart baby care task and comparing the framework against other benchmark models, they achieved better performance with fewer computational resources.
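To make the architecture concrete, the sketch below shows one plausible way to place a ConvLSTM head on top of spatio-temporal feature maps produced by a frozen, Kinetics-pre-trained I3D backbone. This is not the authors' exact code: the feature-map shape, number of classes, and layer sizes are illustrative assumptions.

```python
# Hedged sketch: a ConvLSTM classification head over feature maps assumed to
# come from a frozen I3D backbone pre-trained on Kinetics. All shapes and
# hyperparameters below are placeholders, not the paper's actual settings.
import tensorflow as tf

NUM_CLASSES = 5                        # hypothetical number of baby-activity classes
TIME_STEPS = 16                        # feature maps (time steps) per clip
FEAT_H, FEAT_W, FEAT_C = 7, 7, 1024    # assumed I3D feature-map shape

def build_convlstm_head():
    inputs = tf.keras.Input(shape=(TIME_STEPS, FEAT_H, FEAT_W, FEAT_C))
    # ConvLSTM2D models how the spatial feature maps evolve over time.
    x = tf.keras.layers.ConvLSTM2D(filters=64, kernel_size=3,
                                   padding="same", return_sequences=False)(inputs)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.5)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_convlstm_head()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Because the I3D backbone stays frozen and only the small ConvLSTM head is trained, this kind of setup needs far less data and compute than training a full 3D network end to end, which matches the lightweight, transfer-learning goal described above.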

How Labelbox was used: Labelbox was used to create a curated dataset from scratch for this project, in order to develop an AR model for the specific Smart Baby Care task. Our data source was open-source videos from platforms such as YouTube, Instagram, and Pexels. The videos were manually downloaded, trimmed with a Python script using the FFmpeg library, and then annotated frame by frame in Labelbox.
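A minimal sketch of that trimming step is shown below, assuming FFmpeg is invoked from Python via a subprocess call. The paths, timestamps, and function name are placeholders, and FFmpeg must be installed on the system separately.

```python
# Hedged sketch of the preprocessing described above: trim a downloaded clip
# with FFmpeg before frame-wise annotation in Labelbox. Paths and timestamps
# are illustrative placeholders.
import subprocess

def trim_clip(src, dst, start="00:00:05", duration="00:00:10"):
    """Cut a segment from `src` starting at `start` and lasting `duration`."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", start, "-i", src, "-t", duration,
         "-c", "copy", dst],   # stream copy: no re-encoding, fast trimming
        check=True,
    )

trim_clip("raw/baby_video.mp4", "clips/baby_video_clip.mp4")
```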