[Illinois] Recent Advances in Kinect-Based Action and Event Recognition

By Bingbing Ni

Advanced Digital Sciences Center (ADSC), Singapore

Abstract

The emergence of depth cameras brings new opportunities for action and event recognition. In this talk, we will introduce the Kinect camera, which collects RGB+D (color + depth) image data, and briefly overview its applications. We then present two topics from our recent work at ADSC on action and event recognition using Kinect cameras. First, we describe a home-monitoring-oriented benchmark database for human activity recognition, intended to encourage research on activity recognition with RGB+D sensors. We present two RGB+D fusion schemes built on two state-of-the-art feature representation methods for action recognition; experimental results show that incorporating depth features yields superior accuracy. Second, we develop an RGB+D feature fusion method for video event detection, targeting fall prevention in hospital wards. A multi-channel, multi-modal, multi-kernel fusion scheme is proposed for event detection, and experimental results demonstrate high detection accuracy with a low false-alarm rate.
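As a rough illustration of this kind of kernel-level RGB+D fusion (a minimal sketch, not the method presented in the talk), the Python snippet below builds one RBF kernel per modality, combines them with fixed weights, and trains an SVM on the fused kernel. The feature dimensions, kernel parameters, fusion weights, and synthetic data are all illustrative assumptions standing in for real per-clip descriptors; a multiple-kernel-learning step would learn the channel weights rather than fixing them.

    # Sketch of multi-channel, multi-kernel fusion for event detection.
    # All dimensions, gammas, and weights below are assumptions for illustration.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for per-clip RGB and depth descriptors.
    n_train, n_test = 80, 20
    rgb_train   = rng.normal(size=(n_train, 128))
    depth_train = rng.normal(size=(n_train, 64))
    rgb_test    = rng.normal(size=(n_test, 128))
    depth_test  = rng.normal(size=(n_test, 64))
    y_train = rng.integers(0, 2, size=n_train)   # event vs. non-event labels

    # One kernel per modality (channel).
    K_rgb_tr   = rbf_kernel(rgb_train, rgb_train, gamma=1e-2)
    K_depth_tr = rbf_kernel(depth_train, depth_train, gamma=1e-2)
    K_rgb_te   = rbf_kernel(rgb_test, rgb_train, gamma=1e-2)
    K_depth_te = rbf_kernel(depth_test, depth_train, gamma=1e-2)

    # Fixed fusion weights for the sketch; MKL would learn these.
    w_rgb, w_depth = 0.6, 0.4
    K_train = w_rgb * K_rgb_tr + w_depth * K_depth_tr
    K_test  = w_rgb * K_rgb_te + w_depth * K_depth_te

    # SVM on the fused (precomputed) kernel.
    clf = SVC(kernel="precomputed").fit(K_train, y_train)
    print(clf.predict(K_test))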

Bio

Bingbing Ni is a researcher at the Advanced Digital Sciences Center (ADSC), Singapore. He received his Bachelor's degree (B.Eng.) in Electrical Engineering from Shanghai Jiaotong University (SJTU), China, in 2005. He began his Ph.D. studies at the National University of Singapore in 2006 and submitted his Ph.D. thesis in September 2010. His research interests include computer vision, multimedia computing, and machine learning.

Cite this work

Researchers should cite this work as follows:

  • Bingbing Ni (2012), "[Illinois] Recent Advances in Kinect-Based Action and Event Recognition," https://nanohub.org/resources/14383.

Submitter

Charlie Newman, NanoBio Node

University of Illinois at Urbana-Champaign
