UNN-6: An Activities of Daily Life Dataset
for Fall Detection and Recognition

Introduction

The UNN-6 dataset includes a colour (RGB) clip set and an infrared (IR) clip set, with sample frames shown above. Each set contains 36 clips (i.e. 6 videos per class) recorded with the Microsoft® Kinect™ v2 in an indoor environment with varying lighting conditions and dynamic backgrounds (e.g. a TV playing in the background). To simulate a real-world CCTV system and improve computational efficiency, all video clips are uniformly resized from 1920×1080 to 320×240. Each clip in both sets is cropped to 150 frames at a frame rate of 25 fps, i.e. each video lasts 6 seconds, as a fall or similar action typically happens within 6 seconds. In addition, the generated IR clips do not include any depth information, as depth cameras are still costly and not widely deployed for digital healthcare.
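
The preprocessing described above (resizing to 320×240 and cropping to 150 frames at 25 fps) can be reproduced with a few lines of OpenCV. The sketch below is not the authors' original pipeline; the file names and the preprocess_clip helper are hypothetical, and it assumes the raw Kinect v2 recordings are available as standard video files.

    # Minimal preprocessing sketch (not the authors' pipeline).
    # Assumes OpenCV is installed and "raw_clip.avi" is a hypothetical
    # 1920x1080 Kinect v2 recording.
    import cv2

    TARGET_SIZE = (320, 240)   # width x height, matching the UNN-6 resolution
    TARGET_FRAMES = 150        # 6 seconds at 25 fps
    TARGET_FPS = 25.0

    def preprocess_clip(src_path: str, dst_path: str) -> None:
        """Resize a raw clip to 320x240 and keep its first 150 frames."""
        cap = cv2.VideoCapture(src_path)
        fourcc = cv2.VideoWriter_fourcc(*"XVID")
        writer = cv2.VideoWriter(dst_path, fourcc, TARGET_FPS, TARGET_SIZE)

        written = 0
        while written < TARGET_FRAMES:
            ok, frame = cap.read()
            if not ok:  # stop early if the source clip has fewer than 150 frames
                break
            writer.write(cv2.resize(frame, TARGET_SIZE, interpolation=cv2.INTER_AREA))
            written += 1

        cap.release()
        writer.release()

    if __name__ == "__main__":
        preprocess_clip("raw_clip.avi", "unn6_clip_320x240.avi")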

Download

  • UNN-6 Dataset
    • contains both Colour Set and InfraRed Set (41 MB)
    • [Google Drive]
  • UNN-6 Colour Set
  • UNN-6 InfraRed Set

Citation

If you use this dataset in your research, please cite the following paper:
@InProceedings{ryan17FallDetection,
    author    = {Cameron, Ryan and Zuo, Zheming and Sexton, Graham and Yang, Longzhi},
    title     = {A Fall Detection/Recognition System and an Empirical Study of Gradient-Based Feature Extraction Approaches},
    booktitle = {Advances in Computational Intelligence Systems},
    year      = {2018},
    publisher = {Springer International Publishing},
    address   = {Cham},
    pages     = {276--289},
    isbn      = {978-3-319-66939-7}
}

For more experimental results on the UNN-6 dataset, please refer to the following work:
@InProceedings{zuo18SstVladSstFv,
    author    = {Zheming Zuo and Daniel Organisciak and Hubert P. H. Shum and Longzhi Yang},
    title     = {Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition},
    booktitle = {2018 BMVA British Machine Vision Conference (BMVC)},
    pages     = {321.1--321.11},
    year      = {2018}
}