Robust feature matching in long-running poor quality videos
We describe a methodology designed to match key-point and region-based features in real-world images acquired from long-running security cameras with no control over the environment. We detect frame duplication and images from static scenes with no activity to avoid processing visually identical images, and we describe a novel blur-sensitive feature detection method, a combinatorial feature descriptor, and a distance calculation that efficiently unites texture and colour attributes to discriminate feature correspondences in low-quality images. Our methods are tested by performing key-point matching on real-world security images, such as outdoor closed-circuit television videos that are low quality and acquired in uncontrolled conditions, with visual distortions caused by weather, crowded scenes, emergency lighting, or the high angle of the camera mounting. We demonstrate an improvement in the accuracy of matching key points between images compared with state-of-the-art feature descriptors. We use key-point features from a Harris corner detector, scale-invariant feature transform (SIFT), speeded-up robust features (SURF), binary robust invariant scalable keypoints (BRISK), and features from accelerated segment test (FAST), as well as the MSER and MSCR region detectors, to provide a comprehensive analysis of our generic method. We demonstrate feature matching using a 138-dimensional descriptor that improves on the matching performance of a state-of-the-art 384-dimensional colour descriptor with just 36% of the storage requirements.
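As a rough illustration of the kind of combined texture-and-colour distance the abstract describes, the sketch below blends a normalised Hamming distance over a binary texture descriptor with a normalised Euclidean distance over a colour descriptor. This is not the authors' published formulation; the equal-weight blend (`alpha`), the normalisation choices, and the descriptor shapes are all assumptions for illustration only.

```python
import numpy as np

def combined_distance(tex_a, tex_b, col_a, col_b, alpha=0.5):
    """Hypothetical combined distance: unite a binary texture descriptor
    (Hamming distance, scaled to [0, 1] by bit count) with a colour
    descriptor (Euclidean distance, scaled by sqrt of its length, so
    per-channel values in [0, 1] keep the result in [0, 1]).
    `alpha` weights texture against colour; 0.5 is an arbitrary choice."""
    hamming = np.count_nonzero(tex_a != tex_b) / tex_a.size
    euclid = np.linalg.norm(col_a - col_b) / np.sqrt(col_a.size)
    return alpha * hamming + (1.0 - alpha) * euclid

# Identical descriptors give distance 0; maximally different ones give 1.
tex = np.array([0, 1, 1, 0], dtype=np.uint8)
col = np.array([0.2, 0.5, 0.9])
print(combined_distance(tex, tex, col, col))            # 0.0
print(combined_distance(tex, 1 - tex, np.zeros(3), np.ones(3)))  # 1.0
```

A single fused score like this lets one nearest-neighbour search rank candidate correspondences by texture and colour jointly, rather than matching on each attribute separately and reconciling the results afterwards.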
Authors: Henderson, C.; Izquierdo, E.
College Publications