We propose a novel method for person re-identification that extracts features from images of people using co-occurrence attributes. Existing methods extract features based on simple attributes such as gender, age, hairstyle, or clothing. Our method instead extracts more informative features using co-occurrence attributes, which are combinations of physical and adhered human characteristics (e.g., a man wearing a suit, a 20-something woman, or long hair with a skirt). The co-occurrence attributes were designed using prior knowledge of the search methods used by public people-search websites. Our method first trains a classifier for each co-occurrence attribute. Given an input image of a person, we form a feature vector from the confidences estimated by the classifiers and compute the distance between the input and reference vectors using a metric learning technique. Experiments on several publicly available datasets show that our method substantially improves matching performance compared with existing methods. We also demonstrate how to analyze which co-occurrence attributes are most important.
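The matching pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy "classifiers", the identity metric `M`, and all function names are hypothetical stand-ins for the trained co-occurrence attribute classifiers and the learned metric.

```python
import numpy as np

def attribute_feature(image, classifiers):
    """Stack each co-occurrence attribute classifier's confidence
    into a single feature vector for the input image."""
    return np.array([clf(image) for clf in classifiers])

def learned_distance(x, y, M):
    """Mahalanobis-style distance between two feature vectors under a
    metric M (here a placeholder; in the paper it is learned offline)."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

# Toy stand-ins for trained co-occurrence attribute classifiers,
# each mapping an image to a confidence in [0, 1].
classifiers = [
    lambda img: float(img.mean() > 0.5),  # e.g., "man wearing a suit"
    lambda img: float(img.std() > 0.2),   # e.g., "long hair with a skirt"
    lambda img: float(img.max()),         # e.g., "20-something woman"
]

query_img = np.full((8, 8), 0.7)    # stand-in for the input image
gallery_img = np.full((8, 8), 0.3)  # stand-in for a reference image

query = attribute_feature(query_img, classifiers)
gallery = attribute_feature(gallery_img, classifiers)
M = np.eye(len(classifiers))  # identity metric as a placeholder

print(learned_distance(query, gallery, M))
```

In the actual method, smaller distances would rank gallery images as more likely matches for the query person.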
[Publication (Japanese)]