Learning discriminative, style- and view-invariant descriptors that capture robust person features is a key challenge in person re-identification (re-ID). Most deep re-ID models learn single-scale feature representations, which fail to capture compact, style-invariant descriptors. In this paper,
we present a multi-branch Siamese deep neural network with multiple classifiers to address these issues. Multi-branch learning produces a stronger descriptor by enriching the global features of a person with fine-grained information. Camera-to-camera image translation is performed
with a generative adversarial network to generate diverse training data and to make the learned features style-invariant. Experimental results on benchmark datasets demonstrate that the proposed method outperforms other state-of-the-art methods.
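To make the described architecture concrete, the sketch below shows one plausible reading of a multi-branch Siamese network with multiple classifiers: a shared backbone feeds several branches, each with its own embedding layer and identity classifier, and both images of a pair pass through the same weights. All names (MultiBranchSiamese, feat_dim, num_branches) and the tiny convolutional trunk are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiBranchSiamese(nn.Module):
    """Illustrative sketch (not the paper's implementation): a shared
    backbone feeds several branches, each with its own embedding layer
    and identity classifier."""

    def __init__(self, num_ids, feat_dim=256, num_branches=3):
        super().__init__()
        # Shared convolutional trunk (a stand-in for e.g. a ResNet backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # One embedding head and one classifier per branch.
        self.embeddings = nn.ModuleList(
            nn.Linear(128, feat_dim) for _ in range(num_branches))
        self.classifiers = nn.ModuleList(
            nn.Linear(feat_dim, num_ids) for _ in range(num_branches))

    def forward(self, x):
        g = self.backbone(x).flatten(1)              # global feature
        feats = [e(g) for e in self.embeddings]      # per-branch embeddings
        logits = [c(f) for c, f in zip(self.classifiers, feats)]
        return feats, logits


# Siamese usage: both images of a pair share the same weights; branch-wise
# distances can drive a metric loss while each classifier supervises identity.
model = MultiBranchSiamese(num_ids=751)   # arbitrary number of training identities
img_a = torch.randn(8, 3, 256, 128)
img_b = torch.randn(8, 3, 256, 128)
feats_a, _ = model(img_a)
feats_b, _ = model(img_b)
pair_dist = sum(F.pairwise_distance(fa, fb) for fa, fb in zip(feats_a, feats_b))
```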