Videos can be represented using raw pixel values, gradients, or optical flow. In this work, we propose a better low-level representation of video, the “Particle Flow Field” (PFF), which is based on optical flow and particle advection. Optical flow at each pixel indicates where that pixel moves in the next frame; it therefore captures the motion of each pixel only between two consecutive frames. To gain a better understanding of how a pixel moves and evolves over time, we perform particle advection: each pixel in a given frame is treated as a particle, and we track the motion of that particle over a period of time. Unlike optical flow, which gives the velocity of a particle between two frames, the particle flow field captures the velocity of the same particle over n consecutive frames. The particle flow field thus implicitly contains the information needed to generate particle trajectories. Our low-level representation, PFF, is novel in the following ways:
- “Particle Flow Field” is a more robust representation of motion in videos than optical flow.
- PFF can be substituted into any action recognition framework where gradients or optical flow are used.
- Motion descriptors generated using PFF outperform those generated from other low-level representations, such as gradients and optical flow, in the Bag of Visual Words framework.
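To make the advection step above concrete, the following is a minimal NumPy sketch of computing a particle flow field from a sequence of dense optical flow fields. It is an illustration under our own assumptions, not the authors' implementation: the function name `advect_particles`, the `(H, W, 2)` flow layout, and the nearest-neighbour sampling (bilinear interpolation would be used in practice) are all choices made for this example.

```python
import numpy as np

def advect_particles(flows):
    """Advect one particle per pixel of frame 0 through a flow sequence.

    flows: list of n arrays of shape (H, W, 2); flows[t][y, x] is the
           (dx, dy) displacement of pixel (x, y) from frame t to t+1.
    Returns a (n, H, W, 2) particle flow field: the per-frame velocity
    of the particle that started at each pixel of the first frame.
    """
    h, w = flows[0].shape[:2]
    # Initialise one particle at every pixel of the first frame.
    xs, ys = np.meshgrid(np.arange(w, dtype=float),
                         np.arange(h, dtype=float))
    pff = np.zeros((len(flows), h, w, 2))
    for t, flow in enumerate(flows):
        # Sample the flow at each particle's current (sub-pixel) position;
        # nearest-neighbour lookup here, bilinear interpolation in practice.
        xi = np.clip(np.round(xs).astype(int), 0, w - 1)
        yi = np.clip(np.round(ys).astype(int), 0, h - 1)
        v = flow[yi, xi]        # (H, W, 2): velocity of each particle
        pff[t] = v              # record this frame's particle velocities
        xs += v[..., 0]         # advect: move each particle along the flow
        ys += v[..., 1]
    return pff
```

Summing `pff` along the time axis recovers each particle's trajectory relative to its start, which is the sense in which the particle flow field implicitly encodes particle trajectories.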