Optical flow cores

Optical flow computation estimates the two-dimensional motion field that the scene projects onto the image plane of a camera, using a sequence of captured frames. In general, we assume that image intensity remains constant across consecutive frames, and usually also smooth motion and small displacements.
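
As a rough illustration of these assumptions (not a description of the cores themselves), the sketch below estimates a one-scale flow field with a basic Lucas-Kanade scheme: brightness constancy gives Ix*u + Iy*v + It ≈ 0, which is solved by least squares over a small window. The window size and the conditioning threshold are arbitrary choices for this example.

```python
import numpy as np

def lucas_kanade_single_scale(frame0, frame1, window=7):
    """Estimate a dense one-scale flow field (u, v) between two grayscale
    frames, assuming brightness constancy and small displacements."""
    I0 = frame0.astype(np.float64)
    I1 = frame1.astype(np.float64)

    # Spatial gradients (central differences) and temporal derivative.
    Iy, Ix = np.gradient(I0)
    It = I1 - I0

    half = window // 2
    h, w = I0.shape
    u = np.zeros((h, w))
    v = np.zeros((h, w))

    for y in range(half, h - half):
        for x in range(half, w - half):
            # Collect derivatives over the local window.
            ix = Ix[y-half:y+half+1, x-half:x+half+1].ravel()
            iy = Iy[y-half:y+half+1, x-half:x+half+1].ravel()
            it = It[y-half:y+half+1, x-half:x+half+1].ravel()

            # Solve the 2x2 least-squares system (A^T A) [u v]^T = -A^T It.
            A = np.stack([ix, iy], axis=1)
            AtA = A.T @ A
            if np.linalg.cond(AtA) < 1e6:   # skip ill-conditioned regions
                u[y, x], v[y, x] = np.linalg.solve(AtA, -A.T @ it)
    return u, v
```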

Optical flow is a low-level vision feature widely used in many applications, such as motion-in-depth estimation, structure from motion, or ego-motion. Furthermore, its potential applications encompass video stabilization, object tracking, segmentation, or active vision. All of these are useful in a wide range of fields such as autonomous robot navigation, video surveillance, or driver assistance systems. Some results of our implementation are shown in Fig 1.

As mentioned in Image-processing-core-library, all of this work is supported by several publications in international scientific journals. However, the main difference is that here we include only the cores for the one-scale computation, not for the whole coarse-to-fine process. The reason is that the coarse-to-fine structure requires intensive use of memory and is therefore platform-dependent in our case.
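
For context only, a coarse-to-fine driver around such a one-scale routine typically looks like the sketch below; the pyramid depth, the scaling factor, and the warping step are assumptions of this example, and they are precisely the memory-intensive parts that are left out of the cores and handled in a platform-dependent way.

```python
import numpy as np
from scipy.ndimage import zoom, map_coordinates

def coarse_to_fine(frame0, frame1, one_scale_flow, levels=4, scale=0.5):
    """Hypothetical coarse-to-fine driver: build image pyramids, estimate
    flow at the coarsest level, then upsample, warp and refine per level."""
    # Build pyramids from fine (index 0) to coarse.
    pyr0, pyr1 = [frame0.astype(np.float64)], [frame1.astype(np.float64)]
    for _ in range(levels - 1):
        pyr0.append(zoom(pyr0[-1], scale))
        pyr1.append(zoom(pyr1[-1], scale))

    u = np.zeros_like(pyr0[-1])
    v = np.zeros_like(pyr0[-1])
    for I0, I1 in zip(reversed(pyr0), reversed(pyr1)):
        # Upsample the previous estimate and rescale its magnitude.
        fy = I0.shape[0] / u.shape[0]
        fx = I0.shape[1] / u.shape[1]
        u = zoom(u, (fy, fx), order=1) * fx
        v = zoom(v, (fy, fx), order=1) * fy
        # Warp the second frame towards the first with the current flow.
        yy, xx = np.mgrid[0:I0.shape[0], 0:I0.shape[1]]
        warped = map_coordinates(I1, [yy + v, xx + u], order=1, mode='nearest')
        # Refine with the one-scale estimator and accumulate the increment.
        du, dv = one_scale_flow(I0, warped)
        u, v = u + du, v + dv
    return u, v
```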

Fig 1. Optical flow estimation for some examples of the Middlebury dataset.

Contact

Javier Serrano