Research Results
 

Several types of devices exist for capturing real light fields, ranging from arrays of cameras that capture the scene from slightly different viewpoints, to single cameras mounted on moving gantries, and plenoptic cameras, which use arrays of microlenses placed in front of the photosensor to obtain angular information about the captured scene.

In this project we consider both light fields captured by arrays of cameras (sparse light fields with large baselines) and by microlens-based cameras (plenoptic cameras), which yield dense light fields with narrow baselines; the two types of light fields present different characteristics. The problems addressed include in particular:

  • Compressive acquisition, low-rank and sparse approximation of light fields
  • Graph-based representation and compression of light fields
  • Learning dictionaries, subspaces, and manifolds for light fields
  • Scene analysis from light fields: depth estimation, scene flow analysis
  • Compression and restoration
  • Light field editing: segmentation, edit propagation, inpainting, super-resolution, restoration
  • Light field acquisition

    Decoding Raw Light Fields Captured by Plenoptic Cameras

    The raw light field data captured by plenoptic cameras is a lenslet image from which sub-aperture images (or views) can be extracted. A Matlab implementation of this decoding pipeline is available in the Matlab Light Field Toolbox. The decoding process, which extracts the sub-aperture images from the lenslet image, includes several steps: de-vignetting, color demosaicing, conversion from the hexagonal to a rectangular sampling grid, and color correction. We first analyzed the different steps of this decoding pipeline to identify the issues leading to various artefacts in the extracted views. This analysis led us to propose a method for white-image-guided color demosaicing of the lenslet image. Similarly, we have proposed an interpolation guided by the white image for aligning the micro-lens array with the sensor. [More here ...]
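
    As a rough illustration of two of these steps, the sketch below assumes the lenslet image has already been demosaicked and resampled onto a rectangular grid; the function names, and the assumption of n_u x n_v pixels behind each micro-lens, are ours, not the toolbox's API.

    ```python
    import numpy as np

    def devignette(raw, white, eps=1e-6):
        # De-vignetting: divide the raw lenslet image by the white image
        # to compensate for the intensity fall-off under each micro-lens.
        return raw / np.maximum(white, eps)

    def extract_views(lenslet, n_u, n_v):
        # Extract sub-aperture views from a demosaicked lenslet image on a
        # rectangular grid, with n_u x n_v pixels behind each micro-lens:
        # view (v, u) gathers the (v, u)-th pixel under every micro-lens.
        h, w, c = lenslet.shape
        lenslet = lenslet[:h - h % n_v, :w - w % n_u]
        blocks = lenslet.reshape(h // n_v, n_v, w // n_u, n_u, c)
        return blocks.transpose(1, 3, 0, 2, 4)   # shape (n_v, n_u, y, x, c)
    ```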

    Scene analysis from light fields

    Depth estimation with occlusion handling from a sparse set of light field views

    We have addressed the problem of depth estimation for every viewpoint of a dense light field, exploiting information from only a sparse set of views. This problem is particularly relevant for applications such as light field reconstruction from a subset of views, view synthesis, and compression. Unlike most existing methods for scene depth estimation from light fields, the proposed algorithm computes disparity (or equivalently depth) for every viewpoint while taking occlusions into account. In addition, it preserves the continuity of the depth space and does not require prior knowledge of the depth range. The experiments show that, on both synthetic and real light fields, our algorithm achieves performance competitive with state-of-the-art algorithms which exploit the entire light field and usually yield a depth map for the center viewpoint only. [More here ...]
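
    For reference, a minimal and deliberately naive disparity estimator between two horizontally adjacent views is sketched below; it is a plain winner-take-all sum-of-absolute-differences matcher, not the occlusion-aware algorithm described above.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def disparity_wta(ref, other, max_disp, win=7):
        # Winner-take-all disparity between two horizontally adjacent
        # grayscale views: for each pixel of ref, keep the integer shift
        # with the lowest windowed absolute-difference cost.
        h, w = ref.shape
        cost = np.full((max_disp + 1, h, w), np.inf)
        for d in range(max_disp + 1):
            diff = np.abs(ref[:, d:] - other[:, :w - d])
            cost[d, :, d:] = uniform_filter(diff, size=win)
        return cost.argmin(axis=0)   # integer disparity map
    ```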

    Representations of light fields: low-rank, sparse and graph representations

    Homography-based Low-Rank Approximation for Light Field Compression

    We study the problem of low-rank approximation of light fields for compression. A homography-based approximation method has been developed which jointly searches for homographies aligning the different views of the light field and for the low-rank approximation matrices. A coding algorithm relying on this homography-based low-rank approximation has then been designed. The two key parameters of the coding algorithm (rank and quantization parameter) are, for a given target bit-rate, predicted with a model learned from texture and disparity features of input light fields, using random trees. The results show the benefit of jointly optimizing the homographies and the low-rank approximation, as well as PSNR-rate performance gains compared with directly applying HEVC on the light field views re-structured as a video sequence. [More here ...]
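
    The factorization step can be sketched as follows, assuming the views have already been warped by their homographies and vectorized as the columns of a matrix M; the joint homography search is omitted.

    ```python
    import numpy as np

    def low_rank_approx(M, k):
        # Rank-k approximation via truncated SVD of the matrix whose
        # columns are the vectorized, homography-aligned views. In the
        # full method the homographies and this factorization are
        # optimized jointly; only the factorization is sketched here.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k]
    ```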

    Graph-based Representation of Light Fields

    We have explored the use of graph-based representations for light fields. The graph connections are derived from the disparity and hold just enough information to synthesize the other sub-aperture images from one reference image of the light field. Based on the concept of epipolar segments, the graph connections are sparsified (the less important segments are removed) by a rate-distortion optimization. The graph vertices and connections are compressed using HEVC. The graph connections capturing the inter-view dependencies then serve as the support of a Graph Fourier Transform used to encode the disoccluded pixels. [More here ...]
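
    A minimal sketch of a Graph Fourier Transform on such a graph is given below, assuming a symmetric adjacency matrix W built from the disparity-derived connections; the construction of W itself is not shown.

    ```python
    import numpy as np

    def graph_fourier_transform(W, x):
        # W: symmetric adjacency matrix of the graph (edges derived from
        # disparity); x: signal on the vertices (e.g. pixel values).
        # The GFT basis is the eigenvector set of the combinatorial
        # Laplacian L = D - W, sorted by increasing graph frequency.
        L = np.diag(W.sum(axis=1)) - W
        _, U = np.linalg.eigh(L)    # eigh returns ascending eigenvalues
        return U.T @ x              # spectral coefficients of x
    ```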

    Graph-based Transforms for Predictive Light Field Compression based on Super-Pixels

    We have explored the use of graph-based transforms to capture correlation in light fields. We consider a scheme in which view synthesis is used as a first step to exploit inter-view correlation. Local graph-based transforms (GT) are then used for energy compaction of the residue signals. The structure of the local graphs is derived from a coherent super-pixel over-segmentation of the different views. The GT is computed and applied in a separable manner, with a first spatial unweighted transform followed by an inter-view GT; for the inter-view GT, both unweighted and weighted variants have been considered. Using separable instead of non-separable transforms allows us to limit the complexity inherent in the computation of the basis functions. A simple dedicated coding scheme is then described for the proposed GT-based light field decomposition. Experimental results show a significant improvement with our method compared with the CNN-based view synthesis method and with direct HEVC coding of the light field views. [More here ...]
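
    The separable application can be sketched as follows for a single super-pixel, assuming adjacency matrices W_spatial (pixels within a view) and W_view (across views) are given; building these graphs from the super-pixel segmentation is not shown.

    ```python
    import numpy as np

    def laplacian_basis(W):
        # Eigenvectors of the (possibly weighted) graph Laplacian.
        L = np.diag(W.sum(axis=1)) - W
        return np.linalg.eigh(L)[1]

    def separable_gt(W_spatial, W_view, residues):
        # Separable graph transform for one super-pixel: residues has
        # shape (n_views, n_pixels). A spatial GT is applied within each
        # view first, then an inter-view GT across the views.
        U_s = laplacian_basis(W_spatial)   # (n_pixels, n_pixels)
        U_v = laplacian_basis(W_view)      # (n_views, n_views)
        spatial_coeffs = residues @ U_s    # spatial GT per view
        return U_v.T @ spatial_coeffs      # inter-view GT per band
    ```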

    Rate-Distortion Optimized Super-Ray Merging for Light Field Compression

    We describe a method for constructing super-rays to be used as the support of 4D shape-adaptive transforms. Super-rays are used to capture inter-view and spatial redundancy in light fields. Here, we consider a scheme in which a first step of CNN-based view synthesis is used to remove inter-view correlation. The super-ray-based transforms are then applied to the prediction residues. To ensure that the super-ray segmentation is highly correlated with the residues to be encoded, the super-rays are computed on synthesized residues and optimized in a rate-distortion sense. Experimental results show that the proposed coding scheme outperforms HEVC-based schemes at low bitrates. [More here ...]
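
    The rate-distortion merging decision itself reduces to a Lagrangian cost comparison; the sketch below is a schematic version with hypothetical distortion and rate measurements, not the full merging algorithm.

    ```python
    def merge_lowers_cost(dist_joint, rate_joint, dist_split, rate_split, lam):
        # Merge two neighbouring super-rays when coding them with a single
        # joint transform has a lower Lagrangian cost J = D + lam * R than
        # coding them separately (all quantities measured upstream).
        return dist_joint + lam * rate_joint <= dist_split + lam * rate_split
    ```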

    Light field editing

    Light field inpainting via low-rank matrix completion

    We have developed a novel method for propagating the inpainting of the central view of a light field to all the other views, so that all views are inpainted in a consistent manner. After generating a set of warped versions of the inpainted central view with random homographies, both the original light field views and the warped ones are vectorized and concatenated into a matrix. Because of the redundancy between the views, the matrix satisfies a low-rank assumption, enabling us to fill the region to inpaint by low-rank matrix completion. A new matrix completion algorithm, better suited to the inpainting application than existing methods, has also been developed. Unlike most existing light field inpainting algorithms, our method does not require any depth prior. Another interesting feature of the low-rank approach is its ability to cope with color and illumination variations between the input views of the light field. [More here ...]
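
    As a point of reference, the sketch below uses singular value thresholding (SVT), a standard low-rank matrix completion baseline, rather than the dedicated algorithm developed in this work; mask marks the known entries of the matrix of vectorized views.

    ```python
    import numpy as np

    def svt_complete(M, mask, tau=5.0, n_iter=100):
        # SVT-style completion: unknown (masked-out) entries are filled
        # by iteratively shrinking the singular values while known
        # entries are kept fixed at their observed values.
        X = np.where(mask, M, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt
            X = np.where(mask, M, X)   # re-impose known pixels
        return X
    ```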

    Light field inpainting via PDE-based diffusion in epipolar plane images

    This paper presents a novel approach for light field editing. The problem of propagating an edit from a single view to the rest of the light field is solved by a structure-tensor-driven diffusion on the epipolar plane images. The proposed method is shown to be useful for two applications: light field inpainting and recolorization. While light field recolorization is obtained with a straightforward diffusion, the inpainting application is particularly challenging, as the structure tensors accounting for disparities are unknown under the occluding mask. We address this issue by inpainting the disparity by means of an interpolation constrained by superpixel boundaries. Results on synthetic and real light field images demonstrate the effectiveness of the proposed method. [More here ...]
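
    The diffusion is steered by the structure tensor of each epipolar plane image; a minimal sketch of its computation for a grayscale EPI, using standard derivative and Gaussian filters, is given below. The diffusion itself is not shown.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def epi_structure_tensor(epi, sigma=2.0):
        # Smoothed 2x2 structure tensor of an epipolar plane image: its
        # dominant orientation follows the EPI line slopes (hence the
        # disparities) and is what steers the anisotropic diffusion.
        gx = sobel(epi, axis=1)
        gy = sobel(epi, axis=0)
        jxx = gaussian_filter(gx * gx, sigma)
        jxy = gaussian_filter(gx * gy, sigma)
        jyy = gaussian_filter(gy * gy, sigma)
        return jxx, jxy, jyy
    ```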

    Fast light field inpainting using angular warping with a color-guided disparity interpolation

    This paper describes a method for fast and efficient inpainting of light fields. We first revisit disparity estimation based on smoothed structure tensors and analyze the typical artefacts and their impact on the inpainting problem. We then propose an approach which is computationally fast while giving a more coherent disparity in the masked region. This disparity is then used to propagate, by angular warping, the inpainted texture of one view to the entire light field. Experiments show the ability of our approach to yield visually appealing results while running considerably faster than prior light field inpainting methods. [More here ...]
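
    Below is a minimal sketch of the angular warping step, assuming the disparity map of the inpainted reference view is already available; it uses a nearest-neighbour forward warp and omits occlusion handling.

    ```python
    import numpy as np

    def warp_to_view(src, disp, du, dv):
        # Propagate the inpainted reference view src to the view at
        # angular offset (du, dv) by shifting every pixel by its
        # disparity (nearest-neighbour splat, no occlusion handling).
        h, w = disp.shape
        ys, xs = np.mgrid[0:h, 0:w]
        xt = np.clip(np.rint(xs + du * disp).astype(int), 0, w - 1)
        yt = np.clip(np.rint(ys + dv * disp).astype(int), 0, h - 1)
        out = np.zeros_like(src)
        out[yt, xt] = src[ys, xs]
        return out
    ```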

    Light field restoration and super-resolution

    Light field super-resolution based on projections between subspaces of patch volumes

    We have developed an example-based super-resolution algorithm for light fields, which increases the spatial resolution of the different views in a consistent manner across all sub-aperture images of the light field. The algorithm learns linear projections between reduced-dimension subspaces in which the patch volumes extracted from the light field reside. The method is extended to cope with angular super-resolution, where 2D patches of intermediate sub-aperture images are approximated from neighbouring sub-aperture images using multivariate ridge regression. Experimental results show significant quality improvement compared with state-of-the-art single-image super-resolution methods applied to each view separately, as well as with a recent light field super-resolution technique based on deep learning. [More here ...]
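
    Multivariate ridge regression has a simple closed form; the sketch below learns a generic linear map from low-resolution to high-resolution patch descriptors, with hypothetical training matrices X_lo and X_hi whose corresponding columns are paired examples.

    ```python
    import numpy as np

    def ridge_projection(X_lo, X_hi, lam=0.1):
        # Closed-form multivariate ridge regression: learn P such that
        # X_hi is approximately P @ X_lo, where corresponding columns of
        # X_lo and X_hi are paired low-/high-resolution descriptors.
        d = X_lo.shape[0]
        return X_hi @ X_lo.T @ np.linalg.inv(X_lo @ X_lo.T + lam * np.eye(d))
    ```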

    Light field super-resolution based on deep convolutional networks with low-rank priors

    This paper describes a learning-based spatial light field super-resolution method that restores the entire light field with consistency across all sub-aperture images. The algorithm first uses optical flow to align the light field, then reduces its angular dimension using a low-rank approximation. The linearly independent columns of the resulting low-rank model are treated as an embedding, which is restored using a deep convolutional neural network (DCNN). The super-resolved embedding is then used to reconstruct the remaining sub-aperture images: the original disparities are restored using inverse warping, and missing pixels are approximated using a novel light field inpainting algorithm. Experimental results show that the proposed method outperforms existing light field super-resolution algorithms, including those based on convolutional networks. [More here ...]
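
    The low-rank angular reduction can be sketched as below, assuming the views have been aligned with optical flow beforehand; the DCNN restoration of the embedding and the inverse warping are not shown.

    ```python
    import numpy as np

    def low_rank_embedding(aligned_views, k):
        # Angular dimensionality reduction: stack the aligned, vectorized
        # views as columns and keep a rank-k model. The k basis images
        # play the role of the embedding restored by the DCNN; the
        # coefficients reconstruct the remaining views from it.
        M = np.stack([v.ravel() for v in aligned_views], axis=1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U[:, :k] * s[:k], Vt[:k]   # (embedding, per-view weights)
    ```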