# Clustering-based methods for fast epitome generation

contact: M. Alain, C. Guillemot, D. Thoreau, P. Guillotel

## Context and Goal

The concept of epitome was first introduced by Jojic et al. as the condensed representation (meaning its size is only a fraction of the original size) of an image containing the essence of its the textural properties. This original epitomic model is based on a patch-based probabilistic approach, and has different applications in segmentation, denoising, recognition, indexing or texture synthesis.
Several epitomic models have been since proposed, such as the factorized representation of Wang et al. dedicated to texture mapping, or its extension designed for image coding purposes by Chérigui et al. The epitome is in this case the union of epitome charts which are pieces of repeatable textures found in the image. The search for self-similar or repeatable texture patterns, based on the KLT or a block matching (BM) algorithm, is known to be memory and time consuming.
In this work, we propose a clustering-based technique to reduce the self-similarities search complexity.

## Approach

### Epitome generation

We summarize here the epitome generation process as proposed by Chérigui et al, in which the input image $$I$$ is factored in an epitome $$E$$ and an assignation map $$\phi$$. The input image is divided into a regular grid of non-overlapping blocks $$B_i$$ (block-grid) and each block will be reconstructed from an epitome patch. The epitome itself is composed of disjoint texture pieces called "epitome chart" (see Fig. 1). The assignation map links the patches from the epitome to the input image blocks. For more detailed explanations, see the papers listed at the end of this page.

 Epitome Reconstructed image 34 % of input image PSNR = 35.53 dB Fig. 1 - Epitome of a Foreman frame (left) and the corresponding reconstruction (right).

#### Exhaustive self-similarities search

The method first proceeds by finding the self-similarities among the image. The goal of this step is to find for each non-overlapping block in the input image a list of matching patches such that the mean square error (MSE) between a block and its matching patches is below a matching threshold $$\varepsilon_M$$. This parameter eventually determines the size of the epitome. The matching patches are found through exhaustive search in the pixel-grid (see Fig. 2), and the assignation map $$\phi$$ is composed of translation parameters. $$\forall B_i \in I, ML(B_i) = \{M_{i,0},M_{i,1},\dots\}, s.t. \forall j, MSE(B_i,M_{i,j}) \leq \varepsilon_M$$

Fig. 2 - Exhaustive self-similarities search.

#### Epitome generation

The epitome charts are generated based on an iterative process. candidate patches used to initialize or grow the epitome charts are found by reversing the lists of matching patches obtained at the previous step. The selection of the best candidate is then performed based on a rate-distortion optimization (RDO) criterion, which minimizes the error between the input image $$I$$ and the image $$I'$$ as well as the size of the epitome. This criterion is summarized in the equation below, where $$R(E)$$ represents the size of the epitome estimated as the number of pixels. $$(E, \phi) = argmin(MSE(I,I') + \lambda R(E))$$

Fig. 3 - Iterative epitome chart extension.

### Clustering-based methods for fast self-similarities search

We propose two methods to reduce the complexity of the self-similarities search compared to the exhaustive search. The main idea is to group together similar non-overlapping blocks. The lists of matchnig patches are then computed for the group of blocks instead of each block separately. The similarity for grouping non-overlapping blocks is characterized by an assignation threshold $$\varepsilon_A$$, which is smaller than $$\varepsilon_M$$. Thus we define $$\varepsilon_A$$ via a coefficient $$\alpha_A$$ such that $$\varepsilon_A = \alpha_A * \varepsilon_M, 0 \leq \alpha_A < 1$$.

#### List-based method

This method was designed to obtain a simple grouping of similar non-overlapping blocks and follows the steps below:
• For each block $$B_i$$, find all blocks whose distance is below the assignation threshold $$\varepsilon_A$$. The association of $$B_i$$ and such blocks form a potential list $$PL(B_i)$$.
• Find the potential list $$PL_{opt}$$ with the highest cardinal. This potential list is set as an actual list $$AL$$. Blocks in $$AL$$ are removed from the other potential lists they may belong to. If the block $$B$$ used to compute a potential list $$PL(B)$$ is removed from this list, this potential list no longer exists and is simply not considered in future iterations.
• The previous step is iterated until all blocks belong to an actual list.

Finally for each actual list $$AL(B_i)$$ we determine a match list $$ML(B_i) = \{M_{i,0}, M_{i,1}, \dots\}$$ obtained through an exhaustive search in the pixel-grid. The list is computed with respect to $$B_i$$ but is then used by all blocks in $$AL(B_i)$$ for the epitome generation step. To satisfy the constraint that all the blocks have a reconstruction error inferior to $$\varepsilon_M$$, the blocks different from $$B_i$$ in the list $$AL(B_i)$$ only use a subset of $$ML(B_i)$$ defined as: $$\{M \in ML(B_i) | D(B_i,M) \leq \varepsilon_M - \varepsilon_A\}$$

#### Threshold-based clustering

This method proceeds as follows:
• A block $$B_0$$ is randomly selected and used to initialize the first cluster $$C_0$$.
• For every block $$B$$ not assigned to a cluster, compute its distance to the centroid of all existing clusters. If all distances are higher than the assignation threshold $$\varepsilon_A$$, initialize a new cluster with $$B$$ as a seed. Otherwise assign $$B$$ to the closest cluster and recompute the centroid of this cluster as the average of all blocks in the cluster.
• As a result, all blocks are assigned to a cluster. For every cluster, recompute the distances between the cluster blocks and the centroid. If a block $$B$$ is found to have a distance to the centroid superior to $$\varepsilon_A$$, it is considered as a singularity and remove from the cluster. $$B$$ is then used as a seed to initialize a new cluster.

For each cluster $$C_i$$ a match list $$ML(B_i)$$ is then computed through an exhaustive search in the pixel-grid, with respect to the block $$B_i$$ closest to the centroid. Except for the block $$B_i$$, all blocks in $$C_i$$ will use approximate matches. As for the previous grouping method, we want to ensure that all blocks have a reconstruction error inferior to $$\varepsilon_M$$. The blocks different from $$B_i$$ thus only use a subset of $$ML(B_i)$$, as defined in in the equation above.

Fig. 4 - Two steps clusterring-based self-similarities search.

## Experimental Results

Experiments were conducted on a set of 4 images: a frame extracted from the Foreman sequence (CIF), Lena ($$512 \times 512$$), City ($$1280 \times 720$$) and Calendar ($$1280 \times 720$$). The size of the blocks is set to $$8 \times 8$$.
First tests were carried out with $$\varepsilon_M = \{3.0, 5.0, 10.0, 15.0\}$$. The parameter $$\alpha_A$$ was here set to $$0.5$$. Results are displayed in Table 1. Best results between list-based or cluster-based methods are displayed in bold font. The complexity reduction is assessed by the percentage of the optimized memory occupation over the original one, and the optimized method processing time over the original one. Two processing times are displayed: the self-similarities search time, which is the algorithm step actually optimized, and the complete epitome generation time.
The memory occupation is prohibitive for images City and Calendar when $$\varepsilon_M = 15.00$$ with the original method. This shows a very important limitation of the full search method for high resolution images. Because of this, comparative results can not be displayed, but the epitome can be still generated when using optimized methods.

47.85 67.31 70.15 60.12 45.22 55.80 Image $$\varepsilon_M$$ List-based method Cluster-based method Memory load (%) Search time (%) Total time (%) Memory load (%) Search time (%) Total time (%) Foreman 3.0 50.18 68.62 73.93 5.0 49.54 60.72 61.88 10.0 25.00 32.65 50.36 15.0 18.08 21.61 55.25 Lena 3.0 98.18 75.54 78.92 5.0 63.61 61.00 66.19 10.0 29.98 37.49 55.74 15.0 21.64 23.56 64.48 City 3.0 60.54 68.62 69.25 5.0 60.71 64.165 64.160 10.0 51.47 48.07 60.07 Calendar 3.0 84.66 66.44 70.36 68.02 5.0 48.43 54.76 59.94 10.0 53.98 42.43 57.91

The quality of the epitome produced is assessed by the reconstructed image quality. The graphs representing the reconstructed image PSNR as a function of the epitome size are displayed in Fig. 5. For all images, whatever the epitome generation method, the curves are almost identical. The two proposed methods can thus reduce complexity while keeping the same epitome quality.

 Fig. 5 - Reconstruction performances depending on $$\varepsilon_M$$.

Complexity results for different values of $$\alpha_A$$ are displayed in Table 2, with $$\varepsilon_M = 10.00$$. These results illustrates the behavior of the proposed methods, which can reduce drastically the complexity when allowing important approximations. However, when evaluating average reconstruction performances (see Fig. 6), we can see that it has a negative impact, as it can increase the epitome size while decreasing the reconstruction PSNR.

67.86 51.42 58.41 41.39 19.06 28.58 Image $$\alpha_A$$ List-based method Cluster-based method Memory load (%) Search time (%) Total time (%) Memory load (%) Search time (%) Total time (%) Foreman 0.25 70.66 53.17 61.51 0.50 25.00 32.65 50.36 0.75 9.58 21.94 46.74 Lena 0.25 79.99 60.93 72.73 0.50 29.98 37.49 55.74 0.75 15.91 25.48 53.73 City 0.25 86.30 63.30 66.99 0.50 51.47 48.07 59.75 0.75 38.17 40.90 58.83 Calendar 0.25 69.76 51.43 65.30 66.65 0.50 53.98 42.43 57.91 0.75 48.35 36.86 57.94

Fig. 6 - Reconstruction performances depending on $$\alpha_A$$.

## Applications

The concept of epitome presented here was used in different applications such as still image coding and scalable image coding.