Total Variation Image Restoration

  • The image restoration problem

  • Discretization

  • SOCP formulation

The image restoration problem

Functional problem statement

Digital images always contain noise. In image restoration, the problem is to filter out the noise. Early methods were based on least-squares, but the solutions exhibited the "ringing" phenomenon, with spurious oscillations near edges in the restored image. To address this phenomenon, one may add to the objective of the least-squares problem a term which penalizes the variations in the image.

We may represent a given (noisy) image as a function from the square C := [0,1] \times [0,1] to \mathbf{R}. We define the image restoration problem as that of minimizing, over functions \hat{f} : C \rightarrow \mathbf{R}, the objective

 \int_C \| \nabla \hat{f}(x) \|_2 \, dx + \lambda \int_C (\hat{f}(x) - f(x))^2 \, dx,

where the function \hat{f} is our estimate. The first term penalizes functions which exhibit large variations, while the second term accounts for the distance from the estimate to the noisy image, f.

The above is an infinite-dimensional problem, in the sense that the variable is a function, not a finite-dimensional vector.

Discretization

We can discretize the square with a square grid, as follows:

 x_{ij} = \left( \begin{array}{c} \frac{i}{K} \\ \frac{j}{K} \end{array} \right), \;\; 0 \le i,j \le K.

We represent the data of our problem, f, as a matrix F \in \mathbf{R}^{(K+1) \times (K+1)}, with elements F_{ij} = f(x_{ij}). Similarly, we represent the variable \hat{f} of our problem with a (K+1) \times (K+1) matrix \hat{F}, which contains the values of \hat{f} at the grid points x_{ij}.
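As a concrete illustration, here is a minimal NumPy sketch of this discretization. The grid size K, the "bump" test image and the noise level are illustrative assumptions, not part of the original text; F plays the role of the data matrix above.

```python
import numpy as np

K = 64                                   # grid resolution (illustrative choice)

# Grid points x_ij = (i/K, j/K), 0 <= i, j <= K
i, j = np.meshgrid(np.arange(K + 1), np.arange(K + 1), indexing="ij")
X = np.stack([i / K, j / K], axis=-1)    # shape (K+1, K+1, 2)

# A hypothetical noisy image sampled on the grid: a smooth bump plus Gaussian noise
rng = np.random.default_rng(0)
clean = np.exp(-20.0 * ((X[..., 0] - 0.5) ** 2 + (X[..., 1] - 0.5) ** 2))
F = clean + 0.1 * rng.standard_normal(clean.shape)   # data matrix, F_ij = f(x_ij)
```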

To approximate the gradient, we start from the first-order expansion of a function of two variables, valid for a small increment h (written here for our estimate \hat{f}):

 \frac{\partial \hat{f}}{\partial x}(x,y) \approx \frac{1}{h} \left( \hat{f}(x+h,y) - \hat{f}(x,y) \right).

Applying this at a grid point, with the increment set to h = 1/K, we obtain that the gradient of \hat{f} at that point can be approximated as

 \nabla \hat{f}(x_{ij}) \approx G_{ij} := \left( \begin{array}{c} K( \hat{F}_{i+1,j} - \hat{F}_{i,j}) \\ K( \hat{F}_{i,j+1} - \hat{F}_{i,j}) \end{array} \right), \;\; 0 \le i,j \le K,

with the convention that the terms involved are zero on the boundary (that is, the first component is set to zero when i = K, and the second when j = K).
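The following NumPy sketch computes the discrete gradient field G with this boundary convention; it assumes the matrix F and grid size K from the previous snippet, and the name discrete_gradient is introduced here purely for illustration.

```python
import numpy as np

def discrete_gradient(Fhat, K):
    """Forward-difference approximation G_ij of the gradient, shape (K+1, K+1, 2).

    Differences that would reach outside the grid (i = K for the first
    component, j = K for the second) are set to zero, per the convention above.
    """
    G = np.zeros(Fhat.shape + (2,))
    G[:-1, :, 0] = K * (Fhat[1:, :] - Fhat[:-1, :])   # K (Fhat_{i+1,j} - Fhat_{i,j})
    G[:, :-1, 1] = K * (Fhat[:, 1:] - Fhat[:, :-1])   # K (Fhat_{i,j+1} - Fhat_{i,j})
    return G

# Example: total-variation term of the noisy data F itself
G = discrete_gradient(F, K)
tv_of_data = np.sum(np.linalg.norm(G, axis=-1)) / K ** 2
```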

SOCP formulation

The discretized version of our problem is thus

 \min_{\hat{F}} \: \frac{1}{K^2} \sum_{0 \le i,j \le K} \left( \left\| \left( \begin{array}{c} K( \hat{F}_{i+1,j} - \hat{F}_{i,j}) \\ K( \hat{F}_{i,j+1} - \hat{F}_{i,j}) \end{array} \right) \right\|_2 + \lambda (\hat{F}_{ij} - F_{ij})^2 \right).
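Although the objective above mixes Euclidean norms and squares, it is readily cast as an SOCP: introduce, for each grid point, an epigraph variable t_{ij} with the second-order cone constraint \| G_{ij} \|_2 \le t_{ij}, and a variable u_{ij} with (\hat{F}_{ij} - F_{ij})^2 \le u_{ij} (itself expressible as a second-order cone constraint); the objective then becomes the linear function \frac{1}{K^2} \sum_{i,j} (t_{ij} + \lambda u_{ij}).

The sketch below is one possible implementation, using the cvxpy modeling package (an illustrative choice, not part of the original text). It states the problem in its natural form and leaves the conic reformulation to the modeling layer; the data matrix F and grid size K are assumed to come from the earlier snippets, and the value of lam is illustrative.

```python
import cvxpy as cp
import numpy as np

lam = 10.0                           # regularization weight lambda (illustrative)
Fhat = cp.Variable(F.shape)          # the restored image, (K+1) x (K+1)

# Forward differences K(Fhat_{i+1,j} - Fhat_{i,j}) and K(Fhat_{i,j+1} - Fhat_{i,j}),
# padded with zeros so that the boundary convention (terms vanish at i = K or j = K) holds.
Gx = cp.vstack([K * (Fhat[1:, :] - Fhat[:-1, :]), np.zeros((1, K + 1))])
Gy = cp.hstack([K * (Fhat[:, 1:] - Fhat[:, :-1]), np.zeros((K + 1, 1))])

# Total-variation term: sum over grid points of the Euclidean norm of G_ij
tv = cp.sum(cp.norm(cp.vstack([cp.vec(Gx), cp.vec(Gy)]), 2, axis=0))

# Fidelity term: lambda * sum_ij (Fhat_ij - F_ij)^2
fidelity = lam * cp.sum_squares(Fhat - F)

prob = cp.Problem(cp.Minimize((tv + fidelity) / K ** 2))
prob.solve()                         # the solver receives the conic (SOCP) form
F_restored = Fhat.value
```

When the problem is passed to a conic solver, the norm and squared terms are converted automatically into the epigraph variables and second-order cone constraints described above, so the solved problem is exactly an SOCP.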

Examples