inpainting_ct (Operator)
inpainting_ct — Perform an inpainting by coherence transport.
Signature
inpainting_ct(Image, Region : InpaintedImage : Epsilon, Kappa, Sigma, Rho, ChannelCoefficients : )
Description
The operator inpainting_ct inpaints a missing region Region of an image Image by transporting image information from the region's boundary along the coherence direction into this region.
Since this operator's basic concept is inpainting by continuing broken contour lines, the image content and the inpainting region must be such that this idea makes sense. That is, if a contour line hits the region to inpaint at a pixel p, there should be an opposite point q where this contour line continues, so that the continuation of contour lines from two opposite sides can succeed. In cases where there is less geometry in the image, a diffusion-based inpainter, e.g., harmonic_interpolation, may yield better results. Alternatively, Kappa can be set to 0.
An extreme case with little global geometry is pure texture. Here the idea behind this operator fails to produce good results (think of a checkerboard with a region to inpaint that is large relative to the checker fields). For these kinds of images, a texture-based inpainting, e.g., inpainting_texture, can be used instead.
The operator uses a so-called upwind scheme to assign gray values to the missing pixels (see the sketch following this list):
- The order of the pixels to process is given by their Euclidean distance to the boundary of the region to inpaint.
- A new value is computed as a weighted average of already known values within a disc of radius Epsilon around the current pixel. The disc is restricted to already known pixels.
- The size of this scheme's mask depends on Epsilon.
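The following Python sketch illustrates the general structure of such an upwind scheme. It is only a minimal illustration of the concept described above, not the actual implementation of inpainting_ct; the function name upwind_inpaint and the abstract weight(p, q) argument are hypothetical, and a grayscale NumPy image with a boolean inpainting mask is assumed.

import numpy as np
from scipy.ndimage import distance_transform_edt

def upwind_inpaint(image, mask, epsilon, weight):
    # Illustrative sketch only. image: 2D float array; mask: True for pixels to
    # inpaint; weight(p, q): weight of the known pixel q for the target pixel p.
    out = image.copy()
    known = ~mask
    # Process missing pixels in the order of their Euclidean distance to the
    # boundary of the region to inpaint.
    dist = distance_transform_edt(mask)
    targets = np.argwhere(mask)[np.argsort(dist[mask])]
    radius = int(np.ceil(epsilon))
    for r0, c0 in targets:
        acc, wsum = 0.0, 0.0
        # Weighted average over the disc of radius epsilon, restricted to
        # already known pixels.
        for r in range(max(r0 - radius, 0), min(r0 + radius + 1, out.shape[0])):
            for c in range(max(c0 - radius, 0), min(c0 + radius + 1, out.shape[1])):
                if known[r, c] and (r - r0) ** 2 + (c - c0) ** 2 <= epsilon ** 2:
                    w = weight(np.array([r0, c0]), np.array([r, c]))
                    acc += w * out[r, c]
                    wsum += w
        if wsum > 0:
            out[r0, c0] = acc / wsum
            known[r0, c0] = True  # from now on this pixel counts as known ("upwind")
    return out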
The initially used image data comes from a stripe of thickness Epsilon around the region to inpaint. Thus, Epsilon must be at least 1 for the scheme to work, but it should usually be larger. The maximum value for Epsilon depends on the gray values that should be transported into the region. Choosing Epsilon = 5 works well in many cases.
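For illustration, the stripe of initially known image data can be obtained from a distance transform of the region's complement. This is a sketch under the same assumptions as above (boolean mask that is True inside the region to inpaint); it is not part of the operator's interface.

from scipy.ndimage import distance_transform_edt

def initial_stripe(mask, epsilon):
    # Distance of each outside pixel to the region to inpaint.
    dist_to_region = distance_transform_edt(~mask)
    # Known pixels within distance epsilon of the region form the initial data.
    return (~mask) & (dist_to_region <= epsilon)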
Since the goal is to close broken contour lines, the direction of the level lines must be estimated and used in the weight. This estimated direction is called the coherence direction and is computed by means of the structure tensor S:

  u_σ = G_σ * u   and   S = G_ρ * ( D u_σ (D u_σ)^T ),

where * denotes the convolution, u denotes the gray value image, D the derivative, and G_σ, G_ρ Gaussian kernels with standard deviations σ and ρ. These standard deviations are defined by the operator's parameters Sigma and Rho. Sigma should have the size of the noise or of unimportant little objects, which are then suppressed by the pre-smoothing and not considered in the estimation step. Rho gives the size of the window around a pixel that is used for the direction estimation. The coherence direction c is then given by the eigendirection of S with respect to the minimal eigenvalue λ_min, i.e.,

  S c = λ_min c.
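The following sketch shows one possible way to compute such a structure tensor and the coherence direction with NumPy/SciPy for a single-channel float image. It is an illustration of the formulas above under these assumptions, not the implementation used by inpainting_ct.

import numpy as np
from scipy.ndimage import gaussian_filter

def coherence_direction(u, sigma, rho):
    # Pre-smooth the image with a Gaussian of standard deviation sigma.
    u_s = gaussian_filter(u, sigma)
    # Derivatives of the pre-smoothed image.
    du_dr, du_dc = np.gradient(u_s)
    # Structure tensor entries, averaged with a Gaussian window of standard deviation rho.
    s_rr = gaussian_filter(du_dr * du_dr, rho)
    s_rc = gaussian_filter(du_dr * du_dc, rho)
    s_cc = gaussian_filter(du_dc * du_dc, rho)
    # Per-pixel 2x2 structure tensor and its eigendecomposition.
    S = np.stack([np.stack([s_rr, s_rc], -1), np.stack([s_rc, s_cc], -1)], -2)
    eigvals, eigvecs = np.linalg.eigh(S)  # eigenvalues in ascending order
    # Coherence direction: eigenvector belonging to the smallest eigenvalue.
    return eigvecs[..., :, 0]             # shape (rows, cols, 2), unit vectors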
For multichannel or color images, the scheme above is applied to each channel separately, but the weights must be the same for all channels in order to propagate information in the same direction. Since the weight depends on the coherence direction, the common direction is given by the eigendirection of a composite structure tensor. If u_1, ..., u_n denote the n channels of the image, the channel structure tensors S_1, ..., S_n are computed as above and then combined to the composite structure tensor

  S = c_1 S_1 + ... + c_n S_n.

The coefficients c_i are passed in ChannelCoefficients, which is a tuple of length n or of length 1. If the tuple length is 1, the arithmetic mean is used, i.e., c_i = 1/n. If the length of ChannelCoefficients matches the number of channels, the c_i are set to

  c_i = ChannelCoefficients[i] / (ChannelCoefficients[1] + ... + ChannelCoefficients[n])

in order to get a well-defined convex combination. Hence, the ChannelCoefficients must be greater than or equal to zero and their sum must be greater than zero. If the tuple length is neither 1 nor the number of channels, or if the requirement above is not satisfied, the operator returns an error message.
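The normalization of the channel coefficients and the combination of the channel structure tensors can be summarized as in the following sketch (a hypothetical helper; the per-channel tensors are assumed to be computed as in the sketch above):

import numpy as np

def composite_structure_tensor(channel_tensors, channel_coefficients):
    # channel_tensors: list of n per-channel structure tensors, each of shape (rows, cols, 2, 2).
    n = len(channel_tensors)
    if len(channel_coefficients) == 1:
        coeffs = np.full(n, 1.0 / n)          # arithmetic mean
    elif len(channel_coefficients) == n:
        c = np.asarray(channel_coefficients, dtype=float)
        if np.any(c < 0) or c.sum() <= 0:
            raise ValueError("coefficients must be >= 0 and their sum must be > 0")
        coeffs = c / c.sum()                  # convex combination
    else:
        raise ValueError("tuple length must be 1 or the number of channels")
    # Composite structure tensor as the weighted sum of the channel tensors.
    return sum(w * S for w, S in zip(coeffs, channel_tensors))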
The purpose of using ChannelCoefficients other than the arithmetic mean is to adapt to different color codes. The coherence direction is geometric information of the composite image, which is carried by high contrasts such as edges. Thus, the more contrast a channel has, the more geometric information it contains, and consequently the larger its coefficient should be chosen (relative to the others). For RGB images, [0.299, 0.587, 0.114] is a good choice.
The weight in the scheme is the product of a directional component and a distance component. If p is the 2D coordinate vector of the current pixel to be inpainted and q the 2D coordinate vector of a pixel in its neighborhood (the disc restricted to already known pixels), the directional component measures the deviation of the vector p-q from the coherence direction. A large deviation, exponentially scaled by a sharpness parameter, leads to a low directional component, whereas a small deviation leads to a large directional component. This sharpness parameter is controlled by Kappa (in percent). Kappa defines how important it is to propagate information along the coherence direction: a large Kappa yields sharp edges, while a low Kappa allows for more diffusion.
A special case is Kappa = 0: in this case the directional component of the weight is constant (one). The direction estimation step is then skipped to save computational costs, and the parameters Sigma, Rho, and ChannelCoefficients become meaningless, i.e., the propagation of information is not based on the structures visible in the image.
The distance component is 1/|p-q|. Consequently, if q is far away from p, a low distance component is assigned, whereas if it is near to p, a high distance component is assigned.
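A possible form of such a weight is sketched below. The combination of an exponential directional component with the 1/|p-q| distance component follows the description above; the exact scaling and the mapping from Kappa to the exponential sharpness mu are not specified here, so they are illustrative assumptions rather than the operator's actual formula. For mu = 0 the directional component is constant (one), which corresponds to the special case Kappa = 0.

import numpy as np

def transport_weight(p, q, c, mu):
    # p, q: 2D pixel coordinates; c: unit coherence direction at p;
    # mu: sharpness of the directional component (assumed to be derived from Kappa).
    d = p - q
    # Directional component: penalize the deviation of p-q from the coherence
    # direction, measured by the component of p-q orthogonal to c.
    c_perp = np.array([-c[1], c[0]])
    directional = np.exp(-0.5 * (mu * np.dot(c_perp, d)) ** 2)
    # Distance component: nearer known pixels contribute more.
    distance = 1.0 / np.linalg.norm(d)
    return directional * distance

Together with the coherence direction from the structure tensor sketch, such a function could serve as the abstract weight in the upwind scheme sketch above.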
Attention
Note that filter operators may return unexpected results if an image with a reduced domain is used as input. Please refer to the chapter Filters.
Execution Information
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Automatically parallelized on tuple level.
Parameters
Image (input_object) (multichannel-)image(-array) → object (byte / uint2 / real)
Input image.
Region (input_object) region → object
Inpainting region.
InpaintedImage (output_object) (multichannel-)image(-array) → object (byte / uint2 / real)
Output image.
Epsilon (input_control) number → (real)
Radius of the pixel neighborhood.
Default: 5.0
Value range: 1.0 ≤ Epsilon ≤ 20.0
Minimum increment: 1.0
Recommended increment: 1.0
Kappa (input_control) number → (real)
Sharpness parameter in percent.
Default: 25.0
Value range: 0.0 ≤ Kappa ≤ 100.0
Minimum increment: 1.0
Recommended increment: 1.0
Sigma (input_control) number → (real)
Pre-smoothing parameter.
Default: 1.41
Value range: 0.0 ≤ Sigma ≤ 20.0
Minimum increment: 0.001
Recommended increment: 0.01
Rho (input_control) number → (real)
Smoothing parameter for the direction estimation.
Default: 4.0
Value range: 0.001 ≤ Rho ≤ 20.0
Minimum increment: 0.001
Recommended increment: 0.01
ChannelCoefficients (input_control) number(-array) → (real)
Channel weights.
Default: 1
Example (HDevelop)
read_image (Image, 'claudia')
gen_circle (Circle, 333, 164, 35)
inpainting_ct (Image, Circle, InpaintedImage, 15, 25, 1.5, 3, 1.0)
Alternatives
harmonic_interpolation, inpainting_aniso, inpainting_mcf, inpainting_ced, inpainting_texture
References
Folkmar Bornemann, Tom März: “Fast Image Inpainting Based On Coherence Transport”; Journal of Mathematical Imaging and Vision; vol. 28, no. 3; pp. 259-278; 2007.
Module
Foundation