Operator Reference

scene_flow_uncalib (Operator)

scene_flow_uncalib — Compute the uncalibrated scene flow between two stereo image pairs.

Signature

Herror scene_flow_uncalib(const Hobject ImageRect1T1, const Hobject ImageRect2T1, const Hobject ImageRect1T2, const Hobject ImageRect2T2, const Hobject Disparity, Hobject* OpticalFlow, Hobject* DisparityChange, double SmoothingFlow, double SmoothingDisparity, const char* GenParamName, const char* GenParamValue)

Herror T_scene_flow_uncalib(const Hobject ImageRect1T1, const Hobject ImageRect2T1, const Hobject ImageRect1T2, const Hobject ImageRect2T2, const Hobject Disparity, Hobject* OpticalFlow, Hobject* DisparityChange, const Htuple SmoothingFlow, const Htuple SmoothingDisparity, const Htuple GenParamName, const Htuple GenParamValue)

void SceneFlowUncalib(const HObject& ImageRect1T1, const HObject& ImageRect2T1, const HObject& ImageRect1T2, const HObject& ImageRect2T2, const HObject& Disparity, HObject* OpticalFlow, HObject* DisparityChange, const HTuple& SmoothingFlow, const HTuple& SmoothingDisparity, const HTuple& GenParamName, const HTuple& GenParamValue)

HImage HImage::SceneFlowUncalib(const HImage& ImageRect2T1, const HImage& ImageRect1T2, const HImage& ImageRect2T2, const HImage& Disparity, HImage* DisparityChange, const HTuple& SmoothingFlow, const HTuple& SmoothingDisparity, const HTuple& GenParamName, const HTuple& GenParamValue) const

HImage HImage::SceneFlowUncalib(const HImage& ImageRect2T1, const HImage& ImageRect1T2, const HImage& ImageRect2T2, const HImage& Disparity, HImage* DisparityChange, double SmoothingFlow, double SmoothingDisparity, const HString& GenParamName, const HString& GenParamValue) const

HImage HImage::SceneFlowUncalib(const HImage& ImageRect2T1, const HImage& ImageRect1T2, const HImage& ImageRect2T2, const HImage& Disparity, HImage* DisparityChange, double SmoothingFlow, double SmoothingDisparity, const char* GenParamName, const char* GenParamValue) const

HImage HImage::SceneFlowUncalib(const HImage& ImageRect2T1, const HImage& ImageRect1T2, const HImage& ImageRect2T2, const HImage& Disparity, HImage* DisparityChange, double SmoothingFlow, double SmoothingDisparity, const wchar_t* GenParamName, const wchar_t* GenParamValue) const   ( Windows only)

def scene_flow_uncalib(image_rect_1t1: HObject, image_rect_2t1: HObject, image_rect_1t2: HObject, image_rect_2t2: HObject, disparity: HObject, smoothing_flow: Union[float, int], smoothing_disparity: Union[float, int], gen_param_name: MaybeSequence[str], gen_param_value: MaybeSequence[Union[int, float, str]]) -> Tuple[HObject, HObject]

Description

scene_flow_uncalib computes the uncalibrated scene flow between two consecutive rectified stereo image pairs. The scene flow is the three-dimensional position and motion of surface points in a dynamic scene. The movement in the images can be caused by objects that move in the world or by a movement of the camera (or both) between the acquisition of the two image pairs. To calculate the calibrated scene flow, scene_flow_calib can be used.

The two consecutive stereo image pairs of the image sequence are passed in ImageRect1T1, ImageRect2T1, ImageRect1T2, and ImageRect2T2. Each stereo image pair must be rectified. Note that the images can be rectified by using the operators calibrate_cameras, gen_binocular_rectification_map, and map_image. Furthermore, a single-channel Disparity image is required, which specifies for each pixel (r,c1) of the image ImageRect1T1 a matching pixel (r,c2) of ImageRect2T1 according to the equation c2 = c1 + d(r,c1), where d(r,c) is the Disparity at pixel (r,c). The disparity image can be computed using binocular_disparity or binocular_disparity_mg.
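The following minimal sketch shows a typical call sequence using the HALCON/Python interface. It assumes the package is imported as 'import halcon as ha', that the input images are already rectified, and it uses hypothetical file names; the binocular_disparity parameter values are purely illustrative.

import halcon as ha

# Rectified stereo pair at time t1 and at time t2 (hypothetical file names).
rect1_t1 = ha.read_image('cam1_t1_rect')
rect2_t1 = ha.read_image('cam2_t1_rect')
rect1_t2 = ha.read_image('cam1_t2_rect')
rect2_t2 = ha.read_image('cam2_t2_rect')

# Disparity between the rectified images of the first pair
# (method and search range are illustrative values).
disparity, score = ha.binocular_disparity(
    rect1_t1, rect2_t1,
    'ncc', 21, 21, 0.0, -40, 40, 1, 0.5, 'none', 'none')

# Uncalibrated scene flow between the two stereo pairs.
optical_flow, disparity_change = ha.scene_flow_uncalib(
    rect1_t1, rect2_t1, rect1_t2, rect2_t2, disparity,
    40.0, 40.0, 'default_parameters', 'accurate')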

The computed uncalibrated scene flow is returned in OpticalFlow and DisparityChange. The vectors in the vector field OpticalFlow represent the movement in the image plane between ImageRect1T1 and ImageRect1T2. The single-channel image DisparityChange describes the change in disparity between ImageRect2T1 and ImageRect2T2. A world point that is projected into ImageRect1T1 at position (r,c) is projected into ImageRect2T1 at (r, c+d(r,c)), into ImageRect1T2 at (r+u(r,c), c+v(r,c)), and into ImageRect2T2 at (r+u(r,c), c+v(r,c)+d(r,c)+dc(r,c)), where u(r,c) and v(r,c) denote the values of the row and column components of the vector field image OpticalFlow, d(r,c) denotes the Disparity, and dc(r,c) the DisparityChange at the pixel (r,c).

Figure: Relations between the four images and the optical flow as well as the disparities in the disparity images.
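Continuing the sketch above, these correspondences can be evaluated for a single pixel by decomposing the outputs. The calls vector_field_to_real and get_grayval are assumed from the HALCON/Python interface; exact return conventions may differ slightly.

r, c = 100, 200  # an arbitrary pixel inside the domain of ImageRect1T1

# Split the vector field into its row (u) and column (v) component images.
flow_row, flow_col = ha.vector_field_to_real(optical_flow)
u = ha.get_grayval(flow_row, [r], [c])[0]
v = ha.get_grayval(flow_col, [r], [c])[0]
d = ha.get_grayval(disparity, [r], [c])[0]
dc = ha.get_grayval(disparity_change, [r], [c])[0]

print('in ImageRect2T1:', (r, c + d))
print('in ImageRect1T2:', (r + u, c + v))
print('in ImageRect2T2:', (r + u, c + v + d + dc))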

Parameter Description

The rectified input images are passed in ImageRect1T1, ImageRect2T1, ImageRect1T2, and ImageRect2T2. The computation of the scene flow is performed on the domain of ImageRect1T1, which is also the domain of the scene flow in OpticalFlow and DisparityChange. Disparity describes the disparity between the rectified images ImageRect1T1 and ImageRect2T1.

SmoothingFlow and SmoothingDisparity specify the regularization weights α and β with respect to the data term. The larger the value of these parameters, the smoother the computed scene flow is. For byte images with a gray value range of [0, 255], values around 40 typically yield good results.

The parameters of the iteration scheme and of the coarse-to-fine warping strategy can be specified with the generic parameters GenParamName and GenParamValue.

Usually, it is sufficient to use one of the default parameter sets by setting GenParamName = 'default_parameters' and GenParamValue = 'very_accurate', 'accurate', 'fast', or 'very_fast'. If necessary, individual parameters can be modified after the default parameter set has been chosen by specifying a subset of the parameters and corresponding values after 'default_parameters' in GenParamName and GenParamValue (e.g., GenParamName = ['default_parameters','warp_zoom_factor'] and GenParamValue = ['accurate',0.6]). The meaning of the individual parameters is described in detail below. The default parameter sets are given by:

'default_parameters'   'very_accurate'   'accurate'   'fast'   'very_fast'
'warp_zoom_factor'     0.75              0.5          0.5      0.5
'warp_levels'          0                 0            0        0
'warp_last_level'      1                 1            1        2
'outer_iter'           10                7            5        4
'inner_iter'           2                 2            2        2
'sor_iter'             3                 3            3        3
'omega'                1.9               1.9          1.9      1.9

If the parameters should be specified individually, GenParamName and GenParamValue must be set to tuples of the same length. The values corresponding to the parameters specified in GenParamName must be specified at the corresponding position in GenParamValue. For a deeper understanding of the following parameters, please refer to the section Algorithm below. A code sketch follows after this list.

  • GenParamName = 'warp_zoom_factor' can be used to specify the resolution ratio between two consecutive warping levels in the coarse-to-fine warping hierarchy. 'warp_zoom_factor' must be selected from the open interval (0,1). For performance reasons, 'warp_zoom_factor' is typically set to 0.5, i.e., the number of pixels is halved in each direction for each coarser warping level. Values for 'warp_zoom_factor' close to 1 can lead to slightly better results. However, they require a disproportionately larger computation time.

  • GenParamName = 'warp_levels' can be used to restrict the warping hierarchy to a maximum number of levels. For 'warp_levels' = 0, the largest possible number of levels is used. If the image size does not permit the specified number of levels (taking the resolution ratio 'warp_zoom_factor' into account), the largest possible number of levels is used. Usually, 'warp_levels' should be set to 0.

  • GenParamName = 'warp_last_level' can be used to specify the number of warping levels for which the flow increment should no longer be computed. Usually, 'warp_last_level' is set to 1 or 2, i.e., a flow increment is computed for each warping level, or the finest warping level is skipped in the computation. In the latter case, the computation is performed on an image of half the resolution of the original image and then interpolated to the full resolution.

  • GenParamName = 'outer_iter' can be used to specify the number of outer iterations in the minimization scheme. Typically, the larger 'outer_iter', the more accurate the numerical results are. Higher values for this parameter lead to an increase in the computation time. Typically, 'outer_iter' is set to values between 5 and 10.

  • GenParamName = 'inner_iter' can be used to specify the number of inner iterations in the minimization scheme. Typically, the larger 'inner_iter', the more accurate the numerical results are. Higher values for this parameter lead to an increase in the computation time. Usually, two inner iterations are sufficient.

  • GenParamName = 'sor_iter' can be used to specify the number of SOR iterations for solving the linear system of equations. Typically, the larger 'sor_iter', the more accurate the numerical results are. Higher values for this parameter lead to an increase in the computation time. Usually, three SOR iterations are sufficient.

  • GenParamName = 'omega' can be used to specify the relaxation factor of the SOR method. 'omega' must be selected from the open interval (1,2). Typically, 'omega' is set to 1.9.
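The following sketch shows how the generic parameters can be specified individually, with GenParamName and GenParamValue as tuples of the same length. It assumes the HALCON/Python interface and the input objects from the earlier sketch; the values correspond to the 'accurate' preset from the table above.

gen_param_name = ['warp_zoom_factor', 'warp_levels', 'warp_last_level',
                  'outer_iter', 'inner_iter', 'sor_iter', 'omega']
gen_param_value = [0.5, 0, 1, 7, 2, 3, 1.9]

optical_flow, disparity_change = ha.scene_flow_uncalib(
    rect1_t1, rect2_t1, rect1_t2, rect2_t2, disparity, 40.0, 40.0,
    gen_param_name, gen_param_value)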

Algorithm

The scene flow is estimated by minimizing a suitable energy functional E(f) = E_Data(f) + E_Smooth(f), where f = (u,v,dc) comprises the optical flow field (u,v) and the disparity change dc. E_Data denotes the data term and E_Smooth the smoothness (regularization) term. The algorithm is based on the following assumptions, which lead to the data and smoothness terms:

Brightness Constancy

It is assumed that the gray value of a point remains constant in all four input images, resulting in the following four constraints:

  I1t1(r,c)                = I2t1(r, c+d(r,c))
  I1t1(r,c)                = I1t2(r+u(r,c), c+v(r,c))
  I2t1(r, c+d(r,c))        = I2t2(r+u(r,c), c+v(r,c)+d(r,c)+dc(r,c))
  I1t2(r+u(r,c), c+v(r,c)) = I2t2(r+u(r,c), c+v(r,c)+d(r,c)+dc(r,c))

Here, I1t1, I2t1, I1t2, and I2t2 denote ImageRect1T1, ImageRect2T1, ImageRect1T2, and ImageRect2T2, respectively.

Piecewise smoothness of the scene flow

The solution is assumed to be piecewise smooth. This smoothness is achieved by penalizing the first derivatives ∇u, ∇v, and ∇dc of the flow. The use of a statistically robust (linear) penalty function Ψ(s²) = √(s² + ε²) with a small positive constant ε provides the desired preservation of edges in the movement in the scene flow to be determined.

Because the disparity image d is given, the first constraint can be omitted. Taking into account all of the above assumptions, the energy functional can be written as

  E(u,v,dc) = ∫ [ Ψ( (I1t2(r+u,c+v) - I1t1(r,c))² )
                + Ψ( (I2t2(r+u,c+v+d+dc) - I2t1(r,c+d))² )
                + Ψ( (I2t2(r+u,c+v+d+dc) - I1t2(r+u,c+v))² )
                + α Ψ( |∇u|² + |∇v|² ) + β Ψ( |∇dc|² ) ] dr dc

where α and β are the regularization parameters passed in SmoothingFlow and SmoothingDisparity.

To calculate large displacements, coarse-to-fine warping strategies use two concepts that are closely interlocked: The successive refinement of the problem (coarse-to-fine) and the successive compensation of the current image pair by already computed displacements (warping). Algorithmically, such coarse-to-fine warping strategies can be described as follows:

  1. First, all images are zoomed down to a very coarse resolution level.

  2. Then, the scene flow is computed on this coarse resolution.

  3. The scene flow is required on the next resolution level: It is applied there to the second image pair of the image sequence, i.e., the problem on the finer resolution level is compensated by the already computed scene flow. This step is also known as warping.

  4. The modified problem (difference problem) is now solved on the finer resolution level, i.e., the scene flow is computed there.

  5. The steps 3-4 are repeated until the finest resolution level is reached.

  6. The final result is computed by adding up the scene flow from all resolution levels.

This incremental computation of the scene flow has the following advantage: While the coarse-to-fine strategy ensures that the displacements on the coarsest resolution level are very small, the warping strategy ensures that the incremental displacements (the scene flow of the difference problems) remain small on every level. Since small displacements can be computed much more accurately than larger displacements, the accuracy of the results typically increases significantly by using such a coarse-to-fine warping strategy. However, instead of having to solve a single correspondence problem, an entire hierarchy of these problems must be solved.
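The following schematic sketch illustrates only the control flow of such a coarse-to-fine warping strategy (steps 1-6 above). It is not the operator's internal implementation: the solver, warping, and interpolation steps are replaced by trivial stand-in functions so that the structure remains runnable.

import numpy as np

def zoom(img, factor):
    # Stand-in for image zooming: nearest-neighbor subsampling.
    step = max(1, int(round(1.0 / factor)))
    return img[::step, ::step]

def warp(img, flow):
    # Stand-in for compensating an image by the current flow (identity here).
    return img

def solve_increment(img1, img2):
    # Stand-in for solving the linearized difference problem on one level.
    return np.zeros(img1.shape + (2,))

def upsample_flow(flow, shape):
    # Stand-in for interpolating (and rescaling) the flow to a finer grid.
    return np.zeros(shape + (2,))

def coarse_to_fine_flow(img1, img2, warp_zoom_factor=0.5, warp_levels=4):
    # Step 1: zoom all images down to a very coarse resolution (build pyramids).
    pyr1, pyr2 = [img1], [img2]
    for _ in range(warp_levels - 1):
        pyr1.append(zoom(pyr1[-1], warp_zoom_factor))
        pyr2.append(zoom(pyr2[-1], warp_zoom_factor))
    # Step 2: compute the flow on the coarsest level.
    flow = solve_increment(pyr1[-1], pyr2[-1])
    # Steps 3-5: warp, solve the difference problem, accumulate, repeat.
    for level in range(warp_levels - 2, -1, -1):
        flow = upsample_flow(flow, pyr1[level].shape)
        compensated = warp(pyr2[level], flow)
        increment = solve_increment(pyr1[level], compensated)
        # Step 6: the final result is the accumulated flow over all levels.
        flow = flow + increment
    return flow

flow = coarse_to_fine_flow(np.zeros((64, 64)), np.zeros((64, 64)))
print(flow.shape)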

The minimization of functionals is mathematically very closely related to the minimization of functions: Just as a zero crossing of the first derivative is a necessary condition for the minimum of a function, the fulfillment of the so-called Euler-Lagrange equations is a necessary condition for the minimizing function of a functional (the minimizing function corresponds to the desired scene flow in this case). The Euler-Lagrange equations are partial differential equations. Discretizing these Euler-Lagrange equations using finite differences yields large sparse nonlinear systems of equations that have to be solved in this algorithm.

For each warping level a single equation system must be solved. The algorithm uses an iteration scheme consisting of two nested iterations (called the outer and inner iteration) and the SOR (Successive Over-Relaxation) method. The outer loop contains the linearization of the nonlinear terms resulting from the data constraints. The nonlinearity of the robust penalty function Ψ is removed by the inner fixed point iteration scheme. The resulting linear system of equations can be solved efficiently by the SOR method.
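As a generic illustration of the roles of 'omega' and 'sor_iter', the following textbook-style SOR sketch solves a small linear system A x = b; it is not the operator's internal solver.

import numpy as np

def sor(A, b, omega=1.9, sor_iter=3, x0=None):
    # Generic SOR solver: omega in (1,2) over-relaxes the Gauss-Seidel
    # update; sor_iter is the number of sweeps over all unknowns.
    x = np.zeros(len(b)) if x0 is None else np.array(x0, dtype=float)
    for _ in range(sor_iter):
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gauss_seidel = (b[i] - sigma) / A[i, i]
            x[i] = (1.0 - omega) * x[i] + omega * gauss_seidel
    return x

# Tiny usage example on a symmetric positive definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(sor(A, b, omega=1.9, sor_iter=25))  # approaches [1/11, 7/11]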

Execution Information

  • Multithreading type: reentrant (runs in parallel with non-exclusive operators).
  • Multithreading scope: global (may be called from any thread).
  • Automatically parallelized on internal data level.

Parameters

ImageRect1T1 (input_object)  singlechannel image(-array) → object (byte / uint2 / real)

Input image 1 at time t1.

ImageRect2T1 (input_object)  singlechannel image(-array) → object (byte / uint2 / real)

Input image 2 at time t1.

ImageRect1T2 (input_object)  singlechannel image(-array) → object (byte / uint2 / real)

Input image 1 at time t2.

ImageRect2T2 (input_object)  singlechannel image(-array) → object (byte / uint2 / real)

Input image 2 at time t2.

Disparity (input_object)  singlechannel image(-array) → object (real)

Disparity between input images 1 and 2 at time t1.

OpticalFlow (output_object)  singlechannel image(-array) → object (vector_field)

Estimated optical flow.

DisparityChange (output_object)  singlechannel image(-array) → object (real)

Estimated change in disparity.

SmoothingFlow (input_control)  number → (real / integer)

Weight of the regularization term relative to the data term (derivatives of the optical flow).

Default: 40.0

Suggested values: 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0

Restriction: SmoothingFlow > 0.0

SmoothingDisparity (input_control)  number → (real / integer)

Weight of the regularization term relative to the data term (derivatives of the disparity change).

Default: 40.0

Suggested values: 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0

Restriction: SmoothingDisparity > 0.0

GenParamName (input_control)  attribute.name(-array) → (string)

Parameter name(s) for the algorithm.

Default: 'default_parameters'

Suggested values: 'default_parameters', 'warp_levels', 'warp_zoom_factor', 'warp_last_level', 'outer_iter', 'inner_iter', 'sor_iter', 'omega'

GenParamValue (input_control)  attribute.value(-array) → (string / integer / real)

Parameter value(s) for the algorithm.

Default: 'accurate'

Suggested values: 'very_accurate', 'accurate', 'fast', 'very_fast', 0, 1, 2, 3, 4, 5, 6, 0.5, 0.6, 0.7, 0.75, 3, 5, 7, 2, 3, 1.9

Result

If the parameter values are correct, the operator scene_flow_uncalib returns the value 2 (H_MSG_TRUE). If the input is empty (no input images are available), the behavior can be set via set_system('no_object_result',<Result>). If necessary, an exception is raised.

Possible Predecessors

binocular_disparity, binocular_disparity_mg

Possible Successors

threshold, vector_field_length
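A typical post-processing sketch using these successors, assuming the HALCON/Python interface and the outputs of the earlier example; the threshold value of 1.0 pixel is illustrative.

# Segment pixels with significant image motion.
flow_length = ha.vector_field_length(optical_flow, 'length')
moving_region = ha.threshold(flow_length, 1.0, 1e30)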

Alternatives

scene_flow_calib, optical_flow_mg

References

A. Wedel, C. Rabe, T. Vaudrey, T. Brox, U. Franke and D. Cremers: “Efficient dense scene flow from sparse or dense stereo data”; In: Proceedings of the 10th European Conference on Computer Vision: Part I, pages 739-751. Springer-Verlag, 2008.

Module

Foundation