Update bundled version of libmv

- Ensures the fix for msvc2012 applies correctly.
- Some code cleanup to match libmv's code style.
- Do not add points which were intersected
  behind the camera to the reconstruction.
- Includes changes needed for keyframe selection.
This commit is contained in:
Sergey Sharybin 2013-05-12 17:06:00 +00:00
parent c3b1f0fa20
commit 5637b0d39b
10 changed files with 423 additions and 294 deletions

extern/libmv/ChangeLog vendored

@ -1,3 +1,151 @@
commit f003b9e3031db4592c2d91b1ea2538c73b7e767d
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Sun May 12 22:34:54 2013 +0600
Cleanup in simple pipeline's bundler
- Better match Google's code style conventions.
- Move evaluation part into own function, makes
bundling itself easier to follow.
- Made evaluation an optional parameter.
- Removed note about unsupported camera intrinsics
refining flags. Technically, all combinations
are possible.
commit f0e68f69e5c5f0fd82334246d382e59f1eb20164
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Sun May 12 22:19:31 2013 +0600
Remove check for zero focal length in BA cost functor
This check is actually redundant, because empty intrinsics
will have a focal length of 1.0, which means the original comment
about BundleIntrinsics was not true.
It is possible that an external user will send a focal length of
zero to be refined, but Blender prevents this from happening.
commit 7ed5e4da65d2c0df63a08b1e1f4b4de1855f1bf0
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Sat May 11 20:33:54 2013 +0600
Fix compilation error with msvc2012
Using change from glog's upstream for this.
commit 7e162266f96abc25d80e2352cd77f21ed93593b7
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Sat May 11 19:50:57 2013 +0600
Style cleanup, mainly pointed by Sameer in Ceres's codereview
commit 42da053c6410b4f3fb13798c7e9c5f4a861b6825
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Fri May 10 18:30:40 2013 +0600
Keyframe selection improvements
Added an additional criterion, which ignores a
keyframe pair if its successful-intersection factor
is lower than the current candidate pair's factor.
This solves an issue where a keyframe pair at which
most of the tracks intersect behind the
camera was accepted (because variance in this
case is really small).
Also tweaked the generalized inverse function,
which now doesn't scale epsilon by the maximal
matrix element. This gave issues with really bad
candidates with unstable covariance: in that
case almost all eigenvalues were getting zeroed
on inverse, leading to a small expected error.
commit f3eb090f7240f86799099fe86ce9386eb2bd3007
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Fri May 10 17:59:40 2013 +0600
Keyframe selection code cleanup
- Updated comments in code.
- Removed currently unused functions.
Actually, they'd be better moved to some generic
logging module, but we don't have one now, so I was
too lazy to create one. This could happen later using
code from git history.
- Renamed function to match better to what's going
on in it.
commit b917b48bd877eedd17dec907cacf0b27a36e717d
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Fri May 10 17:44:49 2013 +0600
Optimization for reconstruction variance
Achieved by replacing SVD-based pseudo-inverse with
an eigen solver pseudo inverse.
New function works in the same way: it decomposes
matrix to V*D*V^-1, then inverts diagonal of D
and composes matrix back.
The same way is used to deal with gauges - last
7 eigen values are getting zeroed.
In our own tests this gives an approx 2x boost, without
visible effect on selected keyframe quality.
commit 041b4b54fff66311347a307a5922c2516c76ee44
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Thu Mar 14 14:53:42 2013 +0600
Initial commit of reconstruction variance criteria
which is an addition for GRIC-based keyframe selection.
Uses paper Keyframe Selection for Camera Motion and Structure
Estimation from Multiple Views,
ftp://ftp.tnt.uni-hannover.de/pub/papers/2004/ECCV2004-TTHBAW.pdf
as a basis.
Currently implemented: camera position reconstruction,
bundle position estimation and a bundle adjustment step.
The covariance matrix is estimated using a generalized inverse
with 7 (by the number of gauge freedoms) zeroed eigen values
of J^T * J.
Additional changes:
- Added utility function FundamentalToEssential to extract
E from F matrix, used by both final reconstruction pipeline
and reconstruction variance code.
- Refactored bundler a bit, so now it's possible to return
different evaluation data, such as number of cameras and
points being minimized and also jacobian.
Jacobian currently contains only camera and points columns,
no intrinsics there yet. It is also currently converted to
an Eigen dense matrix. A bit weak, but the speed is nice for
tests.
Columns in jacobian are ordered in the following way:
first columns are cameras (3 cols for rotation and 3 cols
for translation), then goes 3D point columns.
- Switched F and H refining to normalized space. Apparently,
refining F in pixel space squeezes it a lot making it wrong.
- EuclideanIntersect will not add point to reconstruction if
it happened to be behind the camera.
- Cleaned style a bit.
commit 94c4654f1145f86779bd0a7e8cda1fd2c751d27d
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Fri May 10 13:27:21 2013 +0600
Left extra debugging print in reconstruction scale by accident.
commit 3886b488575ec5e79debce7b9f2e9824f5297c0f
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Fri May 10 12:23:03 2013 +0600
@ -493,157 +641,3 @@ Date: Fri Mar 1 17:33:27 2013 +0600
Use threaded cost function, jacobian and linear solver
computation, so bundling is as fast as it could be with
current parameter block structure.
commit 931fe37a10212b91b525d4f6eb753990a338b471
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Fri Mar 1 17:29:21 2013 +0600
Fixed comment for euclidean bundling,
which now supports radial bundling independently
of other intrinsics.
commit 217d8e6edc3de1a853fb84275d2d2dd898e7529c
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Tue Feb 26 18:19:01 2013 +0600
Allow K1, K2 refinement combination
It is now possible to refine only radial distortion
with new Ceres based bundler and this new combination
is already used in Blender.
commit d8850addc944d400f7a9c358396c437d9e4acc70
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Tue Feb 26 18:17:09 2013 +0600
Switch euclidean intersection code to use Ceres
Would not expect any significant changes in solver
behavior, but it could be more accurate in some cases.
Switching projective intersection to ceres is marked
as a TODO for now.
commit 6990b7946ec96b3cb2dcfc8a1beaaba9538b0802
Author: Keir Mierle <mierle@gmail.com>
Date: Mon Feb 25 20:00:48 2013 +0000
Switch motion tracker bundle adjustment to Ceres.
Patch originally written by me, then finished by Sergey. Big
thanks to Sergey for troopering through and fixing the many issues
with my original (not compilable) patch.
The Ceres implementation uses 2 parameter blocks for each camera
(1 for rotation and 1 for translation), 1 parameter block for
common intrinsics (focal length etc) and 1 parameter block for
each track (e.g. bundle or 3D point).
We turn on some fancy optimizer options to get better performance,
in particular:
options.preconditioner_type = ceres::SCHUR_JACOBI;
options.linear_solver_type = ceres::ITERATIVE_SCHUR;
options.use_inner_iterations = true;
options.use_nonmonotonic_steps = true;
options.max_num_iterations = 100;
Special thanks to Sameer Agarwal of Ceres fame for splitting out
the SCHUR_JACOBI preconditioner so that it didn't depend on
CHOLMOD. Previously we could not use that preconditioner in
Blender because CHOLMOD is too large of a dependency for Blender.
BundleIntrinsicsLogMessage:
- Moved bunch of if(foo) LG << "bar" into this function, to make
EuclideanBundleCommonIntrinsics a little bit easier to follow.
EuclideanBundle:
- Fix RMSE logging.
commit 1696342954614b54133780d74d6ee0fbcbe224f0
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Tue Feb 26 18:10:33 2013 +0600
Upgrade ceres to latest upstream version
This is needed because of some features of new Ceres
for further development.
commit 575336f794841ada90aacd783285014081b8318c
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Mon Jan 7 15:58:40 2013 +0600
Fixes for keyframe selection
- Calculate residuals for GRIC in pixel space rather than
in normalized space.
This seems to be how it's intended to be used.
Algebraic H and F will still use normalized coordinates, which
are more stable; after this the matrices are converted to pixel
space and Ceres refinement happens in pixel space.
- The standard deviation calculation was wrong in GRIC. It shouldn't
be the deviation of residuals; as per Torr it should be the deviation
of the measurement error, which is constant (in our case 0.1).
Not sure if using squared cost function is correct for GRIC,
but cost function is indeed squared and in most papers cost
function is used for GRIC. After some further tests we could
switch GRIC residuals to non-squared distance.
- Brought back the rho part of GRIC; in unit tests it makes no
difference whether it's enabled or not, let's see how it'll behave
in real-life footage.
- Added one more unit test based on elevator scene and manual
keyframe selection.
commit 24117f3c3fc5531beb6497d79bb6f1780a998081
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Sun Jan 6 19:07:06 2013 +0600
Added test for keyframe selection based on manual selection
Additional changes:
- Reduce minimal correspondence to match real-world manually
tracked footage.
- Returned squares back to SymmetricEpipolarDistance and
SymmetricGeometricDistance -- these are actually cost
functions, not distances, and they shall be squared.
commit 770eb0293b881c4c419c587a6cdb062c47ab6e44
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Fri Dec 21 00:43:30 2012 +0600
Improvements for keyframe selection
- Changed the main keyframe selection cycle, so in cases where no
further keyframe for the current keyframe can be found in the
image sequence, the current keyframe is moved forward and the
search continues.
This helps in cases where there's poor motion at the beginning
of the sequence and markers then become occluded. There could
still be good keyframes in the middle of the shot.
- Extended keyframe_selection_test with real world cases.
- Moved the correspondences constraint to the top, so no H/F estimation
happens if there's bad correspondence. This makes the algorithm go a
bit faster.
Strangely, using non-squared distances makes the neighbor-frames
test fail, while using squared distances makes the test pass.
However, non-squared distances seem to work better
in the real tests I've been doing, so this requires more investigation.
commit 7415c62fbda36c5bd1c291bc94d535a66da896d0
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Thu Dec 20 18:46:09 2012 +0600
Cosmetic change to correspondences reports in keyframe selection

@ -425,25 +425,28 @@ static void libmv_solveRefineIntrinsics(libmv::Tracks *tracks, libmv::CameraIntr
int bundle_constraints = libmv::BUNDLE_NO_CONSTRAINTS)
{
/* only a few combinations are supported but trust the caller */
int libmv_refine_flags = 0;
int bundle_intrinsics = 0;
if (refine_intrinsics & LIBMV_REFINE_FOCAL_LENGTH) {
libmv_refine_flags |= libmv::BUNDLE_FOCAL_LENGTH;
bundle_intrinsics |= libmv::BUNDLE_FOCAL_LENGTH;
}
if (refine_intrinsics & LIBMV_REFINE_PRINCIPAL_POINT) {
libmv_refine_flags |= libmv::BUNDLE_PRINCIPAL_POINT;
bundle_intrinsics |= libmv::BUNDLE_PRINCIPAL_POINT;
}
if (refine_intrinsics & LIBMV_REFINE_RADIAL_DISTORTION_K1) {
libmv_refine_flags |= libmv::BUNDLE_RADIAL_K1;
bundle_intrinsics |= libmv::BUNDLE_RADIAL_K1;
}
if (refine_intrinsics & LIBMV_REFINE_RADIAL_DISTORTION_K2) {
libmv_refine_flags |= libmv::BUNDLE_RADIAL_K2;
bundle_intrinsics |= libmv::BUNDLE_RADIAL_K2;
}
progress_update_callback(callback_customdata, 1.0, "Refining solution");
libmv::EuclideanBundleCommonIntrinsics(*(libmv::Tracks *)tracks, libmv_refine_flags,
reconstruction, intrinsics, bundle_constraints);
libmv::EuclideanBundleCommonIntrinsics(*(libmv::Tracks *)tracks,
bundle_intrinsics,
bundle_constraints,
reconstruction,
intrinsics);
}
static void cameraIntrinsicsFromOptions(libmv::CameraIntrinsics *camera_intrinsics,
@ -573,9 +576,9 @@ struct libmv_Reconstruction *libmv_solveModal(struct libmv_Tracks *libmv_tracks,
libmv::CameraIntrinsics empty_intrinsics;
libmv::EuclideanBundleCommonIntrinsics(normalized_tracks,
libmv::BUNDLE_NO_INTRINSICS,
libmv::BUNDLE_NO_TRANSLATION,
reconstruction,
&empty_intrinsics,
libmv::BUNDLE_NO_TRANSLATION);
&empty_intrinsics);
/* Refinement. */
if (libmv_reconstruction_options->refine_intrinsics) {

@ -388,4 +388,23 @@ bool MotionFromEssentialAndCorrespondence(const Mat3 &E,
}
}
void FundamentalToEssential(const Mat3 &F, Mat3 *E) {
Eigen::JacobiSVD<Mat3> svd(F, Eigen::ComputeFullU | Eigen::ComputeFullV);
// See Hartley & Zisserman page 294, result 11.1, which shows how to get the
// closest essential matrix to a matrix that is "almost" an essential matrix.
double a = svd.singularValues()(0);
double b = svd.singularValues()(1);
double s = (a + b) / 2.0;
LG << "Initial reconstruction's rotation is non-euclidean by "
<< (((a - b) / std::max(a, b)) * 100) << "%; singular values:"
<< svd.singularValues().transpose();
Vec3 diag;
diag << s, s, 0;
*E = svd.matrixU() * diag.asDiagonal() * svd.matrixV().transpose();
}
} // namespace libmv

@ -139,6 +139,11 @@ bool MotionFromEssentialAndCorrespondence(const Mat3 &E,
Mat3 *R,
Vec3 *t);
/**
* Find the closest essential matrix E to a fundamental matrix F
*/
void FundamentalToEssential(const Mat3 &F, Mat3 *E);
} // namespace libmv
#endif // LIBMV_MULTIVIEW_FUNDAMENTAL_H_

@ -24,7 +24,6 @@
#include "ceres/ceres.h"
#include "ceres/rotation.h"
#include "libmv/base/scoped_ptr.h"
#include "libmv/base/vector.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/fundamental.h"
@ -55,15 +54,20 @@ enum {
namespace {
// Cost functor which computes reprojection error of 3D point X
// on a camera defined by angle-axis rotation and its translation
// (which are in the same block due to optimization reasons).
//
// This functor uses a radial distortion model.
struct OpenCVReprojectionError {
OpenCVReprojectionError(double observed_x, double observed_y)
OpenCVReprojectionError(const double observed_x, const double observed_y)
: observed_x(observed_x), observed_y(observed_y) {}
template <typename T>
bool operator()(const T* const intrinsics,
const T* const R_t, // Rotation denoted by angle axis
// followed with translation
const T* const X, // Point coordinates 3x1.
const T* const X, // Point coordinates 3x1.
T* residuals) const {
// Unpack the intrinsics.
const T& focal_length = intrinsics[OFFSET_FOCAL_LENGTH];
@ -84,8 +88,9 @@ struct OpenCVReprojectionError {
x[2] += R_t[5];
// Prevent points from going behind the camera.
if (x[2] < T(0))
if (x[2] < T(0)) {
return false;
}
// Compute normalized coordinates: x /= x[2].
T xn = x[0] / x[2];
@ -109,66 +114,56 @@ struct OpenCVReprojectionError {
// The error is the difference between the predicted and observed position.
residuals[0] = predicted_x - T(observed_x);
residuals[1] = predicted_y - T(observed_y);
return true;
}
double observed_x;
double observed_y;
const double observed_x;
const double observed_y;
};
void BundleIntrinsicsLogMessage(int bundle_intrinsics) {
// Print a message to the log about which camera intrinsics are going to be optimized.
void BundleIntrinsicsLogMessage(const int bundle_intrinsics) {
if (bundle_intrinsics == BUNDLE_NO_INTRINSICS) {
LG << "Bundling only camera positions.";
} else if (bundle_intrinsics == BUNDLE_FOCAL_LENGTH) {
LG << "Bundling f.";
} else if (bundle_intrinsics == (BUNDLE_FOCAL_LENGTH |
BUNDLE_PRINCIPAL_POINT)) {
LG << "Bundling f, px, py.";
} else if (bundle_intrinsics == (BUNDLE_FOCAL_LENGTH |
BUNDLE_PRINCIPAL_POINT |
BUNDLE_RADIAL)) {
LG << "Bundling f, px, py, k1, k2.";
} else if (bundle_intrinsics == (BUNDLE_FOCAL_LENGTH |
BUNDLE_PRINCIPAL_POINT |
BUNDLE_RADIAL |
BUNDLE_TANGENTIAL)) {
LG << "Bundling f, px, py, k1, k2, p1, p2.";
} else if (bundle_intrinsics == (BUNDLE_FOCAL_LENGTH |
BUNDLE_RADIAL |
BUNDLE_TANGENTIAL)) {
LG << "Bundling f, px, py, k1, k2, p1, p2.";
} else if (bundle_intrinsics == (BUNDLE_FOCAL_LENGTH |
BUNDLE_RADIAL)) {
LG << "Bundling f, k1, k2.";
} else if (bundle_intrinsics == (BUNDLE_FOCAL_LENGTH |
BUNDLE_RADIAL_K1)) {
LG << "Bundling f, k1.";
} else if (bundle_intrinsics == (BUNDLE_RADIAL_K1 |
BUNDLE_RADIAL_K2)) {
LG << "Bundling k1, k2.";
LOG(INFO) << "Bundling only camera positions.";
} else {
LOG(FATAL) << "Unsupported bundle combination.";
std::string bundling_message = "";
#define APPEND_BUNDLING_INTRINSICS(name, flag) \
if (bundle_intrinsics & flag) { \
if (!bundling_message.empty()) { \
bundling_message += ", "; \
} \
bundling_message += name; \
} (void)0
APPEND_BUNDLING_INTRINSICS("f", BUNDLE_FOCAL_LENGTH);
APPEND_BUNDLING_INTRINSICS("px, py", BUNDLE_PRINCIPAL_POINT);
APPEND_BUNDLING_INTRINSICS("k1", BUNDLE_RADIAL_K1);
APPEND_BUNDLING_INTRINSICS("k2", BUNDLE_RADIAL_K2);
APPEND_BUNDLING_INTRINSICS("p1", BUNDLE_TANGENTIAL_P1);
APPEND_BUNDLING_INTRINSICS("p2", BUNDLE_TANGENTIAL_P2);
LOG(INFO) << "Bundling " << bundling_message << ".";
}
}
// Pack intrinsics from object to an array for easier
// and faster minimization
void PackIntrinisicsIntoArray(CameraIntrinsics *intrinsics,
// and faster minimization.
void PackIntrinisicsIntoArray(const CameraIntrinsics &intrinsics,
double ceres_intrinsics[8]) {
ceres_intrinsics[OFFSET_FOCAL_LENGTH] = intrinsics->focal_length();
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_X] = intrinsics->principal_point_x();
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_Y] = intrinsics->principal_point_y();
ceres_intrinsics[OFFSET_K1] = intrinsics->k1();
ceres_intrinsics[OFFSET_K2] = intrinsics->k2();
ceres_intrinsics[OFFSET_K3] = intrinsics->k3();
ceres_intrinsics[OFFSET_P1] = intrinsics->p1();
ceres_intrinsics[OFFSET_P2] = intrinsics->p2();
ceres_intrinsics[OFFSET_FOCAL_LENGTH] = intrinsics.focal_length();
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_X] = intrinsics.principal_point_x();
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_Y] = intrinsics.principal_point_y();
ceres_intrinsics[OFFSET_K1] = intrinsics.k1();
ceres_intrinsics[OFFSET_K2] = intrinsics.k2();
ceres_intrinsics[OFFSET_K3] = intrinsics.k3();
ceres_intrinsics[OFFSET_P1] = intrinsics.p1();
ceres_intrinsics[OFFSET_P2] = intrinsics.p2();
}
// Unpack intrinsics back from an array to an object
void UnpackIntrinsicsFromArray(CameraIntrinsics *intrinsics,
double ceres_intrinsics[8]) {
// Unpack intrinsics back from an array to an object.
void UnpackIntrinsicsFromArray(const double ceres_intrinsics[8],
CameraIntrinsics *intrinsics) {
intrinsics->SetFocalLength(ceres_intrinsics[OFFSET_FOCAL_LENGTH],
ceres_intrinsics[OFFSET_FOCAL_LENGTH]);
@ -189,46 +184,132 @@ void UnpackIntrinsicsFromArray(CameraIntrinsics *intrinsics,
// Element with index i matches to a rotation+translation for
// camera at image i.
vector<Vec6> PackCamerasRotationAndTranslation(
const Tracks &tracks,
EuclideanReconstruction *reconstruction) {
vector<Vec6> cameras_R_t;
const Tracks &tracks,
const EuclideanReconstruction &reconstruction) {
vector<Vec6> all_cameras_R_t;
int max_image = tracks.MaxImage();
cameras_R_t.resize(max_image + 1);
all_cameras_R_t.resize(max_image + 1);
for (int i = 0; i <= max_image; i++) {
EuclideanCamera *camera = reconstruction->CameraForImage(i);
const EuclideanCamera *camera = reconstruction.CameraForImage(i);
if (!camera)
if (!camera) {
continue;
}
ceres::RotationMatrixToAngleAxis(&camera->R(0, 0),
&cameras_R_t[i](0));
cameras_R_t[i].tail<3>() = camera->t;
&all_cameras_R_t[i](0));
all_cameras_R_t[i].tail<3>() = camera->t;
}
return cameras_R_t;
return all_cameras_R_t;
}
// Convert camera rotations from angle axis back to rotation matrix
// Convert camera rotations from angle axis back to rotation matrix.
void UnpackCamerasRotationAndTranslation(
const Tracks &tracks,
EuclideanReconstruction *reconstruction,
vector<Vec6> cameras_R_t) {
const Tracks &tracks,
const vector<Vec6> &all_cameras_R_t,
EuclideanReconstruction *reconstruction) {
int max_image = tracks.MaxImage();
for (int i = 0; i <= max_image; i++) {
EuclideanCamera *camera = reconstruction->CameraForImage(i);
if (!camera)
if (!camera) {
continue;
}
ceres::AngleAxisToRotationMatrix(&cameras_R_t[i](0),
ceres::AngleAxisToRotationMatrix(&all_cameras_R_t[i](0),
&camera->R(0, 0));
camera->t = cameras_R_t[i].tail<3>();
camera->t = all_cameras_R_t[i].tail<3>();
}
}
// Converts sparse CRSMatrix to Eigen matrix, so it could be used
// all over in the pipeline.
//
// TODO(sergey): currently uses dense Eigen matrices, best would
// be to use sparse Eigen matrices
void CRSMatrixToEigenMatrix(const ceres::CRSMatrix &crs_matrix,
Mat *eigen_matrix) {
eigen_matrix->resize(crs_matrix.num_rows, crs_matrix.num_cols);
eigen_matrix->setZero();
for (int row = 0; row < crs_matrix.num_rows; ++row) {
int start = crs_matrix.rows[row];
int end = crs_matrix.rows[row + 1] - 1;
for (int i = start; i <= end; i++) {
int col = crs_matrix.cols[i];
double value = crs_matrix.values[i];
(*eigen_matrix)(row, col) = value;
}
}
}
void EuclideanBundlerPerformEvaluation(const Tracks &tracks,
EuclideanReconstruction *reconstruction,
vector<Vec6> *all_cameras_R_t,
ceres::Problem *problem,
BundleEvaluation *evaluation) {
int max_track = tracks.MaxTrack();
// Number of camera rotations equals the number of translations.
int num_cameras = all_cameras_R_t->size();
int num_points = 0;
for (int i = 0; i <= max_track; i++) {
const EuclideanPoint *point = reconstruction->PointForTrack(i);
if (point) {
num_points++;
}
}
LG << "Number of cameras " << num_cameras;
LG << "Number of points " << num_points;
evaluation->num_cameras = num_cameras;
evaluation->num_points = num_points;
if (evaluation->evaluate_jacobian) {
// Evaluate jacobian matrix.
ceres::CRSMatrix evaluated_jacobian;
ceres::Problem::EvaluateOptions eval_options;
// Cameras go first in the ordering.
int max_image = tracks.MaxImage();
bool is_first_camera = true;
for (int i = 0; i <= max_image; i++) {
const EuclideanCamera *camera = reconstruction->CameraForImage(i);
if (camera) {
double *current_camera_R_t = &(*all_cameras_R_t)[i](0);
// All cameras are variable now.
if (is_first_camera) {
problem->SetParameterBlockVariable(current_camera_R_t);
is_first_camera = false;
}
eval_options.parameter_blocks.push_back(current_camera_R_t);
}
}
// Points go at the end of the ordering.
for (int i = 0; i <= max_track; i++) {
EuclideanPoint *point = reconstruction->PointForTrack(i);
if (point) {
eval_options.parameter_blocks.push_back(&point->X(0));
}
}
problem->Evaluate(eval_options,
NULL, NULL, NULL,
&evaluated_jacobian);
CRSMatrixToEigenMatrix(evaluated_jacobian, &evaluation->jacobian);
}
}
} // namespace
void EuclideanBundle(const Tracks &tracks,
@ -236,15 +317,18 @@ void EuclideanBundle(const Tracks &tracks,
CameraIntrinsics intrinsics;
EuclideanBundleCommonIntrinsics(tracks,
BUNDLE_NO_INTRINSICS,
BUNDLE_NO_CONSTRAINTS,
reconstruction,
&intrinsics);
&intrinsics,
NULL);
}
void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
int bundle_intrinsics,
const int bundle_intrinsics,
const int bundle_constraints,
EuclideanReconstruction *reconstruction,
CameraIntrinsics *intrinsics,
int bundle_constraints) {
BundleEvaluation *evaluation) {
LG << "Original intrinsics: " << *intrinsics;
vector<Marker> markers = tracks.AllMarkers();
@ -255,43 +339,44 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
// intrinsics into a single block and rely on local parameterizations to
// control which intrinsics are allowed to vary.
double ceres_intrinsics[8];
PackIntrinisicsIntoArray(intrinsics, ceres_intrinsics);
PackIntrinisicsIntoArray(*intrinsics, ceres_intrinsics);
// Convert cameras rotations to angle axis and merge with translation
// into single parameter block for maximal minimization speed
// into single parameter block for maximal minimization speed.
//
// Block for minimization has got the following structure:
// <3 elements for angle-axis> <3 elements for translation>
vector<Vec6> cameras_R_t =
PackCamerasRotationAndTranslation(tracks, reconstruction);
vector<Vec6> all_cameras_R_t =
PackCamerasRotationAndTranslation(tracks, *reconstruction);
// Parameterization used to restrict camera motion for
// modal solvers
ceres::SubsetParameterization *motion_parameterization = NULL;
// Parameterization used to restrict camera motion for modal solvers.
ceres::SubsetParameterization *constant_translation_parameterization = NULL;
if (bundle_constraints & BUNDLE_NO_TRANSLATION) {
std::vector<int> constant_motion;
std::vector<int> constant_translation;
// First three elements are rotation, last three are translation
constant_motion.push_back(3);
constant_motion.push_back(4);
constant_motion.push_back(5);
// First three elements are rotation, last three are translation.
constant_translation.push_back(3);
constant_translation.push_back(4);
constant_translation.push_back(5);
motion_parameterization =
new ceres::SubsetParameterization(6, constant_motion);
constant_translation_parameterization =
new ceres::SubsetParameterization(6, constant_translation);
}
// Add residual blocks to the problem.
int num_residuals = 0;
bool have_locked_camera = false;
for (int i = 0; i < markers.size(); ++i) {
const Marker &marker = markers[i];
EuclideanCamera *camera = reconstruction->CameraForImage(marker.image);
EuclideanPoint *point = reconstruction->PointForTrack(marker.track);
if (!camera || !point) {
if (camera == NULL || point == NULL) {
continue;
}
// Rotation of camera denoted in angle axis
double *camera_R_t = &cameras_R_t[camera->image] (0);
// Rotation of camera denoted in angle axis followed by
// camera translation.
double *current_camera_R_t = &all_cameras_R_t[camera->image](0);
problem.AddResidualBlock(new ceres::AutoDiffCostFunction<
OpenCVReprojectionError, 2, 8, 6, 3>(
@ -300,18 +385,19 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
marker.y)),
NULL,
ceres_intrinsics,
camera_R_t,
current_camera_R_t,
&point->X(0));
// We lock first camera for better deal with
// scene orientation ambiguity
// We lock the first camera to better deal with scene orientation ambiguity.
if (!have_locked_camera) {
problem.SetParameterBlockConstant(camera_R_t);
problem.SetParameterBlockConstant(current_camera_R_t);
have_locked_camera = true;
}
if (bundle_constraints & BUNDLE_NO_TRANSLATION)
problem.SetParameterization(camera_R_t, motion_parameterization);
if (bundle_constraints & BUNDLE_NO_TRANSLATION) {
problem.SetParameterization(current_camera_R_t,
constant_translation_parameterization);
}
num_residuals++;
}
@ -325,11 +411,12 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
BundleIntrinsicsLogMessage(bundle_intrinsics);
if (bundle_intrinsics == BUNDLE_NO_INTRINSICS) {
// No camera intrinsics are refining,
// set the whole parameter block as constant for best performance
// No camera intrinsics are being refined,
// set the whole parameter block as constant for best performance.
problem.SetParameterBlockConstant(ceres_intrinsics);
} else {
// Set intrinsics not being bundles as constant
// Set the camera intrinsics that are not to be bundled as
// constant using some macro trickery.
std::vector<int> constant_intrinsics;
#define MAYBE_SET_CONSTANT(bundle_enum, offset) \
@ -354,7 +441,7 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
problem.SetParameterization(ceres_intrinsics, subset_parameterization);
}
// Configure the solver
// Configure the solver.
ceres::Solver::Options options;
options.use_nonmonotonic_steps = true;
options.preconditioner_type = ceres::SCHUR_JACOBI;
@ -375,14 +462,19 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
// Copy rotations and translations back.
UnpackCamerasRotationAndTranslation(tracks,
reconstruction,
cameras_R_t);
all_cameras_R_t,
reconstruction);
// Copy intrinsics back.
if (bundle_intrinsics != BUNDLE_NO_INTRINSICS)
UnpackIntrinsicsFromArray(intrinsics, ceres_intrinsics);
UnpackIntrinsicsFromArray(ceres_intrinsics, intrinsics);
LG << "Final intrinsics: " << *intrinsics;
if (evaluation) {
EuclideanBundlerPerformEvaluation(tracks, reconstruction, &all_cameras_R_t,
&problem, evaluation);
}
}
void ProjectiveBundle(const Tracks & /*tracks*/,

@ -21,6 +21,8 @@
#ifndef LIBMV_SIMPLE_PIPELINE_BUNDLE_H
#define LIBMV_SIMPLE_PIPELINE_BUNDLE_H
#include "libmv/numeric/numeric.h"
namespace libmv {
class CameraIntrinsics;
@ -28,6 +30,31 @@ class EuclideanReconstruction;
class ProjectiveReconstruction;
class Tracks;
struct BundleEvaluation {
BundleEvaluation() :
num_cameras(0),
num_points(0),
evaluate_jacobian(false) {
}
// Number of cameras that appeared in the bundle adjustment problem.
int num_cameras;
// Number of points that appeared in the bundle adjustment problem.
int num_points;
// When set to true, the jacobian of the problem after optimization
// will be evaluated and stored in \parameter jacobian.
bool evaluate_jacobian;
// Contains evaluated jacobian of the problem.
// Parameters are ordered in the following way:
// - Intrinsics block
// - Cameras (for each camera rotation goes first, then translation)
// - Points
Mat jacobian;
};
/*!
Refine camera poses and 3D coordinates using bundle adjustment.
@ -60,19 +87,14 @@ void EuclideanBundle(const Tracks &tracks,
The cameras, bundles, and intrinsics are refined in-place.
The only supported combinations of bundle parameters are:
BUNDLE_NO_INTRINSICS
BUNDLE_FOCAL_LENGTH
BUNDLE_FOCAL_LENGTH | BUNDLE_PRINCIPAL_POINT
BUNDLE_FOCAL_LENGTH | BUNDLE_PRINCIPAL_POINT | BUNDLE_RADIAL
BUNDLE_FOCAL_LENGTH | BUNDLE_PRINCIPAL_POINT | BUNDLE_RADIAL | BUNDLE_TANGENTIAL
BUNDLE_RADIAL
Constraints denotes which blocks to keep constant during bundling.
For example it is useful to keep camera translations constant
when bundling tripod motions.
If evaluation is not null, various evaluation statistics are filled
in there, plus all the requested additional information (like the
jacobian) is also calculated there. Also see comments for BundleEvaluation.
\note This assumes an outlier-free set of markers.
\sa EuclideanResect, EuclideanIntersect, EuclideanReconstructTwoFrames
@ -93,11 +115,11 @@ enum BundleConstraints {
BUNDLE_NO_TRANSLATION = 1,
};
void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
int bundle_intrinsics,
const int bundle_intrinsics,
const int bundle_constraints,
EuclideanReconstruction *reconstruction,
CameraIntrinsics *intrinsics,
int bundle_constraints =
BUNDLE_NO_CONSTRAINTS);
BundleEvaluation *evaluation = NULL);
/*!
Refine camera poses and 3D coordinates using bundle adjustment.
@ -111,7 +133,7 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
The cameras and bundles (homogeneous 3D points) are refined in-place.
\note This assumes an outlier-free set of markers.
\note This assumes that radial distortion is already corrected for, but
\note This assumes that radial distortion is already corrected for, but
does not assume that that other intrinsics are.
\sa ProjectiveResect, ProjectiveIntersect, ProjectiveReconstructTwoFrames
@ -122,3 +144,4 @@ void ProjectiveBundle(const Tracks &tracks,
} // namespace libmv
#endif // LIBMV_SIMPLE_PIPELINE_BUNDLE_H

@ -68,20 +68,8 @@ bool EuclideanReconstructTwoFrames(const vector<Marker> &markers,
NormalizedEightPointSolver(x1, x2, &F);
// The F matrix should be an E matrix, but squash it just to be sure.
Eigen::JacobiSVD<Mat3> svd(F, Eigen::ComputeFullU | Eigen::ComputeFullV);
// See Hartley & Zisserman page 294, result 11.1, which shows how to get the
// closest essential matrix to a matrix that is "almost" an essential matrix.
double a = svd.singularValues()(0);
double b = svd.singularValues()(1);
double s = (a + b) / 2.0;
LG << "Initial reconstruction's rotation is non-euclidean by "
<< (((a - b) / std::max(a, b)) * 100) << "%; singular values:"
<< svd.singularValues().transpose();
Vec3 diag;
diag << s, s, 0;
Mat3 E = svd.matrixU() * diag.asDiagonal() * svd.matrixV().transpose();
Mat3 E;
FundamentalToEssential(F, &E);
// Recover motion between the two images. Since this function assumes a
// calibrated camera, use the identity for K.

@ -136,6 +136,7 @@ bool EuclideanIntersect(const vector<Marker> &markers,
if (x(2) < 0) {
LOG(ERROR) << "POINT BEHIND CAMERA " << markers[i].image
<< ": " << x.transpose();
return false;
}
}

@ -178,9 +178,13 @@ void InternalCompleteReconstruction(
if (reconstructed_markers.size() >= 2) {
CompleteReconstructionLogProress(update_callback,
(double)tot_resects/(max_image));
PipelineRoutines::Intersect(reconstructed_markers, reconstruction);
num_intersects++;
LG << "Ran Intersect() for track " << track;
if (PipelineRoutines::Intersect(reconstructed_markers,
reconstruction)) {
num_intersects++;
LG << "Ran Intersect() for track " << track;
} else {
LG << "Failed Intersect() for track " << track;
}
}
}
if (num_intersects) {

@ -96,7 +96,7 @@ enum { STDIN_FILENO = 0, STDOUT_FILENO = 1, STDERR_FILENO = 2 };
/* In windows-land, hash<> is called hash_compare<> (from xhash.h) */
/* VC11 provides std::hash */
#if defined(_MSC_VER) && (_MSC_VER < 1700)
#define hash hash_compare
#define hash hash_compare
#endif
/* Sleep is in ms, on windows */