This is mainly a maintenance commit which syncs changes
between Blender and Libmv upstream and also brings in a new
GLog version.
This GLog version presumably has better out-of-the-box
support for MinGW.
This commit is also aimed at making further third-party
library updates easier.
Switch the detector API to a single function which accepts
a float image and detector options. This makes the usage of
feature detection more unified across the different algorithms.
The options structure is pretty much straightforward and contains
the detector to be used and all the detector-specific settings.
Also implemented the Harris feature detection algorithm, which
is not as fast as FAST but is expected to detect more robust
feature points. It will likely detect fewer features, but
quality beats quantity here.
Blender will now use the Harris detector by default; later we'll
remove the FAST detector.
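For illustration, here is a minimal sketch of what such a unified detector
entry point could look like. The type and field names below (DetectOptions,
Feature, FloatImage and the individual settings) are assumptions for this
example, not necessarily the exact libmv declarations:

    #include <vector>

    struct FloatImage;  // Placeholder for libmv's float image type.

    // One options struct selects the algorithm and carries all
    // algorithm-specific settings.
    struct DetectOptions {
      enum DetectorType { FAST, HARRIS } type;
      // FAST-specific settings.
      int fast_min_trackness;
      // Harris-specific settings.
      double harris_threshold;
      // Common settings.
      int margin;        // Distance from the image edge to ignore, in pixels.
      int min_distance;  // Minimal distance between detected features.
    };

    struct Feature {
      float x, y;   // Position of the feature in the image.
      float score;  // Detector response, used to keep the best features.
    };

    // Single entry point: a float image plus options, whatever the algorithm.
    void Detect(const FloatImage &image,
                const DetectOptions &options,
                std::vector<Feature> *detected_features);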
Also made libmv-capi use guarded object allocation.
Ran into some suspicious cases where it was not so
clear whether memory was being freed or not.
Now we'll know for sure whether there are leaks or not :)
Having these macros in a guardedalloc header helps with
using them in other areas (for now it's OCIO and libmv,
but in the future there will be more places).
Implements an automatic keyframe selection algorithm which uses
a couple of approaches to find the best keyframe candidates:
- First, a slightly modified Pollefeys criterion is used, which
limits the correspondence ratio to between 80% and 100%. This allows
rejecting a keyframe candidate early, without doing heavy math, in
cases where there are not many features in common with the first keyframe.
- The second step is based on the Geometric Robust Information Criterion
(aka GRIC), which checks whether the feature motion between the
candidate keyframes is better described by a homography or by a
fundamental matrix.
To be a good keyframe candidate, the fundamental matrix needs to
describe the motion better than the homography (in this case F-GRIC
will be smaller than H-GRIC). A small sketch of the GRIC score is
included after this list.
These two criteria are well described in this paper:
http://www.cs.ait.ac.th/~mdailey/papers/Tahir-KeyFrame.pdf
- The final step is based on estimating the reconstruction error of
a full-scene solution using the candidate keyframes. This part
is based on the following paper:
ftp://ftp.tnt.uni-hannover.de/pub/papers/2004/ECCV2004-TTHBAW.pdf
This step requires a reconstruction using the candidate keyframes
and obtaining the covariance matrix of the 3D point positions.
The reconstruction is done in a pretty straightforward way using the
other simple pipeline routines, and for covariance estimation the
pseudo-inverse of the Hessian is used, which in this case is
(J^T * J)+, where + denotes the pseudo-inverse.
The Jacobian matrix is estimated using the Ceres Evaluate API.
It is also crucial to get rid of the possible gauge ambiguity,
which in our case is done by zeroing the 7 smallest eigenvalues
(the number of gauge freedoms) in the pseudo-inverse. A sketch
of this covariance estimation also follows below.
There's still room for improving and optimizing the code,
but we need some point to start from anyway :)
Thanks to Keir Mierle and Sameer Agarwal, who assisted a lot
in making this feature work.
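For reference, here are minimal sketches of the two pieces mentioned above.
First the GRIC score, following Torr's formulation as it is commonly stated
and as described in the linked paper; the exact constants in the actual
implementation may differ:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // e2:     squared residuals of the correspondences under the model,
    // sigma2: measurement noise variance,
    // r:      dimension of the data (4 for two-view correspondences),
    // d:      dimension of the model manifold (2 for homography,
    //         3 for fundamental matrix),
    // k:      number of model parameters (8 for homography, 7 for fundamental).
    double GRIC(const std::vector<double> &e2, double sigma2,
                int r, int d, int k) {
      const double n = static_cast<double>(e2.size());
      const double lambda1 = std::log(static_cast<double>(r));
      const double lambda2 = std::log(static_cast<double>(r) * n);
      const double lambda3 = 2.0;  // Caps the contribution of outliers.
      double score = lambda1 * d * n + lambda2 * k;
      for (double e : e2) {
        score += std::min(e / sigma2, lambda3 * (r - d));
      }
      return score;
    }

A candidate pair is considered good when the fundamental-matrix score is
smaller than the homography score.

And a minimal Eigen-based sketch of the covariance estimation from the final
step: build the Gauss-Newton Hessian J^T * J from the Jacobian evaluated by
Ceres, then pseudo-invert it while leaving the 7 smallest eigenvalues (the
gauge freedoms) at zero. Function and variable names here are illustrative only:

    #include <Eigen/Dense>

    Eigen::MatrixXd PseudoInverseWithGaugeFixing(const Eigen::MatrixXd &J,
                                                 int num_gauge_freedoms = 7) {
      Eigen::MatrixXd H = J.transpose() * J;  // Gauss-Newton Hessian.
      Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eig(H);
      Eigen::VectorXd values = eig.eigenvalues();  // Ascending order.
      Eigen::VectorXd inverted = Eigen::VectorXd::Zero(values.size());
      // Leave the smallest eigenvalues (the gauge freedoms) at zero and
      // invert the rest.
      for (int i = num_gauge_freedoms; i < values.size(); ++i) {
        inverted(i) = 1.0 / values(i);
      }
      // The covariance of the 3D point positions lives in the corresponding
      // diagonal blocks of this matrix.
      return eig.eigenvectors() * inverted.asDiagonal() *
             eig.eigenvectors().transpose();
    }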
Makes code in tracking.cc much easier to understand and modify,
without worrying about breaking compilation with Libmv disabled.
It is still possible that compilation will break due to libmv-capi
changes, but that doesn't happen very often.
Made it so the reconstructed scene is always scaled in such a way
that the variance of the camera centers is unity.
This solves "issues" where different keyframes give the same
reprojection error but produce scenes with a different scale,
which could easily have been considered a bad keyframe combination.
This change is essential for the automatic keyframe
selection algorithm to work reliably for the user.
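A minimal sketch of what this normalization amounts to; the type and function
names are illustrative, not the actual reconstruction API:

    #include <cmath>
    #include <vector>
    #include <Eigen/Dense>

    void ScaleToUnitCameraVariance(std::vector<Eigen::Vector3d> *camera_centers,
                                   std::vector<Eigen::Vector3d> *points) {
      Eigen::Vector3d mean = Eigen::Vector3d::Zero();
      for (const Eigen::Vector3d &c : *camera_centers)
        mean += c;
      mean /= static_cast<double>(camera_centers->size());

      double variance = 0.0;
      for (const Eigen::Vector3d &c : *camera_centers)
        variance += (c - mean).squaredNorm();
      variance /= static_cast<double>(camera_centers->size());

      // Scale everything so the camera center variance becomes 1.0.
      const double scale = 1.0 / std::sqrt(variance);
      for (Eigen::Vector3d &c : *camera_centers)
        c *= scale;
      for (Eigen::Vector3d &p : *points)
        p *= scale;
    }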
There are some planned features which would
require rigid registration, but this code
would need to be redone anyway to use the new
minimizer and to solve some issues with the ICP
algorithm there.
Several major things are done in this commit:
- First of all, the logic of the modal solver was changed.
We no longer rely only on the minimizer to take care of
guessing the rotation for a frame; instead, analytical
rotation computation between point clouds is used to
obtain the initial rotation.
This rotation is then refined using the Ceres minimizer,
and instead of minimizing the average distance between
the points of the two clouds, the reprojection error of
the point cloud onto the frame is now minimized (a rough
sketch of such a cost function follows after this list).
This gives quite a bit of precision improvement.
- The second bigger improvement is running bundle
adjustment on the result of the first step, where we're
only estimating the rotation between neighboring images
and reprojecting markers.
This averages the error across the image sequence,
avoiding error accumulation. It also tweaks the
bundles themselves a bit for a better match.
- The last bigger improvement is support for camera
intrinsics refinement.
This allowed significantly improving the solution
for real-life footage, and results after such
refinement are much more usable than they were before.
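A rough sketch of the kind of Ceres cost function this describes: the 3D point
(bundle) is rotated by the per-frame rotation and the residual is its
reprojection error against the observed marker position, here on the
normalized image plane with intrinsics omitted. The functor name and the
parameterization are illustrative, not the actual implementation:

    #include <ceres/ceres.h>
    #include <ceres/rotation.h>

    struct ModalReprojectionError {
      ModalReprojectionError(double observed_x, double observed_y,
                             const double bundle[3])
          : observed_x(observed_x), observed_y(observed_y) {
        for (int i = 0; i < 3; ++i) point[i] = bundle[i];
      }

      template <typename T>
      bool operator()(const T *angle_axis_rotation, T *residuals) const {
        T p[3] = {T(point[0]), T(point[1]), T(point[2])};
        T rotated[3];
        ceres::AngleAxisRotatePoint(angle_axis_rotation, p, rotated);
        // Reprojection error on the normalized image plane.
        residuals[0] = rotated[0] / rotated[2] - T(observed_x);
        residuals[1] = rotated[1] / rotated[2] - T(observed_y);
        return true;
      }

      double observed_x, observed_y;
      double point[3];
    };

Such a residual would typically be added once per visible marker, wrapped in
ceres::AutoDiffCostFunction<ModalReprojectionError, 2, 3>.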
Thanks to Keir for the help and code review.
Also made the linear solver threaded; it seems to make
sense for iterative Schur with inner iterations
enabled.
OpenMP's max thread count, queried from the bundler code,
is used to decide how many threads to use. This could be
changed in the future so that the number of threads is
passed in the options from the Blender side.
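In code this roughly amounts to the following (illustrative; the exact option
names can differ between Ceres versions):

    #ifdef _OPENMP
    #  include <omp.h>
    #endif
    #include <ceres/ceres.h>

    static void SetupSolverThreads(ceres::Solver::Options *options) {
    #ifdef _OPENMP
      // Use as many threads as OpenMP reports available.
      options->num_threads = omp_get_max_threads();
      options->num_linear_solver_threads = omp_get_max_threads();
    #else
      options->num_threads = 1;
    #endif
    }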
Also removed the redundant V3D definition from the compiler
flags.
With the new bundle adjustment based on Ceres we don't need
the SSBA library anymore. This also means we don't need the ldl
library, and libmv no longer depends on colamd either.
SSBA seemed to be working OK the last time I checked it
with MSVC and optimization enabled.
Also, we'll likely replace it with our own BA soon, which
works fine with MSVC anyway.
find ldl symbols because the order of libraries seems to be critical
for the gcc linker.
A bit stupid, but that's how the linker works..
Both CMake and SCons should work fine on Linux now.
The root of the issue goes back to the SSBA library, which didn't work
properly when using optimization in MSVC. It was worked
around by disabling optimization for libmv, which is in
fact a shame and shouldn't have been done.
It seems that after some changes optimization no longer affects
the SSBA code, but enabling optimization could be risky so
close to a release.
For now it is solved by splitting SSBA into a separate CMake/SCons
library, disabling optimization only for this particular
library and enabling optimization for the rest of libmv.
Tested on all files which used to fail with optimization
enabled in SSBA, and all of them work the same as before.
Tracking speed is significantly higher now.
After the release we'll enable optimization for SSBA as well,
so there'll be no crappy build setup. Later we'll replace the
old SSBA library with new BA code based on Ceres.
The bundle script will be broken until then, so better
not to use it.
libmv still requires optimization to be switched off because
of some incompatibility between SSBA and the MSVC optimizer which
makes bundle adjustment work just wrong.
This should not be an issue for Ceres, so there is no need to
disable optimization for extern_ceres.
- move object_iterators.c --> view3d_iterators.c (ED_object.h had to include ED_view3d.h, which isn't so nice)
- move projection functions from view3d_view.c --> view3d_project.c (view3d_view was becoming a mishmash of utility functions and operators).
- made some CMake includes system includes.
Such stuff is better solved in glog itself.
Should be a pretty safe change since it was defined for CMake only,
and AFAIR Jens wanted to get rid of this too.
===========================================
Major list of changes done in tomato branch:
- Add a planar tracking implementation to libmv
This adds a new planar tracking implementation to libmv. The
tracker is based on Ceres[1], the new nonlinear minimizer that
Sameer and I released from Google as open source. Since
the motion model is more involved, the interface is
different from the RegionTracker interface used previously
in Blender.
The start of a C API in libmv-capi.{cpp,h} is also included.
- Migrate from pat_{min,max} for markers to a 4-corner representation
Convert markers in the movie clip editor / 2D tracker from using
the pat_min and pat_max notation to using a more general, 4-corner
representation.
There is still considerable porting work to do; in particular,
sliding from the preview widget does not work correctly for rotated
markers.
All other areas should be ported to the new representation:
* Added support for sliding individual corners. LMB slide + Ctrl
scales the whole pattern.
* S scales the whole marker, S-S scales the pattern only.
* Added support for marker rotation, which currently rotates
only patterns around their centers, or all markers around the median.
Rotation or other non-translation/scaling transformations of the search
area don't make sense.
* The Track Preview widget displays the transformed pattern which
libmv actually operates on.
- "Efficient Second-order Minimization" for the planar tracker
This implements the "Efficient Second-order Minimization"
scheme, as supported by the existing translation tracker.
This increases the amount of per-iteration work, but
decreases the number of iterations required to converge and
also increases the size of the basin of attraction for the
optimization.
- Remove the use of the legacy RegionTracker API from Blender,
and replace it with the new TrackRegion API. This also
adds several features to the planar tracker in libmv:
* Do a brute-force initialization of tracking similar to "Hybrid"
mode in the stable release, but using all floats. This is slower
but more accurate. It is still necessary to evaluate if the
performance loss is worth it. In particular, this change is
necessary to support high bit depth imagery.
* Add support for masks over the search window. This is a step
towards supporting user-defined tracker masks. The tracker masks
will make it easy for users to make a mask for e.g. a ball.
Not exposed in the interface yet.
* Add Pearson product-moment correlation coefficient checking (aka
"Correlation" in the UI). This causes a tracking failure if the
tracked patch is not linearly related to the template.
* Add support for warping a few points in addition to the supplied
points. This is useful because the tracking code deliberately
does not expose the underlying warp representation. Instead,
warps are specified in an aparametric way via the correspondences.
- Replace the old style tracker configuration panel with the
new planar tracking panel. From a user's perspective, this means:
* The old "tracking algorithm" picker is gone. There is only 1
algorithm now. We may revisit this later, but I would much
prefer to have only 1 algorithm. So far no optimization work
has been done, so the speed is not there yet.
* There is now a dropdown to select the motion model. Choices:
* Translation
* Translation, rotation
* Translation, scale
* Translation, rotation, scale
* Affine
* Perspective
* The old "Hybrid" mode is gone; instead there is a toggle to
enable or disable translation-only tracker initialization. This
is the equivalent of the hybrid mode before, but rewritten to work
with the new planar tracking modes.
* The pyramid levels setting is gone. At a future date, the planar
tracker will decide to use pyramids or not automatically. The
pyramid setting was ultimately a mistake; with the brute force
initialization it is unnecessary.
- Add light-normalized tracking
Added the ability to normalize patterns by their average value while
tracking, to make them invariant to global illumination changes (a
minimal sketch of this follows after the reference links below).
Additional details can be found on the wiki page [2]
[1] http://code.google.com/p/ceres-solver
[2] http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Motion_Tracker
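A minimal sketch of what the light normalization means in practice (purely
illustrative, not the actual tracker code): the sampled pattern is divided by
its average intensity, so a global brightness change cancels out.

    #include <cstddef>

    static void NormalizeByAverageIntensity(float *pattern, size_t n) {
      double sum = 0.0;
      for (size_t i = 0; i < n; ++i)
        sum += pattern[i];
      const float average = static_cast<float>(sum / n);
      if (average > 0.0f) {
        for (size_t i = 0; i < n; ++i)
          pattern[i] /= average;
      }
    }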
For now this only puts the sources of the Ceres library into
extern/libmv/third_party and sets up the CMake and SCons build systems.
Integration details:
- Even the CMake build files are not re-used from Ceres's trunk: they use
automatic detection of things like glog, pthreads, protobuf and so on, and
it's not so clear how to re-use those files without modifications.
And IMO it's easier if the build files are re-generated automatically to
match the Blender-specific setup, rather than keeping changes made locally in
Blender in sync when re-bundling the Ceres library. Especially since the
SCons build system already needs to be supported.
- Only the actual sources were integrated; all tests were stripped. It would
probably be nice to have them, but they'll need proper integration with the
current module test infrastructure in Blender.
- SuiteSparse was disabled. Having it would help a lot, but there are some
difficulties making cholmod work fine on Windows. It will be added in the future.
- collections_port.cc was also stripped. It's not used by Ceres upstream and
it gives a compilation error (undefined uint32 -- looks like a namespace issue).
- Currently all Schur eliminators are included. Not sure if that makes sense,
and also not sure if it makes sense to have them switchable on and off -- IMO
it's better to have a single configuration which works and does not require
special tweaks after everything is set up.
To bundle an updated version of Ceres:
- Go to extern/libmv/third_party/ceres folder
- Run ./bundle.sh
This will check out a fresh Ceres snapshot of the Windows branch (which is
currently the most interesting one from the Blender integration POV), apply
all patches listed in patches/series and copy the needed files into Blender's
working copy. This will also re-generate the CMake/SCons build rules.
If you need extra files from the Ceres repository which are not present in
Blender, you'll need to copy them manually and then run ./mkfiles.sh from the
extern/libmv/third_party/ceres folder, which will update the list of files
used by Blender.
Thanks to Keir Mierle and Sameer Agarwal (and all others who helped develop
Ceres) for this library, and thanks to Keir Mierle for the help integrating it
into Blender!
- Deduplicate pattern sampling used in the esm and lmicklt trackers and
move SamplePattern to image/sample.h.
- Move computation of the Pearson product-moment correlation into its own
function in the new file image/correlation.h, so all trackers can use it to
check the final correlation (see the sketch after this list).
- Remove the SAD tracker. It's almost the same as the brute tracker, with only
two differences:
1. It does a brute-force search over affine transformations, which in some
cases helps to track rotating features.
2. It didn't use the common tracker API, which probably gave some speed
advantage, but led to a real headache when using it together with other
trackers, leading to duplicated code on the Blender side.
- Switch Blender to use the brute tracker instead of the SAD tracker, which
made the tracking source code much simpler to follow.
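As a reference, a minimal sketch of the Pearson product-moment correlation
check mentioned above, simplified to flat arrays (the real code works on
libmv image types):

    #include <cmath>
    #include <cstddef>

    static double PearsonCorrelation(const float *a, const float *b, size_t n) {
      double mean_a = 0.0, mean_b = 0.0;
      for (size_t i = 0; i < n; ++i) {
        mean_a += a[i];
        mean_b += b[i];
      }
      mean_a /= n;
      mean_b /= n;

      double sab = 0.0, saa = 0.0, sbb = 0.0;
      for (size_t i = 0; i < n; ++i) {
        const double da = a[i] - mean_a;
        const double db = b[i] - mean_b;
        sab += da * db;
        saa += da * da;
        sbb += db * db;
      }
      return sab / std::sqrt(saa * sbb);
    }

    // A tracker would report failure when the correlation between template
    // and tracked patch drops below the user threshold, e.g.:
    //   if (PearsonCorrelation(template_pixels, tracked_pixels, n) < minimum_correlation)
    //     return false;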
This version of libmv includes new gflags and glog libraries, which makes
it possible to compile libmv with the clang compiler.
Also removed code from CMakeLists which was disabling libmv when using clang.
Tested on Linux with gcc-4.6 and clang-3.0, and on Windows with cmake+msvc and scons+mingw.
There could be some issues with other platforms/build systems, which should be simple to resolve.
Comment from Keir's commit:
Add a new hybrid region tracker for motion tracking to libmv, and
add it as an option (under "Hybrid") in the tracking settings. The
region tracker is a combination of brute force tracking for coarse
alignment, then refinement with the ESM/KLT algorithm already in
libmv that gives excellent subpixel precision (typically 1/50th
of a pixel).
This also adds a new "brute force" region tracker which does a
brute force search through every pixel position in the destination
for the pattern in the first frame. It leverages SSE if available,
similar to the SAD tracker, to do this quickly. Currently it does
some unnecessary conversions to/from floating point that will get
fixed later.
The hybrid tracker glues the two trackers (brute & ESM) together
to get an overall better tracker. The algorithm is simple (a condensed
sketch follows the steps below):
1. Track from frame 1 to frame 2 with the brute force tracker.
This tries every possible pixel position for the pattern from
frame 1 in frame 2. The position with the smallest
sum-of-absolute-differences is chosen. By definition, this
position is only accurate up to 1 pixel or so.
2. Using the result from 1, initialize a track with ESM. This does
a least-squares fit with subpixel precision.
3. If the ESM shift was more than 2 pixels, report failure.
4. If the ESM track shifted less than 2 pixels, then the track is
good and we're done. The rationale here is that if the
refinement stage shifts more than 1 pixel, then the brute force
result likely found some random position that's not a good fit.
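A condensed sketch of that logic; the tracker names and signatures below are
made up for the example, the real trackers live in libmv:

    struct FloatImage;  // Placeholder for libmv's float image type.
    bool BruteTrack(const FloatImage &, const FloatImage &, double *, double *);
    bool EsmTrack(const FloatImage &, const FloatImage &, double *, double *);

    // Returns true on success and updates (x, y) with the subpixel position.
    bool HybridTrack(const FloatImage &image1, const FloatImage &image2,
                     double *x, double *y) {
      // 1. Coarse alignment: brute-force search, pixel accuracy only.
      if (!BruteTrack(image1, image2, x, y))
        return false;

      // 2. Subpixel refinement with ESM/KLT, starting from the brute result.
      double refined_x = *x, refined_y = *y;
      if (!EsmTrack(image1, image2, &refined_x, &refined_y))
        return false;

      // 3./4. Accept only if the refinement stayed close to the brute result;
      // a large shift means the coarse match was probably bogus.
      const double dx = refined_x - *x, dy = refined_y - *y;
      if (dx * dx + dy * dy > 2.0 * 2.0)
        return false;

      *x = refined_x;
      *y = refined_y;
      return true;
    }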
svn command used: svn merge -r 42375:42376 -r 42377:42379 ^/branches/soc-2011-tomato
There was an error in CMakeLists.txt caused by the automatic bundling script,
which expanded variables instead of substituting them as-is.
Fixed both the bundling script and CMakeLists.txt.
In some cases solving can take a while (especially when refining is used),
and keeping the interface locked is a bit annoying. The camera solver is now
moved to the job system and the interface isn't locked anymore.
Progress reporting isn't really accurate, but trying to make it more linear
could take more effort than it's worth. Also, changing the status in the
information line helps to make it clear that Blender isn't hung
and solving is still working nicely.
Main changes in code:
- libmv_solveReconstruction now accepts additional parameters:
* progress_update_callback - a function which gets called
from the solver algorithm to report progress back to Blender.
* callback_customdata - a user-defined context which is passed
to progress_update_callback so progress can be updated in the needed
Blender-side data structures.
These parameters are optional (a sketch of the callback wiring follows
after this list).
- Added a MovieTrackingStats structure which is placed in the MovieTracking
structure. It's supposed to be used for displaying information about
different operations in the clip editor (currently it's only the camera
solver, but it can easily be used for something else in the future).
This statistics structure is allocated only for the time the operator is
running and is not saved into the .blend file.
- The Clip Editor now displays the statistics stored in the MovieTrackingStats
structure, like it's done for rendering.
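A sketch of how this progress-callback wiring looks conceptually; the callback
typedef follows what the text describes, but the Blender-side structure and
function names here are hypothetical:

    /* Callback type used by the solver to report progress. */
    typedef void (*reconstruct_progress_update_cb)(void *customdata,
                                                   double progress,
                                                   const char *message);

    /* Hypothetical Blender-side job data updated from the callback. */
    typedef struct SolveJobData {
      float progress;
      char status_message[256];
    } SolveJobData;

    static void solve_progress_update_cb(void *customdata, double progress,
                                         const char *message) {
      SolveJobData *data = (SolveJobData *)customdata;
      data->progress = (float)progress;
      /* Copy the message into the statistics shown in the Clip Editor. */
    }

    /* The solver is then invoked with both the callback and its context:
     *   libmv_solveReconstruction(..., solve_progress_update_cb, &job_data);
     */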
- Add support for refining the camera's intrinsic parameters
during a solve. Currently, refining supports only the following
combinations of intrinsic parameters:
f
f, cx, cy
f, cx, cy, k1, k2
f, k1
f, k1, k2
This is not the same as autocalibration, since the user must
still make a reasonable initial guess about the focal length and
other parameters, whereas true autocalibration would eliminate
the need for the user to specify intrinsic parameters at all.
However, the solver works well with only rough guesses for the
focal length, so perhaps full autocalibration is not that
important (an illustrative sketch of these refinement combinations
follows after this list of changes).
Adding support for the last two combinations, (f, k1) and (f,
k1, k2) required changes to the library libmv depends on for
bundle adjustment, SSBA. These changes should get ported
upstream not just to libmv but to SSBA as well.
- Improved the region of convergence for bundle adjustment by
increasing the number of Levenberg-Marquardt iterations from 50
to 500. This way, the solver is able to crawl out of the bad
local minima it gets stuck in when changing from, for example,
bundling k1 and k2 to just k1 and resetting k2 to 0.
- Add several new region tracker implementations. A region tracker
is a libmv concept, which refers to tracking a template image
pattern through frames. The impact on end users is that tracking
should "just work better". I am reserving a more detailed
writeup, and maybe a paper, for later.
- Other libmv tweaks, such as detecting that a tracker is headed
outside of the image bounds.
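Purely for illustration, the supported refinement combinations can be thought
of as bitflags over the individual intrinsics; the identifiers below are made
up for this example and are not the actual libmv/Blender names:

    enum {
      REFINE_FOCAL_LENGTH    = (1 << 0),  /* f */
      REFINE_PRINCIPAL_POINT = (1 << 1),  /* cx, cy */
      REFINE_RADIAL_K1       = (1 << 2),  /* k1 */
      REFINE_RADIAL_K2       = (1 << 3),  /* k2 */
    };

    /* The five supported combinations from the list above:
     *   REFINE_FOCAL_LENGTH
     *   REFINE_FOCAL_LENGTH | REFINE_PRINCIPAL_POINT
     *   REFINE_FOCAL_LENGTH | REFINE_PRINCIPAL_POINT | REFINE_RADIAL_K1 | REFINE_RADIAL_K2
     *   REFINE_FOCAL_LENGTH | REFINE_RADIAL_K1
     *   REFINE_FOCAL_LENGTH | REFINE_RADIAL_K1 | REFINE_RADIAL_K2
     */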
This includes several changes made directly to the libmv extern
code rather than expecting to get those changes through normal libmv
channels, because I, the libmv BDFL, decided it was faster to work
on libmv directly in Blender, then later reverse-port the libmv
changes from Blender back into libmv trunk. The interesting part
is that I added a full Levenberg-Marquardt loop to the region
tracking code, which should lead to more stable solutions. I
also added a hacky implementation of "Efficient Second-Order
Minimization" for tracking, which works nicely. A more detailed
quantitative evaluation will follow.
Original patch by Keir, cleaned up a bit by myself.
===========================
Committing the camera tracking integration GSoC project into trunk.
This commit includes:
- Bundled version of the libmv library (with some changes against the official
repo; will re-sync with the libmv repo a bit later)
- New datatype ID called MovieClip which is optimized for working with movie
clips (both movie files and image sequences) and doing camera/motion
tracking operations.
- New editor called Clip Editor which is currently used for motion/tracking
stuff only, but which can easily be extended to work with masks too.
This editor supports:
* Loading movie files/image sequences
* Building proxies of different sizes for the loaded movie clip; also supports
building undistorted proxies to increase playback speed in
undistorted mode.
* Manual lens distortion calibration using the grid and grease pencil
* Supervised 2D tracking using two different algorithms, KLT and SAD
* A basic algorithm for feature detection
* Camera motion solving and scene orientation
- New constraints to "link" scene objects with motions solved from the clip:
* Follow Track (make an object follow the 2D motion of the track with the given
name, or parent the object to the reconstructed 3D position of the track)
* Camera Solver, to make the camera move in the same way as the reconstructed camera
This commit does NOT include changes from the tomato branch:
- New nodes (they'll be committed as a separate patch)
- Automatic image offset guessing for the image input node and image editor
(need to do more tests and gather more feedback)
- Code cleanup in libmv-capi. It's not a critical cleanup, just increases
readability and understandability of the code. Better to make this change
after Keir finishes his current patch.
More details about this project can be found on this page:
http://wiki.blender.org/index.php/User:Nazg-gul/GSoC-2011
Further development of small features will be done in trunk; bigger/experimental
features will first be implemented in the tomato branch.