Commit Graph

30 Commits

Author SHA1 Message Date
Sergey Sharybin
9f32e83175 Weighted tracks
Added a weight slider to tracks which defines
how much a particular track affects the final
reconstruction. This weight is, of course,
animatable.

Currently it affects the bundle adjustment step
only, which in most cases will work just fine.

The use case for this slider is to keep it set
to 1.0 while the track is good, but blend its
weight down to 0 when the tracker loses the
track. This prevents the camera from jumping.

A tutorial is to be done by Sebastian.
2013-10-26 13:22:38 +00:00
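
A minimal sketch of the weighted-residual idea from the commit above, assuming a Ceres-style automatic-differentiation functor; the names are illustrative and this is not the actual libmv bundle code:

    #include <cstdio>

    struct WeightedResidual {
      WeightedResidual(double observed_x, double observed_y, double weight)
          : observed_x(observed_x), observed_y(observed_y), weight(weight) {}

      // T would be double or a Ceres jet type under automatic differentiation.
      // A weight of 0.0 removes the track's influence on the solve entirely.
      template <typename T>
      bool operator()(const T* projected_xy, T* residuals) const {
        residuals[0] = T(weight) * (projected_xy[0] - T(observed_x));
        residuals[1] = T(weight) * (projected_xy[1] - T(observed_y));
        return true;
      }

      double observed_x, observed_y, weight;
    };

    int main() {
      WeightedResidual cost(10.0, 20.0, 0.5);
      const double projected[2] = {11.0, 22.0};
      double residuals[2];
      cost(projected, residuals);
      std::printf("%f %f\n", residuals[0], residuals[1]);  // prints 0.5 1.0
    }
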
Sergey Sharybin
eb69cb7de3 Get rid of Allow Fallback option
It was rather confusing from the user's point of
view and didn't give much improvement after the
new bundle adjuster was added.

In the future we might want to switch resection
to the PPnP algorithm, which could also be a
nice alternative to the fallback option.
2013-10-15 15:21:41 +00:00
Sergey Sharybin
2ddbb5d1e1 Fix for plane track jittering
Jittering was caused by the homography not being
estimated accurately enough.

Before this, only algebraic estimation was used,
which is indeed not so great. Now algebraic
estimation is followed by a refinement step using
the Ceres minimizer.

The code was already there since the keyframe
selection patch; made such estimation a generic
function in multiview/ and changed the estimation
API so all additional options are passed via an
options structure (the same way as it's done for
Ceres).

This includes changes to both homography and fundamental
estimation.

TODO:
- Need to document Ceres functors better.
- Need to support homogeneous coordinates (currently
  only euclidean coords are supported).
2013-09-30 09:35:04 +00:00
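
A rough sketch of the options-structure pattern this commit describes: an algebraic estimate followed by an optional refinement step, with all knobs carried in one struct. The names approximate the multiview API but should be treated as illustrative placeholders:

    #include <cstdio>

    // All estimation knobs travel in one structure, the same way Ceres uses
    // Solver::Options, so adding options doesn't change function signatures.
    struct EstimateHomographyOptions {
      bool use_refinement = true;       // run nonlinear refinement after the
                                        // algebraic (DLT-style) estimate
      int max_num_iterations = 50;      // iteration cap for the refinement
      double expected_average_symmetric_distance = 1e-16;  // stop threshold
    };

    struct Mat3 { double m[9]; };

    // Placeholders standing in for the real algebraic and Ceres-based steps.
    static Mat3 AlgebraicHomography() {
      return Mat3{{1, 0, 0, 0, 1, 0, 0, 0, 1}};
    }
    static Mat3 RefineHomography(const Mat3& H,
                                 const EstimateHomographyOptions& options) {
      (void)options;  // the real step minimizes symmetric geometric error
      return H;
    }

    Mat3 EstimateHomography2DFromCorrespondences(
        const EstimateHomographyOptions& options) {
      Mat3 H = AlgebraicHomography();
      if (options.use_refinement) {
        H = RefineHomography(H, options);
      }
      return H;
    }

    int main() {
      EstimateHomographyOptions options;
      options.max_num_iterations = 100;
      Mat3 H = EstimateHomography2DFromCorrespondences(options);
      std::printf("H[0][0] = %f\n", H.m[0]);
    }
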
Sergey Sharybin
24ce60cfe4 Merge plane track feature from tomato branch
This commit includes all the changes made for the plane tracker
in the tomato branch.

Movie clip editor changes:

- An artist can create a plane track out of multiple point
  tracks which belong to the same plane (the minimum number
  of point tracks is 4, the maximum is not limited).

  When a new plane track is added, it gets "tracked"
  across all point tracks, which makes it stick to the same
  plane the point tracks belong to.

- After a plane track is added, it needs to be manually
  adjusted so it covers the feature one wants to mask/replace.

  General transform tools (G, R, S) or sliding corners with
  the mouse can be used for this. The plane corner which
  corresponds to the bottom-left image corner has X/Y axes
  drawn on it (red for the X axis, green for Y).

- Re-adjusting the plane corners makes the plane get
  "re-tracked" for the frame range between the current frame
  and the next and previous keyframes.

- Keyframes can be removed from the plane using the Shift-X
  (Marker Delete) operator. However, currently a manual
  re-adjustment or "re-track" trigger is needed.

Compositor changes:

- Added new node called Plane Track Deform.

- The user selects which plane track to use (for this they
  need to select the movie clip datablock, object and track
  names).

- The node gets an image input, which is to be warped into
  the plane.

- The node outputs:
  * The input image warped into the plane.
  * The plane rasterized to a mask.

Masking changes:

- Mask points can be parented to a plane track, which
  makes the points deform as if they belong to the
  tracked plane.

Some video tutorials are available:
- Coder video: http://www.youtube.com/watch?v=vISEwqNHqe4
- Artist video: https://vimeo.com/71727578

This is my and Keir's holiday code project :)
2013-08-16 09:46:30 +00:00
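
For reference, "warping into the plane" boils down to applying a 3x3 homography (estimated from the four tracked corners) to image coordinates. A tiny self-contained illustration of that mapping, not Blender's compositor code:

    #include <cstdio>

    struct Vec2 { double x, y; };

    // H is a row-major 3x3 homography; the point is mapped in homogeneous
    // coordinates and then de-homogenized.
    Vec2 ApplyHomography(const double H[9], Vec2 p) {
      const double x = H[0] * p.x + H[1] * p.y + H[2];
      const double y = H[3] * p.x + H[4] * p.y + H[5];
      const double w = H[6] * p.x + H[7] * p.y + H[8];
      return Vec2{x / w, y / w};
    }

    int main() {
      // Identity homography: points map to themselves.
      const double H[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};
      Vec2 p = ApplyHomography(H, Vec2{0.25, 0.75});
      std::printf("%f %f\n", p.x, p.y);
    }
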
Joseph Mansfield
c15ae082bb Code cleanup: libmv C API
Clean up inconsistencies in the libmv C API:
- All type identifiers are libmv_TypeName
- All function identifiers libmv_functionName
- Prefer libmv_nounVerb function names (e.g. libmv_featuresDestroy)
- Match Blender code formatting rather than Google
- Spelling corrections

Code review: https://codereview.appspot.com/11494044/
2013-07-31 13:48:12 +00:00
Sergey Sharybin
cf5e979fb4 Motion tracking: automatic keyframe selection
Implements an automatic keyframe selection algorithm which uses
a couple of approaches to find the best keyframe candidates:

- First, a slightly modified Pollefeys criterion is used, which
  limits the correspondence ratio to between 80% and 100%. This
  allows rejecting a keyframe candidate early, without doing
  heavy math, in cases where there are not many features in
  common with the first keyframe.

- The second step is based on the Geometric Robust Information
  Criterion (aka GRIC), which checks whether the feature motion
  between candidate keyframes is better described by a homography
  or a fundamental matrix.

  To be a good keyframe candidate, the fundamental matrix needs
  to describe the motion better than the homography (in this case
  F-GRIC will be smaller than H-GRIC).

  These two criteria are well described in this paper:
  http://www.cs.ait.ac.th/~mdailey/papers/Tahir-KeyFrame.pdf

- The final step is based on estimating the reconstruction error
  of a full-scene solution using the candidate keyframes. This
  part is based on the following paper:

  ftp://ftp.tnt.uni-hannover.de/pub/papers/2004/ECCV2004-TTHBAW.pdf

  This step requires a reconstruction using the candidate keyframes
  and obtaining the covariance matrix of the 3D point positions.
  Reconstruction is done in a straightforward way using the other
  simple pipeline routines, and for covariance estimation the
  pseudo-inverse of the Hessian is used, which in this case is
  (J^T * J)+, where + denotes the pseudo-inverse.

  The Jacobian matrix is estimated using the Ceres Evaluate API.

  It is also crucial to get rid of the possible gauge ambiguity,
  which in our case is done by zeroing 7 eigenvalues (the number
  of gauge freedoms) in the pseudo-inverse.

  There's still room for improving and optimizing the code,
  but we need some point to start from anyway :)

  Thanks to Keir Mierle and Sameer Agarwal, who assisted a lot
  in making this feature work.
2013-05-30 09:03:49 +00:00
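
A hedged sketch of the H-GRIC vs. F-GRIC comparison, following Torr's commonly cited formulation; the exact constants and robust function used in libmv may differ:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // r: dimension of the data (4 for a pair of 2D correspondences),
    // d: dimension of the model manifold (2 for H, 3 for F),
    // k: number of model parameters (8 for H, 7 for F).
    double GRIC(const std::vector<double>& squared_errors,
                double sigma2, int r, int d, int k) {
      const double n = static_cast<double>(squared_errors.size());
      const double lambda1 = std::log(static_cast<double>(r));
      const double lambda2 = std::log(static_cast<double>(r) * n);
      const double lambda3 = 2.0;
      double gric = 0.0;
      for (double e2 : squared_errors) {
        gric += std::min(e2 / sigma2, lambda3 * (r - d));
      }
      return gric + lambda1 * d * n + lambda2 * k;
    }

    int main() {
      // Per-correspondence squared residuals of each model (made-up numbers).
      std::vector<double> h_errors = {1.2, 0.8, 2.5, 1.9};
      std::vector<double> f_errors = {0.3, 0.2, 0.6, 0.4};
      double gric_h = GRIC(h_errors, 1.0, 4, 2, 8);
      double gric_f = GRIC(f_errors, 1.0, 4, 3, 7);
      std::printf("good keyframe pair: %s\n", gric_f < gric_h ? "yes" : "no");
    }
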
Sergey Sharybin
d4c6ac9a60 Cleanup and small improvements to libmv
- Added const modifiers where they make sense and
  help keep the code safe.
- Reshuffled arguments to match the <inputs>,<outputs>
  parameter ordering convention.
- Pass values to ApplyRadialDistortionCameraIntrinsics
  by constant reference.
  This saves lots of CPU ticks by not copying relatively
  heavy jet objects into this function when running
  bundle adjustment.
2013-05-13 14:39:06 +00:00
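
A toy illustration of the by-const-reference point: when T is a Ceres jet (a value plus a block of partial derivatives), passing by value copies the whole derivative block on every call, while a const reference does not. This is not the real ApplyRadialDistortionCameraIntrinsics body, just the pattern:

    #include <cstdio>

    template <typename T>
    void ApplyRadialDistortion(const T& k1, const T& x, const T& y,
                               T* distorted_x, T* distorted_y) {
      // k1/x/y arrive by const reference, so heavy jet objects aren't copied.
      const T r2 = x * x + y * y;
      const T distortion = T(1) + k1 * r2;
      *distorted_x = distortion * x;
      *distorted_y = distortion * y;
    }

    int main() {
      double dx = 0.0, dy = 0.0;
      ApplyRadialDistortion(0.05, 0.4, 0.3, &dx, &dy);
      std::printf("%f %f\n", dx, dy);
    }
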
Sergey Sharybin
beb73831f6 Documentation for functions inside tracking.c
Additional changes:

- Cleaned up sources to reduce the mess in some
  big functions.
- Removed an unused function from the libmv C API.
- Made function naming more consistent.
- Use bool for internal stuff in tracking.c.

There should be no functional changes :)
2013-05-12 16:04:08 +00:00
Sergey Sharybin
6dc4ea34e4 Multi-threaded frame calculation for movie clip proxies
This commit implements multi-threaded calculation of frames
when building proxies. Both scaling and undistortion steps
are now threaded.

Frames and proxy resolutions are still handled one by one,
saving files after every single step. So if the HDD is not
so fast, this commit might not bring much benefit.

Internal changes:

- Added IMB_scaleImBuf_threaded, which scales a given image
  buffer in multiple threads using bilinear filtering.

- libmv's camera intrinsics now have a SetThreads() method
  which is used to specify how many OpenMP threads to use
  for buffer distortion/undistortion.

  And yeah, this code uses OpenMP for threading.

- Reshuffled the libmv-capi calls a bit and added the function
  BKE_tracking_distortion_set_threads to specify the number
  of threads used by the intrinsics.
2013-03-15 11:59:46 +00:00
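
A rough sketch, not the Blender code itself, of the kind of OpenMP loop described above: scanlines are processed in parallel and the thread count is set up front, similar in spirit to what a SetThreads()-style call would do:

    #include <omp.h>
    #include <cstdio>
    #include <vector>

    struct Pixel { float r, g, b, a; };

    // Placeholder for the per-pixel undistortion lookup driven by the camera
    // intrinsics; identity here just to keep the sketch self-contained.
    static Pixel UndistortPixel(const std::vector<Pixel>& src, int width,
                                int x, int y) {
      return src[y * width + x];
    }

    void UndistortBufferThreaded(const std::vector<Pixel>& src,
                                 std::vector<Pixel>* dst,
                                 int width, int height, int num_threads) {
      omp_set_num_threads(num_threads);  // thread count chosen by the caller
      #pragma omp parallel for
      for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
          (*dst)[y * width + x] = UndistortPixel(src, width, x, y);
        }
      }
    }

    int main() {
      const int width = 64, height = 64;
      std::vector<Pixel> src(width * height), dst(width * height);
      UndistortBufferThreaded(src, &dst, width, height, 4);
      std::printf("done\n");
    }
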
Sergey Sharybin
d68e766c8c These lines are also not so useful for now. 2013-02-28 14:25:33 +00:00
Sergey Sharybin
52f34f017d Modal (aka tripod) solver rework
Several major things are done in this commit:

- First of all, the logic of the modal solver was changed.
  We no longer rely on the minimizer alone to guess the
  rotation for a frame; instead, an analytical rotation
  computation between point clouds is used to obtain the
  initial rotation.

  This rotation is then refined using the Ceres minimizer,
  and now, instead of minimizing the average distance
  between the points of the two clouds, the reprojection
  error of the point cloud onto the frame is minimized.

  This gives quite a bit of precision improvement.

- The second big improvement is using bundle adjustment
  on the result of the first step, where we're only
  estimating rotation between neighboring images and
  reprojecting markers.

  This averages the error across the image sequence,
  avoiding error accumulation. It also tweaks the bundles
  themselves a bit for a better match.

- And the last big improvement is support for refinement
  of camera intrinsics.

  This allows significantly improving the solution for
  real-life footage, and results after such refinement
  are much more usable than they were before.

Thanks to Keir for the help and code review.
2013-02-28 14:24:42 +00:00
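
A loose sketch of the reprojection residual the reworked modal solver minimizes: only a rotation is applied to the bundle before projection, since a tripod camera does not translate. The real code refines the rotation with Ceres; the names and the simple pinhole projection here are illustrative:

    #include <cstdio>

    struct Vec3 { double x, y, z; };
    struct Vec2 { double x, y; };

    // R is a row-major 3x3 rotation matrix; no translation is applied.
    Vec2 ProjectWithRotationOnly(const double R[9], Vec3 X, double focal) {
      const Vec3 Xc = {R[0] * X.x + R[1] * X.y + R[2] * X.z,
                       R[3] * X.x + R[4] * X.y + R[5] * X.z,
                       R[6] * X.x + R[7] * X.y + R[8] * X.z};
      return Vec2{focal * Xc.x / Xc.z, focal * Xc.y / Xc.z};
    }

    // Residual between the reprojected bundle and the observed marker.
    Vec2 ReprojectionResidual(const double R[9], Vec3 X, Vec2 marker,
                              double focal) {
      const Vec2 projected = ProjectWithRotationOnly(R, X, focal);
      return Vec2{projected.x - marker.x, projected.y - marker.y};
    }

    int main() {
      const double R[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};  // identity rotation
      Vec2 r = ReprojectionResidual(R, Vec3{0.1, 0.2, 1.0},
                                    Vec2{0.1, 0.2}, 1.0);
      std::printf("residual: %f %f\n", r.x, r.y);
    }
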
Sergey Sharybin
ff6ca7dcd0 code cleanup: remove unused and unsupported functions from libmv-capi
---
svn merge -r52855:52856 ^/branches/soc-2011-tomato
2013-02-25 09:38:59 +00:00
Sergey Sharybin
da2dc58773 code cleanup: remove unused and unsupported functions from libmv-capi 2012-12-10 16:38:39 +00:00
Sergey Sharybin
5137b5a146 Camera tracking: libmv distortion API now also uses a camera intrinsics structure
instead of passing all the parameters to every function.

Makes it much easier to tweak the distortion model.
2012-12-10 16:38:28 +00:00
Sergey Sharybin
ec870eb214 code cleanup: camera tracking
- Moved keyframes and refinement flags into the reconstruction options structure
- Moved distortion coefficients and other camera intrinsics into their own structure
- Cleaned up the reconstruction functions in the libmv C API
2012-12-10 16:38:13 +00:00
Sergey Sharybin
18326d852b Merging r50625 through r51896 from trunk into soc-2011-tomato
Merging just in case we'll want to develop some experimental stuff
2012-11-05 19:42:27 +00:00
Sergey Sharybin
552887251f Masking support for motion tracks
Added an option to use a Grease Pencil datablock as a mask for the
pattern when doing motion tracking. The option can be found in the
Tracking Settings panel.

All strokes are rasterized separately from each other and every
stroke is treated as a closed spline.

Also added an option to apply the mask to the track preview, which is
situated just after the B/B/W channel button under the track preview.
2012-06-12 11:13:53 +00:00
Sergey Sharybin
6ab087ff99 Scale search area when doing planar tracking
Helps keep features tracked when a large scale change happens,
without the need to manually re-adjust the search area.

Currently using a factor of the pattern's bounding box scale, but
this could probably be done in a more accurate way?
2012-06-11 11:40:54 +00:00
Sergey Sharybin
25bb441301 Planar tracking support for motion tracking
===========================================

Major list of changes done in tomato branch:

- Add a planar tracking implementation to libmv
  This adds a new planar tracking implementation to libmv. The
  tracker is based on Ceres[1], the new nonlinear minimizer that
  Sameer and I released from Google as open source. Since
  the motion model is more involved, the interface is
  different than the RegionTracker interface used previously
  in Blender.

  The start of a C API in libmv-capi.{cpp,h} is also included.

- Migrate from pat_{min,max} for markers to 4 corners representation

  Convert markers in the movie clip editor / 2D tracker from using
  pat_min and pat_max notation to a more general, 4-corner
  representation.

  There is still considerable porting work to do; in particular,
  sliding from the preview widget does not work correctly for
  rotated markers.

  All other areas should be ported to the new representation:

  * Added support for sliding individual corners. LMB slide + Ctrl
    scales the whole pattern.
  * S scales the whole marker, S-S scales the pattern only.
  * Added support for marker rotation, which currently rotates
    only patterns around their centers, or all markers around the
    median.

    Rotation or other non-translation/scaling transformations of the
    search area don't make sense.

  * The Track Preview widget displays the transformed pattern which
    libmv actually operates on.

- "Efficient Second-order Minimization" for the planar tracker

  This implements the "Efficient Second-order Minimization"
  scheme, as supported by the existing translation tracker.
  This increases the amount of per-iteration work, but
  decreases the number of iterations required to converge and
  also increases the size of the basin of attraction for the
  optimization.

- Remove the use of the legacy RegionTracker API from Blender,
  and replace it with the new TrackRegion API. This also
  adds several features to the planar tracker in libmv:

  * Do a brute-force initialization of tracking similar to "Hybrid"
    mode in the stable release, but using all floats. This is slower
    but more accurate. It is still necessary to evaluate if the
    performance loss is worth it. In particular, this change is
    necessary to support high bit depth imagery.

  * Add support for masks over the search window. This is a step
    towards supporting user-defined tracker masks. The tracker masks
    will make it easy for users to make a mask for e.g. a ball.

    Not exposed in the interface yet.

  * Add Pearson product-moment correlation coefficient checking (aka
    "Correlation" in the UI). This causes tracking failure if the
    tracked patch is not linearly related to the template.

  * Add support for warping a few points in addition to the supplied
    points. This is useful because the tracking code deliberately
    does not expose the underlying warp representation. Instead,
    warps are specified in an aparametric way via the correspondences.

- Replace the old-style tracker configuration panel with the
  new planar tracking panel. From a user's perspective, this means:

  * The old "tracking algorithm" picker is gone. There is only 1
    algorithm now. We may revisit this later, but I would much
    prefer to have only 1 algorithm. So far no optimization work
    has been done so the speed is not there yet.

  * There is now a dropdown to select the motion model. Choices:

        * Translation
        * Translation, rotation
        * Translation, scale
        * Translation, rotation, scale
        * Affine
        * Perspective

  * The old "Hybrid" mode is gone; instead there is a toggle to
    enable or disable translation-only tracker initialization. This
    is the equivalent of the hyrbid mode before, but rewritten to work
    with the new planar tracking modes.

  * The pyramid levels setting is gone. At a future date, the planar
    tracker will decide to use pyramids or not automatically. The
    pyramid setting was ultimately a mistake; with the brute force
    initialization it is unnecessary.

- Add light-normalized tracking

  Added the ability to normalize patterns by their average value while
  tracking, to make them invariant to global illumination changes.

Additional details can be found at the wiki page [2]

  [1] http://code.google.com/p/ceres-solver
  [2] http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Motion_Tracker
2012-06-10 15:28:19 +00:00
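
A hedged sketch of the "Correlation" failure check mentioned above: compute the Pearson product-moment correlation coefficient between the template and the tracked patch, and treat the track as failed when it drops below the user threshold. Function and variable names are illustrative, not the libmv API:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Pearson correlation between two equally sized intensity patches.
    double PearsonCorrelation(const std::vector<double>& a,
                              const std::vector<double>& b) {
      const int n = static_cast<int>(a.size());
      double mean_a = 0, mean_b = 0;
      for (int i = 0; i < n; i++) { mean_a += a[i]; mean_b += b[i]; }
      mean_a /= n; mean_b /= n;
      double cov = 0, var_a = 0, var_b = 0;
      for (int i = 0; i < n; i++) {
        cov   += (a[i] - mean_a) * (b[i] - mean_b);
        var_a += (a[i] - mean_a) * (a[i] - mean_a);
        var_b += (b[i] - mean_b) * (b[i] - mean_b);
      }
      return cov / std::sqrt(var_a * var_b);
    }

    bool TrackSucceeded(const std::vector<double>& template_patch,
                        const std::vector<double>& tracked_patch,
                        double minimum_correlation) {
      return PearsonCorrelation(template_patch, tracked_patch) >=
             minimum_correlation;
    }

    int main() {
      std::vector<double> tpl = {0.1, 0.5, 0.9, 0.4};
      std::vector<double> trk = {0.12, 0.48, 0.91, 0.41};
      std::printf("track ok: %d\n", TrackSucceeded(tpl, trk, 0.75));
    }
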
Sergey Sharybin
51a4188105 Camera tracking: support of tripod motion solving
Expose an option in the interface to use the modal solver, which
currently supports only tripod motion.

This solver requires at least two tracks to reconstruct motion.
Using more tracks doesn't improve the solution in general; it just
adds instability to the solution and slows things down a lot.

Refinement of camera intrinsics was disabled because it not only
refines camera intrinsics but also adjusts the camera position,
which isn't necessary here.

To use this solver just activate the "Tripod Motion" checkbox in
the solver panel.

Merged from tomato: svn merge ^/branches/soc-2011-tomato -r45622:45624 -r46036:46037

P.S. Still quite experimental; requires more checking and probably
tweaks to prevent camera jumps when tracks appear/disappear
from the screen.
2012-04-28 14:54:45 +00:00
Sergey Sharybin
f9d9b4635d Camera tracking: support of tripod motion solving
Expose an option in the interface to use the modal solver, which
currently supports only tripod motion.

This solver requires at least two tracks to reconstruct motion.
Using more tracks doesn't improve the solution in general; it just
adds instability to the solution and slows things down a lot.

Refinement of camera intrinsics is supported by this solver.

To use this solver just activate the "Tripod Motion" checkbox in
the solver panel.
2012-04-14 12:02:47 +00:00
Sergey Sharybin
e6c45cc1de libmv: bundle new upstream version from own branch with rigid registration implementation
Currently not used in Blender code, but needed for some current work.
2012-04-12 11:37:51 +00:00
Sergey Sharybin
81e3db364d Camera tracking refactoring:
- Deduplicate pattern sampling used in the ESM and lmicklt trackers and
  move SamplePattern to image/sample.h
- Move the computation of the Pearson product-moment correlation into its
  own function in the new file image/correlation.h, so all trackers can
  use it to check the final correlation.
- Remove the SAD tracker. It's almost the same as the brute tracker, with
  only two differences:
  1. It does a brute search of the affine transformation, which in some
     cases helps to track rotating features.
  2. It didn't use the common tracker API, which probably gave some speed
     advantage, but led to a real headache when using it together with
     other trackers, leading to duplicated code on the Blender side.
- Switch Blender to use the brute tracker instead of the SAD tracker,
  which makes the tracking source code much simpler to follow.
2012-03-30 10:37:39 +00:00
Sergey Sharybin
d568f56f18 Merging the remaining part of the hybrid tracker, which adds a correlation threshold
Keir's comment:
  Add support for detecting tracking failure in the ESM tracker component of
  libmv. Since both KLT and Hybrid rely on ESM underneath, KLT and Hybrid now
  have a minimum correlation setting to match. With this fix, track failures
  should get detected quicker, with the issue that sometimes the tracker will
  give up too easily. That is fixable by reducing the required correlation (in
  the track properties).

Command used for merge: svn merge -r 42396:42397 -r 42399:42400 ^/branches/soc-2011-tomato
2011-12-07 14:54:03 +00:00
Sergey Sharybin
d261623800 Camera tracking: merge hybrid tracker from tomato branch
Comment from Keir's commit:

Add a new hybrid region tracker for motion tracking to libmv, and
add it as an option (under "Hybrid") in the tracking settings. The
region tracker is a combination of brute force tracking for coarse
alignment, then refinement with the ESM/KLT algorithm already in
libmv that gives excellent subpixel precision (typically 1/50'th
of a pixel)

This also adds a new "brute force" region tracker which does a
brute force search through every pixel position in the destination
for the pattern in the first frame. It leverages SSE if available,
similar to the SAD tracker, to do this quickly. Currently it does
some unnecessary conversions to/from floating point that will get
fixed later.

The hybrid tracker glues the two trackers (brute & ESM) together
to get an overall better tracker. The algorithm is simple:

1. Track from frame 1 to frame 2 with the brute force tracker.
   This tries every possible pixel position for the pattern from
   frame 1 in frame 2. The position with the smallest
   sum-of-absolute-differences is chosen. By definition, this
   position is only accurate up to 1 pixel or so.
2. Using the result from 1, initialize a track with ESM. This does
   a least-squares fit with subpixel precision.
3. If the ESM shift was more than 2 pixels, report failure.
4. If the ESM track shifted less than 2 pixels, then the track is
   good and we're done. The rationale here is that if the
   refinement stage shifts more than 1 pixel, then the brute force
   result likely found some random position that's not a good fit.

svn command used: svn merge -r 42375:42376 -r 42377:42379 ^/branches/soc-2011-tomato
2011-12-04 13:26:11 +00:00
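
A pseudocode-level sketch of steps 1-4 above; BruteForceTrack and EsmRefine are stand-ins for the real brute-force and ESM trackers, and the 2-pixel sanity check matches the description in the commit:

    #include <cmath>
    #include <cstdio>

    struct Vec2 { double x, y; };

    // Stand-ins for the pixel-accurate brute-force search and the subpixel
    // ESM refinement.
    Vec2 BruteForceTrack(Vec2 guess) { return guess; }
    Vec2 EsmRefine(Vec2 initial) { return Vec2{initial.x + 0.3, initial.y - 0.2}; }

    bool HybridTrack(Vec2 previous_position, Vec2* result) {
      // 1. Coarse alignment: brute-force search, accurate to about a pixel.
      Vec2 coarse = BruteForceTrack(previous_position);
      // 2. Subpixel refinement starting from the brute-force result.
      Vec2 refined = EsmRefine(coarse);
      // 3/4. If refinement moved the estimate by more than ~2 pixels, the
      // coarse match was probably a bad fit, so report failure.
      const double shift = std::hypot(refined.x - coarse.x,
                                      refined.y - coarse.y);
      if (shift > 2.0) {
        return false;
      }
      *result = refined;
      return true;
    }

    int main() {
      Vec2 result;
      const bool ok = HybridTrack(Vec2{100.0, 50.0}, &result);
      std::printf("ok=%d position=(%f, %f)\n", ok, result.x, result.y);
    }
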
Sergey Sharybin
0668ad2d55 Camera tracking: SAD tracker now supports patterns of any size
(rectangular patterns are enlarged to square, like it's done for KLT)
2011-11-28 21:48:49 +00:00
Sergey Sharybin
9f3c921957 Camera tracking: moved camera solver into its own job
In some cases solving can take a while (especially when refining is used)
and keeping interface locked is a bit annoying. Now camera solver is moved
to job system and interface isn't locking.

Reporting progress isn't really accurate, but trying to make it more linear
can lead to spending more effort on it than having benefit. Also, changing
status in the information line helps to understand that blender isn't hang
up and solving is till working nicely.

Main changes in code:
- libmv_solveReconstruction now accepts additional parameters:
  * progress_update_callback - a function which is getting called
    from solver algorithm to report progress back to Blender.
  * callback_customdata - a user-defined context which is passing
    to progress_update_callback so progress can be updated in needed
    blender-side data structures.

  This parameters are optional.

- Added structure MovieTrackingStats which is placed in MovieTracking
  structure. It's supposed to be used for displaying information about
  different operations (currently it's only camera solver, but can be
  easily used for something else in the future) in clip editor.
  This statistics structure is getting allocated for time operator is
  working and not saving into .blend file.

- Clip Editor now displays statistics stored in MovieTrackingStats structure
  like it's done for rendering.
2011-11-28 13:49:42 +00:00
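
A simplified sketch of the callback-based progress reporting described above: the solver invokes a user-supplied function with an opaque context pointer so the Blender side can update its own data. The names mirror the commit message, but the signatures are illustrative:

    #include <cstdio>

    typedef void (*reconstruct_progress_update_cb)(void* customdata,
                                                   double progress,
                                                   const char* message);

    // The callback and context are optional; pass nullptr to skip reporting.
    void SolveReconstruction(reconstruct_progress_update_cb progress_update_callback,
                             void* callback_customdata) {
      for (int step = 0; step <= 4; step++) {
        // ... solver work for this step would happen here ...
        if (progress_update_callback) {
          progress_update_callback(callback_customdata, step / 4.0,
                                   "Solving camera");
        }
      }
    }

    // Stand-in for the Blender-side job/statistics structure the callback fills.
    struct SolveJob { double progress; };

    static void UpdateProgress(void* customdata, double progress,
                               const char* message) {
      SolveJob* job = static_cast<SolveJob*>(customdata);
      job->progress = progress;
      std::printf("%s: %.0f%%\n", message, progress * 100.0);
    }

    int main() {
      SolveJob job = {0.0};
      SolveReconstruction(UpdateProgress, &job);
    }
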
Sergey Sharybin
0f82384fd0 Camera tracking: code cleanup 2011-11-14 06:41:32 +00:00
Sergey Sharybin
6fbc4186fd Assorted camera tracker improvements
- Add support for refining the camera's intrinsic parameters
  during a solve. Currently, refining supports only the following
  combinations of intrinsic parameters:

    f
    f, cx, cy
    f, cx, cy, k1, k2
    f, k1
    f, k1, k2

  This is not the same as autocalibration, since the user must
  still make a reasonable initial guess about the focal length and
  other parameters, whereas true autocalibration would eliminate
  the need for the user to specify intrinsic parameters at all.

  However, the solver works well with only rough guesses for the
  focal length, so perhaps full autocalibration is not that
  important.

  Adding support for the last two combinations, (f, k1) and (f,
  k1, k2) required changes to the library libmv depends on for
  bundle adjustment, SSBA. These changes should get ported
  upstream not just to libmv but to SSBA as well.

- Improved the region of convergence for bundle adjustment by
  increasing the number of Levenberg-Marquardt iterations from 50
  to 500. This way, the solver is able to crawl out of the bad
  local minima it gets stuck in when changing from, for example,
  bundling k1 and k2 to just k1 and resetting k2 to 0.

- Add several new region tracker implementations. A region tracker
  is a libmv concept, which refers to tracking a template image
  pattern through frames. The impact to end users is that tracking
  should "just work better". I am reserving a more detailed
  writeup, and maybe a paper, for later.

- Other libmv tweaks, such as detecting that a tracker is headed
  outside of the image bounds.

This includes several changes made directly to the libmv extern
code rather than expecting to get those changes through normal libmv
channels, because I, the libmv BDFL, decided it was faster to work
on libmv directly in Blender, then later reverse-port the libmv
changes from Blender back into libmv trunk. The interesting part
is that I added a full Levenberg-Marquardt loop to the region
tracking code, which should lead to more stable solutions. I
also added a hacky implementation of "Efficient Second-Order
Minimization" for tracking, which works nicely. A more detailed
quantitative evaluation will follow.

Original patch by Keir, cleaned a bit by myself.
2011-11-14 06:41:23 +00:00
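
One way the supported intrinsics-refinement combinations listed above could be expressed as bit flags; the constant names are made up for this sketch and are not the exact Blender/libmv identifiers:

    #include <cstdio>

    enum {
      REFINE_FOCAL_LENGTH         = 1 << 0,
      REFINE_PRINCIPAL_POINT      = 1 << 1,  // cx, cy
      REFINE_RADIAL_DISTORTION_K1 = 1 << 2,
      REFINE_RADIAL_DISTORTION_K2 = 1 << 3,
    };

    // Only the five combinations listed in the commit message are accepted.
    bool IsSupportedRefineCombination(int flags) {
      switch (flags) {
        case REFINE_FOCAL_LENGTH:                                      // f
        case REFINE_FOCAL_LENGTH | REFINE_PRINCIPAL_POINT:             // f, cx, cy
        case REFINE_FOCAL_LENGTH | REFINE_PRINCIPAL_POINT |
             REFINE_RADIAL_DISTORTION_K1 |
             REFINE_RADIAL_DISTORTION_K2:                              // f, cx, cy, k1, k2
        case REFINE_FOCAL_LENGTH | REFINE_RADIAL_DISTORTION_K1:        // f, k1
        case REFINE_FOCAL_LENGTH | REFINE_RADIAL_DISTORTION_K1 |
             REFINE_RADIAL_DISTORTION_K2:                              // f, k1, k2
          return true;
        default:
          return false;
      }
    }

    int main() {
      std::printf("%d\n", IsSupportedRefineCombination(
          REFINE_FOCAL_LENGTH | REFINE_RADIAL_DISTORTION_K1));  // prints 1
    }
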
Sergey Sharybin
27d42c63d9 Camera tracking integration
===========================

Committing the camera tracking integration GSoC project into trunk.

This commit includes:

- Bundled version of the libmv library (with some changes against the
  official repo; re-sync with the libmv repo a bit later)
- New ID datatype called MovieClip which is optimized to work with movie
  clips (both movie files and image sequences) and for doing camera/motion
  tracking operations.
- New editor called Clip Editor which is currently used for motion/tracking
  stuff only, but which can easily be extended to work with masks too.

  This editor supports:
  * Loading movie files/image sequences
  * Building proxies of different sizes for the loaded movie clip; also
    supports building undistorted proxies to increase playback speed in
    undistorted mode.
  * Manual lens distortion calibration using a grid and grease pencil
  * Supervised 2D tracking using two different algorithms, KLT and SAD
  * A basic algorithm for feature detection
  * Camera motion solving, scene orientation

- New constraints to "link" scene objects with solved motions from clip:

  * Follow Track (make object follow 2D motion of track with given name
    or parent object to reconstructed 3D position of track)
  * Camera Solver to make camera moving in the same way as reconstructed camera

This commit does NOT include changes from the tomato branch:

- New nodes (they'll be committed as a separate patch)
- Automatic image offset guessing for the image input node and image editor
  (needs more testing and feedback gathering)
- Code cleanup in libmv-capi. It's not a critical cleanup, just increasing
  the readability and understandability of the code. Better to make this
  change when Keir finishes his current patch.

More details about this project can be found on this page:
    http://wiki.blender.org/index.php/User:Nazg-gul/GSoC-2011

Further development of small features will be done in trunk; bigger/experimental
features will first be implemented in the tomato branch.
2011-11-07 12:55:18 +00:00