blender/extern/libmv/libmv-capi.h

/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2011 Blender Foundation.
* All rights reserved.
*
* Contributor(s): Blender Foundation,
* Sergey Sharybin
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef LIBMV_C_API_H
#define LIBMV_C_API_H
#ifdef __cplusplus
extern "C" {
#endif
struct libmv_Tracks;
struct libmv_Reconstruction;
struct libmv_Features;
struct libmv_CameraIntrinsics;
/* Logging */
void libmv_initLogging(const char *argv0);
void libmv_startDebugLogging(void);
void libmv_setLoggingVerbosity(int verbosity);
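
/* Illustrative usage sketch (not part of the original header), assuming logging
 * is configured once at application start-up before any tracking calls.
 */
#if 0
static void example_setup_logging(const char *argv0)
{
    libmv_initLogging(argv0);      /* typically argv[0] of the host application */
    libmv_setLoggingVerbosity(0);  /* higher values are assumed to print more detail */
    /* libmv_startDebugLogging(); */  /* uncomment to also enable debug output */
}
#endif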
/* TrackRegion (new planar tracker) */
struct libmv_trackRegionOptions {
    int motion_model;
    int num_iterations;
    int use_brute;
    int use_normalization;
    double minimum_correlation;
    double sigma;
    float *image1_mask;
};
struct libmv_trackRegionResult {
    int termination;
    const char *termination_reason;
    double correlation;
};
int libmv_trackRegion(const struct libmv_trackRegionOptions *options,
        const float *image1, int image1_width, int image1_height,
        const float *image2, int image2_width, int image2_height,
        const double *x1, const double *y1,
        struct libmv_trackRegionResult *result,
        double *x2, double *y2);
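
/* Illustrative usage sketch (not part of the original header): tracking one
 * marker from image1 to image2. Option values are placeholders, and the exact
 * number/ordering of points in the x1/y1 and x2/y2 arrays is an assumption
 * defined by the implementation in libmv-capi.cpp, not by this header.
 */
#if 0
static int example_track_marker(const float *image1, const float *image2,
                                int width, int height,
                                const double *x1, const double *y1,
                                double *x2, double *y2)
{
    struct libmv_trackRegionOptions options = {0};
    struct libmv_trackRegionResult result;

    options.motion_model = 0;            /* assumed: 0 selects pure translation */
    options.num_iterations = 50;
    options.use_brute = 1;               /* brute-force initialization pass */
    options.use_normalization = 0;
    options.minimum_correlation = 0.75;  /* fail the track below this correlation */
    options.sigma = 0.9;
    options.image1_mask = NULL;          /* no mask over the pattern */

    /* result.termination / result.correlation describe how tracking ended. */
    return libmv_trackRegion(&options, image1, width, height,
                             image2, width, height,
                             x1, y1, &result, x2, y2);
}
#endif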
void libmv_samplePlanarPatch(const float *image, int width, int height,
        int channels, const double *xs, const double *ys,
        int num_samples_x, int num_samples_y,
        const float *mask, float *patch,
        double *warped_position_x, double *warped_position_y);
/* Tracks */
struct libmv_Tracks *libmv_tracksNew(void);
void libmv_tracksInsert(struct libmv_Tracks *libmv_tracks, int image, int track, double x, double y);
void libmv_tracksDestroy(struct libmv_Tracks *libmv_tracks);
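
/* Illustrative usage sketch (not part of the original header): filling a track
 * container with per-frame marker positions before solving. Frame, track and
 * coordinate values are placeholders.
 */
#if 0
static struct libmv_Tracks *example_build_tracks(void)
{
    struct libmv_Tracks *tracks = libmv_tracksNew();

    /* One insert per (frame, track) pair that has a tracked position. */
    libmv_tracksInsert(tracks, /*image*/ 1, /*track*/ 0, 320.0, 240.0);
    libmv_tracksInsert(tracks, /*image*/ 2, /*track*/ 0, 322.5, 238.0);

    return tracks;  /* release later with libmv_tracksDestroy() */
}
#endif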
/* Reconstruction solver */
#define LIBMV_REFINE_FOCAL_LENGTH (1 << 0)
#define LIBMV_REFINE_PRINCIPAL_POINT (1 << 1)
#define LIBMV_REFINE_RADIAL_DISTORTION_K1 (1 << 2)
#define LIBMV_REFINE_RADIAL_DISTORTION_K2 (1 << 4)
typedef struct libmv_cameraIntrinsicsOptions {
    double focal_length;
    double principal_point_x, principal_point_y;
    double k1, k2, k3;
    double p1, p2;
    int image_width, image_height;
} libmv_cameraIntrinsicsOptions;
typedef struct libmv_reconstructionOptions {
    int select_keyframes;
    int keyframe1, keyframe2;
    int refine_intrinsics;
    double success_threshold;
    int use_fallback_reconstruction;
} libmv_reconstructionOptions;
typedef void (*reconstruct_progress_update_cb) (void *customdata, double progress, const char *message);
struct libmv_Reconstruction *libmv_solveReconstruction(const struct libmv_Tracks *libmv_tracks,
        const libmv_cameraIntrinsicsOptions *libmv_camera_intrinsics_options,
        libmv_reconstructionOptions *libmv_reconstruction_options,
        reconstruct_progress_update_cb progress_update_callback,
        void *callback_customdata);
struct libmv_Reconstruction *libmv_solveModal(const struct libmv_Tracks *libmv_tracks,
        const libmv_cameraIntrinsicsOptions *libmv_camera_intrinsics_options,
        const libmv_reconstructionOptions *libmv_reconstruction_options,
        reconstruct_progress_update_cb progress_update_callback,
        void *callback_customdata);
int libmv_reporojectionPointForTrack(const struct libmv_Reconstruction *libmv_reconstruction, int track, double pos[3]);
double libmv_reporojectionErrorForTrack(const struct libmv_Reconstruction *libmv_reconstruction, int track);
double libmv_reporojectionErrorForImage(const struct libmv_Reconstruction *libmv_reconstruction, int image);
int libmv_reporojectionCameraForImage(const struct libmv_Reconstruction *libmv_reconstruction, int image, double mat[4][4]);
double libmv_reprojectionError(const struct libmv_Reconstruction *libmv_reconstruction);
void libmv_destroyReconstruction(struct libmv_Reconstruction *libmv_reconstruction);
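
/* Illustrative usage sketch (not part of the original header): solving camera
 * motion from a filled track container and querying the average reprojection
 * error afterwards. All option values are placeholders.
 */
#if 0
static void example_progress(void *customdata, double progress, const char *message)
{
    /* e.g. update a progress bar; arguments are supplied by the solver */
    (void) customdata;
    (void) progress;
    (void) message;
}

static void example_solve(const struct libmv_Tracks *tracks)
{
    libmv_cameraIntrinsicsOptions camera = {0};
    libmv_reconstructionOptions options = {0};
    struct libmv_Reconstruction *reconstruction;
    double average_error;

    camera.focal_length = 1000.0;
    camera.principal_point_x = 960.0;
    camera.principal_point_y = 540.0;
    camera.image_width = 1920;
    camera.image_height = 1080;

    options.select_keyframes = 0;  /* use the explicit keyframes below */
    options.keyframe1 = 1;
    options.keyframe2 = 30;
    options.refine_intrinsics = LIBMV_REFINE_FOCAL_LENGTH;

    reconstruction = libmv_solveReconstruction(tracks, &camera, &options,
                                               example_progress, NULL);

    average_error = libmv_reprojectionError(reconstruction);
    (void) average_error;

    libmv_destroyReconstruction(reconstruction);
}
#endif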
/* feature detector */
struct libmv_Features *libmv_detectFeaturesFAST(const unsigned char *data, int width, int height, int stride,
        int margin, int min_trackness, int min_distance);
struct libmv_Features *libmv_detectFeaturesMORAVEC(const unsigned char *data, int width, int height, int stride,
        int margin, int count, int min_distance);
int libmv_countFeatures(const struct libmv_Features *libmv_features);
void libmv_getFeature(const struct libmv_Features *libmv_features, int number, double *x, double *y, double *score, double *size);
void libmv_destroyFeatures(struct libmv_Features *libmv_features);
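
/* Illustrative usage sketch (not part of the original header): detecting
 * features in a tightly packed grayscale byte buffer with the FAST detector
 * and iterating over the results. Threshold values are placeholders.
 */
#if 0
static void example_detect_features(const unsigned char *pixels, int width, int height)
{
    struct libmv_Features *features;
    int i, count;

    features = libmv_detectFeaturesFAST(pixels, width, height,
                                        /*stride*/ width, /*margin*/ 16,
                                        /*min_trackness*/ 16, /*min_distance*/ 120);

    count = libmv_countFeatures(features);
    for (i = 0; i < count; i++) {
        double x, y, score, size;

        libmv_getFeature(features, i, &x, &y, &score, &size);
        /* ... create a marker at (x, y) ... */
    }

    libmv_destroyFeatures(features);
}
#endif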
/* camera intrinsics */
struct libmv_CameraIntrinsics *libmv_ReconstructionExtractIntrinsics(struct libmv_Reconstruction *libmv_Reconstruction);
struct libmv_CameraIntrinsics *libmv_CameraIntrinsicsNewEmpty(void);
struct libmv_CameraIntrinsics *libmv_CameraIntrinsicsNew(const libmv_cameraIntrinsicsOptions *libmv_camera_intrinsics_options);
struct libmv_CameraIntrinsics *libmv_CameraIntrinsicsCopy(const struct libmv_CameraIntrinsics *libmv_intrinsics);
void libmv_CameraIntrinsicsDestroy(struct libmv_CameraIntrinsics *libmv_intrinsics);
void libmv_CameraIntrinsicsUpdate(const libmv_cameraIntrinsicsOptions *libmv_camera_intrinsics_options,
struct libmv_CameraIntrinsics *libmv_intrinsics);
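
/* Typical lifecycle (illustrative sketch only; `options` stands for a
 * caller-owned libmv_cameraIntrinsicsOptions instance filled in elsewhere):
 *
 *   struct libmv_CameraIntrinsics *intrinsics = libmv_CameraIntrinsicsNew(&options);
 *   ...
 *   libmv_CameraIntrinsicsUpdate(&options, intrinsics);  // re-sync after editing options
 *   ...
 *   libmv_CameraIntrinsicsDestroy(intrinsics);
 */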
void libmv_CameraIntrinsicsSetThreads(struct libmv_CameraIntrinsics *libmv_intrinsics, int threads);
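/* Read the current parameters back out of an intrinsics object: focal length,
 * principal point, radial distortion coefficients k1..k3 and image size. */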
void libmv_CameraIntrinsicsExtract(const struct libmv_CameraIntrinsics *libmv_intrinsics, double *focal_length,
double *principal_x, double *principal_y, double *k1, double *k2, double *k3, int *width, int *height);
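
/* Apply or remove lens distortion on whole image buffers. The *Byte variants
 * work on 8-bit pixels and the *Float variants on float pixels; `channels` is
 * the number of channels per pixel in both `src` and `dst`. */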
void libmv_CameraIntrinsicsUndistortByte(const struct libmv_CameraIntrinsics *libmv_intrinsics,
unsigned char *src, unsigned char *dst, int width, int height, float overscan, int channels);
void libmv_CameraIntrinsicsUndistortFloat(const struct libmv_CameraIntrinsics *libmv_intrinsics,
float *src, float *dst, int width, int height, float overscan, int channels);
void libmv_CameraIntrinsicsDistortByte(const struct libmv_CameraIntrinsics *libmv_intrinsics,
unsigned char *src, unsigned char *dst, int width, int height, float overscan, int channels);
void libmv_CameraIntrinsicsDistortFloat(const struct libmv_CameraIntrinsics *libmv_intrinsics,
float *src, float *dst, int width, int height, float overscan, int channels);
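
/* Example (sketch only, not a normative usage pattern): undistort an 8-bit
 * RGBA frame, with `intrinsics`, `src`, `dst`, `width`, `height` and
 * `overscan` supplied by the caller:
 *
 *   libmv_CameraIntrinsicsUndistortByte(intrinsics, src, dst,
 *                                       width, height, overscan, 4);
 */
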
/* utils */
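/* Map a single 2D coordinate through the camera model described by the given
 * options, writing the result into *x1 / *y1. Apply is understood to go from
 * normalized camera coordinates to distorted image coordinates, and Invert to
 * perform the reverse mapping (direction inferred from the function names). */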
void libmv_ApplyCameraIntrinsics(const libmv_cameraIntrinsicsOptions *libmv_camera_intrinsics_options,
double x, double y, double *x1, double *y1);
void libmv_InvertCameraIntrinsics(const libmv_cameraIntrinsicsOptions *libmv_camera_intrinsics_options,
double x, double y, double *x1, double *y1);
#ifdef __cplusplus
}
#endif
#endif // LIBMV_C_API_H