Camera tracking integration
===========================

Bundling new version of libmv. Its repo is ready for
bundling again; some patches which were applied on our
side are in the libmv repo now.

This new version of libmv also contains a patch from John Carpenter
which makes reconstruction a bit more "stable" -- some kind
of fallback algorithm.
Sergey Sharybin 2011-08-09 10:42:39 +00:00
parent 6752fe77ec
commit 2a5a5bcd5a
28 changed files with 1482 additions and 687 deletions

extern/libmv/ChangeLog (vendored): 816 changes

@@ -1,3 +1,348 @@
commit c2af303e7bf0dddcb02937323ac5846b1801e6cc
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Aug 9 11:13:29 2011 +0200
Remove reconstruction breaking debug code.
commit 8792a633e5c5f1c1f12e164b9e8897ca0790ac59
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Aug 9 10:49:18 2011 +0200
Remove getchar()s.
commit 63a9bdee0cbd1197e0315d01c27bfc2361bd5656
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Aug 9 10:35:07 2011 +0200
Adapt patch to new PipelineRoutines code generation strategy.
commit 096ff1a4070f7212c50fb0a4b2feec7ca9d97158
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Aug 9 09:54:12 2011 +0200
Merge max_image and max_track fix from tomato.
commit d8450cd3c37278a397482cd36b1e2419f154cfb9
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Aug 9 09:38:49 2011 +0200
Synchronize tree with Tomato: Merge patch for better resection, keep deprecated KLT tracker.
commit e9b2dca920cf9575c15150a4988634b00e343a41
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Mon Aug 8 17:07:08 2011 +0200
Fixes, Documentation.
commit 4fc1c57a2d92442808ac4a3676e6d9a25a51e310
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sun Aug 7 14:35:08 2011 +0200
Improve tracker resilience by penalizing large motion vectors.
commit cc8e7e8e08cd91f75c080a0091461ca9fe969664
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sun Aug 7 09:28:09 2011 +0200
Leverage SSE2 SAD instruction for 16x speed improvement in integer pixel search resulting in ~1ms per marker for 16x16 pattern on 128x128 region.
commit f362ab4999a768370fca57552464b459eb9fbddc
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sun Aug 7 09:06:04 2011 +0200
Improve SAD Tracker subpixel precision (avoid drift even when adapting at each frame).
commit fce7a214c561b5f5f0e17115c31fb48814bde2db
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sat Aug 6 21:57:06 2011 +0200
Track using simple Sum of Absolute Differences matching.
This method is simpler, more robust, faster and accurate.
commit 620a7a35d9a2818bf6e9dbf5d11debda4be6bc26
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jul 29 12:35:57 2011 +0200
Add Intersect unit test.
commit a2bf58fa57be11215eb17ff7f7de58f97d480ec3
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jul 28 11:08:06 2011 +0200
Remove tests depending on dead code.
Fix CameraIntrinsics test.
Add Intersect and Resect tests.
commit 19bddee10b4879c8cd2238ccdf5b8f7620cf8384
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 27 12:07:21 2011 +0200
Image Distortion: Fixes and more testing.
commit 0454d97da328fb0eda8c6c50511ac31864a6d3d6
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 27 10:32:37 2011 +0200
Test float image distortion.
commit 8db01595a8721f766d85931a8d92b780461d8741
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 27 10:27:07 2011 +0200
Image Distortion: Bilinear sampling, Optimization, Instantiate all variants (Distort/Undistort, float/ubyte, 1-4 channels).
commit 91916db921e1f2818f0aa2be823bf92c50ad4de9
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 26 22:51:21 2011 +0200
New Undistortion API in CameraIntrinsics.h.
Implement Undistortion API in CameraIntrinsics.cc.
commit 8c47a26072cfa9cf216771e5ae7a1dc60a770f82
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 26 22:16:06 2011 +0200
Qt Calibration fixes. Image undistortion experiments.
commit b575d9f68856b4e95a6b0a92ecc4e7d635342f95
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 26 22:14:54 2011 +0200
Qt Calibration fixes.
Image undistortion experiments.
commit fac2b3e88ef8f14fa62149f6fc929c623d73fe39
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Mon Jul 25 11:44:17 2011 +0200
Merge uncalibrated reconstruction.
UI fixes.
commit d04071ee210baef5ff657441c8c5284c235e93a3
Merge: 795e50f c4c67db
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sat Jul 23 12:18:58 2011 +0200
Merge branch 'master' of git://github.com/keir/libmv
Conflicts:
src/libmv/simple_pipeline/initialize_reconstruction.h
src/ui/tracker/main.cc
src/ui/tracker/main.h
src/ui/tracker/scene.cc
src/ui/tracker/scene.h
commit 795e50fa1ca9ca4373ad9b5432916edf2f1940a0
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sat Jul 23 12:12:53 2011 +0200
Support creating FloatImage without copying data.
commit c4c67db84cc6e972be19c3e0f495477a1419200e
Author: Keir Mierle <mierle@gmail.com>
Date: Thu Jul 21 10:24:06 2011 -0700
Add an uncalibrated reconstruction pipeline to libmv.
Note: The pipeline doesn't actually work yet! It runs, but the resulting
reconstruction is wildly broken. I have a number of theories as to why this is,
and will write tests to track this down, but this change has grown out of
control in the meantime.
commit 584e0ddc4058a6a4e41b1fd9665654097af177d4
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 20 20:21:55 2011 +0200
Latest working revision of simple optimized tracker.
commit 7983f86ff93f1ccd06f78439fb87387aecdfe49e
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 20 20:10:12 2011 +0200
Revert "Incremental optimization: Change API to take float*."
This reverts commit 922dc5c31022afc7fd14b2ead32491c079565d7b.
commit 922dc5c31022afc7fd14b2ead32491c079565d7b
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 20 17:11:20 2011 +0200
Incremental optimization: Change API to take float*.
commit 9d9fab4165aabad6474bb879b5f418c1b7a7862e
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 20 17:05:14 2011 +0200
Incremental optimization.
commit 739fe5a118a2a5c90cf2c6d66c776231da0fb92b
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 20 16:57:37 2011 +0200
Incremental optimization.
commit c48ba3b9f49e8fc5ef45ab1b6753f70bfdef0c34
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 20 15:05:11 2011 +0200
Inline.
commit 2b7e704d947cafd12d67b3904365feb7c2b3e89a
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 20 14:59:12 2011 +0200
Keep relative marker position in tracker.
commit a92e1fb70c2be05f9808bc43b87d3949790ef7ed
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 20 10:56:52 2011 +0200
Avoid unecessary pyramid copy.
commit 31908b280e8862f15c3f67f697727dd692db9997
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 20 10:45:54 2011 +0200
tracker.cc: Extract float image only when necessary.
klt.cc: Remove retrack which wasn't useful when we already use robust time-reversible KLT (+40% speed)
klt.cc: Factorize bilinear sampling
We are probably limited by bandwidth because of the float image filtering.
Next optimization step would require reducing filter sizes and use integer filtering.
Though, it will be necessary to verify the impact of tracking quality.
commit 433cd976f6047324ba27f21b3cafe3ecfbcb5aa1
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 23:21:47 2011 +0200
Really add the simplified and optimized tracker.
commit caf938781bcd3e695f127a548678a1cf0a426b8f
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 23:19:24 2011 +0200
Replace tracking/* trackers with the new simplified and optimized version.
This first optimization pass should result in a 4x speedup.
TODO: Integrating optimized filtering and optimized bilinear sampling should provide additionnal performance.
commit 13f5707573757a37d72b3d5be21a019049de9523
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 23:14:19 2011 +0200
Update documentation.
commit a002fcf0ee42bd15991df34832cf557a5653c48e
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 23:07:05 2011 +0200
Incremental Optimization #4: Keep last filtered pyramid.
This avoid computing the pyramid downsampling twice (as new patch and then as old patch).
API was changed as this require the client to keep our state.
commit 6772d7916126e710179370a5412d9380f05995a9
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 22:54:30 2011 +0200
Incremental Optimization #3: Move Filtering in MakePyramid.
This avoid computing each pyramid filter twice (i.e for each call to TrackImage).
commit eec24f8c8b21ec286d12990b6e63772d7559bccc
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 22:42:35 2011 +0200
Incremental Optimization #2: Move pyramid downsampling up to Track.
This avoid computing the pyramid downsampling twice (forward and backward).
commit 35cd70d321113956f2413084102a3a76ca35186f
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 22:37:01 2011 +0200
Incremental optimization #1: Convert retrack/pyramid/KLT tracker objects to respective methods in a single Tracker object.
This is necessary to allow retrack to reuse pyramids and avoid filtering images multiple times.
commit 84341da380c585645f38e67dce5a8e1fd58242ac
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 22:17:58 2011 +0200
Revert to working tracker in Qt Tracker.
TODO: The optimization changes shall be done incrementally.
commit 1d15e8572e8922c2a7aa8927f6f82c13ed6c3fc8
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 21:18:21 2011 +0200
Fix missing deallocation.
commit a3ca18db4dec90414e658d480ea5f19b31bb8a77
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 21:00:37 2011 +0200
Progress in debugging optimized KLT.
commit 354488cca6a29a4aa6ea3f943f693ae4af21e8da
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 19 14:54:43 2011 +0200
Add optimized version of KLT tracker.
FIXME: It currently doesn't work.
commit 07a27f214c98e92b876c0878d05397f2031b35cd
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sun Jul 17 23:01:58 2011 +0200
Add some #if (QT_VERSION < QT_VERSION_CHECK(4, 7, 0)) and #define constBits bits macros to support Qt < 4.7 which is still used on older systems.
Build using Qt < 4.7 will see increased memory usage and unecessary copying.
commit 1b91161241f90ef40391edd13c902d1901b723fb
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sun Jul 17 18:25:47 2011 +0200
Fix sequence caching.
commit e1737d71b50237c0f0d5a7f2b7463284d95a902e
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sun Jul 17 16:35:48 2011 +0200
Use deprecated FFmpeg API.
commit 09b090a20134a8ccef273d802752bbf475ff5280
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sat Jul 16 21:19:14 2011 +0200
Fix loading cache directly after creation.
commit 1190584f6ca60fb74188cac36c61333951244026
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sat Jul 16 20:02:36 2011 +0200
Add custom FileDialog to handle multiple files and folders selection in the same dialog
commit 50504af3d0fc11665483aed524d52d076f38ffa8
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jul 15 23:34:41 2011 +0200
Fix mistake.
commit a8e322a2c37cc87acd864d94a354334d611b9750
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jul 15 22:25:51 2011 +0200
Use cache even on first run (i.e don't load all images on the heap).
commit 66ac3b5833a1fd4fbd97b082db6ddf859b33811c
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jul 15 21:15:45 2011 +0200
Improve Zoom View.
commit dec996966d5466717a813da2d8b9962115dfc8ac
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jul 15 20:04:06 2011 +0200
@@ -98,474 +443,3 @@ Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sat Jul 9 12:08:28 2011 +0200
Update README with instructions to build Qt OpenCV Calibration tool.
commit dc4e18b474b024c04fbb0ef6b4f2c2cd642c8715
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jul 8 13:25:14 2011 +0200
Make Qt OpenCV Calibration Tool optionnal.
commit 86356bd988b1b937082d56330f18a20e4cb26c19
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jul 8 10:19:30 2011 +0200
add QCompleter with QFileSystemModel to choose source image folder easily.
commit 209983a5e74c8b328d22d17668b3ad20d6e87f7a
Merge: 0396ccf 7bef9ca
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jul 8 10:17:05 2011 +0200
Merge branch 'master' of git://github.com/libmv/libmv
commit 0396ccf81dee87c3e7a06aa15f05bcaad8dd6ac3
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jul 7 18:35:12 2011 +0200
calibration: Compute intrinsincs, undistort image and corners, output custom XML file.
commit 540a48b1e9f9d4e28fe6b1ef56decf1b29b4811f
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jul 7 11:18:25 2011 +0200
Detect calibration checkerboard using OpenCV Calib3D.
commit d8113dbac2f2156cdfebb5070102f29c26ba9776
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 6 22:36:12 2011 +0200
Initial implementation of OpenCV Qt Calibration Tool
commit dae6fae16ec4abbaa9826f944c6ae16cc17ba051
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jul 6 16:01:41 2011 +0200
Fix build.
commit 8cdf7aa54f16bf4fb0f0f824c7b5874373831019
Merge: b8d02b5 df1d54e
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jul 5 09:51:01 2011 +0200
Merge branch 'master' of git://github.com/keir/libmv
commit df1d54efd23530891851d3573a5126094acea840
Author: Keir Mierle <mierle@gmail.com>
Date: Mon Jul 4 13:12:53 2011 -0700
Fix include ordering.
commit b8d02b551bca95f4a228a85188f12078cc3bd2f4
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Mon Jul 4 16:34:04 2011 +0200
Remove momentum in scene view.
commit 67433907db5537a2e32893ef558c63ab336f59c1
Merge: 0049521 b027af4
Author: Keir Mierle <mierle@gmail.com>
Date: Mon Jul 4 04:01:44 2011 -0700
Merge Matthias's branch.
commit 0049521756bf03c88061ccc46d0a97397cc77094
Author: Keir Mierle <mierle@gmail.com>
Date: Mon Jul 4 03:18:05 2011 -0700
Fix build break caused by removing out-of-place definition of P_from_KRt.
commit 40b38d52f71ba61af9d2b00a5077d6346b83d669
Author: Keir Mierle <mierle@gmail.com>
Date: Mon Jul 4 03:07:46 2011 -0700
Improvements to reconstruction.h documentation.
commit b027af4662007c50a12c08e1991fb9b1124a3a87
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Mon Jul 4 10:22:54 2011 +0200
use P_From_KRt from projection.cc.
commit b082a3e50c79e168e79e51d1571cd2e9b31aa2c2
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sun Jul 3 19:33:30 2011 +0200
Make unit testing optionnal.
Restore user defined camera intrinsincs.
Reduce subjective view speed.
commit 144618f9bb247c33e7530118b1ff5aca608573af
Author: Keir Mierle <mierle@gmail.com>
Date: Thu Jun 30 02:33:09 2011 -0700
Add missing headers.
commit 41b185566506a3583dc7d2aa46b48fa647d30177
Author: Keir Mierle <mierle@gmail.com>
Date: Thu Jun 30 02:32:50 2011 -0700
Disable projective resection for now.
commit c0f6582f01d3529f757cccd83e7b76ffb1184f1f
Author: Keir Mierle <mierle@gmail.com>
Date: Thu Jun 30 02:31:07 2011 -0700
Fix a uninitialized memory error in reconstruction.
With this change, the elevator sequence now reconstructs with 0.2 pixel average
reprojection error. That's over about 2000 markers, 120 frames, and 25 tracks.
Reconstruction is now working for real!
commit 3dc34c599eaf69d7903e2213f68196fd32d69439
Author: Keir Mierle <mierle@gmail.com>
Date: Thu Jun 30 01:31:08 2011 -0700
Split multiview.h/cc up into individual files to keep compile times sane.
commit c3b3296905f7b67ea038058b51efa8cd76e232ba
Author: Keir Mierle <mierle@gmail.com>
Date: Wed Jun 29 22:46:58 2011 -0700
Add bundling support to simple pipeline. It seems to work pretty well.
commit 4a3ebd309f551abe3197dc573cd7549941d74efd
Author: Keir Mierle <mierle@gmail.com>
Date: Wed Jun 29 10:33:27 2011 -0700
Fix return type in multiview minimization.
commit 045ff6625a3de3fff98e9f4c153376f2143d2c3d
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 28 10:27:33 2011 -0700
Add hacky hardcoded calls into multiview from qt-tracker.
commit 8c1dec581afbb06e45ed6f15dd72c64b90f52409
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 28 10:25:38 2011 -0700
Add Intersect and Resect implementations, including nonlinear refinement with Levenberg-Marquardt.
commit 63939617aa39a8012154b6f18cf65b671a57880d
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 28 10:10:48 2011 -0700
Add const and non-const accessors to Reconstruction.
commit b95015c8e99d7c9c37c13c3efdc4e6c87cece53d
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 28 10:09:56 2011 -0700
Return false from euclidean resection if it does not work.
commit 437ab722d16c442f27b9c63af3a43468a564a2ef
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 28 10:09:15 2011 -0700
Formatting.
commit c7156a6fcaf7ca8d30d624abeefcd9ddf1087bea
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 28 10:04:21 2011 -0700
More testing fixes.
commit ba9e89674a49aa3b5372dd721dd413d90222a064
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 28 10:04:03 2011 -0700
Reenable testing.
commit 0f9a11ace1294f9d857683779747d367923d48a2
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 28 10:03:36 2011 -0700
Add a test for camera intrinsics.
commit 37ae9aea4541c9211cd983ce44a875f037136bd7
Author: Keir Mierle <mierle@gmail.com>
Date: Mon Jun 27 02:36:41 2011 -0700
Get simple reconstruction pipeline is running without BA. Doesn't work well yet.
commit 656f4588f2dbb7672c7233bee5425b34e5ba8aa4
Author: Keir Mierle <mierle@gmail.com>
Date: Sun Jun 26 23:06:22 2011 -0700
Fix PointForTrack() in the Reconstruction object.
commit b27afb3168c23671c130c0f92e24643204034d3e
Author: Keir Mierle <mierle@gmail.com>
Date: Sun Jun 26 23:05:39 2011 -0700
Clarify documenattion in pipelien.h.
commit afd5c5b57999282a7f800c17c05d9020423843b3
Author: Keir Mierle <mierle@gmail.com>
Date: Sun Jun 26 23:05:15 2011 -0700
Add more source files to multiview that got removed.
commit 2916c0f0c29c1e03a11773ac6615a7dbfbc9c058
Author: Keir Mierle <mierle@gmail.com>
Date: Sun Jun 26 23:04:24 2011 -0700
Implement Intersect() and Resect() in multiview.cc.
commit ff2478331a98d0bf2e83b7b03a501049741b1d0e
Author: Keir Mierle <mierle@gmail.com>
Date: Sun Jun 26 23:03:45 2011 -0700
Clarify documentation in multiview.h.
commit f7670aab1b101f68802bc35bc1b64c3d55fa5f3c
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jun 23 16:18:44 2011 +0200
fix EnforceFundamentalRank2Constraint
commit a3e29213bdd959502d2fdc077ac479d2b59db0c8
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jun 22 23:30:54 2011 +0200
fix CoordinatesForMarkersInImage
commit 9e5f84b6cb063bdca8649b76b4f9769e35823f5d
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jun 22 20:26:33 2011 +0200
Remove random data generation, use InsertPoint in Intersect.
commit a41b10fa74f0e7977654a43bb7367704b638bede
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jun 22 16:24:26 2011 +0200
hook NViewTriangulate to simple_pipeline/multiview Intersect API.
Use libmv::vector instead of std::vector to avoid conflicts.
Convert iterator-based loops to counter-based loops as libmv::vector doesn't support iteration.
commit 67b7437b074cb0dfdc26422dd29b9aba1cb906f4
Merge: 045ca85 c257195
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jun 22 10:28:48 2011 +0200
Merge branch 'master' of git://github.com/keir/libmv
Conflicts:
src/libmv/simple_pipeline/multiview.cc
commit 045ca85dde561c6ec84621ae8f8a67757152dc62
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jun 21 22:19:33 2011 +0200
Only reconstruct when we have at least 8 tracks, remove tracks from selections when no markers.
commit 7b593875e6ba089d997ef9a7ff0df255c8883ea4
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jun 21 21:57:12 2011 +0200
only lens distort in overlay, fix keyframes creation.
commit 4ff8e9762a4816bf3634578c2c4886e655120698
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Tue Jun 21 21:50:15 2011 +0200
MacOS support.
commit c2571954d8d88b45b2f05be139fef06f308228ee
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 21 10:35:51 2011 -0700
Rework multiview.h doxygen comments.
commit 7bef9cac601f4407eab576906442dba70396ed89
Merge: 7a996a2 56f49b5
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 21 10:04:20 2011 -0700
Merge pull request #5 from nathanwiegand/master
Howdy, Keir.
commit 7a996a2f8153eed6c6ae784b5a17aee59c65d45f
Merge: 1b20b39 7aceb96
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 21 10:03:01 2011 -0700
Merge pull request #7 from JulienMichot/master
Fixed issue 26: Error 'REG_RIP' was not declared in this scope
commit fb1c93590a67ba95d550f351c1d297699cdceffb
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 21 09:26:17 2011 -0700
Remove use od constBits() which is not present in Qt 4.6.
commit 5a1883edb1b5b2b56b31ba929a1a077f53187306
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 21 09:26:00 2011 -0700
Make multiview.cc compile.
commit 5ca168c45291205b387683ef679dbee18af23e72
Merge: 9aea3ad 1de8d03
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 21 01:35:20 2011 -0700
Merge branch 'master' of git://github.com/Matthias-Fauconneau/libmv
commit 9aea3adc56234b059a7ee10a6ff946c3a67ee699
Author: Keir Mierle <mierle@gmail.com>
Date: Tue Jun 21 01:34:40 2011 -0700
Progress on multiview code in simple pipeline.
commit 1de8d0326cfe2c1b1d1b151b9eef147fca3c72c9
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Mon Jun 20 23:16:46 2011 +0200
Add back third_party/glog and third_party/gflag to build system as they are needed to compile multiview with logging in debug mode.
commit a51662a259a6c946b5f84c3e0e3b351e9ea4105c
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Mon Jun 20 19:13:40 2011 +0200
Implement keyframe selection, Fix Tracks::MarkersForTracksInBothImages, Use Reconstruction methods.
commit 2d6cd58ee1cd7c5073980f358c71b2898ad16b4c
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Mon Jun 20 14:15:37 2011 +0200
Add Simple Pipeline API header and documentation.
Make rebuilding much faster by removing unneeded dependencies.
commit 7af3c1af55e42a8091b95ece38bbd1dbefa64013
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Mon Jun 20 12:52:44 2011 +0200
Add documentation to MultiView and Reconstruction APIs.
commit 308fbc7d87c8d7189c588eb56a563d1f1ff926ba
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sun Jun 19 20:37:49 2011 +0200
Update the README.
commit 340b69b2afa4d6c48a019c083bfd0a705bb12165
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sun Jun 19 20:28:22 2011 +0200
Change default build options to avoid spurious build failure reports.
commit 2e1c6ef7803bbe8ef53237e2027764e932c404f9
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sat Jun 18 12:17:59 2011 +0200
Set maximum focal length to FLT_MAX.
commit a213989750b7f080a2c543203865262a0d40252e
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Sat Jun 18 12:01:52 2011 +0200
Fix issues caused by style changes.
commit cea54e0207696bb09cadeefbce7bb2a4f1e4c996
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jun 17 22:46:10 2011 +0200
Add many whitespaces and newlines.
commit fd01f5139fe720fbe3e8206de27b48519c3de947
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Fri Jun 17 13:05:05 2011 +0200
Add low order lens distortion for reprojected bundle preview.
TODO: Test if the distortion is correct.
commit 8aca7f45513b3cdc2b4b3cf04bd6c11aecf0a756
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jun 16 23:02:17 2011 +0200
Add Solve button and a solve() stub.
commit 60b24a44cebdb7308d19e0b472349eadac42fdde
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jun 16 21:41:08 2011 +0200
Fix first MaxImage() and MaxTrack().
Correct transform for Zoom view.
Display whole track for visible markers.
commit 3cc45633279ac6fdcdfcae70db03301c69ad609b
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jun 16 19:58:07 2011 +0200
Persistent Settings.
commit 8c5b8de3019013cbdbcf019352c49fc3dc1b181b
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jun 16 19:19:41 2011 +0200
Add Camera parameters GUI.
TODO: persistent parameters.
TODO: Feed the data to Multiview API.
commit 7db187d7035bc7f5adee3771125867e9b892d03a
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jun 16 18:34:40 2011 +0200
Smaller bundle display, Display bundles in overlay.
commit 9bf3ccee0f846cdfa5358b367ca35bf68d726df3
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jun 16 10:26:12 2011 +0200
Display 3D Scene overlay on 2D Tracker view.
commit 3e795fa9f5aa7c461d2d65507af5d7b7d4e976ab
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Thu Jun 16 00:26:12 2011 +0200
Fix tracking (was wrong coordinate transform).
commit f3dc249c6cf417c0bed7455ff4a974d59088d369
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jun 15 22:56:56 2011 +0200
attribute should have been varying.
commit cfe8f83cea71d0fe46159cdb4b750c5888ae2065
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jun 15 22:49:53 2011 +0200
NVidia GLSL compiler is really not strict enough...
commit 13eb0c74ee49cc15827c88fe7f7ae6a0f29e378b
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jun 15 22:46:20 2011 +0200
correct GLSL 1.2
commit a8e9cfffac479cd0af0a4e461a9204e4685fd01e
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jun 15 22:26:36 2011 +0200
Remove unused GL abstraction, Downgrade GLSL requirement to 1.20, use GLEW by default even on linux.
commit f9a29280ede36f6a3a894d2eea51dc7f4a1f62e2
Author: Matthias Fauconneau <matthias.fauconneau@gmail.com>
Date: Wed Jun 15 21:58:35 2011 +0200
Add GL Zoom Views.
This new implementation now display all selected markers.
commit 7aceb964db6843fcf91bf641adde0646817db305
Author: Julien Michot <julien.michot.fr@gmail.com>
Date: Wed Jun 15 21:51:04 2011 +0200
Fixed issue 26.
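The log above repeatedly mentions the new SAD tracker. As a rough illustration of the idea (a hypothetical standalone sketch, not libmv's implementation), integer-pixel search tries every placement of a pattern inside a search region and keeps the one with the smallest Sum of Absolute Differences:

```cpp
#include <cstdlib>
#include <climits>

// Hypothetical standalone sketch of SAD matching (not libmv's code):
// score one candidate placement of a pattern inside an image.
static int SumOfAbsoluteDifferences(const unsigned char *image, int stride,
                                    const unsigned char *pattern, int pw, int ph,
                                    int x, int y) {
  int sad = 0;
  for (int j = 0; j < ph; ++j)
    for (int i = 0; i < pw; ++i)
      sad += abs(int(image[(y + j) * stride + (x + i)]) - int(pattern[j * pw + i]));
  return sad;
}

// Integer-pixel search: exhaustively scan the region, keep the best score.
void SearchSAD(const unsigned char *region, int rw, int rh,
               const unsigned char *pattern, int pw, int ph,
               int *best_x, int *best_y) {
  int best = INT_MAX;
  for (int y = 0; y <= rh - ph; ++y) {
    for (int x = 0; x <= rw - pw; ++x) {
      int sad = SumOfAbsoluteDifferences(region, rw, pattern, pw, ph, x, y);
      if (sad < best) {
        best = sad;
        *best_x = x;
        *best_y = y;
      }
    }
  }
}
```

Per the log entry of Aug 7, the 16x speedup comes from doing the inner loop with the SSE2 SAD instruction instead of scalar arithmetic as above.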

@@ -66,7 +66,7 @@ typedef struct libmv_RegionTracker {
 } libmv_RegionTracker;
 typedef struct libmv_Reconstruction {
-	libmv::Reconstruction reconstruction;
+	libmv::EuclideanReconstruction reconstruction;
 	/* used for per-track average error calculation after reconstruction */
 	libmv::Tracks tracks;
@@ -332,7 +332,7 @@ libmv_Reconstruction *libmv_solveReconstruction(libmv_Tracks *tracks, int keyfra
 	/* Invert the camera intrinsics. */
 	libmv::vector<libmv::Marker> markers = ((libmv::Tracks*)tracks)->AllMarkers();
 	libmv_Reconstruction *libmv_reconstruction = new libmv_Reconstruction();
-	libmv::Reconstruction *reconstruction = &libmv_reconstruction->reconstruction;
+	libmv::EuclideanReconstruction *reconstruction = &libmv_reconstruction->reconstruction;
 	libmv::CameraIntrinsics *intrinsics = &libmv_reconstruction->intrinsics;
 	intrinsics->SetFocalLength(focal_length, focal_length);
@@ -354,20 +354,20 @@ libmv_Reconstruction *libmv_solveReconstruction(libmv_Tracks *tracks, int keyfra
 	libmv::vector<libmv::Marker> keyframe_markers =
 		normalized_tracks.MarkersForTracksInBothImages(keyframe1, keyframe2);
-	libmv::ReconstructTwoFrames(keyframe_markers, reconstruction);
-	libmv::Bundle(normalized_tracks, reconstruction);
-	libmv::CompleteReconstruction(normalized_tracks, reconstruction);
+	libmv::EuclideanReconstructTwoFrames(keyframe_markers, reconstruction);
+	libmv::EuclideanBundle(normalized_tracks, reconstruction);
+	libmv::EuclideanCompleteReconstruction(normalized_tracks, reconstruction);
 	libmv_reconstruction->tracks = *(libmv::Tracks *)tracks;
-	libmv_reconstruction->error = libmv::ReprojectionError(*(libmv::Tracks *)tracks, *reconstruction, *intrinsics);
+	libmv_reconstruction->error = libmv::EuclideanReprojectionError(*(libmv::Tracks *)tracks, *reconstruction, *intrinsics);
 	return (libmv_Reconstruction *)libmv_reconstruction;
 }
 int libmv_reporojectionPointForTrack(libmv_Reconstruction *libmv_reconstruction, int track, double pos[3])
 {
-	libmv::Reconstruction *reconstruction = &libmv_reconstruction->reconstruction;
-	libmv::Point *point = reconstruction->PointForTrack(track);
+	libmv::EuclideanReconstruction *reconstruction = &libmv_reconstruction->reconstruction;
+	libmv::EuclideanPoint *point = reconstruction->PointForTrack(track);
 	if(point) {
 		pos[0] = point->X[0];
@@ -380,7 +380,7 @@ int libmv_reporojectionPointForTrack(libmv_Reconstruction *libmv_reconstruction,
 	return 0;
 }
-static libmv::Marker ProjectMarker(const libmv::Point &point, const libmv::Camera &camera,
+static libmv::Marker ProjectMarker(const libmv::EuclideanPoint &point, const libmv::EuclideanCamera &camera,
                                    const libmv::CameraIntrinsics &intrinsics) {
 	libmv::Vec3 projected = camera.R * point.X + camera.t;
 	projected /= projected(2);
@@ -396,7 +396,7 @@ static libmv::Marker ProjectMarker(const libmv::Point &point, const libmv::Camer
 double libmv_reporojectionErrorForTrack(libmv_Reconstruction *libmv_reconstruction, int track)
 {
-	libmv::Reconstruction *reconstruction = &libmv_reconstruction->reconstruction;
+	libmv::EuclideanReconstruction *reconstruction = &libmv_reconstruction->reconstruction;
 	libmv::CameraIntrinsics *intrinsics = &libmv_reconstruction->intrinsics;
 	libmv::vector<libmv::Marker> markers = libmv_reconstruction->tracks.MarkersForTrack(track);
@@ -404,8 +404,8 @@ double libmv_reporojectionErrorForTrack(libmv_Reconstruction *libmv_reconstructi
 	double total_error = 0.0;
 	for (int i = 0; i < markers.size(); ++i) {
-		const libmv::Camera *camera = reconstruction->CameraForImage(markers[i].image);
-		const libmv::Point *point = reconstruction->PointForTrack(markers[i].track);
+		const libmv::EuclideanCamera *camera = reconstruction->CameraForImage(markers[i].image);
+		const libmv::EuclideanPoint *point = reconstruction->PointForTrack(markers[i].track);
 		if (!camera || !point) {
 			continue;
@@ -425,8 +425,8 @@ double libmv_reporojectionErrorForTrack(libmv_Reconstruction *libmv_reconstructi
 int libmv_reporojectionCameraForImage(libmv_Reconstruction *libmv_reconstruction, int image, double mat[4][4])
 {
-	libmv::Reconstruction *reconstruction = &libmv_reconstruction->reconstruction;
-	libmv::Camera *camera = reconstruction->CameraForImage(image);
+	libmv::EuclideanReconstruction *reconstruction = &libmv_reconstruction->reconstruction;
+	libmv::EuclideanCamera *camera = reconstruction->CameraForImage(image);
 	if(camera) {
 		for (int j = 0; j < 3; ++j) {
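The ProjectMarker code in the hunks above keeps the same projection math while switching to the Euclidean types. The computation it performs, camera coordinates R*X + t, division by depth, then intrinsics, can be sketched in isolation (illustrative names, simplified intrinsics with a single focal length f and principal point (cx, cy); not libmv's API):

```cpp
// Sketch of the projection performed by ProjectMarker: x = K (R X + t).
struct Vec3d { double x, y, z; };

void ProjectPoint(const double R[3][3], const Vec3d &t, const Vec3d &X,
                  double f, double cx, double cy,
                  double *u, double *v) {
  // Camera coordinates: R * X + t.
  double px = R[0][0] * X.x + R[0][1] * X.y + R[0][2] * X.z + t.x;
  double py = R[1][0] * X.x + R[1][1] * X.y + R[1][2] * X.z + t.y;
  double pz = R[2][0] * X.x + R[2][1] * X.y + R[2][2] * X.z + t.z;
  // Divide by depth, then apply intrinsics.
  *u = f * (px / pz) + cx;
  *v = f * (py / pz) + cy;
}
```

The per-track reprojection error in the diff is then just the average distance between markers and points projected this way.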

@@ -41,23 +41,25 @@ class ArrayND : public BaseArray {
   typedef Tuple<int, N> Index;
   /// Create an empty array.
-  ArrayND() : data_(NULL) { Resize(Index(0)); }
+  ArrayND() : data_(NULL), own_data(true) { Resize(Index(0)); }
   /// Create an array with the specified shape.
-  ArrayND(const Index &shape) : data_(NULL) { Resize(shape); }
+  ArrayND(const Index &shape) : data_(NULL), own_data(true) { Resize(shape); }
   /// Create an array with the specified shape.
-  ArrayND(int *shape) : data_(NULL) { Resize(shape); }
+  ArrayND(int *shape) : data_(NULL), own_data(true) { Resize(shape); }
   /// Copy constructor.
-  ArrayND(const ArrayND<T, N> &b) : data_(NULL) {
+  ArrayND(const ArrayND<T, N> &b) : data_(NULL), own_data(true) {
     ResizeLike(b);
     std::memcpy(Data(), b.Data(), sizeof(T) * Size());
   }
-  ArrayND(int s0) : data_(NULL) { Resize(s0); }
-  ArrayND(int s0, int s1) : data_(NULL) { Resize(s0, s1); }
-  ArrayND(int s0, int s1, int s2) : data_(NULL) { Resize(s0, s1, s2); }
+  ArrayND(int s0) : data_(NULL), own_data(true) { Resize(s0); }
+  ArrayND(int s0, int s1) : data_(NULL), own_data(true) { Resize(s0, s1); }
+  ArrayND(int s0, int s1, int s2) : data_(NULL), own_data(true) { Resize(s0, s1, s2); }
+  ArrayND(T* data, int s0, int s1, int s2) : data_(data), own_data(false) { Resize(s0, s1, s2); }
   /// Destructor deletes pixel data.
   ~ArrayND() {
@ -91,10 +93,12 @@ class ArrayND : public BaseArray {
for (int i = N - 1; i > 0; --i) {
strides_(i - 1) = strides_(i) * shape_(i);
}
delete [] data_;
data_ = NULL;
if (Size() > 0) {
data_ = new T[Size()];
if(own_data) {
delete [] data_;
data_ = NULL;
if (Size() > 0) {
data_ = new T[Size()];
}
}
}
@ -326,6 +330,9 @@ class ArrayND : public BaseArray {
/// Pointer to the first element of the array.
T *data_;
/// Flag indicating whether this Array owns or references its data.
bool own_data;
};
/// 3D array (row, column, channel).
@ -339,6 +346,9 @@ class Array3D : public ArrayND<T, 3> {
Array3D(int height, int width, int depth=1)
: Base(height, width, depth) {
}
Array3D(T* data, int height, int width, int depth=1)
: Base(data, height, width, depth) {
}
void Resize(int height, int width, int depth=1) {
Base::Resize(height, width, depth);

@ -76,6 +76,7 @@ inline T SampleLinear(const Array3D<T> &image, float y, float x, int v = 0) {
// Downsample all channels by 2. If the image has odd width or height, the last
// row or column is ignored.
// FIXME(MatthiasF): this implementation shouldn't be in an interface file
inline void DownsampleChannelsBy2(const Array3Df &in, Array3Df *out) {
int height = in.Height() / 2;
int width = in.Width() / 2;

@ -37,22 +37,22 @@ void KRt_From_P(const Mat34 &P, Mat3 *Kp, Mat3 *Rp, Vec3 *tp) {
Q.setIdentity();
// Set K(2,1) to zero.
if (K(2,1) != 0) {
if (K(2, 1) != 0) {
double c = -K(2,2);
double s = K(2,1);
double l = sqrt(c * c + s * s);
c /= l; s /= l;
Mat3 Qx;
Qx << 1, 0, 0,
0, c, -s,
0, s, c;
Qx << 1, 0, 0,
0, c, -s,
0, s, c;
K = K * Qx;
Q = Qx.transpose() * Q;
}
// Set K(2,0) to zero.
if (K(2,0) != 0) {
double c = K(2,2);
double s = K(2,0);
if (K(2, 0) != 0) {
double c = K(2, 2);
double s = K(2, 0);
double l = sqrt(c * c + s * s);
c /= l; s /= l;
Mat3 Qy;
@ -63,9 +63,9 @@ void KRt_From_P(const Mat34 &P, Mat3 *Kp, Mat3 *Rp, Vec3 *tp) {
Q = Qy.transpose() * Q;
}
// Set K(1,0) to zero.
if (K(1,0) != 0) {
double c = -K(1,1);
double s = K(1,0);
if (K(1, 0) != 0) {
double c = -K(1, 1);
double s = K(1, 0);
double l = sqrt(c * c + s * s);
c /= l; s /= l;
Mat3 Qz;

@ -128,7 +128,7 @@ inline void Project(const Mat34 &P, const Vec3 &X, Vec2 *x) {
inline void Project(const Mat34 &P, const Mat4X &X, Mat2X *x) {
x->resize(2, X.cols());
for (size_t c = 0; c < X.cols(); ++c) {
for (int c = 0; c < X.cols(); ++c) {
Vec3 hx = P * X.col(c);
x->col(c) = hx.head<2>() / hx(2);
}
@ -142,7 +142,7 @@ inline Mat2X Project(const Mat34 &P, const Mat4X &X) {
inline void Project(const Mat34 &P, const Mat3X &X, Mat2X *x) {
x->resize(2, X.cols());
for (size_t c = 0; c < X.cols(); ++c) {
for (int c = 0; c < X.cols(); ++c) {
Vec4 HX;
HX << X.col(c), 1.0;
Vec3 hx = P * HX;
@ -154,7 +154,7 @@ inline void Project(const Mat34 &P, const Mat3X &X, const Vecu &ids, Mat2X *x) {
x->resize(2, ids.size());
Vec4 HX;
Vec3 hx;
for (size_t c = 0; c < ids.size(); ++c) {
for (int c = 0; c < ids.size(); ++c) {
HX << X.col(ids[c]), 1.0;
hx = P * HX;
x->col(c) = hx.head<2>() / hx(2);

@ -134,10 +134,12 @@ class LevenbergMarquardt {
Solver solver(A_augmented);
dx = solver.solve(g);
bool solved = (A_augmented * dx).isApprox(g);
if (!solved) LOG(ERROR) << "Failed to solve";
if (!solved) {
LOG(ERROR) << "Failed to solve";
}
if (solved && dx.norm() <= params.relative_step_threshold * x.norm()) {
results.status = RELATIVE_STEP_SIZE_TOO_SMALL;
break;
results.status = RELATIVE_STEP_SIZE_TOO_SMALL;
break;
}
if (solved) {
x_new = x + dx;

@ -36,7 +36,8 @@
namespace libmv {
void Bundle(const Tracks &tracks, Reconstruction *reconstruction) {
void EuclideanBundle(const Tracks &tracks,
EuclideanReconstruction *reconstruction) {
vector<Marker> markers = tracks.AllMarkers();
// "index" in this context is the index that V3D's optimizer will see. The
@ -44,16 +45,16 @@ void Bundle(const Tracks &tracks, Reconstruction *reconstruction) {
// not the case for the "image" numbering that arises from the tracks
// structure. The complicated mapping is necessary to convert between the two
// representations.
std::map<Camera *, int> camera_to_index;
std::map<Point *, int> point_to_index;
vector<Camera *> index_to_camera;
vector<Point *> index_to_point;
std::map<EuclideanCamera *, int> camera_to_index;
std::map<EuclideanPoint *, int> point_to_index;
vector<EuclideanCamera *> index_to_camera;
vector<EuclideanPoint *> index_to_point;
int num_cameras = 0;
int num_points = 0;
for (int i = 0; i < markers.size(); ++i) {
const Marker &marker = markers[i];
Camera *camera = reconstruction->CameraForImage(marker.image);
Point *point = reconstruction->PointForTrack(marker.track);
EuclideanCamera *camera = reconstruction->CameraForImage(marker.image);
EuclideanPoint *point = reconstruction->PointForTrack(marker.track);
if (camera && point) {
if (camera_to_index.find(camera) == camera_to_index.end()) {
camera_to_index[camera] = num_cameras;
@ -118,8 +119,8 @@ void Bundle(const Tracks &tracks, Reconstruction *reconstruction) {
std::vector<int> v3d_camera_for_measurement;
std::vector<int> v3d_point_for_measurement;
for (int i = 0; i < markers.size(); ++i) {
Camera *camera = reconstruction->CameraForImage(markers[i].image);
Point *point = reconstruction->PointForTrack(markers[i].track);
EuclideanCamera *camera = reconstruction->CameraForImage(markers[i].image);
EuclideanPoint *point = reconstruction->PointForTrack(markers[i].track);
if (!camera || !point) {
continue;
}
@ -174,4 +175,10 @@ void Bundle(const Tracks &tracks, Reconstruction *reconstruction) {
}
}
void ProjectiveBundle(const Tracks & /*tracks*/,
ProjectiveReconstruction * /*reconstruction*/) {
// TODO(keir): Implement this! This can't work until we have a better bundler
// than SSBA, since SSBA has no support for projective bundling.
}
} // namespace libmv

@ -23,7 +23,8 @@
namespace libmv {
class Reconstruction;
class EuclideanReconstruction;
class ProjectiveReconstruction;
class Tracks;
/*!
@ -41,9 +42,30 @@ class Tracks;
\note This assumes a calibrated reconstruction, e.g. the markers are
already corrected for camera intrinsics and radial distortion.
\sa Resect, Intersect, ReconstructTwoFrames
\sa EuclideanResect, EuclideanIntersect, EuclideanReconstructTwoFrames
*/
void Bundle(const Tracks &tracks, Reconstruction *reconstruction);
void EuclideanBundle(const Tracks &tracks,
EuclideanReconstruction *reconstruction);
/*!
Refine camera poses and 3D coordinates using bundle adjustment.
This routine adjusts all cameras and points in \a *reconstruction. This
assumes a full observation for reconstructed tracks; this implies that if
there is a reconstructed 3D point (a bundle) for a track, then all markers
for that track will be included in the minimization. \a tracks should
contain markers used in the initial reconstruction.
The cameras and bundles (homogeneous 3D points) are refined in-place.
\note This assumes an outlier-free set of markers.
\note This assumes that radial distortion is already corrected for, but
does not assume that the other intrinsics are.
\sa ProjectiveResect, ProjectiveIntersect, ProjectiveReconstructTwoFrames
*/
void ProjectiveBundle(const Tracks &tracks,
ProjectiveReconstruction *reconstruction);
} // namespace libmv

@ -23,6 +23,8 @@
namespace libmv {
struct Offset { char ix,iy; unsigned char fx,fy; };
CameraIntrinsics::CameraIntrinsics()
: K_(Mat3::Identity()),
image_width_(0),
@ -31,7 +33,14 @@ CameraIntrinsics::CameraIntrinsics()
k2_(0),
k3_(0),
p1_(0),
p2_(0) {}
p2_(0),
distort_(0),
undistort_(0) {}
CameraIntrinsics::~CameraIntrinsics() {
if(distort_) delete[] distort_;
if(undistort_) delete[] undistort_;
}
void CameraIntrinsics::ApplyIntrinsics(double normalized_x,
double normalized_y,
@ -89,7 +98,7 @@ void CameraIntrinsics::InvertIntrinsics(double image_x,
Solver::SolverParameters params;
Solver solver(intrinsics_cost);
Solver::Results results = solver.minimize(params, &normalized);
/*Solver::Results results =*/ solver.minimize(params, &normalized);
// TODO(keir): Better error handling.
@ -97,4 +106,117 @@ void CameraIntrinsics::InvertIntrinsics(double image_x,
*normalized_y = normalized(1);
}
// TODO(MatthiasF): downsample lookup
template<typename WarpFunction>
void CameraIntrinsics::ComputeLookupGrid(Offset* grid, int width, int height) {
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
double warp_x, warp_y;
WarpFunction(this,x,y,&warp_x,&warp_y);
int ix = int(warp_x), iy = int(warp_y);
int fx = round((warp_x-ix)*256), fy = round((warp_y-iy)*256);
if(fx == 256) { fx=0; ix++; }
if(fy == 256) { fy=0; iy++; }
#ifdef CLIP
// Use nearest border pixel
if( ix < 0 ) { ix = 0, fx = 0; }
if( iy < 0 ) { iy = 0, fy = 0; }
if( ix >= width-1 ) { ix = width-1, fx = 0; }
if( iy >= height-1 ) { iy = height-1, fy = 0; }
#else
// No offset: Avoid adding out of bounds to error.
if( ix < 0 || iy < 0 || ix >= width-1 || iy >= height-1 ) {
ix = x; iy = y; fx = fy = 0;
}
#endif
//assert( ix-x > -128 && ix-x < 128 && iy-y > -128 && iy-y < 128 );
Offset offset = { ix-x, iy-y, fx, fy };
grid[y*width+x] = offset;
}
}
}
// TODO(MatthiasF): cubic B-Spline image sampling, bilinear lookup
template<typename T,int N>
static void Warp(const Offset* grid, const T* src, T* dst,
int width, int height) {
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
Offset offset = grid[y*width+x];
const T* s = &src[((y+offset.iy)*width+(x+offset.ix))*N];
for (int i = 0; i < N; i++) {
dst[(y*width+x)*N+i] = ((s[ i] * (256-offset.fx) + s[ N+i] * offset.fx) * (256-offset.fy)
+(s[width*N+i] * (256-offset.fx) + s[width*N+N+i] * offset.fx) * offset.fy) / (256*256);
}
}
}
}
// FIXME: C++ template limitations make this complicated, but maybe there is a simpler method.
struct ApplyIntrinsicsFunction {
ApplyIntrinsicsFunction(CameraIntrinsics* intrinsics, double x, double y,
double *warp_x, double *warp_y) {
intrinsics->ApplyIntrinsics(
(x-intrinsics->principal_point_x())/intrinsics->focal_length_x(),
(y-intrinsics->principal_point_y())/intrinsics->focal_length_y(),
warp_x, warp_y);
}
};
struct InvertIntrinsicsFunction {
InvertIntrinsicsFunction(CameraIntrinsics* intrinsics, double x, double y,
double *warp_x, double *warp_y) {
intrinsics->InvertIntrinsics(x,y,warp_x,warp_y);
*warp_x = *warp_x*intrinsics->focal_length_x()+intrinsics->principal_point_x();
*warp_y = *warp_y*intrinsics->focal_length_y()+intrinsics->principal_point_y();
}
};
void CameraIntrinsics::Distort(const float* src, float* dst, int width, int height, int channels) {
if(!distort_) {
distort_ = new Offset[width*height];
ComputeLookupGrid<InvertIntrinsicsFunction>(distort_,width,height);
}
if(channels==1) Warp<float,1>(distort_,src,dst,width,height);
else if(channels==2) Warp<float,2>(distort_,src,dst,width,height);
else if(channels==3) Warp<float,3>(distort_,src,dst,width,height);
else if(channels==4) Warp<float,4>(distort_,src,dst,width,height);
//else assert("channels must be between 1 and 4");
}
void CameraIntrinsics::Distort(const unsigned char* src, unsigned char* dst, int width, int height, int channels) {
if(!distort_) {
distort_ = new Offset[width*height];
ComputeLookupGrid<InvertIntrinsicsFunction>(distort_,width,height);
}
if(channels==1) Warp<unsigned char,1>(distort_,src,dst,width,height);
else if(channels==2) Warp<unsigned char,2>(distort_,src,dst,width,height);
else if(channels==3) Warp<unsigned char,3>(distort_,src,dst,width,height);
else if(channels==4) Warp<unsigned char,4>(distort_,src,dst,width,height);
//else assert("channels must be between 1 and 4");
}
void CameraIntrinsics::Undistort(const float* src, float* dst, int width, int height, int channels) {
if(!undistort_) {
undistort_ = new Offset[width*height];
ComputeLookupGrid<ApplyIntrinsicsFunction>(undistort_,width,height);
}
if(channels==1) Warp<float,1>(undistort_,src,dst,width,height);
else if(channels==2) Warp<float,2>(undistort_,src,dst,width,height);
else if(channels==3) Warp<float,3>(undistort_,src,dst,width,height);
else if(channels==4) Warp<float,4>(undistort_,src,dst,width,height);
//else assert("channels must be between 1 and 4");
}
void CameraIntrinsics::Undistort(const unsigned char* src, unsigned char* dst, int width, int height, int channels) {
if(!undistort_) {
undistort_ = new Offset[width*height];
ComputeLookupGrid<ApplyIntrinsicsFunction>(undistort_,width,height);
}
if(channels==1) Warp<unsigned char,1>(undistort_,src,dst,width,height);
else if(channels==2) Warp<unsigned char,2>(undistort_,src,dst,width,height);
else if(channels==3) Warp<unsigned char,3>(undistort_,src,dst,width,height);
else if(channels==4) Warp<unsigned char,4>(undistort_,src,dst,width,height);
//else assert("channels must be between 1 and 4");
}
} // namespace libmv

@ -21,13 +21,17 @@
#ifndef LIBMV_SIMPLE_PIPELINE_CAMERA_INTRINSICS_H_
#define LIBMV_SIMPLE_PIPELINE_CAMERA_INTRINSICS_H_
#include "libmv/numeric/numeric.h"
#include <Eigen/Core>
typedef Eigen::Matrix<double, 3, 3> Mat3;
namespace libmv {
struct Offset;
class CameraIntrinsics {
public:
CameraIntrinsics();
~CameraIntrinsics();
const Mat3 &K() const { return K_; }
// FIXME(MatthiasF): these should be CamelCase methods
@ -44,6 +48,11 @@ class CameraIntrinsics {
double p1() const { return p1_; }
double p2() const { return p2_; }
/// Set the entire calibration matrix at once.
void SetK(const Mat3 new_k) {
K_ = new_k;
}
/// Set both x and y focal length in pixels.
void SetFocalLength(double focal_x, double focal_y) {
K_(0, 0) = focal_x;
@ -66,6 +75,11 @@ class CameraIntrinsics {
k3_ = k3;
}
void SetTangentialDistortion(double p1, double p2) {
p1_ = p1;
p2_ = p2;
}
/*!
Apply camera intrinsics to the normalized point to get image coordinates.
@ -85,7 +99,50 @@ class CameraIntrinsics {
void InvertIntrinsics(double image_x, double image_y,
double *normalized_x, double *normalized_y) const;
/*!
Distort an image using the current camera intrinsics.
The distorted image is computed in \a dst using samples from \a src.
Both buffers should be \a width x \a height x \a channels sized.
\note This is the reference implementation using floating point images.
*/
void Distort(const float* src, float* dst,
int width, int height, int channels);
/*!
Distort an image using the current camera intrinsics.
The distorted image is computed in \a dst using samples from \a src.
Both buffers should be \a width x \a height x \a channels sized.
\note This version is much faster.
*/
void Distort(const unsigned char* src, unsigned char* dst,
int width, int height, int channels);
/*!
Undistort an image using the current camera intrinsics.
The undistorted image is computed in \a dst using samples from \a src.
Both buffers should be \a width x \a height x \a channels sized.
\note This is the reference implementation using floating point images.
*/
void Undistort(const float* src, float* dst,
int width, int height, int channels);
/*!
Undistort an image using the current camera intrinsics.
The undistorted image is computed in \a dst using samples from \a src.
Both buffers should be \a width x \a height x \a channels sized.
\note This version is much faster.
*/
void Undistort(const unsigned char* src, unsigned char* dst,
int width, int height, int channels);
private:
template<typename WarpFunction> void ComputeLookupGrid(Offset* grid, int width, int height);
// The traditional intrinsics matrix from x = K[R|t]X.
Mat3 K_;
@ -99,6 +156,9 @@ class CameraIntrinsics {
// the normalized coordinates before the focal length, which makes them
// independent of image size.
double k1_, k2_, k3_, p1_, p2_;
Offset* distort_;
Offset* undistort_;
};
} // namespace libmv

@ -31,7 +31,7 @@ namespace libmv {
std::vector<Corner> Detect(const unsigned char* data, int width, int height, int stride,
int margin, int min_trackness, int min_distance) {
std::vector<Corner> corners;
data += margin*width+margin;
data += margin*width + margin;
// TODO(MatthiasF): Support targeting a feature count (binary search trackness)
int num_corners;
xy* all = fast9_detect(data, width-2*margin, height-2*margin,
@ -44,15 +44,11 @@ std::vector<Corner> Detect(const unsigned char* data, int width, int height, int
// TODO: merge with close feature suppression
xy* nonmax = nonmax_suppression(all, scores, num_corners, &num_corners);
free(all);
free(scores);
if(num_corners == 0) {
free(nonmax);
return corners;
}
// Remove too close features
// TODO(MatthiasF): A resolution independent parameter would be better than distance
// e.g. a coefficient going from 0 (no minimal distance) to 1 (optimal circle packing)
corners.reserve(num_corners);
// FIXME(MatthiasF): this method will not necessarily give all maximum markers
if(num_corners) corners.reserve(num_corners);
for(int i = 0; i < num_corners; ++i) {
xy xy = nonmax[i];
Corner a = { xy.x+margin, xy.y+margin, scores[i], 7 };
@ -60,19 +56,16 @@ std::vector<Corner> Detect(const unsigned char* data, int width, int height, int
for(int j = 0; j < corners.size(); j++) {
Corner& b = corners[j];
if ( (a.x-b.x)*(a.x-b.x)+(a.y-b.y)*(a.y-b.y) < min_distance*min_distance ) {
if( a.score > b.score ) {
// replace close lesser feature
b = a;
}
// already a nearby feature
goto skip;
}
}
// or add a new feature
corners.push_back( a );
// otherwise add the new feature
corners.push_back(a);
skip: ;
}
free(scores);
free(nonmax);
return corners;
}
}

@ -20,8 +20,9 @@
#include "libmv/base/vector.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/projection.h"
#include "libmv/multiview/fundamental.h"
#include "libmv/multiview/projection.h"
#include "libmv/numeric/levenberg_marquardt.h"
#include "libmv/numeric/numeric.h"
#include "libmv/simple_pipeline/reconstruction.h"
#include "libmv/simple_pipeline/tracks.h"
@ -63,8 +64,8 @@ void GetImagesInMarkers(const vector<Marker> &markers,
} // namespace
bool ReconstructTwoFrames(const vector<Marker> &markers,
Reconstruction *reconstruction) {
bool EuclideanReconstructTwoFrames(const vector<Marker> &markers,
EuclideanReconstruction *reconstruction) {
if (markers.size() < 16) {
return false;
}
@ -116,4 +117,102 @@ bool ReconstructTwoFrames(const vector<Marker> &markers,
return true;
}
namespace {
Mat3 DecodeF(const Vec9 &encoded_F) {
// Decode F and force it to be rank 2.
Map<const Mat3> full_rank_F(encoded_F.data(), 3, 3);
Eigen::JacobiSVD<Mat3> svd(full_rank_F, Eigen::ComputeFullU | Eigen::ComputeFullV);
Vec3 diagonal = svd.singularValues();
diagonal(2) = 0;
Mat3 F = svd.matrixU() * diagonal.asDiagonal() * svd.matrixV().transpose();
return F;
}
// This is the stupidest way to refine F known to mankind, since it requires
// doing a full SVD of F at each iteration. This uses the Sampson error.
struct FundamentalSampsonCostFunction {
public:
typedef Vec FMatrixType;
typedef Vec9 XMatrixType;
// Assumes markers are ordered by track.
FundamentalSampsonCostFunction(const vector<Marker> &markers)
: markers(markers) {}
Vec operator()(const Vec9 &encoded_F) const {
// Decode F and force it to be rank 2.
Mat3 F = DecodeF(encoded_F);
Vec residuals(markers.size() / 2);
residuals.setZero();
for (int i = 0; i < markers.size() / 2; ++i) {
const Marker &marker1 = markers[2*i + 0];
const Marker &marker2 = markers[2*i + 1];
CHECK_EQ(marker1.track, marker2.track);
Vec2 x1(marker1.x, marker1.y);
Vec2 x2(marker2.x, marker2.y);
residuals[i] = SampsonDistance(F, x1, x2);
}
return residuals;
}
const vector<Marker> &markers;
};
} // namespace
bool ProjectiveReconstructTwoFrames(const vector<Marker> &markers,
ProjectiveReconstruction *reconstruction) {
if (markers.size() < 16) {
return false;
}
int image1, image2;
GetImagesInMarkers(markers, &image1, &image2);
Mat x1, x2;
CoordinatesForMarkersInImage(markers, image1, &x1);
CoordinatesForMarkersInImage(markers, image2, &x2);
Mat3 F;
NormalizedEightPointSolver(x1, x2, &F);
// XXX Verify sampson distance.
#if 0
// Refine the resulting projection fundamental matrix using Sampson's
// approximation of geometric error. This avoids having to do a full bundle
// at the cost of some accuracy.
//
// TODO(keir): After switching to a better bundling library, use a proper
// full bundle adjust here instead of this lame bundle adjustment.
typedef LevenbergMarquardt<FundamentalSampsonCostFunction> Solver;
FundamentalSampsonCostFunction fundamental_cost(markers);
// Pack the initial F matrix into a size-9 vector.
Vec9 encoded_F = Map<Vec9>(F.data(), 3, 3);
Solver solver(fundamental_cost);
Solver::SolverParameters params;
Solver::Results results = solver.minimize(params, &encoded_F);
// TODO(keir): Check results to ensure clean termination.
// Recover F from the minimization.
F = DecodeF(encoded_F);
#endif
// Image 1 gets P = [I|0], image 2 gets arbitrary P.
Mat34 P1 = Mat34::Zero();
P1.block<3, 3>(0, 0) = Mat3::Identity();
Mat34 P2;
ProjectionsFromFundamental(F, &P1, &P2);
reconstruction->InsertCamera(image1, P1);
reconstruction->InsertCamera(image2, P2);
LG << "From two frame reconstruction got P2:\n" << P2;
return true;
}
} // namespace libmv

@ -25,11 +25,13 @@
namespace libmv {
class Marker;
class Reconstruction;
struct Marker;
class EuclideanReconstruction;
class ProjectiveReconstruction;
/*!
Initialize the \link Reconstruction reconstruction \endlink using two frames.
Initialize the \link EuclideanReconstruction reconstruction \endlink using
two frames.
\a markers should contain all \l Marker markers \endlink belonging to
tracks visible in both frames. The pose estimation of the camera for
@ -43,11 +45,30 @@ class Reconstruction;
already corrected for camera intrinsics and radial distortion.
\note This assumes an outlier-free set of markers.
\sa Resect
\sa EuclideanResect, EuclideanIntersect, EuclideanBundle
*/
bool ReconstructTwoFrames(const vector<Marker> &markers,
Reconstruction *reconstruction);
bool EuclideanReconstructTwoFrames(const vector<Marker> &markers,
EuclideanReconstruction *reconstruction);
/*!
Initialize the \link ProjectiveReconstruction reconstruction \endlink using
two frames.
\a markers should contain all \l Marker markers \endlink belonging to
tracks visible in both frames. An estimate of the projection matrices for
the two frames will get added to the reconstruction.
\note The two frames need to have both enough parallax and enough common tracks
for accurate reconstruction. At least 8 tracks are suggested.
\note The origin of the coordinate system is defined to be the camera of
the first keyframe.
\note This assumes the markers are already corrected for radial distortion.
\note This assumes an outlier-free set of markers.
\sa ProjectiveResect, ProjectiveIntersect, ProjectiveBundle
*/
bool ProjectiveReconstructTwoFrames(const vector<Marker> &markers,
ProjectiveReconstruction *reconstruction);
} // namespace libmv
#endif // LIBMV_SIMPLE_PIPELINE_INITIALIZE_RECONSTRUCTION_H

@ -31,20 +31,24 @@
namespace libmv {
struct IntersectCostFunction {
namespace {
struct EuclideanIntersectCostFunction {
public:
typedef Vec FMatrixType;
typedef Vec3 XMatrixType;
IntersectCostFunction(const vector<Marker> &markers,
const Reconstruction &reconstruction)
: markers(markers), reconstruction(reconstruction) {}
EuclideanIntersectCostFunction(const vector<Marker> &markers,
const EuclideanReconstruction &reconstruction)
: markers(markers),
reconstruction(reconstruction) {}
Vec operator()(const Vec3 &X) const {
Vec residuals(2 * markers.size());
residuals.setZero();
for (int i = 0; i < markers.size(); ++i) {
const Camera &camera = *reconstruction.CameraForImage(markers[i].image);
const EuclideanCamera &camera =
*reconstruction.CameraForImage(markers[i].image);
Vec3 projected = camera.R * X + camera.t;
projected /= projected(2);
residuals[2*i + 0] = projected(0) - markers[i].x;
@ -53,10 +57,13 @@ struct IntersectCostFunction {
return residuals;
}
const vector<Marker> &markers;
const Reconstruction &reconstruction;
const EuclideanReconstruction &reconstruction;
};
bool Intersect(const vector<Marker> &markers, Reconstruction *reconstruction) {
} // namespace
bool EuclideanIntersect(const vector<Marker> &markers,
EuclideanReconstruction *reconstruction) {
if (markers.size() < 2) {
return false;
}
@ -67,7 +74,7 @@ bool Intersect(const vector<Marker> &markers, Reconstruction *reconstruction) {
vector<Mat34> cameras;
Mat34 P;
for (int i = 0; i < markers.size(); ++i) {
Camera *camera = reconstruction->CameraForImage(markers[i].image);
EuclideanCamera *camera = reconstruction->CameraForImage(markers[i].image);
P_From_KRt(K, camera->R, camera->t, &P);
cameras.push_back(P);
}
@ -82,14 +89,14 @@ bool Intersect(const vector<Marker> &markers, Reconstruction *reconstruction) {
Vec4 Xp;
LG << "Intersecting with " << markers.size() << " markers.";
NViewTriangulateAlgebraic(points, cameras, &Xp);
Xp /= Xp(3);
// Get euclidean version.
// Get the Euclidean version of the homogeneous point.
Xp /= Xp(3);
Vec3 X = Xp.head<3>();
typedef LevenbergMarquardt<IntersectCostFunction> Solver;
typedef LevenbergMarquardt<EuclideanIntersectCostFunction> Solver;
IntersectCostFunction triangulate_cost(markers, *reconstruction);
EuclideanIntersectCostFunction triangulate_cost(markers, *reconstruction);
Solver::SolverParameters params;
Solver solver(triangulate_cost);
@ -97,7 +104,8 @@ bool Intersect(const vector<Marker> &markers, Reconstruction *reconstruction) {
// Try projecting the point; make sure it's in front of everyone.
for (int i = 0; i < cameras.size(); ++i) {
const Camera &camera = *reconstruction->CameraForImage(markers[i].image);
const EuclideanCamera &camera =
*reconstruction->CameraForImage(markers[i].image);
Vec3 x = camera.R * X + camera.t;
if (x(2) < 0) {
LOG(ERROR) << "POINT BEHIND CAMERA " << markers[i].image
@ -112,4 +120,86 @@ bool Intersect(const vector<Marker> &markers, Reconstruction *reconstruction) {
return true;
}
namespace {
struct ProjectiveIntersectCostFunction {
public:
typedef Vec FMatrixType;
typedef Vec4 XMatrixType;
ProjectiveIntersectCostFunction(
const vector<Marker> &markers,
const ProjectiveReconstruction &reconstruction)
: markers(markers), reconstruction(reconstruction) {}
Vec operator()(const Vec4 &X) const {
Vec residuals(2 * markers.size());
residuals.setZero();
for (int i = 0; i < markers.size(); ++i) {
const ProjectiveCamera &camera =
*reconstruction.CameraForImage(markers[i].image);
Vec3 projected = camera.P * X;
projected /= projected(2);
residuals[2*i + 0] = projected(0) - markers[i].x;
residuals[2*i + 1] = projected(1) - markers[i].y;
}
return residuals;
}
const vector<Marker> &markers;
const ProjectiveReconstruction &reconstruction;
};
} // namespace
bool ProjectiveIntersect(const vector<Marker> &markers,
ProjectiveReconstruction *reconstruction) {
if (markers.size() < 2) {
return false;
}
// Get the cameras to use for the intersection.
vector<Mat34> cameras;
for (int i = 0; i < markers.size(); ++i) {
ProjectiveCamera *camera = reconstruction->CameraForImage(markers[i].image);
cameras.push_back(camera->P);
}
// Stack the 2D coordinates together as required by NViewTriangulate.
Mat2X points(2, markers.size());
for (int i = 0; i < markers.size(); ++i) {
points(0, i) = markers[i].x;
points(1, i) = markers[i].y;
}
Vec4 X;
LG << "Intersecting with " << markers.size() << " markers.";
NViewTriangulateAlgebraic(points, cameras, &X);
X /= X(3);
typedef LevenbergMarquardt<ProjectiveIntersectCostFunction> Solver;
ProjectiveIntersectCostFunction triangulate_cost(markers, *reconstruction);
Solver::SolverParameters params;
Solver solver(triangulate_cost);
Solver::Results results = solver.minimize(params, &X);
(void) results; // TODO(keir): Ensure results are good.
// Try projecting the point; make sure it's in front of everyone.
for (int i = 0; i < cameras.size(); ++i) {
const ProjectiveCamera &camera =
*reconstruction->CameraForImage(markers[i].image);
Vec3 x = camera.P * X;
if (x(2) < 0) {
LOG(ERROR) << "POINT BEHIND CAMERA " << markers[i].image
<< ": " << x.transpose();
}
}
reconstruction->InsertPoint(markers[0].track, X);
// TODO(keir): Add proper error checking.
return true;
}
} // namespace libmv

@ -44,10 +44,33 @@ namespace libmv {
already corrected for camera intrinsics and radial distortion.
\note This assumes an outlier-free set of markers.
\sa EuclideanResect
*/
bool EuclideanIntersect(const vector<Marker> &markers,
EuclideanReconstruction *reconstruction);
/*!
Estimate the homogeneous coordinates of a track by intersecting rays.
This takes a set of markers, where each marker is for the same track but
different images, and reconstructs the homogeneous 3D position of that
track. Each of the frames for which there is a marker for that track must
have a corresponding reconstructed camera in \a *reconstruction.
\a markers should contain all \l Marker markers \endlink belonging to
tracks visible in all frames.
\a reconstruction should contain the cameras for all frames.
The new \l Point points \endlink will be inserted in \a reconstruction.
\note This assumes that radial distortion is already corrected for, but
does not assume that e.g. focal length and principal point are
accounted for.
\note This assumes an outlier-free set of markers.
\sa Resect
*/
bool Intersect(const vector<Marker> &markers,
Reconstruction *reconstruction);
bool ProjectiveIntersect(const vector<Marker> &markers,
ProjectiveReconstruction *reconstruction);
} // namespace libmv

@ -33,9 +33,94 @@
#endif
namespace libmv {
namespace {
void CompleteReconstruction(const Tracks &tracks,
Reconstruction *reconstruction) {
// These are "strategy" classes which make it possible to use the same code for
// both projective and euclidean reconstruction.
// FIXME(MatthiasF): OOP would achieve the same goal while avoiding
// template bloat and making interface changes much easier.
struct EuclideanPipelineRoutines {
typedef EuclideanReconstruction Reconstruction;
typedef EuclideanCamera Camera;
typedef EuclideanPoint Point;
static void Bundle(const Tracks &tracks,
EuclideanReconstruction *reconstruction) {
EuclideanBundle(tracks, reconstruction);
}
static bool Resect(const vector<Marker> &markers,
EuclideanReconstruction *reconstruction, bool final_pass) {
return EuclideanResect(markers, reconstruction, final_pass);
}
static bool Intersect(const vector<Marker> &markers,
EuclideanReconstruction *reconstruction) {
return EuclideanIntersect(markers, reconstruction);
}
static Marker ProjectMarker(const EuclideanPoint &point,
const EuclideanCamera &camera,
const CameraIntrinsics &intrinsics) {
Vec3 projected = camera.R * point.X + camera.t;
projected /= projected(2);
Marker reprojected_marker;
intrinsics.ApplyIntrinsics(projected(0),
projected(1),
&reprojected_marker.x,
&reprojected_marker.y);
reprojected_marker.image = camera.image;
reprojected_marker.track = point.track;
return reprojected_marker;
}
};
struct ProjectivePipelineRoutines {
typedef ProjectiveReconstruction Reconstruction;
typedef ProjectiveCamera Camera;
typedef ProjectivePoint Point;
static void Bundle(const Tracks &tracks,
ProjectiveReconstruction *reconstruction) {
ProjectiveBundle(tracks, reconstruction);
}
static bool Resect(const vector<Marker> &markers,
ProjectiveReconstruction *reconstruction, bool final_pass) {
return ProjectiveResect(markers, reconstruction);
}
static bool Intersect(const vector<Marker> &markers,
ProjectiveReconstruction *reconstruction) {
return ProjectiveIntersect(markers, reconstruction);
}
static Marker ProjectMarker(const ProjectivePoint &point,
const ProjectiveCamera &camera,
const CameraIntrinsics &intrinsics) {
Vec3 projected = camera.P * point.X;
projected /= projected(2);
Marker reprojected_marker;
intrinsics.ApplyIntrinsics(projected(0),
projected(1),
&reprojected_marker.x,
&reprojected_marker.y);
reprojected_marker.image = camera.image;
reprojected_marker.track = point.track;
return reprojected_marker;
}
};
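The FIXME above weighs OOP against templates; what the two structs implement is the policy-class idiom, where one template algorithm is written against a set of static functions and instantiated per policy. A minimal sketch of the same mechanism (hypothetical names, not libmv API):

```cpp
#include <cassert>

// Each "routines" policy struct supplies the static functions the
// shared template algorithm calls, mirroring how Euclidean/Projective
// PipelineRoutines drive InternalCompleteReconstruction.
struct DoubleRoutines {
  static int Apply(int x) { return 2 * x; }
};

struct SquareRoutines {
  static int Apply(int x) { return x * x; }
};

// Static dispatch: the call resolves at compile time, so there is no
// virtual-call overhead, at the cost of one instantiation per policy
// (the "template bloat" the FIXME mentions).
template <typename Routines>
int RunPipeline(int x) {
  return Routines::Apply(x);
}
```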
} // namespace
template<typename PipelineRoutines>
void InternalCompleteReconstruction(
const Tracks &tracks,
typename PipelineRoutines::Reconstruction *reconstruction) {
int max_track = tracks.MaxTrack();
int max_image = tracks.MaxImage();
int num_resects = -1;
@@ -63,13 +148,13 @@ void CompleteReconstruction(const Tracks &tracks,
LG << "Got " << reconstructed_markers.size()
<< " reconstructed markers for track " << track;
if (reconstructed_markers.size() >= 2) {
Intersect(reconstructed_markers, reconstruction);
PipelineRoutines::Intersect(reconstructed_markers, reconstruction);
num_intersects++;
LG << "Ran Intersect() for track " << track;
}
}
if (num_intersects) {
Bundle(tracks, reconstruction);
PipelineRoutines::Bundle(tracks, reconstruction);
LG << "Ran Bundle() after intersections.";
}
LG << "Did " << num_intersects << " intersects.";
@@ -93,7 +178,7 @@ void CompleteReconstruction(const Tracks &tracks,
LG << "Got " << reconstructed_markers.size()
<< " reconstructed markers for image " << image;
if (reconstructed_markers.size() >= 5) {
if (Resect(reconstructed_markers, reconstruction)) {
if (PipelineRoutines::Resect(reconstructed_markers, reconstruction, false)) {
num_resects++;
LG << "Ran Resect() for image " << image;
} else {
@@ -102,46 +187,61 @@ void CompleteReconstruction(const Tracks &tracks,
}
}
if (num_resects) {
Bundle(tracks, reconstruction);
PipelineRoutines::Bundle(tracks, reconstruction);
}
LG << "Did " << num_resects << " resects.";
}
// One last pass...
num_resects = 0;
for (int image = 0; image <= max_image; ++image) {
if (reconstruction->CameraForImage(image)) {
LG << "Skipping frame: " << image;
continue;
}
vector<Marker> all_markers = tracks.MarkersInImage(image);
vector<Marker> reconstructed_markers;
for (int i = 0; i < all_markers.size(); ++i) {
if (reconstruction->PointForTrack(all_markers[i].track)) {
reconstructed_markers.push_back(all_markers[i]);
}
}
if (reconstructed_markers.size() >= 5) {
if (PipelineRoutines::Resect(reconstructed_markers, reconstruction, true)) {
num_resects++;
LG << "Ran Resect() for image " << image;
} else {
LG << "Failed Resect() for image " << image;
}
}
}
if (num_resects) {
PipelineRoutines::Bundle(tracks, reconstruction);
}
}
Marker ProjectMarker(const Point &point,
const Camera &camera,
const CameraIntrinsics &intrinsics) {
Vec3 projected = camera.R * point.X + camera.t;
projected /= projected(2);
Marker reprojected_marker;
intrinsics.ApplyIntrinsics(projected(0),
projected(1),
&reprojected_marker.x,
&reprojected_marker.y);
reprojected_marker.image = camera.image;
reprojected_marker.track = point.track;
return reprojected_marker;
}
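Both ProjectMarker overloads perform the standard pinhole step: transform the point into camera coordinates, then divide by the depth coordinate. The same arithmetic as a dependency-free sketch (plain arrays instead of Eigen; the ApplyIntrinsics stage is omitted, so the result is in normalized coordinates):

```cpp
#include <cassert>
#include <cmath>

// projected = R * X + t, followed by the perspective divide. (u, v)
// are normalized image coordinates; intrinsics would map them to
// pixels afterwards.
void ProjectPoint(const double R[3][3], const double t[3],
                  const double X[3], double *u, double *v) {
  double p[3];
  for (int r = 0; r < 3; ++r)
    p[r] = R[r][0] * X[0] + R[r][1] * X[1] + R[r][2] * X[2] + t[r];
  *u = p[0] / p[2];
  *v = p[1] / p[2];
}
```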
double ReprojectionError(const Tracks &image_tracks,
const Reconstruction &reconstruction,
const CameraIntrinsics &intrinsics) {
template<typename PipelineRoutines>
double InternalReprojectionError(const Tracks &image_tracks,
const typename PipelineRoutines::Reconstruction &reconstruction,
const CameraIntrinsics &intrinsics) {
int num_skipped = 0;
int num_reprojected = 0;
double total_error = 0.0;
vector<Marker> markers = image_tracks.AllMarkers();
for (int i = 0; i < markers.size(); ++i) {
const Camera *camera = reconstruction.CameraForImage(markers[i].image);
const Point *point = reconstruction.PointForTrack(markers[i].track);
const typename PipelineRoutines::Camera *camera =
reconstruction.CameraForImage(markers[i].image);
const typename PipelineRoutines::Point *point =
reconstruction.PointForTrack(markers[i].track);
if (!camera || !point) {
num_skipped++;
continue;
}
num_reprojected++;
Marker reprojected_marker = ProjectMarker(*point, *camera, intrinsics);
Marker reprojected_marker =
PipelineRoutines::ProjectMarker(*point, *camera, intrinsics);
double ex = reprojected_marker.x - markers[i].x;
double ey = reprojected_marker.y - markers[i].y;
@ -172,4 +272,46 @@ double ReprojectionError(const Tracks &image_tracks,
return total_error / num_reprojected;
}
double EuclideanReprojectionError(const Tracks &image_tracks,
const EuclideanReconstruction &reconstruction,
const CameraIntrinsics &intrinsics) {
return InternalReprojectionError<EuclideanPipelineRoutines>(image_tracks,
reconstruction,
intrinsics);
}
double ProjectiveReprojectionError(
const Tracks &image_tracks,
const ProjectiveReconstruction &reconstruction,
const CameraIntrinsics &intrinsics) {
return InternalReprojectionError<ProjectivePipelineRoutines>(image_tracks,
reconstruction,
intrinsics);
}
void EuclideanCompleteReconstruction(const Tracks &tracks,
EuclideanReconstruction *reconstruction) {
InternalCompleteReconstruction<EuclideanPipelineRoutines>(tracks,
reconstruction);
}
void ProjectiveCompleteReconstruction(const Tracks &tracks,
ProjectiveReconstruction *reconstruction) {
InternalCompleteReconstruction<ProjectivePipelineRoutines>(tracks,
reconstruction);
}
void InvertIntrinsicsForTracks(const Tracks &raw_tracks,
const CameraIntrinsics &camera_intrinsics,
Tracks *calibrated_tracks) {
vector<Marker> markers = raw_tracks.AllMarkers();
for (int i = 0; i < markers.size(); ++i) {
camera_intrinsics.InvertIntrinsics(markers[i].x,
markers[i].y,
&(markers[i].x),
&(markers[i].y));
}
*calibrated_tracks = Tracks(markers);
}
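InvertIntrinsicsForTracks maps every marker from distorted pixel coordinates back to undistorted, normalized coordinates. The libmv distortion model itself is not shown in this hunk, so the sketch below assumes a simple one-parameter radial model `x_d = x * (1 + k1 * r^2)` and inverts it by fixed-point iteration, which converges quickly for small distortion:

```cpp
#include <cassert>
#include <cmath>

// Assumed one-parameter radial model (illustration only; not the
// libmv model): the forward map has no closed-form inverse, so
// iterate x <- x_d / (1 + k1 * r(x)^2) until it settles.
void UndistortRadial(double k1, double xd, double yd,
                     double *x, double *y) {
  *x = xd;
  *y = yd;
  for (int i = 0; i < 20; ++i) {
    double r2 = (*x) * (*x) + (*y) * (*y);
    *x = xd / (1.0 + k1 * r2);
    *y = yd / (1.0 + k1 * r2);
  }
}
```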
} // namespace libmv

@@ -18,6 +18,9 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
#ifndef LIBMV_SIMPLE_PIPELINE_PIPELINE_H_
#define LIBMV_SIMPLE_PIPELINE_PIPELINE_H_
#include "libmv/simple_pipeline/tracks.h"
#include "libmv/simple_pipeline/reconstruction.h"
@@ -40,15 +43,53 @@ namespace libmv {
cameras. The minimum number of cameras is two (with no 3D points) and the
minimum number of 3D points (with no estimated cameras) is 5.
\sa Resect, Intersect, Bundle
\sa EuclideanResect, EuclideanIntersect, EuclideanBundle
*/
void CompleteReconstruction(const Tracks &tracks,
Reconstruction *reconstruction);
void EuclideanCompleteReconstruction(const Tracks &tracks,
EuclideanReconstruction *reconstruction);
/*!
Estimate camera matrices and homogeneous 3D coordinates for all frames and
tracks.
This method should be used once there is an initial reconstruction in
place, for example by reconstructing from two frames that have a sufficient
baseline and number of tracks in common. This function iteratively
triangulates points that are visible by cameras that have their pose
estimated, then resections (i.e. estimates the pose) of cameras that are
not estimated yet that can see triangulated points. This process is
repeated until all points and cameras are estimated. Periodically, bundle
adjustment is run to ensure a quality reconstruction.
\a tracks should contain markers used in the reconstruction.
\a reconstruction should contain at least some 3D points or some estimated
cameras. The minimum number of cameras is two (with no 3D points) and the
minimum number of 3D points (with no estimated cameras) is 5.
\sa ProjectiveResect, ProjectiveIntersect, ProjectiveBundle
*/
void ProjectiveCompleteReconstruction(const Tracks &tracks,
ProjectiveReconstruction *reconstruction);
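The doc comments above describe alternating triangulation (intersect) and pose estimation (resect) until nothing new can be solved. A toy version of that fixed-point loop, with simplified thresholds (the real pipeline requires 2 markers per track and 5 per image, and runs bundle adjustment between passes):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <vector>

struct ToyMarker { int image; int track; };

// Not libmv API: alternate "intersect" (solve tracks seen by enough
// solved cameras) and "resect" (solve cameras seeing enough solved
// tracks) until a full pass makes no progress.
void Complete(const std::vector<ToyMarker> &markers,
              std::set<int> *solved_images,
              std::set<int> *solved_tracks) {
  bool progress = true;
  while (progress) {
    progress = false;
    // Intersect: a track seen by >= 2 solved cameras gets a 3D point.
    std::map<int, int> views_per_track;
    for (const ToyMarker &m : markers)
      if (solved_images->count(m.image)) views_per_track[m.track]++;
    for (const auto &kv : views_per_track)
      if (kv.second >= 2 && solved_tracks->insert(kv.first).second)
        progress = true;
    // Resect: an image seeing >= 2 solved tracks gets a camera pose.
    std::map<int, int> points_per_image;
    for (const ToyMarker &m : markers)
      if (solved_tracks->count(m.track)) points_per_image[m.image]++;
    for (const auto &kv : points_per_image)
      if (kv.second >= 2 && solved_images->insert(kv.first).second)
        progress = true;
    // The real pipeline runs bundle adjustment between passes here.
  }
}
```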
class CameraIntrinsics;
double ReprojectionError(const Tracks &image_tracks,
const Reconstruction &reconstruction,
const CameraIntrinsics &intrinsics);
// TODO(keir): Decide if we want these in the public API, and if so, what the
// appropriate include file is.
double EuclideanReprojectionError(const Tracks &image_tracks,
const EuclideanReconstruction &reconstruction,
const CameraIntrinsics &intrinsics);
double ProjectiveReprojectionError(
const Tracks &image_tracks,
const ProjectiveReconstruction &reconstruction,
const CameraIntrinsics &intrinsics);
void InvertIntrinsicsForTracks(const Tracks &raw_tracks,
const CameraIntrinsics &camera_intrinsics,
Tracks *calibrated_tracks);
} // namespace libmv
#endif // LIBMV_SIMPLE_PIPELINE_PIPELINE_H_

@@ -24,7 +24,25 @@
namespace libmv {
void Reconstruction::InsertCamera(int image, const Mat3 &R, const Vec3 &t) {
EuclideanReconstruction::EuclideanReconstruction() {}
EuclideanReconstruction::EuclideanReconstruction(
const EuclideanReconstruction &other) {
cameras_ = other.cameras_;
points_ = other.points_;
}
EuclideanReconstruction &EuclideanReconstruction::operator=(
const EuclideanReconstruction &other) {
if (&other != this) {
cameras_ = other.cameras_;
points_ = other.points_;
}
return *this;
}
void EuclideanReconstruction::InsertCamera(int image,
const Mat3 &R,
const Vec3 &t) {
LG << "InsertCamera " << image << ":\nR:\n"<< R << "\nt:\n" << t;
if (image >= cameras_.size()) {
cameras_.resize(image + 1);
@@ -34,7 +52,7 @@ void Reconstruction::InsertCamera(int image, const Mat3 &R, const Vec3 &t) {
cameras_[image].t = t;
}
void Reconstruction::InsertPoint(int track, const Vec3 &X) {
void EuclideanReconstruction::InsertPoint(int track, const Vec3 &X) {
LG << "InsertPoint " << track << ":\n" << X;
if (track >= points_.size()) {
points_.resize(track + 1);
@@ -43,24 +61,26 @@ void Reconstruction::InsertPoint(int track, const Vec3 &X) {
points_[track].X = X;
}
Camera *Reconstruction::CameraForImage(int image) {
return const_cast<Camera *>(
static_cast<const Reconstruction *>(this)->CameraForImage(image));
EuclideanCamera *EuclideanReconstruction::CameraForImage(int image) {
return const_cast<EuclideanCamera *>(
static_cast<const EuclideanReconstruction *>(
this)->CameraForImage(image));
}
const Camera *Reconstruction::CameraForImage(int image) const {
const EuclideanCamera *EuclideanReconstruction::CameraForImage(
int image) const {
if (image < 0 || image >= cameras_.size()) {
return NULL;
}
const Camera *camera = &cameras_[image];
const EuclideanCamera *camera = &cameras_[image];
if (camera->image == -1) {
return NULL;
}
return camera;
}
vector<Camera> Reconstruction::AllCameras() const {
vector<Camera> cameras;
vector<EuclideanCamera> EuclideanReconstruction::AllCameras() const {
vector<EuclideanCamera> cameras;
for (int i = 0; i < cameras_.size(); ++i) {
if (cameras_[i].image != -1) {
cameras.push_back(cameras_[i]);
@@ -69,24 +89,97 @@ vector<Camera> Reconstruction::AllCameras() const {
return cameras;
}
Point *Reconstruction::PointForTrack(int track) {
return const_cast<Point *>(
static_cast<const Reconstruction *>(this)->PointForTrack(track));
EuclideanPoint *EuclideanReconstruction::PointForTrack(int track) {
return const_cast<EuclideanPoint *>(
static_cast<const EuclideanReconstruction *>(this)->PointForTrack(track));
}
const Point *Reconstruction::PointForTrack(int track) const {
const EuclideanPoint *EuclideanReconstruction::PointForTrack(int track) const {
if (track < 0 || track >= points_.size()) {
return NULL;
}
const Point *point = &points_[track];
const EuclideanPoint *point = &points_[track];
if (point->track == -1) {
return NULL;
}
return point;
}
vector<Point> Reconstruction::AllPoints() const {
vector<Point> points;
vector<EuclideanPoint> EuclideanReconstruction::AllPoints() const {
vector<EuclideanPoint> points;
for (int i = 0; i < points_.size(); ++i) {
if (points_[i].track != -1) {
points.push_back(points_[i]);
}
}
return points;
}
void ProjectiveReconstruction::InsertCamera(int image,
const Mat34 &P) {
LG << "InsertCamera " << image << ":\nP:\n"<< P;
if (image >= cameras_.size()) {
cameras_.resize(image + 1);
}
cameras_[image].image = image;
cameras_[image].P = P;
}
void ProjectiveReconstruction::InsertPoint(int track, const Vec4 &X) {
LG << "InsertPoint " << track << ":\n" << X;
if (track >= points_.size()) {
points_.resize(track + 1);
}
points_[track].track = track;
points_[track].X = X;
}
ProjectiveCamera *ProjectiveReconstruction::CameraForImage(int image) {
return const_cast<ProjectiveCamera *>(
static_cast<const ProjectiveReconstruction *>(
this)->CameraForImage(image));
}
const ProjectiveCamera *ProjectiveReconstruction::CameraForImage(
int image) const {
if (image < 0 || image >= cameras_.size()) {
return NULL;
}
const ProjectiveCamera *camera = &cameras_[image];
if (camera->image == -1) {
return NULL;
}
return camera;
}
vector<ProjectiveCamera> ProjectiveReconstruction::AllCameras() const {
vector<ProjectiveCamera> cameras;
for (int i = 0; i < cameras_.size(); ++i) {
if (cameras_[i].image != -1) {
cameras.push_back(cameras_[i]);
}
}
return cameras;
}
ProjectivePoint *ProjectiveReconstruction::PointForTrack(int track) {
return const_cast<ProjectivePoint *>(
static_cast<const ProjectiveReconstruction *>(this)->PointForTrack(track));
}
const ProjectivePoint *ProjectiveReconstruction::PointForTrack(int track) const {
if (track < 0 || track >= points_.size()) {
return NULL;
}
const ProjectivePoint *point = &points_[track];
if (point->track == -1) {
return NULL;
}
return point;
}
vector<ProjectivePoint> ProjectiveReconstruction::AllPoints() const {
vector<ProjectivePoint> points;
for (int i = 0; i < points_.size(); ++i) {
if (points_[i].track != -1) {
points.push_back(points_[i]);

@@ -27,7 +27,7 @@
namespace libmv {
/*!
A Camera is the location and rotation of the camera viewing \a image.
An EuclideanCamera is the location and rotation of the camera viewing \a image.
\a image identifies which image from \l Tracks this camera represents.
\a R is a 3x3 matrix representing the rotation of the camera.
@@ -35,9 +35,9 @@ namespace libmv {
\sa Reconstruction
*/
struct Camera {
Camera() : image(-1) {}
Camera(const Camera &c) : image(c.image), R(c.R), t(c.t) {}
struct EuclideanCamera {
EuclideanCamera() : image(-1) {}
EuclideanCamera(const EuclideanCamera &c) : image(c.image), R(c.R), t(c.t) {}
int image;
Mat3 R;
@@ -52,32 +52,42 @@ struct Camera {
\sa Reconstruction
*/
struct Point {
Point() : track(-1) {}
Point(const Point &p) : track(p.track), X(p.X) {}
struct EuclideanPoint {
EuclideanPoint() : track(-1) {}
EuclideanPoint(const EuclideanPoint &p) : track(p.track), X(p.X) {}
int track;
Vec3 X;
};
/*!
The Reconstruction class stores \link Camera cameras \endlink and \link Point points \endlink.
The EuclideanReconstruction class stores \link EuclideanCamera cameras
\endlink and \link EuclideanPoint points \endlink.
The Reconstruction container is intended as the store of 3D reconstruction data
to be used with the MultiView API.
The EuclideanReconstruction container is intended as the store of 3D
reconstruction data to be used with the MultiView API.
The container has lookups to query a \a Camera from an \a image or a \a Point from a \a track.
The container has lookups to query a \a EuclideanCamera from an \a image or
a \a EuclideanPoint from a \a track.
\sa EuclideanCamera, EuclideanPoint
*/
class Reconstruction {
class EuclideanReconstruction {
public:
// Default constructor starts with no cameras.
EuclideanReconstruction();
/// Copy constructor.
EuclideanReconstruction(const EuclideanReconstruction &other);
EuclideanReconstruction &operator=(const EuclideanReconstruction &other);
/*!
Insert a camera into the set. If there is already a camera for the given
\a image, the existing camera is replaced. If there is no
camera for the given \a image, a new one is added.
\a image, the existing camera is replaced. If there is no camera for the
given \a image, a new one is added.
\a image is the key used to retrieve the cameras with the
other methods in this class.
\a image is the key used to retrieve the cameras with the other methods
in this class.
\note You should use the same \a image identifier as in \l Tracks.
*/
@@ -96,22 +106,110 @@ class Reconstruction {
void InsertPoint(int track, const Vec3 &X);
/// Returns a pointer to the camera corresponding to \a image.
Camera *CameraForImage(int image);
const Camera *CameraForImage(int image) const;
EuclideanCamera *CameraForImage(int image);
const EuclideanCamera *CameraForImage(int image) const;
/// Returns all cameras.
vector<Camera> AllCameras() const;
vector<EuclideanCamera> AllCameras() const;
/// Returns a pointer to the point corresponding to \a track.
Point *PointForTrack(int track);
const Point *PointForTrack(int track) const;
EuclideanPoint *PointForTrack(int track);
const EuclideanPoint *PointForTrack(int track) const;
/// Returns all points.
vector<Point> AllPoints() const;
vector<EuclideanPoint> AllPoints() const;
private:
vector<Camera> cameras_;
vector<Point> points_;
vector<EuclideanCamera> cameras_;
vector<EuclideanPoint> points_;
};
/*!
A ProjectiveCamera is the projection matrix for the camera of \a image.
\a image identifies which image from \l Tracks this camera represents.
\a P is the 3x4 projection matrix.
\sa ProjectiveReconstruction
*/
struct ProjectiveCamera {
ProjectiveCamera() : image(-1) {}
ProjectiveCamera(const ProjectiveCamera &c) : image(c.image), P(c.P) {}
int image;
Mat34 P;
};
/*!
A ProjectivePoint is the homogeneous 3D location of a track.
\a track identifies which track from \l Tracks this point corresponds to.
\a X is the homogeneous 3D position of the track.
\sa ProjectiveReconstruction
*/
struct ProjectivePoint {
ProjectivePoint() : track(-1) {}
ProjectivePoint(const ProjectivePoint &p) : track(p.track), X(p.X) {}
int track;
Vec4 X;
};
/*!
The ProjectiveReconstruction class stores \link ProjectiveCamera cameras
\endlink and \link ProjectivePoint points \endlink.
The ProjectiveReconstruction container is intended as the store of 3D
reconstruction data to be used with the MultiView API.
The container has lookups to query a \a ProjectiveCamera from an \a image or
a \a ProjectivePoint from a \a track.
\sa ProjectiveCamera, ProjectivePoint
*/
class ProjectiveReconstruction {
public:
/*!
Insert a camera into the set. If there is already a camera for the given
\a image, the existing camera is replaced. If there is no camera for the
given \a image, a new one is added.
\a image is the key used to retrieve the cameras with the other methods
in this class.
\note You should use the same \a image identifier as in \l Tracks.
*/
void InsertCamera(int image, const Mat34 &P);
/*!
Insert a point into the reconstruction. If there is already a point for
the given \a track, the existing point is replaced. If there is no point
for the given \a track, a new one is added.
\a track is the key used to retrieve the points with the
other methods in this class.
\note You should use the same \a track identifier as in \l Tracks.
*/
void InsertPoint(int track, const Vec4 &X);
/// Returns a pointer to the camera corresponding to \a image.
ProjectiveCamera *CameraForImage(int image);
const ProjectiveCamera *CameraForImage(int image) const;
/// Returns all cameras.
vector<ProjectiveCamera> AllCameras() const;
/// Returns a pointer to the point corresponding to \a track.
ProjectivePoint *PointForTrack(int track);
const ProjectivePoint *PointForTrack(int track) const;
/// Returns all points.
vector<ProjectivePoint> AllPoints() const;
private:
vector<ProjectiveCamera> cameras_;
vector<ProjectivePoint> points_;
};
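Both reconstruction classes above share one storage scheme: a vector indexed by image/track id, grown on insert, with -1 as an "empty slot" sentinel so lookups can return NULL. A standalone toy of that scheme (not the libmv class):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Slots default-construct with image == -1; lookups return NULL for
// unset or out-of-range keys, matching CameraForImage's behavior.
struct ToyCamera {
  ToyCamera() : image(-1), focal(0.0) {}
  int image;
  double focal;
};

class ToyReconstruction {
 public:
  void InsertCamera(int image, double focal) {
    if (image >= static_cast<int>(cameras_.size()))
      cameras_.resize(image + 1);  // gaps stay at the -1 sentinel
    cameras_[image].image = image;
    cameras_[image].focal = focal;
  }
  const ToyCamera *CameraForImage(int image) const {
    if (image < 0 || image >= static_cast<int>(cameras_.size()))
      return NULL;
    return cameras_[image].image == -1 ? NULL : &cameras_[image];
  }
 private:
  std::vector<ToyCamera> cameras_;
};
```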
} // namespace libmv

@@ -18,6 +18,8 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
#include <cstdio>
#include "libmv/base/vector.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/euclidean_resection.h"
@@ -57,14 +59,14 @@ Mat3 RotationFromEulerVector(Vec3 euler_vector) {
// to avoid issues with the rotation representation. R' is derived from a
// euler vector encoding the rotation in 3 parameters; the direction is the
// axis to rotate around and the magnitude is the amount of the rotation.
struct ResectCostFunction {
struct EuclideanResectCostFunction {
public:
typedef Vec FMatrixType;
typedef Vec6 XMatrixType;
ResectCostFunction(const vector<Marker> &markers,
const Reconstruction &reconstruction,
const Mat3 initial_R)
EuclideanResectCostFunction(const vector<Marker> &markers,
const EuclideanReconstruction &reconstruction,
const Mat3 initial_R)
: markers(markers),
reconstruction(reconstruction),
initial_R(initial_R) {}
@@ -80,7 +82,8 @@ struct ResectCostFunction {
Vec residuals(2 * markers.size());
residuals.setZero();
for (int i = 0; i < markers.size(); ++i) {
const Point &point = *reconstruction.PointForTrack(markers[i].track);
const EuclideanPoint &point =
*reconstruction.PointForTrack(markers[i].track);
Vec3 projected = R * point.X + t;
projected /= projected(2);
residuals[2*i + 0] = projected(0) - markers[i].x;
@@ -90,13 +93,14 @@ struct ResectCostFunction {
}
const vector<Marker> &markers;
const Reconstruction &reconstruction;
const EuclideanReconstruction &reconstruction;
const Mat3 &initial_R;
};
} // namespace
bool Resect(const vector<Marker> &markers, Reconstruction *reconstruction) {
bool EuclideanResect(const vector<Marker> &markers,
EuclideanReconstruction *reconstruction, bool final_pass) {
if (markers.size() < 5) {
return false;
}
@@ -110,9 +114,10 @@ bool Resect(const vector<Marker> &markers, Reconstruction *reconstruction) {
Mat3 R;
Vec3 t;
if (0 || !euclidean_resection::EuclideanResection(points_2d, points_3d, &R, &t)) {
// printf("Resection for image %d failed\n", markers[0].image);
LG << "Resection for image " << markers[0].image << " failed;"
<< " trying fallback projective resection.";
return false;
if (!final_pass) return false;
// Euclidean resection failed. Fall back to projective resection, which is
// less reliable but better conditioned when there are many points.
Mat34 P;
@@ -147,10 +152,10 @@ bool Resect(const vector<Marker> &markers, Reconstruction *reconstruction) {
}
// Refine the result.
typedef LevenbergMarquardt<ResectCostFunction> Solver;
typedef LevenbergMarquardt<EuclideanResectCostFunction> Solver;
// Give the cost our initial guess for R.
ResectCostFunction resect_cost(markers, *reconstruction, R);
EuclideanResectCostFunction resect_cost(markers, *reconstruction, R);
// Encode the initial parameters: start with zero delta rotation, and the
// guess for t obtained from resection.
@@ -174,4 +179,93 @@ bool Resect(const vector<Marker> &markers, Reconstruction *reconstruction) {
return true;
}
namespace {
// Directly parameterize the projection matrix, P, which is a 12-parameter
// homogeneous entity. In theory P should be parameterized with only 11
// parameters, but in practice it works fine to let the extra degree of
// freedom drift.
struct ProjectiveResectCostFunction {
public:
typedef Vec FMatrixType;
typedef Vec12 XMatrixType;
ProjectiveResectCostFunction(const vector<Marker> &markers,
const ProjectiveReconstruction &reconstruction)
: markers(markers),
reconstruction(reconstruction) {}
Vec operator()(const Vec12 &vector_P) const {
// Unpack P from vector_P.
Map<const Mat34> P(vector_P.data(), 3, 4);
// Compute the reprojection error for each coordinate.
Vec residuals(2 * markers.size());
residuals.setZero();
for (int i = 0; i < markers.size(); ++i) {
const ProjectivePoint &point =
*reconstruction.PointForTrack(markers[i].track);
Vec3 projected = P * point.X;
projected /= projected(2);
residuals[2*i + 0] = projected(0) - markers[i].x;
residuals[2*i + 1] = projected(1) - markers[i].y;
}
return residuals;
}
const vector<Marker> &markers;
const ProjectiveReconstruction &reconstruction;
};
} // namespace
bool ProjectiveResect(const vector<Marker> &markers,
ProjectiveReconstruction *reconstruction) {
if (markers.size() < 5) {
return false;
}
// Stack the homogeneous 3D points as the columns of a matrix.
Mat2X points_2d = PointMatrixFromMarkers(markers);
Mat4X points_3d_homogeneous(4, markers.size());
for (int i = 0; i < markers.size(); i++) {
points_3d_homogeneous.col(i) =
reconstruction->PointForTrack(markers[i].track)->X;
}
LG << "Points for resect:\n" << points_2d;
// Resect the camera from the 2D-3D correspondences.
Mat34 P;
resection::Resection(points_2d, points_3d_homogeneous, &P);
// Flip the sign of P if necessary to keep the point in front of the camera.
if ((P * points_3d_homogeneous.col(0))(2) < 0) {
LG << "Point behind camera; switch sign.";
P = -P;
}
// TODO(keir): Check if error is horrible and fail in that case.
// Refine the resulting projection matrix using geometric error.
typedef LevenbergMarquardt<ProjectiveResectCostFunction> Solver;
ProjectiveResectCostFunction resect_cost(markers, *reconstruction);
// Pack the initial P matrix into a size-12 vector.
Vec12 vector_P = Map<Vec12>(P.data());
Solver solver(resect_cost);
Solver::SolverParameters params;
Solver::Results results = solver.minimize(params, &vector_P);
// TODO(keir): Check results to ensure clean termination.
// Unpack the projection matrix.
P = Map<Mat34>(vector_P.data(), 3, 4);
LG << "Resection for image " << markers[0].image << " got:\n"
<< "P:\n" << P;
reconstruction->InsertCamera(markers[0].image, P);
return true;
}
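The sign flip above relies on P being homogeneous (defined only up to scale): if the first point lands behind the camera, negating every entry of P fixes the cheirality without changing any reprojection. A minimal standalone version of that check:

```cpp
#include <cassert>

// Depth of homogeneous point X under camera P is the third coordinate
// of P * X; if it is negative, negate all 12 entries of P (legal,
// since P is scale-invariant) to put the point in front.
void FlipIfBehind(double P[3][4], const double X[4]) {
  double depth = 0.0;
  for (int j = 0; j < 4; ++j)
    depth += P[2][j] * X[j];
  if (depth < 0.0)
    for (int r = 0; r < 3; ++r)
      for (int j = 0; j < 4; ++j)
        P[r][j] = -P[r][j];
}
```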
} // namespace libmv

@@ -22,11 +22,13 @@
#define LIBMV_SIMPLE_PIPELINE_RESECT_H
#include "libmv/base/vector.h"
#include "libmv/simple_pipeline/tracks.h"
#include "libmv/simple_pipeline/reconstruction.h"
namespace libmv {
/*!
Estimate the pose of a camera from 2D to 3D correspondences.
Estimate the Euclidean pose of a camera from 2D to 3D correspondences.
This takes a set of markers visible in one frame (which is the one to
resection), such that the markers are also reconstructed in 3D in the
@@ -47,10 +49,37 @@ namespace libmv {
\return True if the resection was successful, false otherwise.
\sa ReconstructTwoFrames
\sa EuclideanIntersect, EuclideanReconstructTwoFrames
*/
bool Resect(const vector<Marker> &markers,
Reconstruction *reconstruction);
bool EuclideanResect(const vector<Marker> &markers,
EuclideanReconstruction *reconstruction, bool final_pass);
/*!
Estimate the projective pose of a camera from 2D to 3D correspondences.
This takes a set of markers visible in one frame (which is the one to
resection), such that the markers are also reconstructed in a projective
frame in the reconstruction object, and solves for the projective matrix of
the camera for that frame.
\a markers should contain \l Marker markers \endlink belonging to tracks
visible in the one frame to be resectioned. Each of the tracks associated
with the markers must have a corresponding reconstructed homogeneous 3D
position in the \a *reconstruction object.
\a *reconstruction should contain the homogeneous 3D points associated with
the tracks for the markers present in \a markers.
\note This assumes radial distortion has already been corrected, but
otherwise works for uncalibrated sequences.
\note This assumes an outlier-free set of markers.
\return True if the resection was successful, false otherwise.
\sa ProjectiveIntersect, ProjectiveReconstructTwoFrames
*/
bool ProjectiveResect(const vector<Marker> &markers,
ProjectiveReconstruction *reconstruction);
} // namespace libmv

@@ -26,6 +26,10 @@
namespace libmv {
Tracks::Tracks(const Tracks &other) {
markers_ = other.markers_;
}
Tracks::Tracks(const vector<Marker> &markers) : markers_(markers) {}
void Tracks::Insert(int image, int track, double x, double y) {

@@ -57,6 +57,9 @@ class Tracks {
public:
Tracks() {}
// Copy constructor for a tracks object.
Tracks(const Tracks &other);
/// Construct a new tracks object using the given markers to start.
Tracks(const vector<Marker> &markers);

@@ -27,8 +27,6 @@ namespace libmv {
class RegionTracker {
public:
// FIXME(MatthiasF): Our public API requires C++ support (constructors, virtuals, templates).
// We need to either simplify it directly or add a C ABI compatible wrapper.
RegionTracker() {}
virtual ~RegionTracker() {}
@@ -39,9 +37,6 @@ class RegionTracker {
image2. If no guess is available, (\a x1, \a y1) is a good start. Returns
true on success, false otherwise
*/
// FIXME(MatthiasF): FloatImage (aka Array3Df (aka Array3D<float>, aka
// ArrayND<float, 3>)) is an overly complicated generic (i.e templated) type.
// Using ( int size, float* old, float* new ) would be simpler and more flexible.
virtual bool Track(const FloatImage &image1,
const FloatImage &image2,
double x1, double y1,

@@ -1,13 +0,0 @@
diff --git a/src/libmv/simple_pipeline/pipeline.cc b/src/libmv/simple_pipeline/pipeline.cc
index 652d70c..cf3d9e6 100644
--- a/src/libmv/simple_pipeline/pipeline.cc
+++ b/src/libmv/simple_pipeline/pipeline.cc
@@ -72,7 +72,7 @@ void CompleteReconstruction(const Tracks &tracks,
// Do all possible resections.
num_resects = 0;
- for (int image = 0; image < max_image; ++image) {
+ for (int image = 0; image <= max_image; ++image) {
if (reconstruction->CameraForImage(image)) {
LG << "Skipping frame: " << image;
continue;

@@ -1,13 +0,0 @@
diff --git a/src/libmv/simple_pipeline/pipeline.cc b/src/libmv/simple_pipeline/pipeline.cc
index 652d70c..a60ba40 100644
--- a/src/libmv/simple_pipeline/pipeline.cc
+++ b/src/libmv/simple_pipeline/pipeline.cc
@@ -42,7 +42,7 @@ void CompleteReconstruction(const Tracks &tracks,
while (num_resects != 0 || num_intersects != 0) {
// Do all possible intersections.
num_intersects = 0;
- for (int track = 0; track < max_track; ++track) {
+ for (int track = 0; track <= max_track; ++track) {
if (reconstruction->PointForTrack(track)) {
LG << "Skipping point: " << track;
continue;

@@ -1,6 +1,4 @@
v3d_verbosity.patch
max_track.patch
max_image.patch
snrptinf_fix.patch
bundle_tweaks.patch
fast.patch