# Structure from motion

Published on February 27, 2008

Author: Denise

Source: authorstream.com

## 3D from Pictures

Jiajun Zhu, Sept. 29, 2006, University of Virginia

## What can we compute from a collection of pictures?

- 3D structure
- Camera poses and parameters

This is one of the most important and exciting results in computer vision from the 1990s. It is difficult in practice, largely because of the numerical computation involved, but it is powerful: this year alone, two SIGGRAPH papers (plus several sketches) build on it. [A few demo videos are shown here.]

## Now let's see how this works

Input: a collection of pictures.
Output: (1) camera parameters; (2) sparse 3D scene structure.

## Consider one camera first

What is the relation between pixels and rays in space? Under a simplified projective camera model, a world point X projects to a pixel x via

x = P X = K [R | t] X

where P is a 3x4 matrix with 7 degrees of freedom: 1 from the focal length, 3 from rotation, and 3 from translation.

Given one image, we observe x. Can we recover X or P?

- If P is known, what do we know about X? (X is constrained to a ray through the camera center.)
- If X is known, can we recover P? There are 7 unknowns, and each correspondence X ↔ x gives 2 equations, so we need 2n ≥ 7, i.e.
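This simplified model can be sketched numerically. All values below (the focal length, rotation, and translation) are hypothetical, chosen only for illustration:

```python
import numpy as np

# Simplified camera: intrinsics K with a single focal length f (1 DOF).
f = 800.0
K = np.array([[f, 0.0, 0.0],
              [0.0, f, 0.0],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                          # rotation (3 DOF in general)
t = np.array([[0.0], [0.0], [5.0]])    # translation (3 DOF)

P = K @ np.hstack([R, t])              # the 3x4 projection matrix P = K[R | t]

X = np.array([1.0, 2.0, 10.0, 1.0])    # a homogeneous world point
x = P @ X                              # homogeneous pixel coordinates
pixel = x[:2] / x[2]                   # perspective divide
```

Note that the perspective divide discards the depth along the ray, which is exactly why a single image cannot recover X from x.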
n ≥ 4.

## This is a camera calibration problem

Input: n ≥ 4 world-to-image point correspondences {Xi ↔ xi}.
Output: the camera parameters P = K[R|T].

## Direct Linear Transform (DLT)

With n ≥ 4 points, rearrange the projection equations into a homogeneous linear system A p = 0 in the entries p of P. Minimize ||A p|| subject to the constraint ||p|| = 1 using the SVD: p is the last column vector of V (the right singular vector with the smallest singular value).

## Calibration in practice

Objective: given n ≥ 4 3D-to-2D point correspondences {Xi ↔ xi'}, determine P.

Algorithm:
1. Linear solution: normalization, then DLT.
2. Minimization of geometric error: iterative optimization (Levenberg-Marquardt).
3. Denormalization.

## How to recover K, R and t from P?

Objective: given the camera projection matrix P, decompose P = K[R|t].

Algorithm: let M be the left 3x3 block of P. Perform an RQ decomposition M = KR, so that K is the upper-triangular matrix and R is orthonormal; t can then be read off by multiplying the last column of P by the inverse of K.

This is what we learn from one camera.

## Let's consider two cameras

(i) Correspondence geometry: given an image point x in the first image, how does this constrain the position of the corresponding point x' in the second image?
(ii) Camera geometry (motion): given a set of corresponding image points {xi ↔ x'i}, i = 1, …, n, what are the cameras P and P' for the two views?

## The fundamental matrix F

x'ᵀ F x = 0

What does the fundamental matrix tell us? F relates corresponding pixels between the two views. If the intrinsic parameters (i.e. the focal lengths, in our camera model) of both cameras are known as K and K', one can derive (not here) that

K'ᵀ F K = [t]× R

where [t]× is the cross-product matrix of t, and t and R are the translation and rotation of the second camera, i.e.
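The DLT step above can be sketched on synthetic data. The ground-truth camera and the world points below are hypothetical; note that this generic linear system estimates all 12 entries of P, so six points are used here (each point contributes two equations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground-truth camera (hypothetical values) to calibrate against.
K = np.array([[700.0, 0.0, 0.0],
              [0.0, 700.0, 0.0],
              [0.0, 0.0, 1.0]])
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(R) < 0:
    R = -R                       # make it a proper rotation
t = np.array([[0.1], [0.2], [4.0]])
P_true = K @ np.hstack([R, t])

# Six world points {Xi} and their observed projections {xi}.
n = 6
Xw = np.hstack([rng.uniform(-1.0, 1.0, (n, 3)), np.ones((n, 1))])
xh = (P_true @ Xw.T).T
uv = xh[:, :2] / xh[:, 2:3]

# Stack two equations per correspondence into A p = 0, p = stacked rows of P.
A = []
for Xi, (u, v) in zip(Xw, uv):
    A.append(np.concatenate([np.zeros(4), -Xi, v * Xi]))
    A.append(np.concatenate([Xi, np.zeros(4), -u * Xi]))
A = np.asarray(A)

# Minimize ||A p|| subject to ||p|| = 1: p is the right singular vector
# with the smallest singular value, i.e. the last row of V^T.
_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)
P_est /= P_est[2, 3]             # remove the arbitrary projective scale

# To recover K, R, t from P_est: RQ-decompose its left 3x3 block
# (e.g. scipy.linalg.rq), then multiply the last column by inv(K).
```

With noisy real measurements, the normalization step of the full algorithm (centering and scaling the data before the DLT) matters a great deal for conditioning; it is omitted here because the data are synthetic and noiseless.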
P = [I | 0] and P' = [R | t].

## The good thing is …

The fundamental matrix F can be computed from a set of pixel correspondences {x ↔ x'} alone.

## Computing F from correspondences

Separate the known from the unknown: each correspondence makes x'ᵀ F x = 0 a linear equation in the nine entries of F, with the pixel coordinates as data and the entries of F as the unknowns. How many correspondences do we need? Since F is defined only up to scale, eight equations suffice.

## What can we do now?

1. Given 8 correspondences {x ↔ x'}, we can compute F.
2. Given F, K and K', we can estimate the relative translation and rotation of the two cameras: P = [I | 0] and P' = [R | t].

So, given K and K' together with 8 correspondences {x ↔ x'}, we can compute P = [I | 0] and P' = [R | t]. This answers the second question from before: camera geometry (motion), i.e. given corresponding image points {xi ↔ x'i}, what are the cameras P and P' for the two views?

## But how to make this automatic?

(1) Estimating the intrinsics K and K' (auto-calibration) will not be discussed here; it involves a good deal of projective geometry.
(2) Let's see how to find correspondences automatically (i.e.
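The linear (eight-point) estimation of F can be sketched on synthetic data. The rotation, translation, and points below are hypothetical; with K = K' = I, F coincides with [t]× R:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-view setup (hypothetical values): P = [I | 0], P' = [R | t].
angle = 0.3
R = np.array([[np.cos(angle), 0.0, np.sin(angle)],
              [0.0, 1.0, 0.0],
              [-np.sin(angle), 0.0, np.cos(angle)]])
t = np.array([1.0, 0.2, 0.1])
tx = np.array([[0.0, -t[2], t[1]],
               [t[2], 0.0, -t[0]],
               [-t[1], t[0], 0.0]])    # [t]x, the cross-product matrix

# Project 8 random points (in front of both cameras) into both views.
X = rng.uniform(-1.0, 1.0, (8, 3)) + np.array([0.0, 0.0, 5.0])
x1 = X / X[:, 2:3]                     # first camera:  P  = [I | 0]
x2 = (R @ X.T).T + t                   # second camera: P' = [R | t]
x2 /= x2[:, 2:3]

# Each correspondence gives one equation x2^T F x1 = 0, linear in the
# nine entries of F: stack the equations and solve by SVD.
A = np.array([np.outer(b, a).ravel() for a, b in zip(x1, x2)])
_, _, Vt = np.linalg.svd(A)
F_est = Vt[-1].reshape(3, 3)           # null vector of A, i.e. F up to scale
```

With real, noisy pixels one would first normalize the coordinates (Hartley's normalized eight-point algorithm) and enforce the rank-2 constraint on the estimate; both refinements are skipped in this noiseless sketch.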
feature detection and matching).

## Lowe's SIFT features

Invariant to position, orientation and scale.

Scale: look for strong responses of the difference-of-Gaussians (DoG) filter over scale space, and consider only local maxima in both position and scale.

Orientation: create a histogram of local gradient directions computed at the selected scale, and assign the canonical orientation at the peak of the smoothed histogram.

Each keypoint thus gets stable 2D coordinates (x, y, scale, orientation).

## Simple matching

For each feature in image 1, find the feature in image 2 that is most similar (e.g. by correlating the two descriptor vectors), and vice versa; keep only the mutual best matches. On top of this one can design a very robust RANSAC-type algorithm.

## What have we learnt so far?

## Consider more than two cameras

Objective: given N images {Q1, …, QN} with reasonable overlaps, compute the N camera projection matrices {P1, …, PN}, where each Pi = Ki[Ri | ti]: Ki is the intrinsic parameter matrix, and Ri and ti are the rotation and translation respectively.

Algorithm:
(1) Find M tracks T = {T1, T2, …, TM}:
  (i) for every pair of images {Qi, Qj}, detect SIFT feature points in Qi and Qj and match them robustly (RANSAC);
  (ii) match features across multiple images to construct tracks.
(2) Estimate {P1 … PN} and a 3D position for each track {X1 … XM}:
  (i) select one well-conditioned pair of images {Q1', Q2'}; let T1'2' be their associated overlapping tracks;
  (ii) estimate K1' and K2', and compute {P1', P2'} and the 3D positions of T1'2' from the fundamental matrix;
  (iii) incrementally add a new camera Pk to the system, estimating its camera matrix by DLT (calibration);
  (iv) repeat (iii) until all cameras are estimated.
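The mutual-best-match rule can be sketched as follows. The descriptors are random stand-ins for real 128-dimensional SIFT vectors, and Euclidean distance is used here in place of correlation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical SIFT-like descriptors for two images (one row per feature).
desc1 = rng.standard_normal((5, 128))
# Features 3, 0, 4 of image 1 reappear in image 2, slightly perturbed.
desc2 = desc1[[3, 0, 4]] + 0.01 * rng.standard_normal((3, 128))

# Pairwise distances between every feature in image 1 and image 2.
d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)

# Best match in each direction; keep only the mutual best matches.
best12 = d.argmin(axis=1)   # for each feature in image 1, nearest in image 2
best21 = d.argmin(axis=0)   # and vice versa
matches = [(i, j) for i, j in enumerate(best12) if best21[j] == i]
```

Features of image 1 with no true counterpart still get a nearest neighbor in image 2, but the reverse lookup points elsewhere, so the mutual-best test rejects them; this is what makes the simple scheme usable as input to a RANSAC stage.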
However, this won't work! The purely linear estimates are too fragile in practice. Two changes make the algorithm robust:

- in step (2)(ii), after the linear estimate, non-linearly minimize the reprojection errors (Levenberg-Marquardt);
- in step (2)(iii), use DLT only to initialize the new camera, then non-linearly optimize the whole system.

This replaces the fragile linear steps with more robust non-linear optimization.

## Tired?

## Recall the camera calibration algorithm

Objective: given n ≥ 4 3D-to-2D point correspondences {Xi ↔ xi'}, determine P.

Algorithm: linear solution (normalization, then DLT); minimization of geometric error by iterative optimization (Levenberg-Marquardt); denormalization.

## We are lucky!

For the first time, a huge amount of visual data is easily accessible, and high-level descriptions of these data are also becoming available. How do we explore them? Analyse them? Use them wisely?
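The non-linear refinement described above minimizes the total reprojection error over all cameras and track points. A minimal sketch of that objective (the function names and the toy data are hypothetical; a real system would hand this residual to an optimizer such as scipy.optimize.least_squares):

```python
import numpy as np

def reproject(P, X):
    """Project homogeneous 3D points X (n x 4) with a 3x4 camera matrix P."""
    x = (P @ X.T).T
    return x[:, :2] / x[:, 2:3]

def reprojection_error(cameras, points, observations):
    """Sum of squared pixel errors over all observations.

    cameras:      list of 3x4 camera matrices P_k
    points:       n x 4 array of homogeneous track points X_j
    observations: list of (camera index k, point index j, observed pixel uv)
    """
    err = 0.0
    for k, j, uv in observations:
        pred = reproject(cameras[k], points[j:j + 1])[0]
        err += np.sum((pred - uv) ** 2)
    return err

# Minimal usage: identity camera at the origin, one point on the optical axis,
# one observation displaced by 0.1 pixel in u.
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
pts = np.array([[0.0, 0.0, 2.0, 1.0]])
obs = [(0, 0, np.array([0.1, 0.0]))]
err = reprojection_error([P0], pts, obs)   # squared error: (0.1)**2
```

Bundle adjustment jointly optimizes all camera and point parameters against this single objective, which is exactly why incremental DLT estimates serve only as its initial values.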
## What's the contribution of this paper?

How do we extract high-level information?

- Computer vision and machine learning tools: structure from motion, among other computer vision tools, has reached a level of robustness suitable for graphics applications.
- Internet image search.
- Human labeling (games with a purpose).

## What is the space of all the pictures?

In the past, the present, and the future?

## What is the space of all the videos?

In the past, the present, and the future?

## What else? Using search engines? Using human computation power?

## Further reading

- Book: "Multiple View Geometry in Computer Vision", Hartley and Zisserman.
- Online tutorial: http://www.cs.unc.edu/~marc/tutorial.pdf and http://www.cs.unc.edu/~marc/tutorial/
- Matlab toolbox: http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/TORR1/index.html
