Determination of position from images and associated camera positions
Country/Type: United States (US) patent, granted
IPC (7th edition): G06K-009/00; G06T-007/00; G06K-009/46; G06K-009/52; G06K-009/62; H04N-007/18
Application number: US-0431908 (2013-09-27)
Registration number: US-9607219 (2017-03-28)
Priority: GB-1217395.1 (2012-09-28)
International application number: PCT/GB2013/052530 (2013-09-27)
International publication number: WO2014/049372 (2014-04-03)
Inventors: Greveson, Eric; Srinivasan, James; Morris, Julian
Applicant: 2D3 LIMITED
Agent: Toler Law Group, PC
Citations: cited by 9 patents; cites 1 patent.
Abstract
The absolute position of a target object point is determined using a series of images of the scene with overlapping fields of view captured by a camera in positions arranged in at least two dimensions across the scene and position data representing the absolute positions of the camera on capture of the respective images. The images are analyzed to identify sets of image points corresponding to common object points in the scene. A bundle adjustment is performed on the sets of image points that estimates parameters representing the positions of the object points relative to the positions of the camera associated with each image, but without using input orientation data representing the orientation of the camera. The absolute position of the target object point is derived on the basis of the results of the bundle adjustment and the absolute positions of the camera represented by the position data.
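The abstract describes a bundle adjustment in which the camera positions are fixed by the recorded absolute (GPS) positions, while camera orientations and object-point positions are estimated with no measured orientation data at all. The following is a minimal synthetic sketch of that idea, not the patented implementation: the scene geometry, unit focal length, noise-free observations, and use of SciPy's generic least-squares solver are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

# Known absolute camera positions (e.g. GPS fixes), spread in two dimensions
cams = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0],
                 [0.0, 4.0, 0.0], [4.0, 4.0, 0.0]])
# Ground-truth object points roughly 10 units below the cameras (+z = down)
pts_true = rng.uniform([0.0, 0.0, 9.0], [4.0, 4.0, 11.0], size=(8, 3))
# Ground-truth camera orientations: near-nadir with small random tilts
rot_true = rng.normal(0.0, 0.05, size=(len(cams), 3))

def project(rotvec, cam, pts):
    # Pinhole projection with unit focal length: x = (R^T (X - C))_{xy} / z
    R = Rotation.from_rotvec(rotvec).as_matrix()
    p = (pts - cam) @ R          # row-wise R^T (X - C)
    return p[:, :2] / p[:, 2:3]

# Synthetic observations: image points of every object point in every image
obs = np.array([project(r, c, pts_true) for r, c in zip(rot_true, cams)])

def residuals(x):
    n = len(cams) * 3
    rvs = x[:n].reshape(-1, 3)   # per-camera orientation (rotation vector)
    pts = x[n:].reshape(-1, 3)   # object-point positions
    pred = np.array([project(r, c, pts) for r, c in zip(rvs, cams)])
    return (pred - obs).ravel()

# Initial estimates use NO measured orientation data: nadir-looking cameras
# and a flat ground-plane elevation guess for every object point
x0 = np.concatenate([np.zeros(len(cams) * 3),
                     np.tile([2.0, 2.0, 10.0], len(pts_true))])
sol = least_squares(residuals, x0)
pts_est = sol.x[len(cams) * 3:].reshape(-1, 3)
print(float(np.abs(pts_est - pts_true).max()))  # reconstruction error
```

Because the camera centers are pinned to their absolute positions, the recovered object points come out directly in the absolute frame, which is what lets the method derive the target point's absolute global position.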
Representative claims
1. A method comprising: capturing, by a camera, images of a scene having overlapping fields of view from image to image; recording an absolute global position of the camera at a time of capture for each of the images; identifying object points in the images that are common to the images from image to image; estimating parameters representing the positions of the object points relative to the absolute global positions of the camera using the absolute global positions of the camera without using input orientation data measured by a sensor associated with the camera representing orientations of the camera on capture of the images to set initial estimates of the orientations of the camera on capture of the respective images; deriving a target object point based on the parameters representing the positions of the object points relative to the absolute global positions of the camera; and deriving the absolute global position of the target object point based on the absolute global positions of the camera.

2. A method according to claim 1, wherein the scene includes a portion of a surface of the Earth and the images are captured from a camera on an aircraft.

3. A method according to claim 1, wherein the absolute global positions are derived from a global positioning satellite receiver, and wherein the method further comprises providing an initial ground plane elevation estimate.

4. A method according to claim 1, further comprising receiving user input designating at least one target image point in at least one of the images.

5. A method according to claim 4, wherein deriving the absolute global position of the target object point comprises interpolating the absolute global position of the target object point from estimated positions of object points corresponding to a group of image points located around at least one of the at least one designated target image points.

6.
A method according to claim 4, wherein the method further comprises: identifying a set of target image points in at least one of the images corresponding to the at least one designated target image point; and estimating parameters representing the positions of the set of target image points relative to the absolute global positions of the camera with respect to each image.

7. A method according to claim 6, wherein identifying a set of target image points comprises: estimating a homography between one of the images in which the target image point is designated and one or more other images; predicting the position of the at least one designated target image point in the one or more other images using the homography; detecting matching image patches within a search region around the predicted position of the designated target image point in the one or more other images that match an image patch at the position of the designated target image point in the one of the images, wherein the set of target image points is identified as being at the position of the designated target image point and at positions of the matching image patches in the one or more other images.

8. A method according to claim 1, further comprising: detecting features within the images; generating descriptors with respect to each image from respective patches of the image at the position of each detected feature; and detecting sets of additional descriptors generated from different images, wherein the positions of the detected features from which a set of additional descriptors are generated are identified as a set of image points corresponding to a common object point.

9. A method according to claim 8, wherein the detected features are corner features.

10. A method according to claim 8, wherein the detected features are detected at a plurality of scales.

11.
A method according to claim 8, wherein detecting sets of additional descriptors comprises: detecting sets of descriptors generated from different images that match other respective sets of descriptors; and identifying which of the detected sets of descriptors conform to a common mapping between the positions in the respective images.

12. A method according to claim 1, wherein the method further comprises estimating parameters representing the x-, y- and z-orientations of the camera with respect to each image.

13. A method according to claim 1, wherein each image includes a timestamp, and wherein the method further includes: matching the absolute global positions of the camera to each respective image by a timestamp included in each image; and downsampling a resolution of each image by building a Gaussian-filtered dyadic image pyramid.

14. A method according to claim 1, wherein the parameters represent intrinsic properties of the camera.

15. A method according to claim 1, further comprising providing initial estimates of the orientations of the camera on capture of the respective images that are within 90° of each other.

16. A method according to claim 1, wherein the method further comprises providing initial estimates of the orientations of the camera on capture of the respective images based on an optical axis aligned with a common object point in the scene.

17. A method according to claim 1, wherein the camera captures the images in positions along a curved path with respect to the scene.

18. A method according to claim 17, wherein the method is performed with initial estimates of the positions of the object points relative to the positions of the camera with respect to each image, wherein the method further comprises deriving the initial estimates from a geometrical measure of curvature of the curved path of the absolute positions of the camera for each image.

19.
An apparatus for determining an absolute position of at least one target object point in a scene, the apparatus comprising: a camera configured to capture images of the scene with overlapping fields of view, wherein the camera captures the images in positions arranged in at least two dimensions across the scene; a processor coupled to a memory configured with instructions to: receive position data representing the absolute global positions of the camera on capture of the respective images; and in each of the images, identify object points that are common to at least one other image; estimate parameters representing positions of the object points relative to absolute global positions of the camera taking into account the absolute global positions of the camera without using input orientation data, measured by a sensor associated with the camera, representing orientations of the camera on capture of the images to set initial estimates of the orientations of the camera on capture of the respective images; derive a target object point based on the parameters representing the positions of the object points relative to the absolute global positions of the camera; and derive the absolute global position of the target object point based on the absolute global positions of the camera.

20. The system of claim 19, further comprising: an aircraft coupled to the camera; a position sensor configured to acquire the position data and communicate the position data to the memory; and a datalink interface coupled to the memory and configured to communicate the images and position data.

21. A method according to claim 1, further comprising: removing color data from the images after capturing the images.

22. A method according to claim 1, further comprising: translating image coordinates of each image so that a principal point of the camera is at [0 0]^T; and dividing the image coordinates of each image by a focal length of a lens of the camera.
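Claim 13 mentions downsampling each image by building a Gaussian-filtered dyadic image pyramid: blur, then halve the resolution, repeated per level. A small illustrative sketch of that standard construction (the function name, `sigma` value, and image sizes are assumptions, not taken from the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dyadic_pyramid(image, n_levels, sigma=1.0):
    """Gaussian-filtered dyadic pyramid: blur, then keep every second pixel."""
    pyr = [np.asarray(image, dtype=float)]
    for _ in range(n_levels - 1):
        blurred = gaussian_filter(pyr[-1], sigma=sigma)
        pyr.append(blurred[::2, ::2])  # halve resolution along each axis
    return pyr

pyr = dyadic_pyramid(np.random.default_rng(1).random((64, 64)), 4)
print([level.shape for level in pyr])  # [(64, 64), (32, 32), (16, 16), (8, 8)]
```

The pre-blur suppresses aliasing before each factor-of-two decimation, which is why the pyramid is useful for detecting features at a plurality of scales (claim 10).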
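Claim 22 describes normalizing image coordinates by translating them so the principal point sits at [0 0]^T and dividing by the focal length, which removes the camera intrinsics from the coordinates. A minimal sketch of that normalization (the function name and the example principal point and focal length are hypothetical):

```python
import numpy as np

def normalize_image_coords(uv, principal_point, focal_length):
    """Translate the principal point to [0 0]^T and divide by the focal
    length, giving calibrated (intrinsics-free) image coordinates."""
    uv = np.asarray(uv, dtype=float)
    return (uv - np.asarray(principal_point, dtype=float)) / focal_length

# Hypothetical example: principal point at the image centre, focal length 500 px
pts = normalize_image_coords([[320.0, 240.0], [820.0, 240.0]],
                             (320.0, 240.0), 500.0)
print(pts)  # the centre maps to [0, 0]; the second point to [1, 0]
```

After this step, projection can be written with an identity intrinsics matrix, which simplifies the parameterization used in the bundle adjustment.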
Citing patents:

Magaña, Enrique Casado; Scarlatti, David; Campillo, David Esteban; Alcañiz, Jesús Iván Maza; Benitez, Fernando Caballero, Methods and apparatus for positioning aircraft based on images of mobile targets.
Sperindeo, Samuel; Barash, Benji; Schoenberg, Yves Albers; Buchmueller, Daniel, Unmanned aerial vehicle camera calibration as part of departure or arrival at a materials handling facility.
Wilcox, Scott Michael; Busek, Naimisaranya Das; Decker, Trevor Josph Myslinski; Regner, Michael William, Unmanned aerial vehicle sensor calibration as part of departure from a materials handling facility.
Watson, Joshua John; Novak, Benjamin Griffin; O'Brien, Barry James; Wilcox, Scott Michael; Caro, Benjamin Israel; Boyd, Scott Patrick, Unmanned aerial vehicle sensor calibration during flight.
Molin, Hans M.; Gyori, Marton; Kuehnle, Andreas U.; Li, Zheng, Vehicle 360° surround view system having corner placed cameras, and system and method for calibration thereof.