| Country / Type | United States (US) Patent, Granted |
| --- | --- |
| IPC (7th edition) | |
| Application No. | US-0246024 (2005-10-07) |
| Registration No. | US-8111904 (2012-02-07) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citation info | Cited by: 9, Patents cited: 388 |
Abstract

The invention provides, inter alia, methods and apparatus for determining the pose, e.g., position along the x-, y- and z-axes, pitch, roll and yaw (or one or more characteristics of that pose), of an object in three dimensions by triangulation of data gleaned from multiple images of the object. Thus, for example, in one aspect, the invention provides a method for 3D machine vision in which, during a calibration step, multiple cameras disposed to acquire images of the object from different respective viewpoints are calibrated to discern a mapping function that identifies rays in 3D space, emanating from each respective camera's lens, that correspond to pixel locations in that camera's field of view. In a training step, functionality associated with the cameras is trained to recognize expected patterns in images to be acquired of the object. A runtime step triangulates locations in 3D space of one or more of those patterns from the pixel-wise positions of those patterns in images of the object and from the mappings discerned during the calibration step.
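The pixel-to-ray mapping that the calibration step produces can be pictured with a standard pinhole-camera model. The following is a minimal illustrative sketch, not the patent's implementation: it assumes each camera's intrinsic matrix K and pose (R, t) are already known, with the convention x_cam = R @ X_world + t, and the function name pixel_to_ray is hypothetical.

```python
import numpy as np

def pixel_to_ray(K, R, t, u, v):
    """Return (origin, direction) of the 3D ray through pixel (u, v).

    K: 3x3 intrinsic matrix; R: 3x3 rotation; t: translation (3,),
    under the convention x_cam = R @ X_world + t.
    """
    # Camera center in world coordinates: C = -R^T t.
    origin = -R.T @ t
    # Back-project the pixel through the inverse intrinsics, then
    # rotate the viewing direction into world coordinates.
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    direction = R.T @ d_cam
    return origin, direction / np.linalg.norm(direction)
```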
Claims

1. A method of three-dimensional (3D) vision for determining at least one of a position and orientation of an object in three dimensions, the method comprising: A. calibrating multiple cameras that are disposed to acquire images of the object from different respective viewpoints to discern a mapping function that identifies rays in 3D space emanating from each respective camera's lens that correspond to pixel locations in that camera's field of view, wherein the calibrating step includes determining, from images acquired substantially simultaneously by the multiple cameras, correlations between positions in 3D space and pixel locations in each respective camera's field of view; B. training functionality associated with the multiple cameras to recognize expected patterns in images of the object acquired by different ones of the multiple cameras, and training that functionality in regard to reference points of those expected patterns, such that training as to those expected patterns facilitates ensuring that the reference points for those patterns coincide as between images obtained by those cameras; and C. triangulating locations in 3D space of one or more of the patterns from pixel-wise positions of those patterns in images of the object and from the mappings discerned during step (A).

2. The method of claim 1, in which step (C) includes triangulating the locations in 3D space from images of the object taken substantially simultaneously by the multiple cameras.

3. The method of claim 1, comprising a step (D) of re-calibrating a selected camera that produces images in which the patterns appear to lie at locations inconsistent and/or in substantial disagreement with images from the other cameras.

4. The method of claim 3, wherein step (D) includes using locations in 3D space determined in step (C) from the images from the other cameras to discern a mapping function that identifies rays in 3D space emanating from the lens of the selected camera that correspond to pixel locations in that camera's field of view.

5. The method of claim 1, wherein step (A) includes positioning registration targets at known positions in 3D space and characterizing correlations between those positions and the pixel-wise locations of the respective targets in the cameras' fields of view.

6. The method of claim 5, wherein step (A) includes characterizing the correlations algorithmically.

7. The method of claim 5, wherein step (A) includes positioning one or more registration targets that are simultaneously imaged by multiple cameras for purposes of discerning the mapping function.

8. The method of claim 7, wherein step (A) includes imaging registration targets that are disposed on one or more calibration plates in order to discern the mapping function.

9. The method of claim 5, wherein step (A) includes discerning a mapping function for one or more cameras that takes into account warping in the field of view of that camera.

10. The method of claim 1, in which step (B) includes training said functionality associated with the cameras to recognize expected patterns that include any of letters, numbers, other symbols, corners, or other discernible features of the object expected to be imaged.

11. The method of claim 1, in which step (B) includes training said functionality associated with the cameras as to the expected locations in 3D space of the patterns on the object expected to be imaged.

12. The method of claim 11, in which step (B) includes training said functionality associated with the cameras as to the expected locations in 3D space of the patterns relative to one another on the object expected to be imaged.

13. The method of claim 11, in which step (B) includes training said functionality associated with the cameras as to the expected locations in 3D space of the patterns based on user input.

14. The method of claim 11, in which step (B) includes training said functionality associated with the cameras as to the expected locations in 3D space of one or more of the patterns by triangulating actual locations of those patterns from (i) pixel-wise positions of those patterns in images of a training object and (ii) mappings discerned during step (A).

15. The method of claim 11, in which step (B) includes triangulating expected locations in 3D space of one or more of the patterns from (i) pixel-wise positions of those patterns in images of a training object, (ii) mappings discerned during step (A), and (iii) expected locations in 3D space of the patterns on the object expected to be imaged.

16. The method of claim 11, in which step (B) includes training said functionality as to an origin of each pattern.

17. The method of claim 16, in which step (B) includes training said functionality to use like models to identify like patterns as between different cameras.

18. The method of claim 16, in which step (B) includes training said functionality (i) to use different models to identify like patterns as between different cameras, and (ii) to identify like reference points for patterns determined with those patterns.

19. The method of claim 1, in which step (B) includes finding an expected pattern in an image from one camera based on prior identification of that pattern in an image from another camera.

20. The method of claim 1, in which step (C) includes triangulating the position of one or more of the patterns using pattern-matching or other two-dimensional vision tools.

21. The method of claim 1, in which step (C) includes triangulating the position of a pattern in 3D space from a nearest point of intersection of 3D rays of multiple cameras on which that pattern appears to lie.

22. The method of claim 1, in which step (C) includes triangulating the position of a pattern in 3D space from one or more of (i) a nearest point of intersection of 3D rays of multiple cameras on which that pattern appears to lie, (ii) one or more 3D rays of one or more cameras on which others of said patterns lie, and (iii) expected locations of the patterns.

23. The method of claim 1, in which step (C) includes using like models to identify like patterns in images of the object acquired by different cameras.

24. The method of claim 23, in which step (C) includes (i) using different models to identify like patterns in images of the object acquired by different cameras, and (ii) identifying like reference points for those patterns notwithstanding that they were identified using different models.

25. The method of claim 1, in which step (C) includes triangulating the location in 3D space of the object even where one or more of the patterns are not detected in images of the object.

26. The method of claim 1, in which step (C) includes aborting detection of one or more patterns in an image of the object if such detection exceeds a delay interval.

27. A machine vision system operating in accord with any of claims 1, 2, 5, 11, 16 or 19.

28. A method of three-dimensional (3D) vision for determining at least one of a position and orientation of an object in three dimensions, the method comprising: A. calibrating multiple cameras that are disposed to acquire images of the object from different respective viewpoints to discern a mapping function that identifies rays in 3D space emanating from each respective camera's lens that correspond to pixel locations in that camera's field of view; B. training functionality associated with the cameras to recognize expected patterns in images of the object to be acquired by different ones of the multiple cameras, wherein the training step includes training, based on images acquired substantially simultaneously by the multiple cameras, the functionality associated with the cameras to select like reference points of said expected patterns, such that training the cameras to select those reference points facilitates ensuring that the reference points for those patterns coincide as between images obtained by those cameras; and C. triangulating locations in 3D space of one or more of the patterns from pixel-wise positions of those patterns in images of the object and from the mappings discerned during step (A).

29. A method of three-dimensional (3D) vision for determining at least one of a position and orientation of an object in three dimensions, the method comprising: A. calibrating multiple cameras that are disposed to acquire images of the object from different respective viewpoints to discern a mapping function that identifies rays in 3D space emanating from each respective camera's lens that correspond to pixel locations in that camera's field of view; B. training functionality associated with the multiple cameras to recognize expected patterns in images of the object acquired by different ones of the multiple cameras, and training that functionality in regard to reference points of those expected patterns, such that training as to those expected patterns facilitates ensuring that the reference points for those patterns coincide as between images obtained by those cameras; and C. triangulating locations in 3D space of one or more of the patterns from pixel-wise positions of those patterns in images of the object taken substantially simultaneously by the multiple cameras and from the mappings discerned during step (A).

30. The method of claim 1, wherein the training step further comprises training the selection of the reference points using a laser pointer.
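Claims 21 and 22 recover a pattern's 3D position from the nearest point of intersection of the rays on which the pattern appears to lie. Below is a minimal least-squares sketch of that general triangulation technique, not code from the patent: it assumes each ray is given as an (origin, unit direction) pair in world coordinates, such as the output of the hypothetical pixel_to_ray above.

```python
import numpy as np

def triangulate(origins, directions):
    """Nearest point to a set of 3D rays, each given as
    (origin, unit direction); rays must not all be parallel."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        # Projector onto the plane perpendicular to the ray direction;
        # P @ (p - o) is the offset of point p from the ray.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ o
    # Solve (sum_i P_i) p = sum_i P_i o_i for the point p that
    # minimizes the summed squared distance to all rays.
    return np.linalg.solve(A, b)
```

Given one ray per camera in which the pattern was found, triangulate(origins, directions) returns the least-squares point; when the rays intersect exactly, it coincides with the intersection, and with noisy detections it is the "nearest point of intersection" the claims describe.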