IPC Classification Information

Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): (not listed)
Application number: US-0664565 (2003-09-18)
Inventors: Meisner, Jeffrey; Donnelly, Walter P.; Roosen, Richard
Applicants: Meisner, Jeffrey; Donnelly, Walter P.; Roosen, Richard
Agent: Knoble Yoshida & Dunleavy, LLC
Citations: cited by 45 patents; cites 16 patents
Abstract
A tracker system for determining the relative position between a sensor and an object surface, generally comprising a sensor or sensors for detecting a pattern of fiducials disposed on an object surface and a processor connected to the at least one sensor.

An augmented reality system generally comprising a pattern of fiducials disposed on an object surface, a computer having a processor and a memory, a user interface for receiving input and presenting augmented reality output to a user, and a tracker for detecting the pattern of fiducials.

A method for tracking the position and orientation of an object, generally comprising the steps of scanning across an object to detect fiducials and form video runs, clumping video runs to detect a pattern of fiducials, acquiring estimated values for a set of tracking parameters by comparing a detected pattern of fiducials to a reference pattern of fiducials, and iterating the estimated values for the set of tracking parameters until the detected pattern of fiducials matches the reference pattern of fiducials to within a desired convergence.

A method for augmenting reality, generally comprising the steps of disposing a pattern of fiducials on an object surface, tracking the position and orientation of the object, retrieving and processing virtual information stored in a computer memory according to the position and orientation of the object, and presenting the virtual information with real information to a user in near real time.
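The acquire-then-iterate step described above can be sketched in code. This is a hypothetical illustration, not the patent's implementation: the function names, the simplified 2D rigid-transform parameterization (theta, tx, ty), and the Gauss-Newton least-squares solver are assumptions standing in for the patent's full six tracking parameters (phi, theta, psi, distance, X-bar, Y-bar).

```python
import math

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                for k in range(col, 4):
                    m[r][k] -= f * m[col][k]
    return [m[i][3] / m[i][i] for i in range(3)]

def track(reference, detected, iters=50, tol=1e-12):
    """Estimate (theta, tx, ty) mapping reference fiducials onto detected
    ones by iterated least squares (Gauss-Newton): acquire an estimate,
    then refine it until the patterns match to within a convergence bound."""
    theta, tx, ty = 0.0, 0.0, 0.0
    for _ in range(iters):
        # Accumulate the normal equations J^T J * delta = J^T r.
        ata = [[0.0] * 3 for _ in range(3)]
        atb = [0.0] * 3
        c, s = math.cos(theta), math.sin(theta)
        for (rx, ry), (dx, dy) in zip(reference, detected):
            px = c * rx - s * ry + tx      # predicted fiducial position
            py = s * rx + c * ry + ty
            # Jacobian rows of (px, py) with respect to (theta, tx, ty).
            jx = (-s * rx - c * ry, 1.0, 0.0)
            jy = ( c * rx - s * ry, 0.0, 1.0)
            for resid, j in ((dx - px, jx), (dy - py, jy)):
                for a_ in range(3):
                    atb[a_] += j[a_] * resid
                    for b_ in range(3):
                        ata[a_][b_] += j[a_] * j[b_]
        delta = solve3(ata, atb)
        theta += delta[0]; tx += delta[1]; ty += delta[2]
        if max(abs(d) for d in delta) < tol:   # desired convergence reached
            break
    return theta, tx, ty
```

With noise-free detections the residual vanishes at the optimum, so the iteration converges to the true parameters in a handful of steps.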
Representative Claims
What is claimed is:

1. A method for tracking the position and orientation of an object, comprising the steps of: (a) scanning across an object to detect fiducials, wherein a video run is formed by a scan; (b) clumping video runs to detect a pattern of fiducials; (c) acquiring estimated values for a set of tracking parameters by comparing a detected pattern of fiducials to a reference pattern of fiducials; and (d) iterating the estimated values for the set of tracking parameters until the detected pattern of fiducials matches the reference pattern of fiducials to within a desired convergence.

2. The method for tracking the position and orientation of an object of claim 1, wherein the step of scanning across an object to detect fiducials includes the steps of setting a predetermined threshold voltage level for detecting a fiducial, and identifying fiducial edges when an output voltage from an optical sensor crosses the predetermined voltage level.

3. The method for tracking the position and orientation of an object of claim 1, wherein the step of clumping video runs includes the step of combining adjacent video runs and extracting relevant information from the video runs.

4. The method for tracking the position and orientation of an object of claim 3, wherein a pixel is recorded by recording a scan line number and a pixel number for each pixel that has a video level above the predetermined threshold.

5. The method for tracking the position and orientation of an object of claim 1, wherein the step of clumping video runs includes the steps of detecting and removing noise from the video runs.

6.
The method for tracking the position and orientation of an object of claim 1, wherein the detected and reference patterns of fiducials include a geometrically consistent pattern of hard fiducials, and wherein the step of acquiring estimated values for a set of tracking parameters includes the step of corresponding a predetermined number of detected hard fiducials with the reference pattern of fiducials to estimate phi, theta and psi orientation parameters and to estimate a distance position parameter.

7. The method for tracking the position and orientation of an object of claim 1, wherein the detected and reference patterns of fiducials include a pseudo random pattern of soft fiducials, and wherein the step of acquiring estimated values for a set of tracking parameters includes the step of electing at least one of the soft fiducials with the reference pattern of fiducials to estimate the X-bar and Y-bar position parameters.

8. The method for tracking the position and orientation of an object of claim 1, wherein the step of iterating the estimated values for the set of tracking parameters uses the method of least squares.

9. A method for augmenting reality, comprising the steps of: (a) tracking the position and orientation of a pattern of fiducials on an object with a self-contained, mobile system by scanning across the object to detect the fiducials, wherein a video run is formed by a scan, and clumping video runs to detect the pattern of fiducials, wherein said step of scanning across the object to detect the fiducials includes the steps of setting a predetermined threshold voltage level for detecting a fiducial and identifying fiducial edges when an output voltage from an optical sensor crosses the predetermined voltage level; (b) processing virtual information stored in a computer memory of said system according to the position and orientation of the object; and (c) presenting the virtual information with real information to a user in near real time with said system.
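The scanning and clumping steps recited in claims 1 through 5 and 9 can be sketched as follows. This is a minimal illustration under assumed names and data shapes, not the patented circuit: a scan line is modeled as a list of sampled sensor levels, a video run as a (start, end) pixel span above threshold, and clumping as merging overlapping runs on adjacent scan lines.

```python
def video_runs(scan_line, threshold):
    """Return (start_pixel, end_pixel) runs where the sampled sensor output
    exceeds the threshold level: a rising crossing opens a run (leading
    fiducial edge), a falling crossing closes it (trailing edge)."""
    runs, start = [], None
    for i, v in enumerate(scan_line):
        if v > threshold and start is None:
            start = i                        # rising edge: fiducial begins
        elif v <= threshold and start is not None:
            runs.append((start, i - 1))      # falling edge: fiducial ends
            start = None
    if start is not None:                    # run extends to end of line
        runs.append((start, len(scan_line) - 1))
    return runs

def clump(lines_of_runs):
    """Greedily merge runs on adjacent scan lines whose pixel spans overlap,
    yielding one clump (a list of (line, start, end) runs) per candidate
    fiducial. Isolated single-run clumps could then be rejected as noise."""
    clumps, open_clumps = [], []
    for line_no, runs in enumerate(lines_of_runs):
        next_open = []
        for s, e in runs:
            for cl in open_clumps:
                ls, le = cl[-1][1], cl[-1][2]
                if s <= le and e >= ls:      # spans overlap: same fiducial
                    cl.append((line_no, s, e))
                    next_open.append(cl)
                    break
            else:                            # no overlap: start a new clump
                cl = [(line_no, s, e)]
                clumps.append(cl)
                next_open.append(cl)
        open_clumps = next_open
    return clumps
```

For example, two overlapping runs on consecutive scan lines clump into a single candidate fiducial, while a run on a distant line opens a new clump.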
10. The method for augmenting reality of claim 9, wherein the pattern of fiducials is disposed on the object surface in a geometrically consistent hard fiducial pattern and in a pseudo random soft fiducial pattern.

11. The method for augmenting reality of claim 9, wherein said step of tracking the position and orientation of the object further includes the steps of: acquiring estimated values for a set of tracking parameters by comparing a detected pattern of fiducials to a reference pattern of fiducials; and iterating the estimated values for the set of tracking parameters until the detected pattern of fiducials matches the reference pattern of fiducials to within a desired convergence.

12. The method for augmenting reality of claim 11, wherein said step of scanning across the object to detect the fiducials includes the steps of forming a signal corresponding to the detected pattern of fiducials, removing noise from the signal, and reducing the bandwidth of the signal.

13. The method for augmenting reality of claim 11, wherein said step of acquiring estimated values for a set of tracking parameters includes the steps of: corresponding a predetermined number of detected hard fiducials with the reference pattern of fiducials to estimate phi, theta and psi orientation parameters and to estimate a distance position parameter; and electing at least one of the soft fiducials with the reference pattern of fiducials to estimate X-bar and Y-bar position parameters.

14. The method for augmenting reality of claim 9, wherein the object surface includes a wiring board used in a process of fabricating wire harnesses, the virtual information includes a wiring schematic and instructions for a wiring harness, and the real information includes the wiring board, wires and connectors.

15.
The method for augmenting reality of claim 9, wherein the computer memory forms a part of a wearable computer, the wearable computer having a processor which performs said step of processing virtual information stored in the computer memory according to the position and orientation of the object.

16. The method for augmenting reality of claim 9, wherein said step of presenting the virtual information with real information to a user in near real time includes the step of projecting the virtual information on a head mounted display.

17. The method for augmenting reality of claim 9, wherein said step of presenting the virtual information with real information to a user in near real time includes the step of projecting the virtual information on an Optical See Through display.

18. The method for augmenting reality of claim 9, wherein said step of presenting the virtual information with real information to a user in near real time includes the step of projecting the virtual information and the real information on a Video See Through display.

19. The method for augmenting reality of claim 9, wherein said steps of tracking the position and orientation of the object, retrieving virtual information, and presenting the virtual information with real information are performed with an update rate of at least 60 Hertz and a latency below 16 milliseconds.

20. The method for augmenting reality of claim 9, wherein the virtual information includes menu driven screen displays, wherein the screen displays form a user-friendly interface for an augmented reality system, and wherein a user selects a menu item by moving a cursor over a desired selection and choosing the selection.

21. The method for augmenting reality of claim 20, wherein the cursor is moved by moving the user's head.

22. The method for augmenting reality of claim 9, further comprising the step of calibrating the alignment between the virtual information and the real information.

23.
A method for augmenting reality, comprising steps of: (a) using a sensor to provide at least one signal that is indicative of a pattern of fiducials on an object; (b) processing said signal to locate said fiducials; (c) determining a relative position and orientation of the sensor with respect to the object by comparing the locations of said fiducials to a known reference pattern; (d) providing virtual information to a user in substantial registration with real information based on the relative position and orientation determined in step (c), and wherein said method is performed so as to provide said virtual information to said user in near real-time.

24. A method according to claim 23, wherein step (d) comprises providing computer-generated aural information to a user.

25. A method according to claim 23, wherein step (d) comprises providing computer-generated kinesthetic information to a user.

26. A method according to claim 23, wherein step (a) comprises using a sensor to provide at least one signal that is indicative of a pattern of fiducials on a user's eye.

27. A method according to claim 23, wherein said sensor is mounted on a user's head.

28. A method according to claim 23, wherein steps (a) through (d) are performed with a self-contained, mobile system that comprises a wearable computer.

29. A method according to claim 23, wherein at least one of steps (b) and (c) are performed by electronically reducing said virtual information to a reduced amount of information for manageable processing.

30. A method according to claim 29, wherein said step of electronically reducing said virtual information comprises electronically filtering said information to eliminate background information.

31. A method according to claim 29, wherein said step of electronically reducing said virtual information comprises electronically filtering said information to eliminate noise.

32.
A method according to claim 31, wherein said step of electronically filtering said information to eliminate noise comprises electronically correcting to compensate for known aberrations of at least one hardware element.

33. A method according to claim 29, wherein said step of electronically reducing said virtual information comprises removing at least one synchronization signal from said information.

34. A method according to claim 29, wherein said step of electronically reducing said virtual information comprises a step of determining a centroid position of at least one of said fiducials.

35. A method according to claim 34, wherein said step of electronically reducing said virtual information further comprises utilizing said centroid position in subsequent electronic processing.

36. A method according to claim 34, wherein said step of determining a centroid position of at least one of said fiducials comprises steps of determining at least two boundary locations of a fiducial and calculating the centroid position based at least in part on said boundary locations.

37. A method according to claim 23, wherein step (a) comprises tracking the position and orientation of at least one hard fiducial pattern having a known geometric shape and further comprises tracking the position and orientation of at least one soft fiducial pattern.

38. A method according to claim 37, wherein at least one fiducial of said soft fiducial pattern is located within said known geometric shape.

39. A method according to claim 37, wherein step (b) comprises a step of locating said hard fiducial pattern, and wherein said step of locating said hard fiducial pattern is aided by utilizing a reference pattern of fiducials corresponding to said known geometric shape.

40. A method according to claim 37, wherein step (b) further comprises locating said soft fiducial pattern, and wherein said step of locating said soft fiducial pattern is aided by said prior determination of said hard fiducial pattern.
41. A method according to claim 37, wherein step (b) comprises electronically calculating the position and orientation of the object based on the determined location of the hard fiducial pattern and the determined location of the soft fiducial pattern.

42. A method according to claim 41, wherein said step of electronically calculating the position and orientation of the object is performed based on a relative position of at least one fiducial within said soft fiducial pattern that is located within said known geometric shape of said hard fiducial pattern.

43. A method according to claim 23, wherein step (a) comprises iteratively tracking the position and orientation of a number of fiducials on said object.

44. A method according to claim 43, wherein said step of iteratively tracking the position and orientation of a number of fiducials comprises a quick acquire routine for calculating a theoretical parameter value for at least one fiducial based on a previous position of the fiducial.

45. A method according to claim 44, wherein said quick acquire routine tracks movement of the system with respect to the object and utilizes data pertaining to said movement to predict likely future locations of said fiducial.

46. A method according to claim 43, wherein at least one of steps (b) and (c) comprises receiving a signal of information based on said iterative tracking and processing said signal to eliminate nonessential data, thereby reducing bandwidth.

47.
A method for tracking the position and orientation of an object, comprising the steps of: (a) scanning across an object to detect fiducials, wherein a video run is formed by a scan; (b) clumping video runs to detect a pattern of fiducials; (c) acquiring estimated values for a set of tracking parameters by comparing a detected pattern of fiducials to a reference pattern of fiducials, wherein said step of acquiring estimated values for a set of tracking parameters includes a cold acquire step and a quick acquire step, wherein the quick acquire step uses prior tracking parameters to estimate new tracking parameters, and wherein the cold acquire step is performed when prior tracking parameters are not available to estimate new tracking parameters; and (d) iterating the estimated values for the set of tracking parameters until the detected pattern of fiducials matches the reference pattern of fiducials to within a desired convergence.

48. A method for tracking the position and orientation of an object, comprising the steps of: performing a startup routine, said startup routine including the steps of loading real world alignment data from a computer database file, calculating compensation for known aberrations of a sensor system used in the step of scanning across an object to detect fiducials, and setting a gain for the sensor system; scanning across an object to detect fiducials, wherein a video run is formed by a scan; clumping video runs to detect a pattern of fiducials; acquiring estimated values for a set of tracking parameters by comparing a detected pattern of fiducials to a reference pattern of fiducials; and iterating the estimated values for the set of tracking parameters until the detected pattern of fiducials matches the reference pattern of fiducials to within a desired convergence.

49.
A method for augmenting reality, comprising steps of: (a) using a sensor to provide at least one signal that is indicative of a pattern of fiducials on an object; (b) processing said signal to locate said fiducials; (c) determining a relative position and orientation of the sensor with respect to the object by comparing the locations of said fiducials to a known reference pattern; (d) providing virtual information to a user in substantial registration with real information based on the relative position and orientation determined in step (c), and wherein said method is performed so as to provide said virtual information to said user in near real-time, and wherein at least one of steps (b) and (c) are performed by electronically reducing said virtual information to a reduced amount of information for manageable processing, said step of electronically reducing said virtual information comprises a step of determining a centroid position of at least one of said fiducials and said step of determining a centroid position of at least one of said fiducials comprises detecting a beginning pixel and an ending pixel of the fiducial during a video raster run; extracting the line number, the beginning pixel number and the number of continuous pixels after the beginning pixel; processing such information obtained from different line numbers of the video raster run to form individual fiducial representations; and further processing this information to determine the centroid position.
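The centroid determination spelled out in claim 49 — extract the line number, beginning pixel and number of continuous pixels for each run, group the runs into a fiducial, then reduce them to a centroid — can be sketched as follows. The function name and the run tuple layout are assumptions for illustration; only the arithmetic follows the claim.

```python
def centroid(fiducial_runs):
    """Compute the pixel-weighted centroid of one fiducial from its video
    raster runs, each recorded as (line_number, beginning_pixel, run_length)
    in the manner recited by claim 49."""
    total = sx = sy = 0.0
    for line, begin, length in fiducial_runs:
        # The run covers pixels begin .. begin+length-1 on this scan line,
        # so its mean x is begin + (length-1)/2, weighted by its pixel count.
        total += length
        sx += (begin + (length - 1) / 2.0) * length
        sy += line * length
    return sx / total, sy / total
```

For a fiducial seen as a 3-pixel run on line 0 and a 4-pixel run on line 1, the centroid lands between the two lines, weighted toward the longer run.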