IPC Classification Information
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.):
Application No.: US-0351235 (2006-02-09)
Patent No.: US-7403268 (2008-07-22)
Inventors / Address:
- England, James N.
- Helser, Aron T.
- Elgin, Benjamin C.
- Holloway, Richard L.
Applicant / Address:
Agent / Address: Jenkins, Wilson, Taylor & Hunt, P.A.
Citation Information: Times cited: 32 / Cited patents: 6
Abstract
A method, computer program product, and apparatus for obtaining the geometric correspondence between at least two 3D range data sets obtained using a 3D rangefinder device. First and second 3D range data sets are provided. The first 3D range data set is displayed as a 2D displayed image. The second 3D range data set is displayed as one of a second 2D displayed image and a 3D displayed image. Corresponding features within the first 2D displayed image and within the second displayed image are respectively specified. A 3D transformation between the first 3D range data set and the second 3D range data set is computed based on the geometry of the specified corresponding features. As such, the geometric correspondence between the two 3D range data sets may be determined.
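The core computation the abstract describes, estimating a 3D transformation from corresponding features, can be illustrated for the simplest case of paired 3D points. The sketch below is not the patent's own implementation; it is a standard least-squares rigid fit via SVD (the Kabsch/Horn method), and the function name and NumPy usage are assumptions for illustration only.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Estimate rotation R and translation t such that dst ~= R @ src + t,
    from paired 3D points (least-squares Kabsch/Horn fit)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src = src.mean(axis=0)                  # centroids of each point set
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                        # optimal rotation
    t = c_dst - R @ c_src                     # translation aligning centroids
    return R, t
```

In practice a feature-based estimate like this is often refined afterwards (e.g. with an iterative closest point step), but the closed-form fit above captures the "compute a 3D transformation from specified corresponding features" idea.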
Representative Claims
What is claimed is: 1. A method for obtaining the geometric correspondence between at least two 3D range data sets obtained using a 3D rangefinder device, comprising: acquiring a first 3D range data set by using a first 3D rangefinder device to scan a scene and generate the first 3D range data set, wherein the first 3D range data set includes, for each pixel, at least three measured dimension values; acquiring a second 3D range data set by using one of the first 3D rangefinder device and a second 3D rangefinder device to scan the scene and generate the second 3D range data set, wherein the second 3D range data set includes, for each pixel, at least three measured dimension values; representing the first 3D range data set as a first 2D displayed image; representing the second 3D range data set as a second displayed image selected from the group consisting of a second 2D displayed image and a 3D displayed image; specifying corresponding features within the first 2D displayed image and within the second displayed image respectively; and computing a 3D transformation between the first 3D range data set and the second 3D range data set based on the geometry of the specified corresponding features, such that the geometric correspondence between the two 3D range data sets is determined. 2. The method of claim 1, wherein specifying corresponding features comprises at least one of manually specifying corresponding points, manually specifying corresponding lines, manually specifying corresponding surfaces, manually specifying corresponding volumes, manually specifying corresponding other features, automatically specifying corresponding points, automatically specifying corresponding lines, automatically specifying corresponding surfaces, automatically specifying corresponding volumes, automatically specifying corresponding other features, and combinations thereof. 3. 
The method of claim 1, wherein specifying corresponding features comprises at least one technique selected from the group consisting of using sub-pixel interpolation in any 2D displayed image wherein a software tool may allow the user to estimate and specify the location of a feature anywhere within a pixel and not just at its origin, using interpolation between measured 3D range data points on surfaces in any 3D displayed image wherein a software tool may allow the user to estimate and specify the location of a feature anywhere upon a surface even if that particular location is not directly associated with a measured 3D range data point, using estimates of the centers of features wherein the user may estimate and specify the location of the center of a feature even if the particular pixel at that chosen center appears no different from adjacent pixels, and using holes and data interpolated across holes wherein the rangefinder device did not acquire a range measurement and wherein a software tool may allow the user to estimate and specify the location of a feature anywhere within a hole even though that particular location is not directly associated with a measured 3D range data point. 4. 
The method of claim 1, wherein the second 3D range data set may be obtained from the group consisting of being obtained by the same 3D rangefinder device as the first 3D range data set but obtained from a different 3D location, being obtained by the same 3D rangefinder device as the first 3D range data set from the same 3D location but obtained at a different resolution, being obtained by the same 3D rangefinder device as the first 3D range data set from the same 3D location but obtained at a different time, being obtained by a different 3D rangefinder device as the first 3D range data set and obtained at the same 3D location, being obtained by a different 3D rangefinder device as the first 3D range data set and obtained from a different 3D location, and being obtained by a different 3D rangefinder device as the first 3D range data set and obtained at a different time. 5. The method of claim 1, wherein at least one of the first 3D range data set and the second 3D range data set are provided by a 3D rangefinder device selected from the group consisting of a scanning laser rangefinder using time of flight range measurement principles, a scanning laser rangefinder using phase comparison range measurement principles, a scanning laser rangefinder using any other range measurement principles, an imaging laser rangefinder range camera using time of flight range measurement principles, an imaging laser rangefinder range camera using phase comparison range measurement principles, an imaging laser rangefinder range camera using any other range measurement principles, a triangulation rangefinder, a stereo image rangefinder, a multiple image rangefinder, any other device that acquires a multiplicity of range data points simultaneously, and any other device that acquires a multiplicity of range data points over a period of time and combinations of the above. 6. 
The method of claim 1, wherein at least one of the first 2D displayed image and the second 2D displayed image comprises a 2D image selected from the group consisting of a 2D range image comprising range values from the 3D rangefinder device converted to monochrome, a 2D range image comprising range values from the 3D rangefinder device converted to false color, a 2D reflectance image comprising intensity values of the rangefinding signal reflected from a physical surface and thereafter received by the 3D rangefinder device converted to monochrome, a 2D reflectance image comprising intensity values of the rangefinding signal reflected from a physical surface and thereafter received by the 3D rangefinder device converted to false color, a 2D registered color image comprising a color camera image previously registered with 3D range data, a 2D registered color image wherein the image is acquired from the same perspective as the 3D range data set, a 2D registered color image wherein the image is acquired from a different perspective than the 3D range data set, a 2D registered color image wherein the image is acquired with the same resolution as the 3D range data set, a 2D registered color image wherein the image is acquired with different resolution from the 3D range data set, a 2D image displayed in spherical projection format, a 2D image displayed in any other 3D-to-2D projection format, and a 2D registered monochrome image comprising a monochrome camera image previously registered with 3D range data. 7. 
The method of claim 1, wherein the 3D displayed image comprises a 3D image selected from the group consisting of a 3D point display, a 3D point display in orthogonal projection, a 3D point display in perspective projection, a 3D polygonal mesh, a 3D polygonal mesh in orthogonal projection, a 3D polygonal mesh in perspective projection, a 3D surface geometry display, a 3D surface geometry display in orthogonal projection, and a 3D surface geometry display in perspective projection. 8. The method of claim 1, implemented in a computer processor executing a suitable computer software program product therein. 9. The method of claim 1, implemented in a suitable computer software program product embodied on computer readable tangible media. 10. The method of claim 1, wherein specifying corresponding features comprises at least one of using a computer cursor controlled by a mouse, using a computer cursor controlled by a pointing stick, using a computer cursor controlled by a joystick, using a computer cursor controlled by a touch pad, using software to specify, and combinations of the above. 11. The method of claim 1, wherein at least one 3D range data set is represented by at least two displayed images. 12. The method of claim 1, wherein at least three 3D range data sets are provided and wherein computing the 3D transformation is carried out using computing selected from the group consisting of computing the 3D transformation between at least two 3D range data sets in parallel, computing the 3D transformation between at least two 3D range data sets simultaneously, computing the 3D transformation between at least two 3D range data sets serially, and combinations of the preceding. 13. 
A computer program product stored in computer readable media for execution in at least one processor, the processor having access to at least two 3D range data sets obtained using a 3D rangefinder device, comprising: a first software module for acquiring a first 3D range data set generated using a first 3D rangefinder device to scan a scene, wherein the first 3D range data set includes, for each pixel, at least three measured dimension values, and for providing the first 3D range data set to the at least one processor; a second software module for acquiring a second 3D range data set generated using one of the first 3D rangefinder device and a second 3D rangefinder device to scan the scene and generate the second 3D range data set, wherein the second 3D range data set includes, for each pixel, at least three measured dimension values, and for providing the second 3D range data set to the at least one processor; a third software module for representing the first 3D range data set as a first 2D displayed image; a fourth software module for representing the second 3D range data set as a second displayed image selected from the group consisting of a second 2D displayed image and a 3D displayed image; a fifth software module for specifying corresponding features within the first 2D displayed image and within the second displayed image respectively; and a sixth software module for computing a 3D transformation between the first 3D range data set and the second 3D range data set based on the geometry of the specified corresponding features, such that the geometric correspondence between the two 3D range data sets is determined. 14. 
The computer program product of claim 13, wherein at least one of the first 2D displayed image and the second 2D displayed image comprises a 2D image selected from the group consisting of a 2D range image comprising range values from the 3D rangefinder device converted to monochrome, a 2D range image comprising range values from the 3D rangefinder device converted to false color, a 2D reflectance image comprising intensity values of the rangefinding signal reflected from a physical surface and thereafter received by the 3D rangefinder device converted to monochrome, a 2D reflectance image comprising intensity values of the rangefinding signal reflected from a physical surface and thereafter received by the 3D rangefinder device converted to false color, a 2D registered color image comprising a color camera image previously registered with 3D range data, a 2D registered color image wherein the image is acquired from the same perspective as the 3D range data set, a 2D registered color image wherein the image is acquired from a different perspective than the 3D range data set, a 2D registered color image wherein the image is acquired with the same resolution as the 3D range data set, a 2D registered color image wherein the image is acquired with different resolution from the 3D range data set, a 2D image displayed in spherical projection format, a 2D image displayed in any other 3D-to-2D projection format, and a 2D registered monochrome image comprising a monochrome camera image previously registered with 3D range data. 15. The computer program product of claim 13, wherein at least one 3D range data set is represented by at least two displayed images. 16. 
The computer program product of claim 13, wherein at least three 3D range data sets are provided and wherein computing the 3D transformation is carried out using computing selected from the group consisting of computing the 3D transformation between at least two 3D range data sets in parallel, computing the 3D transformation between at least two 3D range data sets simultaneously, computing the 3D transformation between at least two 3D range data sets serially, and combinations of the preceding. 17. An apparatus having access to at least two 3D range data sets obtained using a 3D rangefinder device, comprising: at least one computer processor; a computer program product executing within the at least one computer processor, wherein the computer program product further comprises at least the following software modules therein; a first software module for acquiring a first 3D range data set generated using a first 3D rangefinder device to scan a scene, wherein the first 3D range data set includes, for each pixel, at least three measured dimension values, and for providing the first 3D range data set to the at least one processor; a second software module for acquiring a second 3D range data set generated using one of the first 3D rangefinder device and a second 3D rangefinder device to scan the scene and generate the second 3D range data set, wherein the second 3D range data set includes, for each pixel, at least three measured dimension values, and for providing the second 3D range data set to the at least one processor; a third software module for representing the first 3D range data set as a first 2D displayed image; a fourth software module for representing the second 3D range data set as a second displayed image selected from the group consisting of a second 2D displayed image and a 3D displayed image; a fifth software module for specifying corresponding features within the first 2D displayed image and within the second displayed image respectively; and a sixth 
software module for computing a 3D transformation between the first 3D range data set and the second 3D range data set based on the geometry of the specified corresponding features, such that the geometric correspondence between the two 3D range data sets is determined. 18. The apparatus of claim 17, wherein at least one of the first 2D displayed image and the second 2D displayed image comprises a 2D image selected from the group consisting of a 2D range image comprising range values from the 3D rangefinder device converted to monochrome, a 2D range image comprising range values from the 3D rangefinder device converted to false color, a 2D reflectance image comprising intensity values of the rangefinding signal reflected from a physical surface and thereafter received by the 3D rangefinder device converted to monochrome, a 2D reflectance image comprising intensity values of the rangefinding signal reflected from a physical surface and thereafter received by the 3D rangefinder device converted to false color, a 2D registered color image comprising a color camera image previously registered with 3D range data, a 2D registered color image wherein the image is acquired from the same perspective as the 3D range data set, a 2D registered color image wherein the image is acquired from a different perspective than the 3D range data set, a 2D registered color image wherein the image is acquired with the same resolution as the 3D range data set, a 2D registered color image wherein the image is acquired with different resolution from the 3D range data set, a 2D image displayed in spherical projection format, a 2D image displayed in any other 3D-to-2D projection format, and a 2D registered monochrome image comprising a monochrome camera image previously registered with 3D range data. 19. The apparatus of claim 17, wherein at least one 3D range data set is represented by at least two displayed images. 20. 
The apparatus of claim 17, wherein at least three 3D range data sets are provided and wherein computing the 3D transformation is carried out using computing selected from the group consisting of computing the 3D transformation between at least two 3D range data sets in parallel, computing the 3D transformation between at least two 3D range data sets simultaneously, computing the 3D transformation between at least two 3D range data sets serially, and combinations of the preceding. 21. The method of claim 1 wherein the at least three measured dimension values for each pixel include range, azimuth, and elevation. 22. The computer program product of claim 13 wherein the at least three measured dimension values for each pixel include range, azimuth, and elevation. 23. The apparatus of claim 17 wherein the at least three measured dimension values for each pixel include range, azimuth, and elevation. 24. The method of claim 1 comprising generating, based on the 3D transformation, a combined 3D display of the first and second 3D range data sets. 25. The method of claim 24 wherein generating the combined display includes generating the combined display from a view point located at a location from which neither the first or second 3D range data sets were acquired.
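Claims 21-23 state that each pixel carries at least three measured dimension values: range, azimuth, and elevation. As a sketch of how one such measurement maps to Cartesian coordinates, using a common (but here assumed, not patent-specified) convention with azimuth measured in the x-y plane from +x and elevation measured up from that plane:

```python
import math

def spherical_to_cartesian(r, azimuth, elevation):
    """Convert a (range, azimuth, elevation) rangefinder sample to (x, y, z).

    Convention (an assumption for illustration): azimuth rotates in the
    x-y plane from the +x axis; elevation rises from the x-y plane.
    Angles are in radians.
    """
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z
```

Converting each pixel's spherical triple to Cartesian points this way is what makes the 3D transformation of claim 1 applicable to the data; actual devices may use a different axis or angle convention.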