| Field | Value |
| --- | --- |
| Country / Type | United States (US) Patent, Granted |
| IPC (7th edition) | |
| Application No. | US-0870488 (2015-09-30) |
| Registration No. | US-10096099 (2018-10-09) |
| Inventor / Address | |
| Applicant / Address | |
| Attorney / Address | |
| Citation Info | Cited by: 0 / Cites: 377 patents |
Dimensioning systems may automate or assist with determining the physical dimensions of an object without the need for a manual measurement. A dimensioning system may project a light pattern onto the object, capture an image of the reflected pattern, and observe changes in the imaged pattern to obtain a range image, which contains 3D information corresponding to the object. Then, using the range image, the dimensioning system may calculate the dimensions of the object. In some cases, a single range image does not contain 3D data sufficient for dimensioning the object. To mitigate or solve this problem, the present invention embraces capturing a plurality of range images from different perspectives, and then combining the range images (e.g., using image-stitching) to form a composite range-image, which can be used to determine the object's dimensions.
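The abstract above describes a three-stage pipeline: back-project each range image into 3D points, merge the points from all views, and measure the object from the merged data. The sketch below is an illustrative reconstruction of the first and last stages only, not the patented implementation; the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) and the axis-aligned bounding-box dimensioning are assumptions chosen for demonstration.

```python
import numpy as np

def range_image_to_points(depth, fx, fy, cx, cy):
    """Back-project a range (depth) image into 3D points using an
    assumed pinhole camera model with intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth return

def dimension_from_points(points):
    """Dimensions (largest extent first) of the merged point cloud,
    approximated here by an axis-aligned bounding box."""
    extent = points.max(axis=0) - points.min(axis=0)
    return np.sort(extent)[::-1]

# Two simulated 4x4 range images of the same flat face, 2 m away
depth_a = np.full((4, 4), 2.0)
depth_b = np.full((4, 4), 2.0)
pts = np.vstack([range_image_to_points(d, 100, 100, 2, 2)
                 for d in (depth_a, depth_b)])
print(dimension_from_points(pts))      # prints the three extents
```

In a real system the per-view point sets would first be transformed into a common frame (the "combining" step the claims describe) before the extents are measured.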
1. A method for dimensioning an object, the method comprising:
having a portion of an object in a field-of-view of a range camera of the dimensioning system;
projecting, using a pattern projector, a light pattern into the field-of-view;
capturing, using the range camera of the dimensioning system, a range image of the field-of-view, wherein each pixel of the range image represents a distance from the range camera to a respective point in the range camera's field-of-view;
moving the dimensioning system and/or the object so that there is relative movement between the dimensioning system and the object, and the range camera's field-of-view contains a different portion of the object, wherein the moving of the dimensioning system and/or the object is automatic;
repeating the capturing and the moving until a plurality of range images have been captured, wherein in each range image of the plurality of range images, each pixel of the range image represents a distance from the range camera to a respective point in the range camera's field-of-view;
combining the plurality of range images to create a composite range-image; and
dimensioning the object using the composite range-image.

2. The method according to claim 1, wherein the capturing, using the dimensioning system, a range image of the field-of-view comprises:
capturing, using the range camera, an image of the field-of-view, the image comprising a reflected light-pattern; and
generating 3D data from the image of the reflected light-pattern.

3. The method according to claim 2, wherein the plurality of range images collectively comprise 3D data for dimensioning the object.

4. The method according to claim 3, wherein the 3D data for dimensioning comprises 3D data from all surfaces of the object.

5. The method according to claim 3, wherein the 3D data for dimensioning comprises 3D data from a surface of the object without any gaps in the reflected light-pattern.

6. The method according to claim 1, wherein the dimensioning system is handheld.

7. The method according to claim 1, wherein the moving either the dimensioning system or the object comprises generating messages to guide a user to perform the movement, and the messages are selected from the group consisting of audio messages and visual messages.

8. The method according to claim 7, wherein the messages comprise instructions for the user to take action, and the instructions for the user to take action are selected from the group consisting of instructions to (i) move the dimensioning system or the object in a particular direction, (ii) move the dimensioning system or the object at a particular speed, and (iii) cease moving the dimensioning system or the object.

9. The method according to claim 1, wherein the combining the plurality of range images to create a composite range-image comprises: image-stitching the plurality of range images.

10. The method according to claim 9, wherein the image-stitching comprises simultaneous localization and mapping (SLAM).

11. A dimensioning system, comprising:
a pattern projector configured to project a light pattern onto an object;
a range camera having a field of view and configured to (i) capture an image of a reflected light-pattern in the field of view, (ii) generate 3D data from the reflected light-pattern, and (iii) create a range image using the 3D data, wherein each pixel of the range image represents a distance from the range camera to a respective point in the range camera's field-of-view;
at least one device configured to automatically move the dimensioning system and/or the object so that there is relative movement between the dimensioning system and the object, and the range camera's field-of-view contains a different portion of the object; and
a processor communicatively coupled to the pattern projector and the range camera, wherein the processor is configured by software to: (i) trigger the range camera to capture a plurality of range images, wherein in each range image of the plurality of range images, each pixel of the range image represents a distance from the range camera to a respective point in the range camera's field-of-view, (ii) combine the plurality of range images into a composite range-image, and (iii) calculate the dimensions of the object using the composite range-image.

12. The dimensioning system according to claim 11, wherein the plurality of range images are captured as the spatial relationship between the dimensioning system and the object is changed.

13. The dimensioning system according to claim 12, wherein (i) each range image in the plurality of range images comprises 3D data from a portion of the object, and (ii) the composite range-image comprises 3D data from the object as a whole.

14. The dimensioning system according to claim 13, wherein the processor is further configured by software to: gather information as the spatial relationship between the range camera and the object is changed, and the information is selected from the group consisting of tracking information and mapping information.

15. The dimensioning system according to claim 14, wherein to combine the plurality of range images into a composite range-image, the processor is configured by software to: image-stitch the plurality of range images using the information.

16. The dimensioning system according to claim 15, wherein range images in the plurality of range images have partially overlapping fields of view.

17. The dimensioning system according to claim 14, wherein the processor is further configured to: use the information to generate messages to help a user change the spatial relationship between the range camera and the object.

18. The dimensioning system according to claim 17, wherein the messages comprise instructions to take action, and the instructions to take action are selected from the group consisting of instructions to (i) move the dimensioning system or the object in a particular direction, (ii) move the dimensioning system or the object at a particular speed, and (iii) cease moving the dimensioning system or the object.

19. The dimensioning system according to claim 11, wherein the dimensioning system is handheld.
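Claims 9-10 and 14-15 combine the per-view range images by image-stitching using tracking or mapping information (e.g., from SLAM). A minimal sketch of that combining step, assuming each view's pose `(R, t)` in a common world frame is already known from tracking — estimating those poses is precisely what SLAM would do in a real system, and the example geometry below is hypothetical:

```python
import numpy as np

def stitch_point_sets(point_sets, poses):
    """Merge per-view 3D point sets into one composite cloud.
    Each pose is a (R, t) pair mapping that view's camera coordinates
    into a common world frame; here the poses are assumed known."""
    world = []
    for pts, (R, t) in zip(point_sets, poses):
        world.append(pts @ R.T + t)    # rigid transform into world frame
    return np.vstack(world)

# Hypothetical example: the same two surface points seen from two camera
# poses, the second translated 1 m along x relative to the first.
face = np.array([[0.0, 0.0, 2.0],
                 [0.5, 0.0, 2.0]])
identity = (np.eye(3), np.zeros(3))
shifted = (np.eye(3), np.array([1.0, 0.0, 0.0]))
composite = stitch_point_sets([face, face], [identity, shifted])
```

The composite cloud plays the role of the claims' "composite range-image": once all views share one frame, the object can be measured from the merged data as a whole.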