| Country / Type | United States (US) Patent, Granted |
|---|---|
| IPC (7th edition) | |
| Application No. | US-0409291 (2013-08-23) |
| Registration No. | US-9939529 (2018-04-10) |
| Priority | SE-1200514 (2012-08-27) |
| PCT Application No. | PCT/EP2013/067500 (2013-08-23) |
| PCT Publication No. | WO2014/033055 (2014-03-06) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citation Info | Cited by: 0 / Patents cited: 543 |
A robot positioning system having a camera, a processing unit and at least a first line laser. The first line laser is arranged to illuminate a space by projecting vertical laser beams within a field of view of the camera. The camera is arranged to record a picture of the space illuminated by the vertical laser beams, and the processing unit is arranged to extract, from the recorded picture, image data representing a line formed by the vertical laser beams being reflected against objects located within the space. The processing unit is further arranged to create, from the extracted line, a representation of the illuminated space along the projected laser lines, with respect to which the robot is positioned. Methods of positioning a robot are also provided.
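As a rough illustration of the line-extraction step described in the abstract (a sketch, not the patented implementation): a laser line reflected off obstacles shows up as a bright pixel in each camera row, so the line can be recovered by a per-row brightness peak. The function name, the threshold, and the synthetic frame below are all assumptions made for the example.

```python
import numpy as np

def extract_laser_line(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """For each image row, return the column index of the brightest pixel,
    or -1 where no pixel exceeds the threshold (no laser reflection)."""
    cols = image.argmax(axis=1)          # brightest column per row
    cols[image.max(axis=1) < threshold] = -1  # mark rows with no reflection
    return cols

# Synthetic 5x8 grayscale frame with one bright "laser" pixel per row.
frame = np.zeros((5, 8))
for row, col in enumerate([3, 3, 4, 4, 5]):
    frame[row, col] = 1.0

print(extract_laser_line(frame).tolist())  # -> [3, 3, 4, 4, 5]
```

The shape of the extracted column profile encodes, via triangulation, where the reflecting objects sit relative to the camera.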
1. A method of positioning a robot comprising: illuminating a space with at least a first line laser projecting vertical laser beams within a field of view of a camera; recording, with the camera, a recorded picture of the space illuminated by the vertical laser beams; generating an extracted line by extracting, from the recorded picture, image data representing a line formed by the vertical laser beams being reflected against objects located within the space; comparing the image data of the extracted line of the recorded picture with image data of a previously extracted line of at least one previously recorded picture; generating an adjusted extracted line by adjusting the extracted line on the basis of the comparison by using dead reckoning; and creating, from the adjusted extracted line, a representation of the illuminated space along the projected laser beams, and an estimate of a position of the robot in relation to the illuminated space.

2. The method of claim 1, wherein: the step of illuminating a space further comprises illuminating the space with at least a second line laser projecting second vertical laser beams within the field of view of the camera; the step of generating an extracted line comprises extracting, from the recorded picture, image data representing respective lines formed by the respective vertical laser beams of the first and second line lasers being reflected against objects located within the space; the step of generating an adjusted extracted line comprises adjusting the respective lines on the basis of the comparison of the image data by using dead reckoning; and the step of creating a representation from the adjusted extracted line comprises creating, from the extracted lines of the first and second line lasers, a representation of the illuminated space along the projected laser beams of the first and second line lasers.

3. The method of claim 1, further comprising: acquiring a position estimate of the camera; and utilizing the position estimate to create the representation of the illuminated space, the representation being associated with coordinates on the basis of the position estimate.

4. The method of claim 1, further comprising: rotating the camera around a vertical axis; and repeatedly recording pictures from which the representation is created.

5. The method of claim 2, further comprising: applying edge detection to the extracted image data in order to identify respective lines formed by the respective reflections of the respective beams of the first and second line lasers.

6. The method of claim 1, further comprising: mapping the image data to unique sensor array coordinates of the camera; and assigning the unique coordinates to the extracted lines, wherein the representation is associated with the unique coordinates.

7. A robot positioning system comprising: a camera; a processing unit; and at least a first line laser arranged to illuminate a space by projecting first vertical laser beams within a field of view of the camera; wherein the camera is arranged to record a picture of the space illuminated by the first vertical laser beams; and the processing unit is configured to extract, from the recorded picture, image data representing a first line formed by the first vertical laser beams being reflected against objects located within the space, compare the image data of the extracted first line of the recorded picture with image data of a previously extracted line of at least one previously recorded picture, adjust the image data representing the first line on the basis of the comparison by using dead reckoning, and create, from the image data representing the first line, a representation of the illuminated space along the projected first vertical laser beams and an estimation of a position of the robot in relation to the illuminated space.

8. The robot positioning system of claim 7, further comprising: a second line laser arranged to illuminate the space within the field of view of the camera by projecting second vertical laser beams; wherein the processing unit is arranged to extract, from the recorded picture, image data representing respective lines formed by the first and second vertical laser beams and further to create, from the respective extracted lines, a representation of the illuminated space along the projected first and second vertical laser beams.

9. The robot positioning system of claim 8, wherein the first and second line lasers are arranged on respective sides of the camera along an axis that is perpendicular to an optical axis of the camera.

10. The robot positioning system of claim 9, wherein the first and second line lasers are arranged such that the first and second vertical laser beams intersect within the field of view of the camera.

11. The robot positioning system of claim 8, wherein the directions of the first and second vertical laser beams and the field of view of the camera are arranged such that a width (wC) of the illuminated space located within the field of view of the camera is greater than a width (wR) of the robot on which the robot positioning system is arranged.

12. The robot positioning system of claim 8, wherein the directions of the first and second vertical laser beams and the field of view of the camera are arranged such that a height (hC) of the illuminated space located within the field of view of the camera is greater than a height (hR) of the robot on which the robot positioning system is arranged.

13. The robot positioning system of claim 7, further comprising: an optical filter arranged at the camera, which optical filter is adapted to a wavelength of the light emitted by the first and second line lasers.

14. The robot positioning system of claim 7, wherein the camera and the first line laser are mounted on a robot and arranged to be rotatable around a vertical axis.

15. The robot positioning system of claim 7, further comprising a positioning system for estimating an instantaneous position of the robot positioning system.

16. The robot positioning system of claim 15, wherein the processing unit is further configured to: utilize the position estimate to create the representation of the illuminated space, the representation being associated with coordinates on the basis of the position estimate.

17. The robot positioning system of claim 8, wherein the processing unit is further configured to: apply edge detection to the extracted image data in order to identify the respective lines formed by the reflections of the first and second vertical laser beams.

18. The robot positioning system of claim 7, wherein the processing unit is further configured to: map the image data to unique sensor array coordinates of the camera; and assign the unique coordinates to the extracted lines, wherein the representation is associated with the unique coordinates.

19. The robot positioning system of claim 8, further comprising: a dust sensor arranged to detect particles illuminated by the first and second line lasers by extracting image data indicating the illuminated particles, wherein operation of a robot on which the robot positioning system is arranged is controlled in response to the detection of particles by the dust sensor.

20. The robot positioning system of claim 7, wherein the processing unit is further configured to relate the created representation to a coordinate system which is fixed to a surface across which a robot associated with the robot positioning system moves.
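Claims 1 and 7 describe comparing the currently extracted line with a previously extracted one and adjusting the result using dead reckoning. A minimal sketch of that fusion step, assuming a simple mean-displacement comparison between line profiles and a fixed blending weight (both choices are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def estimate_shift(prev_line: np.ndarray, curr_line: np.ndarray) -> float:
    """Average horizontal displacement (in pixels) between two extracted lines."""
    return float(np.mean(curr_line - prev_line))

def fuse_with_dead_reckoning(visual_shift: float, odometry_shift: float,
                             visual_weight: float = 0.7) -> float:
    """Blend the camera-derived shift with the odometry (dead-reckoning) shift."""
    return visual_weight * visual_shift + (1 - visual_weight) * odometry_shift

# Previous and current laser-line column profiles (see the extraction sketch);
# here the whole line has shifted two pixels between frames.
prev = np.array([3, 3, 4, 4, 5], dtype=float)
curr = prev + 2.0
shift = estimate_shift(prev, curr)                     # visual estimate: 2.0
fused = fuse_with_dead_reckoning(shift, odometry_shift=1.8)
print(round(fused, 2))  # -> 1.94
```

In practice the comparison would be done per obstacle segment rather than as a single mean, and the weighting would depend on the confidence of each sensor, but the structure of the adjustment step is the same.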
Copyright KISTI. All Rights Reserved.