[US Patent]
Method for recognizing position of mobile robot by using features of arbitrary shapes on ceiling

Country / Type: United States (US) Patent, Granted
IPC (7th edition): G06K-009/00; H04W-004/02; G05D-001/02; G06T-007/00
Application No.: US-0241864 (2012-08-27)
Registration No.: US-9402151 (2016-07-26)
Priority: KR-10-2011-0086112 (2011-08-27)
International Application No.: PCT/KR2012/006809 (2012-08-27)
§371/§102 date: 2014-02-27
International Publication No.: WO2013/032192 (2013-03-07)
Inventors: Song, Jae-Bok; Hwang, Seo-Yeon
Applicant: KOREA UNIVERSITY RESEARCH AND BUSINESS FOUNDATION
Agent: Rabin & Berdo, P.C.
Times cited: 0 / Patents cited: 2
Abstract
The present invention provides a method for recognizing a position of a mobile robot by using arbitrarily shaped ceiling features on a ceiling, comprising: a providing step of providing a mobile robot device for recognizing a position by using arbitrarily shaped ceiling features on a ceiling which includes an image input unit, an encoder sensing unit, a computation unit, a control unit, a storage unit, and a driving unit; a feature extraction step of extracting features which include an arbitrarily shaped ceiling feature from an outline extracted from image information inputted through the image input unit; and a localization step of recognizing a position of the mobile robot device by using the extracted features, wherein, in the feature extraction step, a descriptor indicating the characteristics of the arbitrarily shaped ceiling feature is assigned.
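The claims below describe the feature-extraction stage concretely: binary-code the ceiling image, detect contours, label connected regions, and set the labeled groups as regions of interest (ROIs). A minimal sketch of that ROI-extraction stage follows; the brightness threshold, 4-connectivity flood fill, and minimum-area filter are illustrative assumptions, not values from the patent, which labels regions using a dilation operation with a circular window.

```python
import numpy as np

def extract_rois(image, threshold=200, min_area=5):
    """Binarize a grayscale ceiling image and group connected bright
    pixels into candidate regions of interest (ROIs).

    Simplified sketch of the patent's ROI extraction: the threshold,
    4-connectivity, and min_area are illustrative assumptions.
    """
    binary = image >= threshold                # image binary-coding step
    labels = np.zeros(image.shape, dtype=int)  # 0 = unlabeled
    current = 0
    rois = []
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue
        current += 1                           # labeling step: flood fill
        stack, pixels = [seed], []
        labels[seed] = current
        while stack:
            r, c = stack.pop()
            pixels.append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                        and binary[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
        if len(pixels) >= min_area:            # ROI setting step
            rois.append(np.array(pixels))
    return rois
```

Each returned ROI is an array of pixel coordinates; in the patent's pipeline its outer corners would then become the nodes of the ROI descriptor.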
Representative Claims
1. A method for localizing a mobile robot using arbitrarily shaped (AS) ceiling features, comprising: a device providing step of providing a mobile robot device which is localized using the arbitrarily shaped ceiling features and includes an image input unit, an encoder sensing unit, an arithmetic unit, a control unit, a storage unit, and a driving unit; a feature extraction step of extracting features which include an arbitrarily shaped feature from a contour extracted from image information inputted through the image input unit; and a localization step of localizing the mobile robot device using the extracted features, wherein a descriptor indicating the characteristics of the arbitrarily shaped feature is assigned in the feature extraction step, wherein the feature extraction step comprises: a region-of-interest (ROI) extraction step of detecting and labeling the contour from the image information, and extracting a region of interest (ROI); an ROI descriptor creation step of assigning a descriptor indicating the characteristics of the region of interest to the region of interest; and an ROI robustness confirmation step of confirming whether or not the region of interest is set as a feature used in the localization step based on the descriptor of the region of interest and a preset reference stored in the storage unit, wherein the ROI robustness confirmation step comprises: an ROI similarity calculation step of calculating a similarity defining a resemblance between regions of interest in the current image; an ROI uniqueness calculation step of calculating a region-of-interest (ROI) uniqueness by assigning a weighting factor to a similarity between other regions of interest adjacent to the region of interest in the current image; a uniqueness determination step of comparing the calculated ROI uniqueness with a preset uniqueness stored in the storage unit, and determining whether or not the calculated ROI uniqueness has stability as a feature; and an ROI robustness determination step of confirming whether or not the ROI uniqueness is used as a feature of the region of interest depending on a result of determination in the uniqueness determination step.

2. The method according to claim 1, wherein the region-of-interest (ROI) extraction step comprises: an image binary-coding step of binary-coding the image information; a contour detection step of detecting the contour from the binary-coded image information; a labeling step of grouping regions connected by the contour detected in the contour detection step; and an ROI setting step of setting the regions grouped in the labeling step as a region of interest.

3. The method according to claim 2, wherein the labeling step uses a dilation operation, and a circular window having a predetermined radius is used in the dilation operation.

4. The method according to claim 1, wherein the ROI descriptor creation step comprises: an ROI node distribution confirmation step of setting the outer corners of the region of interest as nodes of the region of interest, and confirming coordinates of the nodes of the region of interest from the outer corners of the region of interest; an ROI size confirmation step of confirming a size of the region of interest; and an ROI orientation confirmation step of confirming an orientation of the region of interest.

5. The method according to claim 4, wherein the coordinates of the nodes of the region of interest are polar coordinates.

6. The method according to claim 4, wherein the coordinates of the nodes of the region of interest are Cartesian coordinates.

7.
The method according to claim 1, wherein the ROI similarity calculation step comprises: an ROI confirmation step of confirming the regions of interest in the current image;an ROI setting step of setting a target region of interest and a comparison region of interest from the confirmed regions of interest;a node distribution similarity calculation step of calculating a similarity in node distribution between the target region of interest and the comparison region of interest;a size similarity calculation step of calculating a similarity in size between the target region of interest and the comparison region of interest;an orientation similarity calculation step of calculating a similarity in orientation between the target region of interest and the comparison region of interest;an ROI similarity calculation step of finally calculating a region-of-interest (ROI) similarity S from the node distribution similarity, the size similarity, and the orientation similarity;a remaining comparison ROI determination step of determining whether or not there exists a remaining comparison region of interest which is not compared with the target region of interest among the regions of interest in the current image; anda remaining ROI determination step of determining whether or not there exists a remaining region of interest which is not set as the target region of interest among the regions of interest in the current image if it is determined in the remaining comparison ROI determination step that there does not exist the remaining comparison region of interest. 8. 
The method according to claim 7, wherein the node distribution similarity calculation step comprises: a node distance calculation step of calculating a distance between the nodes of the target region of interest and the comparison region of interest; a node pair comparison and confirmation step of comparing the node distance with a preset similar range distance stored in the storage unit, confirming the number N of node pairs having the node distance d smaller than the similar range distance, and storing coordinate information of the node pairs; a target ROI rotation step of rotating the target region of interest about a center point of the target region of interest by a preset angle; a target ROI rotation completion determination step of confirming whether or not the rotation angle of the target region of interest reaches 360°; and a node distribution similarity operation step of operating the similarity in node distribution between the target region of interest and the comparison region of interest using the minimum node distance and the number of node pairs if it is determined in the target ROI rotation completion determination step that the rotation angle of the target region of interest reaches 360°.

9.
The method according to claim 1, wherein the localization step comprises: a prediction step of creating a predicted position of the mobile robot device at a current stage and a predicted image of an arbitrarily shaped feature at the predicted position, based on an estimated position of the mobile robot device using arbitrarily shaped ceiling features and an estimated image of the arbitrarily shaped ceiling feature at a previous stage, which are calculated based on image information obtained from the image input unit, and a signal of the encoder sensing unit; a feature matching step of confirming whether or not there is a match between the arbitrarily shaped feature extracted in the feature extraction step and the arbitrarily shaped feature in the predicted image, and creating match information; and an estimation step of correcting the predicted position of the mobile robot device and a predicted value of the arbitrarily shaped feature depending on the match information created in the feature matching step.

10. The method according to claim 9, wherein if it is determined in the feature matching step that the arbitrarily shaped feature in the predicted image does not match the arbitrarily shaped feature extracted from the image information obtained from the image input unit, the arbitrarily shaped feature in the image information obtained from the image input unit is set and stored as a new feature.

11.
The method according to claim 9, wherein the feature matching step comprises: a predicted image feature extraction step of extracting the arbitrarily shaped feature in the predicted image; a predicted image feature descriptor creation step of assigning a descriptor indicating the characteristics of the arbitrarily shaped feature to the arbitrarily shaped feature in the predicted image; and a predicted image feature matching step of comparing the arbitrarily shaped feature in the image information, which is extracted in the feature extraction step, with the arbitrarily shaped feature in the predicted image.

12. A method for localizing a mobile robot using arbitrarily shaped (AS) ceiling features, comprising: a device providing step of providing a mobile robot device which is localized using the arbitrarily shaped ceiling features and includes an image input unit, an encoder sensing unit, an arithmetic unit, a control unit, a storage unit, and a driving unit; a feature extraction step of extracting features which include an arbitrarily shaped feature from a contour extracted from image information inputted through the image input unit; and a localization step of localizing the mobile robot device using the extracted features, wherein a descriptor indicating the characteristics of the arbitrarily shaped feature is assigned in the feature extraction step, wherein the localization step comprises: a prediction step of creating a predicted position of the mobile robot device at a current stage and a predicted image of an arbitrarily shaped feature at the predicted position, based on an estimated position of the mobile robot device using arbitrarily shaped ceiling features and an estimated image of the arbitrarily shaped ceiling feature at a previous stage, which are calculated based on image information obtained from the image input unit, and a signal of the encoder sensing unit; a feature matching step of confirming whether or not there is a match between the arbitrarily shaped feature extracted in the feature extraction step and the arbitrarily shaped feature in the predicted image, and creating match information; and an estimation step of correcting the predicted position of the mobile robot device and a predicted value of the arbitrarily shaped feature depending on the match information created in the feature matching step, wherein the feature matching step comprises: a predicted image feature extraction step of extracting the arbitrarily shaped feature in the predicted image; a predicted image feature descriptor creation step of assigning a descriptor indicating the characteristics of the arbitrarily shaped feature to the arbitrarily shaped feature in the predicted image; and a predicted image feature matching step of comparing the arbitrarily shaped feature in the image information, which is extracted in the feature extraction step, with the arbitrarily shaped feature in the predicted image, wherein the predicted image feature matching step comprises: a predicted intersection number confirmation step of confirming a predicted number of intersections at which the extracted arbitrarily shaped feature in the image information intersects a region of uncertainty of the arbitrarily shaped feature in the predicted image; a predicted intersection number determination step of determining whether or not the predicted number of intersections in the predicted intersection number confirmation step is 1; a predicted feature similarity calculation step of calculating a similarity in predicted feature defining a resemblance between the arbitrarily shaped feature in the image information and the arbitrarily shaped feature in the predicted image if it is determined that the predicted number of intersections in the predicted intersection number confirmation step is 1; a feature similarity comparison step of comparing the similarity in predicted feature with a preset similarity in feature stored in the storage unit; and a feature similarity confirmation step of confirming whether or not there is a match between the arbitrarily shaped feature in the image information and the arbitrarily shaped feature in the predicted image depending on a result of comparison in the feature similarity comparison step.

13. The method according to claim 12, wherein if it is determined in the predicted intersection number determination step that the predicted number of intersections is zero, the control flow proceeds to a position restoration step, wherein the position restoration step comprises: a position restoration feature matching step of comparing all the descriptors between an arbitrarily shaped feature in current image information and features stored in the storage unit, confirming whether or not there is another matched feature stored in the storage unit, and assigning a candidate position of the mobile robot device to the circumference of the other matched feature; a mobile robot device orientation confirmation step of confirming an orientation of the mobile robot device using two-dimensional coordinates of the other matched feature on a global coordinate system, the position of the mobile robot device on the candidate position, and an angle formed between a reference coordinate axis in the image information and the other matched feature; and a priority candidate position selection step of assigning a priority to the candidate position based on the position information and orientation information on the other matched feature, and the orientation information of the mobile robot device.
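The node-distribution similarity of claim 8 can be sketched as follows: rotate the target ROI's corner nodes about the ROI center in fixed angular steps through 360°, and at each step count node pairs closer than a preset similar-range distance. The angular step, distance threshold, and the final normalization into a [0, 1] score are illustrative assumptions; the patent leaves them as presets stored in the storage unit and combines the pair count with the minimum node distance.

```python
import numpy as np

def node_distribution_similarity(target, comparison,
                                 step_deg=10.0, similar_range=0.1):
    """Rotate the target ROI nodes through 360 deg in step_deg increments
    and count node pairs within similar_range of a comparison node.

    target, comparison: (N, 2) arrays of ROI corner (node) coordinates.
    Returns the best matched-pair fraction over all tested rotations.
    step_deg, similar_range, and the normalization are assumptions.
    """
    center = target.mean(axis=0)
    best_pairs = 0
    for angle in np.arange(0.0, 360.0, step_deg):
        t = np.radians(angle)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        # target ROI rotation step: rotate nodes about the ROI center
        rotated = (target - center) @ rot.T + center
        # node distance calculation step: all pairwise node distances
        d = np.linalg.norm(rotated[:, None, :] - comparison[None, :, :],
                           axis=2)
        # node pair comparison step: count nodes within the similar range
        pairs = int((d.min(axis=1) < similar_range).sum())
        best_pairs = max(best_pairs, pairs)
    return best_pairs / max(len(target), len(comparison))
```

Because the target is swept through a full revolution, two ROIs with the same corner layout score highly regardless of their relative orientation, which is what makes the descriptor usable as a rotation-tolerant ceiling feature.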