Methods and systems for recognizing machine-readable information on three-dimensional objects
IPC Classification
Country / Type
United States (US) Patent
Registered
International Patent Classification (IPC, 7th Edition)
B25J-009/16
B65G-047/90
B65G-047/46
B65G-047/50
B65G-061/00
B25J-019/02
B65H-067/06
Application Number
US-0213191 (2014-03-14)
Registration Number
US-9227323 (2016-01-05)
Inventors / Address
Konolige, Kurt
Rublee, Ethan
Bradski, Gary
Applicant / Address
Google Inc.
Agent / Address
McDonnell Boehnen Hulbert & Berghoff LLP
Citation Information
Cited by: 5
Patents cited: 25
Abstract
Methods and systems for recognizing machine-readable information on three-dimensional (3D) objects are described. A robotic manipulator may move at least one physical object through a designated area in space. As the at least one physical object is being moved through the designated area, one or more optical sensors may determine a location of a machine-readable code on the at least one physical object and, based on the determined location, scan the machine-readable code so as to determine information associated with the at least one physical object encoded in the machine-readable code. Based on the information associated with the at least one physical object, a computing device may then determine a respective location in a physical environment of the robotic manipulator at which to place the at least one physical object. The robotic manipulator may then be directed to place the at least one physical object at the respective location.
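The workflow the abstract describes — scan a code on an object in flight, decode the object's information, then choose a placement location from that information — can be sketched as follows. This is a minimal illustration only, not the patented implementation; the payload format, `ROUTING_TABLE`, and all function names are hypothetical.

```python
# Illustrative sketch of the scan-decode-route flow from the abstract.
# The patent does not publish code; every name below is hypothetical.

def decode_label(payload: str) -> dict:
    """Parse a hypothetical 'key=value;key=value' payload from a scanned code."""
    return dict(field.split("=", 1) for field in payload.split(";") if "=" in field)

# Hypothetical routing rule: placement coordinates keyed on the decoded
# destination field (e.g. a pallet or transport structure).
ROUTING_TABLE = {"pallet-A": (1.0, 2.0), "pallet-B": (3.5, 0.5)}

def placement_location(payload: str) -> tuple:
    """Map a decoded code payload to a place location for the manipulator."""
    info = decode_label(payload)
    return ROUTING_TABLE[info["dest"]]

print(placement_location("dest=pallet-A;weight=4.2"))  # -> (1.0, 2.0)
```

In the claimed system this decision would feed the robotic manipulator's place command; here it simply returns coordinates.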
Representative Claims
1. A system comprising: a robotic manipulator; one or more optical sensors; at least one processor; and data storage comprising instructions executable by the at least one processor to cause the system to perform functions comprising: based on data indicating that a machine-readable code is included on at least one face of at least one physical object, causing the robotic manipulator to grasp the at least one physical object at one or more locations on the at least one physical object different from the at least one face and move the at least one physical object through a designated area in space of a physical environment of the system such that the at least one face of the at least one physical object moves within a particular field of view of the one or more optical sensors; causing the one or more optical sensors to (i) determine a location of the machine-readable code on the at least one physical object as the at least one physical object is moved through the designated area in space and (ii) based on the determined location, scan the machine-readable code as the at least one physical object is moved through the designated area in space so as to determine information associated with the at least one physical object encoded in the machine-readable code; and based on the information associated with the at least one physical object, determining a respective location in the physical environment at which to cause the robotic manipulator to place the at least one physical object.

2. The system of claim 1, wherein the information associated with the at least one physical object includes one or more of: dimensions of the at least one physical object, a weight of the at least one physical object, a respective transport structure on which to place the at least one physical object, a respective transport vehicle on which to place the at least one physical object, and an identifier of at least one other physical object in the physical environment at which to place the at least one physical object proximate to.

3. The system of claim 1, wherein the one or more optical sensors include multiple optical sensors coupled to respective locations surrounding the designated area, the functions further comprising: determining a focus point in space in the designated area at which to focus the scanning performed by the multiple optical scanners; and causing the robotic manipulator to orient the at least one physical object while moving the at least one physical object through the designated area in space such that the machine-readable code is scanned at the determined focus point.

4. The system of claim 1, the functions further comprising: causing the robotic manipulator to place the at least one physical object at the respective location after a predetermined period of time from the determining of the information associated with the at least one physical object encoded in the machine-readable code.

5. The system of claim 1, wherein scanning the machine-readable code comprises: determining at least one image corresponding to at least a portion of the machine-readable code; modifying a scale and orientation of the at least one image; making a comparison between the modified at least one image and a predetermined machine-readable code template; based on the comparison, determining a remaining portion of the machine-readable code so as to enable the one or more optical sensors to scan the machine-readable code; and scanning the machine-readable code.

6. The system of claim 1, wherein the one or more optical sensors include multiple depth sensors, the functions further comprising: as the at least one physical object is moved through the designated area in space, receiving sensor data from the multiple depth sensors, the sensor data being representative of a three-dimensional (3D) virtual model of the at least one physical object; and determining dimensions of the at least one physical object from the sensor data, wherein determining the respective location in the physical environment at which to cause the robotic manipulator to place the at least one physical object is further based on the determined dimensions of the at least one physical object.

7. A method performed by a computing device, the method comprising: based on data indicating that machine-readable information is included on at least one face of at least one physical object, causing a robotic manipulator to grasp the at least one physical object at one or more locations on the at least one physical object different from the at least one face and move the at least one physical object through a designated area in space such that the at least one face of the at least one physical object moves within a particular field of view of the one or more optical sensors, wherein the machine-readable information comprises information associated with the at least one physical object; determining, via one or more optical sensors, a location of the machine-readable information included on the at least one physical object as the at least one physical object is moved through the designated area in space; based on the determined location and further based on a scanning of the machine-readable information by the one or more optical sensors as the at least one physical object is moved through the designated area in space, receiving the information associated with the at least one physical object; based on the information associated with the at least one physical object, determining a respective location in a physical environment of the robotic manipulator at which to place the at least one physical object; and causing the robotic manipulator to move the at least one physical object to the respective location.

8. The method of claim 7, further comprising: determining a trajectory and a speed at which to move the at least one physical object through the designated area in space, wherein causing the robotic manipulator to move the at least one physical object through the designated area in space comprises causing the robotic manipulator to move the at least one physical object through the designated area in space at the determined trajectory and at the determined speed; and based on the determined trajectory and the determined speed, determining respective scan times for the one or more optical sensors, wherein the scanning of the machine-readable information by the one or more optical sensors comprises the computing device causing the one or more optical sensors to scan the machine-readable information at the determined respective scan times as the at least one physical object is moved through the designated area in space at the determined trajectory and at the determined speed.

9. The method of claim 7, further comprising: in response to the scanning, receiving data indicative of an incomplete scanning of the at least one physical object; based on the data indicative of the incomplete scanning and further based on the determined location, causing the robotic manipulator to move the at least one physical object through the designated area; and as the at least one physical object is moved through the designated area, causing the one or more optical sensors to rescan the machine-readable information so as to determine the information associated with the at least one physical object.

10. The method of claim 7, wherein the machine-readable information included on the at least one physical object includes one or more of: a one-dimensional (1D) barcode, a two-dimensional (2D) barcode, and at least one text string.

11. The method of claim 7, wherein the machine-readable information included on the at least one physical object includes at least one text string, and wherein the scanning of the machine-readable information as the at least one physical object is moved through the designated area in space comprises: determining at least one image of the machine-readable information; determining the at least one text string using optical character recognition (OCR); and determining the information associated with the at least one physical object based on the text string.

12. The method of claim 7, wherein the one or more optical sensors include multiple optical sensors coupled to respective locations surrounding the designated area, and wherein the multiple optical sensors include telephoto lenses, the method further comprising: determining a focus point in space in the designated area at which to focus the scanning performed by the multiple optical scanners; and causing the robotic manipulator to orient the at least one physical object while moving the at least one physical object through the designated area in space such that the machine-readable information is scanned at the determined focus point.

13. The method of claim 7, wherein the one or more optical sensors include multiple depth sensors, the method further comprising: as the at least one physical object is moved through the designated area in space, the computing device receiving sensor data from the multiple depth sensors, the sensor data being representative of a three-dimensional (3D) virtual model of the at least one physical object; and determining dimensions of the at least one physical object from the sensor data, wherein determining the respective location in the physical environment at which to cause the robotic manipulator to place the at least one physical object is further based on the determined dimensions of the at least one physical object.
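Claim 8's timing idea — once the trajectory and speed are fixed, the moment the object passes each sensor's focus point determines that sensor's scan time — can be sketched for the simple case of a straight-line path at constant speed. The geometry and positions below are invented for illustration; the claim itself does not limit the trajectory to a straight line.

```python
# Hedged sketch of claim 8's scan-time computation, assuming a
# straight-line trajectory at constant speed. Positions are hypothetical
# 2D points in meters; a real system would work in 3D with the robot's
# actual planned trajectory.
import math

def scan_times(start, focus_points, speed):
    """Seconds after departure at which the object reaches each sensor's
    focus point, moving from `start` at constant `speed` (m/s)."""
    return [math.dist(start, p) / speed for p in focus_points]

# Two sensors whose focus points lie 0.5 m and 1.0 m along the path.
times = scan_times((0.0, 0.0), [(0.3, 0.4), (0.6, 0.8)], speed=0.5)
print(times)  # approximately [1.0, 2.0] seconds
```

Each sensor would then be triggered at its computed time, which is how the claimed method synchronizes scanning with the object's motion through the designated area.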
Patents cited by this patent (25)
Criswell, Tim; Fisher, Andrew; Aurora, Deepak, Automatic case loader and method for use of same.
Chung, Yoon Su; Cheong, Hee Jeong; Kim, Jin Seog; Kweon, In So, Automatic parcel volume capture system and volume capture method using parcel image recognition.
Red, Walter E. (Provo, UT); Davies, Brady R. (Orem, UT); Wang, Xuguang (Provo, UT); Turner, Edgar R. (Provo, UT), Device and method for correction of robot inaccuracy.
Cottone, Norbert; Eberhardt, Thorsten; Heissmeyer, Sven; Hollinger, Alexander; Peghini, Martin, Method and system for depalletizing tires using a robot.
Zhao, Wenyi; Hasser, Christopher J J; Nowlin, William C.; Hoffman, Brian D., Methods and systems for robotic instrument tool tracking with adaptive fusion of kinematics information and image information.
Morency, Sylvain-Paul; Ducharme, Marc; Jodoin, Robert; Simon, Christian; Fournier, Jonathan; Lemay, Sebastien, Vision-assisted system and method for picking of rubber bales in a bin.