Detection and reconstruction of an environment to facilitate robotic interaction with the environment
IPC Classification Information
Country / Type: United States (US) Patent (Granted)
International Patent Classification (IPC, 7th ed.): G05B-019/18; B25J-009/16
Application Number: US-0212328 (2014-03-14)
Registration Number: US-9102055 (2015-08-11)
Inventors / Address: Konolige, Kurt; Rublee, Ethan; Hinterstoisser, Stefan; Straszheim, Troy; Bradski, Gary; Strasdat, Hauke
Applicant / Address: Industrial Perception, Inc.
Attorney / Address: McDonnell Boehnen Hulbert & Berghoff LLP
Citation Information: Cited by 55 patents; cites 26 patents
Abstract
Methods and systems for detecting and reconstructing environments to facilitate robotic interaction with such environments are described. An example method may involve determining a three-dimensional (3D) virtual environment representative of a physical environment of the robotic manipulator including a plurality of 3D virtual objects corresponding to respective physical objects in the physical environment. The method may then involve determining two-dimensional (2D) images of the virtual environment including 2D depth maps. The method may then involve determining portions of the 2D images that correspond to a given one or more physical objects. The method may then involve determining, based on the portions and the 2D depth maps, 3D models corresponding to the portions. The method may then involve, based on the 3D models, selecting a physical object from the given one or more physical objects. The method may then involve providing an instruction to the robotic manipulator to move that object.
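To make the pipeline in the abstract concrete, the sketch below shows one plausible, entirely hypothetical way such a flow could be wired up in Python with NumPy: a top-down orthographic depth map is built from a 3D point cloud, contiguous object-sized regions are segmented from the map, each region is extruded into a simple box model, and one box is chosen for the manipulator to move. All function names, thresholds, and the grid resolution are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch (not the patented implementation) of the abstract's flow:
# 3D point cloud -> 2D orthographic depth map -> object-sized regions ->
# simple 3D box models -> selection of one object to move.
import numpy as np

def top_down_depth_map(points, res=0.01, shape=(200, 200)):
    """Orthographic depth map: for each (x, y) cell, keep the highest z value."""
    depth = np.zeros(shape)
    ix = (points[:, 0] / res).astype(int)
    iy = (points[:, 1] / res).astype(int)
    ok = (ix >= 0) & (ix < shape[0]) & (iy >= 0) & (iy < shape[1])
    for x, y, z in zip(ix[ok], iy[ok], points[ok, 2]):
        depth[x, y] = max(depth[x, y], z)
    return depth

def segment_regions(depth, min_height=0.02, min_cells=25):
    """Group contiguous cells above the support plane into candidate object regions."""
    occupied = depth > min_height
    labels = np.zeros(depth.shape, dtype=int)
    regions, current = [], 0
    for seed in zip(*np.nonzero(occupied)):
        if labels[seed]:
            continue
        current += 1
        stack, cells = [seed], []
        while stack:
            x, y = stack.pop()
            if (0 <= x < depth.shape[0] and 0 <= y < depth.shape[1]
                    and occupied[x, y] and not labels[x, y]):
                labels[x, y] = current
                cells.append((x, y))
                stack += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        if len(cells) >= min_cells:
            regions.append(np.array(cells))
    return regions

def region_to_box(cells, depth, res=0.01):
    """Extrude a 2D region into an axis-aligned box (two parallel faces of one shape)."""
    (x0, y0), (x1, y1) = cells.min(axis=0), cells.max(axis=0)
    return {"x": x0 * res, "y": y0 * res,
            "length": (x1 - x0 + 1) * res, "width": (y1 - y0 + 1) * res,
            "height": float(depth[x0:x1 + 1, y0:y1 + 1].max())}

def pick_object(boxes, manipulator_xy=(0.0, 0.0)):
    """Prefer the tallest box, breaking ties by proximity to the manipulator."""
    return min(boxes, key=lambda b: (-b["height"],
               np.hypot(b["x"] - manipulator_xy[0], b["y"] - manipulator_xy[1])))
```

A production system would of course replace the crude flood-fill segmentation and box extrusion with the template- and dimension-constrained detection the claims describe.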
Representative Claims
1. A method comprising: determining a three-dimensional (3D) virtual environment based on data received from one or more sensors, the 3D virtual environment being representative of a physical environment of a robotic manipulator including a plurality of 3D virtual objects corresponding to respective physical objects in the physical environment; determining one or more two-dimensional (2D) images of the 3D virtual environment, wherein the one or more 2D images include respective 2D depth maps representative of distances between respective surfaces of the physical objects and a reference plane associated with a perspective of the one or more sensors; determining one or more portions of the one or more 2D images corresponding to a given one or more physical objects; based on the determined one or more portions of the one or more 2D images and further based on portions of the 2D depth maps associated with the given one or more physical objects, determining 3D models corresponding to respective determined portions of the one or more 2D images; based on the determined 3D models, selecting a particular physical object from the given one or more physical objects; and providing an instruction to the robotic manipulator to move the particular physical object.

2. The method of claim 1, wherein the determined one or more 2D images include one or more 2D orthographic projections of the determined 3D virtual environment.

3. The method of claim 1, wherein respective 3D models of the determined 3D models include two parallel surfaces of an identical shape, wherein the shape is a shape of the respective determined portion of the one or more 2D images to which the respective 3D model corresponds.

4. The method of claim 1, wherein determining the one or more portions of the one or more 2D images comprises determining one or more portions of the one or more 2D images based on a predetermined range of dimensions of the physical objects.

5. The method of claim 4, wherein the predetermined range of dimensions of the physical objects is bound by one or more of: a minimum height of the physical objects, a minimum length of the physical objects, a minimum width of the physical objects, an average height of the physical objects, an average length of the physical objects, an average width of the physical objects, a maximum height of the physical objects, a maximum length of the physical objects, and a maximum width of the physical objects.

6. The method of claim 5, wherein determining the 3D models corresponding to respective determined portions of the one or more 2D images is based on the predetermined range of dimensions of the physical objects.

7. The method of claim 1, wherein determining the one or more portions of the one or more 2D images comprises determining the one or more portions based on templates corresponding to known polygonic shapes so that the one or more portions include dimensions of the known polygonic shapes.

8. The method of claim 1, further comprising: refining one or more of the determined 3D models based on templates corresponding to known 3D geometric shapes so that the one or more refined 3D models include dimensions of the known 3D geometric shapes, wherein selecting the particular physical object from the given one or more physical objects is based on the one or more refined 3D models.
9. A non-transitory computer readable medium having stored thereon instructions that, upon execution by a computing device, cause the computing device to perform functions comprising: determining a three-dimensional (3D) virtual environment based on data received from one or more sensors, the 3D virtual environment being representative of a physical environment of a robotic manipulator including a plurality of 3D virtual objects corresponding to respective physical objects in the physical environment; determining one or more two-dimensional (2D) images of the 3D virtual environment, wherein the one or more 2D images include respective 2D depth maps representative of distances between respective surfaces of the physical objects and a reference plane associated with a perspective of the one or more sensors; determining one or more portions of the one or more 2D images corresponding to a given one or more physical objects; based on the determined one or more portions of the one or more 2D images and further based on portions of the 2D depth maps associated with the given one or more physical objects, determining 3D models corresponding to respective determined portions of the one or more 2D images; based on the determined 3D models, selecting a particular physical object from the given one or more physical objects; and providing an instruction to the robotic manipulator to move the particular physical object.

10. The non-transitory computer readable medium of claim 9, wherein the physical objects comprise a stacked pallet of physical objects within the physical environment, and wherein the one or more 2D images include one or more of: at least one 2D projection associated with a side view of the stacked pallet of physical objects and at least one 2D projection associated with a top-down view of the stacked pallet of physical objects.

11. The non-transitory computer readable medium of claim 9, wherein the physical objects comprise a stacked pallet of physical objects within the physical environment, and wherein selecting the particular physical object from the given one or more physical objects comprises selecting, as the particular physical object, an object on a top row of the stacked pallet closest in proximity to the robotic manipulator.

12. The non-transitory computer readable medium of claim 9, wherein selecting the particular physical object from the given one or more physical objects based on the determined 3D models comprises selecting a physical object corresponding to a 3D model that includes at least three surfaces different from surfaces of the 3D model that are proximate to surfaces of other 3D models of the determined 3D models.

13. The non-transitory computer readable medium of claim 9, the functions further comprising: based on dimensions of a determined 3D model that corresponds to the particular physical object, determining a location on the particular physical object at which to instruct the robotic manipulator to initiate movement of the particular physical object, wherein providing the instruction to the robotic manipulator to move the particular physical object comprises providing an instruction to the robotic manipulator to contact the particular physical object at the determined location on the particular physical object so as to initiate movement of the particular physical object.
14. The non-transitory computer readable medium of claim 9, the functions further comprising: based on movement of other physical objects from the physical environment by the robotic manipulator before the movement of the particular physical object by the robotic manipulator, based on portions of the 2D images corresponding to the other moved physical objects, and based on 3D models corresponding to the other moved physical objects, determining a range of dimensions of the other moved physical objects, wherein determining the one or more portions of the one or more 2D images is based on the determined range of dimensions of the other moved physical objects, wherein determining the 3D models corresponding to respective determined portions of the one or more 2D images is based on the determined range of dimensions of the other moved physical objects.

15. A system comprising: a robotic manipulator; one or more sensors; at least one processor; and data storage comprising instructions executable by the at least one processor to cause the system to perform functions comprising: determining a three-dimensional (3D) virtual environment based on data received from the one or more sensors, the 3D virtual environment being representative of a physical environment of the robotic manipulator including a plurality of 3D virtual objects corresponding to respective physical objects in the physical environment; determining one or more two-dimensional (2D) images of the 3D virtual environment, wherein the one or more 2D images include respective 2D depth maps representative of distances between respective surfaces of the physical objects and a reference plane associated with a perspective of the one or more sensors; determining one or more portions of the one or more 2D images corresponding to a given one or more physical objects; based on the determined one or more portions of the one or more 2D images and further based on portions of the 2D depth maps associated with the given one or more physical objects, determining 3D models corresponding to respective determined portions of the one or more 2D images; based on the determined 3D models, selecting a particular physical object from the given one or more physical objects; and providing an instruction to the robotic manipulator to move the particular physical object.

16. The system of claim 15, wherein at least one edge of a respective determined portion is determined based at least in part on boundaries between substantially proximate physical objects, and wherein the boundaries are indicated by one or more of the 2D depth maps.

17. The system of claim 15, wherein at least one edge of a respective determined portion is determined based on one or more predetermined dimensions of the physical objects.

18. The system of claim 15, wherein determining the 3D models is further based on templates representative of known 3D models, the functions further comprising: determining one or more features on the 3D model that corresponds to the selected particular physical object based on corresponding marked features in a given template representative of a known 3D model that was used to determine the 3D model that corresponds to the selected particular physical object, wherein providing the instruction to the robotic manipulator to move the particular physical object comprises providing the instruction to the robotic manipulator to move the particular physical object based on the determined one or more features.
19. The system of claim 15, wherein the physical objects include cuboid-shaped physical objects, and wherein determining the one or more portions of the one or more 2D images comprises determining one or more portions of the one or more 2D images that are convex polygons.

20. The system of claim 15, wherein determining one or more portions of the one or more 2D images comprises determining one or more substantially quadrilateral portions of the one or more 2D images.

21. The system of claim 15, wherein the physical environment includes one or more walls at least partially surrounding the physical objects, and wherein the data received from the one or more sensors includes a plurality of 3D data points associated with the physical environment, the functions further comprising: determining one or more portions of the 3D data points representative of locations in the physical environment that exceed a threshold distance from the robotic manipulator and are substantially proximate to the one or more walls in the physical environment, and determining one or more virtual planes in the 3D virtual environment that fit respective portions of the one or more portions of the 3D data points, wherein the one or more virtual planes are representative of the one or more walls in the physical environment, wherein providing the instruction to the robotic manipulator to move the particular physical object comprises providing an instruction to the robotic manipulator to move the particular physical object while avoiding physical contact between the robotic manipulator and the one or more walls and between the particular physical object and the one or more walls.

22. The system of claim 15, wherein the physical environment includes one or more walls at least partially surrounding the physical objects, and wherein the data received from the one or more sensors includes a plurality of 3D data points associated with the physical environment, the functions further comprising: determining one or more portions of the 3D data points representative of locations in the physical environment that exceed a threshold distance from the robotic manipulator and are substantially proximate to the one or more walls in the physical environment, determining one or more 2D orthographic projections of the one or more portions of the 3D data points, and based on the one or more 2D orthographic projections, determining one or more 3D models associated with respective 2D orthographic projections and representative of the one or more walls in the physical environment, wherein the 3D virtual environment includes the one or more 3D models representative of the one or more walls, and wherein providing the instruction to the robotic manipulator to move the particular physical object comprises providing an instruction to the robotic manipulator to move the particular physical object while avoiding physical contact between the robotic manipulator and the one or more walls and between the particular physical object and the one or more walls.
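Claims 21 and 22 describe modelling surrounding walls as virtual planes fitted to 3D data points that lie far from the manipulator, so the robot can avoid them while moving objects. The snippet below is a minimal sketch of such a step, assuming a simple least-squares (SVD) plane fit; the function name, distance threshold, and synthetic test data are illustrative assumptions rather than the patented method.

```python
# Hypothetical wall-plane fit (cf. claims 21-22): keep 3D points beyond a
# distance threshold from the manipulator and fit a plane n·p + d = 0 to them.
import numpy as np

def fit_wall_plane(points, manipulator_pos, min_distance=1.5):
    """Least-squares plane fit to points presumed to lie on a wall."""
    dist = np.linalg.norm(points - manipulator_pos, axis=1)
    wall_points = points[dist > min_distance]        # candidate wall points
    centroid = wall_points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(wall_points - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

# Usage: noisy points sampled on a wall at x = 2.0 m, manipulator at the origin.
rng = np.random.default_rng(0)
pts = np.column_stack([2.0 + rng.normal(0, 0.005, 500),
                       rng.uniform(-1.0, 1.0, 500),
                       rng.uniform(0.0, 2.0, 500)])
normal, d = fit_wall_plane(pts, manipulator_pos=np.zeros(3))
# normal comes out approximately (±1, 0, 0); the fitted plane can then act as a
# keep-out surface when planning the manipulator's motion.
```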
Patents cited by this patent (26)
Criswell, Tim; Fisher, Andrew; Aurora, Deepak, Automatic case loader and method for use of same.
Chung, Yoon Su; Cheong, Hee Jeong; Kim, Jin Seog; Kweon, In So, Automatic parcel volume capture system and volume capture method using parcel image recognition.
Red, Walter E. (Provo, UT); Davies, Brady R. (Orem, UT); Wang, Xuguang (Provo, UT); Turner, Edgar R. (Provo, UT), Device and method for correction of robot inaccuracy.
Cottone, Norbert; Eberhardt, Thorsten; Heissmeyer, Sven; Hollinger, Alexander; Peghini, Martin, Method and system for depalletizing tires using a robot.
Zhao, Wenyi; Hasser, Christopher J J; Nowlin, William C.; Hoffman, Brian D., Methods and systems for robotic instrument tool tracking with adaptive fusion of kinematics information and image information.
Imai, Daiji; Kita, Takahito; Kondo, Motomasa; Yamada, Hiroshi; Nadatani, Ryu, Storage medium encoded with display control program, display, display system, and display control method.
Ramalingam, Srikumar; Taguchi, Yuichi, Method for reconstructing a 3D scene as a 3D model using images acquired by 3D sensors and omnidirectional cameras.
Kirmani, Ghulam Ahmed; Colaco, Andrea; Shin, Dongeek, Simulating an infrared emitter array in a video monitoring camera to construct a lookup table for depth determination.
Kirmani, Ghulam Ahmed; Colaco, Andrea; Shin, Dongeek, Using a scene illuminating infrared emitter array in a video monitoring camera for depth determination.
Dixon, Michael; Shin, Dongeek; Heitz, III, George Alban; Varadharajan, Srivatsan, Using scene information from a security camera to reduce false security alerts.
Morency, Sylvain-Paul; Ducharme, Marc; Jodoin, Robert; Simon, Christian; Fournier, Jonathan; Lemay, Sebastien, Vision-assisted system and method for picking of rubber bales in a bin.