Continuous updating of plan for robotic object manipulation based on received sensor data
IPC Classification
Country/Type: United States (US) patent, granted
International Patent Classification (IPC, 7th edition): B25J-018/00; B25J-009/16
Application number: US-0212994 (filed 2014-03-14)
Grant number: US-9238304 (granted 2016-01-19)
Inventors:
Bradski, Gary
Konolige, Kurt
Rublee, Ethan
Straszheim, Troy
Strasdat, Hauke
Hinterstoisser, Stefan
Applicant: Industrial Perception, Inc.
Attorney/Agent: McDonnell Boehnen Hulbert & Berghoff LLP
Citation information
Cited by: 4
Patents cited: 24
Abstract
Example systems and methods allow for dynamic updating of a plan to move objects using a robotic device. One example method includes determining a virtual environment by one or more processors based on sensor data received from one or more sensors, the virtual environment representing a physical environment containing a plurality of physical objects, developing a plan, based on the virtual environment, to cause a robotic manipulator to move one or more of the physical objects in the physical environment, causing the robotic manipulator to perform a first action according to the plan, receiving updated sensor data from the one or more sensors after the robotic manipulator performs the first action, modifying the virtual environment based on the updated sensor data, determining one or more modifications to the plan based on the modified virtual environment, and causing the robotic manipulator to perform a second action according to the modified plan.
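The abstract describes a sense, plan, act, re-sense, re-plan loop. The control flow can be sketched as follows; this is a minimal illustration, and the names `VirtualEnvironment`, `develop_plan`, and `control_loop` are assumptions for the sketch, not identifiers from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualEnvironment:
    """Simplified model of the physical scene, built from sensor data."""
    objects: dict = field(default_factory=dict)  # object id -> observation

    def update(self, sensor_data):
        # Merge new observations; objects revealed after a pick appear here.
        self.objects.update(sensor_data)

def develop_plan(env):
    """Toy planner: order the currently visible objects for pick-and-place."""
    return sorted(env.objects)

def control_loop(sensor_readings):
    """Sense -> plan -> act -> re-sense -> re-plan, as the abstract describes."""
    env = VirtualEnvironment()
    env.update(sensor_readings[0])      # initial scan
    plan = develop_plan(env)
    executed = []
    for reading in sensor_readings[1:]:
        if not plan:
            break
        executed.append(plan.pop(0))    # perform the first action in the plan
        env.update(reading)             # updated sensor data after the action
        plan = develop_plan(env)        # modify the plan from the modified env
        for done in executed:           # do not re-handle moved objects
            if done in plan:
                plan.remove(done)
    return executed
```

Each pass through the loop corresponds to one action, one updated scan, and one plan modification, matching the method steps recited in the abstract.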
Representative Claims
1. A method comprising: determining a virtual environment by one or more processors based on sensor data received from one or more sensors on a robotic manipulator, the virtual environment representing a physical environment containing a plurality of physical objects; developing a plan, based on the virtual environment, to cause the robotic manipulator to move one or more of the physical objects in the physical environment; causing the robotic manipulator to pick up at least one object of the plurality of physical objects with the robotic manipulator, move the at least one object to a drop-off location, and drop off the at least one object at the drop-off location according to the plan; after the robotic manipulator drops off the at least one object at the drop-off location, causing the robotic manipulator to perform a scan of an area of the physical environment vacated by the at least one object by moving the robotic manipulator while receiving updated sensor data from the one or more sensors on the robotic manipulator, wherein the updated sensor data is indicative of at least one other physical object previously obscured by the at least one object; modifying the virtual environment based on the updated sensor data; determining one or more modifications to the plan based on the modified virtual environment; and causing the robotic manipulator to perform another action according to the modified plan.

2. The method of claim 1, further comprising receiving the updated sensor data from the one or more sensors on the robotic manipulator based on continuous scans of the physical environment by the one or more sensors.

3. The method of claim 1, further comprising receiving the updated sensor data from the one or more sensors on the robotic manipulator based on periodic scans of the physical environment by the one or more sensors at a certain time interval.

4.
The method of claim 1, wherein: the plurality of physical objects comprise a plurality of different shapes of objects; and the virtual environment comprises information indicative of a shape of at least some of the physical objects within the physical environment.

5. The method of claim 1, wherein the virtual environment comprises a facade of some of the physical objects located along a plane within the physical environment, wherein the facade comprises a two-dimensional depth map of distances of the physical objects from the plane.

6. The method of claim 1, further comprising: causing the robotic manipulator to move into the area vacated by the at least one object to receive the updated sensor data from a point of view of the area of the physical environment vacated by the at least one object after the robotic manipulator drops off the at least one object; and determining one or more modifications to the plan based on the updated sensor data received from the point of view of the area of the physical environment vacated by the at least one object.

7. The method of claim 1, wherein determining one or more modifications to the plan comprises determining that the robotic manipulator failed to move an object according to the plan while performing an action according to the plan.

8. The method of claim 1, wherein developing the plan comprises determining an ordering of at least some of the physical objects, and wherein the plan indicates to cause the robotic manipulator to move the physical objects from an area of the physical environment in the determined ordering.

9. The method of claim 1, further comprising: determining an area of the physical environment vacated by moved physical objects; and causing a moveable cart on which the robotic manipulator is mounted to move into the vacated area in order to position the robotic manipulator to move one or more other objects of the plurality.

10.
The method of claim 1, wherein the plan indicates to cause the robotic manipulator to stack the physical objects onto a pallet within the physical environment, wherein the pallet comprises a portable platform capable of supporting a plurality of physical objects.

11. The method of claim 10, wherein the plan indicates to cause the robotic manipulator to stack the physical objects onto the pallet in a configuration that minimizes a total amount of air gaps between the physical objects.

12. The method of claim 10, wherein the plan indicates to cause the robotic manipulator to stack the physical objects onto the pallet into interlaced columns of physical objects, such that portions of certain physical objects are positioned within at least two of the interlaced columns.

13. The method of claim 1, wherein: the physical environment comprises a plurality of physical objects on a moving conveyer belt; the modified virtual environment comprises a representation of a current position of the physical objects on the moving conveyer belt; and the method further comprises determining one or more modifications to the plan based on the current position of the physical objects in the modified virtual environment.

14. The method of claim 1, wherein the virtual environment comprises a segmentation of the plurality of physical objects, and wherein the method further comprises causing the robotic manipulator to perturb the plurality of physical objects to determine the segmentation of the plurality of physical objects based on the sensor data.

15.
A system, comprising: a robotic manipulator; at least one sensor on the robotic manipulator; and a control system configured to: determine a virtual environment based on sensor data received from the at least one sensor on the robotic manipulator, the virtual environment representing a physical environment containing a plurality of physical objects; develop a plan, based on the virtual environment, to cause the robotic manipulator to move one or more of the physical objects in the physical environment; cause the robotic manipulator to pick up at least one object of the plurality of physical objects with the robotic manipulator, move the at least one object to a drop-off location, and drop off the at least one object at the drop-off location according to the plan; after the robotic manipulator drops off the at least one object at the drop-off location, cause the robotic manipulator to perform a scan of an area vacated by the at least one object by moving the robotic manipulator while receiving updated sensor data from the one or more sensors on the robotic manipulator, wherein the updated sensor data is indicative of at least one other physical object previously obscured by the at least one object; modify the virtual environment based on the updated sensor data; determine one or more modifications to the plan based on the modified virtual environment; and cause the robotic manipulator to perform another action according to the modified plan.

16. The system of claim 15, further comprising a moveable cart, wherein the robotic manipulator is mounted on the moveable cart, and wherein the control system is further configured to: determine an area of the physical environment vacated by removed physical objects; and cause the moveable cart to move into the vacated area in order to position the robotic manipulator to perform additional scanning of at least one remaining object of the plurality with the at least one sensor on the robotic manipulator.

17.
The system of claim 15, wherein the control system is further configured to: cause the robotic manipulator to move into the area vacated by the at least one physical object to receive the updated sensor data from a point of view of the area of the physical environment vacated by the at least one object after the robotic manipulator drops off the at least one object; and determine one or more modifications to the plan based on the sensor data received from the point of view of the area of the physical environment vacated by the at least one object.

18. The system of claim 15, wherein the control system is further configured to develop the plan by determining an ordering of at least some of the physical objects, and wherein the plan indicates to cause the robotic manipulator to move the physical objects from an area of the physical environment in the determined ordering.

19. The system of claim 15, wherein the plan indicates to cause the robotic manipulator to stack the physical objects onto a pallet within the physical environment, wherein the pallet comprises a portable platform capable of supporting a plurality of physical objects.

20.
A non-transitory computer readable medium having stored therein instructions, that when executed by a computing system, cause the computing system to perform functions comprising: determining a virtual environment by one or more processors based on sensor data received from one or more sensors on a robotic manipulator, the virtual environment representing a physical environment containing a plurality of physical objects; developing a plan, based on the virtual environment, to cause the robotic manipulator to move one or more of the physical objects in the physical environment; causing the robotic manipulator to pick up at least one object of the plurality of physical objects with the robotic manipulator, move the at least one object to a drop-off location, and drop off the at least one object at the drop-off location according to the plan; after the robotic manipulator drops off the at least one object at the drop-off location, causing the robotic manipulator to perform a scan of an area vacated by the at least one object by moving the robotic manipulator while receiving updated sensor data from the one or more sensors on the robotic manipulator, wherein the updated sensor data is indicative of at least one other physical object previously obscured by the at least one object; modifying the virtual environment based on the updated sensor data; determining one or more modifications to the plan based on the modified virtual environment; and causing the robotic manipulator to perform another action according to the modified plan.
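Claim 5 recites a facade: a two-dimensional depth map of the distances of the physical objects from a plane. One way such a map could be built from 3D sensor points is sketched below; the function name, grid dimensions, and cell size are illustrative assumptions, not details from the patent:

```python
import math

def build_facade(points, plane_z, nx, ny, cell):
    """Project 3D points (x, y, z) onto an nx-by-ny grid parallel to the
    plane z = plane_z. Each grid cell keeps the minimum observed distance
    of any point from the plane, yielding a 2D depth map of the object
    facade facing the sensor (cells with no points stay at infinity)."""
    depth = [[math.inf] * ny for _ in range(nx)]
    for x, y, z in points:
        i, j = int(x // cell), int(y // cell)  # grid cell for this point
        if 0 <= i < nx and 0 <= j < ny:
            depth[i][j] = min(depth[i][j], z - plane_z)
    return depth
```

A planner could then pick the object whose cells are nearest the plane first, consistent with the ordering step in claim 8.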
Patents cited by this patent (24)
Criswell, Tim; Fisher, Andrew; Aurora, Deepak, Automatic case loader and method for use of same.
Red, Walter E. (Provo, UT); Davies, Brady R. (Orem, UT); Wang, Xuguang (Provo, UT); Turner, Edgar R. (Provo, UT), Device and method for correction of robot inaccuracy.
Zhao, Wenyi; Hasser, Christopher J J; Nowlin, William C.; Hoffman, Brian D., Methods and systems for robotic instrument tool tracking with adaptive fusion of kinematics information and image information.
Nusser, Stefan; Rublee, Ethan; Straszheim, Troy Donald; Watts, Kevin William; Zevenbergen, John William, Methods and systems for distributing remote assistance to facilitate robotic object manipulation.
Straszheim, Troy Donald; Nusser, Stefan; Watts, Kevin William; Rublee, Ethan; Zevenbergen, John William, Methods and systems for distributing remote assistance to facilitate robotic object manipulation.