System and method for seamless task-directed autonomy for robots
IPC Classification Information
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06F-017/00; G05B-015/00
Application Number: US-0048110 (2008-03-13)
Registration Number: US-8271132 (2012-09-18)
Inventors: Nielsen, Curtis; Bruemmer, David; Few, Douglas; Walton, Miles
Applicant: Battelle Energy Alliance, LLC
Agent: TraskBritt
Citation Information: cited by 18 patents; cites 86 patents
Abstract
Systems, methods, and user interfaces are used for controlling a robot. An environment map and a robot designator are presented to a user. The user may place, move, and modify task designators on the environment map. The task designators indicate a position in the environment map and indicate a task for the robot to achieve. A control intermediary links task designators with robot instructions issued to the robot. The control intermediary analyzes a relative position between the task designators and the robot. The control intermediary uses the analysis to determine a task-oriented autonomy level for the robot and communicates target achievement information to the robot. The target achievement information may include instructions for directly guiding the robot if the task-oriented autonomy level indicates low robot initiative and may include instructions for directing the robot to determine a robot plan for achieving the task if the task-oriented autonomy level indicates high robot initiative.
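The control intermediary described above selects an autonomy level from the relative position of the robot and a task designator. The following minimal sketch illustrates that decision rule; all names, the planar-distance metric, and the single distance threshold are illustrative assumptions, not taken from the patent text.

```python
import math

def autonomy_level(robot_pos, task_pos, threshold=5.0):
    """Return 'high' initiative when the task designator is far from the
    robot, 'low' initiative when it is close enough for direct guidance.
    The 2-D Euclidean metric and the threshold value are assumptions."""
    dx = task_pos[0] - robot_pos[0]
    dy = task_pos[1] - robot_pos[1]
    return "high" if math.hypot(dx, dy) > threshold else "low"

def target_achievement_info(robot_pos, task_pos):
    """Build the target achievement information sent to the robot."""
    if autonomy_level(robot_pos, task_pos) == "low":
        # Low robot initiative: the intermediary guides the robot directly.
        return {"mode": "guide", "waypoint": task_pos}
    # High robot initiative: the robot is asked to form its own plan.
    return {"mode": "plan", "goal": task_pos}

print(target_achievement_info((0.0, 0.0), (10.0, 0.0)))  # distant: robot plans
print(target_achievement_info((0.0, 0.0), (1.0, 1.0)))   # nearby: direct guidance
```

A real intermediary would also weigh the other change signals the claims list (movement timing, event horizon, designator switches); this sketch reduces the analysis to distance alone for clarity.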
Representative Claims
1. A graphical user interface for controlling a robot, comprising: an environment map window configured for displaying a map of an environment proximate the robot; a robot designator configured for showing a robot position and a robot pose in the environment map window; at least one task designator configured for positioning by a user in the environment map window and indicating a task for the robot to achieve; and a control intermediary configured for linking user defined tasks with robot instructions by: analyzing a position of the at least one task designator relative to a position of at least one robot component; determining a task-oriented autonomy level for the robot responsive to the analysis; and communicating target achievement information to the robot, wherein the target achievement information comprises: instructions from the control intermediary for guiding the robot to achieve the task if the task-oriented autonomy level comprises low robot initiative; and instructions from the control intermediary directing the robot to determine a robot plan for achieving the task if the task-oriented autonomy level comprises high robot initiative.

2. The graphical user interface of claim 1, wherein the map is selected from the group consisting of a two-dimensional map and a three-dimensional map.

3. The graphical user interface of claim 1, wherein the at least one task designator includes position information selected from the group consisting of two-dimensional information and three-dimensional information.

4. The graphical user interface of claim 1, wherein the at least one task designator includes task attribute information.

5. The graphical user interface of claim 1, wherein the at least one task designator is selected from the group consisting of a navigation target, an imaging target, an artillery target, a sensor target, and a manipulator target.

6. The graphical user interface of claim 1, wherein analyzing a position further comprises analyzing a change in the at least one task designator, wherein the change is selected from the group consisting of: a time interval between movements of the at least one task designator; a distance between the robot and the at least one task designator; an event horizon relative to the at least one task designator; a change from one task designator to another task designator; and combinations thereof.

7. The graphical user interface of claim 1, wherein instructions from the control intermediary for guiding the robot include instructions for robot conduct and robot behavior selected from the group consisting of focus behaviors, manipulation behaviors, go-to points, waypoints, path-planning, search region, patrol region, retro-traverse, and laser tracking.

8. A method for controlling a robot, comprising: providing a user interface for controlling the robot; positioning at least one task-oriented target in a map on the user interface representing an environment of the robot; determining a task-oriented autonomy level of the robot correlated to a change in the at least one task-oriented target; if the change in the at least one task-oriented target is smaller than a change threshold, instructing the robot to achieve the at least one task-oriented target by intervention instructions from the user interface; and if the change in the at least one task-oriented target is larger than the change threshold, instructing the robot to achieve the at least one task-oriented target using robot initiative to determine a robot plan for achieving the at least one task-oriented target and implementing the robot plan.

9. The method of claim 8, further comprising selecting the at least one task-oriented target from the group consisting of a navigation target, an imaging target, an artillery target, a sensor target, and a manipulator target.

10. The method of claim 8, wherein the change in the at least one task-oriented target is selected from the group consisting of: a time interval between movements of the at least one task-oriented target; a distance between the robot and the at least one task-oriented target; an event horizon relative to the at least one task-oriented target; a change from one task-oriented target to another task-oriented target; and combinations thereof.

11. The method of claim 8, wherein the user interface adjusts the task-oriented autonomy level between a level selected from the group consisting of: a teleoperation mode configured to maximize a user interface intervention and minimize the robot initiative; a safe mode configured to include less of the user interface intervention and more of the robot initiative relative to the teleoperation mode; a shared mode configured to include less of the user interface intervention and more of the robot initiative relative to the safe mode; a collaborative tasking mode configured to include less of the user interface intervention and more of the robot initiative relative to the shared mode; and an autonomous mode configured to minimize the user interface intervention and maximize the robot initiative.

12. The method of claim 8, wherein the intervention instructions include instructions for robot conduct and robot behavior selected from the group consisting of focus behaviors, manipulation behaviors, go-to points, waypoints, path-planning, search region, patrol region, retro-traverse, and laser tracking.

13. The method of claim 8, wherein the robot plan includes robot conduct and robot behavior selected from the group consisting of focus behaviors, manipulation behaviors, go-to points, waypoints, path-planning, search region, patrol region, retro-traverse, and laser tracking.

14. The method of claim 8, wherein positioning the at least one task-oriented target in the map is performed by a user and the intervention instructions are determined by the user interface responsive to the change in the at least one task-oriented target.

15. A method for controlling a robot, comprising: receiving instructions for achieving at least one task-oriented target from a user interface, the instructions comprising at least one of intervention instructions, robot initiative instructions, and instructions for setting a task-oriented autonomy level; if the instructions are the robot initiative instructions, then developing a robot plan to achieve the at least one task-oriented target and performing the robot plan; and if the instructions are the intervention instructions, then performing the intervention instructions for achieving the at least one task-oriented target and, if present, overriding the robot plan to achieve the at least one task-oriented target.

16. The method of claim 15, further comprising reporting the robot plan to the user interface.

17. The method of claim 15, wherein the intervention instructions include instructions for robot conduct and robot behavior selected from the group consisting of focus behaviors, manipulation behaviors, go-to points, waypoints, path-planning, search region, patrol region, retro-traverse, and laser tracking.

18. The method of claim 15, wherein the robot plan includes robot conduct and robot behavior selected from the group consisting of focus behaviors, manipulation behaviors, go-to points, waypoints, path-planning, search region, patrol region, retro-traverse, and laser tracking.

19. The method of claim 15, further comprising returning to the performing the robot plan after performing the intervention instructions if the user interface modifies the task-oriented autonomy level to provide additional robot initiative.

20. The method of claim 15, further comprising selecting the at least one task-oriented target from the group consisting of a navigation target, an imaging target, an artillery target, a sensor target, and a manipulator target.

21. The method of claim 15, wherein the user interface adjusts the task-oriented autonomy level between a level selected from the group consisting of: a teleoperation mode configured to maximize a user interface intervention and minimize the robot initiative; a safe mode configured to include less of the user interface intervention and more of the robot initiative relative to the teleoperation mode; a shared mode configured to include less of the user interface intervention and more of the robot initiative relative to the safe mode; a collaborative tasking mode configured to include less of the user interface intervention and more of the robot initiative relative to the shared mode; and an autonomous mode configured to minimize the user interface intervention and maximize the robot initiative.

22. A robot platform, comprising: at least one perceptor configured for perceiving environmental variables of interest; at least one locomotor configured for providing mobility to the robot platform; and a system controller configured for executing a task-oriented autonomy system, comprising: receiving instructions from a user interface using a communication interface, the instructions for achieving at least one task-oriented target and comprising at least one of intervention instructions, robot initiative instructions, and instructions for setting a task-oriented autonomy level; if the instructions are the robot initiative instructions, developing a robot plan to achieve the at least one task-oriented target and performing the robot plan; and if the instructions are the intervention instructions, then performing the intervention instructions for achieving the at least one task-oriented target from the user interface and, if present, overriding the robot plan to achieve the at least one task-oriented target.

23. The robot platform of claim 22, further configured for reporting the robot plan through the communication interface to the user interface.

24. The robot platform of claim 22, wherein the system controller is further configured for returning to performing the robot plan after performing the intervention instructions if the user interface modifies the task-oriented autonomy level to provide additional robot initiative.

25. The robot platform of claim 22, wherein the system controller is further configured for returning to performing the robot plan after performing the intervention instructions if the robot determines that additional robot initiative is appropriate and the user interface enables a change in autonomy initiated by the robot.

26. The robot platform of claim 22, wherein the at least one task-oriented target is selected from the group consisting of a navigation target, an imaging target, an artillery target, a sensor target, and a manipulator target.

27. The robot platform of claim 22, wherein the robot plan includes robot conduct and robot behavior selected from the group consisting of focus behaviors, manipulation behaviors, go-to points, waypoints, path-planning, search region, patrol region, retro-traverse, and laser tracking.

28. A robot control system, comprising: a user computer comprising a memory and at least one processor configured for executing a user interface, the user interface configured for: positioning at least one task-oriented target in a map on the user interface representing an environment of the robot; determining a task-oriented autonomy level of the robot correlated to a change in the at least one task-oriented target; if the change in the at least one task-oriented target is smaller than a change threshold, instructing the robot to achieve the at least one task-oriented target by intervention instructions from the user interface; and if the change in the at least one task-oriented target is larger than the change threshold, instructing the robot to achieve the at least one task-oriented target using robot initiative to determine a robot plan for achieving the at least one task-oriented target and implementing the robot plan; and a robot platform comprising a locomotor and a system controller, the system controller configured for: developing the robot plan to achieve the at least one task-oriented target and performing the robot plan if robot initiative instructions are received from the user computer; and performing the intervention instructions for achieving the at least one task-oriented target and, if appropriate, overriding the robot plan to achieve the at least one task-oriented target if the intervention instructions are received from the user computer.

29. The system of claim 28, wherein the at least one task-oriented target is selected from the group consisting of a navigation target, an imaging target, an artillery target, a sensor target, and a manipulator target.

30. The system of claim 28, wherein the change in the at least one task-oriented target is selected from the group consisting of: a time interval between movements of the at least one task-oriented target; a distance between the robot and the at least one task-oriented target; an event horizon relative to the at least one task-oriented target; a change from one task-oriented target to another task-oriented target; and combinations thereof.

31. The system of claim 28, wherein positioning the at least one task-oriented target in the map is performed by a user and the intervention instructions are derived by the user interface responsive to the change in the at least one task-oriented target.
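Claims 11 and 21 describe five autonomy modes forming an ordered scale, where each step reduces user-interface intervention and increases robot initiative. The sketch below models that ordering; the `IntEnum` representation and the helper name are illustrative choices, not taken from the patent.

```python
from enum import IntEnum

class AutonomyMode(IntEnum):
    """Ordered autonomy modes: higher value means more robot initiative
    and less user-interface intervention (per claims 11 and 21)."""
    TELEOPERATION = 0          # maximum intervention, minimum initiative
    SAFE = 1
    SHARED = 2
    COLLABORATIVE_TASKING = 3
    AUTONOMOUS = 4             # minimum intervention, maximum initiative

def more_initiative(mode: AutonomyMode) -> AutonomyMode:
    """Step one mode toward greater robot initiative, saturating at
    AUTONOMOUS -- a hypothetical helper illustrating the mode ordering."""
    return AutonomyMode(min(mode + 1, AutonomyMode.AUTONOMOUS))

print(more_initiative(AutonomyMode.SHARED).name)  # COLLABORATIVE_TASKING
```

Encoding the modes as integers makes the "less intervention, more initiative relative to the previous mode" chain in the claims a simple comparison between enum values.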
Patents cited by this patent (86)
Werbos, Paul J., 3-brain architecture for an intelligent decision and control system.
Matsuoka,Tsunetaro; Otsuki,Tadashi; Konishi,Tetsuya; Kasuga,Tomoaki; Takemoto,Kunio; Okita,Ayako; Fujita,Yaeko; Ogura,Toshiya, Automatic apparatus, information server, robotic apparatus and commercial transaction method for performing an action based on information.
Daggett Kenneth E. (Murrysville PA) Onaga Eimei M. (Brookfield Center CT) Casler ; Jr. Richard J. (Newtown CT) Booth Barrett L. (Brookfield CT) Penkar Rajan C. (Woodbury CT) Vercellotti Leonard C. (O, Digital control for multiaxis robots.
Allen Bruce S. (Willow St. East Kingston NH 03827) Dunalvey Michael R. (276 Harris Ave. Needham MA 02192) King Bruce A. (R.F.D. 2 Bolton MA 01740) DuPrie Harold J. (57 High St. ; Apt. 1B Andover MA 0, Man machine interface.
Slaughter, Gregory; Hudson, Jr., Donald; Saulpaugh, Thomas; Yeh, Yuh-Yen; Traversat, Bernard, Mechanism by which devices on unforeseen platform variants may be supported without re-release of core platform kernel software.
Paradie, Michael John; Hunt, Andrew Evan; Forde, John James, Method and apparatus for determining location of objects based on range readings from multiple sensors.
Stoddard, Kenneth A.; Kneifel, II, R. William; Martin, David M.; Mirza, Khalid; Chaffee, Michael C.; Hagenauer, Andreas; Graf, Stefan, Method and control system for controlling a plurality of robots.
Vinzenz Jank AT; Manfred Ruhrnossl AT; Rupert Frauenschuh AT; Helmut Friedl AT, Method for controlling a welding apparatus and corresponding control device.
Kanda Shinji (Kawasaki JPX) Wakitani Jun (Kawasaki JPX) Maruyama Tsugito (Kawasaki JPX) Morita Toshihiko (Kawasaki JPX), Method for determining orientation of contour line segment in local area and for determining straight line and corner.
Kanda Shinji,JPX ; Wakitani Jun,JPX ; Maruyama Tsugito,JPX ; Morita Toshihiko,JPX, Method for determining orientation of contour line segment in local area and for determining straight line and corner.
Maruyama Tsugito (Machida JPX) Kanda Shinji (Yokohama JPX) Hanahara Keishi (Yamato JPX), Method for measuring a three-dimensional position of an object.
Dudar Aed M. ; Ward Clyde R. ; Jones Joel D. ; Mallet William R.,CAX ; Harpring Larry J. ; Collins Montenius X. ; Anderson Erin K., Mobile autonomous robotic apparatus for radiologic characterization.
Everett ; Jr. Hobart R. (San Diego CA) Gilbreath Gary A. (San Diego CA) Laird Robin T. (San Diego CA), Navigational control system for an autonomous vehicle.
Bauer Rudolf,DEX ; Wienkop Uwe,DEX, Process for preparing an area plan having a cellular structure and comprising a unit moving automatically and positioned in said area using sensors based on wave reflection.
Gudat Adam J. ; Bradbury Walter J. ; Christensen Dana A. ; Kemner Carl A. ; Koehrsen Craig L. ; Kyrtsos Christos T. ; Lay Norman K. ; Peterson Joel L. ; Schmidt Larry E. ; Stafford Darrell E. ; Weinb, System and a method for enabling a vehicle to track a preset path.
Dividock Ellen Marie ; Kamnikar Anthony Joseph ; Lewis Elaine M. ; Pepoy Alan Joseph ; Rogers William Edward, System for logging premises hazard inspections.
Ferla,Davide; Lachello,Luca; Gastaldi,Gianluca; Cantello,Giorgio; Calcagno,Renzo, System for programming robots or similar automatic apparatuses, comprising a portable programming terminal.
Schwenke Marvin J. ; Staron Raymond J. ; Sinclair James A. ; Franklin Paul F. ; Hoskins Josiah C., System, method and article of manufacture for displaying an animated, realtime updated control sequence chart.
Pong William (Brookfield Center CT) Engelberger Joseph F. (Newtown CT) Kazman William S. (Danbury CT), Tether-guided vehicle and method of controlling same.
Maruyama Tsugito (Machida JPX) Kanda Shinji (Yokohama JPX) Wakitani Jun (Kawasaki JPX), Three-dimensional measuring apparatus having improved speed and resolution.
Love Simon (Bristol GB3) Boswell Elizabeth M. C. (Chippenham GB3) Quy Roger J. (Bristol GB3), User interface simulation and management for program-controlled apparatus.
Duggan,David S.; Felio,David A.; Pate,Billy B.; Longhi,Vince R.; Petersen,Jerry L.; Bergee,Mark J., Vehicle control system including related methods and components.
Lafaye, Jory; Collette, Cyrille; Wieber, Pierre-Brice, Omnidirectional wheeled humanoid robot based on a linear predictive position and velocity controller.
Versteeg, Roelof J.; Few, Douglas A.; Kinoshita, Robert A.; Johnson, Doug; Linda, Ondrej, Real time explosive hazard information sensing, processing, and communication for autonomous operation.
Versteeg, Roelof J.; Few, Douglas A.; Kinoshita, Robert A.; Johnson, Douglas; Linda, Ondrej, Real time explosive hazard information sensing, processing, and communication for autonomous operation.
Pinter, Marco; Lai, Fuji; Sanchez, Daniel Steven; Ballantyne, James; Roe, David Bjorn; Wang, Yulun; Jordan, Charles S.; Taka, Orjeta; Wong, Cheuk Wah, Social behavior rules for a medical telepresence robot.