A robotic system includes a robot, sensors which measure status information including a position and orientation of the robot and an object within the workspace, and a controller. The controller, which visually debugs an operation of the robot, includes a simulator module, an action planning module, a marker generator module, and a graphical user interface (GUI). The simulator module receives the status information and, in response to marker commands, generates visual markers as graphical depictions of the object and robot. The action planning module selects a next action of the robot. The marker generator module generates and outputs the marker commands to the simulator module in response to the selected next action. The GUI displays the visual markers and the selected future action, and receives input commands. Via the action planning module, the position and/or orientation of the visual markers are modified in real time to change the operation of the robot.
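To make the data flow among the four modules concrete, the following is a minimal Python sketch of one possible wiring; every class, field, and method name here is an illustrative assumption, not an identifier from the patent.

```python
from dataclasses import dataclass

# Illustrative data carriers; all names are assumptions for this sketch.
@dataclass
class StatusInfo:
    robot_pose: tuple   # (position, orientation) of the robot, from the sensors
    object_pose: tuple  # (position, orientation) of the object in the workspace

@dataclass
class MarkerCommand:
    kind: str           # e.g. "target", "trajectory", "objective", "collision"
    pose: tuple

class SimulatorModule:
    """Receives status information; converts marker commands into markers."""
    def __init__(self):
        self.status = None
    def update(self, status: StatusInfo):
        self.status = status  # refreshed in real time from the sensors
    def generate_markers(self, commands):
        # A real simulator would render graphical depictions; a plain
        # record per marker stands in for that here.
        return [{"kind": c.kind, "pose": c.pose} for c in commands]

class MarkerGeneratorModule:
    """Turns the planner's selected action into marker commands."""
    def commands_for(self, action):
        return [MarkerCommand("target", action["target"]),
                MarkerCommand("trajectory", action["approach"])]

# Wiring: the planner selects an action, the marker generator emits
# commands, and the simulator produces markers for the GUI to display.
sim, gen = SimulatorModule(), MarkerGeneratorModule()
sim.update(StatusInfo(robot_pose=((0, 0, 0), (0, 0, 0, 1)),
                      object_pose=((1, 0, 0), (0, 0, 0, 1))))
markers = sim.generate_markers(gen.commands_for(
    {"target": (1, 0, 0.1), "approach": (0.5, 0, 0.3)}))
```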
Representative Claims
1. A robotic system comprising:
a robot responsive to input commands;
sensors which measure a set of status information in real time, including a position and an orientation of the robot and an object within the workspace; and
a controller having a processor and memory on which is recorded instructions for visually debugging an operation of the robot, thereby allowing a user to change the robot's behavior in real time, the controller including:
a simulator module in communication with the sensors, wherein the simulator module receives the set of status information from the sensors in real time and generates visual markers, in response to marker commands, as graphical depictions of the object and the robot in the workspace, wherein the visual markers provide graphical representations of current and future actions of the robot;
an action planning module configured to select a next action of the robot;
a marker generator module in communication with the action planning module, and configured to generate and output the marker commands to the simulator module in response to the selected next action of the robot; and
a graphical user interface (GUI) having a display screen, wherein the GUI is in communication with the simulator module, and is operable to receive and display the visual markers, and to receive the input commands and modify, via the action planning module, at least one of the position and orientation of the visual markers to change the operation of the robot in real time, thereby visually debugging the operation;
wherein the visual markers include a target marker indicating a desired target of an end effector of the robot, trajectory markers indicating an approach and a departure trajectory of the end effector, an objective marker indicating where the object will be in the future, and a collision marker indicating where the end effector will collide with the object.

2. The robotic system of claim 1, wherein the trajectory markers include a first arrow indicating an approach trajectory of the end effector and a second arrow indicating a departure trajectory of the end effector.

3. The robotic system of claim 1, wherein the visual markers include a semi-transparent shape surrounding the object that allows the object to remain visible inside of the semi-transparent shape.

4. The robotic system of claim 1, wherein the sensors include an environmental sensor operable to capture images of the object and robot, and to output the captured images to the simulator module as part of the set of status information.

5. The robotic system of claim 1, wherein the action planning module includes an action selector module and a state predictor module, wherein the action selector module selects the next action of the robot using task information from the simulator module, and the state predictor module is configured to predict future states of the robot using the task information.

6. The robotic system of claim 5, wherein the state predictor module utilizes a state tree to predict the future states.

7. The robotic system of claim 5, wherein the action planning module is configured to determine the next action as a lowest cost action using a costing model.

8. The robotic system of claim 1, wherein the simulator module includes a marker module operable to convert the marker commands into two-dimensional and three-dimensional visual markers.
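Claims 5 through 7 describe an action selector, a state predictor that uses a state tree, and a costing model that yields the lowest-cost next action. Below is a minimal sketch of that selection step, assuming a toy grid state and a distance-based cost; neither the state representation nor the cost function comes from the patent.

```python
import math

def predict(state, action):
    """Hypothetical one-step state predictor: each (state, action)
    pair becomes a child node of the state tree."""
    dx, dy = {"left": (-1, 0), "right": (1, 0),
              "up": (0, 1), "down": (0, -1)}[action]
    return (state[0] + dx, state[1] + dy)

def cost(state, goal):
    """Hypothetical costing model: distance from the predicted
    state to the task goal."""
    return math.dist(state, goal)

def select_next_action(state, goal,
                       actions=("left", "right", "up", "down")):
    # Expand one level of the state tree, then pick the action
    # whose predicted state has the lowest cost.
    return min(actions, key=lambda a: cost(predict(state, a), goal))

print(select_next_action((0, 0), (3, 1)))  # -> "right"
```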
9. A robotic system comprising:
a robot responsive to input commands, and having an end effector;
sensors which measure a set of status information in real time, including a position and orientation of the robot and an object within the workspace; and
a controller having a processor and memory on which is recorded instructions for visually debugging an operation of the robot, thereby allowing a user to change the robot's behavior in real time, including:
a simulator module in communication with the sensors, wherein the simulator module receives the set of status information from the sensors in real time and generates, in response to marker commands, a set of visual markers as graphical depictions of the object and the robot in the workspace, wherein the set of visual markers provides graphical representations of current and future actions of the robot;
an action planning module configured to select future actions of the robot as a lowest cost action using a costing model, the action planning module including an action selector module that selects a next action of the robot using task information from the simulator module;
a marker generator module in communication with the action planning module, and configured to output the marker commands to the simulator module in real time in response to the selected next action; and
a graphical user interface (GUI) having a display screen, wherein the GUI is in communication with the simulator module, and is operable to receive and display the visual markers and the selected future action, and also to receive the input commands and modify, via the action planning module, at least one of the position and orientation of the visual markers in real time to change the operation of the robot, thereby visually debugging the operation;
wherein the visual markers include a target marker indicating a desired target of the end effector, a trajectory marker indicating an approach trajectory arrow and a departure trajectory arrow as trajectories of the end effector, an objective marker indicating where the object will be in the future, and a collision marker indicating where the end effector will collide with the object, and wherein at least one of the target marker, the objective marker, and the collision marker includes a semi-transparent shape surrounding the object that allows the object to remain visible inside of the semi-transparent shape.

10. The robotic system of claim 9, wherein the sensors include an environmental sensor operable to capture images of the object and robot, and to output the captured images to the simulator module as part of the set of status information.

11. The robotic system of claim 9, wherein the action planning module includes a state predictor module that is configured to predict future states of the robot using the task information.

12. The robotic system of claim 11, wherein the state predictor module utilizes a state tree to predict the future states.

13. The robotic system of claim 9, wherein the simulator module includes a marker module operable to convert the marker commands into two-dimensional and three-dimensional visual markers.
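Claims 8 and 13 recite a marker module that converts marker commands into two- and three-dimensional visual markers, and claim 9 requires a semi-transparent shape for some of them. Here is a minimal sketch of such a conversion; the marker record fields (shape, color, alpha) are assumptions for illustration only.

```python
def to_visual_marker(command):
    """Hypothetical converter from a marker command to a drawable
    marker record; all field names are assumptions for this sketch."""
    kind, pose = command["kind"], command["pose"]
    if kind == "target":
        return {"pose": pose, "shape": "sphere", "color": "green", "alpha": 1.0}
    if kind == "trajectory":
        return {"pose": pose, "shape": "arrow", "color": "blue", "alpha": 1.0}
    # Objective and collision markers use a semi-transparent box so the
    # object remains visible inside the shape, per claim 9.
    color = "yellow" if kind == "objective" else "red"
    return {"pose": pose, "shape": "box", "color": color, "alpha": 0.35}

print(to_visual_marker({"kind": "collision", "pose": (1.0, 0.2, 0.5)}))
```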
14. A method for visually debugging a robot in a robotic system having a robot responsive to input commands, sensors which measure a set of status information, including a position and orientation of the robot and an object within the workspace, and a controller having a processor and memory on which is recorded instructions for visually debugging an operation of the robot in real time, the method comprising:
receiving the set of status information from the sensors in real time via a simulator module of the controller;
transmitting a plurality of marker commands to the simulator module via a marker generator module of the controller;
generating visual markers in real time in response to the marker commands, via the simulator module, as graphical depictions of the object and the robot in the workspace, including generating a target marker indicating a desired target of an end effector of the robot, a trajectory marker indicating an approach trajectory and a departure trajectory of the end effector, an objective marker indicating where the object will be in the future, and a collision marker indicating where the end effector will collide with the object;
displaying the visual markers and the selected future action on a display screen of a graphical user interface (GUI) of the controller;
selecting a future action of the robot via an action planning module of the controller; and
modifying, via the action planning module, at least one of the position and orientation of the visual markers in real time in response to input signals to change the operation of the robot in real time, thereby visually debugging the operation of the robot.

15. The method of claim 14, wherein the action planning module includes an action selector module and a state predictor module, the method further comprising:
selecting the next action of the robot via the action selector module using task information from the simulator module; and
predicting future states of the robot via the state predictor module using the task information.

16. The method of claim 14, wherein determining the next action is performed as a lowest cost action using a costing model.

17. The method of claim 14, wherein predicting future states of the robot includes using a state tree.
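The method of claim 14 reads naturally as a per-frame loop. A minimal sketch of that loop follows, with duck-typed stand-ins for the claimed modules; every object and method name is an assumption, not the patent's interface.

```python
def visual_debug_loop(sensors, simulator, planner, marker_gen, gui):
    """One pass per frame over the steps of claim 14. The module
    objects are hypothetical stand-ins for the claimed modules."""
    while gui.is_open():
        status = sensors.read()                  # receive status info in real time
        simulator.update(status)
        action = planner.select_future_action(simulator.task_info())
        commands = marker_gen.commands_for(action)
        markers = simulator.generate_markers(commands)
        gui.display(markers, action)             # markers + selected future action
        for edit in gui.pending_edits():         # user moves/rotates a marker
            planner.apply_marker_edit(edit)      # changes the robot's operation
```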