Hierarchical robotic controller apparatus and methods
IPC Classification Information
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th edition)
B25J-009/16
G06N-003/04
G06N-003/00
Application Number
US-0918298
(2013-06-14)
Registration Number
US-9792546
(2017-10-17)
Inventors / Address
Passot, Jean-Baptiste
Sinyavskiy, Oleg
Ponulak, Filip
Laurent, Patryk
Gabardos, Borja Ibarz
Izhikevich, Eugene
Polonichko, Vadim
Applicant / Address
Brain Corporation
Agent / Address
Gazdzinski & Associates, P.C.
Citation Information
Times cited: 3
Patents cited: 99
Abstract
A robot may be trained by a user guiding the robot along a target trajectory using a control signal. The robot may comprise an adaptive controller configured to generate control commands based on the user guidance, sensory input, and a performance measure. The user may interface with the robot via an adaptively configured remote controller, which may comprise a mobile device configured by the user in accordance with the phenotype and/or operational configuration of the robot. The remote controller may detect changes in the robot phenotype and/or operational configuration. The remote controller may comprise multiple control elements configured to activate respective portions of the robot platform. Based on training, the remote controller may configure composite controls based on two or more of the control elements. Activation of a composite control may enable the robot to perform a task.
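The "composite control" idea in the abstract — recording primitive control activations during training and binding them to a single control element that replays the whole sequence — can be illustrated with a minimal sketch. All names here (`RemoteController`, `bind_composite`, `invoke`) are illustrative assumptions, not terminology from the patent itself.

```python
# Hedged sketch: primitive control commands captured during training are
# bound to a single composite control; one activation replays the sequence.

class RemoteController:
    def __init__(self, robot):
        self.robot = robot       # callable: robot(command) executes one primitive
        self.recording = []      # primitives captured during the current training run
        self.composites = {}     # tag -> ordered primitive sequence

    def activate(self, command):
        """User presses a primitive control element; the robot acts and the command is logged."""
        self.robot(command)
        self.recording.append(command)

    def bind_composite(self, tag):
        """Assign the recorded sequence to a tag (e.g. a new icon on the display)."""
        self.composites[tag] = list(self.recording)
        self.recording.clear()

    def invoke(self, tag):
        """A single activation of the composite replays the sequence in recorded order."""
        for command in self.composites[tag]:
            self.robot(command)
```

This mirrors the claim language of assigning a generated composite control element to a tag whose invocation executes the discrete control actions in their determined order.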
Representative Claims
1. A method for controlling a robot to execute a task, the method comprising: during a given trial of one or more training trials, based on a first indication received from a user, executing a plurality of actions, individual ones of the plurality of actions being configured based on sensory input and a given user input of a plurality of user inputs, where each one of the one or more training trials has a trial duration associated therewith, the trial duration at least being configured to separate two or more activations of at least one actuator of the robot; determining a performance measure associated with the executing of the plurality of actions, the performance measure being based at least in part on a difference between target actions and the plurality of actions; when the performance measure does not meet or exceed a target level, during a subsequent trial of the one or more training trials, executing another plurality of actions based on a second indication received from the user, and determining another performance measure associated with the executing of the other plurality of actions; and when the performance measure meets or exceeds the target level, based on a third indication received from the user, associating a control component with the task, the control component being configured to be activated by a presence of a user discernible representation; wherein: the execution of the plurality of actions is configured to effectuate execution of the task by the robot; and activation by the user of the control component using the user discernible representation is configured to cause the robot to execute the task in a sequence corresponding to the plurality of actions that were executed based on the first indication received from the user.

2. The method of claim 1, wherein the provision of the user discernible representation comprises: disposing an icon on a display; and configuring a user interface device to receive input based on the user activation configured in accordance with the icon.

3. The method of claim 1, wherein the user discernible representation comprises a voice command, an audio signal, or one or more gestures.

4. A non-transitory computer readable medium having instructions embodied thereon, the instructions configured to, when executed by a physical processor, cause the physical processor to: based on a detection of a sequence of discrete control actions comprising two or more activations of a robotic apparatus by a user, generate a composite control element, the composite control element being configured to execute the sequence of discrete control actions in an order of execution provided by one or more parameters associated with at least one of the sequence of discrete control actions; and when the execution of the sequence of discrete control actions is within an expected performance value, assign the generated composite control element to a tag, the tag associated with a user interface control element; wherein: an invocation of the tag is configured to execute the sequence of discrete control actions in accordance with the determined order of execution; the robotic apparatus comprises a controller configured to generate a control signal, individual ones of the sequence of discrete control actions being configured based on the control signal, the generation of the control signal being effectuated by a learning process; the learning process comprises execution of multiple training trials, individual ones of the multiple training trials being characterized by a trial duration; individual ones of the sequence of discrete control actions correspond to an execution of a given training trial of the multiple training trials; and the two or more activations of the robotic apparatus comprise activations of at least one actuator at two or more instances of time, the two or more instances of time being separated by a time period, the time period corresponding to the trial duration.

5. The non-transitory computer readable medium of claim 4, wherein: the sequence is configured to cause the robotic apparatus to execute a target task; individual ones of the two or more activations are configured to execute responsive to the user issuing two or more control commands via a remote control interface; and the composite control element is configured to execute the target task responsive to a single activation of the composite control element by the user.

6. The non-transitory computer readable medium of claim 5, wherein individual ones of the two or more control commands comprise multiple instances of a given control operation effectuated based on multiple activations of a first control element associated with the remote control interface.

7. The non-transitory computer readable medium of claim 5, wherein: individual ones of the two or more control commands comprise one or more instances of two or more control operations effectuated based on activations of a first control element and a second control element associated with the remote control interface; and the single activation of the composite control element is configured to obviate the activations of the first control element and the second control element by the user.

8. The non-transitory computer readable medium of claim 5, wherein: the user comprises a human; and the detection is configured based on a request by the user.

9. The non-transitory computer readable medium of claim 4, wherein: the learning process comprises adjusting a learning parameter based on a performance measure, the performance measure being configured based on individual ones of the sequence of discrete control actions and a target action; and the detection is effectuated based on an indication provided by the learning process absent a user request.

10. The non-transitory computer readable medium of claim 4, wherein: the robotic apparatus comprises two or more individually controllable actuators; and the two or more activations of the robotic apparatus comprise activations of individual ones of the two or more actuators.

11. A remote control apparatus of a robot, the remote control apparatus comprising: a physical processor configured to operate a learning process; a sensor coupled to the physical processor; a user interface configured to present one or more human perceptible control elements; and a remote communications interface configured to communicate to the robot a plurality of control commands configured by the learning process based on an association between a sensor input and individual ones of a plurality of user inputs provided via one or more of the one or more human perceptible control elements; wherein: the communication to the robot of the plurality of control commands is configured to cause the robot to execute a plurality of actions; the learning process is configured based on a performance measure between a target action and individual ones of the plurality of actions; the association between the sensor input and the individual ones of the plurality of user inputs is configured to cause generation of one or more control primitives, individual ones of the one or more control primitives being configured to cause execution of a respective action of the plurality of actions; the learning process is configured to generate a composite control; the composite control is configured to actuate the plurality of actions that result in an execution of the target action responsive to a single activation of the composite control by a user, the single activation by the user being configured to cause the execution of the plurality of actions in an order determined according to the plurality of user inputs; the single activation of the composite control is configured based on a detection, by the remote control apparatus, of a presence of the one or more human perceptible control elements; the generation of the composite control comprises a presentation of the one or more human perceptible control elements, the one or more human perceptible control elements being configured to cause activation of the composite control by the user; and the activation of the composite control is configured based on a detection of one or more of: an audio signal, a touch signal, and an electrical signal by the user interface, where the user interface comprises a camera, and the electrical signal is configured based on a captured representation of the user by the camera.

12. The apparatus of claim 11, wherein: the learning process is configured to generate the composite control based on a detection, by the remote control apparatus, of the individual ones of the plurality of user inputs provided via the one or more of the one or more human perceptible control elements; and an individual action of the plurality of actions corresponds to execution of a task.

13. The apparatus of claim 11, wherein the learning process is configured to generate the composite control based on a request by the user.

14. The apparatus of claim 11, wherein the learning process is configured based on a supervised learning process, the supervised learning process being configured based on a sensory context and a combination of a control signal and the individual ones of the plurality of user inputs.

15. The apparatus of claim 11, wherein the provision of individual ones of the plurality of user inputs is configured based on an audible tag or a touch indication of the user interface.

16. The apparatus of claim 11, wherein: the user interface comprises a touch sensitive interface; and the touch signal is configured based on a pattern provided by the user via the touch interface.
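The trial loop of claim 1 — execute actions per trial, score them against the target actions, and repeat trials until the performance measure meets or exceeds the target level — can be sketched as follows. The matching-fraction metric, the `target_level` value, and all function names are illustrative assumptions, not the patent's method.

```python
# Hedged sketch of the claim-1 trial loop: performance is based on the
# difference between target actions and executed actions; here it is the
# fraction of action slots that match (an assumed, illustrative metric).

def run_training(execute_trial, target_actions, target_level, max_trials=10):
    """execute_trial(trial_index) -> list of actions taken on that trial.

    Returns (trial_index, performance) of the first trial meeting the
    target level, or (None, last_performance) if it is never reached.
    """
    performance = 0.0
    for trial in range(max_trials):
        actions = execute_trial(trial)
        matches = sum(a == t for a, t in zip(actions, target_actions))
        performance = matches / len(target_actions)
        if performance >= target_level:
            # At this point the claim would associate a control component
            # (e.g. a composite control) with the learned task.
            return trial, performance
    return None, performance
```

A user supplying corrective indications on each trial corresponds to `execute_trial` producing actions that track the targets more closely as the trial index grows.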
Patents cited by this patent (99)
Werbos Paul J., 3-brain architecture for an intelligent decision and control system.
Ito, Masato; Minamino, Katsuki; Yoshiike, Yukiko; Suzuki, Hirotaka; Kawamoto, Kenta, Apparatus and method for embedding recurrent neural networks into the nodes of a self-organizing map.
DeYong Mark R. (Las Cruces NM) Findley Randall L. (Austin TX) Eskridge Thomas C. (Las Cruces NM) Fields Christopher A. (Rockville MD), Asynchronous temporal neural processing element.
Kerr Randal H. (Richford NY) Mesnard Robert M. (Endicott NY), Automatic generation of executable computer code which commands another program to perform a task and operator modificat.
Frank D. Francone ; Peter Nordin SE; Wolfgang Banzhaf DE, Computer implemented machine learning method and system including specifically defined introns.
Spoerre Julie K. (Tallahassee FL) Lin Chang-Ching (Tallahassee FL) Wang Hsu-Pin (Tallahassee FL), Machine performance monitoring and fault classification using an exponentially weighted moving average scheme.
Grossberg Stephen (Newton Highlands MA) Kuperstein Michael (Brookline MA), Massively parallel real-time network architectures for robots capable of self-calibrating their operating parameters thr.
Abdallah, Muhammad E; Platt, Robert; Wampler, II, Charles W.; Reiland, Matthew J; Sanders, Adam M, Method and apparatus for automatic control of a humanoid robot.
Sakaue Shiyuki (Yokohama JPX) Sugimoto Koichi (Hiratsuka JPX) Arai Shinichi (Yokohama JPX), Method and apparatus for controlling a robot hand along a predetermined path.
Peltola Tero (Helsinki FIX) Matakselka Jorma (Vantaa FIX) Harju Esa (Espoo FIX) Salovuori Heikki (Helsinki FIX) Keskinen Jukka (Vantaa FIX) Makinen Kari (Helsinki FIX) Roikonen Olli (Espoo FIX), Method for congestion management in a frame relay network and a node in a frame relay network.
Wilson Charles L. (Darnestown MD) Garris Michael D. (Gaithersburg MD) Wilkinson ; Jr. Robert A. (Hyattstown MD), Object/anti-object neural network segmentation.
Yokono, Jun; Sabe, Kohtaro; Costa, Gabriel; Ohashi, Takeshi, Operational control method, program, and recording media for robot device, and robot device.
Eguchi, Toru; Yamada, Akihiro; Kusumi, Naohiro; Sekiai, Takaaki; Fukai, Masayuki; Shimizu, Satoru, Plant control system and thermal power generation plant control system.
Coenen, Olivier, Proportional-integral-derivative controller effecting expansion kernels comprising a plurality of spiking neurons associated with a plurality of receptive fields.
Hickman, Ryan; Kuffner, Jr., James J.; Bruce, James R.; Gharpure, Chaitanya; Kohler, Damon; Poursohi, Arshan; Francis, Jr., Anthony G.; Lewis, Thor, Shared robot knowledge base for use with cloud computing system.
Shaffer Gary K. (Butler PA) Whittaker William L. (Pittsburgh PA) West Jay H. (Pittsburgh PA) Clow Richard G. (Phoenix AZ) Singh Sanjiv J. (Pittsburgh PA) Lay Norman K. (Peoria IL) Devier Lonnie J. (P, System and method for detecting obstacles in the path of a vehicle.
Blumberg, Bruce; Brooks, Rodney; Buehler, Christopher J.; Deegan, Patrick A.; DiCicco, Matthew; Dye, Noelle; Ens, Gerry; Linder, Natan; Siracusa, Michael; Sussman, Michael; Williamson, Matthew M., Training and operating industrial robots.
Mochizuki, Yoshiyuki; Naka, Toshiya; Asahara, Shigeo, Virtual space control data receiving apparatus,virtual space control data transmission and reception system, virtual space control data receiving method, and virtual space control data receiving prog.