Predictive robotic controller apparatus and methods
IPC Classification
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): G05B-019/18; B25J-009/16
Application number: US-0132003 (filed 2016-04-18)
Registration number: US-9950426 (granted 2018-04-24)
Inventors: Laurent, Patryk; Passot, Jean-Baptiste; Sinyavskiy, Oleg; Ponulak, Filip; Gabardos, Borja Ibarz; Izhikevich, Eugene
Applicant: Brain Corporation
Agent: Gazdzinski & Associates, PC
Citation information: cited by 0 patents; cites 110 patents
Abstract
Robotic devices may be trained by a user guiding the robot along a target action trajectory using an input signal. A robotic device may comprise an adaptive controller configured to generate a control signal based on one or more of the user guidance, sensory input, a performance measure, and/or other information. Training may comprise a plurality of trials, wherein for a given context the user and the robot's controller may collaborate to develop an association between the context and the target action. Upon developing the association, the adaptive controller may be capable of generating the control signal and/or an action indication prior to and/or in lieu of user input. The predictive control functionality attained by the controller may enable autonomous operation of robotic devices, obviating the need for continuing user guidance.
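The trial-based collaboration described in the abstract can be sketched in code. The sketch below is a minimal illustration only, not the patent's implementation: the linear predictor, the 50/50 blend of user guidance and controller prediction, the learning rule, and all class and variable names are assumptions introduced here.

```python
import numpy as np

class AdaptiveController:
    """Toy predictive controller: learns a linear map from context
    features to action commands out of blended user/predictor signals.
    (Illustrative sketch; the architecture is an assumption, not the
    patent's disclosed controller.)"""

    def __init__(self, n_context, n_action, lr=0.1):
        self.W = np.zeros((n_action, n_context))
        self.lr = lr

    def predict(self, context):
        # The controller's action indication for the given context.
        return self.W @ context

    def train_trial(self, context, user_signal):
        # Blend the user's guidance with the controller's prediction,
        # then nudge the weights toward the executed action.
        predicted = self.predict(context)
        executed = 0.5 * user_signal + 0.5 * predicted
        error = executed - predicted
        self.W += self.lr * np.outer(error, context)
        return executed

ctrl = AdaptiveController(n_context=3, n_action=2)
context = np.array([1.0, 0.0, 0.5])
target = np.array([0.8, -0.2])  # the user's target action for this context
for _ in range(50):
    ctrl.train_trial(context, target)
# Over repeated trials the prediction approaches the target action,
# so the controller can act with little or no further user input.
```

The point of the sketch is the abstract's training dynamic: early in training the executed action is dominated by user guidance; as the context-to-action association forms, the prediction alone suffices.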
Representative Claims
1. A computerized method of training a robotic apparatus comprising: performing a physical training action in a context determined from at least one or more inputs from one or more sensors, the physical training action performed based on at least a training instruction provided by a user; performing a first physical action in the context, the first physical action comprising a movement of the robotic apparatus along a first trajectory, the first physical action being determined by at least a user instruction and a first controller instruction indicative at least in part of a first predicted physical action determined from at least the context and the physical training action; receiving a corrective input signal based on the performed first physical action; and performing a second physical action in the context based on the corrective input signal, the second physical action comprising a movement of the robotic apparatus along a second trajectory, the second physical action further being determined by at least a second controller instruction generated from at least the first physical action, the physical training action, and a second predicted physical action determined from at least the context, wherein a deviation between performance measures of the second physical action and the physical training action is less than a deviation between performance measures of the first physical action and the physical training action.

2. The method of claim 1, further comprising computing a learning parameter that characterizes a neuron process, the computing of the learning parameter being based on at least the deviation between performance measures of the first physical action and the physical training action, and the second controller instruction is further generated from the learning parameter.

3. The method of claim 2, wherein the computing of the learning parameter comprises using a supervised learning process configured based on at least the context and a combination of the first action and the training action.

4. The method of claim 1, wherein performing each of the physical training action, first physical action, first predicted physical action, second physical action, and second predicted physical action comprises navigating a trajectory.

5. The method of claim 1, wherein performing the physical training action, first physical action, first predicted physical action, second physical action, and second predicted physical action each comprise manipulating a motorized operational element of the robotic apparatus.

6. The method of claim 1, wherein determining the context comprises generating an object representation, and the performing of the physical training action comprises at least one of performing an object approach maneuver and performing an object avoidance maneuver.

7. The method of claim 1, further comprising calculating the discrepancy between the first physical action and the physical training action from a proximity measure between the first action and the training action.

8. A trainable robotic apparatus comprising: a sensor configured to generate a sensory input; an interface configured to receive one or more user instructions; a motorized operational element configured to perform physical actions; and a processor configured to: determine a context from the sensory input, provide a training control signal based on at least a training instruction received through the interface, the training control signal causing the motorized operation element to perform a physical training action in the context, provide a first control signal based on at least a user instruction received through the interface and a first predicted physical action determined from at least the context and the physical training action, the first control signal causing the motorized operation element to perform a first physical action in the context, the first physical action comprising at least a movement of the robotic apparatus along a first trajectory via the motorized operation element, compute a learning parameter that characterizes a neuron process, the computation of the learning parameter being based on at least a deviation between performance measures of the first physical action and the physical training action, and provide a second control signal based on at least the learning parameter and a second predicted physical action based on at least the context, the second control signal causing the motorized operation element to perform a second physical action in the context, the second physical action comprising at least a movement of the robotic apparatus along a second trajectory via the motorized operation element, wherein a deviation between performance measures of the second physical action and the physical training action is less than a deviation between performance measures of the first physical action and the physical training action.

9. The trainable robotic apparatus of claim 8, wherein the sensor includes a camera.

10. The trainable robotic apparatus of claim 8, wherein the physical training action, first physical action, first predicted physical action, second physical action, and second predicted physical action each comprise navigation of respective trajectories.

11. The trainable robotic apparatus of claim 8, wherein the context comprises an object representation and the training action comprises at least one of an object approach maneuver or an object avoidance maneuver.

12. The trainable robotic apparatus of claim 8, wherein the deviation between performance measures of the first physical action and the physical training action is calculated from a proximity measure between the first action and the training action.

13. The trainable robotic apparatus of claim 8, wherein computing the learning parameter comprises using a supervised learning process configured based on at least the context and a combination of the first action and the training action.

14. A robotic apparatus comprising: one or more sensors; a processor apparatus; and a non-transitory computer-readable storage medium having a computer program stored thereon, the computer program comprising a plurality of instructions configured to, when executed by the processor apparatus, cause the robotic apparatus to: perform a physical training action in an environmental context determined from at least one or more inputs from the one or more sensors, the physical training action performed based on at least a training instruction provided by a user; perform a first physical action in the context, the first physical action comprising a movement of the robotic apparatus along a first trajectory, the first physical action being determined by at least a user instruction and a first controller instruction indicative at least in part of a first predicted physical action determined from at least the context and the physical training action; compute a learning parameter based on at least a deviation between performance measures of the first physical action and the physical training action; and perform a second physical action in the context, the second physical action comprising a movement of the robotic apparatus along a second trajectory, the second physical action being determined by at least a second controller instruction generated from at least the learning parameter and a second predicted physical action determined from at least the context; wherein a deviation between performance measures of the second physical action and the physical training action is less than the deviation between the performance measures of the first physical action and the physical training action.

15. The robotic apparatus of claim 14, wherein the one or more sensors comprise at least a camera.

16. The robotic apparatus of claim 14, wherein each of the physical training action, the first predicted physical action, and the second predicted physical action comprises a movement of the robotic apparatus along a respective trajectory.

17. The robotic apparatus of claim 16, wherein: the robotic apparatus further comprises at least one motorized operational element; the plurality of instructions are further configured to, when executed by the processor apparatus, cause the robotic apparatus to manipulate the at least one motorized operational element to perform the physical training action, the first predicted physical action, and the second predicted physical action.

18. The robotic apparatus of claim 14, wherein the context comprises an object representation, and the physical training action comprises at least one of an object approach maneuver and an object avoidance maneuver.

19. The robotic apparatus of claim 14, wherein the deviation between the performance measures of the first physical action and the physical training action is calculated from a proximity measure between the first physical action and the physical training action.

20. The robotic apparatus of claim 14, wherein the computation of the learning parameter comprises a usage of a supervised learning process configured based on at least the context and a combination of the first physical action and the physical training action.
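The independent claims share a two-trial structure: a training action demonstrates the target; a first action blends a user instruction with a predicted action; a learning parameter is computed from the deviation between the first action and the training action; and the second action must deviate less from the training action than the first did. The numeric sketch below illustrates that deviation-reduction criterion, using a mean waypoint-distance as a stand-in for the claims' unspecified proximity measure; the trajectories, the blending constant, and all names are hypothetical.

```python
import numpy as np

def proximity(traj_a, traj_b):
    # A stand-in for the proximity measure of claims 7, 12, and 19:
    # mean Euclidean distance between corresponding waypoints.
    return float(np.mean(np.linalg.norm(traj_a - traj_b, axis=1)))

# Hypothetical trajectories sampled at four 2-D waypoints.
training = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
first    = np.array([[0.0, 0.0], [1.4, 0.8], [2.5, 1.6], [3.6, 2.4]])

# A supervised correction (in the spirit of claims 2-3) standing in
# for the learning-parameter update: move the executed trajectory a
# fraction of the way toward the training trajectory.
lr = 0.6
second = first + lr * (training - first)

d1 = proximity(first, training)
d2 = proximity(second, training)
assert d2 < d1  # the claimed deviation-reduction property holds
```

With this particular update, the second deviation is exactly (1 - lr) times the first, so any learning rate in (0, 1] satisfies the claims' "less than" condition whenever the first action deviated at all.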
Patents cited by this patent (110)
Werbos Paul J., 3-brain architecture for an intelligent decision and control system.
Ito, Masato; Minamino, Katsuki; Yoshiike, Yukiko; Suzuki, Hirotaka; Kawamoto, Kenta, Apparatus and method for embedding recurrent neural networks into the nodes of a self-organizing map.
DeYong Mark R. (Las Cruces NM) Findley Randall L. (Austin TX) Eskridge Thomas C. (Las Cruces NM) Fields Christopher A. (Rockville MD), Asynchronous temporal neural processing element.
Kerr Randal H. (Richford NY) Mesnard Robert M. (Endicott NY), Automatic generation of executable computer code which commands another program to perform a task and operator modificat.
Frank D. Francone ; Peter Nordin SE; Wolfgang Banzhaf DE, Computer implemented machine learning method and system including specifically defined introns.
Spoerre Julie K. (Tallahassee FL) Lin Chang-Ching (Tallahassee FL) Wang Hsu-Pin (Tallahassee FL), Machine performance monitoring and fault classification using an exponentially weighted moving average scheme.
Grossberg Stephen (Newton Highlands MA) Kuperstein Michael (Brookline MA), Massively parallel real-time network architectures for robots capable of self-calibrating their operating parameters thr.
Abdallah, Muhammad E; Platt, Robert; Wampler, II, Charles W.; Reiland, Matthew J; Sanders, Adam M, Method and apparatus for automatic control of a humanoid robot.
Sakaue Shiyuki (Yokohama JPX) Sugimoto Koichi (Hiratsuka JPX) Arai Shinichi (Yokohama JPX), Method and apparatus for controlling a robot hand along a predetermined path.
Peltola Tero (Helsinki FIX) Matakselka Jorma (Vantaa FIX) Harju Esa (Espoo FIX) Salovuori Heikki (Helsinki FIX) Keskinen Jukka (Vantaa FIX) Makinen Kari (Helsinki FIX) Roikonen Olli (Espoo FIX), Method for congestion management in a frame relay network and a node in a frame relay network.
Wilson Charles L. (Darnestown MD) Garris Michael D. (Gaithersburg MD) Wilkinson ; Jr. Robert A. (Hyattstown MD), Object/anti-object neural network segmentation.
Yokono, Jun; Sabe, Kohtaro; Costa, Gabriel; Ohashi, Takeshi, Operational control method, program, and recording media for robot device, and robot device.
Eguchi, Toru; Yamada, Akihiro; Kusumi, Naohiro; Sekiai, Takaaki; Fukai, Masayuki; Shimizu, Satoru, Plant control system and thermal power generation plant control system.
Coenen, Olivier, Proportional-integral-derivative controller effecting expansion kernels comprising a plurality of spiking neurons associated with a plurality of receptive fields.
Onaga Eimei M. (Brookfield Center CT) Casler ; Jr. Richard J. (Newton CT) Penkar Rajan C. (Woodbury CT) Lancraft Roy E. (Southbury CT) Sha Chi (Pittsburgh PA), Robot control system having adaptive feedforward torque control for improved accuracy.
Hickman, Ryan; Kuffner, Jr., James J.; Bruce, James R.; Gharpure, Chaitanya; Kohler, Damon; Poursohi, Arshan; Francis, Jr., Anthony G.; Lewis, Thor, Shared robot knowledge base for use with cloud computing system.
Shaffer Gary K. (Butler PA) Whittaker William L. (Pittsburgh PA) West Jay H. (Pittsburgh PA) Clow Richard G. (Phoenix AZ) Singh Sanjiv J. (Pittsburgh PA) Lay Norman K. (Peoria IL) Devier Lonnie J. (P, System and method for detecting obstacles in the path of a vehicle.
Blumberg, Bruce; Brooks, Rodney; Buehler, Christopher J.; Deegan, Patrick A.; DiCicco, Matthew; Dye, Noelle; Ens, Gerry; Linder, Natan; Siracusa, Michael; Sussman, Michael; Williamson, Matthew M., Training and operating industrial robots.
Mochizuki, Yoshiyuki; Naka, Toshiya; Asahara, Shigeo, Virtual space control data receiving apparatus,virtual space control data transmission and reception system, virtual space control data receiving method, and virtual space control data receiving prog.