Sensor use and analysis for dynamic update of interaction in a social robot
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): B25J-011/00; B25J-009/00; B25J-009/16; G06N-007/00; G06F-003/16
Application Number: US-0794765 (filed 2015-07-08)
Registration Number: US-9724824 (granted 2017-08-08)
Inventors: Annan, Brandon; Cole, Joshua R.; Gilbert, Deborah M.; Indurkar, Dhananjay
Applicant: Sprint Communications Company L.P.
Citation Information: cited by 3 patents; cites 8 patents
Abstract
A method of optimizing social interaction between a robot and a human. The method comprises generating and then executing a robot motion script for interaction with a human by a robot, based on a characteristic detected by at least one of a plurality of sensors on the robot. The method further comprises detecting, by at least one sensor of the robot, a reaction of the human during a first period. The robot then analyzes the reaction of the human and assigns a positive or negative classification to the reaction based on a pre-defined mapping stored in the memory of the robot. The method further comprises modifying the robot motion script to incorporate a pre-defined modification based on the determination of a negative classification of the human reaction. The method further comprises executing the modified robot motion script during a second period to obtain an improved interaction with the human.
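The abstract describes a closed feedback loop: execute a motion script, sense the human's reaction, classify it as positive or negative against a pre-defined mapping held in robot memory, and patch the script with a pre-defined modification when the reaction is negative. The sketch below illustrates that loop in Python; the cue names, the mapping tables, and the robot interface (execute, detect_reaction) are hypothetical stand-ins, not the patented implementation.

```python
# Minimal sketch of the abstract's adapt loop. The cue-to-classification
# mapping and the modification table are illustrative placeholders.
REACTION_MAP = {          # pre-defined mapping stored in robot memory
    "smile": "positive",
    "leaning_in": "positive",
    "frown": "negative",
    "stepping_back": "negative",
}

MODIFICATIONS = {         # pre-defined modifications keyed by script step
    "wave_fast": "wave_slow",
    "approach_close": "keep_distance",
}

def classify(reaction_cues):
    """Assign a positive or negative classification to detected cues."""
    votes = [REACTION_MAP.get(cue) for cue in reaction_cues]
    return "negative" if "negative" in votes else "positive"

def run_interaction(robot, script):
    """One pass of the abstract's method: execute, detect, adapt, re-execute."""
    robot.execute(script)                       # first period
    cues = robot.detect_reaction()              # audio/visual sensors
    if classify(cues) == "negative":
        # swap in pre-defined modifications for any matching script steps
        script = [MODIFICATIONS.get(step, step) for step in script]
        robot.execute(script)                   # second period, modified script
    return script
```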
Representative Claims
1. A social robot comprising: a speaker; a plurality of sensors, the plurality of sensors comprising at least one audio sensor and at least one visual sensor; a processor; and a non-transitory computer readable storage medium storing programming for execution by the processor, the programming including instructions to: receive input via at least one of the plurality of sensors; identify a plurality of characteristics of a human based on the input received via the at least one of the plurality of sensors, the plurality of characteristics of the human comprising a diction level, a voice cadence, a gender, and an age range; generate, based on the plurality of characteristics, a robot dialog that includes a first control question for interaction with the human, wherein the first control question invokes at least one of a physical reaction or vocal reaction from the human to confirm that the plurality of characteristics are associated with one of a plurality of profiles stored in the storage medium; playback, via the speaker, the robot dialog as audible speech to initiate a social interaction between the social robot and the human; in response to playback of the first control question of the robot dialog, detect, via one or more of the plurality of sensors, a reaction of the human, the reaction being at least one of a physical reaction and a vocal reaction; confirm that a profile corresponding to the human matches the plurality of characteristics based on the detected reaction; based on the profile, initiate a motion script that activates actuators of the social robot; irrespective of the detected reaction of the human, insert additional control questions into the robot dialog, wherein the additional control questions are dispersed throughout the robot dialog and invoke one or more of a physical reaction and a vocal reaction from the human; and continue playback of the robot dialog that includes the additional control questions.

2. The social robot of claim 1, wherein the plurality of characteristics of the human identified further comprises a race and a non-anthropometric identifier chosen by the human.

3. The social robot of claim 1, wherein the plurality of characteristics is identified based on at least one of an audio data acquired by the at least one of the plurality of sensors and a visual data acquired by the at least one of the plurality of sensors.
4. A method for adapting robot motion to improve social interaction between a social robot and a human comprising: receiving, by a social robot executing a processor, input via a plurality of sensors of the social robot; identifying, by the social robot, a plurality of characteristics of the human based on the input received via the plurality of sensors of the social robot, the plurality of characteristics of the human comprising a diction level, a voice cadence, a gender, and an age range; based on identifying the plurality of characteristics, generating, by the social robot, a robot dialog that includes a first control question for interaction with the human, wherein the first control question invokes at least one of a physical reaction or vocal reaction from the human to confirm that the plurality of characteristics are associated with one of a plurality of profiles stored in a non-transitory storage medium communicatively coupled to the social robot; generating, by the social robot executing a motion script application, a robot motion script for interaction with the human; executing, by the social robot, the robot motion script that activates actuators of the social robot to initiate a social interaction between the social robot and the human; playing, via a speaker of the social robot, the robot dialog as audible speech while the robot motion script is executing; in response to playing the first control question of the robot dialog, detecting, via the plurality of sensors, a reaction of the human, the reaction being at least one of a physical reaction and a vocal reaction; confirming, by the social robot, that a profile corresponding to the human matches the plurality of characteristics based on the detected reaction; irrespective of the detected reaction of the human, inserting, by the social robot, additional control questions into the robot dialog, wherein the additional control questions are dispersed throughout the robot dialog and invoke one or more of a physical reaction and a vocal reaction from the human; and continuing, by the social robot, playback of the robot dialog that includes the additional control questions.

5. The method of claim 4, wherein the plurality of characteristics is identified based on at least one of an audio data acquired by one of the plurality of the sensors and a visual data acquired by one of the plurality of the sensors.

6. The method of claim 4, wherein executing the motion script occurs over a first time period.

7. The method of claim 4, further comprising: adapting, by the social robot, the robot motion script and the robot dialog in response to reactions from the human following playback of the additional questions.

8. The method of claim 4, wherein a characteristic of the plurality of characteristics corresponds with a race of the human.

9. The method of claim 4, wherein the plurality of characteristics is identified by a cadence range of voice based on a pre-defined mapping stored in a memory of the social robot mapped to at least one of an age range, race, and gender.

10. The method of claim 4, wherein the plurality of characteristics is identified by a visually acquired data based on a pre-defined mapping stored in a memory of the social robot mapped to at least one of an age profile, a race profile, and a gender profile.

11. The method of claim 4, wherein the robot motion script comprises at least one or more of audio and movement commands.
12. The method of claim 4, wherein identifying the plurality of characteristics comprises: collecting, by the plurality of sensors, audio or visual data corresponding to the human, and determining, by the social robot, that the audio or visual data is associated with at least one user profile stored in a memory of the social robot.

13. A method for adapting robot motion to improve social interaction between a robot and a human comprising: identifying, by a social robot, a plurality of characteristics of a human via at least one sensor of the social robot, the plurality of characteristics of the human comprising a diction level, a voice cadence, a gender, and an age range; based on identifying the plurality of characteristics, generating, by the social robot, a robot dialog that includes a first control question for interaction with the human, wherein the first control question invokes at least one of a physical reaction or vocal reaction from the human to confirm that the plurality of characteristics are associated with one of a plurality of profiles stored in a non-transitory storage medium communicatively coupled to the social robot; generating, by the social robot, a robot motion script for interaction with the human; executing, by the social robot, the robot motion script to interact with the human during a first period; playing, via a speaker of the social robot, the robot dialog as audible speech while the robot motion script is executing; in response to playing at least the control question of the robot dialog, detecting, by the social robot, a reaction of the human during the first period via a sensor of the social robot; in response to detecting, analyzing, by the social robot, the reaction of the human; confirming, by the social robot, that a profile corresponding to the human matches the plurality of characteristics based on analyzing the reaction; based on the profile, modifying, by the social robot, the robot motion script to obtain a modified robot motion script; irrespective of the detected reaction of the human, inserting, by the social robot, additional control questions into the robot dialog, wherein the additional control questions are dispersed throughout the robot dialog and invoke one or more of a physical reaction and a vocal reaction from the human; executing, by the social robot, the modified robot motion script during a second period to obtain an improved interaction with the human; and continuing, by the social robot, playback of the robot dialog that includes at least some of the additional control questions during the second period.

14. The method of claim 13, wherein the reaction comprises at least one of a vocal reaction or a physical reaction, the vocal reaction comprising at least one voice generated sound by the human and the physical reaction comprising at least one body movement made by the human.

15. The method of claim 14, wherein analyzing the reaction by the social robot comprises determining a positive or negative classification of the reaction based on playback of the first control question.
16. The method of claim 15, wherein the determining the positive or negative classification comprises: detecting, by the social robot, at least one of a voice generated sound or a body movement of the human during the first period; and determining, by the social robot, that the voice generated sound or the body movement is associated with one of the positive or negative classification based on a pre-defined mapping stored in a memory of the social robot, wherein the pre-defined mapping maps known body movement or voice-generated sound cues to positive or negative reactions.

17. The method of claim 13, wherein modifying the robot motion script comprises incorporating a pre-defined modification stored in a memory of the social robot, and wherein the pre-defined modification is based on the determination of a negative reaction.

18. The method of claim 13, wherein the robot motion script comprises at least one or more of audio and movement commands of the social robot.

19. The method of claim 13, wherein the modified robot motion script comprises at least one or more of audio and movement commands of the social robot.

20. The method of claim 13, wherein the steps of detecting the reaction, analyzing the reaction of the human, modifying the robot motion script based on the analysis, and executing, by the social robot, the modified robot motion script during a second period are repeated until a positive reaction from the human is determined to occur by the social robot.
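Claims 1, 4, and 13 share a confirmation step: the robot infers candidate characteristics (diction level, voice cadence, gender, age range), matches them to a stored profile, then plays a control question whose physical or vocal reaction confirms the match. Below is a minimal sketch of that step, assuming an invented Profile record, a trivial exact-match lookup, and an illustrative set of confirming cues; none of these names come from the patent.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Stored profile; fields mirror the characteristics named in claim 1."""
    diction_level: str
    voice_cadence: str
    gender: str
    age_range: str
    control_question: str   # question expected to invoke a confirming reaction

def match_profile(characteristics, profiles):
    """Return the first stored profile whose fields match the sensed characteristics."""
    for p in profiles:
        if (p.diction_level == characteristics["diction_level"]
                and p.voice_cadence == characteristics["voice_cadence"]
                and p.gender == characteristics["gender"]
                and p.age_range == characteristics["age_range"]):
            return p
    return None

def confirm_profile(robot, profile):
    """Play the control question, then check for an expected reaction."""
    robot.speak(profile.control_question)   # playback via the robot's speaker
    reaction = robot.detect_reaction()      # physical or vocal reaction
    return reaction in ("nod", "yes")       # illustrative confirming cues
```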
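Claim 20 closes the loop: detecting, analyzing, modifying, and re-executing are repeated until a positive reaction is determined. A sketch under the same assumptions as the abstract example, with the classifier and the modification table passed in (the max_rounds safety bound is added here and is not part of the claim):

```python
def adapt_until_positive(robot, script, classify, modifications, max_rounds=5):
    """Repeat claim 20's detect/analyze/modify/execute cycle.

    classify and modifications are supplied by the caller; the abstract
    sketch above shows example definitions. max_rounds bounds the loop,
    whereas the claim itself loops until a positive reaction occurs.
    """
    for _ in range(max_rounds):
        robot.execute(script)                    # execute (modified) script
        cues = robot.detect_reaction()           # sense the human's reaction
        if classify(cues) == "positive":         # analyze via stored mapping
            return script
        # incorporate pre-defined modifications on a negative reaction
        script = [modifications.get(step, step) for step in script]
    return script
```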
Patents Cited by This Patent (8)
Annan, Brandon C.; Cole, Joshua R.; Gilbert, Deborah L.; Indurkar, Dhananjay, Interactive behavior engagement and management in subordinate airborne robots.
Annan, Brandon C.; Cole, Joshua R.; Gilbert, Deborah M.; Indurkar, Dhananjay, Matrix barcode enhancement through capture and use of neighboring environment image.
Seale, Jennifer; Lindsley, Hannah; Margheim, Timothy Allen, Method and apparatus for defining an artificial brain via a plurality of concept nodes defined by frame semantics.
Chefalas, Thomas E.; Kochut, Andrzej; Pickover, Clifford A.; Weldemariam, Komminist, Drone and drone-based system and methods for helping users assemble an object.