Interpretation of ambiguous vehicle instructions
IPC Classification
Country / Type
United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition)
G10L-015/22
G10L-015/20
G06N-099/00
G06F-003/01
G02B-027/01
Application number
US-0465049 (2014-08-21)
Registration number
US-9747898 (2017-08-29)
Inventors / Address
Ng-Thow-Hing, Victor
Bark, Karlin
Tran, Cuong
Applicant / Address
Honda Motor Co., Ltd.
Agent / Address
Rankin, Hill & Clark LLP
Citation information
Times cited: 1
Patents cited: 39
Abstract
Various exemplary embodiments relate to a command interpreter for use in a vehicle control system in a vehicle for interpreting user commands, a vehicle interaction system including such a command interpreter, a vehicle including such a vehicle interaction system, and related method and non-transitory machine-readable storage medium, including: a memory and a processor, the processor being configured to: receive, from at least one human via a first input device, a first input having a first type; receive a second input having a second type via a second input device, wherein the second type comprises at least one of sensed information describing a surrounding environment of the vehicle and input received from at least one human; interpret both the first input and the second input to generate a system instruction; and transmit the system instruction to a different system of the vehicle.
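The interpretation flow the abstract describes — correlating a first human input (e.g. a spoken command) with a second, contextual input (sensed environment or a gesture) to produce a single system instruction — can be illustrated with a minimal sketch. All class and function names below are hypothetical illustrations chosen for this sketch; they do not come from the patent.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the two-input interpretation in the abstract:
# a speech transcript plus a contextual second input are correlated
# into one system instruction for another vehicle system.

@dataclass
class SystemInstruction:
    command: str      # e.g. "change_lane"
    parameters: dict  # e.g. {"target_lane": "right"}

def interpret(first_input: str, second_input: dict) -> Optional[SystemInstruction]:
    """Correlate a spoken command with contextual input (illustrative only)."""
    text = first_input.lower()
    if "change lane" in text or "move over" in text:
        # The second input disambiguates which lane the speaker meant.
        target = second_input.get("indicated_lane", "left")
        return SystemInstruction("change_lane", {"target_lane": target})
    if "follow" in text and "vehicle_ahead" in second_input:
        return SystemInstruction("follow", {"target": second_input["vehicle_ahead"]})
    return None  # no command recognized; a real system might request clarification

instr = interpret("please change lane", {"indicated_lane": "right"})
print(instr)
```

In this toy version the correlation is a pair of keyword checks; the claims describe the more general mechanism of determining an instruction command, retrieving its metadata, and filling parameter sets from both inputs.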
Representative Claims
1. A command interpreter of a vehicle control system comprising: a memory and a processor, the memory includes command interpretation instructions, the processor executes the command interpretation instructions, the command interpretation instructions include: receiving, from at least one of a human vehicle driver and a human vehicle passenger via a first input device, a first input having a first input type; receiving a second input having a second input type different from the first input type via a second input device, wherein the second input type comprises at least one of sensed information describing a surrounding environment of the vehicle and input received from at least one of the human vehicle driver and the human vehicle passenger; interpreting both the first input and the second input to generate a system instruction, wherein interpreting both the first input and the second input to generate the system instruction includes correlating the first input and the second input to determine at least one instruction command, wherein upon determining the at least one instruction command, the processor retrieves metadata associated with the at least one instruction command and determines one or more parameter sets for the at least one instruction command based on the metadata, wherein the processor determines if a sufficient confidence level exists in the one or more parameter sets based on a vehicle state indicative of a current vibration level of a cabin of the vehicle received from a vehicle state sensing subsystem and the processor generates the system instruction based on the at least one instruction command and the one or more parameter sets in which the sufficient confidence level exists, wherein the system instruction includes a change lane command to move the vehicle to a lane associated with a vibration level lower than the current vibration level; and controlling an operation of an autonomous control system to control motion of the vehicle based on execution of the system instruction.

2. The command interpreter of claim 1, wherein interpreting both the first input and the second input to generate the system instruction includes: upon determining the at least one instruction command, the processor analyzes the at least one instruction command to determine an instruction command that includes a sufficient confidence level that represents an intention of the at least one of the human vehicle driver and the human passenger, wherein upon determining that the at least one instruction command does not include the sufficient confidence level, a request for command clarification is output.

3. The command interpreter of claim 1, wherein interpreting both the first input and the second input to generate the system instruction includes, upon determining that the sufficient confidence level does not exist in the one or more parameter sets, outputting a request for parameter clarification.

4. The command interpreter of claim 1, wherein interpreting both the first input and the second input to generate the system instruction includes utilizing the second input to select from a plurality of potential instructions identified as being associated with the first input.

5. The command interpreter of claim 1, wherein the processor utilizes machine learning to generate a plurality of rules.

6. The command interpreter of claim 1, wherein: the second input device is a camera; the processor receives the second input from a gesture recognition system; and the second input describes a gesture performed by the at least one of the human vehicle driver and the human vehicle passenger, wherein the processor determines the at least one instruction command based on a correlation of the first input and the gesture performed by the at least one of the human vehicle driver and the human vehicle passenger and retrieving metadata associated with the at least one instruction command to determine the one or more parameter sets for the instruction command based on the metadata.

7. The command interpreter of claim 1, wherein: the processor receives the second input from an environment sensing system that identifies objects present in the environment outside of the vehicle; and the second input describes at least one object present in the environment outside the vehicle.

8. The command interpreter of claim 1, wherein the system instruction is associated with at least one parameter, wherein the at least one parameter is based on a correlation of the first input, the second input, and the one or more parameter sets, wherein the at least one parameter defines an action associated with the system instruction to be performed by the autonomous control system.

9. The command interpreter of claim 1, wherein the system instruction indicates a tactical maneuver for the autonomous control system to perform.

10. The command interpreter of claim 1, further including controlling an operation of a heads-up-display positioned for viewing by at least one of the human vehicle driver and the human vehicle passenger, and the system instruction is associated with a graphic to be displayed via the heads-up-display.

11. A vehicle interaction system comprising: a first input system that receives a first input having a first input type from at least one of a human vehicle driver and a human passenger; a second input system that receives a second input having a second input type that is different from the first input type, wherein the second input type comprises sensed information describing a surrounding environment of a vehicle and input received from at least one of the human vehicle driver and the human passenger; a storage device that stores the first input, the second input, and command interpretation instructions; an output system that accepts instructions from other systems; a command interpreter that executes the command interpretation instructions that include: obtaining the first input and the second input stored in the storage device, interpreting both the first input and the second input to generate a system instruction, wherein interpreting both the first input and the second input to generate the system instruction includes correlating the first input and the second input to determine at least one instruction command, wherein upon determining the at least one instruction command the command interpreter retrieves metadata associated with the at least one instruction command and determines one or more parameter sets for the at least one instruction command based on the metadata, wherein the command interpreter determines if a sufficient confidence level exists in the one or more parameter sets based on a vehicle state indicative of a current vibration level of a cabin of the vehicle received from a vehicle state sensing subsystem and the command interpreter generates the system instruction based on the at least one instruction command and the one or more parameter sets in which the sufficient confidence level exists, wherein the system instruction includes a change lane command to move the vehicle to a lane associated with a vibration level lower than the current vibration level, and controlling an operation of an autonomous control system to control motion of the vehicle based on execution of the system instruction.

12. The command interpreter of claim 11, wherein, in interpreting both the first input and the second input to generate the system instruction, the command interpreter: analyzes the at least one instruction command to determine an instruction command that includes a sufficient confidence level that represents an intention of the at least one of the human vehicle driver and the human passenger upon determining the at least one instruction command, wherein the command interpreter outputs a request for command clarification upon determining that the at least one instruction command does not include the sufficient confidence level.

13. The vehicle interaction system of claim 11, wherein the system instruction includes a command and at least one parameter and, in interpreting both the first input and the second input to generate the system instruction, the command interpreter: outputs a request for parameter clarification upon determining that the sufficient confidence level does not exist in the one or more parameter sets.

14. The vehicle interaction system of claim 11, wherein, in interpreting both the first input and the second input to generate the system instruction, the command interpreter utilizes the second input to select from a plurality of potential instructions identified as being associated with the first input.

15. The vehicle interaction system of claim 11, wherein, in interpreting both the first input and the second input to generate the system instruction, the command interpreter evaluates at least one rule of a plurality of rules correlating various inputs to various system instructions.

16. The vehicle interaction system of claim 15, wherein the command interpreter utilizes machine learning to generate the plurality of rules.

17. The vehicle interaction system of claim 11, wherein: the second input system is a gesture recognition system; the second input describes a gesture performed by the at least one of the human vehicle driver and the human passenger, wherein the command interpreter determines the at least one instruction command based on a correlation of the first input and the gesture performed by the at least one of the human vehicle driver and the human passenger and retrieving metadata associated with the at least one instruction command to determine the one or more parameter sets for the instruction command based on the metadata.

18. The vehicle interaction system of claim 11, wherein: the second input system is an environment sensing system that identifies objects present in the environment outside of the vehicle; and the second input describes at least one object present in the environment outside the vehicle.

19. The vehicle interaction system of claim 11, wherein the system instruction is associated with at least one parameter, wherein the at least one parameter is based on a correlation of the first input, the second input, and the one or more parameter sets, wherein the at least one parameter defines an action associated with the system instruction to be performed by the autonomous control system.
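Two mechanisms recur throughout the claims: a confidence gate on the resolved parameter sets (with a clarification request when confidence is insufficient, claims 2-3 and 12-13) and the vibration-driven lane selection of claims 1 and 11. The sketch below combines them under stated assumptions; the threshold value, function names, and data shapes are all hypothetical, chosen only to make the logic concrete.

```python
# Hypothetical sketch of the confidence-gated interpretation in the
# claims: a parameter set is accepted only above an assumed confidence
# threshold, otherwise the interpreter requests clarification; a
# change-lane command is directed to a lane whose estimated vibration
# is below the cabin's current level.

CONFIDENCE_THRESHOLD = 0.7  # assumed value, not specified by the patent

def resolve(parameter_sets, current_vibration, lane_vibration):
    """Pick a sufficiently confident parameter set, or ask to clarify.

    parameter_sets:    list of (params_dict, confidence) candidates.
    current_vibration: measured cabin vibration level.
    lane_vibration:    estimated vibration level per candidate lane.
    """
    confident = [(p, c) for p, c in parameter_sets if c >= CONFIDENCE_THRESHOLD]
    if not confident:
        # No parameter set reaches sufficient confidence (cf. claim 3).
        return {"request": "parameter_clarification"}
    params, _ = max(confident, key=lambda pc: pc[1])
    if params.get("command") == "change_lane":
        # Cf. claim 1: target a lane with lower vibration than the cabin has now.
        quieter = {lane: v for lane, v in lane_vibration.items()
                   if v < current_vibration}
        if quieter:
            params["target_lane"] = min(quieter, key=quieter.get)
    return params

result = resolve(
    [({"command": "change_lane"}, 0.9)],
    current_vibration=0.6,
    lane_vibration={"left": 0.3, "right": 0.5},
)
print(result)  # the smoother left lane is selected
```

The clarification branch returns a sentinel here; in the claimed system it corresponds to outputting a spoken or displayed request back to the driver or passenger before any instruction reaches the autonomous control system.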
Patents cited by this patent (39)
Hahn, Stefan, Attention control for operators of technical equipment.
De Jong Durk J. (Eindhoven NLX), Method of displaying navigation data for a vehicle in an image of the vehicle environment, a navigation system for perfo.
Lee, Ethan J.; Lee, Daniel B.; Hendricks, Michael G.; Roth, Mark R.; Myers, Craig R.; Poe, G. Bruce; Blaker, David A.; Schaudel, Carsten; VanVuuren, Mark A.; VanderPloeg, John A., Multi-display mirror system and method for expanded view around a vehicle.
Vogt, Wilhelm; Varchmin, Axel; Mai, Kerstin, Route guidance method and system for implementing such a method, as well as a corresponding computer program and a corresponding computer-readable storage medium.
Agarwal, Jitender Kumar; Paulraj, Vasantha Selvi; Krishna, Kiran Gopala; Garg, Chaya; Burgin, Roger W.; De Mers, Robert E, Methods and apparatus for post-processing speech recognition results of received radio voice messages onboard an aircraft.