Disclosed herein are systems, computer-implemented methods, and computer-readable media for dialog modeling. The method includes receiving spoken dialogs annotated to indicate dialog acts and task/subtask information, parsing the spoken dialogs with a hierarchical, parse-based dialog model which operates incrementally from left to right and which only analyzes a preceding dialog context to generate parsed spoken dialogs, and constructing a functional task structure of the parsed spoken dialogs. The method can further either interpret user utterances with the functional task structure of the parsed spoken dialogs or plan system responses to user utterances with the functional task structure of the parsed spoken dialogs. The parse-based dialog model can be a shift-reduce model, a start-complete model, or a connection path model.
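The abstract's shift-reduce variant (a stack onto which each utterance is shifted, with reduce actions building subtask subtrees, operating incrementally left to right) can be illustrated with a minimal sketch. All names here (`shift_reduce_parse`, the tuple-based utterance encoding, the toy reduce condition) are illustrative assumptions for exposition, not the patent's actual implementation.

```python
# Minimal sketch of an incremental shift-reduce dialog parse, assuming
# each utterance is a (dialog_act, text) tuple. Illustrative only.

def shift_reduce_parse(utterances, should_reduce):
    """Parse a dialog incrementally from left to right.

    Each utterance is shifted onto a stack; after every shift the top of
    the stack is inspected, and while `should_reduce` fires, the two
    topmost items are folded into a subtask subtree.
    """
    stack = []
    for utt in utterances:
        stack.append(utt)                           # shift
        while len(stack) >= 2 and should_reduce(stack):
            right = stack.pop()                     # reduce: fold the two
            left = stack.pop()                      # topmost items into
            stack.append(("subtask", left, right))  # a subtree
    return stack

# Toy usage: group consecutive utterances that share a dialog act.
dialog = [("greet", "hello"), ("ask", "order status?"), ("ask", "order 42")]
tree = shift_reduce_parse(dialog, lambda s: s[-1][0] == s[-2][0])
```

Because only the stack (i.e., the preceding dialog context) is inspected, the parse can run online as each utterance arrives, which is the property the claims emphasize.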
Representative Claims
1. A method comprising: training a plurality of hierarchical, parse-based dialog models comprising a shift-reduce model, a start-complete model, and a connection path model, wherein the plurality of hierarchical, parse-based dialog models operate incrementally from left to right and only analyze an immediately preceding dialog context; parsing, via a processor, spoken dialogs with a hierarchical, parse-based dialog model from the plurality of hierarchical, parse-based dialog models, to yield parsed spoken dialogs, wherein the spoken dialogs are annotated to indicate dialog acts, feature vectors, and task/subtask information; constructing a functional task structure of the parsed spoken dialogs, wherein the functional task structure does not comprise a rhetorical structure of the parsed spoken dialogs; predicting a likely next dialog act using the functional task structure, the feature vectors, and the hierarchical, parse-based dialog model; and selecting a language model for a next utterance based on the likely next dialog act.

2. The method of claim 1, wherein the shift-reduce model has a stack and a tree, and (a) shifts each utterance onto the stack, (b) inspects the stack, and (c) based on the stack inspection, performs a reduce action that creates subtrees in the tree.

3. The method of claim 1, wherein the start-complete model uses a stack to maintain a global parse state and produces a dialog task structure directly without producing an equivalent tree.

4. The method of claim 1, wherein the connection path model does not use a stack to maintain a global parse state, and wherein the connection path model (a) directly predicts a connection path from a root to a terminal for each received spoken dialog, and (b) creates a parse tree representing the connection path for each received spoken dialog.

5. The method of claim 1, further comprising: incrementally receiving user utterances as a dialog progresses; assigning a dialog act to a current user utterance based on the functional task structure of the parsed spoken dialogs; assigning a subtask label to the current user utterance based on the functional task structure of the parsed spoken dialogs; predicting a system subtask label for a next system utterance based on the functional task structure of the parsed spoken dialogs; predicting a system dialog act for a next system utterance based on the functional task structure of the parsed spoken dialogs; predicting a next subtask label for a next user utterance based on the functional task structure of the parsed spoken dialogs; and predicting a next dialog act for a next user utterance based on the functional task structure of the parsed spoken dialogs.

6. The method of claim 5, wherein interpreting and predicting are modeled as maximum entropy classifiers which select dialog acts or subtask labels from a pre-selected list.

7. The method of claim 5, further comprising measuring dialog efficiency at different dialog stages.

8. A system comprising: a processor; and a non-transitory computer-readable storage medium having instructions stored which, when executed on the processor, cause the processor to perform operations comprising: training a plurality of hierarchical, parse-based dialog models comprising a shift-reduce model, a start-complete model, and a connection path model, wherein the plurality of hierarchical, parse-based dialog models operate incrementally from left to right and only analyze an immediately preceding dialog context; parsing, via the processor, spoken dialogs with a hierarchical, parse-based dialog model from the plurality of hierarchical, parse-based dialog models, to yield parsed spoken dialogs, wherein the spoken dialogs are annotated to indicate dialog acts, feature vectors, and task/subtask information; constructing a functional task structure of the parsed spoken dialogs, wherein the functional task structure does not comprise a rhetorical structure of the parsed spoken dialogs; predicting a likely next dialog act using the functional task structure, the feature vectors, and the hierarchical, parse-based dialog model; and selecting a language model for a next utterance based on the likely next dialog act.

9. The system of claim 8, wherein the shift-reduce model has a stack and a tree, and (a) shifts each utterance onto the stack, (b) inspects the stack, and (c) based on the stack inspection, performs a reduce action that creates subtrees in the tree.

10. The system of claim 8, wherein the start-complete model uses a stack to maintain a global parse state and produces a dialog task structure directly without producing an equivalent tree.

11. The system of claim 8, wherein the connection path model does not use a stack to maintain a global parse state, and wherein the connection path model (a) directly predicts a connection path from a root to a terminal for each received spoken dialog, and (b) creates a parse tree representing the connection path for each received spoken dialog.

12. The system of claim 8, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the processor, cause the processor to perform operations comprising: incrementally receiving user utterances as a dialog progresses; assigning a dialog act to a current user utterance based on the functional task structure of the parsed spoken dialogs; assigning a subtask label to the current user utterance based on the functional task structure of the parsed spoken dialogs; predicting a system subtask label for a next system utterance based on the functional task structure of the parsed spoken dialogs; predicting a system dialog act for a next system utterance based on the functional task structure of the parsed spoken dialogs; predicting a next subtask label for a next user utterance based on the functional task structure of the parsed spoken dialogs; and predicting a next dialog act for a next user utterance based on the functional task structure of the parsed spoken dialogs.

13. The system of claim 12, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the processor, cause the processor to perform further operations comprising modeling predictions as maximum entropy classifiers which select dialog acts or subtask labels from a pre-selected list.

14. The system of claim 12, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the processor, cause the processor to perform further operations comprising measuring dialog efficiency at different dialog stages.

15. A computer-readable device having instructions stored which, when executed by a computing device, cause the computing device to perform operations comprising: training a plurality of hierarchical, parse-based dialog models comprising a shift-reduce model, a start-complete model, and a connection path model, wherein the plurality of hierarchical, parse-based dialog models operate incrementally from left to right and only analyze an immediately preceding dialog context; parsing, via a processor, spoken dialogs with a hierarchical, parse-based dialog model from the plurality of hierarchical, parse-based dialog models, to yield parsed spoken dialogs, wherein the spoken dialogs are annotated to indicate dialog acts, feature vectors, and task/subtask information; constructing a functional task structure of the parsed spoken dialogs, wherein the functional task structure does not comprise a rhetorical structure of the parsed spoken dialogs; predicting a likely next dialog act using the functional task structure, the feature vectors, and the hierarchical, parse-based dialog model; and selecting a language model for a next utterance based on the likely next dialog act.

16. The computer-readable device of claim 15, wherein the shift-reduce model has a stack and a tree, and (a) shifts each utterance onto the stack, (b) inspects the stack, and (c) based on the stack inspection, performs a reduce action that creates subtrees in the tree.

17. The computer-readable device of claim 15, wherein the start-complete model uses a stack to maintain a global parse state and produces a dialog task structure directly without producing an equivalent tree.

18. The computer-readable device of claim 15, wherein the connection path model does not use a stack to maintain a global parse state, and wherein the connection path model (a) directly predicts a connection path from a root to a terminal for each received spoken dialog, and (b) creates a parse tree representing the connection path for each received spoken dialog.

19. The computer-readable device of claim 15, having additional instructions stored which, when executed by the computing device, cause the computing device to perform further operations comprising: incrementally receiving user utterances as a dialog progresses; assigning a dialog act to a current user utterance based on the functional task structure of the parsed spoken dialogs; assigning a subtask label to the current user utterance based on the functional task structure of the parsed spoken dialogs; predicting a system subtask label for a next system utterance based on the functional task structure of the parsed spoken dialogs; predicting a system dialog act for a next system utterance based on the functional task structure of the parsed spoken dialogs; predicting a next subtask label for a next user utterance based on the functional task structure of the parsed spoken dialogs; and predicting a next dialog act for a next user utterance based on the functional task structure of the parsed spoken dialogs.

20. The computer-readable device of claim 19, having additional instructions stored which, when executed by the computing device, cause the computing device to perform further operations comprising measuring dialog efficiency at different dialog stages.
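Claims 6 and 13 model prediction as maximum entropy classifiers selecting from a pre-selected list of dialog acts. A maxent classifier at prediction time scores each candidate act as the exponential of a weighted feature sum and normalizes over the candidate list. The sketch below shows only that scoring step; the act inventory, feature names, and weights are illustrative assumptions, and a real system would learn the weights from the annotated dialogs.

```python
# Hedged sketch of the maximum-entropy prediction step: score each
# candidate dialog act as exp(w . f), normalize, and take the argmax.
# DIALOG_ACTS, feature names, and weights are illustrative only.
import math

DIALOG_ACTS = ["ask", "answer", "confirm", "close"]

def predict_dialog_act(features, weights):
    """Return (most likely act, probability distribution over acts).

    `features`: dict mapping feature name -> value (e.g. previous act,
    current subtask label from the functional task structure).
    `weights`: dict mapping (act, feature name) -> learned weight.
    """
    scores = {}
    for act in DIALOG_ACTS:
        z = sum(weights.get((act, f), 0.0) * v for f, v in features.items())
        scores[act] = math.exp(z)
    total = sum(scores.values())
    probs = {act: s / total for act, s in scores.items()}
    return max(probs, key=probs.get), probs

# Toy weights: after a user "ask", the system tends to "answer".
w = {("answer", "prev_act=ask"): 2.0, ("confirm", "prev_act=ask"): 0.5}
act, probs = predict_dialog_act({"prev_act=ask": 1.0}, w)
```

The predicted act could then drive the final claimed step, choosing a dialog-act-specific language model for recognizing the next utterance.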
Patents Cited in This Patent (16)
Amirghodsi, Siamak (Prairie View, IL); Daneshbodi, Farnoud (Prairie View, IL), Adaptive natural language computer interface system.
Horvitz, Eric J., System and methods for inferring informational goals and preferred level of detail of results in response to questions posed to an automated information-retrieval or question-answering service.