IPC Classification Information
Country / Type | United States (US) Patent, Granted
IPC (7th edition) |
Application No. | US-0412050 (1999-10-04)
Inventors / Address |
- Sinai, Julian
- Ehrlich, Steven C.
- Ragoobeer, Rajesh
Applicant / Address |
Agent / Address | Blakely, Sokoloff, Taylor &
Citation Info | Times cited: 60 | Patents cited: 15
Abstract
A computer-implemented graphical design tool allows a developer to graphically author a dialog flow for use in a voice response system and to graphically create an operational link between a hypermedia page and a speech object. The hypermedia page may be a Web site, and the speech object may define a spoken dialog interaction between a person and a machine. Using a drag-and-drop interface, the developer can graphically define a dialog as a sequence of speech objects. The developer can also create a link between a property of any speech object and any field of a Web page, to voice-enable the Web page, or to enable a speech application to access Web site data.
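The abstract describes two linked ideas: a speech object that defines one spoken interaction (a grammar plus the prompts that elicit it), and an operational link that maps a property of a speech object to a field of a Web page. A minimal sketch of those data structures and of the run-time "fill the form from the dialog result" step might look as follows; all names here (`SpeechObject`, `FieldLink`, `fill_page`) are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SpeechObject:
    # Hypothetical sketch: a speech object bundles a grammar with the
    # prompts used to elicit a matching utterance, and exposes named
    # properties (e.g. the recognized value) that can be bound to
    # fields of a hypermedia page.
    name: str
    grammar: list       # phrases the recognizer will accept
    prompts: list       # prompts played to the caller
    properties: dict = field(default_factory=dict)

@dataclass
class FieldLink:
    """An operational link: speech-object property -> Web-page field."""
    speech_object: SpeechObject
    property_name: str
    page_field: str     # name of the form field on the hypermedia page

def fill_page(links, page_form):
    """At run time, copy each linked property into the Web form."""
    for link in links:
        page_form[link.page_field] = link.speech_object.properties[link.property_name]
    return page_form

# Example: a city-picker dialog whose result populates a form field.
city = SpeechObject(
    name="GetCity",
    grammar=["boston", "chicago", "new york"],
    prompts=["Which city?"],
)
city.properties["value"] = "boston"  # would be set by the recognizer
form = fill_page([FieldLink(city, "value", "departure_city")], {})
```

In the tool itself, the developer would create the `FieldLink` records by drag-and-drop rather than in code; the runtime unit then applies them during execution of the dialog, as the abstract describes.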
Representative Claims
What is claimed is:
1. A computer-implemented graphical design tool configured to allow a user of a computer system to graphically create an operational link between a hypermedia page and a component defining a spoken dialog interaction between a person and a machine.
2. A computer-implemented graphical design tool as recited in claim 1, wherein the hypermedia page comprises a World Wide Web page.
3. A computer-implemented graphical design tool as recited in claim 2, wherein the component comprises a speech object.
4. A computer-implemented graphical design tool as recited in claim 1, wherein the tool is configured to allow the user to graphically map a field of a hypermedia page to a property of a speech object.
5. A computer-implemented graphical design tool as recited in claim 1, wherein the component comprises a speech object.
6. A computer-implemented graphical design tool as recited in claim 4, wherein the speech object comprises a grammar and a set of prompts associated with the grammar.
7. A computer-implemented tool for allowing a user of a computer system to specify an operational link between a hypermedia page and a component defining a dialog interaction between a person and a machine, the tool comprising: an editor configured to allow a user to specify a correspondence between an element of said component and an element of the hypermedia page; and a runtime unit configured to functionally link said component with the hypermedia page during execution of the dialog, according to the specified correspondence.
8. A computer-implemented tool as recited in claim 7, wherein the editor is configured to allow the user to specify the correspondence graphically.
9. A computer-implemented tool as recited in claim 7, wherein the hypermedia page comprises a World Wide Web page.
10. A computer-implemented tool as recited in claim 7, wherein the hypermedia page comprises a World Wide Web page and the component comprises a speech object.
11. A computer-implemented tool as recited in claim 10, wherein the editor is further configured to: receive a user input specifying a field of the hypermedia page; and respond to the user input by automatically selecting an appropriate speech object from a set of selectable speech objects, based on said field.
12. A computer-implemented tool as recited in claim 7, wherein the component comprises a speech object.
13. A computer-implemented tool as recited in claim 11, wherein the speech object comprises a grammar and a set of prompts associated with the grammar.
14. A computer-implemented graphical design tool for allowing a user of a computer system to graphically specify an operational link between a hypermedia page and a component that defines a spoken dialog interaction between a person and a machine, the tool comprising: an editor configured to allow a user to specify a correspondence between an element of said component and an element of the hypermedia page; and a runtime unit configured to functionally link said component with the hypermedia page during execution of the spoken dialog according to the specified correspondence.
15. A computer-implemented tool as recited in claim 14, wherein the hypermedia page comprises a World Wide Web page.
16. A computer-implemented tool as recited in claim 15, wherein said element of the hypermedia page is a field of the World Wide Web page.
17. A computer-implemented tool as recited in claim 14, wherein the hypermedia page comprises a World Wide Web page and the component comprises a speech object.
18. A computer-implemented tool as recited in claim 14, wherein the component comprises a speech object.
19. A computer-implemented tool as recited in claim 18, wherein said element of the component is a property of the speech object.
20. A computer-implemented tool as recited in claim 18, wherein the speech object comprises a grammar and a set of prompts associated with the grammar.
21. A tool for allowing a user of a computer system to specify an operational link between a hypermedia page and a component that defines a dialog interaction between a person and a machine, the tool comprising: means for allowing a user to specify a correspondence between an element of the component and an element of the hypermedia page; and means for functionally linking the component with the hypermedia page during execution of the dialog according to the specified correspondence.
22. A tool as recited in claim 21, wherein the hypermedia page comprises a World Wide Web page.
23. A tool as recited in claim 22, wherein said element of the hypermedia page is a field of the World Wide Web page.
24. A tool as recited in claim 21, wherein the hypermedia page comprises a World Wide Web page and the component comprises a speech object.
25. A tool as recited in claim 21, wherein the component comprises a speech object.
26. A tool as recited in claim 25, wherein the speech object comprises a grammar and a set of prompts associated with the grammar.
27. A tool as recited in claim 26, wherein said element of the component is a property of the speech object.
28. A tool as recited in claim 25, wherein the element of said hypermedia page is a field of the hypermedia page, the tool further comprising: means for receiving a user input specifying the field of the hypermedia page; and means for responding to the user input by automatically selecting an appropriate speech object from a set of selectable speech objects, based on said field.
29. A tool for authoring content for use in a voice response system, the tool comprising: a first editor configured to allow a user to specify a spoken dialog between a person and a machine from a set of user-selectable components defining spoken dialog interactions; and a second editor configured to allow the user to specify operational links between hypermedia pages and said components.
30. A tool as recited in claim 29, wherein the first editor is configured to allow the user to graphically specify the spoken dialog, and the second editor is configured to allow the user to graphically specify the operational links.
31. A tool as recited in claim 29, wherein the hypermedia pages comprise World Wide Web pages.
32. A tool as recited in claim 29, wherein the set of user-selectable components comprises a set of speech objects.
33. A tool as recited in claim 29, wherein the hypermedia pages comprise World Wide Web pages and the set of components comprises a set of speech objects.
34. A tool as recited in claim 33, wherein each of the set of speech objects comprises a grammar and a set of prompts associated with the grammar.
35. A tool as recited in claim 29, further comprising a runtime unit configured to functionally link the set of components with the hypermedia pages during execution of the spoken dialog according to the specified links.
36. A tool for authoring content for use in a voice response system, the tool comprising: a first editor configured to allow a user to specify a dialog between a person and a machine from a set of user-selectable components defining dialog interactions; and a hypermedia query mechanism including a second editor configured to allow a user to specify a correspondence between an element of a selected one of the components and an element of a hypermedia page, and a runtime unit configured to functionally link the selected one of the components with the hypermedia page during execution of the dialog according to the specified correspondence.
37. A tool as recited in claim 36, wherein the first editor is configured to allow the user to specify the dialog graphically.
38. A tool as recited in claim 37, wherein the second editor is configured to allow the user to specify the correspondence graphically.
39. A design tool for authoring content for use in a voice response system, the tool comprising: a first editor configured to provide a first graphical user interface allowing a user to graphically specify a spoken dialog between a person and a machine from a set of user-selectable components, each component for defining a spoken dialog interaction; and a query mechanism including a second editor configured to provide a second graphical user interface allowing the user to specify correspondences between properties of any of said components and fields of one or more hypermedia pages, and a runtime unit configured to functionally link said components and said hypermedia pages during execution of the spoken dialog according to the specified correspondences.
40. A design tool as recited in claim 39, wherein each of the set of user-selectable components is a speech object.
41. A design tool for authoring content for use in a voice response system, the tool comprising: a first editor configured to provide a first graphical user interface allowing a user to graphically specify a spoken dialog flow between a person and a machine from a set of user-selectable speech objects, the speech objects each for defining a spoken dialog interaction between a person and a machine; and a Web query mechanism including a second editor configured to provide a second graphical user interface allowing the user to specify correspondences between properties of any of said speech objects and fields of one or more World Wide Web pages, and a runtime unit configured to functionally link said speech objects and said World Wide Web pages during execution of a spoken dialog according to the specified correspondences.
42. A design tool as recited in claim 41, wherein at least one of the set of user-selectable components is a speech object.
43. A design tool as recited in claim 41, wherein the Web query mechanism is further configured to: receive a user input directed to a field of a Web page; and respond to the user input by automatically selecting an appropriate speech object from a set of selectable speech objects, based on said field.
44. A method of allowing a user of a computer system to create content for use in a voice response processing system, the method comprising: receiving user input specifying a correspondence between an element of a hypermedia page and an element of a component that represents a spoken dialog interaction between a person and a machine; and storing data representative of the correspondence based on the user input, the data for use during execution of the spoken dialog.
45. A method as recited in claim 44, further comprising, during execution of the spoken dialog, automatically creating a functional link between the component and the hypermedia page according to the specified correspondence.
46. A method as recited in claim 44, wherein the hypermedia page comprises a World Wide Web page.
47. A method as recited in claim 46, wherein said element of the hypermedia page is a field of the World Wide Web page.
48. A method as recited in claim 44, wherein the hypermedia page comprises a World Wide Web page and the component comprises a speech object.
49. A method as recited in claim 44, wherein the component comprises a speech object.
50. A method as recited in claim 49, wherein said element of the component is a property of the speech object.
51. A method as recited in claim 49, wherein the speech object comprises a grammar and a set of prompts associated with the grammar.
52. A method of allowing a user of a computer system to specify an operational link between a hypermedia page and a component that represents a dialog interaction between a person and a machine, the method comprising: receiving user input specifying a correspondence between a property of the component and a field of the hypermedia page; and during execution of the dialog, automatically creating a functional link between the component and the hypermedia page according to the specified correspondence.
53. A method as recited in claim 52, wherein said user input comprises a drag-and-drop operation between a customizer of the component and a customizer associated with the hypermedia page.
54. A method as recited in claim 52, wherein the hypermedia page comprises a World Wide Web page.
55. A method as recited in claim 52, wherein the hypermedia page comprises a World Wide Web page and the component comprises a speech object.
56. A method as recited in claim 52, wherein the component comprises a speech object.
57. A method as recited in claim 56, wherein the speech object comprises a grammar and a set of prompts associated with the grammar.
58. A method of allowing a user of a computer to create content for use in a voice response system, the method comprising: receiving first user input graphically specifying a spoken dialog between a person and a machine, the first user input including inputs directed to a set of user-selectable components defining spoken dialog interactions; storing first data representing a dialog flow for the spoken dialog based on the first user input; receiving second user input graphically specifying a correspondence between a field of a hypermedia page and a property of one of said components; and storing second data representing the correspondence based on the second user input, wherein the first data and the second data are for use by the voice response system to execute the spoken dialog.
59. A method as recited in claim 58, further comprising: receiving third user input selecting a field of the hypermedia page; and in response to the third user input, automatically identifying a component of said set of user-selectable components, for inclusion in the spoken dialog.
60. A method as recited in claim 58, further comprising: receiving third user input specifying a portion of the hypermedia page that is to be text-to-speech converted at run-time; and in response to the third user input, enabling text-to-speech conversion of the specified portion of the Web page.
61. A method as recited in claim 58, wherein the hypermedia page comprises a World Wide Web page.
62. A method as recited in claim 61, wherein the hypermedia page comprises a World Wide Web page and the set of components comprises a set of speech objects.
63. A method as recited in claim 58, wherein the set of user-selectable components comprises a set of speech objects.
64. A method as recited in claim 63, wherein each of the set of speech objects comprises a grammar and a set of prompts associated with the grammar.
65. A method of allowing a user of a computer to create content for use in a voice response system, the method comprising: enabling the user to create graphically a dialog flow for a spoken dialog between a person and a machine by allowing the user to graphically specify a set of visually-represented speech objects to define the dialog; and enabling the user to establish graphically a functional link between a hypermedia page and one of the speech objects by allowing the user to incorporate graphically an object of a predetermined type into the dialog flow, the object of the predetermined type specifying a correspondence between an element of a hypermedia page and an element of one of the speech objects.
66. A method as recited in claim 65, wherein the object of the predetermined type specifies a correspondence between a field of the hypermedia page and a property of the speech object.
67. A method as recited in claim 66, wherein said enabling the user to establish the functional link comprises enabling the user to specify graphically the correspondence using drag-and-drop inputs.
68. A method as recited in claim 65, wherein said enabling the user to establish the functional link comprises enabling the user to specify graphically the correspondence using drag-and-drop inputs.
69. A method as recited in claim 65, wherein said enabling the user to create graphically a dialog flow comprises: receiving user input selecting a field of the hypermedia page; and in response to the user input, automatically identifying an appropriate one of the speech objects, for inclusion in the spoken dialog.
70. A method as recited in claim 65, wherein said enabling the user to establish graphically a functional link between a hypermedia page and one of the speech objects comprises: receiving user input specifying a portion of the hypermedia page that is to be text-to-speech converted as part of a response to a Web query; and in response to the user input, enabling text-to-speech conversion of the specified portion of the Web page to be performed in response to a Web query.
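Several of the method claims above (notably claims 58 and 65) recite authoring a dialog as an ordered sequence of speech objects that a runtime then executes in turn. A toy sketch of that dialog-flow idea, with illustrative names and a canned-answer stand-in for speech recognition, might look like this:

```python
def run_dialog(flow, answers):
    """Execute a dialog flow: play each speech object's first prompt
    and record the caller's (here, pre-canned) answer.

    Hypothetical sketch; `flow` is an ordered list of speech-object
    dicts and `answers` stands in for the recognizer's output.
    """
    transcript = []
    results = {}
    for so in flow:
        transcript.append(so["prompts"][0])      # prompt the caller
        results[so["name"]] = answers[so["name"]]  # stand-in for recognition
    return transcript, results

# A two-step dialog flow, as a developer might author it by dragging
# two speech objects onto the design canvas.
flow = [
    {"name": "GetCity", "prompts": ["Which city?"]},
    {"name": "GetDate", "prompts": ["What date?"]},
]
transcript, results = run_dialog(
    flow, {"GetCity": "boston", "GetDate": "friday"}
)
```

The claimed tool stores the authored flow as data (claim 58's "first data") and leaves recognition to the voice response system; this sketch only illustrates the sequence-of-components structure, not the recognition itself.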