Location-based conversational understanding may be provided. Upon receiving a query from a user, an environmental context associated with the query may be generated. The query may be interpreted according to the environmental context. The interpreted query may be executed and at least one result associated with the query may be provided to the user.
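The abstract describes a four-step pipeline: receive a query, build an environmental context, interpret the query against that context, then execute it and return results. The sketch below is a minimal, illustrative rendering of that flow; all function names and the dictionary-based context are assumptions for demonstration, not the patent's actual implementation.

```python
# Illustrative sketch of the abstract's pipeline. All names and data
# structures here are assumptions, not the patented system's API.

def build_environmental_context(location, audio=""):
    # Derive a context from where the query was made and a crude
    # characterization of the surrounding audio.
    noise = "crowd" if "crowd" in audio else "quiet"
    return {"location": location, "noise_profile": noise}

def interpret_query(query_text, context):
    # Bias interpretation using the context, e.g. scope the search
    # domain to the detected location.
    return {"text": query_text, "domain": "near:" + context["location"]}

def execute_query(interpreted):
    # Stand-in for a real search backend.
    return ["result for '%s' in %s" % (interpreted["text"], interpreted["domain"])]

def answer(query_text, location, audio=""):
    context = build_environmental_context(location, audio)
    interpreted = interpret_query(query_text, context)
    return execute_query(interpreted)

print(answer("coffee shops", "downtown"))
# → ["result for 'coffee shops' in near:downtown"]
```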
Representative Claim ▼
1. A system comprising: at least one processor; and memory coupled to the at least one processor, the memory comprising computer executable instructions that, when executed by the at least one processor, perform a method for providing location-based conversational understanding, the method comprising: receiving, by a computing device, a speech-based query from a user at a location; determining whether an environmental context is associated with the location; when it is determined that no environmental context is associated with the location: identifying at least an acoustic interference in the speech-based query; identifying a subject of the speech-based query; based at least on the identified subject of the speech-based query, creating the environmental context, wherein the environmental context suppresses the acoustic interference; and converting the speech-based query to a text-based query using the environmental context.

2. The system of claim 1, wherein the environmental context is associated with at least one of: an understanding model to facilitate speech-to-text conversion; and a semantic model to facilitate query execution.

3. The system of claim 1, wherein the location is determined using at least one of: global positioning system (GPS) coordinates; an area code associated with the user; a zip code associated with the user; and a proximity to a landmark.

4. The system of claim 1, wherein identifying at least an acoustic interference comprises analyzing audio of the query and identifying background noise in the audio.

5. The system of claim 1, wherein identifying the subject of the speech-based query comprises: asking for clarification of the speech-based query from the user; and correlating a plurality of queries, wherein the plurality of queries are identified as requesting similar information.

6. The system of claim 1, wherein creating the environmental context comprises: associating the identified acoustic interference and the identified subject with the location; and storing the identified acoustic interference, the identified subject and the location information in a context database.

7. The system of claim 1, wherein converting the speech-based query comprises applying a filter for removing the acoustic interference associated with the environmental context.

8. The system of claim 1, wherein converting the speech-based query includes utilizing a Hidden Markov Model algorithm comprising statistical weightings for at least one of: words and semantic concepts.

9. The system of claim 1, the method further comprising: executing the text-based query according to the environmental context within a search domain associated with the identified subject; and providing a result of the executed text-based query to the user.

10. A method for providing location-based conversational understanding, the method comprising: receiving, by a computing device, a first speech-based query from a user at a location; determining whether an environmental context is associated with the location; when it is determined that no environmental context is associated with the location: identifying at least a first acoustic interference in the first speech-based query; identifying a subject of the first speech-based query; based at least on the identified subject of the first speech-based query, creating a first environmental context, wherein the first environmental context suppresses the first acoustic interference; and converting the first speech-based query to a text-based query using the first environmental context.

11. The method of claim 10, wherein converting the first speech-based query includes utilizing a Hidden Markov Model algorithm comprising statistical weightings for at least one of: words likely to be associated with an understanding model; and semantic concepts associated with a semantic model.

12. The method of claim 11, further comprising increasing the statistical weightings of one or more predicted words according to one or more previous queries received at the location.

13. The method of claim 10, wherein the first environmental context comprises an acoustic model associated with the location, and wherein the first speech-based query is adapted according to the first acoustic interference using the acoustic model.

14. The method of claim 13, wherein adapting the first speech-based query comprises: identifying at least one background sound from the first acoustic interference; adapting the first speech-based query to ignore the at least one background sound; and storing the at least one background sound.

15. The method of claim 14, further comprising: receiving a second speech-based query associated with the location; applying the acoustic model associated with the location to the second speech-based query; and adapting the second speech-based query to ignore the stored at least one background sound.

16. The method of claim 15, further comprising: identifying a second acoustic interference in the second speech-based query; based on the second acoustic interference, updating the acoustic model associated with the location; and adapting the second speech-based query to ignore one or more background sounds in the second acoustic interference.

17. The method of claim 10, further comprising: receiving a second speech-based query associated with the location; creating a second environmental context based on the second speech-based query; aggregating the first environmental context and the second environmental context into an aggregated environmental context, wherein the aggregated environmental context is associated with the location; and storing the aggregated environmental context.

18. The method of claim 17, wherein the aggregated environmental context comprises the subject of the first speech-based query and a subject of the second speech-based query.

19. The method of claim 18, wherein the subject of the first speech-based query is used to improve a result for the second speech-based query.

20. A computer-readable storage device storing computer executable instructions that when executed cause a computing system to perform a method for providing location-based conversational understanding, the method comprising: receiving, by a computing device, a speech-based query from a user at a location; determining whether an environmental context is associated with the location; when it is determined that no environmental context is associated with the location: identifying at least an acoustic interference in the speech-based query; identifying a subject of the speech-based query; based at least on the identified subject of the speech-based query, creating an environmental context, wherein the environmental context suppresses the acoustic interference; and converting the speech-based query to a text-based query using the environmental context.
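Claims 7, 12, and 14 together describe a per-location environmental context that stores known background sounds, filters them out of later queries, and boosts the weights of words seen in previous queries at that location. The toy sketch below illustrates those three mechanics only; the class name, the token-dropping "filter", and the unigram weighting are simplified assumptions, since the claims themselves describe an HMM-based recognizer operating on audio, not on tokens.

```python
# Toy illustration of claims 7, 12, and 14: a per-location context that
# (a) stores background sounds, (b) "suppresses" them from later input,
# and (c) boosts weightings of words from previous queries at the
# location. A real system would filter audio and rescore an HMM; the
# token set and unigram weights below are deliberate simplifications.

from collections import defaultdict

class EnvironmentalContext:
    def __init__(self, location):
        self.location = location
        self.background_sounds = set()                 # claim 14: stored background sounds
        self.word_weights = defaultdict(lambda: 1.0)   # claim 12: statistical weightings

    def suppress(self, tokens):
        # Claim 7: apply a filter removing the acoustic interference,
        # modeled here as dropping known noise tokens.
        return [t for t in tokens if t not in self.background_sounds]

    def boost_from_query(self, words, factor=1.5):
        # Claim 12: increase weightings of words from previous queries
        # received at this location.
        for w in words:
            self.word_weights[w] *= factor

    def rank(self, candidates):
        # Prefer the candidate word with the highest learned weight.
        return max(candidates, key=lambda w: self.word_weights[w])

ctx = EnvironmentalContext("train station")
ctx.background_sounds.add("<train-horn>")
ctx.boost_from_query(["timetable", "platform"])

clean = ctx.suppress(["<train-horn>", "timetable"])
print(clean)                            # → ['timetable']
print(ctx.rank(["tame", "timetable"]))  # → timetable
```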
Moran Thomas P. ; Chiu Patrick ; Melle William Van ; Kurtenbach Gordon,CAX, Apparatus and method for implementing visual animation illustrating results of interactive editing operations.
Bhandari Archna ; Janiszewski Mary E. ; Mehrotra Rajiv, Computer program product and a method for using natural language for the description, search and retrieval of multi-medi.
Ando Haru (Kokubunji JPX) Kitahara Yoshinori (Musashimurayama JPX), Display system capable of accepting user commands by use of voice and gesture inputs.
Cohen, Charles J.; Beach, Glenn; Cavell, Brook; Foulk, Gene; Jacobus, Charles J.; Obermark, Jay; Paul, George, Gesture-controlled interfaces for self-service machines and other applications.
Appelt, Douglas E.; Arnold, James Frederick; Bear, John S.; Hobbs, Jerry Robert; Israel, David J.; Kameyama, Megumi; Martin, David L.; Myers, Karen Louise; Ravichandran, Gopalan; Stickel, Mark Edward, Information retrieval by natural language querying.
Gruber, Thomas Robert; Cheyer, Adam John; Kittlaus, Dag; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Intelligent automated assistant.
Bennett, Ian M.; Babu, Bandi Ramesh; Morkhandikar, Kishor; Gururaj, Pallaki, Interactive speech based learning/training system formulating search queries based on natural language parsing of recognized user queries.
Carus Alwin B. (Newton MA) Wiesner Michael (West Roxbury MA) Haque Ateeque R. (Medford MA), Method and apparatus for automated search and retrieval process.
Moran Thomas P. ; Chiu Patrick ; van Melle William J. ; Kurtenbach Gordon P.,CAX, Method and apparatus for grouping graphic objects on a computer based system having a graphical user interface.
Kennewick, Robert A.; Locke, David; Kennewick, Sr., Michael R.; Kennewick, Jr., Michael R.; Kennewick, Richard; Freeman, Tom, Method and system for asynchronously processing natural language utterances.
Anward, Jan; Pang, Christianne; Jonsson, Marcus; Samzelius, Jan, Method and system for text analysis based on the tagging, processing, and/or reformatting of the input text.
Hammer Bernard (Pfaffing DEX), Method for data reduction of digital picture signals by vector quantization of coefficients acquired by orthonormal tran.
Copperi Maurizio (Venaria ITX) Sereno Daniele (Torino ITX), Method of and device for speech signal coding and decoding by subband analysis and vector quantization with dynamic bit.
Rafii, Abbas; Bamji, Cyrus; Sze, Cheng-Feng; Torunoglu, Iihami, Methods for enhancing performance and data acquired from three-dimensional image systems.
Baker Bruce R. (Pittsburgh PA) Nyberg Eric H. (Pittsburgh PA), Natural language processing system and method for parsing a plurality of input symbol sequences into syntactically or pr.
Hobson Samuel D. (Seattle WA) Horvitz Eric (Kirkland WA) Heckerman David E. (Bellevue WA) Breese John S. (Mercer Island WA) Finkelstein Erich-Sren (Bellevue WA) Shaw Gregory L. (Kirkland WA) Flynn Ja, On-line help method and system utilizing free text query.
Cooper, Edwin Riley; Bierner, Gann; Graham, Laurel Kathleen; Yuret, Deniz; Williams, James Charles; Beghelli, Filippo, Ontology for use with a system, method, and computer readable medium for retrieving information and response to a query.
Dehlin, Joel P.; Chen, Christina Summer; Wilson, Andrew D.; Robbins, Daniel C.; Horvitz, Eric J.; Hinckley, Kenneth P.; Wobbrock, Jacob O., Recognizing gestures and using gestures for interacting with software applications.
Goldberg Randy G. ; Rosen Kenneth H. ; Sachs Richard M. ; Winthrop ; III Joel A., Selective noise/channel/coding models and recognizers for automatic speech recognition.
Gagnon, Jean; Roy, Philippe; Lagassey, Paul J., Speech interface system and method for control and interaction with applications on a computing system.
Hofmann,Thomas; Puzicha,Jan Christian, System and method for personalized search, information filtering, and for generating recommendations utilizing statistical latent class models.
Jarrell, Bruce; Nirenburg, Sergei; McShane, Marjorie Joan; Beale, Stephen, Techniques for implementing virtual persons in a system to train medical personnel.
Kuhn, Roland; Davis, Tony; Junqua, Jean-Claude; Zhao, Yi; Li, Weiying, Universal remote control allowing natural language modality for television and multimedia searches and requests.
Di Fabbrizio, Giuseppe; Dutton, Dawn L; Gupta, Narendra K.; Hollister, Barbara B.; Rahim, Mazin G; Riccardi, Giuseppe; Schapire, Robert Elias; Schroeter, Juergen, Voice-enabled dialog system.