| Field | Value |
|---|---|
| Country / Type | United States (US) patent, granted |
| IPC (7th edition) | |
| Application No. | US-0288848 (2011-11-03) |
| Registration No. | US-RE44326 (2013-06-25) |
| Inventor / Address | |
| Applicant / Address | |
| Attorney / Address | |
| Citation info | Cited by: 2; Cites: 310 |
A method and system of speech recognition presented by a back channel from multiple user sites within a network supporting cable television and/or video delivery is disclosed.
(Reissue claims: matter enclosed in brackets [ ] appeared in the original patent claims and was deleted by reissue; unbracketed matter was added by or retained in the reissue.)

1. A method [of using a back channel containing a multiplicity of speech channels from a multiplicity of user devices presented to a speech recognition system in a network supporting content delivery] for speech directed information delivery, comprising the steps of: [partitioning a received back channel containing a multiplicity of speech channels from a multiplicity of user devices into a multiplicity of received identified speech channels; processing said multiplicity of received identified speech channels to create recognized speech for each of said received identified speech channels; and transmitting a unique response to each of said user devices, based upon said recognized speech.] receiving speech information at a first device, wherein said first device is a wireless device; transferring said speech information from said first wireless device via a first network path to a speech recognition engine; and at said speech recognition engine, recognizing said speech information and effecting information delivery to a second device via a second network path.

2. The method of claim 1, [further comprising at least one of the steps of: determining a user site associated with a user device from said received identified speech channel; determining said associated user site from said recognized speech; determining said associated user site from said recognized speech and a speaker identification library; determining said associated user site from said recognized speech and a speech recognition library; and determining said associated user site from an identification within said speech channel] wherein said first network path and said second network path are different paths.

3. The method of claim 1, [further comprising the steps of: assessing said response identified as to said user device to create a financial consequence; and billing a user associated with said user device based upon said financial consequence] wherein said first device and said second device [paths] are different devices.
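The back-channel architecture in original claim 1 (demultiplexing many per-device speech channels, recognizing each, and returning a unique response per device) can be sketched as follows. This is a minimal illustration, not the patented implementation; `SpeechFrame`, the device-id tagging scheme, and the placeholder `recognize` function are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class SpeechFrame:
    device_id: str  # hypothetical per-device tag carried on the back channel
    audio: bytes    # audio payload belonging to that device's speech channel

def partition_back_channel(frames):
    """Partition the received back channel into identified per-device channels."""
    channels = {}
    for f in frames:
        channels.setdefault(f.device_id, []).append(f.audio)
    return channels

def recognize(chunks):
    """Placeholder for the speech recognition engine."""
    return f"<recognized {sum(len(c) for c in chunks)} bytes>"

def serve_back_channel(frames):
    """Recognize each identified channel; build a unique response per device."""
    return {dev: recognize(chunks)
            for dev, chunks in partition_back_channel(frames).items()}

frames = [SpeechFrame("stb-1", b"\x01\x02"),
          SpeechFrame("stb-2", b"\x03"),
          SpeechFrame("stb-1", b"\x04")]
responses = serve_back_channel(frames)
# responses maps each device id to its own recognition result
```

Each user device gets a response derived only from its own channel, which is the "unique response" property the claim turns on.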
4. The method of claim 1, [further comprising the steps of: assessing said response to create a financial consequence identified with said user site; communicating said financial consequence to said user; said user confirming said communicated financial consequence to create a financial commitment; and billing said user based upon said financial commitment] wherein said speech information comprises video search information; and wherein said information delivery comprises video information.

5. The method of claim [2, further comprising of the steps of: fetching a user profile list based upon said user device, said user profile list containing at least one user profile; and identifying said user based upon said recognized speech and based upon said user profile list] 1, wherein said speech information transfer comprises transferring said speech information in either of a partially recognized state or an unrecognized state.

6. The method of claim 1, wherein said [processing step comprising of the step of: responding to said recognized speech identified as to said user device based upon natural language to create a response uniquely identified with said user device] wireless device is used for input and output for control purposes, wherein said information delivery is to said second device which comprises a television and STB.

7. [A method for controlling a speech recognition system coupled to a network,] The method of claim 1, further comprising at least one of the steps of: [processing a multiplicity of received identified speech channels to create a multiplicity of recognized speech; responding to said recognized speech to create a recognized speech response that is unique to each of said multiplicity of recognized speech; and providing said speech recognition system at a back channel accessible by a multiplicity of user devices coupled to said network] determining a user site associated with a user of said first device; determining said associated user site from said recognized speech; determining said associated user site from said recognized speech and a speaker identification library; determining said associated user site from said recognized speech and a speech recognition library; and determining said associated user site from an identification within said speech channel.

8. The method of claim [7] 1, further comprising any of the steps of: [determining a user associated with user device from a received] assessing a response identified [speech channel; determining said user associated with said user device from said recognized speech; determining said user associated with said user device from said recognized speech and a speaker identification library; determining said user associated with said user device from said recognized speech and a speech recognition library; and determining said user associated with said user device from an identification within a speech channel] as to a user device comprising any of said first device and said second device to create a financial consequence; and billing a user associated with said user device based upon said financial consequence.

9. The method of claim [7] 1, further comprising the steps of: assessing [said] a response to create a financial consequence identified with a user [associated with a user device to create a] site; communicating said financial consequence to said user; said user confirming said communicated financial consequence to create a financial commitment; and billing said user based upon said financial [consequence] commitment.

10. The method of claim [7] 1, further comprising of the steps of: fetching a user profile list based upon said [user] first device, said user profile list containing at least one user profile; and identifying said user based upon recognized speech and based upon said user profile list.

11. [An apparatus for speech recognition in a network] The method of claim 1, further comprising the steps of [providing: a speech recognition system coupled to said network for receiving a back channel from a multiplicity of user devices; a back channel receiver for receiving said back channel; a speech channel partitioner for partitioning said received back channel into a multiplicity of received identified speech channels; a processor for processing said multiplicity of said received identified speech channels to create] responding to recognized speech [for each of said received identified speech channels; and responding] as to said [recognized speech] first device based upon natural language to create a [unique] response [for transmission to each of] uniquely identified with said user [devices] device.

12.
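Claims 4 and 9 (and their later counterparts) describe a billing flow: assess a financial consequence for a response, communicate it to the user, obtain the user's confirmation to create a financial commitment, then bill against that commitment. A minimal sketch; all three hooks (`communicate`, `confirm`, `bill`) are hypothetical callables, not any real billing API.

```python
def bill_for_response(cost, communicate, confirm, bill):
    """Assess-communicate-confirm-bill flow from the reissue claims.
    Returns the committed amount, or None if the user declines."""
    communicate(cost)        # communicate the financial consequence to the user
    if not confirm(cost):    # user confirmation creates the financial commitment
        return None          # no commitment, so no billing occurs
    bill(cost)               # bill the user based upon the commitment
    return cost

charges = []
committed = bill_for_response(
    4.99,
    communicate=lambda c: None,  # e.g., show the price on the TV screen
    confirm=lambda c: True,      # e.g., a spoken "yes" from the user
    bill=charges.append,         # record the charge against the account
)
# the user confirmed, so the 4.99 consequence becomes a billed commitment
```

The key structural point the claims make is that billing is gated on an explicit confirmation step, not triggered directly by the assessed consequence.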
[The apparatus of claim 11, said processing] A method for speech directed information delivery comprising [means for: determining a user associated with a user device from said received identified speech channel; determining said associated user from said recognized speech; determining said associated user from said recognized speech and a speaker identification library; determining said associated user from said recognized speech and a speech recognition library; and determining said associated user from an identification within said speech channel] receiving speech information at a first device, wherein said first device is a wireless device; transferring said speech information in an unrecognized state from said first device via a first network path to a speech recognition engine; and at said speech recognition engine, recognizing said speech information and effecting information delivery to a second device via a second network path, wherein said second device is capable of displaying electronically coded and propagated moving or still images and playing electronically coded and propagated audio; wherein said first network path and said second network paths are different.

13. The [apparatus] method of claim [11] 12, [further comprising: means for assessing content response identified as to said user to create a financial consequence to said user site; and billing said user site based upon said financial consequence] wherein said first device and said second device are different devices.

14. The [apparatus] method of claim [11] 12, [further comprising: means for fetching a user profile list based upon said user devices, said user profile list containing at least one user profile; and means for identifying said user based upon said recognized speech content and based upon said user profile list] wherein said speech information comprises video search information; and wherein said information delivery comprises video information.

15. The method of claim 12, wherein said speech information transfer comprises transferring said speech information in either of a partially recognized state or an unrecognized state.

16. The method of claim 12, wherein said wireless first device is used for input and output for control purposes, wherein said information delivery is to said second device which comprises a television and STB.

17. The method of claim 12, further comprising at least one of the steps of: determining a user site associated with a user of said first device; determining said associated user site from said recognized speech; determining said associated user site from said recognized speech and a speaker identification library; determining said associated user site from said recognized speech and a speech recognition library; and determining said associated user site from an identification within said speech channel.

18. The method of claim 12, further comprising the steps of: assessing a response identified as to a user device comprising any of said first device and said second device to create a financial consequence; and billing a user associated with said user device based upon said financial consequence.

19. The method of claim 12, further comprising the steps of: assessing a response to create a financial consequence identified with a user site; communicating said financial consequence to said user; said user confirming said communicated financial consequence to create a financial commitment; and billing said user based upon said financial commitment.

20. The method of claim 12, further comprising of the steps of: fetching a user profile list based upon said first device, said user profile list containing at least one user profile; and identifying said user based upon said recognized speech and based upon said user profile list.

21. The method of claim 12, further comprising the step of: responding to recognized speech identified as to said first device based upon natural language to create a response uniquely identified with said user device.

22. A method for speech directed information delivery, comprising: receiving speech information at a first device, wherein said first device is a wireless device; transferring said speech information, after processing to complete the initial stages of speech recognition, from said first device via a first network path to a speech recognition engine; and at said speech recognition engine, performing further processing to complete the recognition of said speech information and effecting information delivery to said first device via a second network path; wherein said first network path and said second network paths are different.

23. The method of claim 22, wherein said speech information comprises video search information; and wherein said information delivery comprises video information.

24. The method of claim 22, further comprising at least one of the steps of: determining a user site associated with a user of said first device; determining said associated user site from said recognized speech; determining said associated user site from said recognized speech and a speaker identification library; determining said associated user site from said recognized speech and a speech recognition library; and determining said associated user site from an identification within said speech channel.

25. The method of claim 22, further comprising the steps of: assessing said response identified as to said first device to create a financial consequence; and billing a user associated with said first device based upon said financial consequence.

26.
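Claim 22 splits recognition across the network: the wireless device completes the initial stages of speech recognition, and the engine completes it. A toy sketch of that split, with per-frame energy standing in for real acoustic features and a nearest-signature match standing in for a real decoder; the vocabulary and signatures are invented for illustration.

```python
def device_front_end(audio: bytes):
    """Stage 1, on the wireless device: reduce raw audio to a compact feature
    vector (per-frame energy here; real systems use acoustic features)."""
    frame = 4
    return [sum(audio[i:i + frame]) for i in range(0, len(audio), frame)]

def engine_back_end(features, vocabulary):
    """Stage 2, at the speech recognition engine: complete recognition by
    matching features against stored signatures (a toy nearest-match decoder)."""
    def distance(sig):
        return (sum(abs(a - b) for a, b in zip(sig, features))
                + abs(len(sig) - len(features)))
    return min(vocabulary, key=lambda w: distance(vocabulary[w]))

# Hypothetical two-word vocabulary with precomputed feature signatures.
vocabulary = {"play": [10, 3], "stop": [2, 9]}
features = device_front_end(bytes([5, 5, 0, 0, 1, 1, 1, 0]))  # sent over path 1
word = engine_back_end(features, vocabulary)                   # completed server-side
```

Only the small feature vector crosses the first network path, which is the practical motivation for splitting the stages this way.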
The method of claim 22, further comprising the steps of: assessing a response to create a financial consequence identified with a user site; communicating said financial consequence to said user; said user confirming said communicated financial consequence to create a financial commitment; and billing said user based upon said financial commitment.

27. The method of claim 22, further comprising of the steps of: fetching a user profile list based upon said first device, said user profile list containing at least one user profile; and identifying said user based upon said recognized speech and based upon said user profile list.

28. The method of claim 22, further comprising the step of: responding to recognized speech identified as to said first device based upon natural language to create a response uniquely identified with said first device.

29. A method for speech directed information delivery, comprising: receiving speech information at a first device, wherein said first device is a wireless device; performing speech recognition at said first device; transferring said recognized speech information from said first device via a first network path to a source of information; and effecting information delivery to said first device via a second network path; wherein said first network path and said second network paths are different.

30. The method of claim 29, wherein said speech information comprises video search information; and wherein said information delivery comprises video information.

31. The method of claim 29, further comprising at least one of the steps of: determining a user site associated with a user of said first device; determining said associated user site from said recognized speech; determining said associated user site from said recognized speech and a speaker identification library; determining said associated user site from said recognized speech and a speech recognition library; and determining said associated user site from an identification within said speech channel.

32. The method of claim 29, further comprising the steps of: assessing a response identified as to said first device to create a financial consequence; and billing a user associated with said first device based upon said financial consequence.

33. The method of claim 29, further comprising the steps of: assessing a response to create a financial consequence identified with said first site; communicating said financial consequence to said user; said user confirming said communicated financial consequence to create a financial commitment; and billing said user based upon said financial commitment.

34. The method of claim 29, further comprising of the steps of: fetching a user profile list based upon said first device, said user profile list containing at least one user profile; and identifying said user based upon said recognized speech and based upon said user profile list.

35. The method of claim 29, further comprising the step of: responding to said recognized speech identified as to said first device based upon natural language to create a response uniquely identified with said first device.

36. A method for speech directed information delivery, comprising: receiving speech information at a first device, wherein said first device is a wireless device; transferring said speech information from said first device via a first network path to a speech recognition engine; and at said speech recognition engine, performing processing to recognize said speech information and effecting information delivery to said first device via a second network path; wherein said first network path and said second network paths are different.

37. The method of claim 36, wherein said speech information comprises video search information; and wherein said information delivery comprises video information.

38. The method of claim 36, further comprising at least one of the steps of: determining a user site associated with a user of said first device; determining said associated user site from said recognized speech; determining said associated user site from said recognized speech and a speaker identification library; determining said associated user site from said recognized speech and a speech recognition library; and determining said associated user site from an identification within said speech channel.

39. The method of claim 36, further comprising the steps of: assessing said response identified as to said first device to create a financial consequence; and billing a user associated with said first device based upon said financial consequence.

40. The method of claim 36, further comprising the steps of: assessing a response to create a financial consequence identified with a user site; communicating said financial consequence to said user; said user confirming said communicated financial consequence to create a financial commitment; and billing said user based upon said financial commitment.

41. The method of claim 36, further comprising of the steps of: fetching a user profile list based upon said first device, said user profile list containing at least one user profile; and identifying said user based upon said recognized speech and based upon said user profile list.

42. The method of claim 36, further comprising the step of: responding to recognized speech identified as to said first device based upon natural language to create a response uniquely identified with said first device.
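Several dependent claims (10, 20, 27, 34, 41) fetch a per-device user profile list and identify the user from recognized speech against that list. A minimal sketch; the `PROFILES` store and the keyword-matching heuristic are illustrative stand-ins for a real speaker-identification library.

```python
# Hypothetical store mapping a device id to its user profile list.
PROFILES = {
    "stb-1": [("alice", "movies"), ("bob", "sports")],
}

def fetch_user_profiles(device_id):
    """Fetch the user profile list associated with a device."""
    return PROFILES.get(device_id, [])

def identify_user(device_id, recognized_speech):
    """Identify the user from recognized speech and the profile list.
    A keyword match stands in for a real speaker-identification model."""
    for user, keyword in fetch_user_profiles(device_id):
        if keyword in recognized_speech:
            return user
    return None

user = identify_user("stb-1", "show me sports highlights")
```

Scoping the candidate profiles to one device keeps identification cheap: the recognizer only has to discriminate among the handful of users registered at that device, not the whole subscriber base.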
Copyright KISTI. All Rights Reserved.