Efficient conversion of voice messages into text
Country / Type: United States (US) patent, granted
IPC (7th edition): G10L-015/00; G10L-021/00; H04M-001/64; H04M-011/00; G06F-019/00
Application number: US-0260923 (filed 2008-10-29)
Registration number: US-8239197 (granted 2012-08-07)
Inventors: Webb, Mike O.; Peterson, Bruce J.; Kaseda, Janet S.
Applicant: Intellisist, Inc.
Agent: Inouye, Patrick J. S.
Citation information: cited by 36 patents; cites 60 patents
Abstract
A system and method for efficiently transcribing verbal messages transmitted over the Internet (or other network) into text. The verbal messages are initially checked to ensure that they are in a valid format and include a return network address, and if so, are processed either as whole verbal messages or split into segments. These whole verbal messages and segments are processed by an automated speech recognition (ASR) program, which produces automatically recognized text. The automatically recognized text messages or segments are assigned to selected workbenches for manual editing and transcription, producing edited text. The segments of edited text are reassembled to produce whole edited text messages, which undergo post processing to correct minor errors and are output as an email, an SMS message, a file, or an input to a program. The automatically recognized text and manual edits thereof are returned as feedback to the ASR program to improve its accuracy.
Representative Claims
1. A method for transcribing verbal messages into text, comprising the steps of: receiving verbal messages over a network and queuing the verbal messages in a queue for processing into text; automatically processing at least portions of successive verbal messages from the queue with online processors using an automated speech recognition (ASR) program to produce corresponding text; assigning whole verbal messages or segments of the verbal messages that have been automatically processed to selected workbench stations for further editing and transcription by operators at the workbench stations; enabling the operators at the workbench stations to which the whole or the segments of the verbal messages have been assigned to listen to the verbal messages, correct errors in the text that was produced by the automatic processing, and transcribe portions of the verbal messages that have not been automatically processed by the ASR program, producing final text messages or segments of final text messages corresponding to the verbal messages that were in the queue; assembling the segments of the final text messages produced by the operators at the workbench stations from the segments of the verbal messages that were processed as text messages corresponding to the verbal messages that were processed, producing final output text messages; and applying post processing to the text corresponding to the verbal messages that were automatically processed, wherein: if editing the automatically produced text for one of the automatically processed whole verbal messages by at least one operator on the workbench station will exceed a required turn-around-time, further comprising the step of immediately post processing the automatically produced text without using any edits provided by the operator at the workbench station; and if editing the segments of one of the automatically processed verbal messages will exceed the required turn-around-time, further comprising the step of post processing any
text of the verbal message that was automatically recognized and has a confidence rating that is greater than a predefined minimum, any text of the automatically processed verbal message that has already been edited or transcribed by one of the operators on the workbench station, and any text of the verbal message that was automatically recognized and was moved into a workbench station queue but has not yet been edited by an operator at a workbench station.

2. The method of claim 1, further comprising the step of validating a format of the verbal messages and a return address for delivery of the output text messages before enabling queuing of each verbal message.

3. The method of claim 1, further comprising the step of assigning the verbal messages to specific online processors in accord with predefined assignment rules.

4. The method of claim 1, wherein the whole verbal messages are simultaneously sent to the online processors for processing using the ASR program and to a queue for processing by one of the workbench stations.

5. The method of claim 1, further comprising the step of separating audio content in at least one of the verbal messages from associated metadata, wherein the associated metadata includes one or more elements selected from the group consisting of: proper nouns; a caller name, if the verbal message is a voice mail; and a name of a person being called, if the verbal message is a voice mail.

6. The method of claim 5, wherein the audio content and the metadata of the verbal messages in the queue are input to the online processors for improving accuracy of the ASR program.

7. The method of claim 1, wherein the step of automatically processing includes the steps of: checking for common content patterns in the verbal messages to aid in automated speech recognition; and checking automatically recognized speech using a pattern matching technique to identify any common message formats.

8.
The method of claim 1, further comprising the step of: breaking up at least one of the verbal messages into the segments based on predefined rules, including one or more rules selected from the group consisting of: breaking the verbal message into the segments where silence is detected; breaking the verbal message into the segments so that the segments have a predefined maximum duration; and breaking the verbal message into the segments so that the segments have between a predefined minimum and a predefined maximum number of words.

9. The method of claim 8, further comprising the steps of: assigning confidence ratings to the segments of the verbal messages that were automatically recognized by the ASR program; assigning at least one of the verbal messages, the automatically recognized text, a timeline for the verbal message, and the confidence ratings of the segments to a workbench partial message queue; and withholding the segments that have a confidence rating above a predefined level from the workbench partial message queue, based on a high probability that the automatically recognized text is correct.

10. The method of claim 1, wherein the step of assigning the whole verbal messages or the segments of verbal messages comprises the steps of: assigning the whole verbal messages or the segments of verbal messages to a specific workbench station used by the operator eligible to process verbal messages of that type; and assigning the segments of verbal messages having a lower quality to workbench stations first to ensure said segments are transcribed with a highest quality, in a time allotted to process each of the verbal messages.

11.
The method of claim 1, wherein the operators at the workbench stations edit and control transcription of the verbal messages in a browsing program display, and wherein transcription of the whole verbal messages is selectively carried out in one of three modes, including: a word mode that includes keyboard inputs for specific transcription inputs; a line mode that facilitates looping through an audible portion of at least one of the whole verbal messages to focus on a single line of transcribed text at a time; and a whole message mode, in which the operator working at the workbench station listens to the whole verbal message to produce the corresponding text.

12. The method of claim 11, wherein transcription of the portions of the one or more verbal messages is carried out by one of the operators at the workbench station, and further comprising the step of displaying a graphical representation of an audio waveform for at least a part of the verbal message to the operator, with the portions to be transcribed visually indicated.

13. The method of claim 1, further comprising the step of applying post processing to the text corresponding to the verbal messages that were transcribed, for correcting minor errors in the text.

14. The method of claim 1, wherein the step of producing the final output text messages comprises the steps of making the final output text messages available to an end user by transmitting the final output text messages to the end user in connection with one of: an email message transmitted over the network; a short message service message transmitted over the network and through a telephone system; a file transmitted over the network to a program interface; and a file transmitted over the network to a web portal.

15. The method of claim 1, further comprising the step of employing the edits made to the text produced by the ASR program by operators at the workbench stations, as feedback used to improve an accuracy of the ASR program.

16.
The method of claim 1, further comprising the steps of: determining a confidence level for the portions of the verbal messages recognized by the ASR program, the confidence level being indicative of a likely accuracy of the text output by the ASR program; giving priority to assigning the portions of the verbal messages and the corresponding text that were automatically recognized having a lower confidence level to the operators at the workbench stations for editing over the portions of the verbal messages and the corresponding text that were automatically recognized having a higher confidence level, so that more of the difficult portions of the verbal messages will be edited and transcribed by the operators, compared to easier portions; assessing a demand for transcribing the verbal messages to determine a transcribing load on available operators at the workbench stations; and varying a percentage of the final output text messages that comprises only the automatically recognized text, relative to a remaining percentage that is output by the operators as a function of the load, so that a growing backlog of the verbal messages to be transcribed is avoided by using a greater percentage of the automatically recognized text for the final output text messages, as the load increases.

17.
A system for efficiently transcribing verbal messages that are provided to the system over a network, to produce corresponding text, comprising: a plurality of processors coupled to the network, for receiving and processing verbal messages to be transcribed to text; one or more of the plurality of processors processing the verbal messages using an automatic speech recognition (ASR) program to produce automatically recognized text; one or more of the plurality of processors on corresponding one or more workbench stations each providing a graphical interface on a display to enable operators using the one or more workbench stations to review and edit the automatically recognized text, and to further transcribe the verbal messages to produce the edited text; and one or more of the plurality of processors reassembling the edited text, producing final output text messages that can be conveyed to an end user, wherein the one or more of the plurality of processors apply post processing to the automatically recognized text before producing the final output text messages corresponding to the verbal messages that were processed using the ASR program: if editing the automatically recognized text for a whole verbal message by one of the operators on the workbench station will exceed a required turn-around-time, the automatically recognized text is submitted for post processing without using any edits provided by the operator at the workbench station; and if editing segments of at least one of the processed verbal messages will exceed the required turn-around-time, then immediately submitting for post processing: any of the segments of the processed verbal message that were automatically recognized and that have a confidence rating that is greater than a predefined minimum; any of the segments of the processed verbal message that have already been edited or transcribed by one of the operators on the workbench station; and any of the segments of the processed verbal message that were
moved into a workbench station queue but have not yet been edited by one of the operators at the workbench station.

18. The system of claim 17, wherein the one or more of the plurality of processors receive the verbal messages transmitted over the network and assign the verbal messages received to others of the plurality of processors based on predefined assignment rules.

19. The system of claim 18, wherein the one or more of the plurality of processors validate an audio format and check for a return address to a location on the network for each of the verbal messages that have been received, terminate processing of any verbal message that has an invalid audio format or lacks a return address, queue the verbal messages that are found to have a valid audio format in a new verbal message queue, and assign the verbal messages in the new verbal message queue to one or more of the others of the plurality of processors based on at least one of: a content type of the verbal messages; an availability of the other processors; and a priority level of the verbal messages.

20. The system of claim 17, wherein the one or more of the plurality of processors input the verbal messages to the ASR program and also add the verbal messages to a workbench queue for manual processing by the one or more operators.

21. The system of claim 17, wherein the one or more of the plurality of processors identify patterns in the verbal messages and in the automatically recognized text to determine a confidence rating for the segments of the verbal messages.

22. The system of claim 21, wherein if the confidence rating for one of the segments is above a predefined level, the one or more of the plurality of processors do not submit the segment for further processing by one of the operators at the workbench station, but instead submit the segment for final assembly into an edited text message.

23.
The system of claim 21, wherein the one or more of the plurality of processors break up at least one of the verbal messages into the segments based on predefined rules, including one or more predefined rules selected from the group consisting of: breaking the verbal message into successive segments at points in the verbal message where silence is detected between the successive segments; breaking the verbal message into the segments so that the segments have a predefined maximum duration; and breaking the verbal message into the segments so that the segments have between a predefined minimum and a predefined maximum number of words.

24. The system of claim 17, wherein the ASR program is provided input of both audio data and metadata comprising the verbal messages, to improve an accuracy with which the text is automatically recognized when processing the verbal messages with the ASR program.

25. The system of claim 24, wherein the metadata for each verbal message includes at least one or more elements selected from the group consisting of: proper nouns; a caller name, if the verbal message is a voice mail; and a name of a person being called, if the verbal message is a voice mail.

26. The system of claim 17, wherein the segments of one or more of the verbal messages having a lower quality are assigned to the workbench stations for editing and transcription by the operators before the segments having a higher quality, to ensure the segments having lower quality are manually transcribed to achieve greater accuracy, in a time allotted to transcribe each of the verbal messages, and wherein different segments of the verbal message may be assigned to different workbench stations for editing and transcription by a plurality of different operators.

27.
The system of claim 17, wherein the workbench station includes a display on which a graphical representation of an audio waveform is displayed for at least a part of one or more of the verbal messages to be transcribed by one of the operators of the workbench station, with the segment of the verbal message to be transcribed visually indicated.

28. The system of claim 17, wherein the one or more processors apply post processing to the text before producing the output text corresponding to the verbal messages that were transcribed, for correcting minor errors in the text.

29. The system of claim 17, wherein the final output text messages are made available to an end user by transmitting the final output text messages to the end user in connection with one of: an email message transmitted over the network; a short message service message transmitted over the network and through a telephone system; a file transmitted over the network to a program interface; and a file transmitted over the network to a web portal.

30. The system of claim 17, wherein the edits made by the operators at the workbench stations to the automatically recognized text produced by the ASR program are employed as feedback for use in improving an accuracy of the ASR program.

31.
The system of claim 17, wherein: a confidence level is determined for portions of the verbal messages recognized by the ASR program, the confidence level being indicative of a likely accuracy of the text output by the ASR program; priority is given to assigning the segments of the verbal messages and the automatically recognized text having a lower confidence level to the operators at the workbench stations for editing over the segments of the verbal messages and the text that were automatically recognized having a higher confidence level, so that more of the difficult segments of the verbal messages will be edited and transcribed by the operators, compared to easier portions; a current demand for transcribing the verbal message is assessed to determine a transcribing load on available operators at the workbench stations; and a percentage of the final output text messages that comprise only the automatically recognized text is varied, relative to a remaining percentage that is output by the operators as a function of the load, so that a growing backlog of the verbal messages to be transcribed by the system is avoided by using a greater percentage of the automatically recognized text for the final output text messages, as the load increases.
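The turn-around-time fallback of claims 1 and 17 can be illustrated with a small sketch: when editing would exceed the required turn-around-time, three categories of segment go straight to post processing. The function name, data layout, and threshold value are assumptions for illustration; the patent only specifies a "predefined minimum" confidence:

```python
def segments_to_post_process(segments, will_exceed_tat, min_confidence=0.8):
    """Select segment text to submit immediately for post processing when
    editing would exceed the required turn-around-time (TAT), per claim 17:
    (a) segments already edited by a workbench operator,
    (b) ASR text whose confidence rating beats the predefined minimum, and
    (c) ASR text queued for a workbench but not yet edited.
    Each segment is a dict with keys asr_text, confidence, and optionally
    edited_text and queued (all names are assumptions)."""
    if not will_exceed_tat:
        return []  # normal path: wait for operator edits instead
    ready = []
    for seg in segments:
        if seg.get("edited_text") is not None:       # (a) operator-edited
            ready.append(seg["edited_text"])
        elif seg["confidence"] > min_confidence:     # (b) trusted ASR output
            ready.append(seg["asr_text"])
        elif seg.get("queued", False):               # (c) queued, unedited
            ready.append(seg["asr_text"])
    return ready
```

A segment that matches none of the three categories is simply dropped from the immediate output, which mirrors the claim's enumeration of what is post processed when the deadline would otherwise be missed.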
Patents cited by this patent (60)
Kuhn, Roland; Junqua, Jean-Claude, Adaptation system and method for E-commerce and V-commerce applications.
Ali, Syed S.; Cannon, Joseph M.; Johanson, James A.; Sopko, Joseph A., Apparatus and method for grouping and prioritizing voice messages for convenient playback.
Emerson, William D. (Boulder, CO); Hill, Deborah J. (Denver, CO); Loeb, Karen C. (Englewood, CO); Mizrahi, Albert (Boulder, CO); Schlegel, Charles T. (Boulder, CO); Scott, Lowell C. (Old Bridge, NJ), Integrated message service system.
Lewis, James R.; Ortega, Kerry A.; Van Buskirk, Ronald E.; Wang, Huifang; Nassiff, Amado; Ballard, Barbara E., Method and apparatus for improving speech command recognition accuracy using event-based constraints.
Bruckner, Markus (Basel, CH); Guanella, Gustav (Zurich, CH); Vouga, Claude Andre (Baden, CH), Method and apparatus for the secret transmission of speech signals.
Skladman, Julia; Thornberry, Robert J., Jr.; Chatterley, Bruce A.; Ng, Alexander Siu-Kay (CA); Peterson, Bruce L., Method and system for interfacing systems unified messaging with legacy systems located behind corporate firewalls.
Grajski, Kamil, Method of and apparatus for improving productivity of human reviewers of automatically transcribed documents generated by media conversion systems.
Matsuura, Yoshihiro (Funabashi, JP); Skinner, Toby (Beaverton, OR), Speaker independent speech recognition system and method using neural network and DTW matching technique.
Suzuki, Matsumi (Ebina, JP); Morino, Tetsuro (Ebina, JP); Yokota, Shozo (Ebina, JP), Speech recognition method and apparatus adapted to a plurality of different speakers.
Cheston, Frank C., III; Hatton, Patricia V., Voice mail system for obtaining forwarding number information from directory assistance systems having speech recognition.
Bayer, Theodore F.; Adkins, Donald R.; Corbin, Gregory A., Method and system for converting audio text files originating from audio files to searchable text and for processing the searchable text.
Weng, Fuliang; Yan, Baoshi; Shen, Zhongnan; Feng, Zhe; Xu, Kui; Li, Katrina, System and method for interacting with live agents in an automated call center.
Thatcher, Gregory Garland; Jacobson, Joshua Robert Russell; Cort, Frank J.; Smith, Adam Michael, Systems and methods to provide assistance during user input.