Audio-visual speech recognition with scattering operators
IPC Classification Information
Country / Type
United States (US) Patent (Granted)
International Patent Classification (IPC, 7th edition)
G10L-015/25
G10L-015/16
G06K-009/46
G06K-009/00
G06K-009/66
G06K-009/52
G10L-021/02
G06T-007/60
Application Number
US-0639149 (2017-06-30)
Grant Number
US-10181325 (2019-01-15)
Inventors / Address
Marcheret, Etienne
Vopicka, Josef
Goel, Vaibhava
Applicant / Address
Nuance Communications, Inc.
Agent / Address
Colandero, Brian J.
Citation Information
Cited by: 0
Patents cited: 3
Abstract
Aspects described herein are directed towards methods, computing devices, systems, and computer-readable media that apply scattering operations to extracted visual features of audiovisual input to generate predictions regarding the speech status of a subject. Visual scattering coefficients generated according to one or more aspects described herein may be used as input to a neural network operative to generate the predictions regarding the speech status of the subject. Predictions generated based on the visual features may be combined with predictions based on audio input associated with the visual features. In some embodiments, the extracted visual features may be combined with the audio input to generate a combined feature vector for use in generating predictions.
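For illustration, the fusion architecture summarized in the abstract (a visual network fed with scattering coefficients, an audio network, and a third network consuming the fused feature vector) can be sketched as follows. This is a minimal sketch assuming PyTorch; the module names, layer sizes, and the two-class speaking/not-speaking output are hypothetical choices, not details taken from the patent.

```python
# Minimal sketch of the three-network audiovisual fusion (hypothetical sizes).
import torch
import torch.nn as nn

class AudioVisualSpeechNet(nn.Module):
    def __init__(self, n_scatter=256, n_audio=40, n_hidden=128, n_states=2):
        super().__init__()
        # First network: processes the vector of visual scattering coefficients.
        self.visual_net = nn.Sequential(nn.Linear(n_scatter, n_hidden), nn.ReLU())
        # Second network: processes the audio features (e.g., filterbank frames).
        self.audio_net = nn.Sequential(nn.Linear(n_audio, n_hidden), nn.ReLU())
        # Third network: consumes the fused audiovisual feature vector and
        # predicts the speech status of the subject (assumed two classes here).
        self.fusion_net = nn.Sequential(
            nn.Linear(2 * n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_states),
        )

    def forward(self, scatter_vec, audio_vec):
        v = self.visual_net(scatter_vec)      # first output (visual stream)
        a = self.audio_net(audio_vec)         # second output (audio stream)
        fused = torch.cat([v, a], dim=-1)     # fused audiovisual feature vector
        return self.fusion_net(fused)         # prediction logits

# Usage with a batch of 4 frames:
net = AudioVisualSpeechNet()
logits = net(torch.randn(4, 256), torch.randn(4, 40))
print(logits.shape)  # torch.Size([4, 2])
```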
Representative Claims
1. A method comprising: receiving, by a computing device, audiovisual input comprising audio input and video input associated with a subject; extracting, by the computing device, visual features from the video input; applying, by the computing device, a scattering operation to the extracted visual features to generate a vector of scattering coefficients; providing the vector of scattering coefficients as input to a first neural network for visual processing; providing the audio input to a second neural network for audio processing; combining, by the computing device, a first output of the first neural network with a second output of the second neural network to generate a fused audiovisual feature vector based on the audiovisual input; providing the fused audiovisual feature vector to a third neural network for audiovisual processing; and generating, by the computing device and using the third neural network, a first prediction regarding a speech status of the subject based on the fused audiovisual feature vector.

2. The method of claim 1, wherein providing the vector of scattering coefficients as input to the first neural network for visual processing comprises: normalizing the vector of scattering coefficients to generate a first normalized vector of scattering coefficients; aggregating a plurality of normalized vectors of scattering coefficients, including the first normalized vector of scattering coefficients, to generate a set of aggregated visual feature vectors; and providing the set of aggregated visual feature vectors to the first neural network for visual input processing.

3. The method of claim 1, wherein the video input is sampled at a first frequency and the audio input is sampled at a different second frequency.

4. The method of claim 1, wherein the vector of scattering coefficients is in a first dimensional space, and wherein applying the scattering operation to the extracted visual features to generate the vector of scattering coefficients comprises: applying the scattering operation to the extracted visual features to generate a second vector of scattering coefficients in a second dimensional space; and projecting the second vector of scattering coefficients into the first dimensional space to generate the vector of scattering coefficients in the first dimensional space, wherein the second dimensional space is of a higher dimensionality than the first dimensional space.

5. The method of claim 1, wherein applying the scattering operation to the extracted visual features to generate a vector of scattering coefficients comprises generating first order scattering coefficients and second order scattering coefficients.

6. The method of claim 1, further comprising: generating, by the computing device and using the first neural network, a second prediction regarding the speech status of the subject based on the vector of scattering coefficients; generating, by the computing device and using the second neural network, a third prediction regarding the speech status of the subject based on the audio input; and combining the first prediction, the second prediction, and the third prediction to generate a combined prediction regarding the speech status of the subject.

7. The method of claim 1, wherein the first prediction regarding the speech status of the subject is used to recognize speech content of the audiovisual input.
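Claims 1 and 5 call for a scattering operation producing first order and second order coefficients. As a rough illustration of that general technique (a cascade of complex band-pass filtering, modulus, and local averaging), here is a minimal 1-D sketch in NumPy; the Gabor-style filter design, the scales, and the averaging window are illustrative assumptions, not the patent's filters, and real visual features would typically be 2-D.

```python
# Hypothetical 1-D scattering: first order |x * psi| * phi and
# second order ||x * psi_1| * psi_2| * phi coefficients.
import numpy as np

def gabor(n, freq, sigma):
    """Complex Gabor band-pass filter of length n (hypothetical design)."""
    t = np.arange(n) - n // 2
    return np.exp(-((t / sigma) ** 2) / 2) * np.exp(2j * np.pi * freq * t)

def lowpass(x, width):
    """Local averaging (the phi of scattering) via a moving-average window."""
    k = np.ones(width) / width
    return np.convolve(x, k, mode="same")

def scattering_coeffs(x, freqs=(0.25, 0.125, 0.0625), sigma=8, width=32):
    first, second = [], []
    for f1 in freqs:
        # Wavelet-modulus at the first scale.
        u1 = np.abs(np.convolve(x, gabor(4 * sigma, f1, sigma), mode="same"))
        first.append(lowpass(u1, width))            # first order coefficients
        for f2 in freqs:
            if f2 >= f1:                            # frequency-decreasing paths only
                continue
            u2 = np.abs(np.convolve(u1, gabor(4 * sigma, f2, sigma), mode="same"))
            second.append(lowpass(u2, width))       # second order coefficients
    # Concatenate into the "vector of scattering coefficients" of claim 1.
    return np.concatenate(first + second)

coeffs = scattering_coeffs(np.random.randn(256))
print(coeffs.shape)  # (1536,) for three scales on a length-256 signal
```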
8. The method of claim 1, further comprising: determining, based on the first prediction regarding the speech status of the subject, a synchrony state of the audio input and the video input, wherein the synchrony state indicates whether the audio input is in sync with the video input.

9. The method of claim 1, further comprising: determining, based on the first prediction regarding the speech status of the subject, whether speech in the audio input originates from a foreground source or a background source.

10. One or more non-transitory computer readable media comprising instructions that, when executed by one or more processors, cause the one or more processors to perform steps comprising: receiving audiovisual input comprising audio input and video input associated with a subject; extracting visual features from the video input; applying a scattering operation to the extracted visual features to generate a vector of scattering coefficients in a first dimensional space; providing the vector of scattering coefficients as input to a first neural network for visual processing; providing the audio input to a second neural network for audio processing; combining a first output of the first neural network with a second output of the second neural network to generate a fused audiovisual feature vector based on the audiovisual input; providing the fused audiovisual feature vector to a third neural network for audiovisual processing; and generating, using the third neural network, a prediction regarding a speech status of the subject based on the fused audiovisual feature vector.

11. The one or more non-transitory computer readable media of claim 10, wherein providing the vector of scattering coefficients as input to the first neural network for visual processing comprises: normalizing the vector of scattering coefficients to generate a first normalized vector of scattering coefficients; aggregating a plurality of normalized vectors of scattering coefficients, including the first normalized vector of scattering coefficients, to generate a set of aggregated visual feature vectors; and providing the set of aggregated visual feature vectors to the first neural network for visual input processing.

12. The one or more non-transitory computer readable media of claim 10, wherein the video input is sampled at a first frequency and the audio input is sampled at a different second frequency.

13. The one or more non-transitory computer readable media of claim 10, wherein applying the scattering operation to the extracted visual features to generate the vector of scattering coefficients comprises: applying the scattering operation to the extracted visual features to generate a second vector of scattering coefficients in a second dimensional space; and projecting the second vector of scattering coefficients into the first dimensional space to generate the vector of scattering coefficients in the first dimensional space, wherein the second dimensional space is of a higher dimensionality than the first dimensional space.

14. The one or more non-transitory computer readable media of claim 10, wherein the prediction regarding the speech status of the subject is used to recognize speech content of the audiovisual input.

15. The one or more non-transitory computer readable media of claim 10, wherein the prediction regarding the speech status of the subject is used to determine a synchrony state of the audio input and the video input, wherein the synchrony state indicates whether the audio input is in sync with the video input.
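Claims 2 and 11 describe normalizing each vector of scattering coefficients and aggregating a plurality of normalized vectors before they reach the first neural network. Below is a minimal sketch, assuming per-vector mean/variance normalization and a symmetric temporal context window; both are hypothetical choices, since the claims do not specify the normalization or aggregation rule.

```python
# Normalize each scattering vector, then aggregate a temporal context window.
import numpy as np

def normalize(vec, eps=1e-8):
    """Mean/variance-normalize one vector of scattering coefficients."""
    return (vec - vec.mean()) / (vec.std() + eps)

def aggregate(vectors, context=2):
    """Stack each normalized vector with +/- `context` neighbors, giving the
    set of aggregated visual feature vectors fed to the first network."""
    norm = [normalize(v) for v in vectors]
    padded = [norm[0]] * context + norm + [norm[-1]] * context  # edge padding
    return np.stack([
        np.concatenate(padded[i:i + 2 * context + 1])
        for i in range(len(norm))
    ])

frames = [np.random.randn(64) for _ in range(10)]   # 10 scattering vectors
agg = aggregate(frames)
print(agg.shape)  # (10, 320): each row spans 5 frames of 64 coefficients
```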
16. The one or more non-transitory computer readable media of claim 10, wherein the prediction regarding the speech status of the subject is used to determine whether speech in the audio input originates from a foreground source or a background source.

17. An apparatus comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the apparatus to: receive audiovisual input comprising audio input and video input associated with a subject; extract visual features from the video input; apply a scattering operation to the extracted visual features to generate a vector of scattering coefficients; provide the vector of scattering coefficients as input to a first neural network for visual processing; provide the audio input to a second neural network for audio processing; combine a first output of the first neural network with a second output of the second neural network to generate a fused audiovisual feature vector based on the audiovisual input; provide the fused audiovisual feature vector to a third neural network for audiovisual processing; and generate, using the third neural network, a first prediction regarding a speech status of the subject based on the fused audiovisual feature vector.

18. The apparatus of claim 17, wherein the video input is sampled at a first frequency and the audio input is sampled at a different second frequency.

19. The apparatus of claim 17, wherein the vector of scattering coefficients is in a first dimensional space, and wherein the instructions, when executed by the one or more processors, further cause the apparatus to: generate, using the first neural network, a second prediction regarding the speech status of the subject based on the vector of scattering coefficients; generate, using the second neural network, a third prediction regarding the speech status of the subject based on the audio input; and combine the first prediction, the second prediction, and the third prediction to generate a combined prediction regarding the speech status of the subject.

20. The apparatus of claim 17, wherein the instructions, when executed by the one or more processors, further cause the apparatus to: determine, based on the first prediction regarding the speech status of the subject, whether speech in the audio input originates from a foreground source or a background source.
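Claims 6 and 19 combine the audiovisual, visual-only, and audio-only predictions into a combined prediction. The claims do not specify the combination rule; the sketch below assumes a weighted log-linear combination of per-class posteriors with hypothetical stream weights.

```python
# Hypothetical fusion of three per-class posteriors into one prediction.
import numpy as np

def combine_predictions(p_av, p_visual, p_audio, weights=(0.5, 0.25, 0.25)):
    """Weighted log-linear combination, renormalized over the classes."""
    logp = sum(w * np.log(p + 1e-12)
               for w, p in zip(weights, (p_av, p_visual, p_audio)))
    p = np.exp(logp - logp.max())          # stabilize before normalizing
    return p / p.sum()

# Two-class example: [speaking, not speaking]
combined = combine_predictions(np.array([0.7, 0.3]),
                               np.array([0.6, 0.4]),
                               np.array([0.8, 0.2]))
print(combined)
```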
Patents cited by this patent (3)
Chu, Stephen Mingyu; Goel, Vaibhava; Marcheret, Etienne; Potamianos, Gerasimos, Method for likelihood computation in multi-stream HMM based speech recognition.
Stork, David G. (Stanford, CA); Wolff, Gregory J. (Mountain View, CA), Neural network acoustic and visual speech recognition system training method and apparatus.
Chu, Stephen Mingyu; Goel, Vaibhava; Marcheret, Etienne; Potamianos, Gerasimos, System and method for likelihood computation in multi-stream HMM based speech recognition.