Audio-visual speech recognition with scattering operators
IPC Classification
Country / Type
United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition)
G06K-009/62
G10L-015/25
G10L-015/16
G10L-021/02
G06K-009/66
G06K-009/52
G06T-007/60
G06K-009/00
G06K-009/46
Application number: US-0835319 (2015-08-25)
Registration number: US-9697833 (2017-07-04)
Inventors / Address
Marcheret, Etienne
Vopicka, Josef
Goel, Vaibhava
Applicant / Address
Nuance Communications, Inc.
Agent / Address
Banner & Witcoff, Ltd.
Citation information
Cited-by count: 0
Patents cited: 3
Abstract
Aspects described herein are directed towards methods, computing devices, systems, and computer-readable media that apply scattering operations to extracted visual features of audiovisual input to generate predictions regarding the speech status of a subject. Visual scattering coefficients generated according to one or more aspects described herein may be used as input to a neural network operative to generate the predictions regarding the speech status of the subject. Predictions generated based on the visual features may be combined with predictions based on audio input associated with the visual features. In some embodiments, the extracted visual features may be combined with the audio input to generate a combined feature vector for use in generating predictions.
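The abstract's pipeline (extracted visual features → scattering coefficients → neural-network speech-status prediction) can be sketched as follows. This is a minimal illustration, not the patented implementation: the toy 1-D Morlet-like filters, the synthetic feature trajectory, and the untrained one-layer scorer are all assumptions of this sketch; the patent does not specify filter design or network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def scattering_coefficients(x, num_scales=4):
    """Toy 1-D scattering: first- and second-order coefficients.

    First order: |x * psi_j| averaged; second order: |(|x * psi_j|) * psi_k|
    averaged for k > j. The Morlet-like filters here are illustrative; a real
    system would use a proper scattering implementation.
    """
    n = len(x)
    t = np.arange(n) - n // 2
    filters = []
    for j in range(num_scales):
        sigma = 2.0 ** j
        psi = np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * t / (4 * sigma))
        filters.append(psi / np.linalg.norm(psi))

    def conv_mod(sig, psi):
        # wavelet modulus: |signal convolved with filter|
        return np.abs(np.convolve(sig, psi, mode="same"))

    first = [conv_mod(x, psi) for psi in filters]
    coeffs = [u.mean() for u in first]                            # first order
    for j in range(num_scales):
        for k in range(j + 1, num_scales):
            coeffs.append(conv_mod(first[j], filters[k]).mean())  # second order
    return np.array(coeffs)

# Hypothetical visual feature trajectory (e.g. a mouth-region measurement over time)
visual_feature = rng.standard_normal(256)
s = scattering_coefficients(visual_feature)

# Tiny one-layer "network" scoring speech vs. non-speech; weights are random
# here, whereas a trained model would supply them.
W = rng.standard_normal((2, len(s)))
logits = W @ s
p_speech = np.exp(logits)[1] / np.exp(logits).sum()
print(f"{len(s)} scattering coefficients, P(speaking) = {p_speech:.3f}")
```

With 4 scales this yields 4 first-order plus 6 second-order coefficients; the scattering vector is stable to small deformations of the input, which is the usual motivation for using it over raw pixels.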
Representative Claims
1. A method comprising: receiving, by a computing device, audiovisual input comprising audio input and video input associated with a subject; extracting, by the computing device, visual features from the video input; applying, by the computing device, a scattering operation to the extracted visual features to generate a vector of scattering coefficients in a first dimensional space; providing the vector of scattering coefficients as input to a first neural network for visual processing; generating, by the computing device and using the first neural network, a first prediction regarding a speech status of the subject based on the vector of scattering coefficients; providing the audio input to a second neural network for audio processing; generating, by the computing device and using the second neural network, a second prediction regarding the speech status of the subject based on the audio input; comparing, by the computing device, a first output of the first neural network with a second output of the second neural network to determine a synchrony state of the audio input and the video input, wherein the synchrony state indicates whether the audio input is in-sync with the video input; and storing the synchrony state of the audio input and the video input.

2. The method of claim 1, wherein providing the vector of visual scattering features as input to the first neural network for visual processing comprises: normalizing the vector of scattering coefficients to generate a first normalized vector of scattering coefficients; aggregating a plurality of normalized vectors of scattering coefficients including the first normalized vector of scattering coefficients to generate a set of aggregated visual feature vectors; and providing the set of aggregated visual feature vectors to the first neural network for visual input processing.

3. The method of claim 1, further comprising: combining, by the computing device, the first output of the first neural network with the second output of the second neural network to generate a fused audiovisual feature vector based on the audiovisual input; providing the fused audiovisual feature vector to a third neural network for audiovisual processing; and generating, by the computing device and using the third neural network, a third prediction regarding the speech status of the subject based on the fused audiovisual feature vector.

4. The method of claim 1, wherein the video input is sampled at a first frequency and the audio input is sampled at a different second frequency.

5. The method of claim 1, wherein applying the scattering operation to the extracted visual features to generate a vector of visual scattering features in a first dimensional space comprises: applying the scattering operation to the extracted visual features to generate a second scattering vector in a second dimensional space; and projecting the second scattering vector into the first dimensional space to generate the vector of visual scattering features in the first dimensional space, wherein the second dimensional space is of a higher dimensionality than the first dimensional space.

6. The method of claim 1, wherein applying the scattering operation to the extracted visual features to generate a vector of visual scattering features in a first dimensional space comprises generating first order scattering coefficients and second order scattering coefficients.

7. The method of claim 1, wherein the first prediction regarding the speech status of the subject is used to recognize speech content of the audiovisual input.

8. One or more non-transitory computer readable media comprising instructions that, when executed by a processor, cause the processor to perform steps comprising: receiving, by a computing device, audiovisual input comprising audio input and video input associated with a subject; extracting, by the computing device, visual features from the video input; applying, by the computing device, a scattering operation to the extracted visual features to generate a vector of scattering coefficients in a first dimensional space; providing the vector of scattering coefficients as input to a first neural network for visual processing; providing the audio input to a second neural network for audio processing; comparing, by the computing device, a first output of the first neural network with a second output of the second neural network to determine a synchrony state of the audio input and the video input, wherein the synchrony state indicates whether the audio input is in-sync with the video input; and storing the synchrony state of the audio input and the video input.

9. The computer readable media of claim 8, wherein providing the vector of visual scattering features as input to the first neural network for visual processing comprises: normalizing the vector of scattering coefficients to generate a first normalized vector of scattering coefficients; aggregating a plurality of normalized vectors of scattering coefficients including the first normalized vector of scattering coefficients to generate a set of aggregated visual feature vectors; and providing the set of aggregated visual feature vectors to the first neural network for visual input processing.

10. The computer readable media of claim 8, wherein the instructions, when executed by the processor, further cause the processor to perform steps comprising: combining, by the computing device, the first output of the first neural network with the second output of the second neural network to generate a fused audiovisual feature vector based on the audiovisual input; providing the fused audiovisual feature vector to a third neural network for audiovisual processing; and generating, by the computing device and using the third neural network, a third prediction regarding the speech status of the subject based on the fused audiovisual feature vector.

11. The computer readable media of claim 8, wherein the video input is sampled at a first frequency and the audio input is sampled at a different second frequency.

12. The computer readable media of claim 8, wherein applying the scattering operation to the extracted visual features to generate a vector of visual scattering features in a first dimensional space comprises: applying the scattering operation to the extracted visual features to generate a second scattering vector in a second dimensional space; and projecting the second scattering vector into the first dimensional space to generate the vector of visual scattering features in the first dimensional space, wherein the second dimensional space is of a higher dimensionality than the first dimensional space.

13. The computer readable media of claim 8, wherein applying the scattering operation to the extracted visual features to generate a vector of visual scattering features in a first dimensional space comprises generating first order scattering coefficients and second order scattering coefficients.

14. The computer readable media of claim 8, wherein the instructions, when executed by the processor, further cause the processor to perform steps comprising: generating, by the computing device and using the first neural network, a first prediction regarding a speech status of the subject based on the vector of scattering coefficients, wherein the first prediction regarding the speech status of the subject is used to recognize speech content of the audiovisual input.

15. A system comprising: a processor; and memory storing instructions that, when executed by the processor, cause the processor to: receive audiovisual input comprising audio input and video input associated with a subject; extract visual features from the video input; apply a scattering operation to the extracted visual features to generate a vector of scattering coefficients in a first dimensional space; provide the vector of scattering coefficients as input to a first neural network for visual processing; generate, using the first neural network, a first prediction regarding a speech status of the subject based on the vector of scattering coefficients; normalize the audio input to generate normalized audio input; combine the vector of scattering coefficients with the normalized audio input to generate a fused audiovisual feature vector; provide the fused audiovisual feature vector to a second neural network for audiovisual processing; and generate, using the second neural network, a second prediction regarding the speech status of the subject based on a second output of the second neural network.

16. The system of claim 15, wherein the instructions, when executed by the processor, cause the processor to provide the vector of visual scattering features as input to the first neural network for visual processing by causing the processor to: normalize the vector of scattering coefficients to generate a first normalized vector of scattering coefficients; aggregate a plurality of normalized vectors of scattering coefficients including the first normalized vector of scattering coefficients to generate a set of aggregated visual feature vectors; and provide the set of aggregated visual feature vectors to the first neural network for visual input processing.

17. The system of claim 15, wherein the instructions, when executed by the processor, further cause the processor to: combine a first output of the first neural network with a second output of the second neural network to generate a fused audiovisual feature vector based on the audiovisual input; providing the fused audiovisual feature vector to a third neural network for audiovisual processing; and generating, by the computing device and using the third neural network, a third prediction regarding the speech status of the subject based on the fused audiovisual feature vector.

18. The system of claim 15, wherein the video input is sampled at a first frequency and the audio input is sampled at a different second frequency.

19. The system of claim 15, wherein the instructions, when executed by the processor, cause the processor to apply the scattering operation to the extracted visual features to generate a vector of visual scattering features in a first dimensional space by causing the processor to: apply the scattering operation to the extracted visual features to generate a second scattering vector in a second dimensional space; and project the second scattering vector into the first dimensional space to generate the vector of visual scattering features in the first dimensional space, wherein the second dimensional space is of a higher dimensionality than the first dimensional space.

20. The system of claim 15, wherein the first prediction regarding the speech status of the subject is used to recognize speech content of the audiovisual input.
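Claims 1 and 8 compare the visual network's output with the audio network's output to determine a synchrony state. The claims do not specify how the comparison is performed; the sketch below is one plausible reading, using lag-windowed correlation of per-frame speech-activity scores. The `synchrony_state` function, the lag window, the threshold, and the synthetic score streams are all assumptions of this illustration.

```python
import numpy as np

def synchrony_state(visual_scores, audio_scores, max_lag=5, threshold=0.5):
    """Compare per-frame speech scores from the visual and audio networks.

    Declares the streams in-sync when the zero-lag correlation exceeds a
    threshold and is the best over a small window of candidate lags.
    """
    v = (visual_scores - visual_scores.mean()) / (visual_scores.std() + 1e-8)
    a = (audio_scores - audio_scores.mean()) / (audio_scores.std() + 1e-8)

    def corr_at(lag):
        # correlation of v shifted by `lag` frames against a
        if lag >= 0:
            x, y = v[lag:], a[:len(a) - lag]
        else:
            x, y = v[:len(v) + lag], a[-lag:]
        return float(np.mean(x * y))

    corrs = {lag: corr_at(lag) for lag in range(-max_lag, max_lag + 1)}
    best_lag = max(corrs, key=corrs.get)
    in_sync = best_lag == 0 and corrs[0] >= threshold
    return in_sync, best_lag

rng = np.random.default_rng(1)
speech = rng.random(200)                       # shared underlying speech activity
visual = speech + 0.1 * rng.standard_normal(200)
audio_sync = speech + 0.1 * rng.standard_normal(200)
audio_off = np.roll(audio_sync, 3)             # audio delayed by 3 frames

print(synchrony_state(visual, audio_sync))     # in-sync at lag 0 → (True, 0)
print(synchrony_state(visual, audio_off))      # delayed stream → not in-sync
```

Storing the boolean state (and, optionally, the best lag) then matches the final "storing the synchrony state" step of claims 1 and 8.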
Patents cited by this patent (3)
Chu,Stephen Mingyu; Goel,Vaibhava; Marcheret,Etienne; Potamianos,Gerasimos, Method for likelihood computation in multi-stream HMM based speech recognition.
Stork, David G.; Wolff, Gregory J., Neural network acoustic and visual speech recognition system training method and apparatus.
Chu, Stephen Mingyu; Goel, Vaibhava; Marcheret, Etienne; Potamianos, Gerasimos, System and method for likelihood computation in multi-stream HMM based speech recognition.