A platform configured to be removably placed symmetrically on or about a user's head has at least a first transducer configured to capture vibration of the user's skull, or facial movement, generated by the user's voice activity, and to detect the user's speaking activity. This first transducer converts the vibration or facial movement into a first electrical audio signal, which circuitry or embodied software processes into voiced frames and/or unvoiced frames, the frames being defined based at least on the first electrical audio signal. Several embodiments follow: one in which the first transducer is a vibration sensor; one in which voice is also captured by an air microphone and the filtering adaptation differs between the voiced and unvoiced frames as defined by the first transducer; and one with at least three air microphones.
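The abstract's core mechanism is deciding voiced versus unvoiced frames from the first transducer's signal. A minimal sketch of one way this could work is shown below; the short-time-energy criterion, frame length, and threshold are illustrative assumptions, not the patent's specified method.

```python
import numpy as np

def classify_frames(vib_signal, frame_len=256, threshold=1e-4):
    """Label each frame voiced/unvoiced by short-time energy of the
    vibration-sensor signal. Frame length and threshold are assumptions."""
    n_frames = len(vib_signal) // frame_len
    labels = []
    for i in range(n_frames):
        frame = vib_signal[i * frame_len:(i + 1) * frame_len]
        energy = np.mean(frame ** 2)
        labels.append(bool(energy > threshold))  # True = voiced
    return labels
```

In practice such a detector would be made robust with hysteresis or an adaptive threshold, but the key property the claims rely on is that the decision comes from the skull-vibration signal, which is largely immune to ambient acoustic noise.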
Representative Claims
1. An apparatus comprising: a platform configured to be removably placed symmetrically on or about a user's head; at least one first transducer configured to capture vibration of the user's skull or facial movement generated by the user's voice activity and to detect the user's speaking activity, wherein the at least one first transducer converts the vibration or facial movement into a first electrical audio signal; at least one of one or more circuitries and one or more memories including computer program code for processing the first electrical audio signal from the at least one first transducer received at the platform into voiced frames and unvoiced frames, wherein the voiced frames and the unvoiced frames are defined based at least on the first electrical audio signal; and at least one second transducer, said at least one second transducer being an air microphone, said at least one second transducer producing at least one second electrical audio signal, wherein said at least one second electrical audio signal received at the platform from said at least one second transducer is differentially processed by at least one of the one or more circuitries and the one or more memories including computer program code into voiced and unvoiced frames, wherein an equalizing function, said equalizing function being a first transfer function based on clean voice content captured by said at least one first transducer and said at least one second transducer, is computed, and wherein at least one of the one or more circuitries and the one or more memories including computer program code processes the unvoiced frames as noise-only frames for updating a noise profile obtained only from the first electrical audio signal from the at least one first transducer, and processes the voiced frames by subtracting the noise profile therefrom and applying the equalizing function to enhance the output spectrum.

2.
The apparatus according to claim 1, wherein the first electrical audio signal received at the platform is from the at least one first transducer, said at least one first transducer being a vibration sensor, and wherein at least one of the one or more circuitries and the one or more memories including computer program code processes the voiced frames by low-pass filtering and artificially extending a bandwidth thereof.

3. The apparatus according to claim 1, wherein the first electrical audio signal received at the platform is from the at least one first transducer, said at least one first transducer being a vibration sensor, and wherein at least one of the one or more circuitries and the one or more memories including computer program code processes the unvoiced frames as noise-only frames for updating a noise profile and processes the voiced frames by spectrally subtracting the noise profile therefrom.

4. The apparatus according to claim 1, wherein said equalizing function is computed by a separate training process of at least one of the one or more circuitries and the one or more memories including computer program code.

5.
The apparatus according to claim 1, wherein two equalizing functions, said two equalizing functions being the first transfer function between the clean voice content captured by said at least one first transducer and said at least one second transducer, and a second transfer function between ambient noise content captured by said at least one first transducer and said at least one second transducer and an estimate of electronic noise of said at least one first transducer, are computed by a separate training process of at least one of the one or more circuitries and the one or more memories including computer program code; wherein the ambient noise content captured by said at least one second transducer is estimated by utilizing results of the training process; and wherein the circuitry or embodied software processes the voiced frames to estimate a speech signal by separating therefrom the ambient noise content estimated from the output signals of said at least one second transducer.

6. The apparatus according to claim 1, wherein the apparatus further comprises at least three air microphones spatially disposed about the platform; and wherein at least one of the one or more circuitries and the one or more memories including computer program code is configured to output an adaptively filtered noise signal from at least inputs from side-mounted ones of the air microphones, wherein the adaptive filtering produces an error signal, said error signal remaining after subtracting the filtered noise signal from an output signal of a forward-mounted one of the air microphones; and wherein the adaptive filtering is dynamically adaptive only during the unvoiced frames and static during the voiced frames.

7. The apparatus according to claim 6, wherein said at least one first transducer is one of a vibration sensor, a downward facing camera, an ultrasonic sensor and an infrared sensor.

8.
The apparatus according to claim 1, wherein the platform comprises one of eyeglasses, sunglasses, a helmet, and a headband.

9. A method comprising: determining from at least a first electrical audio signal from a first transducer voiced frames indicating when a user is speaking and unvoiced frames indicating when the user is not speaking, wherein the first transducer is disposed on a platform configured to be removably placed symmetrically on or about a user's head and is configured to capture vibration of the user's skull or facial movement generated by the user's voice activity and to detect the user's speaking activity; processing the first electrical audio signal received at the platform into the voiced frames and the unvoiced frames; and processing a second electrical audio signal received from at least a second transducer disposed on the platform, said second transducer being an air microphone, into voiced and unvoiced frames, wherein an equalizing function, said equalizing function being a first transfer function based on clean voice content captured by the first and the second transducers, is computed, and wherein processing the first and second electrical audio signals comprises processing the unvoiced frames as noise-only frames for updating a noise profile obtained only from the first electrical audio signal, and processing the voiced frames by subtracting the noise profile therefrom and applying the equalizing function to enhance the output spectrum.

10. The method according to claim 9, wherein the first electrical audio signal received at the platform is from the first transducer, said first transducer being a vibration sensor, and wherein processing the first electrical audio signal comprises processing the voiced frames by low-pass filtering and artificially extending a bandwidth thereof.

11.
The method according to claim 9, wherein the first electrical audio signal received at the platform is from the first transducer, said first transducer being a vibration sensor, and wherein processing the signals comprises processing the unvoiced frames as noise-only frames for updating a noise profile and processing the voiced frames by spectrally subtracting the noise profile therefrom.

12. The method according to claim 9, wherein said equalizing function is computed by a separate training process.

13. The method according to claim 9, wherein two equalizing functions, said two equalizing functions being the first transfer function between clean voice content captured by the first and the second transducers and a second transfer function between the ambient noise content captured by the two transducers, and an estimate of the electronic noise of the first transducer, are computed by a separate training process; wherein the ambient noise content captured by the second transducer is estimated by utilizing results of the training process; and wherein processing the first and second electrical audio signals comprises processing the voiced frames to estimate a speech signal by separating therefrom the ambient noise estimated from the output signals of only the second transducer.

14. The method according to claim 9, wherein the platform comprises at least three air microphones spatially disposed about the platform; and wherein processing the first electrical audio signal received at the platform comprises outputting an adaptively filtered noise signal from at least inputs from side-mounted ones of the air microphones, wherein the adaptive filtering produces an error signal, said error signal remaining after subtracting the filtered noise signal from an output signal of a forward-mounted one of the air microphones; and wherein the adaptive filtering is dynamically adaptive only during the unvoiced frames and static during the voiced frames.

15.
The method according to claim 14, wherein the first transducer is one of a vibration sensor, a downward facing camera, an ultrasonic sensor and an infrared sensor.

16. The method according to claim 9, wherein the platform comprises one of eyeglasses, sunglasses, a helmet, and a headband.

17. A non-transitory memory storing a program of computer readable instructions which when executed by at least one processor result in actions comprising: determining from at least a first electrical audio signal from a first transducer voiced frames indicating when a user is speaking and unvoiced frames indicating when the user is not speaking, wherein the first transducer is disposed on a platform configured to be removably placed symmetrically on or about a user's head and is configured to capture vibration of the user's skull or facial movement generated by the user's voice activity and to detect the user's speaking activity; processing the first electrical audio signal received at the platform into the voiced frames and the unvoiced frames; and processing a second electrical audio signal received from at least a second transducer disposed on the platform, said second transducer being an air microphone, into voiced and unvoiced frames, wherein an equalizing function, said equalizing function being a transfer function based on clean voice content captured by the first and the second transducers, is computed, and wherein processing the first and second electrical audio signals comprises processing the unvoiced frames as noise-only frames for updating a noise profile obtained only from the first electrical audio signal, and processing the voiced frames by subtracting the noise profile therefrom and applying the equalizing function to enhance the output spectrum.
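Claim 1's processing chain, updating a noise profile during unvoiced frames and applying spectral subtraction plus the equalizing transfer function during voiced frames, could be sketched as follows. The magnitude-domain subtraction, the exponential smoothing factor `alpha`, and the zeroing of unvoiced output frames are assumptions for illustration and are not spelled out in the claims.

```python
import numpy as np

def enhance(frames, voiced_flags, eq, alpha=0.9):
    """Per-frame spectral subtraction driven by a voiced/unvoiced decision.

    frames       list of equal-length time-domain frames (air microphone)
    voiced_flags booleans from the vibration-sensor detector
    eq           equalizing magnitude response (claim 1's first transfer
                 function), same length as the rFFT of one frame
    alpha        noise-profile smoothing factor (an assumption)
    """
    noise = np.zeros_like(eq)
    out = []
    for frame, voiced in zip(frames, voiced_flags):
        spec = np.fft.rfft(frame)
        mag = np.abs(spec)
        if not voiced:
            # Unvoiced frame: treat as noise-only and update the profile.
            noise = alpha * noise + (1 - alpha) * mag
            out.append(np.zeros_like(frame))
        else:
            # Voiced frame: subtract the noise profile, apply the equalizer,
            # and resynthesize with the original phase.
            clean = np.maximum(mag - noise, 0.0) * eq
            out.append(np.fft.irfft(clean * np.exp(1j * np.angle(spec)),
                                    n=len(frame)))
    return out
```

Because the voiced/unvoiced decision comes from the vibration sensor rather than the noisy air-microphone signal, the noise profile is never contaminated by the user's own speech, which is the central advantage the claims describe.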
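Claims 2 and 10 low-pass filter the vibration-sensor signal and artificially extend its bandwidth. One common family of techniques copies low-band spectral content into the missing high band; the sketch below uses a simple translated copy with a fixed 0.3 gain, both of which are assumptions and not the patent's stated scheme.

```python
import numpy as np

def extend_bandwidth(vib_frame, cutoff_bin):
    """Illustrative bandwidth extension for a low-passed vibration-sensor
    frame: copy the occupied low band into the empty high band with
    attenuation. Folding scheme and gain are assumptions."""
    spec = np.fft.rfft(vib_frame)
    n = len(spec)
    hi = min(cutoff_bin, n - cutoff_bin)     # how many bins fit above cutoff
    spec[cutoff_bin:cutoff_bin + hi] = 0.3 * spec[:hi]  # translated copy
    return np.fft.irfft(spec, n=len(vib_frame))
```

Real systems typically shape the regenerated band with a learned spectral envelope; the point here is only that content above the sensor's mechanical cutoff is synthesized rather than captured.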
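Claims 6 and 14 describe an adaptive noise canceller whose filter adapts only during unvoiced frames and is frozen during voiced frames. A minimal NLMS-style sketch follows; the tap count, step size `mu`, per-sample update, and single combined side-microphone reference are assumptions, not the patent's specified design.

```python
import numpy as np

def beamform(front, side, voiced_flags, frame_len=64, taps=8, mu=0.01):
    """Adaptive noise canceller: the side-microphone noise reference is
    filtered and subtracted from the forward microphone; the error signal
    is the enhanced output. Adaptation runs only on unvoiced frames and
    the filter weights stay static on voiced frames."""
    w = np.zeros(taps)
    out = np.zeros_like(front)
    for f, voiced in zip(range(len(front) // frame_len), voiced_flags):
        for n in range(f * frame_len, (f + 1) * frame_len):
            if n < taps:
                out[n] = front[n]            # not enough history yet
                continue
            x = side[n - taps:n][::-1]       # noise-reference tap vector
            e = front[n] - np.dot(w, x)      # error = enhanced output
            out[n] = e
            if not voiced:                   # adapt only when unvoiced
                w += mu * e * x / (np.dot(x, x) + 1e-8)
    return out
```

Freezing the weights during voiced frames prevents the adaptive filter from cancelling the user's own speech, which would otherwise leak into the noise model during voice activity.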