Systems and methods for capturing and interpreting audio
IPC classification information
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): G10H-007/00; G10H-001/18; G10H-003/14
Application number: US-0866169 (2015-09-25)
Registration number: US-9536509 (2017-01-03)
Inventor / Address: Esparza, Tlacaelel Miguel
Applicant / Address: Sunhouse Technologies, Inc.
Agent / Address: Myers Wolin, LLC
Citation information: Cited by 2 patents; cites 34 patents
Abstract
A device is provided as part of a system for capturing vibrations produced by an object such as a musical instrument, for example a drum. The device has a first sensor placed in contact with a surface of the drum, such as a rim of the drum, and a second sensor placed at a fixed location relative to the drum, but not touching the drum. A method may be provided for interpreting the output of the sensors within the system, the method comprising identifying the onset of an audio event in audio data, selecting a window in the data for analysis, applying transforms to generate a representation of the audio event, and comparing that representation to expected representations in a model.
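The interpretation method summarized in the abstract can be sketched in a few lines. This is not the patent's implementation: the energy-threshold onset detector, the window size, and the log-spaced FFT binning (standing in for the Constant-Q transform mentioned in the claims) are all illustrative assumptions.

```python
import numpy as np

def detect_onset(signal, frame=256, threshold=4.0):
    """Return the sample index of the first frame whose RMS energy exceeds
    `threshold` times the running average of earlier frames -- a crude
    stand-in for the onset-identification step."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = np.sqrt((frames ** 2).mean(axis=1))
    running_avg = np.cumsum(energy) / np.arange(1, len(energy) + 1)
    for i in range(1, len(energy)):
        if energy[i] > threshold * running_avg[i - 1]:
            return i * frame
    return None

def event_representation(signal, onset, window=1024, n_bins=8):
    """Select a discrete analysis window at the onset and reduce it to an
    n-dimensional vector of log-spaced spectral band energies."""
    chunk = signal[onset : onset + window]
    spectrum = np.abs(np.fft.rfft(chunk))
    edges = np.geomspace(1, len(spectrum) - 1, n_bins + 1).astype(int)
    return np.array([spectrum[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])
```

The resulting vector is what would then be compared against expected representations in a trained model, as the abstract's final step describes.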
Representative claims
1. A device for capturing vibrations produced by an object, the device comprising: a first sensor in contact with a surface of the object; a second sensor spaced apart from the object and the first sensor and located relative to the object; and a fixation element for fixing the device to the object, wherein the first sensor is placed in contact with the object by the fixation element and wherein the second sensor is located relative to the object by the fixation element, wherein the first sensor is a piezoelectric sensor element which captures vibration in the object at the fixation element, and wherein the second sensor is a coil sensor.

2. The device of claim 1 wherein the coil sensor is fixed relative to a magnet placed on a surface of the musical instrument.

3. The device of claim 1 further comprising mixing circuitry for mixing a signal from the first sensor with a signal from the second sensor and an output for transmitting a mixed signal.

4. A system for capturing vibrations produced by a drum, the system comprising: a first sensor in contact with a surface of the drum; a second sensor spaced apart from the first sensor and located relative to but not touching the drum; a fixation element for fixing the device to the drum, wherein the first sensor is placed in contact with the drum by the fixation element and wherein the second sensor is located relative to the drum by the fixation element; and a processor for identifying an audio event based on signals captured at the first sensor and the second sensor, wherein the first sensor contacts a rim of the drum and the second sensor is suspended over a head of the drum.

5. The system of claim 4 further comprising an audio output for outputting a sound based on the audio event identified by the processor.

6. A method for producing audio from electrical signals within a data processing device, the method comprising: receiving, at the data processing device, a stream of audio data; identifying, in the audio data, an onset of an audio event; selecting a discrete analysis window from the audio data based on the location of the onset of the audio event in the audio data; generating, by the data processing device, an n-dimensional representation of the audio event by evaluating the contents of the analysis window and recording the results of the analysis; classifying the audio event by comparing the n-dimensional representation of the audio event to expected representations of audio events along a plurality of the n dimensions; and outputting a sound selected based on the classification of the audio event.

7. The method of claim 6 wherein at least one dimension of the n-dimensional representation corresponds to a timbre characteristic of the audio event.

8. The method of claim 6 wherein the audio data is a time domain audio signal and the evaluation of the contents of the analysis window is transformed to the frequency domain using a Constant-Q transform to separate the audio data into frequency bins.

9. The method of claim 6 wherein the n-dimensional representation is compared geometrically to a plurality of audio zones defined by expected signal parameters in at least two of the n dimensions and associated with a sample audio event, and wherein when the n-dimensional representation is within one of the audio zones, the sound is a sample sound of a corresponding audio zone.

10. The method of claim 9 wherein when the n-dimensional representation is not within one of the audio zones, the sound is a combination of elements of the sample sounds of a plurality of the audio zones based on a geometric distance between the n-dimensional representation and the audio zones along the at least two dimensions.

11. The method of claim 10 wherein the n-dimensional representation is compared geometrically to a plurality of audio zones defined by expected signal parameters in at least two of the n dimensions, wherein each audio zone is associated with a sample sound, and wherein the sound is a combination of elements of the sample sounds of a plurality of the audio zones based on a geometric distance between the n-dimensional representation and the centers of the plurality of the audio zones along the at least two dimensions.

12. The method of claim 11 wherein the audio zones correspond to sample signals corresponding to different playable surfaces of a drum or different modes of striking a drum.

13. The method of claim 12 wherein a first of the audio zones corresponds to a sample recorded at a center of a drum head and a second of the audio zones corresponds to a sample recorded at a radial distance from the center of the drum head.

14. The method of claim 12 wherein the sound is generated by blending the sample sounds from a plurality of audio zones based on a ratio of the geometric distances from each of the audio zones.

15. A device for capturing vibrations produced by a drum, the device comprising: a first sensor in contact with a surface of the drum; a second sensor spaced apart from the drum and the first sensor and located relative to the drum; and a fixation element for fixing the device to the drum, wherein the first sensor is placed in contact with the drum by the fixation element and wherein the second sensor is located relative to the drum by the fixation element, wherein the first sensor contacts a rim of the drum and the second sensor is suspended over a head of the drum.

16. The device of claim 15 wherein the fixation element is a clamp, and wherein the clamp has a first clamping surface for interfacing with a top surface of the rim of the drum and a second surface for interfacing with a bottom surface of the rim of the drum, and wherein the top surface contains the first sensor and the bottom surface has a first segment for gripping a first type of drum rim and a second segment for gripping a second type of drum rim.
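The geometric zone comparison of claims 9 through 14 can be sketched as follows. The zone names, the dictionary layout, and the use of inverse-distance weights for the blend are illustrative assumptions; the claims only require that the combination be based on geometric distance to the zones.

```python
import numpy as np

def classify_and_blend(representation, zones):
    """If the n-dimensional event representation falls inside a zone's
    radius, that zone's sample sound plays alone (claim 9); otherwise the
    samples are blended with weights derived from the geometric distance
    to each zone's center (claims 10-14). Assumes every radius > 0."""
    dists = {name: float(np.linalg.norm(representation - z["center"]))
             for name, z in zones.items()}
    for name, z in zones.items():
        if dists[name] <= z["radius"]:
            return {name: 1.0}  # inside a zone: its sample only
    # outside all zones: inverse-distance weights, normalized to sum to 1
    inv = {name: 1.0 / d for name, d in dists.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}
```

For example, with one zone for a strike at the center of the drum head and one for a strike near the rim (claim 13), an event landing between the two zones in timbre space yields a weighted mix of both samples rather than a hard switch.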
Patents cited by this patent (34)
Kamijima, Yuujirou; Yoshino, Kiyoshi, Acoustic instrument triggering device and method.