IPC Classification Information

Country/Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application No. | US-0258304 (2014-04-22)
Registration No. | US-9275645 (2016-03-01)
Inventors / Address |
- Hearing, Brian
- Franklin, John
Applicant / Address |
Agent / Address |
Citation Info | Times cited: 5 / Patents cited: 12
Abstract
A system, method, and apparatus for drone detection and classification are disclosed. An example method includes receiving a sound signal in a microphone and recording, via a sound card, a digital sound sample of the sound signal, the digital sound sample having a predetermined duration. The method also includes processing, via a processor, the digital sound sample into a feature frequency spectrum. The method further includes applying, via the processor, broad spectrum matching to compare the feature frequency spectrum to at least one drone sound signature stored in a database, the at least one drone sound signature corresponding to a flight characteristic of a drone model. The method moreover includes, conditioned on matching the feature frequency spectrum to one of the drone sound signatures, transmitting, via the processor, an alert.
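The abstract's step of processing a digital sound sample into a feature frequency spectrum is spelled out in claim 4 below (segmentation, FFT magnitude, sliding median smoothing, averaging, unit-sum normalization). A minimal sketch of those steps follows; the segment length, median window width, and edge handling are illustrative assumptions, since the patent does not specify values.

```python
import numpy as np

def sliding_median(vec, width):
    # Median over a centered window; the window shrinks at the edges.
    # (Edge handling is an assumption, not specified in the claims.)
    half = width // 2
    return np.array([np.median(vec[max(0, i - half):i + half + 1])
                     for i in range(len(vec))])

def feature_frequency_spectrum(samples, segment_len=1024, median_width=5):
    """Sketch of the claimed feature-spectrum computation.

    segment_len and median_width are hypothetical parameters chosen
    for illustration only.
    """
    # Partition the digital sound sample into equal-sized,
    # non-overlapping segments (truncating any remainder).
    n_segments = len(samples) // segment_len
    segments = np.reshape(samples[:n_segments * segment_len],
                          (n_segments, segment_len))

    # Convert each segment into a vector of frequency amplitudes:
    # the absolute value of its Fast Fourier Transform
    # (rfft, since the input signal is real-valued).
    amplitudes = np.abs(np.fft.rfft(segments, axis=1))

    # Smooth each amplitude vector with a sliding median filter.
    smoothed = np.stack([sliding_median(row, median_width)
                         for row in amplitudes])

    # Form a composite frequency vector by averaging the smoothed vectors.
    composite = smoothed.mean(axis=0)

    # Normalize to a unit sum to create the feature frequency spectrum.
    return composite / composite.sum()
```

The unit-sum normalization makes the spectrum behave like a probability distribution over frequency bins, which is what allows the Wasserstein-metric comparison described in claims 2 and 3.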
Representative Claims
1. A method for detecting drones comprising:
receiving a sound signal in a microphone;
recording, via a sound card, a digital sound sample of the sound signal, the digital sound sample having a predetermined duration;
processing, via a processor, the digital sound sample into a feature frequency spectrum;
applying, via the processor, broad spectrum matching to compare the feature frequency spectrum to at least one drone sound signature stored in a database, the at least one drone sound signature corresponding to a flight characteristic of a drone model; and
conditioned on matching the feature frequency spectrum to one of the drone sound signatures, transmitting, via the processor, an alert.

2. The method of claim 1, wherein:
applying broad spectrum matching includes determining a Wasserstein metric for each of the drone sound signatures compared to the feature frequency spectrum; and
matching the feature frequency spectrum includes determining at least one of the determined Wasserstein metrics is below a threshold value.

3. The method of claim 2, further comprising:
recording, via the sound card, a number of consecutive digital sound samples from the sound signal, each of the consecutive digital sound samples having the predetermined duration;
processing, via the processor, each of the digital sound samples into a respective feature frequency spectrum;
for each of the feature frequency spectrums:
applying, via the processor, broad spectrum matching to compare the feature frequency spectrum to the at least one sound signature by determining a Wasserstein metric for each of the drone sound signatures compared to the feature frequency spectrum,
selecting, via the processor, a number of the Wasserstein metrics that have the lowest values compared to all of the Wasserstein metrics for the feature frequency spectrum,
determining, via the processor, a drone class for each of the selected number of the Wasserstein metrics based on the corresponding drone sound signature referenced to at least one of the drone model or the drone class, and
applying, via the processor, a hit classification to the feature frequency spectrum conditioned on substantially all of the selected number of the Wasserstein metrics corresponding to the same drone class; and
conditioned on determining, via the processor, a number of consecutive hit classifications, transmitting the alert.

4. The method of claim 1, wherein processing the sound sample into the digital sound sample includes:
partitioning the digital sound sample into equal-sized non-overlapping segments;
converting each of the segments into a vector of frequency amplitudes by determining an absolute value of a Fast Fourier Transform applied to the segment;
smoothing the vectors of each of the segments using a sliding median filter;
forming a composite frequency vector by averaging the smoothed vectors; and
normalizing the composite frequency vector to have a unit sum to create the feature frequency spectrum.

5. The method of claim 4, further comprising transmitting the at least one of the feature frequency spectrum and the digital sound sample to a remotely located server including the indication of the drone.

6. The method of claim 1, wherein transmitting the alert includes transmitting at least one of a text message and an e-mail to a user.

7. The method of claim 1, further comprising:
receiving an indication from a user that the alert is false;
determining the matched drone sound signature used to transmit the alert; and
storing a false signature indication to the matched drone sound signature.

8. The method of claim 1, further comprising:
determining the feature frequency spectrum does not match the at least one drone sound signature;
receiving an indication from a user that the feature frequency spectrum corresponds to a drone class; and
storing the feature frequency spectrum as a drone sound signature and an indication of the drone class.

9. The method of claim 8, further comprising:
receiving at least one of the drone model and the drone flight characteristic from the user; and
storing the at least one of the drone model and the drone flight characteristic in conjunction with the feature frequency spectrum stored as the drone sound signature.

10. The method of claim 1, wherein the flight characteristics include at least one of retreating, sideways translating, rotating, hovering, inverting, ascending, and descending.

11. The method of claim 1, wherein the drone model includes at least one of a brand name, a part number, and a rotor configuration.

12. The method of claim 1, further comprising, before applying broad spectrum matching, incorporating, via the processor, a background spectrum into each of the drone sound signatures, the background spectrum corresponding to specific acoustic conditions at a location of the microphone.

13. An apparatus for detecting drones comprising:
a microphone configured to receive a sound signal;
a sound card configured to record a digital sound sample of the sound signal, the digital sound sample having a predetermined duration; and
a sample processor configured to:
process the digital sound sample into a feature frequency spectrum,
process at least one drone sound sample into a drone sound signature,
use broad spectrum matching to compare the feature frequency spectrum to the at least one drone sound signature, the at least one drone sound signature corresponding to a drone class and a flight characteristic of a drone, and
conditioned on matching the feature frequency spectrum to one of the drone sound signatures, transmit an alert.

14. The apparatus of claim 13, wherein the sample processor is configured to process the at least one drone sound sample into the drone sound signature after activation of the apparatus.

15. The apparatus of claim 13, wherein the sample processor is configured to be calibrated for local acoustic characteristics by:
receiving a calibration digital sound sample recorded by the sound card, the calibration digital sound sample being transmitted by a user device and sensed by the microphone;
processing the calibration digital sound sample into a calibration frequency spectrum;
determining differences between the calibration frequency spectrum and a calibration sound signature;
determining variable values of an algorithm used to process the calibration digital sound sample into the calibration frequency spectrum such that the calibration frequency spectrum substantially matches the calibration sound signature; and
applying the determined variable values to the algorithm for normal operation.

16. The apparatus of claim 13, further comprising at least one of:
a light source configured to illuminate responsive to receiving the alert from the sample processor; or
a speaker configured to output an audible tone responsive to receiving the alert from the sample processor, wherein the audible tone is included within the alert.

17. The apparatus of claim 16, wherein the at least one of the light source or the speaker is remote from the apparatus and wirelessly communicatively coupled to the sample processor.

18. The apparatus of claim 13, further comprising a relay configured to switch to a closed state responsive to receiving the alert from the sample processor.

19. A non-transitory machine-accessible device having instructions stored thereon that are configured, when executed, to cause a machine to at least:
receive a digital sound sample from a sound card;
partition the digital sound sample into equal-sized non-overlapping segments;
convert each of the segments into a frequency amplitude vector by determining an absolute value of a Fast Fourier Transform applied to the segment;
smooth each of the frequency amplitude vectors of each of the segments using a sliding median filter;
create a composite frequency vector by averaging the smoothed vectors;
create a feature frequency spectrum by normalizing the composite frequency vector to have a unit sum;
apply broad spectrum matching to compare the feature frequency spectrum to at least one drone sound signature stored in a database, the at least one drone sound signature corresponding to a drone class and a flight characteristic of a drone; and
conditioned on matching the feature frequency spectrum to at least one of the drone sound signatures, transmit an alert including the drone class, a time of detection, and a date of detection.

20. The non-transitory machine-accessible device of claim 19, further comprising instructions stored thereon that are configured, when executed, to cause the machine to:
determine the drone class corresponds to a friend drone class, the drone class previously being specified by a user; and
conditioned on the drone class corresponding to the friend drone class, determine the alert is not to be transmitted.

21. The non-transitory machine-accessible device of claim 19, further comprising instructions stored thereon that are configured, when executed, to cause the machine to:
determine a time period during which digital sound samples from the drone are received;
determine distances of the drone during the time period from a microphone that sensed sound signals corresponding to the digital sound samples;
determine headings of the drone during the time period based on the digital sound samples;
determine a flight path of the drone during the time period based on the determined distances, the headings, and the flight characteristics; and
cause a graphical representation of the flight path to be displayed by a user device to a user.

22. The non-transitory machine-accessible device of claim 19, further comprising instructions stored thereon that are configured, when executed, to cause the machine to:
transmit a first context of the alert to a user device; and
transmit a second context of the alert, different from the first context, to a management server causing the management server to display the alert at a geographic location on a map in conjunction with alerts from other users.
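Claims 2 and 3 describe "broad spectrum matching" as computing a Wasserstein metric between the feature frequency spectrum and each stored signature, then declaring a match when a metric falls below a threshold. For two unit-sum spectra defined over the same ordered frequency bins, the first Wasserstein distance reduces to the L1 distance between their cumulative sums. A minimal sketch follows; the threshold value and the shape of the signature dictionary keys are assumptions for illustration only.

```python
import numpy as np

def wasserstein_1d(p, q):
    """First Wasserstein distance between two unit-sum spectra on the
    same frequency bins: the L1 distance between their cumulative sums
    (bin spacing taken as 1)."""
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

def broad_spectrum_match(feature, signatures, threshold=0.5):
    """Return (metric, key) pairs for every signature that matches.

    `signatures` maps a (drone_model, flight_characteristic) key to a
    stored unit-sum signature spectrum; the key shape and the default
    threshold are hypothetical, not specified in the claims.
    """
    metrics = {key: wasserstein_1d(feature, sig)
               for key, sig in signatures.items()}
    # Claim 2: a match is any signature whose metric is below the threshold.
    return sorted((m, key) for key, m in metrics.items() if m < threshold)
```

Claim 3 builds on this per-sample test: it selects the few lowest metrics for each of several consecutive samples, applies a "hit" classification when substantially all of them agree on one drone class, and transmits the alert only after a run of consecutive hits, which suppresses spurious single-sample matches.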