Noises that are to be emitted by an aerial vehicle during operations may be predicted using one or more machine learning systems, algorithms or techniques. Anti-noises having equal or similar intensities and equal but out-of-phase frequencies may be identified and generated based on the predicted noises, thereby reducing or eliminating the net effect of the noises. The machine learning systems, algorithms or techniques used to predict such noises may be trained using emitted sound pressure levels observed during prior operations of aerial vehicles, as well as environmental conditions, operational characteristics of the aerial vehicles or locations of the aerial vehicles during such prior operations. Anti-noises may be identified and generated based on an overall sound profile of the aerial vehicle, or on individual sounds emitted by the aerial vehicle by discrete sources.
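The abstract's core idea of active noise cancellation (an anti-noise of equal intensity, 180 degrees out of phase with the predicted noise) can be illustrated with a short sketch. The amplitude and frequency below are hypothetical, not taken from the patent:

```python
import math

def noise_sample(amplitude: float, freq_hz: float, t: float) -> float:
    """Predicted noise: a pure tone at the given amplitude and frequency."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

def anti_noise_sample(amplitude: float, freq_hz: float, t: float) -> float:
    """Anti-noise: equal amplitude, shifted 180 degrees (pi radians) out of phase."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t + math.pi)

# Superposing the predicted noise and its anti-noise drives net pressure toward zero.
amp, freq = 1.0, 120.0  # hypothetical rotor tone: unit amplitude at 120 Hz
residual = max(
    abs(noise_sample(amp, freq, t) + anti_noise_sample(amp, freq, t))
    for t in [i / 48000 for i in range(480)]  # one 10 ms block at 48 kHz
)
print(f"max residual over one block: {residual:.2e}")  # ~0: destructive interference
```

In practice the emitted sound is broadband rather than a single tone, so the patent predicts a full sound profile (or per-source sounds) and inverts each component; this sketch shows only the single-tone case.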
Representative Claims
1. A method comprising: capturing, by at least one sensor of a first unmanned aerial vehicle, information regarding a first noise emitted by at least one component of the first unmanned aerial vehicle at a first time, wherein the information regarding the first noise is captured while the first unmanned aerial vehicle travels on a first course at a first speed and at a first altitude over at least a first location, and wherein the information regarding the first noise comprises the first course, the first speed, the first altitude and the first location; determining, based at least in part on the information regarding the first noise, at least a first sound pressure level of the first noise and a first frequency of the first noise; selecting, by at least one computer processor, a second sound pressure level of a second noise and a second frequency of the second noise based at least in part on: the first sound pressure level of the first noise; the first frequency of the first noise, wherein the second frequency of the second noise is approximately one hundred eighty degrees out of phase with the first frequency of the first noise; and at least one of: the first course; the first speed; the first altitude; or the first location; determining, by at least one sensor of a second unmanned aerial vehicle, that the second unmanned aerial vehicle is traveling at one or more of: on the first course; at the first speed; at the first altitude; or over at least the first location; and in response to determining that the second unmanned aerial vehicle is traveling at the one or more of on the first course, at the first speed, at the first altitude or over at least the first location, causing the second unmanned aerial vehicle to emit the second noise at the second sound pressure level and at the second frequency.

2.
The method of claim 1, wherein the at least one computer processor is associated with at least one server, wherein the at least one server is either ground-based or airborne, and wherein selecting the sound pressure level of the second sound and the second frequency of the second sound comprises: transmitting at least some of the information regarding the first sound by the first aerial vehicle to the at least one server over at least one communications network; receiving the at least some of the information regarding the first sound by the at least one server; transmitting information regarding the second sound to the second aerial vehicle over the at least one computer network, wherein the information regarding the second sound associates the second sound with at least one of the first course, the first speed, the first altitude or the first location; and receiving the information regarding the second sound by the second aerial vehicle.

3. The method of claim 2, wherein the at least one server is configured to operate at least one machine learning system trained to identify at least one anti-noise to be emitted by an aerial vehicle based at least in part on a noise emitted by the aerial vehicle and at least one of a course, a speed, an altitude or a location, and wherein selecting the second sound pressure level of the second noise and the second frequency of the second noise further comprises: providing the at least some of the information regarding the first noise by the first aerial vehicle as an input to the machine learning system; and receiving an output from the machine learning system, wherein the second sound pressure level and the second frequency are selected based at least in part on the output.

4.
The method of claim 1, wherein the at least one computer processor is provided aboard the first aerial vehicle, and wherein selecting the sound pressure level of the second sound and the second frequency of the second sound comprises: transmitting at least some of the information regarding the second sound to the second aerial vehicle over the at least one computer network, wherein the information regarding the second sound associates the second sound with at least one of the first course, the first speed, the first altitude or the first location; and receiving the information regarding the second sound by the second aerial vehicle.

5. An unmanned aerial vehicle (UAV) comprising: a frame; at least one motor mounted to the frame, wherein the at least one motor is rotatably coupled to at least one propeller; a first sensor; a transceiver; a sound emitting device mounted to at least one of the frame or the at least one motor; and a computing device having a memory and one or more computer processors, wherein the one or more computer processors are configured to at least: determine, by the first sensor, information regarding at least one of a course, a speed, an altitude or a position of the UAV during an operation of the UAV; provide at least some of the information regarding the at least one of the course, the speed, the altitude or the position of the UAV during the operation of the UAV as a first input to at least one machine learning system operated by the one or more computer processors; determine an output from the at least one machine learning system based at least in part on the first input; determine, based at least in part on the output, information regarding at least a first sound, wherein the information regarding the first sound comprises at least a first sound pressure level of the first sound and at least a first frequency of the first sound; and emit at least the first sound by the sound emitting device during the
operation of the UAV.

6. The UAV of claim 5, wherein the UAV further comprises a second sensor, and wherein the one or more computer processors are further configured to at least: capture, by the second sensor, information regarding at least a second sound emitted by at least one component of the UAV during the operation of the UAV, wherein the information regarding the second sound comprises at least a second sound pressure level of the second sound and at least a second frequency of the second sound; and provide at least some of the information regarding at least the second sound emitted by the at least one component of the UAV during the operation of the UAV as a second input to the at least one machine learning system operated by the one or more computer processors, wherein the output is determined based at least in part on the first input and the second input.

7. The UAV of claim 6, wherein the one or more computer processors are configured to at least: transmit, by the transceiver over a communications network, at least some of the information regarding at least the first sound to at least one server, wherein the information regarding at least the first sound comprises: the first sound pressure level; the first frequency; the second sound pressure level; the second frequency; and the at least one of the course, the speed, the altitude or the position.

8.
The UAV of claim 5, wherein the UAV further comprises a second sensor, and wherein the one or more computer processors are further configured to at least: capture, by the second sensor, information regarding a first plurality of sounds emitted by components of the UAV during the operation of the UAV; provide at least some of the information regarding the first plurality of the sounds emitted by the components of the UAV during the operation of the UAV as a second input to the at least one machine learning system operated by the one or more computer processors, wherein the output is determined based at least in part on the first input and the second input; determine, based at least in part on the output, information regarding a second plurality of sounds, wherein the information regarding the second plurality of sounds comprises at least a sound pressure level of each of the second plurality of sounds and a frequency of each of the second plurality of sounds, and wherein the first sound is one of the second plurality of sounds; define, based at least in part on the output, a weighted superposition of each of the second plurality of sounds; and emit, in accordance with the weighted superposition, each of the second plurality of sounds by the sound emitting device during the operation of the UAV.

9. The UAV of claim 5, wherein the first sensor is at least one of: a speedometer; an anemometer; a compass; a Global Positioning System sensor; or an altimeter.

10.
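The "weighted superposition" of claim 8 amounts to a weighted sum of per-source anti-noise tones. A minimal sketch, with hypothetical weights, amplitudes and frequencies (e.g. one tone per propeller harmonic, weighted toward the loudest source):

```python
import math

def weighted_superposition(tones, t: float) -> float:
    """Weighted sum of sinusoids; tones is a list of (weight, amplitude, freq_hz, phase)."""
    return sum(w * a * math.sin(2 * math.pi * f * t + ph) for (w, a, f, ph) in tones)

# Hypothetical anti-noise tones, each phase-inverted (pi radians) relative to its source.
tones = [
    (0.6, 1.0, 120.0, math.pi),  # dominant rotor tone
    (0.3, 0.5, 240.0, math.pi),  # first harmonic
    (0.1, 0.2, 360.0, math.pi),  # second harmonic
]
sample = weighted_superposition(tones, t=0.001)  # one output sample for the emitter
```

Because superposition is linear, scaling a weight scales that tone's contribution proportionally, which is how the claimed model can emphasize louder sources.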
The UAV of claim 5, wherein the UAV further comprises a second sensor, and wherein the one or more computer processors are further configured to at least: determine, by the second sensor, information regarding at least one environmental condition during the operation of the UAV, wherein the at least one environmental condition is at least one of a temperature, a barometric pressure, a wind speed, a humidity, a level of cloud coverage, a level of sunshine or a surface condition; and provide at least some of the information regarding the at least one environmental condition as a second input to the at least one machine learning system operated by the one or more computer processors, wherein the output is determined based at least in part on the first input and the second input.

11. The UAV of claim 5, wherein the UAV further comprises a second sensor, and wherein the one or more computer processors are further configured to at least: determine, by the second sensor, information regarding at least one operational characteristic of the UAV during the operation of the UAV, wherein the at least one operational characteristic is at least one of a rotating speed of the at least one motor, a rotating speed of the at least one propeller, a climb rate, a descent rate, a turn rate or an acceleration; and provide at least some of the information regarding the at least one operational characteristic as a second input to the at least one machine learning system operated by the one or more computer processors, wherein the output is determined based at least in part on the first input and the second input.

12. The UAV of claim 5, wherein the sound emitting device is one of an audio speaker, a piezoelectric sound emitter or a vibration source mounted to the at least one of the frame or the at least one motor.

13.
The UAV of claim 5, wherein the one or more computer processors are further configured to at least: determine a noise threshold within a vicinity of the position of the UAV during the operation of the UAV; and determine the information regarding at least the first sound based at least in part on the output and the noise threshold.

14. The UAV of claim 5, wherein the machine learning system is configured to perform one or more of: an artificial neural network; a conditional random field; a cosine similarity analysis; a factorization method; a K-means clustering analysis; a latent Dirichlet allocation; a latent semantic analysis; a log likelihood similarity analysis; a nearest neighbor analysis; a support vector machine; or a topic model analysis.

15. The UAV of claim 5, wherein the one or more computer processors are further configured to at least: receive, by the transceiver over a communications network, intrinsic data regarding prior operations of each of a plurality of aerial vehicles, wherein the intrinsic data comprises at least one of a course, a speed, an altitude, or a position of each of the aerial vehicles; receive, by the transceiver over the communications network, extrinsic data regarding prior operations of each of the plurality of aerial vehicles, wherein the extrinsic data comprises at least one of an air temperature, an air pressure, a humidity, a wind speed, a wind direction, a time of day, a day of a week, a month of a year, a measure of cloud coverage, a measure of sunshine, or a measure of a ground condition; receive, by the transceiver over the communications network, information regarding sound emitted during the prior operations of the plurality of aerial vehicles; define a set of training inputs comprising intrinsic data and extrinsic data determined during the prior operations of at least some of the plurality of aerial vehicles; define a set of training outputs comprising information regarding the sound emitted during the prior operations of the at
least some of the plurality of aerial vehicles; and train the at least one machine learning system based at least in part on the set of training inputs and the set of training outputs.

16. A method comprising: identifying information regarding a first transit plan for a first unmanned aerial vehicle, wherein the information regarding the first transit plan comprises at least a first leg extending between an origin, a destination or at least one intervening waypoint between the origin and the destination, and wherein the information regarding the first transit plan comprises at least one of a first course, a first speed, a first altitude or at least a first ground location associated with the first leg; predicting at least one characteristic of the first unmanned aerial vehicle during operation of the first unmanned aerial vehicle in association with the first leg, wherein the at least one characteristic is at least one of an operational characteristic of the first unmanned aerial vehicle or an environmental characteristic of the first unmanned aerial vehicle; providing at least some of the information regarding the first transit plan and the at least one predicted characteristic of the first unmanned aerial vehicle as inputs to a sound model operating on at least one computer device, wherein the sound model is trained to predict at least one of a sound pressure level or a frequency of a sound emitted by an aerial vehicle during operation; receiving at least one output from the sound model; predicting information regarding at least a first sound emitted by the first unmanned aerial vehicle during operation in association with the first leg, wherein the information regarding at least the first sound comprises a first sound pressure level of the first sound and a first frequency of the first sound; defining a second sound based at least in part on at least the first sound, wherein the second sound comprises a second sound pressure level and a second frequency, and wherein
the second sound is approximately one hundred eighty degrees out of phase with the first frequency of the first sound; causing the first unmanned aerial vehicle to travel in accordance with the first transit plan; determining that the first unmanned aerial vehicle is at least one of: traveling on the first course; traveling at the first speed; traveling at the first altitude; or traveling over the first ground location; and causing the first unmanned aerial vehicle to emit the second sound by at least one sound emitting device provided on the first unmanned aerial vehicle.

17. The method of claim 16, further comprising: training the sound model, wherein training the sound model comprises: determining, during operations of a plurality of aerial vehicles, intrinsic data regarding each of the plurality of aerial vehicles, wherein the intrinsic data comprises at least one of a course, a speed, an altitude, a climb rate, a turn rate, an acceleration, a dimension, a number of operating motors or a rotating speed of at least one of the operating motors; determining, during the operations of the plurality of the aerial vehicles, extrinsic data regarding each of the plurality of aerial vehicles, wherein the extrinsic data comprises at least one of an air temperature, an air pressure, a humidity, a wind speed, a wind direction, a time of day, a day of a week, a month of a year, a measure of cloud coverage, a measure of sunshine, or a measure of a ground condition; capturing, by at least one acoustic sensor provided aboard each of the plurality of the aerial vehicles, information regarding sound emitted during the operations of the plurality of aerial vehicles; defining a set of training inputs comprising intrinsic data and extrinsic data determined during operations of at least some of the plurality of aerial vehicles; defining a set of training outputs comprising information regarding the sound emitted during the operations of the at least some of the plurality of aerial vehicles;
and training the sound model based at least in part on the set of training inputs and the set of training outputs.

18. The method of claim 16, wherein the computer device is provided external to the first aerial vehicle and is associated with at least one of: a ground-based facility; or a second aerial vehicle.

19. The method of claim 16, further comprising: capturing, by at least one acoustic sensor provided aboard the first unmanned aerial vehicle, information regarding a third sound emitted during the operation of the first unmanned aerial vehicle in association with the first leg, wherein the information regarding the third sound emitted during the operation of the first unmanned aerial vehicle comprises a third sound pressure level and a third frequency; defining a fourth sound based at least in part on at least the third sound, wherein the fourth sound comprises a fourth sound pressure level and a fourth frequency, and wherein the fourth sound is approximately one hundred eighty degrees out of phase with the third frequency of the third sound; and causing the first unmanned aerial vehicle to emit the fourth sound by at least one sound emitting device.

20. The method of claim 16, wherein the at least one sound emitting device is at least one of an audio speaker, a piezoelectric sound emitter or a vibration source provided on the first aerial vehicle.
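Claims 15 and 17 describe training the sound model on intrinsic data (course, speed, altitude, etc.) and extrinsic data (temperature, wind, etc.), with observed emitted sound as the training outputs; claim 14 lists nearest neighbor analysis among the candidate techniques. The pipeline can be sketched as a 1-nearest-neighbor model over a few invented observations (all feature names and values below are hypothetical):

```python
import math

# Training inputs: (speed m/s, altitude m, air temperature deg C) — intrinsic + extrinsic data.
# Training outputs: (sound pressure level dB, dominant frequency Hz) — hypothetical observations
# captured by acoustic sensors during prior operations.
training_inputs = [(10.0, 50.0, 20.0), (15.0, 80.0, 22.0), (20.0, 120.0, 18.0)]
training_outputs = [(62.0, 118.0), (66.0, 121.0), (71.0, 125.0)]

def predict_sound(features, inputs=training_inputs, outputs=training_outputs):
    """1-nearest-neighbor sound model: return the (SPL, frequency) observed under the
    most similar prior operating conditions (Euclidean distance in feature space)."""
    distances = [math.dist(features, x) for x in inputs]
    return outputs[distances.index(min(distances))]

spl, freq = predict_sound((14.0, 75.0, 21.0))  # closest to the second training point
print(spl, freq)  # 66.0 121.0
```

The predicted (SPL, frequency) pair would then parameterize the anti-noise of claim 16: equal sound pressure level, 180 degrees out of phase. A production system would use one of the other listed techniques (e.g. a neural network) and far richer features; this sketch only shows the training-input/training-output structure the claims describe.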