Systems and methods for estimating audio at a requested location are presented. In one embodiment, the method includes receiving from a client device a request for audio at a requested location. The method further includes determining a location of a plurality of audio sensors, where the plurality of audio sensors are coupled to head-mounted devices in which a location of each of the plurality of audio sensors varies. The method further includes, based on the requested location and the location of the plurality of audio sensors, determining an ad hoc array of audio sensors, receiving audio sensed from audio sensors in the ad hoc array, and processing the audio sensed from audio sensors in the ad hoc array to produce an output substantially estimating audio at the requested location.
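The sensor-selection step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the sensor records, 2-D coordinates, and threshold value are all assumed for the example, and a real system would also filter by the predefined environment and sensor permissions described in the claims.

```python
import math

def select_ad_hoc_array(sensors, requested_location, threshold_m):
    """Pick sensors whose separation distance from the requested
    location falls below a predetermined threshold.

    sensors: list of (sensor_id, (x, y)) pairs, coordinates in meters
             (illustrative format, not specified by the patent).
    Returns a list of (sensor_id, separation_distance) pairs.
    """
    rx, ry = requested_location
    array = []
    for sensor_id, (x, y) in sensors:
        # Separation distance: straight-line distance between the
        # sensor's current location and the requested location.
        separation = math.hypot(x - rx, y - ry)
        if separation < threshold_m:
            array.append((sensor_id, separation))
    return array

# Hypothetical head-mounted-device sensors around a requested point at the origin.
sensors = [("hmd-1", (0.0, 1.0)), ("hmd-2", (4.0, 3.0)), ("hmd-3", (1.0, 0.5))]
print(select_ad_hoc_array(sensors, (0.0, 0.0), threshold_m=2.0))
```

Because the sensors are worn on head-mounted devices and move, this selection would be re-run as updated locations arrive (see claim 11).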
Representative Claims
1. A method, comprising: receiving from a client device a request for audio at a requested location; determining a location of a plurality of audio sensors, wherein the plurality of audio sensors are coupled to head-mounted devices in which a location of each of the plurality of audio sensors varies; based on the requested location and the location of the plurality of audio sensors, determining an ad hoc array of audio sensors, wherein determining the ad hoc array comprises: selecting from a plurality of predefined environments a predefined environment in which the requested location is located; identifying audio sensors in the plurality of audio sensors that are currently associated with the selected predefined environment; determining a separation distance of the audio sensors currently associated with the selected predefined environment, wherein the separation distance for an audio sensor comprises a distance between the location of the audio sensor and the requested location; and selecting for the ad hoc array audio sensors having a separation distance below a predetermined threshold; receiving audio sensed from audio sensors in the ad hoc array; and processing the audio sensed from audio sensors in the ad hoc array to produce an output substantially estimating audio at the requested location.

2. The method of claim 1, wherein receiving the request comprises receiving a set of coordinates identifying the requested location.

3. The method of claim 1, wherein determining the location of an audio sensor comprises at least one of querying the audio sensor for the location and receiving the location from the audio sensor.

4. The method of claim 1, wherein the location of an audio sensor comprises a location of the audio sensor relative to a known location.

5. The method of claim 1, wherein processing the audio sensed from audio sensors in the ad hoc array comprises processing the audio based on the location of each audio sensor in the ad hoc array.

6. The method of claim 5, wherein processing the audio based on the location of each audio sensor in the ad hoc array comprises: for each audio sensor in the ad hoc array, delaying audio sensed by the audio sensor based on the separation distance of the audio sensor to produce a delayed audio signal; and combining the delayed audio signals from each of the audio sensors in the ad hoc array.

7. The method of claim 1, wherein processing the audio sensed from audio sensors in the ad hoc array comprises using a beamforming process.

8. The method of claim 1, further comprising: determining for audio sensors in the ad hoc array whether sensed audio may be received based on permissions set for the audio sensor.

9. The method of claim 1, further comprising: receiving audio sensed by each audio sensor of the plurality of audio sensors; and storing in memory the sensed audio, a corresponding location of the audio sensor where the audio was sensed, and a corresponding time at which the audio was sensed.

10. The method of claim 9, wherein the request further includes a time at which the audio at the requested location was sensed.

11. The method of claim 1, further comprising periodically determining an updated location of each audio sensor in the ad hoc array.

12. A server, comprising: a first input interface configured to receive from a client device a request for audio at a requested location; a second input interface configured to receive audio from audio sensors; at least one processor; and data storage comprising selection logic and processing logic, wherein the selection logic is executable by the at least one processor to: determine a location of a plurality of audio sensors, wherein the plurality of audio sensors are coupled to head-mounted devices in which a location of each of the plurality of audio sensors varies; based on the requested location and the location of the plurality of audio sensors, determine an ad hoc array of audio sensors, wherein determining the ad hoc array comprises: selecting from a plurality of predefined environments a predefined environment in which the requested location is located; identifying audio sensors in the plurality of audio sensors that are currently associated with the selected predefined environment; determining a separation distance of the audio sensors currently associated with the selected predefined environment, wherein the separation distance for an audio sensor comprises a distance between the location of the audio sensor and the requested location; and selecting for the ad hoc array audio sensors having a separation distance below a predetermined threshold, wherein the processing logic is executable by the at least one processor to process the audio sensed from audio sensors in the ad hoc array to produce an output substantially estimating audio at the requested location.

13. The server of claim 12, wherein one or both of the first input interface and the second input interface is a wireless interface.

14. The server of claim 12, wherein the processing logic is further executable to process the audio based on the location of each audio sensor in the ad hoc array.

15. The server of claim 12, wherein the processing logic is further executable to request a given audio sensor in the ad hoc array to provide audio sensed from the audio sensor.

16. The server of claim 12, wherein the processing logic is further executable to: receive audio sensed by each audio sensor of the plurality of audio sensors; and store in the data storage the sensed audio, a corresponding location of the audio sensor where the audio was sensed, and a corresponding time at which the audio was sensed.

17. The server of claim 12, wherein the processing logic is further executable to periodically determine an updated location of each audio sensor in the ad hoc array.

18. The server of claim 12, wherein the server is configured to provide an instruction to control a direction of audio sensors in the ad hoc array.

19. The server of claim 12, further comprising an output interface configured to provide the output to the client device.

20. A non-transitory computer readable medium having stored therein instructions executable by a computing device to cause the computing device to perform the functions of: receiving from a client device a request for audio at a requested location; determining a location of a plurality of audio sensors, wherein the plurality of audio sensors are coupled to head-mounted devices in which a location of each of the plurality of audio sensors varies; based on the requested location and the location of the plurality of audio sensors, determining an ad hoc array of audio sensors, wherein determining the ad hoc array comprises: selecting from a plurality of predefined environments a predefined environment in which the requested location is located; identifying audio sensors in the plurality of audio sensors that are currently associated with the selected predefined environment; determining a separation distance of the audio sensors currently associated with the selected predefined environment, wherein the separation distance for an audio sensor comprises a distance between the location of the audio sensor and the requested location; and selecting for the ad hoc array audio sensors having a separation distance below a predetermined threshold; receiving audio sensed from audio sensors in the ad hoc array; and processing the audio sensed from audio sensors in the ad hoc array to produce an output substantially estimating audio at the requested location.
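The delay-and-combine processing of claim 6 is a classic delay-and-sum beamformer. The sketch below illustrates the idea under simplifying assumptions not stated in the patent: a fixed speed of sound, sample-accurate integer delays (a practical beamformer would interpolate fractional delays), and equal-length signal buffers per sensor.

```python
SPEED_OF_SOUND = 343.0  # m/s; assumed propagation speed for the example

def delay_and_sum(signals, separations, sample_rate):
    """Delay each sensor's signal based on its separation distance from
    the requested location, then combine the delayed signals.

    signals:     list of equal-length lists of audio samples, one per sensor
    separations: distance (m) from each sensor to the requested location
    Returns the averaged, time-aligned signal.
    """
    max_sep = max(separations)
    out_len = len(signals[0])
    out = [0.0] * out_len
    for samples, sep in zip(signals, separations):
        # Sound from the requested location reaches a closer sensor
        # earlier, so its signal is delayed by the extra travel time
        # of the farthest sensor to align all arrivals.
        delay = round((max_sep - sep) / SPEED_OF_SOUND * sample_rate)
        for n in range(out_len):
            src = n - delay
            if 0 <= src < out_len:
                out[n] += samples[src]
    # Average the aligned signals; coherent sound from the requested
    # location adds constructively, off-axis sound tends to cancel.
    return [v / len(signals) for v in out]
```

With the delays chosen this way, the same acoustic event lands on the same output sample index for every sensor, which is what lets the combined output approximate the audio at the requested location.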
Cited References

Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and sensor triggered AR eyepiece interface to external devices.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and sensor triggered control of AR eyepiece applications.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and user action control of external applications.
Lord, Richard T.; Lord, Robert W.; Myhrvold, Nathan P.; Tegreene, Clarence T.; Hyde, Roderick A.; Wood, Jr., Lowell L.; Ishikawa, Muriel Y.; Wood, Victoria Y. H.; Whitmer, Charles; Bahl, Paramvir; Burger, Douglas C.; Chandra, Ranveer; Gates, III, William H.; Holman, Paul; Kare, Jordin T.; Mundie, Craig J.; Paek, Tim; Tan, Desney S.; Zhong, Lin; Dyor, Matthew G., Determining threats based on information from road-based devices in a transportation-related context.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Border, John N.; Miller, Gregory D.; Stovall, Ross W., Eyepiece with uniformly illuminated reflective display.
Miller, Gregory D.; Border, John N.; Osterhout, Ralph F., Grating in a light transmissive illumination system for see-through near-eye display glasses.
Miller, Gregory D.; Border, John N.; Osterhout, Ralph F., Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses.
Border, John N.; Bietry, Joseph; Osterhout, Ralph F., See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film.
Border, John N.; Haddick, John D.; Osterhout, Ralph F., See-through near-eye display glasses including a partially reflective, partially transmitting optical element.
Border, John N.; Osterhout, Ralph F., See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment.
Border, John N.; Bietry, Joseph; Osterhout, Ralph F., See-through near-eye display glasses wherein image light is transmitted to and reflected from an optically flat film.
Border, John N.; Osterhout, Ralph F., See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear.
Border, John N.; Haddick, John D.; Osterhout, Ralph F., See-through near-eye display glasses with a light transmissive wedge shaped illumination system.
Border, John N.; Haddick, John D.; Lohse, Robert Michael; Osterhout, Ralph F., See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light.
Millington, Nicholas A. J., Systems and methods for synchronizing operations among a plurality of independently clocked digital data processing devices that independently source digital data.
Millington, Nicholas A. J.; Ericson, Michael, Systems and methods for synchronizing operations among a plurality of independently clocked digital data processing devices without a voltage controlled crystal oscillator.
Bates, Paul; Keyser-Allen, Lee; Lang, Jonathan P.; Roberts, Diane; Millington, Nicholas A. J., Systems, methods, apparatus, and articles of manufacture to provide guest access.