Encoding spatial data in a multi-channel sound file for an object in a virtual environment
IPC Classification Information
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06F-017/00; A63F-013/00
Application Number: UP-0840196 (2004-05-06)
Registration Number: US-7818077 (2010-11-08)
Inventor / Address: Bailey, Kelly Daniel
Applicant / Address: Valve Corporation
Agent / Address: Frommer Lawrence & Haug LLP
Citation Information: cited by 17 patents; cites 9 patents
Abstract
A method for recording and playing back spatial sound data associated with an object in a scene of a virtual environment from the perspective of a character controlled by a user. Different types of spatial sound data can be encoded for different types of objects, e.g., fast moving, directional, slow moving and stationary objects. Based on at least the position, distance, and direction of the object in regard to the character, at least two channels of an audio file can be recorded with spatial sound data for subsequent playback in the virtual environment.
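The abstract's first step, deciding which kind of spatial sound data applies to an object, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the patent's actual implementation; the function name, speed threshold, and units are all invented for the example.

```python
import math

# Illustrative sketch of the classification the abstract describes: pick a
# spatial-sound category for an object from its motion and position relative
# to the user-controlled character. All names, thresholds, and units are
# assumptions, not taken from the patent itself.
FAST_SPEED = 20.0  # assumed speed threshold (units per second)

def classify_object(obj_pos, obj_velocity, view_pos):
    """Return (category, distance_to_viewer) for a scene object."""
    dx, dy = obj_pos[0] - view_pos[0], obj_pos[1] - view_pos[1]
    distance = math.hypot(dx, dy)       # distance to the point of view
    speed = math.hypot(*obj_velocity)   # scalar speed of the object
    if speed >= FAST_SPEED:
        category = "fast_moving"        # two-channel approach/retreat file
    elif speed > 0.0:
        category = "slow_moving"
    else:
        category = "stationary"
    return category, distance
```

The returned distance could then scale playback volume, while the category selects which pre-recorded audio file (and how many channels of it) to use.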
Representative Claims
What is claimed as new and desired to be protected by Letters Patent of the United States is:

1. A method for providing spatial sound data associated with a fast moving object in a scene for a virtual environment, comprising: determining if the object is currently moving fast through the scene based on at least one of position, distance and direction for the object in regard to a point of view in the scene; providing pre-recorded spatial sound data in at least two channels of a single audio file associated with the determined object moving fast through the scene, wherein the pre-recorded spatial sound data includes at least spatial approaching sound data recorded in a first channel of the audio file and spatial retreating sound data recorded in a second channel of the audio file; and consecutively playing the pre-recorded spatial sound data for each of the at least two channels of the audio file associated with the object as it moves past the point of view in the scene, wherein the consecutive playing of the pre-recorded spatial sound data simulates approaching and retreating sound associated with the object moving past the point of view in the scene.

2. The method of claim 1, wherein the point of view is at least one of a character in the scene, a third person perspective, and another character in the scene.

3. The method of claim 1, further comprising determining a type of the object based at least in part on the point of view in the scene.

4. The method of claim 1, wherein the spatial approaching sound data is played in one sound amplification device and the spatial retreating sound data is played in another sound amplification device.

5. The method of claim 1, further comprising cross fading at least two channels of the audio file.

6. The method of claim 1, wherein the audio file further includes a format of at least one of Windows Audio Video (WAV), Audio Interchange File Format (AIFF), MPEG (MPX), Sun Audio (AU), Real Networks (RN), Musical Instrument Digital Interface (MIDI), QuickTime Movie (QTM), and AC3.

7. The method of claim 1, wherein the virtual environment is at least one of a video game, chat room, and a virtual world.

8. The method of claim 1, wherein playing the pre-recorded spatial sound data comprises switching from playing the first channel of the audio file to playing the second channel of the audio file when the object passes from a forward position to a rearward position, or from a rearward position to a forward position, relative to the point of view.

9. A method for playing spatial sound data associated with a fast moving object in a scene for a virtual environment, comprising: determining if the object is currently moving fast through the scene based on at least one of position, distance and direction for the object in regard to a point of view in the scene; providing pre-recorded spatial sound data in at least two channels of a single audio file associated with the determined object moving fast through the scene, wherein the pre-recorded spatial sound data includes spatial approaching sound data recorded in a first channel of the audio file and spatial retreating sound data recorded in a second channel of the audio file; and consecutively playing the pre-recorded spatial sound data for each of the at least two channels of the audio file associated with the object as it moves past the point of view in the scene, wherein the consecutive playing of the pre-recorded spatial sound data is based at least in part on distance, position and direction of the object in regard to the point of view in the scene, and wherein the playing of the pre-recorded spatial sound data enables the simulation of approaching and retreating sound associated with the object moving past the point of view in the scene.

10. The method of claim 9, wherein playing the pre-recorded spatial sound data comprises switching from playing the first channel of the audio file to playing the second channel of the audio file when the object passes from a forward position to a rearward position, or from a rearward position to a forward position, relative to the point of view.

11. A server for enabling the playing of spatial sound data associated with a fast moving object in a scene in a virtual environment, comprising: a memory for storing data; and an audio engine for performing actions, including: enabling the determining if the object is currently moving fast through the scene based on at least one of position, distance and direction for the object in regard to at least a point of view in the scene and a type of the object; enabling the providing of pre-recorded spatial sound data in at least two channels of a single audio file associated with the determined object moving fast through the scene, wherein the pre-recorded spatial sound data includes at least spatial approaching sound data recorded in a first channel of the audio file and spatial retreating sound data recorded in a second channel of the audio file; and enabling the consecutive playing of the pre-recorded spatial sound data for each of the at least two channels of the audio file associated with the object, wherein the consecutive playing of the pre-recorded spatial sound data simulates approaching and retreating sound associated with the object moving past the point of view in the scene.

12. The server of claim 11, wherein the actions performed by the audio engine further comprise switching from playing the first channel of the audio file to playing the second channel of the audio file when the object passes from a forward position to a rearward position, or from a rearward position to a forward position, relative to the point of view.

13. A client for enabling the playing of spatial sound data associated with a fast moving object in a scene in a virtual environment, comprising: a memory for storing data; and an audio engine for performing actions, including: enabling determining if the object is currently moving fast through the scene based on at least one of position, distance and direction for the object in regard to at least a point of view in the scene and a type of the object; enabling the providing of pre-recorded spatial sound data in at least two channels of a single audio file associated with the determined object moving fast through the scene, wherein the pre-recorded spatial sound data includes at least spatial approaching sound data recorded in a first channel of the audio file and spatial retreating sound data recorded in a second channel of the audio file; and enabling the consecutive playing of the pre-recorded spatial sound data for each of the at least two channels of the audio file associated with the object, wherein the consecutive playing of the pre-recorded spatial sound data simulates approaching and retreating sound associated with the object moving past the point of view in the scene.

14. The client of claim 13, wherein the actions performed by the audio engine further comprise switching from playing the first channel of the audio file to playing the second channel of the audio file when the object passes from a forward position to a rearward position, or from a rearward position to a forward position, relative to the point of view.

15. A computer readable storage medium with instructions for performing actions stored thereon, the instructions comprising: determining if the object is currently moving fast through the scene based on at least one of position, distance and direction for the object in regard to a point of view in the scene; providing pre-recorded spatial sound data in at least two channels of a single audio file associated with the determined object moving fast through the scene, wherein the pre-recorded spatial sound data includes at least spatial approaching sound data recorded in a first channel of the audio file and spatial retreating sound data recorded in a second channel of the audio file; and consecutively playing the pre-recorded spatial sound data for each of the at least two channels of the audio file associated with the object as it moves past the point of view in the scene, wherein the consecutive playing of the pre-recorded spatial sound data simulates approaching and retreating sound associated with the object moving past the point of view in the scene.

16. The computer readable storage medium of claim 15, wherein the instructions further comprise switching from playing the first channel of the audio file to playing the second channel of the audio file when the object passes from a forward position to a rearward position, or from a rearward position to a forward position, relative to the point of view.
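The channel-switching behavior recited in claims 5 and 8 (cross-fading between the approaching and retreating channels as the object passes the point of view) can be sketched as below. This is a hypothetical illustration: the linear fade shape, function name, and parameters are assumptions, not details from the patent.

```python
# Hypothetical sketch of the playback behavior in claims 5 and 8: the first
# channel of the audio file holds the approaching sound, the second the
# retreating sound, and playback cross-fades between them as the object
# passes the point of view. The linear fade and all names are illustrative.

def channel_gains(relative_forward, fade_width=1.0):
    """Return (approach_gain, retreat_gain) for the two channels.

    relative_forward > 0: object is still ahead of the point of view;
    relative_forward < 0: object has passed behind it. A linear cross-fade
    of width `fade_width` around the crossing point avoids an audible click.
    """
    # Map the signed forward offset onto a 0..1 mix factor, clamped.
    t = (relative_forward / fade_width + 1.0) / 2.0
    t = max(0.0, min(1.0, t))
    return t, 1.0 - t  # approach channel fades out as retreat fades in
```

Routing the two gains to separate sound amplification devices would correspond to the arrangement in claim 4; the switch completes exactly when the object transitions from a forward to a rearward position, as in claim 8.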
Patents cited by this patent (9)
Cascone, Kim; Petkevich, Daniel T.; Scandalis, Gregory P.; Stilson, Timothy S.; Taylor, Kord F.; Van Duyne, Scott A., Apparatus and methods for synthesis of internal combustion engine vehicle sounds.
Lee, Robert Ernest; Aldridge, David M., Apparatus, method, and computer readable media to perform transactions in association with participants interacting in a synthetic environment.
Lee, Robert Ernest; Aldridge, David M.; Farina, Bryan Joseph; Van Caneghem, Jon Edward, Distributed network architecture for introducing dynamic content into a synthetic environment.
Lee, Robert Ernest; Maltzen, Jason A.; Aldridge, David M.; Farina, Bryan Joseph; Van Caneghem, Jon Edward, Distributed network architecture for introducing dynamic content into a synthetic environment.
Vilermo, Miikka Tapani; Tammi, Mikko Tapio; Lehtiniemi, Arto Juhani; Laaksonen, Lasse Juhani, Method and apparatus for synchronizing audio and video signals.
Lee, Robert Ernest; Van Caneghem, Jon Edward; Farina, Bryan Joseph; Turner, Erin E.; Huang, Peter Chi-Hao, Synthetic environment character data sharing.
Lee, Robert E.; Van Caneghem, Jon E.; Farina, Bryan J.; Huang, Peter C.; Turner, Erin, Web client data conversion for synthetic environment interaction.