Simulating a 3D audio environment, including receiving a visual representation of an object at a location in a scene, wherein the location represents a point in 3D space, receiving a sound element, and binding the sound element to the visual representation of the object such that a characteristic of the sound element is dynamically modified coincident with a change in location in the scene of the visual representation of the object in 3D space.
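The binding mechanism the abstract describes can be illustrated with a minimal sketch. This is not the patented implementation; the classes `SoundElement` and `SceneNode`, the inverse-distance attenuation rule, and the listener fixed at the origin are all assumptions made for illustration only.

```python
import math
from dataclasses import dataclass, field

@dataclass
class SoundElement:
    """A sound whose characteristic (here, gain) can be modified."""
    name: str
    gain: float = 1.0  # linear gain, 1.0 = full volume

@dataclass
class SceneNode:
    """A graphics node holding a 3D position; bound sounds track it."""
    position: tuple = (0.0, 0.0, 0.0)
    bound_sounds: list = field(default_factory=list)

    def bind(self, sound: SoundElement) -> None:
        """Bind a sound element to this node's visual representation."""
        self.bound_sounds.append(sound)
        self._update_audio()

    def move_to(self, position: tuple) -> None:
        """Moving the visual object automatically updates bound sounds."""
        self.position = position
        self._update_audio()

    def _update_audio(self) -> None:
        # Inverse-distance attenuation relative to a listener fixed at
        # the origin (an assumption made for this sketch).
        distance = math.sqrt(sum(c * c for c in self.position))
        for sound in self.bound_sounds:
            sound.gain = 1.0 / max(distance, 1.0)

# A bound sound's gain changes coincident with the object's movement,
# with no further involvement from the application.
engine = SoundElement("engine")
node = SceneNode()
node.bind(engine)
node.move_to((4.0, 0.0, 3.0))  # distance 5 from the listener
```

After the move, `engine.gain` is 0.2 (1/5): the sound characteristic was modified as a side effect of moving the visual node, which is the behavior the abstract claims.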
Claims
1. A method to simulate a three-dimensional (3D) audio environment, comprising: receiving, by an audiovisual framework, a visual representation of an object at a location in a scene, wherein the location represents a point in 3D space; receiving, by the audiovisual framework, a sound element; binding, by the audiovisual framework, the sound element to the visual representation of the object by mapping an audio node that corresponds to the sound element to a graphics node that is in a scene graph and that corresponds to the object in the scene, wherein a characteristic of the sound element is automatically and dynamically modified so as to coincide with a change in location in the scene of the visual representation of the object in 3D space based on the mapping between the audio node and the graphics node; and outputting, for audio playback, the sound element with the modified characteristic.

2. The method of claim 1, further comprising: displaying the visual representation of the object on a display at the location in the scene; and causing the sound element to generate a sound independently of an application program while the visual representation of the object is moving away from the location in the scene on the display.

3. The method of claim 1, wherein the sound element corresponds to a sound source, and wherein an apparent location of the sound source moves coincident with the change in location in the scene of the visual representation of the object in 3D space, based on the binding and the mapping.

4. The method of claim 1, wherein the sound element corresponds to a sound listener, and wherein audio characteristics of remote sounds are dynamically modified, independently of an application program, based on the change in location in the scene of the visual representation of the object, based on the binding and the mapping.

5. The method of claim 1, wherein receiving a sound element corresponding to a sound comprises loading the sound from an audio file.

6. The method of claim 1, wherein receiving a sound element corresponding to a sound comprises creating the audio node.

7. The method of claim 1, wherein the scene is part of a predefined 3D environment.

8. A system for simulating a three-dimensional (3D) audio environment, comprising: one or more processors; and a memory operatively coupled to a digital image sensor and comprising computer code configured to cause the one or more processors to: receive, by an audiovisual framework, a visual representation of an object at a location in a scene, wherein the location represents a point in 3D space; receive, by the audiovisual framework, a sound element; bind, by the audiovisual framework, the sound element to the visual representation of the object by mapping an audio node that corresponds to the sound element to a graphics node that is in a scene graph and that corresponds to the object in the scene, wherein a characteristic of the sound element is automatically and dynamically modified so as to coincide with a change in location in the scene of the visual representation of the object in 3D space based on the mapping between the audio node and the graphics node; and output, for audio playback, the sound element with the modified characteristic.

9. The system of claim 8, wherein the computer code is further configured to cause the one or more processors to: display the visual representation of the object on a display at the location in the scene; and cause the sound element to generate a sound independently of an application program while the visual representation of the object is moving away from the location in the scene on the display.

10. The system of claim 8, wherein the sound element corresponds to a sound source, and wherein an apparent location of the sound source moves coincident with the change in location in the scene of the visual representation of the object in 3D space, based on the binding and the mapping.

11. The system of claim 8, wherein the sound element corresponds to a sound listener, and wherein audio characteristics of remote sounds are dynamically modified, independently of an application program, based on the change in location in the scene of the visual representation of the object, based on the binding and the mapping.

12. The system of claim 8, wherein receiving a sound element corresponding to a sound comprises loading the sound from an audio file.

13. The system of claim 8, wherein receiving a sound element corresponding to a sound comprises creating the audio node.

14. The system of claim 8, wherein the scene is part of a predefined 3D environment.

15. The system of claim 8, wherein the one or more processors comprise one or more graphical processing units (GPUs) configured to process a graphics request.

16. A non-transitory computer readable storage device comprising computer code for simulating a three-dimensional (3D) audio environment, the computer code executable by one or more processors to: receive, by an audiovisual framework, a visual representation of an object at a location in a scene, wherein the location represents a point in 3D space; receive, by the audiovisual framework, a sound element; bind, by the audiovisual framework, the sound element to the visual representation of the object by mapping an audio node that corresponds to the sound element to a graphics node that is in a scene graph and that corresponds to the object in the scene, wherein a characteristic of the sound element is automatically and dynamically modified so as to coincide with a change in location in the scene of the visual representation of the object in 3D space based on the mapping between the audio node and the graphics node; and output, for audio playback, the sound element with the modified characteristic.

17. The non-transitory computer readable storage device of claim 16, wherein the computer code is further executable by one or more processors to: display the visual representation of the object on a display at the location in the scene; and cause the sound element to generate a sound independently of an application program while the visual representation of the object is moving away from the location in the scene on the display.

18. The non-transitory computer readable storage device of claim 16, wherein the sound element corresponds to a sound source, and wherein an apparent location of the sound source moves coincident with the change in location in the scene of the visual representation of the object in 3D space, based on the binding and the mapping.

19. The non-transitory computer readable storage device of claim 16, wherein the sound element corresponds to a sound listener, and wherein audio characteristics of remote sounds are dynamically modified, independently of an application program, based on the change in location in the scene of the visual representation of the object, based on the binding and the mapping.

20. The non-transitory computer readable storage device of claim 16, wherein the computer code executable by one or more processors to receive a sound element corresponding to a sound comprises computer code executable by one or more processors to load the sound from an audio file.

21. The non-transitory computer readable storage device of claim 16, wherein the computer code executable by one or more processors to receive a sound element corresponding to a sound comprises computer code executable by one or more processors to create the audio node.

22. The non-transitory computer readable storage device of claim 16, wherein the scene is part of a predefined 3D environment.

23. The method of claim 1, wherein the audiovisual framework supports an interface between a graphics framework and an audio framework.

24. The method of claim 23, further comprising mapping one or more graphics nodes of the graphics framework to one or more audio nodes of the audio framework.
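Claims 23 and 24 characterize the audiovisual framework as an interface that maps graphics-framework nodes to audio-framework nodes. A minimal sketch of such a mapping follows; the classes `GraphicsNode` and `AudioNode`, the `map_nodes`/`sync` methods, and the id-keyed dictionary are hypothetical names chosen for illustration, not the patented design.

```python
from dataclasses import dataclass

@dataclass
class GraphicsNode:
    """Node in the graphics framework's scene graph."""
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class AudioNode:
    """Node in the audio framework; renders a sound at a 3D position."""
    position: tuple = (0.0, 0.0, 0.0)

class AudiovisualFramework:
    """Interface between a graphics framework and an audio framework:
    maintains a mapping from graphics nodes to audio nodes and
    propagates position changes without application involvement."""

    def __init__(self):
        # Keyed by graphics-node identity so each node maps to one audio node.
        self._mapping = {}

    def map_nodes(self, gnode: GraphicsNode, anode: AudioNode) -> None:
        """Map a graphics node to its corresponding audio node."""
        self._mapping[id(gnode)] = (gnode, anode)

    def sync(self) -> None:
        """Propagate each graphics node's position to its audio node."""
        for gnode, anode in self._mapping.values():
            anode.position = gnode.position

av = AudiovisualFramework()
g, a = GraphicsNode(), AudioNode()
av.map_nodes(g, a)
g.position = (1.0, 2.0, 3.0)  # the application moves the visual object...
av.sync()                     # ...the framework moves the sound with it
```

In a real framework the synchronization would likely be event-driven rather than polled, but the essential point of claims 23–24 survives the simplification: the application touches only the graphics node, and the mapping carries the change into the audio domain.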