| Country / Type | United States (US) Patent, Granted |
|---|---|
| IPC (7th edition) | |
| Application No. | US-0262674 (2014-04-25) |
| Registration No. | US-9294785 (2016-03-22) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citation Info | Cited by: 0 / Patents cited: 424 |
A system, method, and computer program product for creating a composited video frame sequence for an application. A current scene state for the application is compared to a previous scene state, wherein each scene state includes a plurality of objects. A video construction engine determines whether properties of one or more objects have changed based upon a comparison of the scene states. If properties of one or more objects have changed based upon the comparison, the delta between the object's states is determined, and this information is used by a fragment encoding module if the fragment has not been encoded before. The information is used to define, for example, the motion vectors for use by the fragment encoding module in constructing the fragments to be used by the stitching module to build the composited video frame sequence.
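The scene-state comparison described above can be sketched in a few lines. This is an illustrative model, not the patent's implementation: the `SceneObject` class, its property names, and the `scene_delta` helper are all hypothetical stand-ins for the objects and the video construction engine's diffing step.

```python
from dataclasses import dataclass

# Hypothetical scene object carrying the kinds of properties the abstract
# and claims mention (position, texture, translucency).
@dataclass(frozen=True)
class SceneObject:
    obj_id: str
    position: tuple       # (x, y) in pixels
    texture: str
    translucency: float = 1.0

def scene_delta(current, previous):
    """Return {obj_id: (old, new)} for objects whose properties changed
    between the previous and current scene states."""
    prev_by_id = {o.obj_id: o for o in previous}
    changed = {}
    for obj in current:
        old = prev_by_id.get(obj.obj_id)
        if old is not None and old != obj:   # dataclass __eq__ compares all fields
            changed[obj.obj_id] = (old, obj)
    return changed

# Example: a button that moved 5 pixels to the right between frames.
prev = [SceneObject("button", (10, 10), "btn.png")]
curr = [SceneObject("button", (15, 10), "btn.png")]
old, new = scene_delta(curr, prev)["button"]

# The positional delta is what would seed the motion vectors handed to
# the fragment encoding module.
motion_vector = (new.position[0] - old.position[0],
                 new.position[1] - old.position[1])
print(motion_vector)  # (5, 0)
```

In a real engine the delta would also cover texture and translucency changes, which cannot be expressed as motion vectors and would instead force re-encoding of the affected fragment.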
1. A method for creating a composited video frame sequence, comprising: at a system including one or more processors and memory storing instructions for execution by the processor:
comparing a current scene state with a previous scene state, wherein the current scene state includes a first plurality of objects having respective properties, and wherein the previous scene state includes a second plurality of objects having respective properties;
detecting a difference between the respective properties of the first plurality of objects and the respective properties of the second plurality of objects;
in accordance with the difference between the respective properties being detected, retrieving one or more pre-encoded first video fragments based on the detected difference, wherein each of the one or more pre-encoded first video fragments is a portion of a full frame of video; and
compositing the video frame sequence, wherein the video frame sequence includes at least one of the one or more pre-encoded first video fragments.

2. The method of claim 1, wherein the one or more pre-encoded first video fragments are retrieved from a memory.

3. The method of claim 2, wherein the memory is non-volatile memory.

4. The method of claim 2, wherein the memory is volatile memory.

5. The method of claim 1, wherein the difference detected between the respective properties of the first plurality of objects and the respective properties of the second plurality of objects corresponds to at least one property from a group consisting of: a position, transformation matrix, texture, and translucency, of a respective object.

6. The method of claim 1, wherein detecting the difference between the respective properties of the first plurality of objects and the respective properties of the second plurality includes: tessellating a first bounding rectangle, corresponding to at least one object of the first plurality of objects, with a second bounding rectangle, corresponding to at least one object of the second plurality of objects.

7. The method of claim 1, the method further comprising: in accordance with the difference between the respective properties being detected, encoding one or more second video fragments based on the detected difference, wherein the video frame sequence further includes at least one of the one or more encoded second video fragments.

8. A computer system for creating a composited video frame sequence for an application, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
comparing a current scene state with a previous scene state, wherein the current scene state includes a first plurality of objects having respective properties, and wherein the previous scene state includes a second plurality of objects having respective properties;
detecting a difference between the respective properties of the first plurality of objects and the respective properties of the second plurality of objects;
in accordance with the difference between the respective properties being detected, retrieving one or more pre-encoded first video fragments based on the detected difference, wherein each of the one or more pre-encoded first video fragments is a portion of a full frame of video; and
compositing the video frame sequence, wherein the video frame sequence includes at least one of the one or more pre-encoded first video fragments.

9. A non-transitory computer readable storage medium, storing one or more programs for execution by one or more processors of a computer system, the one or more programs including instructions for:
comparing a current scene state with a previous scene state, wherein the current scene state includes a first plurality of objects having respective properties, and wherein the previous scene state includes a second plurality of objects having respective properties;
detecting a difference between the respective properties of the first plurality of objects and the respective properties of the second plurality of objects;
in accordance with the difference between the respective properties being detected, retrieving one or more pre-encoded first video fragments based on the detected difference, wherein each of the one or more pre-encoded first video fragments is a portion of a full frame of video; and
compositing the video frame sequence, wherein the video frame sequence includes at least one of the one or more pre-encoded first video fragments.
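The retrieve-or-encode flow that claims 1, 2, and 7 describe is essentially a fragment cache: pre-encoded fragments are fetched from memory when available, and a fragment is encoded only the first time its object state is seen. The sketch below is a minimal illustration under that reading; `encode_fragment`, the string-keyed cache, and `composite_frame` are hypothetical placeholders, not the patent's modules.

```python
# Cache of pre-encoded fragments, keyed by a serialized object state.
fragment_cache = {}

def encode_fragment(obj_state):
    # Placeholder for the fragment encoding module; a real system would
    # run a block-based video encoder here.
    return f"encoded({obj_state})"

def get_fragment(obj_state):
    """Retrieve a pre-encoded fragment, encoding it only on a cache miss
    (claim 7's 'encoding one or more second video fragments')."""
    if obj_state not in fragment_cache:
        fragment_cache[obj_state] = encode_fragment(obj_state)
    return fragment_cache[obj_state]   # claims 1-2: retrieved from a memory

def composite_frame(changed_states):
    """Stitch per-object fragments into one composited frame; each
    fragment is only a portion of the full frame (claim 1)."""
    return [get_fragment(s) for s in changed_states]

frame = composite_frame(["button@(15,10)", "label@(0,0)"])
frame2 = composite_frame(["button@(15,10)"])  # second call hits the cache
```

The point of the design is that an unchanged or previously seen object state never triggers re-encoding, which is what makes per-session UI compositing cheap on the server side.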
Copyright KISTI. All Rights Reserved.