IPC Classification Information
Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) | (not specified)
Application number | US-0770986 (2004-02-03)
Inventor / Address | (not specified)
Applicant / Address | (not specified)
Agent / Address | (not specified)
Citation information | Times cited: 1; Patents cited: 5
Abstract
A method for encoding video information is presented, where a piece of current video information is segmented into macroblocks and a certain number of available macroblock segmentations for segmenting a macroblock into blocks is defined. Furthermore, for each available macroblock segmentation at least one available prediction method is defined, each of which prediction methods produces prediction motion coefficients for blocks within said macroblock, resulting in a certain finite number of available macroblock-segmentation–prediction-method pairs. For a macroblock one of the available macroblock-segmentation–prediction-method pairs is selected, and thereafter the macroblock is segmented into blocks and prediction motion coefficients for the blocks within said macroblock are produced using the selected macroblock-segmentation–prediction-method pair. A corresponding decoding method, an encoder and a decoder are also presented.
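The selection step described in the abstract can be sketched as follows. This is a minimal illustration only: the patent does not fix particular segmentations, prediction methods, or a cost function, so the segmentation names, method names, and `toy_cost` below are hypothetical placeholders.

```python
# Hypothetical sketch of choosing one macroblock-segmentation / prediction-method
# pair per macroblock, as described in the abstract. All names are illustrative.

# Available segmentations: ways a 16x16 macroblock may be split (example values).
SEGMENTATIONS = ["16x16", "16x8", "8x16", "8x8"]

# At least one prediction method per segmentation; a method derives prediction
# motion coefficients for each block from previously coded neighbouring blocks.
PREDICTION_METHODS = {
    "16x16": ["median"],
    "16x8":  ["above", "median"],
    "8x16":  ["left", "median"],
    "8x8":   ["median"],
}

def available_pairs():
    """The finite set of macroblock-segmentation / prediction-method pairs."""
    return [(seg, m) for seg in SEGMENTATIONS for m in PREDICTION_METHODS[seg]]

def select_pair(cost):
    """Select the pair minimizing a caller-supplied cost function."""
    return min(available_pairs(), key=lambda pair: cost(*pair))

# Toy cost preferring coarse segmentations and median prediction (illustrative).
toy_cost = lambda seg, m: SEGMENTATIONS.index(seg) + (0 if m == "median" else 1)
print(select_pair(toy_cost))  # -> ('16x16', 'median')
```

In a real encoder the cost would combine reconstruction error and the bits needed to signal the pair and the difference motion coefficients, as the later claims spell out.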
Representative Claims
1. An encoder for performing motion compensated encoding of video information, said encoder being arranged to derive prediction motion coefficients for blocks within a macroblock of a video frame being encoded from motion coefficients of at least one prediction block that is a previously encoded macroblock or block within said video frame, the encoder being further arranged to: specify a certain number of available macroblock segmentations that define possible ways in which a macroblock can be segmented into blocks; specify at least one available prediction method for each available macroblock segmentation, thereby providing a certain finite number of available macroblock-segmentation–prediction-method pairs, each prediction method defining a method for deriving prediction motion coefficients for blocks within a macroblock using motion coefficients of at least one prediction block; select a macroblock-segmentation–prediction-method pair among the available macroblock-segmentation–prediction-method pairs; segment a macroblock using the macroblock segmentation specified by the selected macroblock-segmentation–prediction-method pair; and derive prediction motion coefficients for blocks within said macroblock using the prediction method specified by the selected macroblock-segmentation–prediction-method pair.

2. An encoder according to claim 1, the encoder being further arranged to: store a reference video frame; estimate a motion field of blocks in the frame of video information using at least the reference video frame; produce motion coefficients describing the estimated motion fields; and produce difference motion coefficients using the motion coefficients and the prediction motion coefficients.

3.
A decoder for performing motion compensated decoding of encoded video information, said decoder being arranged to derive prediction motion coefficients for blocks within a macroblock of a video frame being decoded from motion coefficients of at least one prediction block that is a previously decoded macroblock or block within said video frame, the decoder being further arranged to: define a certain number of available macroblock segmentations that specify possible ways in which a macroblock can be segmented into blocks; specify at least one available prediction method for each available macroblock segmentation, thereby providing a certain finite number of available macroblock-segmentation–prediction-method pairs, each prediction method defining a method for deriving prediction motion coefficients for blocks within a macroblock using motion coefficients of at least one prediction block; receive information indicating at least the macroblock segmentation selected for a macroblock; determine the prediction method relating to the segmentation of the macroblock with reference to the defined macroblock-segmentation–prediction-method pairs; and produce prediction motion coefficients for blocks within said macroblock using the determined prediction method.

4. A decoder according to claim 3, further arranged to: receive information about difference motion coefficients describing the motion of blocks within a macroblock, said difference motion coefficients having been obtained by estimating the motion of the blocks within the macroblock with respect to a reference video frame, representing the motion of the blocks within the macroblock with a model comprising a set of basis functions and motion coefficients, and representing the motion coefficients thus obtained as a sum of prediction motion coefficients and difference motion coefficients; and reconstruct motion coefficients using the prediction motion coefficients and the difference motion coefficients.

5.
A computer program for operating a computer as an encoder according to claim 1.

6. A computer program according to claim 5, embodied on a computer readable medium.

7. A computer program for operating a computer as a decoder according to claim 3.

8. A computer program according to claim 7, embodied on a computer readable medium.

9. A storage device comprising an encoder according to claim 1.

10. A mobile station comprising an encoder according to claim 1.

11. A mobile station comprising a decoder according to claim 3.

12. A network element comprising an encoder according to claim 1.

13. A network element according to claim 12, wherein the network element is a network element of a mobile telecommunication network.

14. An encoder according to claim 1, wherein at least one of the available macroblock-segmentation–prediction-method pairs specifies that the prediction motion coefficients for a block within said macroblock are derived from the motion coefficients of only one prediction block.

15. An encoder according to claim 1, wherein at least one of the available macroblock-segmentation–prediction-method pairs specifies that the prediction motion coefficients for a block within said macroblock are derived from the motion coefficients of at least a first prediction block and a second prediction block.

16. An encoder according to claim 15, arranged to derive the prediction motion coefficients for a block from a median of the motion coefficients of at least a first prediction block and a second prediction block.

17. An encoder according to claim 1, wherein at least one of the available macroblock-segmentation–prediction-method pairs specifies that the prediction motion coefficients for a block within said macroblock are derived from motion coefficients of prediction blocks within said macroblock.

18.
An encoder according to claim 1, arranged to derive prediction motion coefficients for a block using a prediction block that comprises a certain predetermined pixel, whose location is defined relative to said block.

19. An encoder according to claim 18, wherein the location of a predetermined pixel for a first prediction block is different from the location of a predetermined pixel for a second prediction block.

20. An encoder according to claim 15, wherein the number of prediction blocks used by any of the macroblock-segmentation–prediction-method pairs is limited to a predetermined maximum number.

21. An encoder according to claim 20, arranged to use a maximum of three prediction blocks.

22. An encoder according to claim 1, arranged to select the macroblock-segmentation–prediction-method pair for a macroblock in dependence on the macroblock segmentation of neighbouring macroblocks.

23. An encoder according to claim 1, arranged to select a macroblock-segmentation–prediction-method pair responsive to minimizing a cost function.

24. An encoder according to claim 1, arranged to define one macroblock-segmentation–prediction-method pair for each available macroblock segmentation.

25. An encoder according to claim 24, further arranged to transmit information indicating the selected macroblock segmentation to a corresponding decoder.

26. An encoder according to claim 1, further arranged to transmit information indicating the selected macroblock-segmentation–prediction-method pair to a corresponding decoder.

27. An encoder according to claim 1, further arranged to: estimate the motion of blocks within a macroblock with respect to a reference video frame; represent the motion of the blocks within the macroblock with a model comprising a set of basis functions and motion coefficients; and represent the motion coefficients thus obtained as a sum of the prediction motion coefficients and difference motion coefficients.

28.
An encoder according to claim 27, arranged to select a macroblock-segmentation–prediction-method pair by minimizing a cost function that includes at least a measure of a reconstruction error relating to the macroblock-segmentation–prediction-method pair and a measure of an amount of information required to indicate the macroblock-segmentation–prediction-method pair and to represent the difference motion coefficients of the blocks within said macroblock.

29. An encoder according to claim 27, wherein the model used to represent the motion of a block is a translational motion model.

30. An encoder according to claim 28, further arranged to transmit information indicating the selected macroblock-segmentation–prediction-method pair and information about the difference motion coefficients to a decoder.

31. An encoder according to claim 27, further arranged to: reconstruct the motion of the blocks using the motion coefficients, basis functions and information about the macroblock segmentation; determine predicted video information using the reference video frame and the reconstructed motion of the blocks; determine corresponding prediction error video information based on a difference between the predicted video information and the video information of the macroblock; code the prediction error video information and represent it with prediction error coefficients; and transmit information about the prediction error coefficients to a corresponding decoder.

32. A decoder according to claim 3, arranged to receive an indication of a selected macroblock-segmentation–prediction-method pair.

33. A decoder according to claim 4, further arranged to: receive information about prediction error coefficients describing prediction error video information; and determine a decoded piece of current video information using at least the reconstructed motion coefficients and the prediction error video information.

34.
A video signal encoded at least in part using motion compensated prediction by deriving prediction motion coefficients for blocks within a macroblock of a video frame from motion coefficients of at least one prediction block that is a previously encoded macroblock or block within said video frame, the prediction motion coefficients having been derived by: specifying a certain number of available macroblock segmentations that define possible ways in which a macroblock can be segmented into blocks; specifying at least one available prediction method for each available macroblock segmentation, thereby providing a certain finite number of available macroblock-segmentation–prediction-method pairs, each prediction method defining a method for deriving prediction motion coefficients for blocks within a macroblock using motion coefficients of at least one prediction block; selecting a macroblock-segmentation–prediction-method pair among the available macroblock-segmentation–prediction-method pairs; segmenting a macroblock using the macroblock segmentation specified by the selected macroblock-segmentation–prediction-method pair; and deriving prediction motion coefficients for blocks within said macroblock using the prediction method specified by the selected macroblock-segmentation–prediction-method pair, the video signal including information indicating the selected macroblock segmentation.

35.
A video signal encoded at least in part using motion compensated prediction by deriving prediction motion coefficients for blocks within a macroblock of a video frame from motion coefficients of at least one prediction block that is a previously encoded macroblock or block within said video frame, the prediction motion coefficients having been derived by: specifying a certain number of available macroblock segmentations that define possible ways in which a macroblock can be segmented into blocks; specifying at least one available prediction method for each available macroblock segmentation, thereby providing a certain finite number of available macroblock-segmentation–prediction-method pairs, each prediction method defining a method for deriving prediction motion coefficients for blocks within a macroblock using motion coefficients of at least one prediction block; selecting a macroblock-segmentation–prediction-method pair among the available macroblock-segmentation–prediction-method pairs; segmenting a macroblock using the macroblock segmentation specified by the selected macroblock-segmentation–prediction-method pair; and deriving prediction motion coefficients for blocks within said macroblock using the prediction method specified by the selected macroblock-segmentation–prediction-method pair, the video signal including information indicating the selected macroblock-segmentation–prediction-method pair.

36.
A video signal encoded at least in part using motion compensated prediction by: estimating the motion of blocks within a macroblock of a video frame with respect to a reference video frame; representing the motion of the blocks within the macroblock with a model comprising a set of basis functions and motion coefficients; deriving prediction motion coefficients for blocks within the macroblock of the video frame from motion coefficients of at least one prediction block that is a previously encoded macroblock or block within said video frame, the prediction motion coefficients being derived by: specifying a certain number of available macroblock segmentations that define possible ways in which a macroblock can be segmented into blocks; specifying at least one available prediction method for each available macroblock segmentation, thereby providing a certain finite number of available macroblock-segmentation–prediction-method pairs, each prediction method defining a method for deriving prediction motion coefficients for blocks within a macroblock using motion coefficients of at least one prediction block; selecting a macroblock-segmentation–prediction-method pair among the available macroblock-segmentation–prediction-method pairs; segmenting a macroblock using the macroblock segmentation specified by the selected macroblock-segmentation–prediction-method pair; and deriving prediction motion coefficients for blocks within said macroblock using the prediction method specified by the selected macroblock-segmentation–prediction-method pair; and representing the obtained motion coefficients as a sum of the prediction motion coefficients and difference motion coefficients, the video signal including information enabling identification of the selected macroblock-segmentation–prediction-method pair and information about the difference motion coefficients.
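The median derivation of claim 16, the three-block limit of claim 21, and the prediction/difference split of claims 2 and 27 can be sketched together. This assumes a translational motion model (claim 29) with two coefficients (dx, dy) per block; the choice of left/above/above-right neighbours is an illustrative assumption, not specified by the claims.

```python
# Sketch of median motion-coefficient prediction over up to three prediction
# blocks, with a translational model: two coefficients (dx, dy) per block.
from statistics import median

def predict_motion_coefficients(prediction_blocks):
    """Component-wise median of the prediction blocks' (dx, dy) coefficients."""
    assert 1 <= len(prediction_blocks) <= 3  # at most three prediction blocks
    dx = median(b[0] for b in prediction_blocks)
    dy = median(b[1] for b in prediction_blocks)
    return (dx, dy)

def difference_coefficients(actual, predicted):
    """Difference coefficients, so that actual = predicted + difference."""
    return (actual[0] - predicted[0], actual[1] - predicted[1])

# Illustrative neighbouring prediction blocks (hypothetical positions/values).
left, above, above_right = (2, -1), (3, 0), (7, -1)
pred = predict_motion_coefficients([left, above, above_right])
print(pred)                                    # -> (3, -1)
print(difference_coefficients((4, -1), pred))  # -> (1, 0)
```

Only the difference coefficients (plus the pair selection) need to be transmitted; the decoder repeats the same prediction and adds the received difference back, as claims 3 and 4 describe from the decoding side.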