System and method for image and video encoding artifacts reduction and quality improvement
Bibliographic Information

Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06K-009/40; H04N-005/00
Application No.: US-0221011 (2008-07-29)
Registration No.: US-8139883 (2012-03-20)
Inventors: Zhang, Ximin; Liu, Ming-Chang
Applicant: Sony Corporation
Agent: Haverstock & Owens LLP
Citation Information: cited by 7 patents; cites 18 patents
Abstract
Reducing artifacts and improving quality for image and video encoding is performed in one pass to preserve natural edge smoothness and sharpness. To reduce artifacts and improve quality, several steps are implemented including spatial variation extraction, determining if a block is flat or texture/edge, classifying the pixels as texture or noise, detecting a dominant edge, checking the spatial variation of neighboring blocks, generating base weights, generating filter coefficients, filtering pixels and adaptive enhancement. A device which utilizes the method of reducing artifacts and improving quality achieves higher quality images and/or video with reduced artifacts.
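The block classification step described in the abstract (spatial variation extraction, then a flat versus texture/edge decision) can be sketched as follows. This is a minimal illustration, not the patented implementation: the intensity range is the maximum minus the minimum pixel value in the block, per claims 2-4, but the threshold value of 32 is an assumption, since the claims leave the threshold unspecified.

```python
import numpy as np

def classify_block(block: np.ndarray, threshold: int = 32) -> str:
    """Classify a block as 'flat' or 'texture/edge' from its spatial variation.

    The spatial variation (intensity range) is the maximum pixel intensity
    minus the minimum pixel intensity within the block; a block is flat when
    the range is at most the threshold. The threshold of 32 is illustrative.
    """
    spatial_variation = int(block.max()) - int(block.min())
    return "flat" if spatial_variation <= threshold else "texture/edge"
```

For example, a uniform 8x8 block classifies as flat, while a block containing a step edge classifies as texture/edge.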
Representative Claims
1. A method of reducing image/video encoding artifacts and improving quality within a computing device, the method comprising: a. classifying each block in an image/video using pixel intensity range-based block classification; b. detecting a dominant edge using vector difference-based dominant edge detection; c. generating content adaptive base weights using content adaptive base weight generation; d. generating content adaptive filter weights using content adaptive final filter weight generation; and e. enhancing textures and edges using adaptive texture and edge enhancement.

2. The method of claim 1 wherein classifying each block includes classifying a block as a flat block by comparing an intensity range with a threshold and determining the intensity range is less than or equal to the threshold.

3. The method of claim 1 wherein classifying each block includes classifying a block as a texture/edge block by comparing an intensity range with a threshold and determining the intensity range is greater than the threshold.

4. The method of claim 1 wherein an intensity range of the pixel intensity range-based block classification includes: a. determining a maximum intensity pixel value within a current block; b. determining a minimum intensity pixel value within the current block; and c. calculating a spatial variation by subtracting the minimum intensity pixel value from the maximum intensity pixel value.

5. The method of claim 1 further comprising classifying each pixel in a current block.

6. The method of claim 5 wherein classifying each pixel comprises: a. calculating a median of an intensity range within the current block; b. attributing a pixel in the current block to a first class if the pixel's intensity value is larger than the median; and c. attributing the pixel in the current block to a second class if the pixel's intensity value is less than or equal to the median.

7.
The method of claim 6 wherein classifying each pixel further comprises comparing a pixel's class with a class of neighboring pixels and denoting the pixel as noisy if the pixel's class and the class of the neighboring pixels is the same.

8. The method of claim 1 wherein the vector difference-based dominant edge detection includes calculating a vector difference between two pixels.

9. The method of claim 1 wherein the vector difference-based dominant edge detection includes: a. determining a minimum difference among four directional differences; b. selecting a maximum difference with a direction orthogonal to the minimum difference direction; c. calculating an edge difference; and d. determining if a pixel belongs to a dominant edge.

10. The method of claim 1 wherein the content adaptive base weight generation includes calculating base weights for noisy pixels, edge pixels and texture pixels.

11. The method of claim 1 wherein the content adaptive base weight generation includes calculating base weights for a texture pixel with flat neighbor blocks.

12. The method of claim 1 wherein the content adaptive base weight generation includes adjusting base weights for chrominance blocks.

13. The method of claim 1 wherein the content adaptive final filter weight generation includes calculating a final weight for a texture/edge block.

14. The method of claim 1 wherein the content adaptive final filter weight generation includes calculating a final weight for a flat block.

15. The method of claim 1 wherein the adaptive texture and edge enhancement includes applying sharpness enhancement on denoted edge/texture pixels.

16. The method of claim 1 wherein the adaptive texture and edge enhancement includes: a. not applying sharpness enhancement if a current pixel belongs to a flat block; b. not applying sharpness enhancement if the current pixel is a noisy pixel; c. enhancing sharpness if the current pixel belongs to a dominant edge; and d.
enhancing sharpness if the current pixel has not been filtered.

17. The method of claim 1 wherein the computing device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an iPod®, a video player, a DVD writer/player, a television and a home entertainment system.

18. A method of reducing image/video encoding artifacts and improving quality within a computing device, the method comprising: a. classifying each block within an image/video; b. determining filter coefficients for each pixel in a current block, wherein determining the filter coefficients includes detecting a dominant edge and classifying pixel intensity for a texture/edge block; c. filtering each pixel with the filter coefficients to remove noise; and d. applying an adaptive enhancement to enhance texture and edge pixels.

19. The method of claim 18 wherein classifying each block includes classifying a block as one of a flat block and a texture/edge block.

20. The method of claim 19 wherein classifying each block includes classifying a block as the flat block by comparing an intensity range with a threshold and determining the intensity range is less than or equal to the threshold.

21. The method of claim 19 wherein classifying each block includes classifying a block as the texture/edge block by comparing an intensity range with a threshold and determining the intensity range is greater than the threshold.

22. The method of claim 21 wherein determining the intensity range of the pixel includes: a. determining a maximum intensity pixel value within a current block; b. determining a minimum intensity pixel value within the current block; and c.
calculating a spatial variation by subtracting the minimum intensity pixel value from the maximum intensity pixel value.

23. The method of claim 18 further comprising classifying each pixel comprising: a. calculating a median of an intensity range within the current block; b. attributing a pixel in the current block to a first class if the pixel's intensity value is larger than the median; and c. attributing the pixel in the current block to a second class if the pixel's intensity value is less than or equal to the median.

24. The method of claim 23 wherein classifying each pixel further comprises comparing a pixel's class with a class of neighboring pixels and denoting the pixel as noisy if the pixel's class and the class of the neighboring pixels is the same.

25. The method of claim 18 wherein determining the filter coefficients includes determining intensity difference between the current pixel and neighboring pixels for a flat block.

26. The method of claim 18 wherein detecting the dominant edge includes calculating a vector difference between two pixels.

27. The method of claim 18 wherein detecting the dominant edge includes: a. determining a minimum difference among four directional differences; b. selecting a maximum difference with a direction orthogonal to the minimum difference direction; c. calculating an edge difference; and d. determining if a pixel belongs to the dominant edge.

28. The method of claim 18 wherein applying the adaptive enhancement includes applying sharpness enhancement on denoted edge/texture pixels.

29. A method of reducing image/video encoding artifacts and improving quality within a computing device, the method comprising: a. classifying each block within an image/video; b. determining filter coefficients for each pixel in a current block; c. filtering each pixel with the filter coefficients to remove noise; and d. applying an adaptive enhancement to enhance texture and edge pixels, wherein applying the adaptive enhancement includes: i.
not applying sharpness enhancement if a current pixel belongs to a flat block; ii. not applying sharpness enhancement if the current pixel is a noisy pixel; iii. enhancing sharpness if the current pixel belongs to a dominant edge; and iv. enhancing sharpness if the current pixel has not been filtered.

30. The method of claim 18 wherein the computing device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an iPod®, a video player, a DVD writer/player, a television and a home entertainment system.

31. A system for reducing image/video encoding artifacts and improving quality implemented with a computing device, the system comprising: a. a spatial variation extraction module configured for extracting spatial variation information for each block of an image/video; b. a block detection module operatively coupled to the spatial variation extraction module, the block detection module configured for classifying each block using the spatial variation information; c. a classification module operatively coupled to the spatial variation extraction module, the classification module configured for classifying each pixel in a current block using the spatial variation information; d. a dominant edge detection module operatively coupled to the block detection module, the dominant edge detection module configured for detecting a dominant edge; e. a neighbor block check module operatively coupled to the dominant edge detection module, the neighbor block check module configured for checking spatial variation of neighboring blocks of the current block; f.
a base weight generation module operatively coupled to the neighbor block check module, the base weight generation module configured for generating base weights by combining results from detecting the dominant edge and checking the spatial variation of neighboring blocks; g. a filter coefficients generation module operatively coupled to the base weight generation module, the filter coefficients generation module configured for generating filter coefficients; h. a pixel-based filtering module operatively coupled to the filter coefficients generation module, the pixel-based filtering module configured for filtering pixels; and i. an adaptive enhancement module operatively coupled to the pixel-based filtering module, the adaptive enhancement module configured for applying an adaptive enhancement to enhance texture and edge pixels.

32. The system of claim 31 wherein the block detection module classifies a block as a flat block by comparing an intensity range with a threshold and determining the intensity range is less than or equal to the threshold.

33. The system of claim 31 wherein the block detection module classifies a block as a texture/edge block by comparing an intensity range with a threshold and determining the intensity range is greater than the threshold.

34. The system of claim 31 wherein the spatial variation information comprises an intensity range.

35. The system of claim 34 wherein determining the intensity range of the pixel includes: a. determining a maximum intensity pixel value within a current block; b. determining a minimum intensity pixel value within the current block; and c. calculating a spatial variation by subtracting the minimum intensity pixel value from the maximum intensity pixel value.

36. The system of claim 31 wherein the classification module classifies each pixel by: a. calculating a median of an intensity range within the current block; b.
attributing a pixel in the current block to a first class if the pixel's intensity value is larger than the median; and c. attributing the pixel in the current block to a second class if the pixel's intensity value is less than or equal to the median.

37. The system of claim 36 wherein classifying each pixel further comprises comparing a pixel's class with a class of neighboring pixels and denoting the pixel as noisy if the pixel's class and the class of the neighboring pixels is the same.

38. The system of claim 31 wherein the dominant edge detector module detects the dominant edge by calculating a vector difference between two pixels.

39. The system of claim 31 wherein the dominant edge detector module detects the dominant edge by: a. determining a minimum difference among four directional differences; b. selecting a maximum difference with a direction orthogonal to the minimum difference direction; c. calculating an edge difference; and d. determining if a pixel belongs to the dominant edge.

40. The system of claim 31 wherein the base weight generation module calculates the base weights for noisy pixels, edge pixels and texture pixels.

41. The system of claim 31 wherein the base weight generation module calculates the base weights for a texture pixel with flat neighbor blocks.

42. The system of claim 31 wherein the base weight generation module adjusts the base weights for chrominance blocks.

43. The system of claim 31 wherein the adaptive enhancement module applies sharpness enhancement on denoted edge/texture pixels.

44. The system of claim 31 wherein applying the adaptive enhancement includes: a. not applying sharpness enhancement if a current pixel belongs to a flat block; b. not applying sharpness enhancement if the current pixel is a noisy pixel; c. enhancing sharpness if the current pixel belongs to a dominant edge; and d. enhancing sharpness if the current pixel has not been filtered.

45.
The system of claim 31 wherein the spatial variation extraction module, the block detection module, the classification module, the dominant edge detection module, the neighbor block check module, the base weight generation module, the filter coefficients generation module, the pixel-based filtering module and the adaptive enhancement module are implemented in software.

46. The system of claim 31 wherein the spatial variation extraction module, the block detection module, the classification module, the dominant edge detection module, the neighbor block check module, the base weight generation module, the filter coefficients generation module, the pixel-based filtering module and the adaptive enhancement module are implemented in hardware.

47. The system of claim 31 wherein the spatial variation extraction module, the block detection module, the classification module, the dominant edge detection module, the neighbor block check module, the base weight generation module, the filter coefficients generation module, the pixel-based filtering module and the adaptive enhancement module are implemented in at least one of software, firmware and hardware.

48. The system of claim 31 wherein the computing device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an iPod®, a video player, a DVD writer/player, a television and a home entertainment system.

49. A device comprising: a. a memory for storing an application, the application configured for: i. extracting spatial variation information for each block of an image/video; ii. classifying each block using the spatial variation information; iii. classifying each pixel in a current block using the spatial variation information; iv. detecting a dominant edge; v.
checking spatial variation of neighboring blocks of the current block; vi. generating base weights by combining results from detecting the dominant edge and checking the spatial variation of neighboring blocks; vii. generating filter coefficients; viii. filtering pixels using the filter coefficients; and ix. applying an adaptive enhancement to enhance texture and edge pixels; and b. a processing component coupled to the memory, the processing component configured for processing the application.

50. The device of claim 49 wherein classifying each block includes classifying a block as a flat block by comparing an intensity range with a threshold and determining the intensity range is less than or equal to the threshold.

51. The device of claim 49 wherein classifying each block includes classifying a block as a texture/edge block by comparing an intensity range with a threshold and determining the intensity range is greater than the threshold.

52. The device of claim 49 wherein the spatial variation information comprises an intensity range.

53. The device of claim 52 wherein determining the intensity range of the pixel includes: a. determining a maximum intensity pixel value within a current block; b. determining a minimum intensity pixel value within the current block; and c. calculating a spatial variation by subtracting the minimum intensity pixel value from the maximum intensity pixel value.

54. The device of claim 49 wherein classifying each pixel comprises: a. calculating a median of an intensity range within the current block; b. attributing a pixel in the current block to a first class if the pixel's intensity value is larger than the median; and c. attributing the pixel in the current block to a second class if the pixel's intensity value is less than or equal to the median.

55.
The device of claim 54 wherein classifying each pixel further comprises comparing a pixel's class with a class of neighboring pixels and denoting the pixel as noisy if the pixel's class and the class of the neighboring pixels is the same.

56. The device of claim 49 wherein detecting the dominant edge includes calculating a vector difference between two pixels.

57. The device of claim 49 wherein detecting the dominant edge includes: a. determining a minimum difference among four directional differences; b. selecting a maximum difference with a direction orthogonal to the minimum difference direction; c. calculating an edge difference; and d. determining if a pixel belongs to the dominant edge.

58. The device of claim 49 wherein generating the base weights includes calculating the base weights for noisy pixels, edge pixels and texture pixels.

59. The device of claim 49 wherein generating the base weights includes calculating the base weights for a texture pixel with flat neighbor blocks.

60. The device of claim 49 wherein generating the base weights includes adjusting the base weights for chrominance blocks.

61. The device of claim 49 wherein applying the adaptive enhancement includes applying sharpness enhancement on denoted edge/texture pixels.

62. The device of claim 49 wherein applying the adaptive enhancement includes: a. not applying sharpness enhancement if a current pixel belongs to a flat block; b. not applying sharpness enhancement if the current pixel is a noisy pixel; c. enhancing sharpness if the current pixel belongs to a dominant edge; and d. enhancing sharpness if the current pixel has not been filtered.

63.
The device of claim 49 wherein the device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an iPod®, a video player, a DVD writer/player, a television and a home entertainment system.
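The vector-difference-based dominant edge test recited in claims 9, 27, and 57 can be sketched for a single pixel as follows. This is a hedged illustration, not the patented implementation: the four directional operators over a 3x3 neighborhood, the orthogonal pairing of horizontal/vertical and diagonal/anti-diagonal, the edge difference taken as the gap between the orthogonal maximum and the minimum, and the threshold of 48 are all assumptions, since the claims do not publish these particulars.

```python
import numpy as np

# Assumed orthogonal pairs: horizontal <-> vertical, diagonal <-> anti-diagonal.
ORTHOGONAL = {"h": "v", "v": "h", "d": "a", "a": "d"}

def dominant_edge(patch: np.ndarray, edge_threshold: float = 48.0) -> bool:
    """Test whether the centre pixel of a 3x3 patch lies on a dominant edge.

    Follows the outline of claim 9: (a) find the minimum of four directional
    differences, (b) take the difference in the orthogonal direction,
    (c) compute an edge difference (assumed here to be the gap between the
    two), and (d) threshold it. Operators and threshold are illustrative.
    """
    diffs = {
        "h": abs(float(patch[1, 0]) - float(patch[1, 2])),  # horizontal neighbours
        "v": abs(float(patch[0, 1]) - float(patch[2, 1])),  # vertical neighbours
        "d": abs(float(patch[0, 0]) - float(patch[2, 2])),  # main diagonal
        "a": abs(float(patch[0, 2]) - float(patch[2, 0])),  # anti-diagonal
    }
    min_dir = min(diffs, key=diffs.get)      # a. minimum directional difference
    ortho_diff = diffs[ORTHOGONAL[min_dir]]  # b. difference orthogonal to it
    edge_diff = ortho_diff - diffs[min_dir]  # c. edge difference (assumed form)
    return edge_diff > edge_threshold        # d. dominant-edge decision
```

A vertical step edge gives a small vertical difference but a large horizontal (orthogonal) one, so the pixel is flagged; a flat patch gives no difference in any direction and is not.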
Patents Cited by This Patent (18)
Katayama, Tatsushi; Takiguchi, Hideo; Yano, Kotaro; Hatori, Kenji (JP), Apparatus and method for combining a plurality of images.
Drexler, Michael (Hemmingen, DE); Keesen, Heinz-Werner (Hanover, DE); Herpel, Carsten (Hanover, DE), Method of making a hierarchical estimate of image motion in a television signal.
Chevance, Christophe (FR); François, Edouard (FR); Thoreau, Dominique (FR); Viellard, Thierry (FR), Process for estimating a dominant motion between two frames.