Methods and systems for digitally re-mastering of 2D and 3D motion pictures for exhibition with enhanced visual quality
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
H04N-007/12
H04N-019/51
G06T-003/40
H04N-019/527
H04N-019/59
G06T-007/20
H04N-013/00
H04N-019/577
H04N-005/253
H04N-007/01
Application number
US-0160313
(2007-01-29)
Registration number
US-8842730
(2014-09-23)
International application number
PCT/IB2007/000188
(2007-01-29)
§371/§102 date
2008-08-22
International publication number
WO2007/085950
(2007-08-02)
Inventors / Address
Zhou, Samuel
Judkins, Paul
Ye, Ping
Applicant / Address
IMAX Corporation
Attorney / Address
Kilpatrick Townsend & Stockton LLP
Citation information
Times cited: 4
Patents cited: 104
Abstract
The present invention relates to methods and systems for the exhibition of a motion picture with enhanced perceived resolution and visual quality. The enhancement of perceived resolution is achieved both spatially and temporally. Spatial resolution enhancement creates image details using both temporal-based methods and learning-based methods. Temporal resolution enhancement creates synthesized new image frames that enable a motion picture to be displayed at a higher frame rate. The digitally enhanced motion picture is to be exhibited using a projection system or a display device that supports a higher frame rate and/or a higher display resolution than what is required for the original motion picture.
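The temporal enhancement the abstract describes synthesizes in-between frames so a motion picture can be shown at a higher frame rate. A minimal motion-compensated interpolation sketch in Python follows; the function name, the nearest-neighbour sampling, and the plain linear blend are illustrative assumptions, not the patent's half-motion-vector method:

```python
import numpy as np

def synthesize_midframe(f0, f1, mv, t=0.5):
    """Create an intermediate frame between f0 and f1 (hypothetical helper).

    f0, f1 : (H, W) grayscale frames as float arrays
    mv     : (H, W, 2) forward motion vectors (dy, dx) from f0 to f1
    t      : time position of the new frame in [0, 1]

    Each output pixel samples f0 displaced backward by t of the motion
    vector and f1 displaced forward by (1 - t) of it, then blends them
    linearly -- a crude stand-in for the claimed interpolation.
    """
    h, w = f0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample f0 at positions displaced by -t * mv (nearest neighbour).
    y0 = np.clip((ys - t * mv[..., 0]).round().astype(int), 0, h - 1)
    x0 = np.clip((xs - t * mv[..., 1]).round().astype(int), 0, w - 1)
    # Sample f1 at positions displaced by +(1 - t) * mv.
    y1 = np.clip((ys + (1 - t) * mv[..., 0]).round().astype(int), 0, h - 1)
    x1 = np.clip((xs + (1 - t) * mv[..., 1]).round().astype(int), 0, w - 1)
    return (1 - t) * f0[y0, x0] + t * f1[y1, x1]
```

Doubling a 24 fps sequence to 48 fps would then amount to inserting one `t = 0.5` frame between every original pair.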
Representative claims
1. A method for enhancing the quality of a motion picture image sequence, the method comprising: receiving an original motion picture image sequence comprising digital data of a plurality of image frames; creating additional image details at multiple levels of image details and generating a first enhanced image sequence by applying a spatial resolution enhancement process to the original image sequence; and generating a second enhanced image sequence by applying a temporal resolution enhancement process to the first enhanced image sequence using frame interpolation by adding at least one synthesized image frame to the first enhanced image sequence, wherein the at least one synthesized image frame is created based on computed local motion vectors determined by a voting-based method applied to multiple initial local motion estimates for every pixel at each level of a multi-level representation of each image frame of the first enhanced image sequence, wherein the temporal resolution enhancement process includes creating synthesized image frames based on motion estimates calculated using a local motion estimation process that comprises: calculating an edge mask map and a color segmentation map for each image frame; warping the edge mask map and the color segmentation map using a global motion estimate for every pixel of each image frame; generating multiple motion vectors for each pixel of each image frame using multiple local motion estimation methods; computing forward and backward motion vectors for each pixel; and applying a voting process to select a motion vector for each pixel, wherein the second enhanced image sequence has a greater frame rate than the original image sequence and the second enhanced image sequence has greater image detail than the original image sequence.

2. The method of claim 1, further comprising: synchronizing the second enhanced image sequence to an audio track for the original image sequence.

3.
The method of claim 1, wherein the original motion picture image sequence and the second enhanced image sequence are two-dimensional (2D) sequences.

4. The method of claim 1, wherein the original motion picture image sequence and the second enhanced image sequence are three-dimensional (3D) sequences.

5. The method of claim 1, wherein the original motion picture image sequence is in 3D and the second enhanced image sequence is in 2D.

6. The method of claim 1, further comprising: dividing the original motion picture image sequence into shots; and performing the enhancement processes on each shot.

7. The method of claim 1, wherein the original motion picture image sequence comprises a single shot.

8. The method of claim 1, further comprising: formatting the second enhanced image sequence to a display presentation format; and synchronizing the formatted enhanced image sequence to an audio track for the original image sequence.

9. The method of claim 1, wherein the spatial resolution enhancement process comprises: a motion-based spatial resolution enhancement process; and a learning-based resolution enhancement process.

10. The method of claim 9, further comprising: applying the motion-based spatial resolution process to a three-dimensional (3D) image sequence, wherein applying the motion-based spatial resolution process comprises: disparity estimation; disparity map regulation; and detail discovery.

11.
The method of claim 1, wherein the spatial resolution enhancement process is a learning-based spatial resolution enhancement process that comprises: generating a codebook comprising codewords, each codeword being associated with a pattern having a higher resolution than the plurality of image frames of the original motion picture image sequence; applying a clustering analysis to reduce the size of the codebook; upsizing an image frame of the original motion picture image sequence to the higher resolution, the image frame comprising a plurality of pixels; matching each pixel of the upsized image to a codeword; and generating an enhanced pixel by replacing a pixel by a central pixel of the pattern associated with a matching codeword.

12. The method of claim 1, wherein the spatial resolution enhancement process is a learning-based spatial resolution enhancement process that comprises: generating a codebook comprising codewords, each codeword being associated with a pattern having a higher resolution than the plurality of images of the original motion picture image sequence; applying a clustering analysis to reduce the size of the codebook; upsizing an image frame of the original motion picture image sequence to the higher resolution, the image frame comprising at least one block of pixels; matching each block of pixels of the upsized image to a codeword; replacing the block of pixels by the pattern associated with the matched codeword using a transformation process to create an enhanced block of pixels; applying a blending process to the enhanced block of pixels; and applying a temporal filtering process.

13. The method of claim 1, wherein the temporal resolution enhancement process further comprises: pre-processing; global motion estimation; local motion estimation; half-motion vector generation; and artifact repair by temporal consistency check.

14. The method of claim 13, wherein each image frame comprises a plurality of pixels, the global motion estimation comprising: a. computing a gradient cross correlation matrix for each pixel; b. calculating at least one feature point for each pixel based on the gradient cross correlation matrix; c. matching at least one of the calculated feature points to a feature point of a next frame in the image sequence; d. selecting at least four matched feature points; e. estimating global motion based on the selected feature points; and f. iteratively repeating steps d and e by selecting different matched feature points until a global motion estimate is obtained for each pixel.

15. The method of claim 13, wherein artifact repair by temporal consistency check comprises: identifying artifact pixels by half-accuracy masks; grouping artifact pixels by visibility; checking temporal consistency based on grouping; and automatically repairing temporally inconsistent artifact pixels.

16. The method of claim 13, further comprising a layered frame interpolation process.

17. The method of claim 1, wherein each image frame comprises a plurality of pixels; and wherein the voting-based method comprises: applying at least one local motion estimation method to each pixel; computing forward and backward motion vectors for each local motion estimation method; and selecting a motion vector for each pixel from the forward and backward motion vectors using the voting process.

18. The method of claim 1, wherein the at least one local motion estimation method comprises at least one of: a block matching method; a feature propagation method; a modified optical flow method; or an integral block matching method.

19. The method of claim 1, wherein the voting-based method is a pyramid voting-based method in which image data is represented in a multi-level structure and voting at a higher level of the multi-level structure produces a coarse version of the motion vectors that are refined to a lower level of the multi-level structure using interpolation.

20.
The method of claim 1, wherein the voting process comprises at least one of: selecting a motion vector by different edge values; selecting a motion vector by coherence check; selecting a motion vector by minimum matching errors; selecting a motion vector through forward and backward checks; or selecting a motion vector using color segmentation maps.

21. The method of claim 1, wherein creating the synthesized image frames comprises: generating half-motion vectors; determining a time interval of a synthesized image frame between a first image frame and a second image frame; assigning half-forward motion vectors and half-backward motion vectors to each pixel of the synthesized frame; and generating a half-accuracy mask corresponding to the synthesized frame, the half-accuracy mask marking a status of each pixel.

22. The method of claim 21, wherein the frame interpolation further comprises: receiving the half-motion vectors and half-accuracy masks generated from at least two image frames; creating the at least one synthesized image frame based, at least in part, on the half-motion vectors and half-accuracy masks; generating synthesized frame pixels by interpolation and averaging; inserting the synthesized frame pixels into the synthesized image frame; and generating the second enhanced image sequence having the at least one synthesized image frame.

23. The method of claim 22, further comprising: maintaining synchronization with an audio track of the original image sequence by adding the at least one synthesized image frame.

24. The method of claim 22, wherein the at least one synthesized image frame is created using more than two of the plurality of images.

25. The method of claim 1, further comprising removing artifacts by user interaction.

26. The method of claim 25, wherein removing artifacts comprises merging different versions of synthesized frames.

27.
The method of claim 1, wherein the global motion estimates are computed from a global motion estimation process that comprises: a. computing a gradient cross correlation matrix for each pixel; b. calculating at least one feature point for each pixel based on the gradient cross correlation matrix; c. matching at least one of the calculated feature points to a feature point of a next frame in the image sequence; d. selecting at least four matched feature points; e. estimating global motion based on the selected feature points; and f. iteratively repeating steps d and e by selecting different matched feature points until the global motion estimates are obtained for each pixel.

28. A system for enhancing the quality of a motion picture image sequence, the system comprising: a back-end subsystem comprising: a central data storage for storing an original motion picture image sequence comprising digital data of a plurality of image frames; a render client configured to: create additional image details at multiple levels of image details and generate a first enhanced image sequence by applying a spatial resolution process to the original motion picture image sequence; and generate a second enhanced image sequence by applying a temporal resolution enhancement process to the first enhanced image sequence using frame interpolation by adding at least one synthesized image frame to the first enhanced image sequence, wherein the render client is configured to create the at least one synthesized image frame based on computed local motion vectors determined by a voting-based method applied to multiple initial local motion estimates for every pixel at each level of a multi-level representation of each image frame of the first enhanced image sequence, wherein the temporal resolution enhancement process includes creating synthesized image frames based on motion estimates calculated using a local motion estimation process that comprises: calculating an edge mask map and a color segmentation map for each image frame; warping the edge mask map and the color segmentation map using a global motion estimate for every pixel of each image frame; generating multiple motion vectors for each pixel of each image frame using multiple local motion estimation methods; computing forward and backward motion vectors for each pixel; and applying a voting process to select a motion vector for each pixel; and an intelligent controller for controlling the render client and accessing the central data storage, wherein the second enhanced image sequence is configured to have a greater frame rate and greater detail than the original motion picture image sequence.

29. The system of claim 28, further comprising: a front-end subsystem comprising a workstation for communicating with the intelligent controller, the workstation being adapted to receive user input and interaction in repairing artifacts in the second enhanced image sequence, performing a quality control check of the second enhanced image sequence, and segmenting the original motion picture image sequence.

30. The system of claim 29, wherein the workstation comprises multiple workstations.

31. The system of claim 30, wherein at least one of the multiple workstations comprises a quality control workstation.

32. The system of claim 28, wherein the original motion picture image sequence and the second enhanced image sequence are in 2D.

33. The system of claim 28, wherein the original motion picture image sequence and the second enhanced image sequence are in 3D.

34. The system of claim 28, wherein the original motion picture image sequence is in 3D and the second enhanced image sequence is in 2D.

35. The system of claim 28, wherein the render client is adapted to examine the quality of the second enhanced image sequence.

36.
The system of claim 28, wherein the intelligent controller comprises a processor and a memory comprising executable code, the executable code comprising: a scheduler; an auto launch; a file setup; and a tape writer.

37. The system of claim 28, wherein the render client comprises multiple render clients.

38. The system of claim 37, wherein the intelligent controller is adapted to detect a system failure and shut down each render client and re-assign a job to each render client.

39. The system of claim 37, wherein the intelligent controller is adapted to monitor the render clients to prevent re-rendering of data.

40. A method for enhancing the quality of an original motion picture image sequence, the method comprising: receiving a three-dimensional (3D) original motion picture image sequence; applying a spatial resolution enhancement process to the 3D original image sequence to create an enhanced image sequence, the spatial resolution enhancement process comprising: a motion-based spatial resolution enhancement process; and a learning-based resolution enhancement process comprising: generating a codebook comprising codewords, each codeword being associated with a pattern having a higher resolution than the original image sequence; applying a clustering analysis to reduce the size of the codebook; upsizing an original image of the original image sequence to the higher resolution, the original image comprising a plurality of pixels; matching each pixel of the upsized image to a codeword; and replacing each pixel by a central pixel of the pattern associated with the matched codeword, wherein the enhanced image sequence has greater image detail than the original image sequence.

41. The method of claim 40, further comprising: applying the motion-based spatial resolution process to a 3D image sequence, wherein applying the motion-based spatial resolution process comprises: disparity estimation; disparity map regulation; and detail discovery.

42.
A method for enhancing the quality of an original motion picture image sequence, the method comprising: receiving a three-dimensional (3D) original motion picture image sequence; applying a spatial resolution enhancement process to the 3D original image sequence to create an enhanced image sequence, the spatial resolution enhancement process comprising: a motion-based spatial resolution enhancement process; and a learning-based resolution enhancement process comprising: generating a codebook comprising codewords, each codeword being associated with a pattern having a higher resolution than the original image sequence; applying a clustering analysis to reduce the size of the codebook; upsizing an original image of the original image sequence to a higher resolution, the original image comprising at least one block of pixels; matching each block of pixels of the upsized image to a codeword; replacing the block of pixels by the pattern associated with the matched codeword using a transformation process to create an enhanced block of pixels; applying a blending process to the enhanced block of pixels; and applying a temporal filtering process, wherein the enhanced image sequence has greater image detail than the original image sequence.
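Claims 1, 17, and 20 describe generating several candidate motion vectors per pixel with different estimation methods and letting a voting process select one, using criteria such as matching error and forward/backward checks. A toy sketch of such a voter, assuming just two of the claimed criteria (minimum matching error and forward-backward coherence) and hypothetical array shapes:

```python
import numpy as np

def vote_motion_vectors(candidates_fwd, candidates_bwd, errors):
    """Pick one motion vector per pixel from several candidates.

    candidates_fwd : (K, H, W, 2) forward vectors from K estimation methods
    candidates_bwd : (K, H, W, 2) backward vectors from the same methods
    errors         : (K, H, W)    matching error of each candidate

    A candidate is penalised when its forward and backward vectors are
    inconsistent (they should roughly cancel); among the candidates the
    one with the lowest combined score wins.  This is a simplified
    stand-in for the multi-criteria voting the claims describe.
    """
    # Forward/backward inconsistency: |v_fwd + v_bwd| should be near 0.
    inconsistency = np.linalg.norm(candidates_fwd + candidates_bwd, axis=-1)
    score = errors + inconsistency          # lower is better
    winner = np.argmin(score, axis=0)       # (H, W) index of chosen method
    h, w = winner.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return candidates_fwd[winner, ys, xs]   # (H, W, 2) selected vectors
```

The pyramid variant of claim 19 would run this voter at each level of a multi-resolution representation, interpolating the coarse winners down to seed the next finer level.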
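Claims 11, 40, and 42 describe a learning-based enhancement that matches each pixel's low-resolution neighbourhood to a codeword and substitutes the high-resolution central pixel stored with it. A bare-bones illustration with brute-force nearest-codeword search (the claimed clustering, blending, and temporal-filtering steps are omitted, and the helper names are assumptions):

```python
import numpy as np

def build_codebook(pairs):
    """Build a codebook from training data.

    pairs: list of (patch, center) tuples, where patch is a flattened
    low-resolution neighbourhood and center is the corresponding
    high-resolution central pixel value learned from training images.
    """
    patches = np.stack([p for p, _ in pairs])
    centers = np.array([c for _, c in pairs])
    return patches, centers

def enhance_pixel(neigh, codebook):
    """Replace a pixel by the high-res centre of its best-matching codeword."""
    patches, centers = codebook
    dists = np.linalg.norm(patches - neigh, axis=1)   # nearest codeword
    return centers[np.argmin(dists)]
```

In the claimed method the codebook would first be shrunk by clustering analysis so that the per-pixel nearest-neighbour search stays tractable at film resolution.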
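Steps a and b of claims 14 and 27 derive feature points from a per-pixel gradient cross correlation matrix, which reads like a Harris-style corner measure. A small sketch under that interpretation (the 3x3 window and the constant `k` are assumptions):

```python
import numpy as np

def box3(a):
    """Sum each pixel's 3x3 neighbourhood (zero padding at the borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.05):
    """Corner strength from the gradient cross-correlation matrix.

    For each pixel the matrix M = [[sum Ix*Ix, sum Ix*Iy],
    [sum Ix*Iy, sum Iy*Iy]] is accumulated over a 3x3 window, and
    det(M) - k * trace(M)^2 is returned; strongly positive responses
    mark candidate feature points to match across frames.
    """
    iy, ix = np.gradient(img.astype(float))
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2
```

Steps c through f of the claims would then match these points between frames and fit a global motion model from at least four matches, re-selecting matches until the estimate converges.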
Patents cited by this patent (104)
Sohn,Young wook, Apparatus and method for converting frame rate.
Geshwind, David Michael, Interactive computer system for creating three-dimensional image information and for converting two-dimensional image information for three-dimensional display systems.
Lubin Jeffrey ; Brill Michael Henry ; De Vries Aalbert ; Finard Olga, Method and apparatus for assessing the visibility of differences between two image sequences.
Lubin Jeffrey ; Brill Michael H. ; Pica Albert P. ; Crane Roger L. ; Paul Walter B. ; Gendel Gary A., Method and apparatus for assessing the visibility of differences between two signal sequences.
Hanna Keith James ; Kumar Rakesh ; Bergen James Russell ; Sawhney Harpreet Singh ; Lubin Jeffrey, Method and apparatus for enhancing regions of aligned images using flow estimation.
Black, Michael J.; Ju, Xuan; Minneman, Scott; Kimber, Donald G., Method and apparatus for generating a condensed version of a video sequence including desired affordances.
Lubin Jeffrey ; Peterson Heidi A. ; Spence Clay D. ; de Vries Aalbert, Method and apparatus for training a neural network to learn and use fidelity metric as a control mechanism.
Kaye, Michael C.; Best, Charles J. L.; Haynes, Robby R., Method and system for creating realistic smooth three-dimensional depth contours from two-dimensional images.
Kumar Rakesh ; Hanna Keith James ; Bergen James R. ; Anandan Padmanabhan ; Irani Michal, Method and system for image combination using a parallax-based technique.
Kumar, Rakesh; Hanna, Keith James; Bergen, James R.; Anandan, Padmanabhan; Williams, Kevin; Tinker, Mike, Method and system for rendering and combining images to form a synthesized view of a scene containing image information from a second image.
Burt Peter J. (Mercer County NJ) van der Wal Gooitzen S. (Mercer NJ) Kolczynski Raymond J. (Mercer NJ) Hingorani Rajesh (Mercer NJ), Method for fusing images and apparatus therefor.
Burt Peter J. (Princeton NJ) van der Wal Gooitzen S. (Hopewell Borough ; Mercer County NJ) Kolczynski Raymond J. (Hamilton Township ; Mercer County NJ) Hingorani Rajesh (West Windsor Township ; Merce, Method for fusing images and apparatus therefor.
Weisgerber Robert C. (245 E. 93d St. ; Suite 32A New York NY 10128), Method for imparting both high-impact cinematic images and conventional cinematic images on the same strip of motion pic.
Sezan Muhammed I. (Rochester NY) Ozkan Mehmet K. (Rochester NY) Fogel Sergei V. (Rochester NY), Method for temporally adaptive filtering of frames of a noisy image sequence using motion estimation.
Weisgerber Robert C. (246 E. 93rd St. New York NY 10128), Method for transitioning between two different frame rates during a showing of a single motion picture film.
Geshwind David M. (184-14 Midland Pkwy. Jamaica NY 11432) Handal Anthony H. (Blue Chip La. Westport CT 06880), Method to convert two dimensional motion pictures for three-dimensional systems.
Brill Michael Henry (Morrisville PA) Lubin Jeffrey (Middlesex NJ), Methods and apparatus for assessing the visibility of differences between two image sequences.
Boice Charles E. ; Hall Barbara A. ; Kaczmarczyk John M. ; Ngai Agnes Yee ; Pokrinchak Stephen P., Real-time encoding of video sequence employing two encoders and statistical analysis.
Burt Peter J. (Princeton NJ) Irani Michal (Princeton Junction NJ) Hsu Stephen Charles (East Windsor NJ) Anandan Padmanabhan (Lawrenceville NJ) Hansen Michael W. (New Hope PA), System for automatically aligning images to form a mosaic image.
Zhou, Samuel; Ye, Ping; Judkins, Paul, Systems and methods for digitally re-mastering or otherwise modifying motion pictures or other image sequences data.
Quinones Armando J. (Cincinnati OH) Rieck ; Jr. Harold P. (West Chester OH) Albrecht Richard W. (Fairfield OH) Sullivan Michael A. (Ballston Spa NY) Weisgerber Robert H. (Loveland OH) Plemmons Larry , Turbine disk cooling system.
Meade Robert J. (West Chester OH) Albrecht Richard W. (Fairfield OH) Weisgerber Robert H. (Loveland OH), Turbine disk interstage seal axial retaining ring.
Weisgerber Robert H. (Loveland OH) Albrecht Richard W. (Fairfield OH) Glynn Christopher C. (Hamilton OH) Kutney ; Jr. John T. (Cincinnati OH) Meade Robert J. (West Chester OH), Turbine disk interstage seal system.