Primary and auxiliary image capture devices for image processing and related methods
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): H04N-015/00; H04N-013/02
Application number: US-0115589 (2011-05-25)
Registration number: US-8274552 (2012-09-25)
Inventors: Dahi, Bahram; McNamer, Michael; Izzat, Izzat H.; Markas, Tassos
Applicant: 3DMedia Corporation
Agent: Olive Law Group, PLLC
Citation information: cited by 17 patents; cites 129 patents
Abstract
Disclosed herein are primary and auxiliary image capture devices for image processing and related methods. According to an aspect, a method may include using primary and auxiliary image capture devices to perform image processing. The method may include using the primary image capture device to capture a first image of a scene, the first image having a first quality characteristic. Further, the method may include using the auxiliary image capture device to capture a second image of the scene. The second image may have a second quality characteristic. The second quality characteristic may be of lower quality than the first quality characteristic. The method may also include adjusting at least one parameter of one of the captured images to create a plurality of adjusted images for one of approximating and matching the first quality characteristic. Further, the method may include utilizing the adjusted images for image processing.
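The abstract describes adjusting at least one parameter of a captured image so that the lower-quality auxiliary image approximates or matches the primary image's quality characteristic. A minimal NumPy-only sketch of two such adjustments (resolution upscaling and global color-statistics matching) is shown below; this is an illustration, not the patented method, and the helper names `upscale_nearest` and `match_color_stats` are my own.

```python
import numpy as np

def upscale_nearest(img, factor):
    """Nearest-neighbour upscaling, e.g. to match the primary sensor's resolution."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def match_color_stats(src, ref):
    """Shift/scale each channel of src so its mean and standard deviation
    approximate those of ref (a simple global color transfer)."""
    src = src.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * r_sd + r_mu
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy data: the auxiliary image is half-resolution and darker than the primary.
rng = np.random.default_rng(0)
primary = rng.integers(100, 200, size=(8, 8, 3), dtype=np.uint8)
auxiliary = (primary[::2, ::2] * 0.5).astype(np.uint8)

adjusted = match_color_stats(upscale_nearest(auxiliary, 2), primary)
print(adjusted.shape)  # (8, 8, 3): now at the primary image's resolution
```

The adjusted image then has roughly the primary image's resolution and color statistics, which is the precondition for the feature matching and disparity steps in the claims.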
Representative claims
1. A method for using primary and auxiliary image capture devices, each including an image sensor and a lens, to perform image processing, the method comprising: using the primary image capture device to capture a first image of a scene, the first image having a first quality characteristic; using the auxiliary image capture device to capture a second image of the scene, the second image having a second quality characteristic, and the second quality characteristic being of lower quality than the first quality characteristic; adjusting at least one parameter of one of the captured images to create at least one adjusted image for one of approximating and matching a quality characteristic of the other image; forming an adjusted image pair that includes two images from a first group of images including the adjusted and captured images; extracting image features using one or more pixels from the images in the adjusted image pair; matching extracted image features between the images in the adjusted image pair; computing a transformation to align at least one of the matched image features between the images in the adjusted image pair; applying the transformation to at least one of the images in the adjusted image pair to align the images in the adjusted image pair; forming a transformed image pair by selecting two images from a second group of images including the transformed, adjusted, and captured images; and utilizing the transformed image pair to create another image segment utilizing lower quality pixels obtained directly from their corresponding locations from the transformed image pair and higher quality pixels obtained by utilizing disparity information and higher quality pixels from the transformed image pair to replace corresponding pixels on a lower quality image in response to determining that a corresponding image segment of the lower quality image does not meet predefined quality criteria, wherein the replaced corresponding pixels are non-occluded pixels.

2. The method of claim 1, wherein adjusting at least one parameter comprises adjusting one of size and scaling of one of the captured images to match a resolution and approximate a field of view of other captured images.

3. The method of claim 2, further comprising using focal lengths and optical properties of the primary and auxiliary image capture devices to adjust one of a size and scaling of the second image to match a resolution and approximate a field of view of other captured images.

4. The method of claim 2, further comprising: identifying distortion characteristics of the image capture devices; and performing a transformative procedure to correct distortion for equalizing the first and second images.

5. The method of claim 1, wherein adjusting the at least one parameter comprises adjusting a color of one image to one of match and approximate a color of the other image.

6. The method of claim 5, further comprising: identifying regions of an overlapping field of view of the image capture devices; extracting color properties of the regions; and performing color matching and correction operations to equalize the first and second images.

7. The method of claim 1, further comprising performing one of a registration process and rectification process on one of the captured and adjusted images combined.

8. The method of claim 1, further comprising generating a disparity map for a horizontal positional offset of pixels of the at least one stereoscopic still image.

9. The method of claim 8, further comprising using one of the disparity map, optical properties of the primary image capture device, and one of the set of captured and adjusted images to calculate a predetermined stereo base for capturing side-by-side images using the primary image capture device.

10. The method of claim 1, further comprising: using the disparity information to generate a depth map for the scene; and using the depth map to apply a depth image based rendering (DIBR) technique for generating a second view for a stereoscopic image pair.

11. The method of claim 10, further comprising scaling depth values of the depth map to create a stereoscopic pair representing a virtual stereo base being different from a true separation of the primary and auxiliary image capture devices.

12. The method of claim 1, further comprising: evaluating the disparity map information to identify objects with large disparity; performing an object manipulation technique to position the identified objects at a predetermined depth and location using data from the auxiliary image capture device; and filling occlusion zones using the data from the auxiliary image capture device.

13. The method of claim 1, further comprising: using the auxiliary image capture device to capture a plurality of other images of the scene; determining a motion blur kernel based on the plurality of other images; and applying the motion blur kernel to remove blur from the first image of the scene.

14. The method of claim 1, wherein the first image and the second image are each a frame of video captured by the primary image capture device and the auxiliary image capture device, respectively, and wherein the method further comprises matching the frames of the video based on a time of capture.

15. The method of claim 1, wherein using the primary image capture device comprises using the primary image capture device to capture the first image with a first predetermined exposure level on a main point-of-interest in the scene; wherein using the auxiliary image capture device comprises using the auxiliary image capture device to capture the second image with a second predetermined exposure level on one of a dark area and a bright area of the scene; adjusting at least one parameter of the second image to one of approximate and match a quality characteristic of the second image with the first image; and generating a still image based on the first image and the adjusted second image, to create a single image with higher dynamic range.

16. The method of claim 1, wherein using the primary image capture device comprises using the primary image capture device to capture the first image of a scene with a predetermined focal distance on a main point-of-interest; wherein using the auxiliary image capture device comprises using the auxiliary image capture device to capture the second image of the scene with a different focal distance than the predetermined focal distance; and wherein the method further comprises adjusting at least one parameter of the captured images to create a focus stacking image.

17. The method of claim 1, wherein the primary and auxiliary image capture devices are components of a mobile telephone, wherein the auxiliary image capture device is configured to face in a first position towards a user and to face in a second position towards the scene, and wherein the auxiliary image capture device includes a mechanism for directing a path of light from the scene towards the auxiliary image capture device such that the primary image capture device and the auxiliary image capture device capture the first and second images of the scene.

18. The method of claim 1, further comprising: determining a quality measurement of the stereoscopic image pair; and determining whether the quality measurement meets one or more quality criteria.

19. The method of claim 18, wherein determining the quality measurement includes determining the quality measurement of at least one image portion of the stereoscopic image pair, and wherein the method further comprises identifying the at least one image portion in response to determining that the quality measurement does not meet the one or more quality criteria.

20. The method of claim 19, further comprising replacing, in response to determining that the quality measurement does not meet the one or more quality criteria, pixels of the at least one image portion with pixels synthesized using the depth of the corresponding location and pixels from a higher quality image to meet the one or more quality criteria.

21. The method of claim 19, further comprising replacing, in response to determining that the quality measurement does not meet the one or more quality criteria, pixels of the at least one image portion with pixels copied from a higher quality image to meet the one or more quality criteria.

22. The method of claim 19, further comprising scaling depth values of the depth map to create another stereoscopic image pair representing a virtual stereo base, different from a true separation of the primary and auxiliary image capture devices, to meet the one or more quality criteria.

23. The method of claim 19, further comprising: performing an object manipulation technique to reposition identified image portions that do not meet the one or more quality criteria at a predetermined depth and location for meeting the one or more quality criteria; and filling empty areas in the identified image portions using the data from one of the two images of the stereoscopic image pair.

24. The method of claim 1, wherein the primary and auxiliary image capture devices are components of one of a digital camera and a camcorder, wherein the primary capture device includes a high-quality sensor and a lens assembly with variable focal length, and the auxiliary image capture device includes a lower quality sensor and a smaller lens with fixed focal length.

25. The method of claim 1, wherein the replaced corresponding pixels are matched pixels.

26. The method of claim 1, wherein creation of the other image segment is based on one of: image characteristics of the primary and auxiliary image capture devices; a depth budget of an image pair; and differences of pixel values.
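The middle steps of claim 1 (extract features, match them, compute a transformation, apply it to align the pair) can be illustrated under strong simplifying assumptions. The sketch below recovers a pure integer translation with FFT phase correlation; this is a stand-in for the claimed feature-based alignment, not the patented method, and `estimate_shift` is an illustrative helper of my own.

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer (dy, dx) translation that maps `ref` onto
    `moving` via FFT phase correlation: the peak of the normalized
    cross-power spectrum's inverse transform sits at the offset."""
    cross = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Unwrap offsets larger than half the image size to negative shifts.
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(1)
ref = rng.random((32, 32))                   # stand-in for the primary image
moving = np.roll(ref, (3, -5), axis=(0, 1))  # auxiliary image with a known offset
dy, dx = estimate_shift(ref, moving)         # recovers (3, -5)
aligned = np.roll(moving, (-dy, -dx), axis=(0, 1))
```

Once `aligned` and `ref` form the transformed image pair, per-pixel disparity can be computed and used, as in the claims, to replace low-quality segments with higher-quality pixels.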
Patents cited by this patent (129)
Sullivan, Steve; Trombla, Alan D.; Callari, Francesco G., 2D to 3D image conversion.
Kim Man-bae, KRX; Song Mun-sup, KRX; Kim Do-kyoon, KRX, Apparatus and method for converting two-dimensional image sequence into three-dimensional image using conversion of motion disparity into horizontal disparity and post-processing method during genera.
Bacs ; Jr. Aron (Burke VA) Mayhew Christopher A. (Oakton VA) Fernekes Leo M. (New York NY) Buchroeder Richard A. (Tucson AZ) Rublowsky Stefan J. (Brooklyn NY), Autostereoscopic imaging apparatus and method using a parallax scanning lens aperture.
Jayavant, Rajeev; Nuechterlein, David W., Circuitry and systems for performing two-dimensional motion compensation using a three-dimensional pipeline and methods of operating the same.
Haruhiko Murata JP; Yukio Mori JP; Shuugo Yamashita JP; Akihiro Maenaka JP; Seiji Okada JP; Kanji Ihara JP, Device and method for converting two-dimensional video into three-dimensional video.
Hanna Keith James ; Kumar Rakesh ; Bergen James Russell ; Sawhney Harpreet Singh ; Lubin Jeffrey, Method and apparatus for enhancing regions of aligned images using flow estimation.
Herman ; deceased Joshua Randy ; Bergen James Russell ; Peleg Shmuel,ILX ; Paragano Vincent ; Dixon Douglas F. ; Burt Peter J. ; Sawhney Harpreet ; Gendel Gary A. ; Kumar Rakesh ; Brill Michael H., Method and apparatus for mosaic image construction.
Azarbayejani Ali (Cambridge MA) Galyean Tinsley (Cambridge MA) Pentland Alex (Cambridge MA), Method and apparatus for three-dimensional, textured models from plural video images.
Thier Uri (West Hartford CT) Thier Oren (West Hartford CT) Woodbury William (Gainesville FL), Method for controlling a 3D patch-driven special effects system.
Wong, Earl Q.; Nakamura, Makibi; Kushida, Hidenori; Triteyaprasert, Soroj, Method of and apparatus for generating a depth map utilized in autofocusing.
Nakagawa Yasuo (Chigasaki PA JPX) Nayer Shree K. (Pittsburgh PA), Method of detecting solid shape of object with autofocusing and image detection at each focus level.
Kaye, Michael C.; Best, Charles J. L., Method of hidden surface reconstruction for creating accurate three-dimensional images converted from two-dimensional images.
Choquet Bruno (Rennes FRX) Pele Danielle (Rennes FRX) Chassaing Francoise (La Chapelle des Fougeretz FRX), Method of processing and transmitting over a "MAC"-type channel a sequence of pairs of stereoscopic television images.
Yukinori Matsumoto JP; Hajime Terasaki JP; Kazuhide Sugimoto JP; Masazumi Katayama JP; Tsutomu Arakawa JP; Osamu Suzuki JP, Methods for creating an image for a three-dimensional display, for calculating depth information and for image processing using the depth information.
Rubbert, Rüdger; Weise, Thomas; Sporbert, Peer; Imgrund, Hans; Kouzian, Dimitrij, Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects.
Matsumura, Koichi; Baumberg, Adam Michael; Lyons, Alexander Ralph; Nagasawa, Kenichi; Saito, Takashi, Photographing apparatus, device and method for obtaining images to be used for creating a three-dimensional model.
Routhier, Nicholas; Thibeault, Claude; Belzile, Jean; Malouin, Daniel; Carpentier, Pierre Paul; Dallaire, Martin, Process and system for encoding and playback of stereoscopic video sequences.
Routhier, Nicholas; Thibeault, Claude; Belzile, Jean; Malouin, Daniel; Carpentier, Pierre-Paul; Dallaire, Martin, Process and system for encoding and playback of stereoscopic video sequences.
Zhang, Zhengyou; Anandan, Padmanabhan; Shum, Heung-Yeung, System and method for determining structure and motion using multiples sets of images from different projection models for object modeling.
Wetzel, Arthur W.; Gilbertson, II, John R.; Beckstead, Jeffrey A.; Feineigle, Patricia A.; Hauser, Christopher R.; Palmieri, Jr., Frank A., System for creating microscopic digital montage images.
McNamer, Michael; Robers, Marshall; Markas, Tassos; Hurst, Jason Paul, Methods, systems, and computer-readable storage media for selecting image capture positions to generate three-dimensional images.
Shabtay, Gal; Cohen, Noy; Geva, Nadav; Gigushinski, Oded; Goldenberg, Ephraim, Thin multi-aperture imaging system with auto-focus and methods for using same.