Stereoscopic display system with flexible rendering of disparity map according to the stereoscopic fusing capability of the observer
IPC Classification Information
Country / Type: United States (US) Patent, Registered
International Patent Classification (IPC, 7th ed.): G06T-015/10
Application number: US-0789272 (2004-02-27)
Registration number: US-8094927 (2012-01-10)
Inventors:
Jin, Elaine W.
Miller, Michael E.
Endrikhovski, Serguei
Cerosaletti, Cathleen D.
Applicant: Eastman Kodak Company
Agent: Spaulding, Kevin E.
Citation information
Cited by: 9
Patents cited: 27
Abstract
A method is provided for customizing scene content, according to a user or a cluster of users, for a given stereoscopic display, including obtaining customization information about the user; obtaining a scene disparity map for a pair of given stereo images and/or a three-dimensional (3D) computer graphic model; and determining an aim disparity range for the user. The method of the present invention also generates a customized disparity map and/or rendering conditions for a three-dimensional (3D) computer graphic model correlating with the user's fusing capability of the given stereoscopic display; and renders or re-renders the stereo images for subsequent display.
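The core idea of the abstract, remapping a scene's disparities into the range a given user can comfortably fuse, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the patent's implementation: it assumes a dense disparity map stored as a NumPy array (negative values for crossed/near disparity, positive for uncrossed/far) and applies a simple linear transformation, one of the mapping-function options the claims mention.

```python
import numpy as np

def remap_disparity(scene_disp, user_min, user_max):
    """Linearly remap a scene disparity map into a user's aim
    (comfortably fusable) disparity range [user_min, user_max].
    Negative disparity = crossed (near), positive = uncrossed (far)."""
    s_min, s_max = scene_disp.min(), scene_disp.max()
    if s_max == s_min:  # flat scene: place everything mid-range
        return np.full_like(scene_disp, (user_min + user_max) / 2.0)
    scale = (user_max - user_min) / (s_max - s_min)
    return user_min + (scene_disp - s_min) * scale

# The same scene rendered for two users with different fusing capability
scene = np.array([[-30.0, 0.0], [15.0, 60.0]])   # disparities in pixels
expert = remap_disparity(scene, -20.0, 40.0)     # wide aim range
novice = remap_disparity(scene, -5.0, 10.0)      # narrow aim range
```

The remapped maps preserve depth ordering but compress (or expand) the depth budget to fit each user, which is exactly the per-user customization the abstract describes.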
Representative Claims
1. A method for producing pairs of stereo images customized for individual users from an input stereoscopic image, comprising the steps of:
a) obtaining customization information including a first stereoscopic disparity range for a first individual user, wherein the stereoscopic disparity range for the first individual user is the range of disparities in a stereoscopic image that the first individual user can comfortably fuse, and corresponds to a range of apparent depths in the stereoscopic image that the first individual user can comfortably view;
b) obtaining a scene disparity map for the input stereoscopic image, wherein the input stereoscopic image includes at least one of a given pair of stereo images or a given three-dimensional (3D) computer graphic model;
c) determining a first aim disparity range for a first customized pair of stereo images responsive to the first stereoscopic image disparity range for the first individual user and the obtained scene disparity map;
d) at least one of generating a first customized disparity map responsive to the first aim disparity range for the first individual user or generating first customized rendering conditions for a first three-dimensional (3D) computer graphic model responsive to the first aim disparity range for the first individual user;
e) using a digital image processor to produce a first customized pair of stereo images for subsequent display by using the first customized disparity map or the first customized rendering conditions for the first three-dimensional (3D) computer graphic model;
f) displaying the first customized pair of stereo images to the first individual user on a stereoscopic display device;
g) obtaining customization information including a second stereoscopic disparity range for a second individual user, wherein the second stereoscopic disparity range for the second individual user is the range of disparities in a stereoscopic image that the second individual user can comfortably fuse, and corresponds to a range of apparent depths in the stereoscopic image that the second individual user can comfortably view, the second stereoscopic disparity range being different from the first stereoscopic disparity range;
h) determining a second aim disparity range for a second customized pair of stereo images responsive to the second stereoscopic image disparity range for the second individual user and the obtained scene disparity map;
i) at least one of generating a second customized disparity map responsive to the second aim disparity range for the second individual user or generating second customized rendering conditions for a second three-dimensional (3D) computer graphic model responsive to the second aim disparity range for the second individual user;
j) using a digital image processor to produce a second customized pair of stereo images for subsequent display by using the second customized disparity map or the second customized rendering conditions for the second three-dimensional (3D) computer graphic model, wherein the second customized pair of stereo images are different from the first customized pair of stereo images; and
k) displaying the second customized pair of stereo images to the second individual user on a stereoscopic display device.

2. The method claimed in claim 1, wherein the step of obtaining the scene disparity map includes obtaining a scene convergence point and depth information from the 3D computer graphics model.

3. The method claimed in claim 1, wherein the step of generating the first customized disparity map or the second customized disparity map includes applying a predetermined mapping function to modify the scene disparity map.

4. The method claimed in claim 3, wherein the predetermined mapping function is dependent on a region of interest.

5. The method claimed in claim 4, wherein the region of interest is dynamic.

6. The method claimed in claim 1, wherein the step of generating the first customized disparity map or the second customized disparity map is accomplished by applying a linear transformation to the corresponding first scene disparity map or second scene disparity map.

7. The method claimed in claim 1, wherein the step of generating the first customized disparity map or the second customized disparity map is accomplished by applying a non-linear transformation to the corresponding first scene disparity map or second scene disparity map.

8. The method claimed in claim 4, wherein the region of interest is based upon a measurement of fixation position.

9. The method claimed in claim 4, wherein the region of interest is based upon a map of probable fixations.

10. The method claimed in claim 1, wherein the step of generating the first customized rendering conditions or the second customized rendering conditions includes computing a location, an orientation, a focal distance, a magnification and a depth of field correlating to a pair of simulated cameras.

11. The method claimed in claim 1, wherein the first customized rendering conditions or the second customized rendering conditions are generated by modifying one or more of a set of correlating camera parameters including camera location, orientation, focal distance, magnification or depth of field.

12. The method of claim 1, wherein the stereoscopic disparity range for the first individual user or the second individual user is characterized by a user-specific crossed disparity upper limit and a user-specific uncrossed disparity upper limit, and wherein the crossed disparity upper limit corresponds to the image disparity for the closest apparent object distance that can be comfortably viewed by the individual user in a stereoscopic image viewed on the stereoscopic display device, and the user-specific uncrossed disparity upper limit corresponds to the image disparity for the farthest apparent object distance that can be comfortably viewed by the individual user in a stereoscopic image viewed on the stereoscopic display device.

13. The method claimed in claim 1, wherein the customization information for the first individual user or the second individual user further includes at least one of a user profile or a rendering intent subject to a predetermined task choice or skill level.

14. A stereoscopic display system customized for an individual user's perceptual characteristics for stereoscopic viewing, comprising:
a) a stereoscopic image source that provides different stereoscopic images for each of a plurality of user categories, each user category corresponding to a cluster of users having common perceptual characteristics for stereoscopic viewing and being characterized by a category-specific stereoscopic disparity range limit, the stereoscopic disparity range limit being the range of disparities in a stereoscopic image that the cluster of users can comfortably fuse, wherein the stereoscopic images for each user category are rendered according to the corresponding category-specific stereoscopic disparity range;
b) a stereoscopic display device; and
c) a data processor for associating a first individual user with a first one of the plurality of user categories according to the individual user's perceptual characteristics for stereoscopic viewing; associating a second individual user with a second one of the plurality of user categories according to the individual user's perceptual characteristics for stereoscopic viewing; receiving first and second stereoscopic images from the stereoscopic image source corresponding to the associated first and second user categories; displaying the first received stereoscopic image on the stereoscopic display device for the first user; and displaying the second received stereoscopic image on the stereoscopic display device for the second user.

15. The stereoscopic display system of claim 14, wherein the first or second individual user is associated with one of the plurality of user categories by characterizing the individual user's perceptual characteristics for stereoscopic viewing and determining the user category that most closely matches the user's perceptual characteristics for stereoscopic viewing.
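Claims 10 and 11 cover generating rendering conditions by adjusting simulated camera parameters for a 3D graphics model. The following sketch shows one way such parameters could be derived; the function name, the shifted-sensor pinhole model d(Z) = f·b·(1/Zc − 1/Z), and the strategy of solving for baseline and convergence distance are assumptions for illustration, not the patent's actual method.

```python
def stereo_camera_params(z_near, z_far, crossed_limit, uncrossed_limit, focal_px):
    """Choose a convergence distance zc and inter-camera baseline b for a
    simulated stereo camera pair so that, under a shifted-sensor pinhole
    model  d(Z) = f * b * (1/zc - 1/Z),  the rendered screen disparities
    span exactly the user's aim range [-crossed_limit, +uncrossed_limit]
    (pixels) between scene depths z_near and z_far.
    Solving d(z_near) = -crossed_limit and d(z_far) = +uncrossed_limit:"""
    c, u = crossed_limit, uncrossed_limit
    inv_zc = (u / z_near + c / z_far) / (u + c)   # convergence plane
    baseline = c / (focal_px * (1.0 / z_near - inv_zc))
    return baseline, 1.0 / inv_zc

# A nearby scene (1 m to 10 m) rendered for a user who can fuse
# 20 px of crossed and 40 px of uncrossed disparity at f = 1000 px:
b, zc = stereo_camera_params(1.0, 10.0, 20.0, 40.0, 1000.0)
```

A narrower aim range yields a smaller baseline (a flatter-looking render), which is the per-user trade-off the claims formalize.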
Patents cited by this patent (27)
Perez, Mario A.; Mei, Betty Z.; Swan, Michael D., Charged microfibers, microfibrillated articles and use thereof.
Kane, Paul J.; Cosgrove, Patrick A.; Cerosaletti, Cathleen D., Stereoscopic display system with flexible rendering for multiple simultaneous observers.