Method and system for generating an entirely well-focused image of a large three-dimensional scene
IPC Classification
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06K-009/00; G06K-009/62
Application number: US-0680478 (2008-09-26)
Registration number: US-8331627 (2012-12-11)
International application number: PCT/SG2008/000366 (2008-09-26)
§371/§102 date: 2010-06-18
International publication number: WO2009/041918 (2009-04-02)
Inventors: Xiong, Wei; Tian, Qi; Lim, Joo Hwee
Applicant: Agency for Science, Technology and Research
Agent: Christie, Parker & Hale, LLP
Citation information: cited by 3 patents; cites 8 patents
Abstract
A method and system for generating an entirely well-focused image of a three-dimensional scene. The method comprises the steps of a) learning a prediction model including at least a focal depth probability density function (PDF), h(k), for all depth values k, from historical tiles of the scene; b) predicting the possible focal surfaces in subsequent tiles of the scene by applying the prediction model; c) for each value of k, examining h(k) such that if h(k) is below a first threshold, no image is acquired at the depth k′ for said one tile; and if h(k) is above or equal to a first threshold, one or more images are acquired in a depth range around said value of k for said one tile; and d) processing the acquired images to generate a pixel focus map for said one tile.
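The acquisition logic in steps a)-c) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the function and parameter names (`plan_acquisition`, `first_threshold`, `half_range`) and the example PDF values are assumptions chosen for illustration.

```python
import numpy as np

def plan_acquisition(h, first_threshold=0.05, half_range=1):
    """Return the sorted depth indices at which images should be acquired.

    h is the learned focal depth PDF h(k) over all depth values k.
    Depths where h(k) falls below the first threshold are skipped;
    depths at or above it trigger acquisition in a small depth range
    around k (here, +/- half_range depth steps).
    """
    depths = set()
    for k, hk in enumerate(h):
        if hk >= first_threshold:
            lo = max(0, k - half_range)
            hi = min(len(h) - 1, k + half_range)
            depths.update(range(lo, hi + 1))
    return sorted(depths)

# Example: a PDF concentrated around depths 3-5
h = np.array([0.0, 0.0, 0.02, 0.30, 0.40, 0.25, 0.03, 0.0])
h = h / h.sum()
print(plan_acquisition(h, first_threshold=0.05))  # → [2, 3, 4, 5, 6]
```

Because only depths near the predicted focal surfaces are imaged, the number of acquisitions per tile shrinks as the PDF becomes more concentrated, which is the efficiency gain the method targets.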
Representative Claims
1. A method of generating an entirely well-focused image of a three-dimensional scene, the method comprising the steps of:
a) learning a prediction model including at least a focal depth probability density function (PDF), h(k), for all depth values k, from one or more historical tiles of the scene;
b) predicting the possible focal surfaces in a subsequent tile of the scene by applying the prediction model;
c) for each value of k, examining h(k) such that if h(k) is below a first threshold, no image is acquired at the depth k′ for said one tile; and if h(k) is above or equal to the first threshold, one or more images are acquired in a depth range around said value of k for said one tile; and
d) processing the acquired images to generate a pixel focus map for said subsequent tile.

2. The method as claimed in claim 1, wherein step c) comprises examining h(k) such that if h(k) is below the first threshold, no image is acquired at the depth k′ for said one tile; and if h(k) is above or equal to the first threshold and below a second threshold, one or more images are acquired in the depth range around said value of k for said one tile using a first sampling rate; and if h(k) is above or equal to the second threshold, one or more images are acquired in the depth range around said value of k for said one tile using a second sampling rate higher than the first sampling rate.

3. The method as claimed in claim 1, further comprising updating the prediction model before steps a) to d) are applied to a next neighboring tile.

4. The method as claimed in claim 1, comprising the steps of:
i) for a first tile, acquiring images at equally spaced values of k and processing the acquired images to find a pixel focus map for said first tile;
ii) building the PDF based on said pixel focus map for said first tile;
iii) applying steps a) to d) for n consecutive neighboring tiles; and for a (n+1)th tile, returning to step i) treating the (n+1)th tile as the first tile.

5. The method as claimed in claim 1, wherein the PDF is a pre-learned model and/or a user-defined model.

6. The method as claimed in claim 1, wherein the prediction model comprises a structure component and a probabilistic component.

7. The method as claimed in claim 1, wherein the acquiring of images comprises capturing images or reading stored images.

8. The method as claimed in claim 1, wherein the method is applied to microscopy or photography.

9. The method as claimed in claim 1, wherein the learning of the prediction model comprises using spatial contextual information.

10. A system for generating an entirely well-focused image of a three-dimensional scene, the system comprising:
a learning unit for learning a prediction model including at least a focal depth probability density function (PDF), h(k), for all depth values k, from one or more historical tiles of the scene;
a prediction unit for predicting the possible focal surfaces in a subsequent tile of the scene by applying the prediction model;
a processing unit for, for each value of k, examining h(k) such that if h(k) is below a first threshold, no image is acquired at the depth k′ for said one tile; and if h(k) is above or equal to the first threshold, one or more images are acquired in a depth range around said value of k for said one tile; and for processing the acquired images to generate a pixel focus map for said subsequent tile.

11. A computer readable data medium having stored thereon a computer code means for instructing a computer to execute a method of generating an entirely well-focused image of a three-dimensional scene, the method comprising the steps of:
a) learning a prediction model including at least a focal depth probability density function (PDF), h(k), for all depth values k, from one or more historical tiles of the scene;
b) predicting the possible focal surfaces in a subsequent tile of the scene by applying the prediction model;
c) for each value of k, examining h(k) such that if h(k) is below a first threshold, no image is acquired at the depth k′ for said one tile; and if h(k) is above or equal to the first threshold, one or more images are acquired in a depth range around said value of k for said one tile; and
d) processing the acquired images to generate a pixel focus map for said subsequent tile.
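Claim 2's two-threshold refinement can be sketched as a per-depth decision on sampling rate. This is an illustrative sketch only: the function name, the threshold values, and the coarse/fine rate numbers are hypothetical choices, not values from the patent.

```python
def sampling_rate(hk, first_threshold=0.05, second_threshold=0.25,
                  coarse_rate=1, fine_rate=3):
    """Images per depth step to acquire around depth k, given h(k) = hk.

    Below the first threshold the depth is skipped; between the two
    thresholds a lower (coarse) sampling rate is used; at or above the
    second threshold a higher (fine) sampling rate is used.
    """
    if hk < first_threshold:
        return 0            # skip this depth entirely
    if hk < second_threshold:
        return coarse_rate  # first (lower) sampling rate
    return fine_rate        # second (higher) sampling rate

h = [0.0, 0.02, 0.10, 0.40, 0.25, 0.03]
print([sampling_rate(hk) for hk in h])  # → [0, 0, 1, 3, 3, 0]
```

The design choice here is simple stratification: acquisition effort is concentrated where the learned PDF says in-focus content is most likely, while depths of moderate probability still get sparse coverage as a safeguard.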
Patents cited by this patent (8)
Kondo Toshiharu (Kanagawa JPX) Kikuchi Akihiro (Chiba JPX) Kohashi Takashi (Chiba JPX) Kato Fumiaki (Chiba JPX) Hirota Katsuaki (Kanagawa JPX), Digital color video camera with auto-focus, auto-exposure and auto-white balance, and an auto exposure system therefor w.
Moezzi Saied ; Katkere Arun ; Jain Ramesh, Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional v.
Subbarao Muralidhara (Port Jefferson Station NY), Method and apparatus for determining the distances between surface-patches of a three-dimensional spatial scene and a ca.