Method for image processing and reconstruction of images for optical tomography
IPC Classification Information
Country / Type
United States (US) Patent
Registered
International Patent Classification (IPC, 7th Edition)
G06K-009/00
G01N-021/00
Application Number
UP-0750924
(2007-05-18)
Registration Number
US-7835561
(2011-01-16)
Inventors / Address
Meyer, Michael G.
Rahn, J. Richard
Fauver, Mark E.
Applicant / Address
VisionGate, Inc.
Agent / Address
Citadel Patent Law
Citation Information
Times cited: 5
Cited patents: 90
Abstract
A method for reconstructing three-dimensional (3D) tomographic images. A set of pseudo-projection images of an object is acquired. Error corrections are applied to the set of pseudo-projection images to produce a set of corrected pseudo-projection images. The set of corrected pseudo-projection images are processed to produce (3D) tomographic images.
Representative Claims
What is claimed is: 1. A method for reconstructing three-dimensional (3D) tomographic images comprising: acquiring a set of pseudo-projection images of at least one object; applying error corrections to the set of pseudo-projection images to produce a set of corrected pseudo-projection images, where the error corrections correct for registration error effects by measuring a center of mass of the at least one object in each projection image, and by correcting for axial registration errors by shifting the at least one object in each pseudo-projection image so that the axial component of the center of mass for the object is aligned to a common axial position in each of the set of corrected pseudo-projection images; and processing the set of corrected pseudo-projection images to produce (3D) tomographic images. 2. The method of claim 1 wherein applying error corrections also comprises correcting for at least one of illumination error effects and extinction coefficient effects. 3. The method of claim 1 wherein applying error corrections also comprises correcting for centering errors by shifting the object in the set of pseudo-projection images so that a vertical component of the center of mass is aligned to the centerline of the image. 4. The method of claim 1 wherein the set of pseudo-projection images include at least one pair of pseudo-projection images at opposing viewing angles and wherein applying error corrections further comprises: applying a shift to a first member of the at least one pair of pseudo-projection images, relative to a second member of the at least one pair of pseudo-projection images; combining the first and second members of the at least one pair of pseudo-projection images; and measuring the combined shifted pseudo-projection images to determine whether at least a selected one of errors including centering errors and axial registration errors are a minimum. 5. 
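The center-of-mass registration correction of claim 1 can be sketched in code. The following is a hypothetical numpy implementation, not the patented embodiment: the function name, the intensity-weighted center-of-mass measure, and the choice of the mean center of mass as the common axial position are all assumptions.

```python
import numpy as np

def axial_registration_correction(pseudo_projections, axis=0):
    """Shift each pseudo-projection so the axial component of the object's
    center of mass is aligned to a common axial position (claim 1 sketch)."""
    coms = []
    for img in pseudo_projections:
        total = img.sum()
        coords = np.arange(img.shape[axis])
        # Collapse the non-axial direction to get an axial intensity profile.
        profile = img.sum(axis=1 - axis)
        coms.append((coords * profile).sum() / total)
    # Assumed common axial position: the mean center of mass of the set.
    target = np.mean(coms)
    corrected = []
    for img, com in zip(pseudo_projections, coms):
        shift = int(round(target - com))
        corrected.append(np.roll(img, shift, axis=axis))
    return corrected
```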
The method of claim 1 wherein each of the set of pseudo-projection images is acquired from an illuminated region along an optical axis and includes a plurality of pixels, each pixel having a grey scale value, wherein applying error corrections comprises altering each grey scale value mapped to an illumination gradient representing a change of grey scale values for locations along the optical axis. 6. The method of claim 5 in which the illuminated region includes illuminated fluorophores. 7. The method of claim 5 in which the illuminated region includes saturated fluorophores. 8. The method of claim 5 wherein grey scale values in each of the set of pseudo-projection images are modified by weighting them according to the formula Ω(T)=[1−gR sin(T)]−1 so that the intensity for pixel coordinates X and Y, J(X, Y; T), used for post-acquisition processing of a pseudo-projection is given by J(X,Y;T)=I(X,Y,T)Ω(T), where g is the illumination gradient; T is the rotation angle of the tube; I(X,Y,T) is the intensity captured by the optical system for a pseudo-projection acquired at angle T, for pixel coordinates X and Y in a blank region of the field of view; Z is the distance along the optical axis from the center of the scan range to the center of the tube; and R is the maximum distance from the center of the tube to the center of the scan range over the course of the angular rotation of the tube. 9. The method of claim 1 wherein each of the set of pseudo-projection images has a corrected illumination level estimated from a priori knowledge and wherein applying error corrections comprises applying the a priori knowledge of the position of the optical elements during acquisition of each of the set of pseudo-projection images. 10. The method of claim 1 wherein each of the set of pseudo-projection images has a corrected illumination level estimated from information contained within each of the set of pseudo-projection images. 11. 
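Claim 8's weighting formula, Ω(T) = [1 − gR sin(T)]⁻¹ with J(X,Y;T) = I(X,Y,T)·Ω(T), translates directly into code. This is a minimal sketch assuming angles in radians; the function names are illustrative, not from the patent.

```python
import numpy as np

def illumination_weight(T, g, R):
    """Omega(T) = 1 / (1 - g * R * sin(T)), the claim 8 weighting factor.

    T: tube rotation angle (radians); g: illumination gradient;
    R: maximum distance from tube center to scan-range center."""
    return 1.0 / (1.0 - g * R * np.sin(T))

def corrected_intensity(I, T, g, R):
    """J(X, Y; T) = I(X, Y, T) * Omega(T): apply the gradient correction
    to a captured pseudo-projection image I."""
    return I * illumination_weight(T, g, R)
```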
The method of claim 1 wherein acquiring a set of pseudo-projection images of an object comprises: dividing a projection scan of the object into a plurality of sectional pseudo-projections acquired at a plurality of perspective viewing angles; and summing the plurality of sectional pseudo-projections at each of the plurality of perspective viewing angles to form the set of pseudo-projections. 12. The method of claim 11 wherein the plurality of sectional pseudo-projections are weighted. 13. The method of claim 11 wherein the object comprises a biological cell. 14. The method of claim 11 wherein the plurality of sectional pseudo-projections includes at least two outer sections and a middle section, the method further comprising weighting the at least two outer sections with a lower value weight than the middle section. 15. The method of claim 1 where acquiring a set of pseudo-projection images comprises dividing a projection scan of an object into a plurality of sectional pseudo-projections acquired at a plurality of perspective viewing angles. 16. The method of claim 15 wherein processing comprises backprojecting each of the plurality of sectional pseudo-projections in a volume of reconstruction for forming a 3D reconstruction of the object. 17. The method of claim 15 wherein the sectional pseudo-projections are weighted. 18. The method of claim 15 wherein the object comprises a biological cell. 19. The method of claim 15 wherein the plurality of sectional pseudo-projections includes at least two outer sections and a middle section, the method further comprising weighting the at least two outer sections at a lower value than the middle section. 20. 
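The weighted summing of sectional pseudo-projections in claims 11-14 can be sketched as follows. The specific weight values, and the assumption that exactly the first and last sections are "outer", are illustrative; the claims only require the outer sections to be weighted lower than the middle section.

```python
import numpy as np

def sum_sectional_pseudo_projections(sections, outer_weight=0.5,
                                     middle_weight=1.0):
    """Sum sectional pseudo-projections taken at one viewing angle into a
    full pseudo-projection, weighting outer sections below the middle
    (claims 11-14 sketch)."""
    n = len(sections)
    weights = np.full(n, middle_weight)
    weights[0] = outer_weight    # first outer section
    weights[-1] = outer_weight   # last outer section
    stack = np.stack(sections)
    # Contract the weights against the section axis: a weighted sum.
    return np.tensordot(weights, stack, axes=1)
```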
The method of claim 1 where acquiring a set of pseudo-projection images comprises: scanning an objective lens through the object in an objective scan direction; measuring in-plane spatial frequency power spectra as a function of the location of the focal plane relative to the center of the object; and sectioning the scanned pseudo-projection. 21. The method of claim 20 wherein the object comprises a biological cell. 22. The method of claim 20 wherein measuring in-plane spatial frequency power spectra comprises: (a) imaging the object by scanning a focal plane to find a peak focus score; (b) acquiring an image stack through the entire object relative to a center, where the center is defined as a peak focus score; (c) calculating an in-plane spatial frequency power spectrum of each image in the stack; (d) repeating steps (a) through (d) for a plurality of objects; and (e) calculating in-plane spatial frequency power spectra for the plurality of objects to determine an average in-plane spatial frequency power spectrum for each location relative to the center. 23. The method of claim 1 where acquiring a set of pseudo-projection images comprises building a set of sectioned pseudo-projections from a plurality of sectioned scanned pseudo-projections acquired at different perspective views. 24. The method of claim 23 further comprising: filtering the set of sectioned pseudo-projections, using a plurality of filters computed by comparing off-center power spectra relative to the center power spectrum, to generate a plurality of filtered pseudo-projection data sets; passing each filtered pseudo-projection data set to a plurality of volumes of reconstruction; and summing the plurality of volumes of reconstruction to form a summed volume of reconstruction. 25. 
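Steps (c)-(e) of claim 22, computing in-plane spatial frequency power spectra and averaging them across objects, can be sketched like this. Each image stack is assumed to be already centered on its peak-focus plane, so the same stack index corresponds to the same axial offset across objects; the function names are assumptions.

```python
import numpy as np

def inplane_power_spectrum(image):
    """In-plane spatial frequency power spectrum of one 2D image."""
    F = np.fft.fft2(image)
    return np.abs(F) ** 2

def average_power_spectra(image_stacks):
    """Average power spectrum per axial location across several objects
    (claim 22, steps (c)-(e) sketch)."""
    spectra = [np.stack([inplane_power_spectrum(img) for img in stack])
               for stack in image_stacks]
    # Average over objects: one mean spectrum per location relative to center.
    return np.mean(spectra, axis=0)
```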
The method of claim 1 where processing the set of corrected pseudo-projection images to produce (3D) tomographic images comprises: calculating logarithmic values for the set of corrected pseudo-projection images; and applying backprojection to the logarithmic values to create a 3D tomographic reconstruction. 26. The method of claim 25 wherein applying backprojection includes applying a filtering operation to the logarithmic values of the at least one projection image. 27. The method of claim 1 wherein processing comprises weighting each pseudo-projection in the reconstructed tomographic image in inverse proportion to its departure from a baseline illumination level; and reconstructing the set of weighted pseudo-projections through backprojection and filtering of the images to create a reconstructed tomographic image. 28. A method for reducing registration errors in a projection image comprising: a) acquiring at least a pair of projection images, wherein the at least a pair of projection images are extended depth of field images; b) measuring the center of mass of at least one object of interest in each projection image; c) correcting for axial registration errors by shifting each projection image so that the axial component of the center of mass in each projection image is aligned to a common axial position; and d) correcting for centering errors by shifting each projection image so that the vertical component of the center of mass in each projection image is aligned to the centerline of the image. 29. The method of claim 28 wherein the at least a pair of projection images are pseudo-projections. 30. 
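Claim 25's processing step, taking logarithms of the corrected pseudo-projections and backprojecting them, can be sketched for a single 2D slice. This is an unfiltered nearest-neighbor backprojection assuming strictly positive pixel values; the filtering operation of claim 26 is omitted for brevity.

```python
import numpy as np

def log_backproject(pseudo_projections, angles):
    """Backproject logarithms of 1D pseudo-projections into a square 2D
    slice (claim 25 sketch, no filtering)."""
    n = pseudo_projections[0].shape[0]
    recon = np.zeros((n, n))
    center = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    for proj, theta in zip(pseudo_projections, angles):
        logp = np.log(proj)  # logarithmic values, per claim 25
        # Detector coordinate of each grid point for this viewing angle.
        t = (xs - center) * np.cos(theta) + (ys - center) * np.sin(theta) + center
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += logp[idx]
    return recon / len(angles)
```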
A method for reducing registration errors in a projection image comprising: (a) acquiring at least one pair of projection images at opposing viewing angles, wherein the at least one pair of projection images are extended depth of field images; (b) applying a shift to a member of the at least one pair of projection images at opposing viewing angles, relative to the other member; (c) combining the shifted member and the other member to produce combined shifted projection images; (d) measuring the combined shifted projection images to determine if registration errors are a minimum for at least one of registration errors including centering errors and axial registration errors; and (e) repeating steps (b) through (d) until step (d) is true. 31. The method of claim 30 wherein the at least one pair of projection images are pseudo-projections. 32. The method of claim 30 wherein the registration errors of the combined shifted projection images are measured by an entropy measure. 33. The method of claim 30 wherein combining the two members of the at least one pair of projection images comprises applying filtered backprojection to the pair of projection images. 34. The method of claim 30 wherein combining comprises averaging the shifted member and the other member. 35. The method of claim 30 wherein combining comprises subtracting one of the members of the pair of projection images from the other. 36. 
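The shift-search of claims 30-34 can be sketched as a brute-force loop: shift one member of an opposing-view pair, combine by averaging (claim 34), and score the combination with an entropy measure (claim 32) until the error is minimized. The search range, bin count, and the assumption that the second image is already mirrored into the first image's frame are all illustrative.

```python
import numpy as np

def image_entropy(image, bins=32):
    """Shannon entropy of an image histogram (claim 32's error measure)."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def find_registration_shift(img_a, img_b, max_shift=5):
    """Exhaustively search for the axial shift of img_a that minimizes the
    entropy of the averaged pair (claims 30-34 sketch)."""
    best_shift, best_score = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        combined = (np.roll(img_a, s, axis=0) + img_b) / 2.0
        score = image_entropy(combined)
        if score < best_score:
            best_shift, best_score = s, score
    return best_shift
```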
A method for reducing reconstruction errors due to the illumination variation in a series of pseudo-projection images comprising: (a) lighting an object in an illumination region to produce a series of pseudo-projection images at varying positions along an optical axis; and (b) altering each pseudo-projection image's grey scale value proportionate to an illumination gradient representing a change of grey scale values for locations along the optical axis, wherein each of the series of pseudo-projection images has a corrected illumination level estimated from a priori knowledge and knowledge of the position of the optical elements during acquisition of the pseudo-projection image. 37. The method of claim 36 wherein the corrected illumination level is further estimated from information contained within the pseudo-projection image. 38. The method of claim 36 wherein grey scale values in each of the series of pseudo-projection images are modified by weighting them according to the formula Ω(T)=[1−gR sin(T)]−1 so that the intensity for pixel coordinates X and Y, J(X, Y; T), used for post-acquisition processing of the pseudo-projection is given by J(X,Y;T)=I(X,Y,T)Ω(T), where g is the illumination gradient; T is the rotation angle of the tube; I(X,Y,T) is the intensity captured by the optical system for the pseudo-projection acquired at angle T, for pixel coordinates X and Y in a blank region of the field of view; Z is the distance along the optical axis from the center of the scan range to the center of the tube; and R is maximum distance from the center of the tube to the center of the scan range over the course of the angular rotation of the tube. 39. The method of claim 36 in which the illuminated region includes illuminated fluorophores. 40. The method of claim 36 in which the illuminated region includes saturated fluorophores. 41. 
A method for generating 3D images from an optical tomography system, where the optical tomography system includes an optical axis comprising: dividing a full projection scan of an object into a plurality of sectional pseudo-projections acquired at a plurality of perspective viewing angles, where the sectional pseudo-projections are divided into sections located along the optical axis and wherein each of the plurality of sectional pseudo-projections includes at least two outer sections and a middle section, the method further comprising weighting the at least two outer sections at a lower value than the middle section; summing the plurality of sectional pseudo-projections at each of the plurality of perspective viewing angles to form a set of full pseudo-projections; and backprojecting the set of full pseudo-projections in a volume of reconstruction to form a 3D reconstruction of the object. 42. The method of claim 41 wherein the object comprises a biological cell. 43. A method for generating 3D images from an optical tomography system comprising: dividing a full projection scan of an object into a plurality of sectional pseudo-projections acquired at a plurality of perspective viewing angles, where the sectional pseudo-projections are divided into sections located along the optical axis, wherein the plurality of sectional pseudo-projections includes at least two outer sections and a middle section, the method further comprising weighting the at least two outer sections at a lower value than the middle section; and backprojecting the plurality of sectional pseudo-projections in a volume of reconstruction for forming a 3D reconstruction of the object. 44. The method of claim 43 wherein the sectional pseudo-projections are weighted according to their position along the optical axis in the reconstructed tomographic image. 45. The method of claim 43 wherein backprojecting comprises applying filtered backprojection. 46. 
The method of claim 43 wherein the step of applying filtered backprojection produces at least one backprojected value that varies with its location along the optical axis. 47. The method of claim 43 wherein the object comprises a biological cell. 48. A method for scanning an object moving in an axial direction transverse to an objective scanning direction comprising: scanning an objective lens through the object in the objective scan direction; measuring in-plane spatial frequency power spectra as a function of the location of the focal plane relative to the center of the object, wherein measuring in-plane spatial frequency power spectra comprises: (a) imaging the object by scanning a focal plane to find a peak focus score; (b) acquiring an image stack through the object relative to a center, where the center is defined as a peak focus score; (c) calculating an in-plane spatial frequency power spectrum of each image in the stack; (d) repeating steps (a) through (d) for a plurality of objects; (e) calculating in-plane spatial frequency power spectra for the plurality of objects to determine an average in-plane spatial frequency power spectrum for each location relative to the center; and sectioning the scanned pseudo-projection. 49. The method of claim 48 wherein the object comprises a biological cell. 50. The method of claim 48 further comprising: building a set of sectioned pseudo-projections from a plurality of sectioned scanned pseudo-projections acquired at different perspective views; filtering the set of sectioned pseudo-projections, using a plurality of filters computed by comparing off-center power spectra relative to the center power spectrum, to generate a plurality of filtered pseudo-projection data sets; passing each filtered pseudo-projection data set to a plurality of volumes of reconstruction; and summing the plurality of volumes of reconstruction to form a summed volume of reconstruction. 51. 
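The filter-reconstruct-sum pipeline of claim 50 can be sketched generically. `filters` is assumed to hold one frequency-domain filter per section position (computed elsewhere by comparing off-center power spectra to the center power spectrum), and `backproject` stands in for any reconstruction routine; both are assumptions about interfaces the claim does not specify.

```python
import numpy as np

def filter_and_sum_volumes(sectioned_pps, filters, backproject):
    """Apply a per-section frequency-domain filter to each set of sectioned
    pseudo-projections, reconstruct each filtered set into its own volume,
    and sum the volumes (claim 50 sketch)."""
    volumes = []
    for section_idx, projections in enumerate(sectioned_pps):
        H = filters[section_idx]
        # Filter each projection in the frequency domain.
        filtered = [np.real(np.fft.ifft2(np.fft.fft2(p) * H))
                    for p in projections]
        volumes.append(backproject(filtered))
    # Sum the per-section volumes into one summed volume of reconstruction.
    return np.sum(volumes, axis=0)
```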
A method for generating a statistical filter for reconstruction of images for 3D reconstructions, the method comprising: creating a set of virtual phantoms; computing a phantom FFT of each phantom in the set of virtual phantoms to produce a set of phantom FFTs; creating a plurality of simulated pseudo-projections of the set of virtual phantoms from various perspectives; backprojecting the plurality of simulated pseudo-projections to form a 3D reconstruction; computing an FFT of the 3D reconstruction; and creating a filter using a ratio of a subset of the set of phantom FFTs to the FFT of the 3D reconstruction expressed by the set of backprojections. 52. The method of claim 51 wherein the set of virtual phantoms comprises a set of nucleoli phantoms having differing sizes and shapes, with differing numbers and placement of nucleoli. 53. The method of claim 52 wherein the created filter comprises at least one statistical metric. 54. A method of 3D reconstruction comprising: computing backprojection values within a sub-volume of the volume of reconstruction where 3D spatial coordinates and dimensions of the sub-volume are computed by determination of bounding box parameters for two orthogonal views of the reconstruction volume, wherein determination of bounding box parameters comprises: analyzing a first pseudo-projection (PP) at 0° by finding the edges of the region of interest to determine a bounding box from a first perspective; finding a set of coordinates X0, Z0 in PP space; finding a set of coordinates Xr, Zr in reconstruction space; analyzing PP@90° by finding the edges of the region of interest to determine a bounding box from a second perspective; finding a set of coordinates X90, Z90 in PP space; finding a set of coordinates Yr in reconstruction space; and reconstructing volume using a set of coordinates [Xr, Yr, Zr]. 55. 
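The statistical filter of claim 51, a ratio of phantom FFTs to the FFT of the backprojected reconstruction, can be sketched as below. Averaging the per-phantom magnitude ratios into one filter, and the `eps` guard against division by zero, are assumptions; the claim itself only requires a ratio of a subset of the phantom FFTs to the reconstruction FFT.

```python
import numpy as np

def statistical_filter(phantoms, reconstructions, eps=1e-8):
    """Build a correction filter as the averaged ratio of each virtual
    phantom's FFT magnitude to the FFT magnitude of its simulated,
    backprojected reconstruction (claim 51 sketch)."""
    ratios = []
    for phantom, recon in zip(phantoms, reconstructions):
        F_phantom = np.fft.fftn(phantom)
        F_recon = np.fft.fftn(recon)
        ratios.append(np.abs(F_phantom) / (np.abs(F_recon) + eps))
    return np.mean(ratios, axis=0)
```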
The method of claim 54 wherein the bounding box is bounded by parameters Xr (min), Xr (max), Yr (min), Yr (max), Zr (min), and Zr (max), where the number of slices in the reconstruction space equals Zr (max)−Zr (min)+1. 56. The method of claim 54 in which the sub-volume comprises a solid 3D rectangle. 57. The method of claim 54 in which the sub-volume comprises a partially hollow 3D rectangle.
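The bounding-box determination of claims 54-55 can be sketched from binary region-of-interest masks of the two orthogonal views. The mapping of image axes to X, Y, Z, and the use of boolean masks rather than edge detection, are assumed geometry, not the patented procedure; the slice count follows claim 55's Zr(max) − Zr(min) + 1.

```python
import numpy as np

def bounding_box(mask):
    """Edges of the region of interest in a 2D pseudo-projection mask,
    returned as ((col min, col max), (row min, row max))."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return (c0, c1), (r0, r1)

def sub_volume_coords(pp_0deg_mask, pp_90deg_mask):
    """Sub-volume coordinates from two orthogonal views (claims 54-55
    sketch): the 0-degree view gives X and Z extents, the 90-degree view
    gives Y under the assumed geometry."""
    (x0, x1), (z0, z1) = bounding_box(pp_0deg_mask)
    (y0, y1), _ = bounding_box(pp_90deg_mask)
    n_slices = z1 - z0 + 1  # claim 55: Zr(max) - Zr(min) + 1
    return (x0, x1), (y0, y1), (z0, z1), n_slices
```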
Patents Cited by This Patent (90)
Sauer Frank ; Samarasekera Supun, Adaptive detector masking for speed-up of cone beam reconstruction.
MacAulay,Calum E.; Doudkine,Alexei K.; Garner,David M.; Flezar,Margareta; Zganec,Mario; Lavrencak,Jaka; Palcic,Branko M.; Ferguson,Gary W., Computerized methods and systems related to the detection of malignancy-associated changes (MAC) to detect cancer.
Camahort, Emilio; Holzbach, Mark E.; Sitton, Robert L., Efficient block transform including pre-processing and post processing for autostereoscopic displays.
Alfano Robert R. ; Demos Stavros G. ; Wang Wubao, Imaging of objects in turbid media based upon the preservation of polarized luminescence emitted from contrast agents.
Cheng Ping-Chin ; Snyder Donald L. ; O'Sullivan Joseph A. ; Wang Ge ; Vannier Michael W., Iterative process for reconstructing cone-beam tomographic images.
Tam Kwok C. (Schenectady NY), Method and apparatus for acquiring and processing only a necessary volume of radon data consistent with the overall shap.
Hill Henry A. ; Oglesby Paul H. ; Ziebell Douglas A., Method and apparatus for discriminating in-focus images from out-of-focus light signals from background and foreground l.
Swanson, Eric A.; Huang, David; Fujimoto, James G.; Puliafito, Carmen A.; Lin, Charles P.; Schuman, Joel S., Method and apparatus for optical imaging with means for controlling the longitudinal range of the sample.
Tam Kwok C. (Schenectady NY), Method and apparatus for pre-processing cone beam projection data for exact three dimensional computer tomographic image.
Chen Shiuh-Yung James ; Carroll John D. ; Metz Charles E. ; Hoffmann Kenneth R., Method and apparatus for three-dimensional reconstruction of coronary vessels from angiographic images.
Deckman Harry W. (Clinton NJ) Flannery Brian P. (Clinton NJ), Method and apparatus for utilizing an electro-optic detector in a microtomography system.
Sweedler Jonathan V. (Redwood City CA) Shear Jason B. (Menlo Park CA) Zare Richard N. (Stanford CA), Method and device employing time-delayed integration for detecting sample components after separation.
Nelson, Alan C., Optical projection imaging system and method for automatically detecting cells having nuclear and cytoplasmic densitometric features associated with disease.
Nelson, Alan C.; Webster, Robert W.; Chu, Chee-Wui, Optical projection imaging system and method for automatically detecting cells with molecular marker compartmentalization associated with malignancy and disease.
Schrader Bernhard (Soniusweg 20 4300 Essen DEX 15), Sample arrangement for spectrometry, method for the measurement of luminescence and scattering and application of the sa.
Palcic Branko,CAX ; MacAulay Calum Eric,CAX ; Harrison S. Alan,CAX ; Lam Stephen,CAX ; Payne Peter William,CAX ; Garner David Michael,CAX ; Doudkine Alexei,CAX, System and method for automatically detecting malignant cells and cells having malignancy-associated changes.
Hewitt,Charles W.; Doolin,Edward J.; Kesterson,John; Lauren,Peter D.; Greenberg,Gary, Tomographic microscope for high resolution imaging and method of analyzing specimens.
Alfano Robert R. ; Liu Feng ; Wang Quan-Zhen ; Ho Ping P. ; Wang Leming M. ; Liang Xiangchun, Ultrafast optical imaging of objects in or behind scattering media.
Kardos Keith W. ; Niedbala R. Sam ; Burton Jarrett Lee ; Cooper David E. ; Zarling David A. ; Rossi Michel J.,CHX ; Peppers Norman A. ; Kane James ; Faris Gregory W. ; Dyer Mark J. ; Ng Steve Y. ; Sc, Up-converting reporters for biological and other assays.
Kingston, Andrew Maurice; Sheppard, Adrian Paul; Varslot, Trond Karsten; Latham, Shane Jamie; Sakellariou, Arthur, Computed tomography imaging process and system.