Methods, a camera, and a computer-readable medium for registering, on a camera display, infrared and visible-light images of a target scene taken from different points of view, which causes a parallax error.
Representative Claims
1. A method of displaying visible-light (VL) images and infrared (IR) images, the method comprising: providing a camera having a VL lens with a VL sensor, an IR lens with an IR sensor, and a display, the VL lens and the IR lens being located on the camera such that an optical axis of the VL lens is offset from, and generally parallel to, an optical axis of the IR lens so the VL sensor and the IR sensor sense a VL image and an IR image, respectively, of a target scene from different points of view causing a parallax error, the IR lens being manually rotatable; displaying at least a portion of the VL image and at least a portion of the IR image on the display, the relative positions on the display of the at least the portion of the VL image and of the at least the portion of the IR image being controllable via rotation of the IR lens; and manually rotating the IR lens until the at least the portion of the VL image and the at least the portion of the IR image are positioned in register on the display to correct the parallax error, the registration via IR lens rotation focusing the IR lens on the target scene.

2. The method of claim 1, wherein the portion of the IR image is displayed without displaying a corresponding portion of the VL image.

3. The method of claim 1, wherein the portion of the VL image is displayed without displaying a corresponding portion of the IR image.

4. The method of claim 1, wherein the portion of the IR image is blended with a corresponding portion of the VL image.

5. The method of claim 1, wherein the portion of the IR image is surrounded by the portion of the VL image to effect a picture-in-picture view of the target scene.

6. The method of claim 5, wherein the portion of the IR image is blended with a corresponding portion of the VL image.

7. The method of claim 1, wherein the VL sensor includes an array of VL sensor elements and the IR sensor includes an array of IR sensor elements, the IR image being formed by substantially fewer sensor elements than being used to form the VL image.

8. The method of claim 1, wherein the VL sensor includes an array of VL sensor elements, and wherein the at least the portion of the VL image is formed from fewer than all of the VL sensor elements.

9. The method of claim 1, wherein the IR sensor includes an array of IR sensor elements, and wherein the at least the portion of the IR image is formed from all of the IR sensor elements.

10. The method of claim 9, wherein the at least the portion of the IR image fills only part of the display, the part of the display being generally centrally located on the display.

11. The method of claim 10, wherein the at least the portion of the VL image fills the remainder of the display not filled by the at least the portion of the IR image, whereby the target scene is displayed in a picture-in-picture view.

12. The method of claim 11, wherein the at least the portion of the VL image fills the entire display, the portion of the IR image being blended with a corresponding portion of the VL image, whereby the target scene is displayed in a picture-in-picture view.

13. The method of claim 1, wherein the at least the portion of the IR image fills the entire display.

14. The method of claim 1, wherein focusing the IR lens on the target scene comprises moving the IR lens with respect to the IR sensor.

15. The method of claim 1, further including sensing the IR lens focus position, the IR lens focus position being used to bring the VL image into register with the IR image.

16. The method of claim 15, further including determining the distance to the target scene based on the IR lens focus position.

17. The method of claim 1, further including reading a lens position sensor to sense the IR lens focus position.

18. A method of displaying visible-light (VL) images and infrared (IR) images, the method comprising: providing a camera having a VL lens with a VL sensor, an IR lens with an IR sensor, and a display, the VL lens and the IR lens being located on the camera such that an optical axis of the VL lens is offset from, and generally parallel to, an optical axis of the IR lens so the VL sensor and the IR sensor sense a VL image and an IR image, respectively, of a target scene from different points of view causing a parallax error; displaying at least a portion of the VL image with at least a portion of the IR image on the display, the at least the portion of the IR image being displayed without the corresponding portion of the VL image; and registering the VL image with the IR image on the display by displacing the VL image and the IR image relative to each other until registered to correct the parallax error via a manual adjustment mechanism, the registration via the manual adjustment mechanism focusing the IR lens on the target scene.

19. The method of claim 18, wherein the IR image has a smaller field of view than the VL image and the at least the portion of the IR image is surrounded by the at least the portion of the VL image to effect a picture-in-picture view of the target scene.

20. The method of claim 18, wherein the VL sensor includes an array of VL sensor elements and the IR sensor includes an array of IR sensor elements, the IR image being formed by substantially fewer sensor elements than being used to form the VL image.

21. The method of claim 18, wherein the VL sensor includes an array of VL sensor elements, and wherein the at least the portion of the VL image is formed from fewer than all of the VL sensor elements, the IR sensor including an array of IR sensor elements, and the at least the portion of the IR image is formed from all of the IR sensor elements.

22. The method of claim 18, wherein the at least the portion of the IR image fills only part of the display, the part of the display being generally centrally located on the display, the at least the portion of the VL image fills the remainder of the display not filled by the at least the portion of the IR image, whereby the target scene is displayed in a picture-in-picture view.

23. A camera producing visible light (VL) and infrared (IR) images, the camera comprising: a VL lens; a VL sensor associated with the VL lens and having an array of VL sensor elements that produce a VL image of a target scene; an IR lens; an IR sensor associated with the IR lens and having an array of IR sensor elements that produce an IR image of the target scene, the IR image being formed by substantially fewer sensor elements than being used to form the VL image, the VL lens with the VL sensor having a larger field of view than the IR lens with the IR sensor, the IR lens being manually rotatable, rotation of the IR lens causing an axial shift of the IR lens relative to the IR sensor to focus the IR lens; the VL lens and the IR lens being located on the camera such that an optical axis of the VL lens is offset from, and generally parallel to, an optical axis of the IR lens so the VL sensor array of pixels and the IR sensor array of pixels sense the target scene from different points of view causing a parallax error, the focusing of the IR lens registering the VL image with the IR image to correct the parallax error; and a display for concurrently displaying at least a portion of the VL image registered with at least a portion of the IR image, the displayed portion of the IR image being surrounded by the VL image to effect a picture-in-picture view of the target scene.

24. The camera of claim 23, wherein the portion of the IR image is displayed without displaying a corresponding portion of the VL image.

25. The camera of claim 23, wherein the portion of the VL image is displayed without displaying a corresponding portion of the IR image.

26. The camera of claim 23, wherein the portion of the IR image is blended with a corresponding portion of the VL image.

27. The camera of claim 23, wherein the at least the portion of the IR image fills only part of the display, the part of the display being generally centrally located on the display, the at least the portion of the VL image fills the remainder of the display not filled by the at least the portion of the IR image.

28. The camera of claim 23, wherein the at least the portion of the VL image is formed from fewer than all of the VL sensor elements, and the at least the portion of the IR image is formed from all of the IR sensor elements.

29. The camera of claim 23, further including a sensor that determines a value indicative of the axial distance between the IR lens and the IR sensor, the value being used to register the VL image with the IR image to correct the parallax error.

30. The camera of claim 29, wherein the value indicative of the axial distance provides a value indicative of the distance to the target scene.

31. A computer-readable medium programmed with instructions for performing a method of registering multiple images, the medium comprising instructions for causing a programmable processor to: receive a first image of a target scene, the first image being produced by a VL lens and a VL sensor with an array of VL sensor elements; receive a second image of the target scene, the second image having a parallax error with the first image and being produced by an IR lens and an IR sensor with an array of IR sensor elements, the IR sensor having substantially fewer sensor elements than the VL sensor; display at least portions of the first and second images on a display; and move the first and second images relative to each other on the display in response to a user input, the user input changing the focus of the IR lens as the first and second images are moved, when the first and second images are moved into register to correct the parallax error the IR lens being focused on the target scene.

32. The computer-readable medium of claim 31, further comprising determine a value indicative of the distance between the IR lens and a target within the target scene.

33. The computer-readable medium of claim 32, wherein the value indicative of the distance between the IR lens and the target is the IR lens focus position or the distance between the IR lens and the target.

34. The computer-readable medium of claim 33, further comprising display the distance between the IR lens and the target.

35. The computer-readable medium of claim 32, further comprising receive a sensed focus position of the IR lens.

36. The computer-readable medium of claim 35, wherein the value indicative of the distance between the IR lens and the target is based on the sensed IR lens focus position.

37. The computer-readable medium of claim 35, wherein the sensed IR lens focus position is received from a lens position sensor.

38. The computer-readable medium of claim 31, wherein the portion of the second image is displayed without displaying a corresponding portion of the first image.

39. The computer-readable medium of claim 31, wherein the portion of the first image is displayed without displaying a corresponding portion of the second image.

40. The computer-readable medium of claim 31, wherein the portion of the second image is blended with a corresponding portion of the first image.

41. The computer-readable medium of claim 31, wherein the at least the portion of the VL image is formed from fewer than all of the VL sensor elements, and the at least the portion of the IR image is formed from all of the IR sensor elements.

42. The computer-readable medium of claim 31, wherein the VL lens with the VL sensor has a larger field of view than the IR lens with the IR sensor, and the displayed portion of the second image is surrounded by the first image to effect a picture-in-picture view of the target scene.

43. The computer-readable medium of claim 31, wherein the at least the portion of the second image fills only part of the display, the part of the display being generally centrally located on the display, the at least the portion of the first image fills the remainder of the display not filled by the at least the portion of the second image, whereby the target scene is displayed in a picture-in-picture view.
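Claims 15-16, 29-30, and 31-37 recite using the sensed IR lens focus position both to estimate the distance to the target scene and to bring the VL and IR images into register; claims 4-6 and 11-12 recite blending the IR portion into the VL image as a picture-in-picture view. The sketch below is a minimal illustration of how those pieces can fit together, assuming a thin-lens model and a parallel-axis geometry. The baseline, focal length, and pixel-pitch values are hypothetical placeholders, not figures from the patent:

```python
import numpy as np

# Hypothetical camera geometry (illustrative values, not from the patent).
BASELINE_M = 0.02         # offset between VL and IR optical axes, meters
IR_FOCAL_M = 0.01         # IR lens focal length, meters
IR_PIXEL_PITCH_M = 25e-6  # IR sensor element pitch, meters

def distance_from_focus(lens_to_sensor_m):
    """Estimate target distance from the sensed IR lens focus position
    (axial lens-to-sensor distance) via the thin-lens equation:
    1/f = 1/d_obj + 1/d_img  =>  d_obj = 1 / (1/f - 1/d_img)."""
    return 1.0 / (1.0 / IR_FOCAL_M - 1.0 / lens_to_sensor_m)

def parallax_shift_px(target_distance_m):
    """Parallax between the two views, in IR pixels. For offset, generally
    parallel axes the image shift is approximately baseline * f / distance."""
    return BASELINE_M * IR_FOCAL_M / (target_distance_m * IR_PIXEL_PITCH_M)

def blend_picture_in_picture(vl, ir, shift_px, alpha=0.5):
    """Center the (smaller) IR image on the VL image, displaced horizontally
    by the parallax shift, and alpha-blend it into the VL pixels."""
    out = vl.astype(float).copy()
    h, w = ir.shape[:2]
    top = (vl.shape[0] - h) // 2
    left = (vl.shape[1] - w) // 2 + int(round(shift_px))
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (1 - alpha) * region + alpha * ir
    return out

# Example: a focus position slightly beyond the focal length implies a
# target roughly 1 m away, giving a parallax shift of a few IR pixels.
d = distance_from_focus(0.0101)        # ~1.01 m
shift = parallax_shift_px(d)           # ~7.9 px
view = blend_picture_in_picture(np.zeros((100, 120)),
                                np.full((20, 20), 100.0), shift)
```

In the camera of claim 23 the same coupling is mechanical rather than computed: rotating the IR lens simultaneously shifts it axially (focusing) and moves the displayed images into register, so focus and parallax correction are achieved by one adjustment.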
Lillquist, Robert D.; Pimbley, Joseph M.; Vogelsong, Thomas L. (Schenectady, NY), Composite visible/thermal-infrared imaging system.
Faris, Sadeg M., Method and apparatus for producing and displaying spectrally-multiplexed images of three-dimensional imagery for use in stereoscopic viewing thereof.
Burt, Peter J.; van der Wal, Gooitzen S.; Kolczynski, Raymond J.; Hingorani, Rajesh (Mercer County, NJ), Method for fusing images and apparatus therefor.
Murakami, Yoshishige; Hirota, Kanji; Nakamura, Masaaki (Kawasaki, JP), Video signal mixing device for infrared/visible integrated imaging.
Tertitski, Leonid M.; Chu, Schubert S.; Assaf, Shay; Vellore, Kim R.; Cong, Zhepeng, System and method to detect substrate and/or substrate support misalignment using imaging.