[US Patent]
Depth perceptive trinocular camera system
IPC Classification
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th edition)
H04N-013/02
G06K-009/62
G06K-009/46
Application Number
US-0486005
(2017-04-12)
Registration Number
US-9807371
(2017-10-31)
Inventors / Address
Salvagnini, Pietro
Stoppa, Michele
Rafii, Abbas
Applicant / Address
Aquifi, Inc.
Agent / Address
Lewis Roca Rothgerber Christie LLP
Citation Information
Cited by: 0
Cited patents: 2
Abstract
A method for detecting decalibration of a depth camera system including a first, second, and third cameras having overlapping fields of view in a direction includes: detecting a feature in a first image captured by the first camera; detecting the feature in a second image captured by the second camera; detecting the feature in a third image captured by the third camera, the third camera being non-collinear with the first and second cameras; identifying a first conjugate epipolar line in the second image in accordance with a detected location of the feature in the first image and calibration parameters; identifying a second conjugate epipolar line in the second image in accordance with a detected location of the feature in the third image and the calibration parameters; and calculating a difference between a detected location of the feature in the second image and the first and second conjugate epipolar lines.
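The decalibration test described in the abstract reduces to point-to-epipolar-line distances. A minimal sketch of that check, assuming fundamental matrices F12 (first camera to second) and F32 (third camera to second) have been derived offline from the calibration parameters; the function names and matrix inputs below are illustrative, not from the patent:

```python
import math

def epipolar_line(F, x):
    # l = F @ x: conjugate epipolar line (a, b, c) in the target image
    # for a homogeneous point x = (u, v, 1) in the source image.
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

def point_line_distance(line, x):
    # Perpendicular distance from pixel x to the line a*u + b*v + c = 0.
    a, b, c = line
    return abs(a * x[0] + b * x[1] + c) / math.hypot(a, b)

def decalibration_error(F12, F32, x1, x2, x3):
    # Distance of the feature detected in the second image from the two
    # conjugate epipolar lines induced by its detections in the first and
    # third images; a well-calibrated rig keeps this near zero.
    d1 = point_line_distance(epipolar_line(F12, x1), x2)
    d2 = point_line_distance(epipolar_line(F32, x3), x2)
    return d1 + d2
```

For a rectified horizontal stereo pair, F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]] sends a point to the horizontal line through its row, so a feature detected at a shifted row in the second image yields a nonzero error, which can then be compared against a threshold.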
Representative Claims
1. A depth perceptive camera system comprising: a first camera configured to capture infrared images; a second camera; a third camera arranged non-collinearly with the first and second cameras, the first, second, and third cameras having substantially overlapping fields of view in a direction, and at least one of the second and third cameras being configured to capture visible light images, the third camera being equidistant from the first and second cameras, the third camera having a higher resolution than a resolution of the first camera by a resolution factor r; and an image signal processor configured to receive images from the first camera, the second camera, and the third camera, the image signal processor being configured to detect a decalibration of the first, second, and third cameras, wherein a first baseline extends between the second camera and the first camera, wherein a second baseline extends between the second camera and the third camera, and wherein an angle α formed between the first baseline and the second baseline is approximately α_optimal, where α_optimal = argmax_α { (2×sin(α) + r×sin(2α))/3 - tan(α)/2 }.
2. The depth perceptive camera system of claim 1, wherein the resolution factor r is 1.0, and wherein the angle α is in the range of 26.0 degrees to 44.3 degrees.
3. The depth perceptive camera system of claim 2, wherein the angle α is in the range of 28.9 degrees to 41.9 degrees.
4. The depth camera system of claim 3, wherein the angle α is about 35.6 degrees.
5. The depth perceptive camera system of claim 1, wherein the resolution factor r is 2.0, and wherein the angle α is in the range of 21.4 degrees to 53.4 degrees.
6. The depth perceptive camera system of claim 1, wherein the first camera and the second camera are configured to capture invisible light, and wherein the third camera is configured to capture visible light.
7. The depth perceptive camera system of claim 1, further comprising a projection device located between the first camera and the second camera, the projection device being configured to emit a textured pattern of invisible light in the direction of the overlapping fields of view.
8. The depth perceptive camera system of claim 1, wherein the image signal processor is configured to detect the decalibration of the first, second, and third cameras by: detecting a feature in a first image captured by the first camera; detecting the feature in a second image captured by the second camera; detecting the feature in a third image captured by the third camera, the third camera being non-collinear with the first and second cameras; identifying a first conjugate epipolar line in the second image in accordance with a detected location of the feature in the first image and a plurality of calibration parameters; identifying a second conjugate epipolar line in the second image in accordance with a detected location of the feature in the third image and the plurality of calibration parameters; calculating a difference between a detected location of the feature in the second image and the first and second conjugate epipolar lines; and outputting an indication that the depth camera system is decalibrated in response to the difference exceeding a threshold.
9. The depth perceptive camera system of claim 8, wherein the difference comprises a first difference and a second difference, and wherein the image signal processor is configured to calculate the difference by: calculating a first difference between the detected location of the feature in the second image and the first conjugate epipolar line; and calculating a second difference between the detected location of the feature in the second image and the second conjugate epipolar line.
10. The depth perceptive camera system of claim 8, wherein the image signal processor is further configured to: calculate a location of an intersection of the first conjugate epipolar line and the second conjugate epipolar line; and calculate the difference by calculating a distance between the detected location of the feature in the second image and the location of the intersection.
11. A mobile device comprising: a display; a first camera configured to capture infrared images, the first camera being adjacent a first edge of the display; a second camera adjacent the first edge of the display; a third camera arranged non-collinearly with the first and second cameras and adjacent a second edge of the display, the first, second, and third cameras having substantially overlapping fields of view, and at least one of the second and third cameras being configured to capture visible light images, the third camera being equidistant from the first and second cameras, the third camera having a higher resolution than a resolution of the first camera by a resolution factor r; and an image signal processor configured to control the display and to receive images from the first camera, the second camera, and the third camera, the image signal processor being configured to detect a decalibration of the first, second, and third cameras, wherein a first baseline extends between the second camera and the first camera, wherein a second baseline extends between the second camera and the third camera, and wherein an angle α formed between the first baseline and the second baseline is approximately α_optimal, where α_optimal = argmax_α { (2×sin(α) + r×sin(2α))/3 - tan(α)/2 }.
12. The mobile device of claim 11, wherein the resolution factor r is 1.0, and wherein the angle α is in the range of 26.0 degrees to 44.3 degrees.
13. The mobile device of claim 12, wherein the angle α is in the range of 28.9 degrees to 41.9 degrees.
14. The mobile device of claim 11, wherein the resolution factor r is 2.0, and wherein the angle α is in the range of 21.4 degrees to 53.4 degrees.
15. The mobile device of claim 11, wherein the image signal processor is configured to detect the decalibration of the first, second, and third cameras by: detecting a feature in a first image captured by the first camera; detecting the feature in a second image captured by the second camera; detecting the feature in a third image captured by the third camera, the third camera being non-collinear with the first and second cameras; identifying a first conjugate epipolar line in the second image in accordance with a detected location of the feature in the first image and a plurality of calibration parameters; identifying a second conjugate epipolar line in the second image in accordance with a detected location of the feature in the third image and the plurality of calibration parameters; calculating a difference between a detected location of the feature in the second image and the first and second conjugate epipolar lines; and outputting an indication that the first, second, and third cameras are decalibrated in response to the difference exceeding a threshold.
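The optimal-angle expression in claims 1 and 11 can be checked numerically. Reading the formula as α_optimal = argmax over α of (2·sin α + r·sin 2α)/3 - tan(α)/2 (a reconstruction; the extracted text runs the denominators together), a simple grid search with illustrative helper names recovers the "about 35.6 degrees" of claim 4 for r = 1.0:

```python
import math

def merit(alpha_rad, r):
    # Objective from claim 1 as reconstructed here (an assumption):
    # (2*sin(a) + r*sin(2a))/3 - tan(a)/2
    return (2 * math.sin(alpha_rad) + r * math.sin(2 * alpha_rad)) / 3 \
        - math.tan(alpha_rad) / 2

def alpha_optimal_deg(r):
    # Grid search over (0, 90) degrees at 0.001-degree resolution.
    best_i = max(range(1, 90000), key=lambda i: merit(math.radians(i / 1000), r))
    return best_i / 1000

print(alpha_optimal_deg(1.0))  # close to the ~35.6 degrees of claim 4
```

For r = 2.0 the same search lands inside the 21.4 to 53.4 degree range of claims 5 and 14, which lends some support to the reconstruction.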
Patents cited by this patent (2)
Yamanaka Kazuyuki (Fujisawa JPX), Method of measuring three dimensional shape.