Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features
Country/Type: United States (US) patent, granted
IPC (7th edition): G01B-011/03; G01B-011/00; H04N-013/02; G06T-007/13; G01B-005/008; G05B-015/02; G01B-021/04; G01B-005/012; G01B-011/25; G05B-019/401; H04N-005/225
Application number: US-0481673 (filed 2017-04-07)
Registration number: US-9879976 (granted 2018-01-30)
Inventors: Bridges, Robert E.; Yadav, Joydeep
Applicant: FARO TECHNOLOGIES, INC.
Attorney/Agent: Cantor Colburn LLP
Citation information: cited by 0 patents; cites 30 patents
Abstract
A portable articulated arm coordinate measuring machine (AACMM) having an integrated camera captures 2D images of an object at three or more different poses. A processor determines 3D coordinates of a smoothly continuous edge point of the object based at least in part on the captured 2D images and pose data provided by the AACMM.
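The abstract describes recovering 3D coordinates of an edge point from 2D images captured at three or more known poses. As an illustrative sketch only (not taken from the patent), the underlying geometry can be expressed as standard linear (DLT) triangulation, assuming each AACMM-reported pose yields a camera projection matrix; all names and numeric values below are hypothetical:

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 projection matrix P = K [R | t] for a camera with intrinsics K."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(points_2d, proj_mats):
    """Least-squares (DLT) triangulation of one point seen in two or more images."""
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        # Each observation contributes two linear constraints on the homogeneous 3D point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)   # solution is the right singular vector of smallest singular value
    X = Vt[-1]
    return X[:3] / X[3]

# Demo: three synthetic camera poses observing a known 3D point.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
X_true = np.array([0.1, -0.2, 2.0])
proj_mats, pts = [], []
for tx in (0.0, 0.3, -0.3):          # three hypothetical arm-end positions
    R, t = np.eye(3), np.array([tx, 0.0, 0.0])
    P = projection_matrix(K, R, t)
    x = P @ np.append(X_true, 1.0)   # project the point into this view
    pts.append((x[0] / x[2], x[1] / x[2]))
    proj_mats.append(P)

X_est = triangulate(pts, proj_mats)
```

With noise-free synthetic observations, `X_est` recovers `X_true` to numerical precision; with real pixel data the same least-squares form averages out measurement noise across the three poses.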
Representative claims
1. A method of determining three-dimensional (3D) coordinates of an edge point of an object, comprising: providing an articulated arm coordinate measuring machine (AACMM) that includes a base, a manually positionable arm portion having an opposed first end and second end, the arm portion being rotationally coupled to the base, the arm portion including a plurality of connected arm segments, each arm segment including at least one position transducer for producing a position signal, a first camera assembly coupled to the first end, an electronic circuit that receives the position signal from each of the at least one position transducer and provides data corresponding to a pose of the first camera assembly; providing a processor; in a first instance: capturing with the first camera assembly in a first pose a first image of the object; obtaining from the electronic circuit first data corresponding to the first pose; in a second instance: capturing with the first camera assembly in a second pose a second image of the object; obtaining from the electronic circuit second data corresponding to the second pose; in a third instance: capturing with the first camera assembly in a third pose a third image of the object; obtaining from the electronic circuit third data corresponding to the third pose; determining with the processor the 3D coordinates of a first edge point, the first edge point being smoothly continuous within an interval of edge points, the 3D coordinates of the first edge point determined based at least in part on the first data, the second data, the third data, the first image, the second image, and the third image; and storing the determined 3D coordinates of the first edge point.

2. The method of claim 1 wherein the first edge point is identified by the processor as an edge point based at least in part on pixel light levels in the first image.

3. The method of claim 2 wherein the first edge point is determined by an identification method selected from the group consisting of: subpixel-edge-location-based-on-partial-area-effect technique, moment-based technique, least-squared-error-based technique, and interpolation technique.

4. The method of claim 2 wherein the 3D coordinates of the first edge point are further determined based on a smoothing of a first edge portion that contains the first edge point, the smoothing filter filtering the first edge portion to more closely match a shape selected from the group consisting of: a straight line, a circular curve, and a polynomial curve.

5. The method of claim 2 further including determining a position of the first edge point in each of the first image, the second image, and the third image based at least in part on epipolar geometry relating the first pose, the second pose, and the third pose of the first camera assembly.

6. The method of claim 5 further including checking whether the first edge point in the first image, the first edge point in the second image, and the first edge point in the third image, as determined based at least in part on epipolar geometry, are self-consistently determined to be edge points when evaluated based on pixel light levels in the first image, the second image, and the third image, respectively.

7. The method of claim 6 wherein, in response to a determined lack of self-consistency in the 3D coordinates of the first edge point, an action is selected from the group consisting of: eliminating the first edge point from a list of determined edge points, filtering or smoothing the first edge to obtain consistency in the first edge point, and performing an optimization of the pose of the object in 3D space to obtain self-consistency in the first edge point.

8.
The method of claim 2 further including adjusting with the processor the pose of a first object in 3D space to match edges in the first image, the second image, and the third image, wherein at least one of the matched edges includes the first edge point.

9. A method of determining three-dimensional (3D) coordinates of an edge point of an object, comprising: providing an articulated arm coordinate measuring machine (AACMM) that includes a base, a manually positionable arm portion having an opposed first end and second end, the arm portion being rotationally coupled to the base, the arm portion including a plurality of connected arm segments, each arm segment including at least one position transducer for producing a position signal, a first camera assembly and a second camera assembly coupled to the first end, an electronic circuit that receives the position signal from each of the at least one position transducer and provides data corresponding to a pose of the first camera assembly and the second camera assembly; providing a processor; in a first instance: capturing with the first camera assembly in a first pose a first image of the object; obtaining from the electronic circuit first data corresponding to the first pose; capturing with the second camera assembly in a second pose a second image of the object; obtaining from the electronic circuit second data corresponding to the second pose; in a second instance: capturing with the first camera assembly in a third pose a third image of the object; obtaining from the electronic circuit third data corresponding to the third pose; determining with the processor the 3D coordinates of a first edge point, the first edge point being smoothly continuous within an interval of edge points, the 3D coordinates of the first edge point determined based at least in part on the first data, the second data, the third data, the first image, the second image, and the third image; and storing the determined 3D coordinates of the first edge point.
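Claims 3 and 11 list an "interpolation technique" among the subpixel edge-identification methods. A minimal, hypothetical sketch of one such technique, parabolic interpolation of the gradient peak along an intensity scanline, is shown below; it is an illustration of the general approach, not the patent's specific method:

```python
import numpy as np

def subpixel_edge(scanline):
    """Return the subpixel edge position along a 1D intensity profile.

    The edge is located at the peak of the gradient magnitude; a parabola
    fitted through the peak sample and its two neighbours refines the
    position to a fraction of a pixel.
    """
    g = np.abs(np.diff(scanline.astype(float)))   # gradient magnitude between pixels
    i = int(np.argmax(g))
    if i == 0 or i == len(g) - 1:
        return i + 0.5                            # peak at border: no refinement possible
    # Vertex of the parabola through (i-1, g[i-1]), (i, g[i]), (i+1, g[i+1]).
    denom = g[i - 1] - 2 * g[i] + g[i + 1]
    delta = 0.0 if denom == 0 else 0.5 * (g[i - 1] - g[i + 1]) / denom
    return i + 0.5 + delta                        # +0.5: diff samples lie between pixels

# Demo: a soft step edge centred near pixel 5.3 (hypothetical profile).
x = np.arange(10)
profile = 1.0 / (1.0 + np.exp(-(x - 5.3) * 2.0))
edge = subpixel_edge(profile)
```

On this synthetic sigmoid step the estimate lands within a small fraction of a pixel of the true edge position, which is the kind of precision a subpixel method needs before triangulating edge points across views.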
10. The method of claim 9 wherein the first edge point is identified by the processor as an edge point based at least in part on pixel light levels in the first image.

11. The method of claim 10 wherein the first edge point is determined by an identification method selected from the group consisting of: subpixel-edge-location-based-on-partial-area-effect technique, moment-based technique, least-squared-error-based technique, and interpolation technique.

12. The method of claim 10 further including determining a position of the first edge point in each of the first image, the second image, and the third image based at least in part on epipolar geometry relating the first pose, the second pose, and the third pose of the first camera assembly.

13. The method of claim 12 further including determining a position of the first edge point in each of the first image, the second image, and the third image based at least in part on epipolar geometry relating the first pose, the second pose, and the third pose of the first camera assembly.

14. The method of claim 13 further including checking whether the first edge point in the first image, the first edge point in the second image, and the first edge point in the third image, as determined based at least in part on epipolar geometry, are self-consistently determined to be edge points when evaluated based on pixel light levels in the first image, the second image, and the third image, respectively.

15. The method of claim 14 wherein, in response to a determined lack of self-consistency in the 3D coordinates of the first edge point, an action is selected from the group consisting of: eliminating the first edge point from a list of determined edge points, filtering or smoothing the first edge to obtain consistency in the first edge point, and performing an optimization of the pose of the object in 3D space to obtain self-consistency in the first edge point.

16.
The method of claim 10 further including adjusting with the processor the pose of a first object in 3D space to match edges in the first image, the second image, and the third image, wherein at least one of the matched edges includes the first edge point.
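Claims 5-7 and 12-15 rest on epipolar geometry: a point observed in one image constrains its match in another image to a line. As a hypothetical sketch (shared intrinsics and a known relative pose are assumed; none of this is the patent's implementation), the self-consistency check can be phrased as the pixel distance of a candidate point from the epipolar line:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def fundamental(K, R, t):
    """Fundamental matrix for camera 2 at relative pose (R, t), same intrinsics K."""
    E = skew(t) @ R                      # essential matrix
    Kinv = np.linalg.inv(K)
    return Kinv.T @ E @ Kinv

def epipolar_residual(F, x1, x2):
    """Distance (pixels) of point x2 in image 2 from the epipolar line of x1."""
    l = F @ np.append(x1, 1.0)           # epipolar line in image 2
    return abs(np.append(x2, 1.0) @ l) / np.hypot(l[0], l[1])

# Demo: project one 3D point into two hypothetical views and check consistency.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.3, 0.0, 0.0])   # pure sideways camera motion
X = np.array([0.1, -0.2, 2.0])
x1h = K @ X                                   # view 1: camera at origin
x2h = K @ (R @ X + t)                         # view 2: displaced camera
x1, x2 = x1h[:2] / x1h[2], x2h[:2] / x2h[2]
res = epipolar_residual(fundamental(K, R, t), x1, x2)
```

A residual near zero means the two detections are consistent with the reported poses; a large residual would trigger one of the remedies listed in claims 7 and 15 (discard the point, smooth the edge, or re-optimize the object pose).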
Patents cited by this patent (30)
Newell, Bruce D. (Schenectady, NY); Petronis, Thomas J. (Clifton Park, NY); Krause, Lawrence R. (Niskayuna, NY), Apparatus and method for connecting and exchanging remote manipulable elements to a central control source.
Markey, Jr., Myles; Greer, Dale R.; Hibbard, Brett, Method and apparatus for calibrating a non-contact gauging sensor with respect to an external coordinate system.