IPC Classification Information

Country / Type | United States (US) Patent, Registered
International Patent Classification (IPC, 7th ed.) |
Application No. | UP-0219207 (2005-09-02)
Registration No. | US-7706978 (2010-05-20)
Inventors / Address | Schiffmann, Jan K.; Schwartz, David A.
Applicant / Address | Delphi Technologies, Inc.
Agent / Address |
Citation Info | Cited by: 27 patents / Cites: 19 patents
Abstract
A method for estimating unknown parameters (pan angle (ψ), instantaneous tilt angle (τ), and the road geometry of an upcoming road segment) for a vehicle object detection system. The vehicle object detection system is preferably a forward-looking, radar-cued vision system having a camera, a radar sensor and a processing unit. The method first estimates the pan angle (ψ), then corrects the coordinates from a radar track so that the pan angle (ψ) can be treated as zero, and finally solves a least squares problem that determines best estimates for the instantaneous tilt angle (τ) and the road geometry. Estimating these parameters enables the vehicle object detection system to identify, interpret and locate objects more accurately and efficiently.
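The first two stages described in the abstract — estimating the pan angle (ψ) from matched camera/radar detections, then correcting the radar coordinates so the residual pan angle can be treated as zero — can be sketched as follows. This is a minimal illustration, assuming the pan angle is observable as the mean bearing offset between matched vision and radar tracks; the function and variable names are hypothetical and not taken from the patent:

```python
import numpy as np

def estimate_pan_angle(vision_az, radar_az):
    """Estimate the camera pan angle (psi) as the mean horizontal bearing
    offset between matched vision and radar tracks (a simplified stand-in
    for the patent's best-fit estimate over all matched pairs)."""
    return float(np.mean(np.asarray(vision_az) - np.asarray(radar_az)))

def pan_correct(radar_az, pan):
    """Shift radar azimuths by the estimated pan angle so that the residual
    pan angle of the corrected matched pairs can be treated as zero."""
    return np.asarray(radar_az) + pan
```

With noise-free synthetic bearings offset by a constant angle, `estimate_pan_angle` recovers that offset and `pan_correct` brings the radar azimuths into alignment with the vision azimuths.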
Representative Claims
We claim:

1. A method for operating an object detection system to determine a location of an object, comprising the steps of:
(a) generating a vision track from data provided by a camera, said vision track including data indicative of the locations of the object, wherein a pixel coordinate is generated for each object that is detected from an image;
(b) generating a radar track from data provided by a radar sensor, said radar track including data indicative of the locations of the object, wherein a range and azimuth angle coordinate is generated for each detected object;
(c) generating a matched pair indicative of the location of the object based upon the data from said vision and radar tracks;
(d) using the matched pair, estimating a camera pan angle (ψ), an instantaneous tilt angle (τ) and road geometry, wherein said camera pan angle is defined as the horizontal angle between a camera boresight and a radar boresight, wherein said instantaneous tilt angle is a vertical misalignment between the camera boresight and the longitudinal axis of a planar ground patch under a vehicle, wherein said estimated camera pan angle, instantaneous tilt angle (τ), and road geometry minimize the alignment error between said matched pair, wherein said estimated camera pan angle is used to correct or reprocess each radar track so that a set of pan angle corrected matched pairs, each having a correct pan angle, is generated, and wherein said estimated instantaneous tilt angle (τ) and road geometry comprise linearizing at least one trigonometric equation about a nominal tilt angle (τ0) so that said trigonometric equation becomes a linear equation; and
(e) determining the location of the object, said determining including identifying, locating or interpreting the object using the estimates of the pan angle (ψ), the instantaneous tilt angle (τ) and the road geometry, wherein the estimates are used to correct for alignment differences between the camera and the radar sensor while traveling a roadway, and thereby correct for errors in the position of the object in the vision track and the radar track.

2. The method of claim 1, wherein a vertical curvature (c) of an upcoming road segment represents said road geometry.

3. The method of claim 1, wherein said vision track is provided in terms of a two-dimensional Cartesian coordinate system (pixels (p, q)), and said radar track is provided in terms of a two-dimensional polar coordinate system (range and azimuth angle (R, θ)).

4. The method of claim 1, wherein said pan angle (ψ) estimate is filtered with a low-pass filter to provide a filtered pan angle estimate.

5. The method of claim 1, wherein step (d) further includes solving an optimization problem that minimizes the 3-D world coordinate alignment error between said matched pair.

6. The method of claim 5, wherein step (d) further includes utilizing a least squares problem.

7. The method of claim 1, wherein step (d) further includes removing any negligible terms and manipulating said linear equation so that a single linear equation is provided for said matched pair.

8. The method of claim 1, wherein step (d) further includes using said estimated camera pan angle (ψ) and said radar track to provide a pan corrected matched pair having a radar track that has been generally corrected for errors attributable to a camera pan angle.

9. The method of claim 8, wherein step (d) further includes using said pan corrected matched pair to estimate an instantaneous tilt angle (τ) and road geometry.

10. The method of claim 1, wherein the vehicle object detection system is a forward looking, radar-cued vision system.

11. A method for operating an object detection system to determine a location of an object, comprising the steps of:
(a) generating a vision track from data provided by a camera, said vision track including data indicative of the locations of the object, wherein a pixel coordinate is generated for each object that is detected from an image;
(b) generating a radar track from data provided by a radar sensor, said radar track including data indicative of the locations of the object, wherein a range and azimuth angle coordinate is generated for each detected object;
(c) generating a matched pair indicative of the location of the object based upon the data from said vision and radar tracks;
(d) estimating a camera pan angle (ψ) that best fits said matched pair, wherein said camera pan angle is defined as the horizontal angle between a camera boresight and a radar boresight, wherein said instantaneous tilt angle is a vertical misalignment between the camera boresight and the longitudinal axis of a planar ground patch under a vehicle, wherein said estimated camera pan angle, instantaneous tilt angle (τ), and road geometry minimize the alignment error between said matched pair, wherein said estimated camera pan angle is used to correct or reprocess each radar track so that a set of pan angle corrected matched pairs, each having a correct pan angle, is generated;
(e) using said estimated camera pan angle (ψ) and said radar track to provide a pan corrected matched pair(s) having a radar track that has been generally corrected for errors attributable to a camera pan angle, wherein said estimates comprise linearizing at least one trigonometric equation about a nominal tilt angle (τ0) so that said trigonometric equation becomes a linear equation;
(f) using said pan corrected matched pair(s) to estimate an instantaneous tilt angle (τ) and road geometry; and
(g) determining the location of the object, said determining including identifying, locating or interpreting the object using the estimates of the pan angle (ψ), the instantaneous tilt angle (τ) and the road geometry, wherein the estimates are used to correct for alignment differences between the camera and the radar sensor while traveling a roadway, and thereby correct for errors in the position of the object in the vision track and the radar track.

12. A method for operating an object detection system to determine a location of one or more of a plurality of objects, comprising the steps of:
(a) generating a plurality of vision tracks from data provided by a camera, said vision tracks including data indicative of locations of the plurality of objects, wherein a pixel coordinate is generated for each object that is detected from an image;
(b) generating a plurality of radar tracks from data provided by a radar sensor, said radar tracks including data indicative of the locations of the plurality of objects, wherein a range and azimuth angle coordinate is generated for each detected object;
(c) generating a plurality of matched pairs indicative of the locations based upon the data from said vision and radar tracks;
(d) estimating a camera pan angle (ψ) that best fits said plurality of matched pairs, wherein said camera pan angle is defined as the horizontal angle between a camera boresight and a radar boresight, wherein said instantaneous tilt angle is a vertical misalignment between the camera boresight and the longitudinal axis of a planar ground patch under a vehicle, wherein said estimated camera pan angle, instantaneous tilt angle (τ), and road geometry minimize the alignment error between said matched pair, wherein said estimated camera pan angle is used to correct or reprocess each radar track so that a set of pan angle corrected matched pairs, each having a correct pan angle, is generated, and wherein said estimated instantaneous tilt angle (τ) and road geometry comprise linearizing at least one trigonometric equation about a nominal tilt angle (τ0) so that said trigonometric equation becomes a linear equation;
(e) using said estimated camera pan angle (ψ) and said plurality of radar tracks to provide a plurality of pan corrected matched pairs each having a radar track that has been generally corrected for errors attributable to a camera pan angle;
(f) using said plurality of pan corrected matched pairs to generate a least squares problem, said least squares problem treating said estimated camera pan angle (ψ) as zero;
(g) linearizing one or more trigonometric equations about a nominal tilt angle (τ0) so that said trigonometric equations become linear equations;
(h) solving said least squares problem to estimate both said instantaneous tilt angle (τ) and said vertical curvature (c); and
(i) determining the location of one or more of the objects, said determining including identifying, locating or interpreting objects using the estimates of the pan angle (ψ), the instantaneous tilt angle (τ) and the road geometry, wherein the estimates are used to correct for alignment differences between the camera and the radar sensor while traveling a roadway, and thereby correct for errors in the position of the objects in the vision track and the radar track.
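Steps (f)-(h) of claim 12 describe fitting the instantaneous tilt angle (τ) and vertical curvature (c) by least squares, after linearizing the trigonometric road-geometry equations about a nominal tilt angle (τ0). The sketch below illustrates that stage under a simplified small-angle road model in which the elevation angle to a road point at range R is approximately τ + (c/2)·R − h/R for camera height h; this model equation and all names are illustrative assumptions, not the patent's actual formulation:

```python
import numpy as np

def estimate_tilt_and_curvature(ranges, elev_angles, cam_height):
    """Least-squares estimate of instantaneous tilt (tau) and vertical road
    curvature (c) from pan-corrected matched pairs.

    Assumed small-angle model (stands in for the patent's linearized
    trigonometric equations):  e ~= tau + (c/2) * R - h / R,
    which is linear in the unknowns (tau, c).
    """
    R = np.asarray(ranges, dtype=float)
    e = np.asarray(elev_angles, dtype=float)
    # Design matrix: one row per matched pair, columns for tau and c.
    A = np.column_stack([np.ones_like(R), R / 2.0])
    b = e + cam_height / R
    (tau, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return tau, c
```

On noise-free synthetic data generated from the same model, the fit recovers the true tilt and curvature; with measurement noise, the least-squares solution gives the best-fit estimates over all matched pairs, as in the claim.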