IPC Classification Information

Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.):
Application No.: US-0458525 (2012-04-27)
Registration No.: US-8624709 (2014-01-07)
Inventors:
- Estes, Andrew D.
- Frederick, Johnny E.
Applicant:
- Intergraph Technologies Company
Attorney/Agent: Sunstein Kann Murphy & Timbers LLP
Citation information: cited by 0 patents; cites 29 patents
Abstract
A method and a system for calibrating a camera in a surveillance system. The method and system use a mathematical rotation between a first coordinate system and a second coordinate system in order to calibrate a camera with a map of an area. In some embodiments, the calibration can be used to control the camera and/or to display a view cone on the map.
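The core of the calibration is estimating the rotation that aligns direction vectors expressed in the map's coordinate system with the same directions in the camera's system. The patent determines it from at least three point pairs (a least-squares fit such as Kabsch/Wahba would be typical); the sketch below uses the simpler two-pair TRIAD construction purely to illustrate the idea, and every name in it is an assumption, not the patent's implementation:

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _frame(v1, v2):
    # Orthonormal triad built from two non-parallel directions.
    t1 = _normalize(v1)
    t2 = _normalize(_cross(v1, v2))
    t3 = _cross(t1, t2)
    return t1, t2, t3

def triad_rotation(map_dir1, map_dir2, cam_dir1, cam_dir2):
    """Rotation R (3x3, tuple of rows) such that R maps each map-system
    direction onto its paired camera-system direction."""
    t = _frame(map_dir1, map_dir2)   # triad in the map (first) system
    s = _frame(cam_dir1, cam_dir2)   # triad in the camera (second) system
    # R = S * T^T, where S and T have the triad vectors as columns.
    return tuple(
        tuple(sum(s[k][i] * t[k][j] for k in range(3)) for j in range(3))
        for i in range(3)
    )
```

For example, mapping the x-axis to the y-axis and the y-axis to the negative x-axis recovers a 90-degree rotation about z.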
Representative Claims
1. A computer-implemented method for calibrating at least one camera, the system including at least one camera, the method comprising: displaying a video feed from the at least one camera, the at least one camera having an orientation characterized by pan, zoom, and tilt coordinates; displaying a map of an area, the map being characterized by geospatial coordinates; allowing a user to select at least three pairs of points using at least one input device, a first point of the pair being selected in the map and a second point of the pair being selected from the video feed, the first point and the second point corresponding to the same geographic location; converting, in a computer process, the at least three points selected in the map from geospatial coordinates into Cartesian coordinates in three dimensions defined by a first three-dimensional coordinate system; converting, in a computer process, the at least three points selected in the video feed from pan, zoom, and tilt coordinates into Cartesian coordinates in three dimensions defined by a second three-dimensional coordinate system; and determining, in a computer process, a mathematical rotation between the first three-dimensional coordinate system and the second three-dimensional coordinate system based upon the Cartesian coordinates for the at least three pairs of points.

2. A method according to claim 1, wherein the mathematical rotation is a matrix.

3. A method according to claim 1, further comprising: displaying the location of the at least one camera on the map.

4. A method according to claim 1, wherein the geospatial coordinates are latitude, longitude, and altitude coordinates.

5. A method according to claim 1, wherein the input device is at least one of a mouse, a cursor, a crosshair, a touch screen, and a keyboard.
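Claim 1 hinges on two conversions into Cartesian coordinates. A common concrete choice (an assumption here — the claim does not fix any particular datum or axis convention) is the WGS-84 geodetic-to-ECEF formula for the map points, and a spherical-to-Cartesian mapping for the camera's pan/tilt directions:

```python
import math

WGS84_A = 6378137.0            # WGS-84 semi-major axis (metres)
WGS84_E2 = 6.69437999014e-3    # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Latitude/longitude/altitude -> Earth-centered Cartesian coordinates (metres)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z

def pan_tilt_to_unit(pan_deg, tilt_deg):
    """Pan/tilt -> unit direction vector in the camera-centered system.
    Convention (assumed): pan measured from +x toward +y, tilt up from the xy-plane."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(tilt) * math.cos(pan),
            math.cos(tilt) * math.sin(pan),
            math.sin(tilt))
```

With at least three such direction pairs in hand, the rotation between the two systems can be fitted.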
6. A method according to claim 1, further comprising: allowing the user to select at least one point in the map using the at least one input device; converting, in a computer process, the geospatial coordinates for the selected point into Cartesian coordinates in three dimensions defined by the first three-dimensional coordinate system; applying, in a computer process, the rotation to the Cartesian coordinates for the selected point to determine Cartesian coordinates in three dimensions defined by the second three-dimensional coordinate system; converting, in a computer process, the Cartesian coordinates defined by the second three-dimensional coordinate system into pan and tilt coordinates for the selected point; and providing orientation instructions to the at least one camera based upon the pan and tilt coordinates for the selected point.

7. A method according to claim 1, further comprising: receiving coordinates from a sensor for at least one target; if the coordinates for the at least one target are not Cartesian coordinates defined by the first three-dimensional coordinate system, converting, in a computer process, the coordinates into Cartesian coordinates in three dimensions defined by the first three-dimensional coordinate system; applying, in a computer process, the rotation to the Cartesian coordinates defined by the first three-dimensional coordinate system to determine Cartesian coordinates in three dimensions defined by the second three-dimensional coordinate system; converting, in a computer process, the Cartesian coordinates defined by the second three-dimensional coordinate system into pan and tilt coordinates; and providing orientation instructions to the at least one camera based upon the pan and tilt coordinates.

8. A method according to claim 7, further comprising: displaying the location of the at least one sensor on the map.

9. A method according to claim 7, further comprising: displaying the location of the at least one target on the map.
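The steering steps of claims 6 and 7 (rotate a point's direction into the camera system, then recover pan and tilt) reduce to a matrix-vector product and an inverse spherical conversion. The axis convention below (pan from +x toward +y, tilt up from the xy-plane) is an assumption; the claims do not specify one:

```python
import math

def apply_rotation(R, v):
    """Multiply a 3x3 rotation (tuple of rows) by a 3-vector."""
    return tuple(sum(R[i][k] * v[k] for k in range(3)) for i in range(3))

def unit_to_pan_tilt(v):
    """Direction vector -> (pan, tilt) in degrees, inverting the assumed convention."""
    x, y, z = v
    pan = math.degrees(math.atan2(y, x))
    tilt = math.degrees(math.atan2(z, math.hypot(x, y)))
    return pan, tilt
```

The resulting pan/tilt pair is what would be sent to the camera as an orientation instruction.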
10. A method according to claim 1, wherein the video feed has upper left, upper right, lower left, and lower right corners, and the method further comprising: determining, in a computer process, effective pan and tilt angles for at least the lower left and lower right corners of the video feed based upon the pan, zoom, and tilt coordinates for the camera orientation; converting, in a computer process, the effective pan and tilt angles for at least the lower left and lower right corners of the video feed into Cartesian coordinates in three dimensions defined by the second three-dimensional coordinate system; applying, in a computer process, the rotation to the Cartesian coordinates defined by the second coordinate system to determine Cartesian coordinates in three dimensions defined by the first coordinate system for at least the lower left and lower right corners of the video feed; determining, in a computer process, a view cone using the Cartesian coordinates defined by the first coordinate system for at least the lower left and lower right corners of the video feed; determining, in a computer process, the view cone based upon the upper left and upper right corners of the video feed; and displaying the view cone on the map.
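One way to read "effective pan and tilt angles" for the frame corners is to offset the camera's pointing direction by half the fields of view implied by the zoom setting. This sketch ignores lens distortion and the tilt-dependent pan widening that the patent's method would account for, so it is only a rough illustration under assumed names and geometry:

```python
def corner_angles(pan_deg, tilt_deg, hfov_deg, vfov_deg):
    """Approximate effective (pan, tilt) of the four video-frame corners,
    assuming the zoom setting maps to horizontal/vertical fields of view."""
    half_h, half_v = hfov_deg / 2.0, vfov_deg / 2.0
    return {
        "upper_left":  (pan_deg - half_h, tilt_deg + half_v),
        "upper_right": (pan_deg + half_h, tilt_deg + half_v),
        "lower_left":  (pan_deg - half_h, tilt_deg - half_v),
        "lower_right": (pan_deg + half_h, tilt_deg - half_v),
    }
```

Each corner angle pair would then be converted to a direction vector, rotated into the map system, and intersected with the ground to form the view-cone polygon.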
11. A method according to claim 10, wherein, when the tilt coordinate for the camera is below the horizon, determining the view cone based upon the upper left and upper right corners of the video feed comprises: determining, in a computer process, effective pan for the upper left and upper right corners of the video feed based upon the pan, zoom, and tilt coordinates for the camera orientation; and converting, in a computer process, the effective pan and tilt angles for the upper left and upper right corners of the video feed into Cartesian coordinates in three dimensions defined by the second three-dimensional coordinate system; applying, in a computer process, the mathematical rotation to the Cartesian coordinates to determine Cartesian coordinates in three dimensions defined by the first three-dimensional coordinate system for the upper left and upper right corners of the video feed; and determining, in a computer process, the view cone based upon the Cartesian coordinates, defined by the first three-dimensional coordinate system, for the upper left, upper right, lower left and lower right corners.

12. A method according to claim 11, wherein the view cone is a polygon and the Cartesian coordinates, defined by the first three-dimensional coordinate system, for the upper left, upper right, lower left and lower right corners are the vertices of the polygon.
13. A method according to claim 10, wherein, when the tilt coordinate for the camera is above the horizon, determining the view cone based upon the upper left and upper right corners of the video feed comprises: determining, in a computer process, effective tilt angles for the upper left and upper right corners of the video feed based upon the pan, zoom, and tilt coordinates for the camera; determining, in a computer process, coordinates in three dimensions, defined by the first three-dimensional coordinate system, for the upper left and upper right corners of the video feed based upon a resolvable distance of the camera; and determining, in a computer process, the view cone based upon the Cartesian coordinates, defined by the first three-dimensional coordinate system, for the upper left, upper right, lower left and lower right corners.

14. A method according to claim 13, wherein the view cone is a polygon and the Cartesian coordinates, defined by the first three-dimensional coordinate system, for the upper left, upper right, lower left and lower right corners are the vertices of the polygon.
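In the above-horizon case of claim 13, a corner ray never intersects the ground, so the view-cone vertex is instead placed at the camera's resolvable distance along the ray. A minimal sketch (the function name and the straight-line ray model are assumptions):

```python
def corner_at_range(camera_xyz, direction_unit, resolvable_range_m):
    """Place a view-cone vertex at a fixed range along a corner ray that
    points above the horizon and so never meets the ground."""
    return tuple(c + resolvable_range_m * d
                 for c, d in zip(camera_xyz, direction_unit))
```

The vertex produced this way is combined with the ground intersections of the lower corners to close the polygon.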
15. Apparatus comprising at least one non-transitory computer readable medium encoded with instructions which when loaded on at least one computer, establish processes for calibrating at least one camera orientation characterized by pan, zoom, and tilt coordinates, the processes including: allowing a user to select at least three pairs of points using at least one input device, a first point of the pair being selected in a map characterized by geospatial coordinates and a second point of the pair being selected from a video feed from the at least one camera, the first point and the second point corresponding to the same geographic location; converting the at least three points selected in the map from geospatial coordinates into Cartesian coordinates in three dimensions defined by a first three-dimensional coordinate system; converting the at least three points selected in the video feed from pan, zoom, and tilt coordinates into Cartesian coordinates in three dimensions defined by a second three-dimensional coordinate system; and determining a mathematical rotation between the first three-dimensional coordinate system and the second three-dimensional coordinate system based upon the Cartesian coordinates for the at least three pairs of points.

16. An apparatus according to claim 15, wherein the instructions establish processes further including: converting the geospatial coordinates for a selected point into Cartesian coordinates defined by the first three-dimensional coordinate system; applying the rotation to the Cartesian coordinates for the selected point to determine Cartesian coordinates defined by the second three-dimensional coordinate system; converting the Cartesian coordinates defined by the second three-dimensional coordinate system into pan and tilt coordinates for the selected point; and providing orientation instructions to the at least one camera based upon the pan and tilt coordinates for the selected point.
17. An apparatus according to claim 15, wherein the instructions establish processes further including: receiving coordinates from a sensor for at least one target; if the coordinates for the at least one target are not Cartesian coordinates defined by the first three-dimensional coordinate system, converting the coordinates into Cartesian coordinates defined by the first three-dimensional coordinate system; applying the rotation to the Cartesian coordinates defined by the first three-dimensional coordinate system to determine Cartesian coordinates defined by the second three-dimensional coordinate system; converting the Cartesian coordinates defined by the second three-dimensional coordinate system into pan and tilt coordinates; and providing orientation instructions to the at least one camera based upon the pan and tilt coordinates.

18. An apparatus according to claim 15, wherein the instructions establish processes further including: determining effective pan and tilt angles for at least the lower left and lower right corners of the video feed based upon the pan, zoom, and tilt coordinates for the camera orientation; converting the effective pan and tilt angles for at least the lower left and lower right corners of the video feed into Cartesian coordinates defined by the second three-dimensional coordinate system; applying the rotation to the Cartesian coordinates defined by the second three-dimensional coordinate system to determine Cartesian coordinates defined by the first three-dimensional coordinate system for at least the lower left and lower right corners of the video feed; determining a view cone using the Cartesian coordinates defined by the first three-dimensional coordinate system for at least the lower left and lower right corners of the video feed; determining the view cone based upon the upper left and upper right corners of the video feed; and displaying the view cone on the map.
19. A surveillance system comprising: at least one camera having an orientation characterized by pan, zoom and tilt coordinates; a processor in communication with the at least one camera; at least one display in communication with the processor, the at least one display displaying a video feed from the at least one camera and a map of an area, the map being characterized by geospatial coordinates; at least one input device in communication with the processor allowing a user to select points on the video feed and the map; a memory storing instructions executable by the processor to perform processes that include: converting points selected in the map from the geospatial coordinates into Cartesian coordinates in three dimensions defined by a first three-dimensional coordinate system; converting points selected in the video feed from pan, zoom, and tilt coordinates into Cartesian coordinates in three dimensions defined by a second three-dimensional coordinate system; and determining a mathematical rotation between the first three-dimensional coordinate system and the second three-dimensional coordinate system based upon the Cartesian coordinates for at least three pairs of points, a first point of the pair having been selected in the map and a second point of the pair having been selected from the video feed.

20. The surveillance system of claim 19, wherein the mathematical rotation is a matrix.

21. The surveillance system of claim 19, wherein the geospatial coordinates are latitude, longitude, and altitude coordinates.

22. The surveillance system of claim 19, wherein the at least one input device is at least one of a mouse, a cursor, a crosshair, a touch screen, and a keyboard.
23. The surveillance system of claim 19, wherein the memory further stores instructions executable by the processor to perform processes that include: applying the mathematical rotation to the Cartesian coordinates defined by the first three-dimensional coordinate system for a point selected by the user in the map to determine Cartesian coordinates defined by the second three-dimensional coordinate system; converting the Cartesian coordinates defined by the second three-dimensional coordinate system into pan and tilt coordinates for the selected point; and providing orientation instructions to the at least one camera based upon the pan and tilt coordinates for the selected point.

24. The surveillance system of claim 19 further comprising a sensor connected to provide coordinates to the processor.