Continuous geospatial tracking system and method
IPC Classification
Country / Type: United States (US) Patent — Granted
International Patent Classification (IPC, 7th edition): H04N-007/18; H04W-004/00
Application number: US-0367775 (2009-02-09)
Registration number: US-8711218 (2014-04-29)
Inventor / Address: Zehavi, Ron
Applicant / Address: Verint Systems, Ltd.
Agent / Address: Meunier Carlin & Curfman
Citation information
Cited by: 5
Cited patents: 0
Abstract
A surveillance system and methods are disclosed. The system of the invention includes computing means connected to memory means, input means, a plurality of sensors such as video cameras, and a plurality of display screens. The system is adapted to compute for the sensors a 3D coverage space that accounts for terrain data, man-made objects, and specific features of the sensors, such as the 3D location and the pan, tilt and zoom (PTZ) of a camera, and to establish a database indicative of the coverage area. The system and method of the invention are also adapted to support tracking of an object within the coverage space of the sensors, in either automatic or manual mode, and to provide a user of the system with data indicating which sensors' coverage space a tracked object is about to enter.
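The coverage-space computation described in the abstract is essentially a viewshed: for each location near a sensor, test whether the terrain (and any mapped obstacles encoded in the height grid) blocks the line of sight from the sensor. The patent does not disclose an algorithm, so the following is only a minimal sketch on a discrete height grid; all function names and parameters (`visible`, `coverage_area`, `max_range`, the sampling count) are illustrative assumptions, not from the patent.

```python
import numpy as np

def visible(terrain, sensor_xyz, target_xy, samples=50):
    """Naive sampled line-of-sight test: True if no terrain cell
    between sensor and target rises above the sight line."""
    sx, sy, sz = sensor_xyz
    tx, ty = target_xy
    tz = terrain[ty, tx]
    # sample the ray, skipping t=0 (the sensor's own cell)
    for t in np.linspace(0.0, 1.0, samples, endpoint=False)[1:]:
        x = sx + (tx - sx) * t
        y = sy + (ty - sy) * t
        los_h = sz + (tz - sz) * t          # height of the sight line here
        if terrain[int(round(y)), int(round(x))] > los_h:
            return False                     # terrain blocks the ray
    return True

def coverage_area(terrain, sensor_xyz, max_range):
    """Set of (x, y) cells within range that the sensor can see."""
    h, w = terrain.shape
    sx, sy, _ = sensor_xyz
    covered = set()
    for y in range(h):
        for x in range(w):
            if ((x - sx) ** 2 + (y - sy) ** 2 <= max_range ** 2
                    and visible(terrain, sensor_xyz, (x, y))):
                covered.add((x, y))
    return covered
```

A real system would use an efficient viewshed algorithm and fold in the sensor-specific characteristics the claims mention (type, angle of sight, zoom); the brute-force scan above only shows the geometric idea.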
Representative Claims
1. A surveillance system for monitoring an area of interest, the system comprising:
computing unit;
means for memory connected to said computing unit;
means for input connected to said computing unit to receive user instructions;
means for displaying connected to said computing unit and to a plurality of sensors to display data received from at least one of said sensors and system data, said displaying means comprising: a central display; and at least one encircling display; and
a database of three dimensional coordinates indicative of the coverage space of said sensors;
wherein said displays are capable of displaying at least text and video;
wherein said plurality of sensors are in active communication with said computing unit, the coverage area of at least one sensor within said area of interest is mapped in a process of mapping, and saved to said computing unit, said coverage area is the area said sensor is able to sense;
said mapping including data describing elements or entities that interfere with the line of sight of said sensor;
said computing unit to compute for at least one of said sensors, the three dimensional coordinates of a location of a sensed item within said coverage area, based on data indicative of the location of said sensed item from said at least one sensor;
said computing unit to accept an indication of an object of interest in a two dimensional image displayed on said means for displaying from a user, and calculate a set of three dimensional coordinates indicative of the location of said object in said area of interest;
said computing unit to designate at least one sensor to monitor and track said indicated object of interest;
said means for displaying to display on said central display said location of said object of interest; and
said means for displaying to display on said encircling displays areas neighboring the area displayed by said central display.

2.
The system of claim 1, further comprising: a terrain database indicative of height of points within said area of interest.

3. The system of claim 1, wherein said mapping comprise: geographical data of said sensing area; data describing terrain elements that interfere with the line of sight of said sensor; and data describing man-made entities that interfere with the line of sight of the said sensor.

4. The system of claim 1, wherein said mapping comprise: considering specific characteristics of said sensor, said characteristics are selected from a list comprising sensor type, sensor location, sensor angle of sight, sensor zooming capabilities and sensor performance data.

5. The system of claim 1, wherein said object of interest is presented in a format selected from a list comprising: presenting said object of interest on the background of a map; presenting said object of interest on the background of a two dimensional picture; presenting said object of interest on the background of a two dimensional video image; presenting said object of interest on a three dimensional background displaying a terrain corresponding to the area chosen for display; and presenting said object of interest on a three dimensional background displaying a terrain corresponding to the area chosen for display together with man-made objects located at the area chosen for display.

6. The system of claim 1, further adapted to record inputs from at least one of said sensors.

7. The system of claim 1, further adapted to provide identification of sensors having a selected location inside their coverage area.

8. The system of claim 7, wherein said at least one sensor designated is selected by a user.

9. The system of claim 1, further adapted to calculate a momentary vector of movement comprising values representing the origin point, the direction of movement and the velocity of movement in case said object of interest is moving.

10.
The system of claim 9, wherein said momentary vector of movement is displayed by said means for displaying.

11. The system of claim 9, further adapted to calculate based on said momentary vector of movement and on said sensor coverage mapping, which are the sensors into their coverage areas said moving object of interest is about to enter.

12. The system of claim 11, wherein the area being displayed on at least one of said side displays presents an area said moving object of interest is most likely to arrive at, according to said momentary vector of movement of said moving object.

13. The system of claim 11, further adapted to present to a user identification of said sensors into their coverage areas said moving object of interest is about to enter.

14. The system of claim 11, further comprising: at least one preference table indicating priority of sensors having specific location inside said area of interest, said sensors are from said plurality of sensors.

15. The system of claim 14, wherein said priority is set based on parameters selected from a list comprising the quality of the information received through said sensor, the accuracy of the information, the readiness of said sensor and the nature of the signal provided by said sensor.

16. The system of claim 14, further adapted to automatically switch the display on said central display to engage to a selected sensor of said sensors into their coverage areas said moving object of interest enters based on said momentary vector of movement and said automatic preference table.

17. The system of claim 16, further adapted to direct said selected sensor to point to the point in said selected sensor coverage area which is closest to the track of said moving object of interest prior to switching the display on said central display to said selected sensor.

18.
The system of claim 14, further adapted to: evaluate future possible location of said object of interest in case continuous tracking contact with said object of interest is lost, said evaluation is based on parameters selected from a list comprising: last known location, last known speed, last known direction and available paths of propagation; present to a user the boundaries of an area to which said object of interest could have reached based on said evaluation; and display to a user list of sensors having said area within their coverage area.

19. The system of claim 1, wherein said at least one sensor is selected from a list comprising sensors providing two dimensional location data and sensors providing three dimensional location data.

20. The system of claim 1, wherein said at least one sensor is selected from a list comprising camera, video camera, IR camera, EM radar, sonar, very shortwave sensor and MRI sensor.

21. A surveillance method for monitoring an area of interest, the method comprising:
mapping the coverage area of at least one sensor of a plurality of sensors within said area of interest to create a mapped representation of said area, said coverage area is the area said sensor is able to sense;
said mapped representation including data describing elements or entities that interfere with a line of sight of said sensor;
saving said mapped representation of said coverage area;
displaying data received from at least one of said plurality of sensors and system data on means for displaying comprising: a central display; and at least one encircling display;
wherein said displays are capable of displaying at least text and video;
maintaining a database of three dimensional coordinates indicative of the coverage space of said sensors;
computing, for at least one of said sensors, the three dimensional coordinates of a location of a sensed item within said coverage area, based on data indicative of the location of said sensed item from said at least one sensor;
accepting an indication of an object of interest in a two dimensional image displayed on said means for displaying from a user, and calculate a set of three dimensional coordinates indicative of the location of said object in said area of interest;
designating at least one sensor to monitor and track said indicated object of interest;
displaying on said central display said location of said object of interest; and
displaying on said encircling displays areas neighboring the area displayed by said central display.

22. The method of claim 21, further comprising: maintaining a terrain database indicative of heights of points within said area of interest.

23. The method of claim 21, wherein said mapping comprise: geographical data of said sensing area; data describing terrain elements that interfere with the line of sight of said sensor; and data describing man-made entities that interfere with the line of sight of the said sensor.

24. The method of claim 21, wherein said mapping comprise: considering specific characteristics of said sensor, said characteristics are selected from a list comprising sensor type, sensor location, sensor angle of sight, sensor zooming capabilities and sensor performance data.

25. The method of claim 21, wherein said object of interest is presented in a format selected from a list comprising: presenting said object of interest on the background of a map; presenting said object of interest on the background of a two dimensional picture; presenting said object of interest on the background of a two dimensional video image; presenting said object of interest on a three dimensional background displaying a terrain corresponding to the area chosen for display; and presenting said object of interest on a three dimensional background displaying a terrain corresponding to the area chosen for display together with man-made objects located at the area chosen for display.

26.
The method of claim 21, further comprising recording inputs from at least one of said sensors.

27. The method of claim 21, further comprising providing identification of sensors having a selected location inside their coverage space.

28. The method of claim 27, wherein said designation of at least one sensor to monitor and track said indicated object of interest is done according to a user selection.

29. The method of claim 21, further comprising calculating a momentary vector of movement comprising values representing the origin point, the direction of movement and the velocity of movement in case said object of interest is moving.

30. The method of claim 29, further comprising displaying said momentary vector of movement by said means for displaying.

31. The method of claim 29, further comprising calculating based on said momentary vector of movement and on said sensor coverage mapping, which are the sensors into their coverage areas said moving object of interest is about to enter.

32. The method of claim 31, further comprising displaying on at least one of said side displays an area said moving object of interest is most likely to arrive at, according to said momentary vector of movement of said moving object.

33. The method of claim 31, further comprising presenting to a user identification of said sensors into their coverage area said moving object of interest is about to enter.

34. The method of claim 31, further comprising: maintaining at least one preference table indicating priority of sensors having specific location inside said area of interest, said sensors are from said plurality of sensors.

35. The method of claim 34, further comprising setting said priority based on parameters selected from a list comprising the quality of the information received through said sensor, the accuracy of the information, the readiness of said sensor and the nature of the signal provided by said sensor.

36.
The method of claim 34, further comprising switching the display on said central display automatically to a selected sensor of said sensors into their coverage areas said moving object of interest enters based on said momentary vector of movement and said automatic preference table.

37. The method of claim 36, further comprising directing said selected sensor to point to the point in said selected sensor coverage area which is closest to the track of said moving object of interest prior to switching the display on said central display to said selected sensor.

38. The method of claim 34, further comprising: evaluating future possible location of said object of interest in case continuous tracking contact with said object of interest is lost, said evaluation is based on parameters selected from a list comprising: last known location, last known speed, last known direction and available paths of propagation; presenting to a user the boundaries of an area to which said object of interest could have reached based on said evaluation; and displaying to a user list of sensors having said area within their coverage area.

39. The method of claim 21, wherein said at least one sensor is selected from a list comprising sensors providing two dimensional location data and sensors providing three dimensional location data.

40. The method of claim 21, wherein said at least one sensor is selected from a list comprising video camera, IR camera, EM radar, sonar, very shortwave sensor and MRI sensor.

41.
A non-transitory machine-readable medium having stored thereon instructions that, if executed by a machine, cause the machine to perform a method comprising:
receiving from at least one sensor location information of at least one object;
receiving user instructions;
mapping the coverage area of said at least one sensor of a plurality of sensors within an area of interest to create a mapped representation of said area, said coverage area is the area said sensor is able to sense;
said mapped representation including data describing elements or entities that interfere with a line of sight of said sensor;
saving said mapped representation of said coverage area;
sending for display data received from at least one of said plurality of sensors and system data on means for displaying;
maintaining a database of three dimensional coordinates indicative of the coverage space of said sensors;
computing, for at least one of said sensors, the three dimensional coordinates of a location of a sensed item within said coverage area, based on data indicative of the location of said sensed item for said at least one sensor;
accepting an indication of an object of interest in a two dimensional image displayed on said means for displaying from a user, and calculate a set of three dimensional coordinates indicative of the location of said object in said area of interest;
designating at least one sensor to monitor and track said indicated object of interest;
displaying on a central display said location of said object of interest; and
displaying on encircling displays areas neighboring the area displayed by said central display.
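The tracking mechanism that runs through claims 9-16 (and 29-36) rests on a "momentary vector of movement" — origin point, direction, and velocity — which is extrapolated against the sensor coverage mapping to find which sensors' coverage areas the moving object is about to enter. The claims do not give an algorithm; the sketch below is one simple way to realize that idea on a grid, where each sensor's mapped coverage is a set of cells (as a viewshed computation might produce). All names (`momentary_vector`, `sensors_about_to_be_entered`, `horizon`, the sensor ids) are hypothetical.

```python
import numpy as np

def momentary_vector(p_prev, p_curr, dt):
    """Momentary vector of movement from two timed position fixes:
    returns (origin point, unit direction, speed)."""
    d = np.asarray(p_curr, float) - np.asarray(p_prev, float)
    speed = np.linalg.norm(d) / dt
    direction = d / np.linalg.norm(d) if speed > 0 else d
    return np.asarray(p_curr, float), direction, speed

def sensors_about_to_be_entered(origin, direction, speed, coverages,
                                horizon=10.0, step=1.0):
    """Extrapolate the track for `horizon` seconds and report, in
    order of encounter, the sensors whose mapped coverage cells the
    predicted path crosses. `coverages` maps sensor id -> set of
    (x, y) cells."""
    hits = []
    for t in np.arange(step, horizon + step, step):
        cell = tuple(np.round(origin[:2] + direction[:2] * speed * t)
                     .astype(int))
        for sensor_id, cells in coverages.items():
            if cell in cells and sensor_id not in hits:
                hits.append(sensor_id)
    return hits
```

A preference table (claims 14-16) would then rank the returned sensors before the central display is switched; the straight-line extrapolation here is the simplest possible motion model and is only meant to illustrate the prediction step.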