Moving object detection, tracking, and displaying systems
IPC Classification
Country/Type: United States (US) patent, granted
International Patent Classification (IPC, 7th edition): G06F-003/048; G08B-013/194; G06T-007/20
Application number: US-0908281 (filed 2010-10-20)
Registration number: US-9430923 (granted 2016-08-30)
Inventors: Kniffen, Stacy K.; Gibbs, Daniel P.; Bailey, Weldon T.; Hillebrand, Mark J.; Erbert, Stephen R.
Applicant: INNOVATIVE SIGNAL ANALYSIS, INC.
Attorney/Agent: Scholz, Katherine M.
Citation information: times cited: 1; cited patents: 58
Abstract
Moving object detecting, tracking, and displaying systems are provided. Systems illustratively include a graphical user interface and a processing unit. The processing unit is a functional part of the system that executes computer readable instructions to generate the graphical user interface. The graphical user interface may include an alert and tracking window that has a first dimension that corresponds to a temporal domain and a second dimension that corresponds to a spatial domain. In some embodiments, alert and tracking windows include target tracking markers. Target tracking markers optionally provide information about moving objects such as, but not limited to, information about past locations of moving objects and information about sizes of moving objects. Certain embodiments may also include other features such as zoom windows, playback controls, and graphical imagery added to a display to highlight moving objects.
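The alert and tracking window described above, with one dimension mapped to time and the other to space, can be pictured as a 2-D buffer in which each new row summarizes where motion occurred across the image columns of the latest frame. The following is a minimal sketch of that idea, not the patented implementation; the function names, the grayscale-frame assumption, and the simple per-column thresholding scheme are all illustrative choices.

```python
import numpy as np

def motion_raster_row(prev_frame, curr_frame, threshold=30):
    """Compress one frame-to-frame comparison into a single raster row.

    Each entry of the row corresponds to one pixel column of the camera
    image; a column is marked 1 if any pixel in it changed by more than
    `threshold` between the two frames (hypothetical scheme).
    """
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return (diff > threshold).any(axis=0).astype(np.uint8)

def alert_and_tracking_window(frames, threshold=30):
    """Stack one row per successive frame pair.

    Vertical axis = time (one row per comparison), horizontal axis =
    space (image columns), mirroring the temporal/spatial dimensions
    of the window described in the abstract.
    """
    rows = [motion_raster_row(a, b, threshold)
            for a, b in zip(frames, frames[1:])]
    return np.stack(rows)
```

With a bright block moving one column per frame, the marked cells trace a diagonal path through the window: time runs down the rows while the object's position moves across the columns.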
Representative Claims
1. A moving object tracking security system comprising: a graphical user interface on a display device, the graphical user interface comprising a plurality of discrete portions, wherein a first portion comprises an alert and tracking window, the alert and tracking window having a first dimension that corresponds to a temporal domain and a second dimension corresponding to a spatial domain, and a second portion, wherein the second portion provides a pictorial display of a plurality of objects, at least one of which is a detected moving object and wherein the detected moving object is indicated by a detection highlight; wherein the graphical user interface is configured to simultaneously identify and track each of the plurality of detected moving objects, wherein simultaneously tracking the plurality of detected moving objects comprises, for each of the detected moving objects: providing at least one indicia of a past location of the detected moving object and a real-time current location of the detected moving object; and wherein the current and the past location indicia are provided simultaneously, wherein the real-time current location indicia are displayed in both the first and second portions, and wherein the indicia of past location are displayed in the second portion of the graphical user interface; a processing unit that is a functional part of the system and executes computer readable instructions to generate the graphical user interface, wherein the processing unit generates the indicia of past locations by comparing a location of the detected moving object in each of a series of successive pictures displayed in the pictorial display portion, wherein each of the successive pictures is registered successively against a reference image such that the current location of the detected moving object is successively updated, and wherein each picture comprises a series of images combined, by the processing unit, to form a panoramic view; and wherein the first portion comprises a plurality of rasters, each raster comprising a row of pixels, wherein each pixel in the row of pixels corresponds to a corresponding column of pixels in the pictorial image of the second portion, and wherein each raster is generated by the processing unit as a new image is received and provided on the pictorial display.

2. The system of claim 1, wherein the detection highlight provides information about the real-time current speed, a real-time current location, at least one past location, and at least one past speed for at least one of the plurality of detected moving objects, and at least one predicted future location of the detected moving objects, wherein at least one of the plurality of detected moving objects does not include any component of the moving object tracking security system.

3. The system of claim 1, wherein tracking the plurality of detected moving objects further comprises: providing at least one indicia of a predicted future trajectory of the detected moving object, wherein the at least one indicia of a predicted future trajectory of the detected moving object is automatically displayed by the graphical user interface, and wherein the processing unit generates the indicia of a predicted future trajectory by comparing the registered successive pictures displayed in the pictorial display portion.

4. The system of claim 3, wherein real-time current, past, and predicted future locations of the plurality of detected moving objects within a video window are highlighted within the video window utilizing an automated computer algorithm, and wherein a notification is provided within the video window indicating a substantial change in trajectory or speed from the predicted trajectory of one of the plurality of detected moving objects.

5. The system of claim 3, wherein the pictorial display portion is a video window, and wherein the video window and the alert and tracking window are aligned in a vertical direction, such that at least one edge of the alert and tracking window is in contact with an edge of the video window.

6. The system of claim 3, wherein the pictorial display portion is a video window, and wherein the video window and the alert and tracking window are aligned in a horizontal direction, such that at least one edge of the alert and tracking window is in contact with an edge of the video window.

7. The system of claim 3, wherein the graphical user interface includes one or more zoom windows in a second portion of the graphical user interface, the one or more zoom windows being generated based at least in part upon a user selection.

8. The system of claim 3, wherein the graphical user interface includes one or more zoom windows, the one or more zoom windows being generated automatically by a software algorithm.

9. The system of claim 1, wherein the system obtains information from a non-optical sensor.

10. The moving object tracking security system of claim 1, wherein the alert and tracking window is configured to provide an alert when an object passes a specified threshold.

11. The moving object tracking security system of claim 10, wherein the specified threshold is a distance from an indicated location.

12. The moving object tracking security system of claim 10, wherein the specified threshold is a minimum altitude.

13. A method of detecting and tracking one or more moving objects, implemented on a computing device with a processor, the method comprising: displaying a camera image that comprises indicia of one or more objects in a first graphical user interface window of a display; detecting, with the processor, a potentially moving object within the camera image; determining, with the processor, that the potentially moving object is a moving object; tracking, with the processor, the moving object, wherein tracking comprises: simultaneously showing, in a second graphical user interface window of the display, a contrast view of the moving object, wherein the contrast view comprises a plurality of rasters, each raster comprising a row of pixels corresponding to a compressed view of the displayed camera image, wherein the plurality of rasters comprises a path against a background, wherein the background comprises a first color and the path comprises a second color, and wherein the path comprises a plurality of successively plotted past locations ending with a real-time current location of the moving object; generating the plurality of rasters, wherein generating a raster comprises using the processor to successively compare a first position of the moving object in a first camera image with a second position of the moving object in a second camera image, and highlighting a pixel corresponding to a portion of the compressed view containing a detected moving object; updating the second graphical user interface after each successive comparison by adding a new raster representative of a newly detected real-time current position of the moving object in a last-taken camera image, wherein a new raster is added to the plurality of rasters such that, over time, the path grows as each new raster is added, wherein the new raster representative of the newly detected real-time current position of the moving object is determined by an algorithm, and wherein the algorithm is configured to analyze each pixel in the raster and to detect and record a change in pixel characteristics above a threshold; and wherein the first and second graphical user interface windows are provided in separate viewing windows such that they may be viewed simultaneously.

14. The method of claim 13, wherein the first and second graphical user interface windows are shown on one display device.

15. The method of claim 13, wherein the first and second graphical user interface windows are shown on multiple display devices.

16. The method of claim 13, further comprising: providing a larger representation of the moving object in a zoom window.

17. The method of claim 16, further comprising: changing a magnification level of the zoom window.

18. The method of claim 16, wherein the zoom window and the first graphical user interface window are coded such that an area of the first graphical user interface window that corresponds to the zoom window is identifiable.

19. The method of claim 13, wherein the moving object is a first moving object and the path is a first path, and wherein the method further comprises: simultaneously showing, in the second graphical user interface window, a contrast view of the first moving object as well as a second moving object, wherein the contrast view comprises the first path as well as a second path against the background, wherein the first path comprises a plurality of successively plotted past locations ending with a real-time current location of the first moving object, wherein the second path comprises a plurality of successively plotted past locations ending with a real-time current location of the second moving object, and wherein the plurality of past locations of the first and second moving objects are generated using the processor to compare a first position of each of the moving objects in the first camera image with a second position of the moving objects in a second, later-taken camera image.

20. A moving object tracking system, implemented on a computer with a processing unit, comprising: a display comprising: a first window that includes a camera image of an area presented on the display, the camera image of the area having within it a plurality of detected moving objects, wherein the camera image is periodically updated such that the first window displays a real-time current position of the plurality of detected moving objects; a second window that includes information about a past location, a real-time current location, and an estimated future location for each of the plurality of detected moving objects, wherein the information about the past locations comprises a path presented against a contrasting background, and wherein the real-time current location comprises an indicator at an end of the path; a third window that provides a zoom view of one of the plurality of detected moving objects along with an indication of the portion of the first or second window being shown in the third window; wherein the first and second windows are presented simultaneously and wherein the second window updates automatically based on a comparison of a series of changing positions of each of the plurality of detected moving objects in the camera image; and wherein the processing unit is a functional part of the system that executes computer readable instructions to generate the first, second, and third windows on the display, and wherein the processing unit is configured to receive a series of successive camera images, analyze the series of successive camera images by identifying a change in position of a moving object between two successive camera images, and generate a raster indicative of the detected motion, wherein the processing unit is configured to display the camera image in the first window of the display, and further configured to update the second window by adding the generated raster to a series of previously generated rasters.

21. The system of claim 20, wherein the second window has one dimension that corresponds to a spatial dimension of the first window.

22. The system of claim 20, further comprising: a data storage unit that stores information about the area collected by a sensor; and playback controls that allow a user to review the stored information.

23. The system of claim 20, further comprising: an interface that receives information about the area from an external data store; and playback controls that allow a user to review the received information.

24. The system of claim 20, wherein the first window and the second window are updated at the same rate.

25. The system of claim 20, wherein the first window and the second window are updated at different rates.

26. The moving object tracking system of claim 20, wherein the camera image is a panoramic view comprising multiple images joined together.
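Claims 3 and 20 recite an estimated future location derived from the series of registered past positions of a tracked object. The patent does not specify the prediction model; one common way to realize such an estimate is constant-velocity linear extrapolation over the most recent centroids, sketched below. The function name and the two-point velocity estimate are illustrative assumptions, not the claimed method.

```python
def predict_future_location(past_positions, steps_ahead=1):
    """Linearly extrapolate the next position of a tracked object.

    past_positions: list of (x, y) centroids from successive registered
    frames, oldest first. Assumes roughly constant per-frame velocity
    (a simplification; the patent does not specify a prediction model).
    """
    if len(past_positions) < 2:
        raise ValueError("need at least two past positions")
    (x0, y0), (x1, y1) = past_positions[-2], past_positions[-1]
    vx, vy = x1 - x0, y1 - y0  # per-frame velocity estimate
    return (x1 + vx * steps_ahead, y1 + vy * steps_ahead)
```

A trajectory-change notification of the kind recited in claim 4 could then be raised whenever the observed next position deviates from this prediction by more than a chosen threshold.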
Patents cited by this patent (58)
Korein, James; Nayar, Shree K.; Yaseen, II, L. Clayton; Peri, Venkata N., Adjustable imaging system with wide angle capability.
Kakou, Noritoshi; Fukuhara, Yoshio; Misawa, Masahiro, Composite camera system, zoom camera image display control method, zoom camera control method, control program, and computer readable recording medium.
Sunaga, Toshihiro, Image pickup unit having light incident side reflecting element and drive means for driving reflecting element, and apparatus having same.
Jackson, Laban Phelps; Pecoraro, Alexis S.; Hansen, Peter; Bauer, Martin L.; Martin, H. Lee, Method and apparatus for the interactive display of any portion of a spherical image.
Kaneko, Toshimitsu; Hori, Osamu; Mita, Takeshi; Yamamoto, Koji, Method of describing object region data, apparatus for generating object region data, video processing apparatus and video processing method.
Michaelson, Dave; Khatwa, Ratan; Suchodolski, Jeanne C., Method, apparatus and computer program products for alerting submersible vessels to hazardous conditions.
Au, KwongWing; Curtner, Keith L.; Bedros, Saad J., Methods for defining, detecting, analyzing, indexing and retrieving events using video image processing.
Hayashi, Yoshihiko; Fujibayashi, Keizo; Hosaka, Naoki; Sado, Tetsuo, Scanning type image pick-up apparatus and a scanning type laser beam receive apparatus.
Milnes, Kenneth A.; White, Marvin S.; Cavallaro, Richard H.; Honey, Stanley K.; Heinzmann, Fred Judson, System for determining information about a golf club and/or a golf ball.
Gibbs, Daniel P.; Kniffen, Stacy K.; Dean, Joran S.; Bailey, Weldon T.; Becker, Michael F., System for extending a field-of-view of an image acquisition device.