A method of analyzing images over time is provided herein. The method includes: capturing a plurality of images each associated with specified objects in specified locations such that a specified area is covered; specifying regions of interest (ROIs) in each of the captured images; repeating the capturing with at least one of: a different location, a different orientation, and a different timing, such that the captured images are associated with the specified covered area; and comparing the captured images produced in the capturing with the captured images produced in the repeating of the capturing, to yield a comparison between the captured objects by comparing the specified ROIs.
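The abstract's core operation, comparing the same ROI across captures of the same area taken at different times, can be sketched in a few lines. The sketch below is illustrative only (synthetic frames and a hypothetical `roi_diff` helper), not an implementation from the patent:

```python
def roi_diff(img_a, img_b, roi):
    """Mean absolute pixel difference over a region of interest (ROI).

    Images are row-major lists of lists of 8-bit values; roi is (x, y, w, h).
    A result of 0.0 means the region is unchanged between the two captures.
    """
    x, y, w, h = roi
    total = 0
    for row in range(y, y + h):
        for col in range(x, x + w):
            total += abs(img_a[row][col] - img_b[row][col])
    return total / (w * h)

# Two captures of the same location at different times (synthetic frames).
first = [[0] * 100 for _ in range(100)]
later = [row[:] for row in first]
for row in range(20, 40):           # an object changed inside this region
    for col in range(20, 40):
        later[row][col] = 255

print(roi_diff(first, later, (60, 60, 20, 20)))  # 0.0   (ROI unchanged)
print(roi_diff(first, later, (20, 20, 20, 20)))  # 255.0 (ROI covers the change)
```

Real ROIs would be cropped from camera frames; the comparison metric (here, mean absolute difference) is a stand-in for whatever object-level comparison the method applies.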
Representative Claims
1. A system comprising: a plurality of image capture devices, each one of the plurality of image capture devices having an image capturing unit, a positioning unit, and a processing unit, wherein one or more applications are executed on each of the plurality of image capture devices and wherein each one of the plurality of image capture devices is mounted to, or integrated in, a different one of a plurality of vehicles, wherein each one of the plurality of image capture devices captures a sequence of images and extracts therefrom information about a plurality of surface objects located within a distance from a first vehicle in real-time, where said first vehicle is one of the plurality of vehicles, and wherein, for each image capture device, the one or more applications executed on that image capture device instructs the respective processing unit to derive location data related to the plurality of surface objects based on positioning measurements derived from the positioning unit of that image capture device; and a server, connected to each one of the one or more applications via a wireless network, where the server receives the information and the location data of the plurality of surface objects transmitted by each one of the one or more applications, and generates a report in real-time, retrievable by a remote device in real-time, where said report is created according to an analysis of the information and the location data from the one or more applications, the report including a subset of the information and location data of said plurality of surface objects located within said distance from said first vehicle, received from the one or more applications, wherein the subset of information and location data belongs to one of a plurality of different categories, and wherein said report includes a status of said plurality of surface objects belonging to said one of said plurality of different categories.

2.
The system according to claim 1, wherein, for each of the plurality of image capture devices, the processing unit of that image capture device is further configured to apply a decision function to the captured images and to momentary kinetics parameters, to yield an analysis of a risk of a collision between the first vehicle and a second vehicle, wherein the image capture device further comprises a front camera to capture images of a driver of the first vehicle, wherein the processing unit is adapted to process the images of the driver and to detect, accordingly, physical and cognitive conditions of the driver, and wherein the processing unit is further configured to estimate the risk based on the physical and cognitive conditions and instruct a display device included in or communicatively coupled to the image capture device to selectively display an alert based on the estimated risk.

3. The system according to claim 1, wherein, for each of the plurality of image capture devices, the processing unit of that image capture device is further configured to apply a decision function to the captured images and to momentary kinetics parameters, to yield an analysis of a risk of a collision between the first vehicle and a second vehicle, and wherein the processing unit is adapted to process the sequence of images and determine, accordingly, conditions of a road, wherein the processing unit is further configured to apply the decision function to the determined conditions of the road, to yield an improved estimation of the risk of collision.

4.
The system according to claim 1, wherein, for each of the plurality of image capture devices, the processing unit of that image capture device is further configured to apply a decision function to the captured images and to momentary kinetics parameters, to yield an analysis of a risk of a collision between the first vehicle and a second vehicle, wherein the processing unit is adapted to process the sequence of images and detect, accordingly, pedestrians, and wherein the processing unit is further configured to apply the decision function to the detected pedestrians to yield an updated estimation of the risk of collision.

5. The system according to claim 1, wherein at least one of the plurality of image capture devices is further configured to consume electricity from a vehicle-mounted power source.

6. The system according to claim 1, wherein at least one of the plurality of image capture devices is mounted in a quick mount attached to a surface within one of the plurality of vehicles, the at least one quick mount unit comprising a quick release cradle on a first end and a suction cup at a second end.

7. The system according to claim 1, wherein said report maps vacant parking places detected in real-time by said extracting.

8. The system according to claim 1, wherein said report maps places in a road that are candidates to be fixed.

9. The system according to claim 1, wherein said report maps businesses on a street.

10. The system according to claim 1, wherein said report maps overall status of cars in a certain region.

11. The system according to claim 1, wherein said report maps status of plants in a city.

12. The system according to claim 1, wherein said report includes a map of a city that maps roads and buildings.

13. The system according to claim 1, wherein said report maps vehicles for sale within a range.

14.
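Claims 2 through 4 recite a decision function that combines momentary kinetics parameters with image-derived factors (driver condition, road condition, detected pedestrians) to estimate collision risk. As a rough illustration of that combination, with all thresholds and weights invented here rather than taken from the claims:

```python
def collision_risk(distance_m, closing_speed_mps, road_condition="dry",
                   pedestrians_detected=False):
    """Toy decision function in the spirit of claims 2-4: combine momentary
    kinetics (distance to the second vehicle, closing speed) with
    image-derived factors into a 0..1 risk score.  All numeric constants
    are illustrative, not from the patent."""
    if closing_speed_mps <= 0:                 # not closing on the vehicle ahead
        return 0.0
    time_to_collision = distance_m / closing_speed_mps
    risk = min(1.0, 2.0 / time_to_collision)   # shorter TTC -> higher risk
    if road_condition == "wet":                # longer braking distance
        risk = min(1.0, risk * 1.5)
    if pedestrians_detected:                   # updated estimation per claim 4
        risk = min(1.0, risk * 1.25)
    return round(risk, 3)

print(collision_risk(40.0, 5.0))                              # 0.25  (TTC = 8 s)
print(collision_risk(40.0, 5.0, road_condition="wet"))        # 0.375
print(collision_risk(10.0, 10.0, pedestrians_detected=True))  # 1.0
```

A deployed system would derive the kinetics from vehicle sensors and the image factors from the captured sequence; this only shows how the separate inputs can feed one decision function.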
A method comprising: executing at least one smart phone application installed in each of a plurality of smart phones, each having an image capturing unit, a positioning unit, and a processing unit, wherein all of the units are physically packed within a respective smart phone and wherein each of the plurality of smart phones is housed in or mounted to a different one of a plurality of vehicles; capturing a sequence of images, by each of the plurality of smart phones; extracting from the sequence of images information about a plurality of surface objects located within a distance from a first vehicle in real time, where said first vehicle is one of the plurality of vehicles; deriving location data related to the plurality of surface objects based on positioning measurements derived from the positioning unit of each of said plurality of smart phones; connecting to a remote server via a wireless network; and transmitting the information and the location data of the plurality of surface objects to the remote server, wherein the server generates a report, in real-time, accessible to each of the plurality of smart phones via the wireless network in real-time, said report comprising a map of a city according to an analysis of the information and the location data from the at least one smart phone application, the report including a subset of the information and location data of said plurality of surface objects located within said distance from said first vehicle, wherein the subset of information and location data belongs to one of a plurality of different categories, and wherein said report includes a status of said plurality of surface objects belonging to said one of said plurality of different categories.

15.
The method according to claim 14, further comprising applying a decision function to the captured images and to momentary kinetics parameters to yield an analysis of a risk of a collision between the first vehicle and a second vehicle, determining physical and cognitive conditions of a driver of the first vehicle, and applying the decision function to the determined driver conditions to yield an updated estimation of the risk of collision.

16. The method according to claim 14, further comprising applying a decision function to the captured images and to momentary kinetics parameters to yield an analysis of a risk of a collision between the first vehicle and a second vehicle, processing the sequence of images and determining, accordingly, conditions of a road, and applying the decision function to the determined conditions of the road to yield an updated estimation of the risk of collision.

17. The method according to claim 14, further comprising applying a decision function to the captured images and to momentary kinetics parameters to yield an analysis of a risk of a collision between the first vehicle and a second vehicle, processing the sequence of images and detecting, accordingly, pedestrians on the captured images, and applying the decision function to the detected pedestrians on the captured images to yield an updated estimation of the risk of collision.

18. The method according to claim 14, wherein the at least one smart phone is further configured to consume electricity from a vehicle-mounted power source.

19.
An image capture device mounted to or integrated in a first vehicle of a plurality of vehicles, the image capture device comprising: an image capturing unit; a positioning unit; and a processor, wherein the processor executes one or more applications that, when executed, cause the image capture device to: capture a sequence of images and to extract therefrom information about a plurality of surface objects located within a distance from the first vehicle of the plurality of vehicles, in real time, derive location data related to the plurality of surface objects based on positioning measurements received from the positioning unit, connect to a remote server via a wireless network, transmit the information and the location data of the plurality of surface objects to the remote server for generation of a real-time report, and access and display the report received in real-time from the remote server, wherein the report comprises a map of a city generated at the remote server according to an analysis of the information and the location data received by the remote server from the image capture device and information and location data received by the remote server from a plurality of other image capture devices, each mounted to or integrated in a different one of the plurality of vehicles, and wherein the report includes a subset of the information and location data of said plurality of surface objects located within said distance from said first vehicle, the subset of information and location data belonging to one of a plurality of different categories, and wherein said report includes a status of said plurality of surface objects belonging to said one of said plurality of different categories.

20. The image capture device of claim 19, wherein the processor is further configured to execute the one or more applications to generate an alert responsive to the report based on a location of the first vehicle.
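The common thread of the three independent claims is a server that collects surface-object detections with location data from many vehicle-mounted devices and builds a real-time report restricted to one category of object (vacant parking, road defects, and so on, per claims 7 through 13). A minimal sketch of that server-side aggregation step, with all field names and categories chosen for illustration rather than taken from the patent:

```python
from collections import defaultdict

def make_detection(category, lat, lon, status):
    """One detection as a device might transmit it: the kind of surface
    object, where it was seen, and its observed status."""
    return {"category": category, "lat": lat, "lon": lon, "status": status}

def build_report(detections, category):
    """Keep only the subset of detections belonging to one category and
    summarize their statuses, mirroring the report of claims 1, 14, 19."""
    subset = [d for d in detections if d["category"] == category]
    status_counts = defaultdict(int)
    for d in subset:
        status_counts[d["status"]] += 1
    return {
        "category": category,
        "objects": [(d["lat"], d["lon"], d["status"]) for d in subset],
        "status_counts": dict(status_counts),
    }

# Detections received from several devices over the wireless network.
detections = [
    make_detection("parking", 32.07, 34.78, "vacant"),
    make_detection("parking", 32.08, 34.79, "occupied"),
    make_detection("road", 32.07, 34.77, "pothole"),
]
report = build_report(detections, "parking")
print(report["status_counts"])  # {'vacant': 1, 'occupied': 1}
```

The claimed system would additionally plot the `objects` coordinates onto a city map and push the report back to the devices; this sketch covers only the category filtering and status summary.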