A device to detect occupancy of an environment includes a sensor to capture video frames from a location in the environment. The device may compare rules with data using a rules engine. The microcontroller may include a processor and memory to produce results indicative of a condition of the environment. The device may also include an interface through which the data is accessible. The device may generate results respective to the location in the environment. The microcontroller may be in communication with a network. The video frames may be concatenated to create an overview to display the video frames substantially seamlessly respective to the location in which the sensor is positioned. The overview may be viewable using the interface and the results of the analysis performed by the rules engine may be accessible using the interface.
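The analysis described above (comparing a precedent frame with a subsequent frame to gauge a likelihood of disparity, then comparing the resulting detected-object image against known image data to gauge a likelihood of similarity) can be sketched in Python. This is a minimal illustration, not the patent's implementation: the function names and the pixel-difference threshold are assumptions, and frames are simplified to flat lists of grayscale pixel values.

```python
# Illustrative sketch of the claimed frame-comparison analysis.
# PIXEL_DELTA and all function names are hypothetical, not from the patent.

PIXEL_DELTA = 25  # per-pixel change needed to count a pixel as "different"

def likelihood_of_disparity(precedent, subsequent):
    """Fraction of pixels that changed between a precedent and subsequent frame."""
    changed = sum(1 for a, b in zip(precedent, subsequent)
                  if abs(a - b) > PIXEL_DELTA)
    return changed / len(precedent)

def detected_object_image(precedent, subsequent):
    """Binary mask of changed pixels: the 'detected object image'."""
    return [1 if abs(a - b) > PIXEL_DELTA else 0
            for a, b in zip(precedent, subsequent)]

def likelihood_of_similarity(detected, known):
    """Overlap ratio between the detected mask and a known object's mask."""
    matches = sum(1 for d, k in zip(detected, known) if d == k == 1)
    union = sum(detected) + sum(known) - matches
    return matches / union if union else 0.0

# A 4-pixel "frame": one pixel brightens sharply between captures.
before = [10, 10, 10, 10]
after = [10, 200, 10, 10]
print(likelihood_of_disparity(before, after))  # 0.25
```

A high disparity score would indicate occupancy under claim 8's logic; the similarity score against known image data distinguishes a recognized object from arbitrary change.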
Representative Claim
1. A device to detect occupancy of an environment, the device comprising: a sensor to capture images from a location in the environment, the images being transmittable as data; rules comparable with the data using a rules engine operable by a microcontroller with a processor and memory to produce results indicative of a condition of the environment, the rules being included in the memory; and an interface through which the data is accessible; wherein at least part of the rules define an analysis of the data including: detecting a precedent image, detecting a subsequent image, comparing the precedent image with the subsequent image to detect a likelihood of disparity between the images, creating a detected object image based on the disparity between the images, comparing the detected object image to known image data acquired from one or more known physical objects to detect a likelihood of similarity between the images, generating the results of the analysis respective to the likelihood of similarity, and generating the results of the analysis respective to the likelihood of disparity, the results being storable in the memory; wherein the results are generated respective to the location in the environment; wherein a plurality of locations are includable in the environment; wherein the microcontroller is in communication with a network using a network interface; wherein a plurality of sensors are locatable in the plurality of locations and are adapted to intercommunicate with the microcontroller through the network; wherein the images captured at each of the plurality of locations are concatenated to create an overview; wherein the overview displays the images captured from the plurality of locations substantially seamlessly respective to the location in which the sensor is positioned, the overview being viewable using the interface; wherein the results of the analysis performed by the rules engine are accessible using the interface; wherein the results of the analysis are includable in the overview; and wherein the one or more known physical objects are definable in the rules by placing the one or more known physical objects in the location in the environment and capturing the known image data of the one or more known physical objects using at least one of the plurality of sensors.

2. A device according to claim 1 further comprising a centralized computing device to receive data from the plurality of sensors and operate at least part of the rules engine; wherein the centralized computing device includes a central processor, central memory, and a central network interface to be in communication with the sensors through the network; wherein the results of the analysis performed by the centralized computing device are accessible using the interface.

3. A device according to claim 2 wherein the centralized computing device undergoes an initialization operation to detect the sensor at each of the plurality of locations in the environment and to define at least part of the rules relative to the locations in the environment that includes the sensor.

4. A device according to claim 3 wherein the sensor is coupled to the microcontroller; and wherein the microcontroller is in communication with the centralized computing device through the network to analyze the data received by the sensor.

5. A device according to claim 1 wherein each of the plurality of sensors are positioned at locations throughout the environment in an approximately uniform manner; wherein the images are captured by the sensors in the locations in an approximately uniform manner; and wherein the images are concatenated to create the overview that includes substantially all of the locations in the environment substantially seamlessly.

6.
A device according to claim 5 wherein the locations throughout the environment are configured relating to an approximately grid based pattern; and wherein the sensors are positionable at the locations to capture the images relative to the approximately grid based pattern.

7. A device according to claim 5 wherein the sensors are positioned to capture the images from the environment using similar viewing angles relative to the environment.

8. A device according to claim 1 wherein a high likelihood of disparity detected in the location is indicative of occupancy of the location; and wherein a low likelihood of disparity in the location is indicative of no occupancy of the location.

9. A device according to claim 1 wherein the plurality of sensors intercommunicate through the network using mesh networking.

10. A device according to claim 1 wherein the sensor includes a camera.

11. A device according to claim 10 wherein the image is a video frame captured by the camera.

12. A device according to claim 1 wherein an event is definable in the rules to relate to the condition being detected in the environment; wherein the event is associable with an action; and wherein the action may occur subsequent to detecting the event.

13. A device according to claim 12 wherein the action includes generating an alert.

14. A device according to claim 1 wherein each of the images are compressible by the microcontroller; and wherein the images that are compressed are transmitted through the network.

15. A device according to claim 1 wherein the images are concatenated to create a video feed; and wherein the video feed is accessible using the interface.

16. A device according to claim 1 wherein an image is receivable from the plurality of locations; wherein the image received from each of the plurality of locations is concatenated to create a video overview; and wherein the video overview is accessible using the interface.

17.
A device according to claim 16 wherein the results of the analysis performed by the rules engine at each of the locations is included in the video overview and accessible using the interface.

18. A device according to claim 1 wherein a portion of the overview is viewable using the interface as a field of view; wherein a wide field of view includes substantially all locations; wherein a narrow field of view includes one location; wherein the field of view is scalable between the wide field of view and the narrow field of view.

19. A device according to claim 1 wherein supplemental data is viewable using the interface, the supplemental data being accessible through the network.

20. A device according to claim 19 wherein the supplemental data is included with the overview substantially seamlessly.

21. A device according to claim 1 wherein the condition in the environment is motion.

22. A device according to claim 1 wherein an object occupying at least part of the environment is detectable by the sensor.

23. A device according to claim 1 wherein the object is detectable by at least part of the plurality of sensors to create a stereoscopic perspective; and wherein a parallax among the images in the stereoscopic perspective is used to calculate depth in a three-dimensional space.

24. A device according to claim 1 wherein the sensor, the microcontroller and the interface are includable in a luminaire.

25.
A device to detect occupancy of an environment, the device comprising: a plurality of sensors that include a camera and that are locatable to capture video frames from a plurality of locations in the environment, the video frames being transmittable as data; rules comparable with the data using a rules engine operable by a microcontroller with a processor and memory to produce results indicative of a condition of the environment, the rules being included in the memory; and an interface through which the data is accessible; wherein at least part of the rules define an analysis of the data including: detecting a precedent video frame, detecting a subsequent video frame, comparing the precedent video frame with the subsequent video frame to detect a likelihood of disparity between the video frames, creating a detected object image based on the disparity between the video frames, comparing the detected object image to known image data acquired from one or more known physical objects to detect a likelihood of similarity between the images, generating the results of the analysis respective to the likelihood of similarity, and generating the results of the analysis respective to the likelihood of disparity, the results being storable in the memory; wherein the results are generated respective to the location in the environment; wherein a plurality of locations are includable in the environment; wherein the microcontroller is in communication with a network using a network interface; wherein a plurality of sensors are locatable in the plurality of locations and are adapted to intercommunicate with the microcontroller through the network; wherein the video frames captured at each of the plurality of locations are concatenated to create an overview; wherein the overview displays the video frames captured from the plurality of locations substantially seamlessly respective to the location in which the sensor is positioned, the overview being viewable using the interface; wherein each of the video frames are compressible by the microcontroller to be transmittable through the network; wherein the object is detectable by at least part of the plurality of sensors to create a stereoscopic perspective; and wherein the one or more known physical objects are definable in the rules by placing the one or more known physical objects in the location in the environment and capturing the known image data of the one or more known physical objects using at least one of the plurality of sensors.

26. A device according to claim 25 further comprising a centralized computing device to receive data from the plurality of sensors and operate at least part of the rules engine; wherein the centralized computing device includes a central processor, central memory, and a central network interface to be in communication with the sensors through the network; wherein the results of the analysis performed by the centralized computing device are accessible using the interface.

27. A device according to claim 26 wherein the centralized computing device undergoes an initialization operation to detect the sensor at each of the plurality of locations in the environment and to define at least part of the rules relative to the locations in the environment that includes the sensor.

28. A device according to claim 27 wherein the sensor is coupled to the microcontroller; and wherein the microcontroller is in communication with the centralized computing device through the network to analyze the data received by the sensor.

29. A device according to claim 25 wherein each of the plurality of sensors are positioned at locations throughout the environment in an approximately uniform manner; wherein the video frames are captured by the sensors in the locations in an approximately uniform manner; and wherein the video frames are concatenated to create the overview that includes substantially all of the locations in the environment substantially seamlessly.

30.
A device according to claim 29 wherein the locations throughout the environment are configured relating to an approximately grid based pattern; and wherein the sensors are positionable at the locations to capture the video frames relative to the approximately grid based pattern.

31. A device according to claim 29 wherein the sensors are positioned to capture the video frames from the environment using similar viewing angles relative to the environment.

32. A device according to claim 25 wherein a high likelihood of disparity detected in the location is indicative of occupancy of the location; and wherein a low likelihood of disparity in the location is indicative of no occupancy of the location.

33. A device according to claim 25 wherein the plurality of sensors intercommunicate through the network using mesh networking.

34. A device according to claim 25 wherein an event is definable in the rules to relate to the condition being detected in the environment; wherein the event is associable with an action; wherein the action may occur subsequent to detecting the event; and wherein the action includes generating an alert.

35. A device according to claim 25 wherein a video feed is receivable from the plurality of locations; wherein the video feed received from each of the plurality of locations is concatenated to create a video overview; and wherein the video overview is accessible using the interface.

36. A device according to claim 35 wherein the results of the analysis performed by the rules engine at each of the locations is included in the video overview and accessible using the interface.

37. A device according to claim 25 wherein a portion of the overview is viewable using the interface as a field of view; wherein a wide field of view includes substantially all locations; wherein a narrow field of view includes less than all locations; wherein the field of view is scalable between the wide field of view and the narrow field of view.

38.
A device according to claim 25 wherein supplemental data is viewable using the interface, the supplemental data being accessible through the network; and wherein the supplemental data is included with the overview substantially seamlessly.

39. A device according to claim 25 wherein the condition in the environment is motion; and wherein an object occupying at least part of the environment is detectable by the sensor.

40. A device according to claim 25 wherein a parallax among the video frames in the stereoscopic perspective is used to calculate depth in a three-dimensional space.

41. A device according to claim 25 wherein the results of the analysis performed by the rules engine are includable in the overview and are accessible using the interface.

42. A device according to claim 25 wherein the sensor, the microcontroller and the interface are includable in a luminaire.

43. A method of detecting occupancy of an environment, the method comprising: capturing video frames from a location in the environment using sensors, the video frames being transmittable as data; comparing the data to rules using a rules engine operable by a microcontroller with a processor and memory to produce results indicative of a condition of the environment; analyzing the data by: detecting a precedent video frame, detecting a subsequent video frame, comparing the precedent video frame with the subsequent video frame to detect a likelihood of disparity between the video frames, creating a detected object image based on the disparity between the video frames, comparing the detected object image to known image data acquired from one or more known physical objects to detect a likelihood of similarity between the images, generating the results of the analysis respective to the likelihood of similarity, and generating the results of the analysis respective to the likelihood of disparity, the results being storable in the memory; concatenating the video frames captured at each of the plurality of locations to create an overview; displaying the video frames captured from the plurality of locations substantially seamlessly respective to the location in which the sensors are positioned, the overview being viewable using an interface; providing access to the results of the analysis using the interface; and including the results of the analysis in the overview; wherein the one or more known physical objects are definable in the rules by placing the one or more known physical objects in the location in the environment and capturing the known image data of the one or more known physical objects using at least one of the plurality of sensors.

44. A method according to claim 43 further comprising receiving data from the sensors on a centralized computing device; initializing the centralized computing device to detect the sensors at each of the plurality of locations in the environment and to define at least part of the rules relative to the locations in the environment that includes the sensor.

45. A method according to claim 43 further comprising positioning each of the sensors at locations throughout the environment in an approximately uniform manner; capturing the video frames in the locations in an approximately uniform manner; and concatenating the video frames to create the overview that includes substantially all of the locations in the environment substantially seamlessly.

46. A method according to claim 45 further comprising configuring the locations throughout the environment relating to an approximately grid based pattern; and positioning the sensors at the locations to capture the video frames relative to the approximately grid based pattern.

47. A method according to claim 45 further comprising positioning the sensors to capture the video frames from the environment using similar viewing angles relative to the environment.

48.
A method according to claim 43 wherein a high likelihood of disparity detected in the location is indicative of occupancy of the location; and wherein a low likelihood of disparity in the location is indicative of no occupancy of the location.

49. A method according to claim 43 wherein the sensor includes a camera.

50. A method according to claim 43 further comprising defining an event in the rules to relate to the condition being detected in the environment; wherein the event is associable with an action; wherein the action may occur subsequent to detecting the event; and wherein the action includes generating an alert.

51. A method according to claim 43 further comprising compressing each of the video frames; and transmitting the video frames that are compressed through the network.

52. A method according to claim 43 wherein a video feed is receivable from the plurality of locations; wherein the video feed received from each of the plurality of locations is concatenated to create a video overview; and wherein the video overview is accessible using the interface.

53. A method according to claim 52 further comprising including the results of the analysis performed by the rules engine at each of the locations in the video overview to be accessible using the interface.

54. A method according to claim 43 wherein a portion of the overview is viewable using the interface as a field of view; wherein a wide field of view includes substantially all locations; wherein a narrow field of view includes one location; wherein the field of view is scalable between the wide field of view and the narrow field of view.

55. A method according to claim 43 wherein supplemental data is viewable using the interface, the supplemental data being accessible through the network; wherein the supplemental data is included with the overview substantially seamlessly.

56.
A method according to claim 43 wherein the object is detectable by at least part of the plurality of sensors to create a stereoscopic perspective; and wherein a parallax among the video frames in the stereoscopic perspective is used to calculate depth in a three-dimensional space.
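The depth calculation from parallax referenced in the claims can be illustrated with the standard pinhole stereo relation, in which depth is the focal length times the sensor baseline divided by the pixel disparity between the two views. This is a general sketch under textbook assumptions, not the patent's specified method, and all numeric values are hypothetical.

```python
# Illustrative depth-from-parallax calculation for a stereoscopic
# perspective formed by two sensors a known distance apart.
# Assumes a rectified pinhole camera model; values are examples only.

def depth_from_parallax(focal_length_px, baseline_m, disparity_px):
    """Distance to an object (meters) from its pixel shift between two views."""
    if disparity_px <= 0:
        raise ValueError("object must appear shifted between the two views")
    return focal_length_px * baseline_m / disparity_px

# Two ceiling sensors 0.5 m apart with an 800 px focal length: an object
# shifted 40 px between their video frames is 10 m away.
print(depth_from_parallax(800, 0.5, 40))  # 10.0
```

Note that depth is inversely proportional to disparity: halving the pixel shift doubles the computed distance, which is why widely spaced sensors resolve distant objects better.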