Vector Field SLAM is a method for localizing a mobile robot in an unknown environment from continuous signals such as WiFi or active beacons. Disclosed is a technique for localizing a robot in relatively large and/or disparate areas. This is achieved by using and managing more signal sources to cover the larger area. One feature analyzes the complexity of Vector Field SLAM with respect to area size and number of signals, and then describes an approximation that decouples the localization map in order to keep memory and run-time requirements low. A tracking method for re-localizing the robot in areas already mapped is also disclosed. This allows the robot to resume after it has been paused or kidnapped, such as being picked up and moved by a user. Embodiments of the invention can comprise commercial low-cost products, including robots for the autonomous cleaning of floors.
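The vector-field map at the heart of this approach stores expected signal values at the nodes of a grid, and the expected measurement at an arbitrary position is obtained by interpolating within the containing cell. The following is a minimal sketch of that idea, assuming a regular grid and bilinear interpolation; all names here are hypothetical illustrations, not the patented implementation:

```python
import numpy as np

def expected_signal(pos, node_values, cell_size=1.0):
    """Bilinearly interpolate the expected signal value at `pos`.

    `node_values` is a 2-D grid of signal values learned at map nodes;
    `pos` is an (x, y) position in map coordinates.  This mirrors the
    idea of a map defined over a vector field of expected measurement
    values sampled at grid nodes.
    """
    x, y = pos[0] / cell_size, pos[1] / cell_size
    i, j = int(np.floor(x)), int(np.floor(y))  # lower-left node of the cell
    fx, fy = x - i, y - j                      # fractional offsets in the cell
    v00 = node_values[i, j]
    v10 = node_values[i + 1, j]
    v01 = node_values[i, j + 1]
    v11 = node_values[i + 1, j + 1]
    return ((1 - fx) * (1 - fy) * v00 + fx * (1 - fy) * v10
            + (1 - fx) * fy * v01 + fx * fy * v11)
```

For a 2-by-2 grid whose node values rise from 0 to 1 along one axis, the interpolated value at the cell centre is 0.5, as expected.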
Representative Claims
1. A method of estimating a pose of a robot, the method comprising: computing the pose of the robot through simultaneous localization and mapping (SLAM) as the robot moves along a surface to generate one or more maps, wherein the pose comprises a position and orientation of the robot; navigating the robot such that the robot cleans the surface in a methodical manner; determining that navigation of the robot has been paused by a user-initiated pausing event, and after resuming navigation of the robot: re-localizing the robot within a map of the one or more maps; after re-localizing, returning the robot to a prior pose before resuming cleaning, wherein the prior pose is from a time prior to the user-initiated pausing event, wherein the prior pose comprises a prior position and a prior orientation; and resuming cleaning of the surface in the methodical manner.

2. The method of claim 1, further comprising resuming cleaning of the surface substantially without re-cleaning areas of the surface already cleaned prior to the user-initiated pausing event.

3. The method of claim 1, wherein simultaneous localization and mapping is performed for learning the spatial distribution of one or more continuous signals in an environment in which the robot is navigating, and wherein each of the one or more maps is defined over a vector field of expected measurement values of the one or more continuous signals for various positions along the surface.

4. The method of claim 1, wherein simultaneous localization and mapping is performed via analysis of two or more light patterns projected onto a ceiling, and wherein each of the one or more maps is defined over a vector field of light sensor measurement values expected for various positions along the surface.

5. The method of claim 4, wherein re-localizing further comprises: obtaining a current set of actually-observed light sensor measurements; comparing the actually-observed light sensor measurements to a plurality of light sensor measurements of the one or more maps corresponding to different positions within the one or more maps; and identifying a pose from among the plurality of positions of the one or more maps, wherein the pose comprises a position and orientation of the robot.

6. The method of claim 4, wherein re-localizing further comprises: obtaining a current set of actually-observed light sensor measurements; converting the current set of actually-observed light sensor measurements to converted light sensor measurements, wherein the converted light sensor measurements are independent of the orientation of the robot; comparing the converted light sensor measurements to a plurality of light sensor measurements of the one or more maps corresponding to different positions within the one or more maps; and identifying a pose from among the plurality of positions of the one or more maps, wherein the pose comprises a position and orientation of the robot.

7. The method of claim 6, wherein comparing further comprises: calculating closeness between the converted light sensor measurements and the plurality of light sensor measurements for nodes of the map; selecting a cell based on the closeness calculated for its nodes; interpolating within the cell to generate the position of the pose; and analyzing the current set of actually-observed light sensor measurements to generate an orientation of the pose.

8. The method of claim 7, wherein calculating closeness comprises calculating Mahalanobis distances.

9. The method of claim 7, wherein the identified cell corresponds to the cell with the lowest average closeness for its nodes.

10. The method of claim 7, further comprising performing a significance test associated with signal strength on the actually-observed light sensor measurements, and rejecting those measurements failing the significance test.

11. The method of claim 7, further comprising confirming the identified pose by further tracking of the pose of the robot and comparing a count of measurement outliers to a value to confirm or reject the identified pose.

12. The method of claim 7, wherein the actually-observed light sensor measurements and the plurality of light sensor measurements are based on photodiode current measurements.

13. The method of claim 6, wherein comparing further comprises: calculating closeness between the converted light sensor measurements and the plurality of light sensor measurements for nodes of the map; selecting a node based on the closeness; for a cell adjacent to the selected node, interpolating within the cell to generate the position of the pose; and analyzing the current set of actually-observed light sensor measurements to generate an orientation of the pose.

14. The method of claim 13, wherein "for a cell adjacent to the selected node" comprises for each cell adjacent to the selected node, and further comprising selecting a position from within one or more cells based on least squared error.

15. The method of claim 1, wherein the robot comprises an autonomous robotic cleaner, the method further comprising cleaning the surface while performing SLAM.

16. The method of claim 1, wherein navigating further comprises navigating using an occupancy grid map to determine where the robot should go next.

17. The method of claim 1, wherein determining that navigation of the robot has been paused further comprises determining that the robot has been lifted off of the surface, controlling the robot so that it should not be moving, and detecting motion with a gyroscope.

18. The method of claim 1, wherein determining that operation of the robot has been paused further comprises detecting user interaction with a pause function of the robot.

19. The method of claim 1, wherein the user-initiated pausing event is associated with emptying a dust bin or changing a cleaning pad.

20. An apparatus comprising: a robot; and a controller of the robot configured to: compute a pose of the robot through simultaneous localization and mapping as the robot moves along a surface to generate one or more maps, wherein the pose comprises a position and orientation of the robot; navigate the robot such that the robot cleans the surface in a methodical manner; determine that navigation of the robot has been paused by a user-initiated pausing event, and after resumption of navigation of the robot: re-localize the robot within a map of the one or more maps; return the robot to a prior pose before resumption of cleaning, wherein the prior pose is from a time prior to the user-initiated pausing event, wherein the prior pose comprises a prior position and a prior orientation; and resume cleaning of the surface in the methodical manner.

21. The apparatus of claim 20, wherein the controller is further configured to resume cleaning of the surface substantially without re-cleaning areas of the surface already cleaned prior to the user-initiated pausing event.

22. The apparatus of claim 20, wherein the controller is further configured to perform simultaneous localization and mapping for learning the spatial distribution of one or more continuous signals in an environment in which the robot is navigating, and wherein each of the one or more maps is defined over a vector field of expected measurement values of the one or more continuous signals for various positions along the surface.

23. The apparatus of claim 20, wherein the controller is further configured to perform simultaneous localization and mapping via analysis of two or more light patterns projected onto a ceiling, and wherein each of the one or more maps is defined over a vector field of light sensor measurement values expected for various positions along the surface.

24. The apparatus of claim 23, wherein to re-localize the robot, the controller is further configured to: obtain a current set of actually-observed light sensor measurements; compare the actually-observed light sensor measurements to a plurality of light sensor measurements of the one or more maps corresponding to different positions within the one or more maps; and identify a pose from among the plurality of positions of the one or more maps, wherein the pose comprises a position and orientation of the robot.

25. The apparatus of claim 23, wherein to re-localize the robot, the controller is further configured to: obtain a current set of actually-observed light sensor measurements; convert the current set of actually-observed light sensor measurements to converted light sensor measurements, wherein the converted light sensor measurements are independent of the orientation of the robot; compare the converted light sensor measurements to a plurality of light sensor measurements of the one or more maps corresponding to different positions within the one or more maps; and identify a pose from among the plurality of positions of the one or more maps, wherein the pose comprises a position and orientation of the robot.

26. The apparatus of claim 25, wherein to compare, the controller is further configured to: calculate closeness between the converted light sensor measurements and the plurality of light sensor measurements for nodes of the map; select a cell based on the closeness calculated for its nodes; interpolate within the cell to generate the position of the pose; and analyze the current set of actually-observed light sensor measurements to generate an orientation of the pose.

27. The apparatus of claim 26, wherein to calculate closeness, the controller is further configured to calculate Mahalanobis distances.

28. The apparatus of claim 26, wherein the identified cell corresponds to the cell with the lowest average closeness for its nodes.

29. The apparatus of claim 26, wherein the controller is further configured to perform a significance test associated with signal strength on the actually-observed light sensor measurements, and to reject those measurements failing the significance test.

30. The apparatus of claim 26, wherein the controller is further configured to confirm the identified pose by further tracking of the pose of the robot and to compare a count of measurement outliers to a value to confirm or reject the identified pose.

31. The apparatus of claim 26, wherein the actually-observed light sensor measurements and the plurality of light sensor measurements are based on photodiode current measurements.

32. The apparatus of claim 25, wherein to compare, the controller is further configured to: calculate closeness between the converted light sensor measurements and the plurality of light sensor measurements for nodes of the map; select a node based on the closeness; for a cell adjacent to the selected node, interpolate within the cell to generate the position of the pose; and analyze the current set of actually-observed light sensor measurements to generate an orientation of the pose.

33. The apparatus of claim 32, wherein "for a cell adjacent to the selected node" comprises for each cell adjacent to the selected node, and wherein the controller is further configured to select a position from within one or more cells based on least squared error.

34. The apparatus of claim 20, wherein the robot comprises an autonomous robotic cleaner, wherein the controller is further configured to have the robot clean a surface while the controller performs SLAM.

35. The apparatus of claim 20, wherein to navigate, the controller is further configured to navigate using an occupancy grid map to determine where the robot should go next.

36. The apparatus of claim 20, wherein to determine that navigation of the robot has been paused, the controller is further configured to determine that the robot has been lifted off of the surface, to control the robot so that it should not be moving, and to detect motion with a gyroscope.

37. The apparatus of claim 20, wherein to determine that navigation of the robot has been paused, the controller is further configured to detect user interaction with a pause function of the robot.

38. The apparatus of claim 20, wherein the user-initiated pausing event is associated with emptying a dust bin or changing a cleaning pad.
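The comparison recited in claims 7–9 (mirrored in claims 26–28) can be sketched as follows. This is a hedged illustration under simplifying assumptions, not the claimed implementation: Mahalanobis distance serves as the closeness measure (claims 8 and 27), the cell with the lowest average closeness over its four nodes is selected (claims 9 and 28), and the cell centre stands in for the claimed within-cell interpolation; all function and variable names are hypothetical.

```python
import numpy as np

def mahalanobis(z, mu, cov):
    """Mahalanobis distance between observation z and a node's stored measurement mu."""
    d = z - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

def relocalize(z, node_means, cov, cell_size=1.0):
    """Pick the map cell whose four corner nodes best match observation `z`.

    node_means: (H, W, K) array of expected measurement vectors at grid nodes.
    Returns the centre of the best cell as a coarse position estimate and the
    winning average distance.
    """
    h, w, _ = node_means.shape
    best, best_pos = np.inf, None
    for i in range(h - 1):
        for j in range(w - 1):
            nodes = [node_means[i, j], node_means[i + 1, j],
                     node_means[i, j + 1], node_means[i + 1, j + 1]]
            avg = np.mean([mahalanobis(z, mu, cov) for mu in nodes])
            if avg < best:  # lowest average closeness over the cell's nodes
                best, best_pos = avg, ((i + 0.5) * cell_size,
                                       (j + 0.5) * cell_size)
    return best_pos, best
```

A full implementation would then interpolate within the winning cell to refine the position, recover orientation from the raw (unconverted) measurements as in claim 7, and confirm the pose by continued tracking and an outlier count as in claim 11.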
Patents cited in this patent (38)
Dooley, Michael; Pirjanian, Paolo; Romanov, Nikolai; Chiu, Lihu; Di Bernardo, Enrico; Stout, Michael; Brisson, Gabriel, Application of localization, positioning and navigation systems for robotic enabled mobile products.
Ozick, Daniel N.; Okerholm, Andrea M.; Mammen, Jeffrey W.; Halloran, Michael J.; Sandin, Paul E.; Won, Chikyung, Autonomous coverage robot navigation system.
Benayad-Cherif, Faycal E. K.; Maddox, James F.; George, Robert W., II, Position locating system for a vehicle.
Goncalves, Luis Filipe Domingues; Di Bernardo, Enrico; Pirjanian, Paolo; Karlsson, L. Niklas, Systems and methods for computing a relative pose for global localization in a visual simultaneous localization and mapping system.
Goncalves, Luis Filipe Domingues; Karlsson, L. Niklas; Pirjanian, Paolo; Di Bernardo, Enrico, Systems and methods for controlling a density of visual landmarks in a visual simultaneous localization and mapping system.
Goncalves, Luis Filipe Domingues; Karlsson, L. Niklas; Pirjanian, Paolo; Di Bernardo, Enrico, Systems and methods for controlling a density of visual landmarks in a visual simultaneous localization and mapping system.
Karlsson, L. Niklas; Goncalves, Luis Filipe Domingues; Di Bernardo, Enrico; Pirjanian, Paolo, Systems and methods for correction of drift via global localization with a visual landmark.
Goncalves, Luis Filipe Domingues; Karlsson, L. Niklas; Pirjanian, Paolo; Di Bernardo, Enrico, Systems and methods for filtering potentially unreliable visual data for visual simultaneous localization and mapping.
Karlsson, L. Niklas; Pirjanian, Paolo; Goncalves, Luis Filipe Domingues; Di Bernardo, Enrico, Systems and methods for incrementally updating a pose of a mobile device calculated by visual simultaneous localization and mapping techniques.
Domingues Goncalves, Luis Filipe; Di Bernardo, Enrico; Pirjanian, Paolo; Karlsson, L. Niklas, Systems and methods for landmark generation for visual simultaneous localization and mapping.
Karlsson, L. Niklas; Pirjanian, Paolo; Goncalves, Luis Filipe Domingues; Di Bernardo, Enrico, Systems and methods for using multiple hypotheses in a visual simultaneous localization and mapping system.