[US Patent]
Methods and devices for determining movements of an object in an environment
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06F-017/00
G08G-001/16
G01S-013/87
G01S-013/93
G01S-013/86
G01S-013/72
Application Number
US-0090308 (2013-11-26)
Registration Number
US-8989944 (2015-03-24)
Inventors / Address
Agarwal, Pratik
Zhu, Jiajun
Dolgov, Dmitri
Applicant / Address
Google Inc.
Agent / Address
McDonnell Boehnen Hulbert & Berghoff LLP
Citation Information
Cited by: 12
Cited patents: 6
Abstract
An example method may include receiving a first set of points based on detection of an environment of an autonomous vehicle during a first time period, selecting a plurality of points from the first set of points that form a first point cloud representing an object in the environment, receiving a second set of points based on detection of the environment during a second time period which is after the first period, selecting a plurality of points from the second set of points that form a second point cloud representing the object in the environment, determining a transformation between the selected points from the first set of points and the selected points from the second set of points, using the transformation to determine a velocity of the object, and providing instructions to control the autonomous vehicle based at least in part on the velocity of the object.
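The core of the method is recovering a rigid-body transformation between two point clouds of the same object and dividing the resulting translation by the elapsed time between scans. Below is a minimal Python sketch of that idea, assuming point correspondences between the two clouds are already established (the patent leaves the registration method open); the function names are illustrative, not taken from the patent.

```python
import numpy as np

def rigid_transform(p, q):
    """Least-squares rotation R and translation t mapping points p onto q
    (Kabsch algorithm). p and q are (N, 3) arrays of corresponding points."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    h = (p - cp).T @ (q - cq)                  # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cq - r @ cp
    return r, t

def object_velocity(cloud_t0, cloud_t1, dt):
    """Velocity estimate: translation between two corresponding point
    clouds captured dt seconds apart, divided by dt."""
    _, t = rigid_transform(cloud_t0, cloud_t1)
    return t / dt
```

For instance, a cloud translated by 1 m along x between scans taken 0.1 s apart yields a velocity estimate of roughly (10, 0, 0) m/s.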
Representative Claims
1. A computer-implemented method, comprising: receiving a first set of points based on detection of an environment of an autonomous vehicle by one or more sensors on the autonomous vehicle during a first time period; selecting a plurality of points from the first set of points that form a first point cloud representing an object in the environment; receiving a second set of points based on detection of the environment by the one or more sensors on the autonomous vehicle during a second time period, wherein the second time period is after the first period; selecting a plurality of points from the second set of points that form a second point cloud representing the object in the environment; determining, by a control system of the autonomous vehicle, a transformation between the selected points from the first set of points that form the first point cloud representing the object in the environment and the selected points from the second set of points that form the second point cloud representing the object in the environment; using the transformation to determine a velocity of the object; and controlling the autonomous vehicle based at least in part on the velocity of the object.

2. The method of claim 1, wherein using the transformation to determine the velocity of the object comprises determining at least one of a position and a heading of the object.

3. The method of claim 1, further comprising, before selecting the plurality of points from the first set of points, filtering the first set of points to remove one or more points generated based on detection of a ground surface in the environment.

4. The method of claim 1, wherein selecting the plurality of points from the first set of points that form the first point cloud representing the object comprises selecting points such that adjacent selected points have a pairwise distance that is less than a threshold distance.

5. The method of claim 1, wherein determining the transformation comprises determining a translation and a rotation between the selected points from the first set of points and the selected points from the second set of points so as to minimize a distance between selected points from the first set of points and associated points from the second set of points.

6. The method of claim 1, further comprising providing instructions to control the autonomous vehicle so as to avoid the object.

7. The method of claim 6, wherein providing instructions to control the autonomous vehicle so as to avoid the object comprises determining a plurality of contour outlines representing the object based on at least one of the first point cloud and the second point cloud.

8. The method of claim 1, further comprising: receiving a third set of points based on detection of the environment during a third time period, wherein the third time period is after the second time period; selecting a plurality of points from the third set of points that form a third point cloud representing the object in the environment; determining a transformation between the selected points from the second set of points and the selected points from the third set of points; and using the transformation between the selected points from the first set of points and the selected points from the second set of points and the transformation between the selected points from the second set of points and the selected points from the third set of points to determine an acceleration of the object.

9. The method of claim 1, further comprising: selecting a plurality of points from the first set of points that form point cloud representations of one or more additional objects in the environment; selecting a plurality of points from the second set of points that form point cloud representations of the one or more additional objects in the environment; determining a transformation between the selected points from the first set of points and the selected points from the second set of points for the one or more additional objects; and using the transformation to determine a velocity of the one or more additional objects.

10. An autonomous vehicle, comprising: one or more depth sensors; and a control system configured to: receive a first set of points based on detection of an environment by the one or more depth sensors on the autonomous vehicle during a first time period; select a plurality of points from the first set of points that form a first point cloud representing an object in the environment; receive a second set of points based on detection of the environment by the one or more depth sensors during a second time period, wherein the second time period is after the first time period; select a plurality of points from the second set of points that form a second point cloud representing the object in the environment; determine a transformation between the selected points from the first set of points that form the first point cloud representing the object in the environment and the selected points from the second set of points that form the second point cloud representing the object in the environment; use the transformation to determine a velocity of the object; and control the autonomous vehicle based at least in part on the velocity of the object.

11. The vehicle of claim 10, wherein the control system is configured to determine the velocity of the object by determining at least one of a position and a heading of the object.

12. The vehicle of claim 10, wherein the control system is further configured to, before selecting the plurality of points from the first set of points, filter the first set of points to remove one or more points generated based on detection of a ground surface in the environment.

13. The vehicle of claim 10, wherein the control system is further configured to provide instructions to control the vehicle so as to avoid the object.

14. The vehicle of claim 10, wherein the control system is further configured to: receive the first set of points in the environment from one of a plurality of depth sensors on the vehicle; and provide instructions to control the vehicle so as to avoid the object based on sensor data from any one of the plurality of depth sensors.

15. The vehicle of claim 10, wherein the control system is further configured to: receive a third set of points based on detection of the environment during a third time period by the depth sensor, wherein the third time period is after the second time period; select a plurality of points from the third set of points that form a third point cloud representing the object in the environment; determine a transformation between the selected points from the second set of points and the selected points from the third set of points; and use the transformation between the selected points from the first set of points and the selected points from the second set of points and the transformation between the selected points from the second set of points and the selected points from the third set of points to determine an acceleration of the object.

16. The vehicle of claim 10, wherein the control system is further configured to: select a plurality of points from the first set of points that form point cloud representations of one or more additional objects in the environment; select a plurality of points from the second set of points that form point cloud representations of the one or more additional objects in the environment; determine a transformation between the selected points from the first set of points and the selected points from the second set of points for the one or more additional objects; and use the transformation to determine a velocity of the one or more additional objects.

17. A non-transitory computer readable medium having stored therein instructions, that when executed by a control system of an autonomous vehicle, cause the control system to perform functions comprising: receiving a first set of points based on detection of an environment of the autonomous vehicle by one or more sensors on the autonomous vehicle during a first time period; selecting a plurality of points from the first set of points that form a first point cloud representing an object in the environment; receiving a second set of points based on detection of the environment by the one or more sensors on the autonomous vehicle during a second time period, wherein the second time period is after the first period; selecting a plurality of points from the second set of points that form a second point cloud representing the object in the environment; determining a transformation between the selected points from the first set of points that form the first point cloud representing the object in the environment and the selected points from the second set of points that form the second point cloud representing the object in the environment; using the transformation to determine a velocity of the object; and controlling the autonomous vehicle based at least in part on the velocity of the object.

18. The non-transitory computer readable medium of claim 17, wherein using the transformation to determine the velocity of the object comprises determining at least one of a position and a heading of the object.

19. The non-transitory computer readable medium of claim 17, further comprising providing instructions, that when executed by the computing system, cause the computing system to perform a function comprising: controlling the autonomous vehicle so as to avoid the object.
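Claims 3 and 4 describe two preprocessing steps: removing ground returns before segmentation, and grouping points so that adjacent members of a cluster lie within a threshold pairwise distance. A rough sketch follows, assuming a flat ground plane at a fixed height (the patent does not specify the ground model) and using brute-force single-linkage grouping; none of these names come from the patent.

```python
import numpy as np

def filter_ground(points, ground_z=0.2):
    """Drop returns near the ground (claim 3); a fixed-height cut is a
    crude stand-in for whatever ground model the vehicle actually uses."""
    return points[points[:, 2] > ground_z]

def cluster_by_distance(points, threshold=0.5):
    """Group points so adjacent members of a cluster are closer than
    `threshold` (claim 4); union-find over all close pairs."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):          # O(n^2); a k-d tree scales better
            if np.linalg.norm(points[i] - points[j]) < threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [points[idx] for idx in groups.values()]
```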
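Claim 5 determines a translation and rotation that minimize the distance between selected points and their associated counterparts, which matches the classic iterative closest point (ICP) formulation. A minimal brute-force ICP loop is sketched below, reusing rigid_transform() from the sketch after the abstract; this is one standard way to realize the claim, not necessarily the patentee's implementation.

```python
import numpy as np

def icp(source, target, iters=20):
    """Alternate nearest-neighbor association with a closed-form rigid
    fit (claim 5). Returns the accumulated rotation and translation."""
    r_total, t_total = np.eye(3), np.zeros(3)
    p = source.copy()
    for _ in range(iters):
        # associate each source point with its nearest target point
        dists = np.linalg.norm(p[:, None, :] - target[None, :, :], axis=2)
        q = target[dists.argmin(axis=1)]
        r, t = rigid_transform(p, q)       # Kabsch step from earlier sketch
        p = p @ r.T + t
        # compose: the new transform is applied after the accumulated one
        r_total, t_total = r @ r_total, r @ t_total + t
    return r_total, t_total
```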
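Claims 8 and 15 extend the scheme to a third scan: two successive transformations give two velocity estimates, and their finite difference approximates acceleration. A sketch, with the velocities taken as inputs (e.g., from object_velocity() above):

```python
def object_acceleration(v01, v12, dt01, dt12):
    """Finite-difference acceleration from velocity v01 (between scans 1
    and 2) and v12 (between scans 2 and 3), spaced dt01 and dt12 apart."""
    return (v12 - v01) / (0.5 * (dt01 + dt12))
```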
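Claim 7 plans avoidance around "contour outlines" derived from the point clouds. One plausible reading, not spelled out in the claim, is projecting the object's cloud onto the ground plane and taking its 2-D hull:

```python
import numpy as np
from scipy.spatial import ConvexHull

def contour_outline(cloud):
    """2-D footprint of an object for avoidance planning: project the
    cloud onto the ground plane and return its convex hull vertices."""
    xy = np.asarray(cloud)[:, :2]
    hull = ConvexHull(xy)
    return xy[hull.vertices]   # outline vertices in counter-clockwise order
```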
Cited / Citing Patents
Trepagnier, Paul Gerard; Nagel, Jorge Emilio; Kinney, Powell McVay; Dooner, Matthew Taylor; Wilson, Bruce Mackie; Schneider, Jr., Carl Reimers; Goeller, Keith Brian, Navigation and control system for autonomous vehicles.
Campbell, Jeffrey Scott; Sundarakrishnamachari, Rangarajan; Ramaswamy, Rathnakumar, High vibration connector with a connector-position-assurance device.
Qiu, Hang; Govindan, Ramesh; Gruteser, Marco; Bai, Fan, Method and apparatus of networked scene rendering and augmentation in vehicular environments in autonomous driving systems.