IPC Classification Information

Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition):
Application Number: US-0180576 (2011-07-12)
Registration Number: US-8811720 (2014-08-19)
Inventor / Address:
Applicant / Address:
Agent / Address: Schwegman, Lundberg & Woessner, P.A.
Citation Information: cited by 9 patents; cites 24 patents
Abstract
In accordance with particular embodiments, a method includes receiving LIDAR data associated with a geographic area and generating a three-dimensional image of the geographic area based on the LIDAR data. The method further includes presenting at least a first portion of the three-dimensional image to a user based on a camera at a first location. The first portion of the three-dimensional image is presented from a walking perspective. The method also includes navigating the three-dimensional image based on a first input received from the user. The first input is used to direct the camera to move along a path in the walking perspective based on the first input and the three-dimensional image. The method further includes presenting at least a second portion of the three-dimensional image to the user based on navigating the camera to a second location. The second portion of the three-dimensional image is presented from the walking perspective.
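The sketch below illustrates the flow the abstract describes: receiving LIDAR data, generating a three-dimensional image, presenting a portion of it from a walking-perspective camera, and navigating based on user input. It is a minimal illustration only; the file name, array layout, eye height, and all function names are assumptions, not details taken from the patent.

```python
# Minimal sketch of the flow described in the abstract, assuming the LIDAR
# data is an N x 3 NumPy array of (x, y, z) returns. All names here are
# illustrative; the patent does not specify an implementation.
import numpy as np

def receive_lidar_data(path: str) -> np.ndarray:
    """Load LIDAR returns for a geographic area from a plain-text file."""
    return np.loadtxt(path)  # each row: x, y, z

def generate_image(points: np.ndarray, camera_pos: np.ndarray,
                   view_radius: float = 50.0) -> np.ndarray:
    """'Render' the portion of the point cloud within the camera's viewing area.
    A real system would rasterize or mesh the points; here we just select them."""
    dists = np.linalg.norm(points[:, :2] - camera_pos[:2], axis=1)
    return points[dists < view_radius]

def navigate(camera_pos: np.ndarray, user_input: np.ndarray) -> np.ndarray:
    """Move the camera along a path in response to user input."""
    return camera_pos + user_input

# Walking-perspective session: present, navigate, present again.
points = receive_lidar_data("area.xyz")                # hypothetical file name
camera = np.array([0.0, 0.0, 1.7])                     # first location; eye height assumed
first_portion = generate_image(points, camera)         # presented to the user
camera = navigate(camera, np.array([5.0, 0.0, 0.0]))   # first input: step forward
second_portion = generate_image(points, camera)        # presented from the second location
```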
Representative Claims
1. A method comprising: receiving light detection and ranging (LIDAR) data comprising a plurality of data points created from reflections from a LIDAR system, the plurality of data points arranged within a three-dimensional space associated with a geographic area; receiving a selection of a first perspective for presenting a three-dimensional image; generating the three-dimensional image of the geographic area based on the plurality of data points of the LIDAR data arranged within a three-dimensional space and the selection of the first perspective for presenting the three-dimensional image; presenting at least a first portion of the three-dimensional image to a user based on the selection of the first perspective and a viewing area of a camera arranged to correspond to the user positioned at a first location in the geographic area, the first portion of the three-dimensional image presented from the selection of the first perspective; navigating the three-dimensional image based on a first input received from the user, the first input directing the camera to move along a path in the selection of the first perspective to simulate movement of the user in the geographic area based on the first input and the three-dimensional image; and presenting at least a second portion of the three-dimensional image to the user based on the selection of a second perspective and navigating the camera to a second location in the geographic area, the second portion of the three-dimensional image presented from the selection of the second perspective to simulate movement of the user to the second location in the geographic area as seen from the second perspective.

2. The method of claim 1, further comprising receiving a perspective toggle request from a user requesting the three-dimensional image be presented from a pan and zoom perspective for the second perspective.

3. The method of claim 2, further comprising: presenting at least a third portion of the three-dimensional image to a user based on a camera at a third location in the geographic area, the third portion of the three-dimensional image presented from the pan and zoom perspective; navigating the three-dimensional image based on a second input received from the user, the second input directing the camera to move to a fourth location in the geographic area based on the second input to simulate movement of the user to the fourth location in the geographic area, the camera moving in the pan and zoom perspective in a relatively straight line from the third location to the fourth location; and presenting at least a fourth portion of the three-dimensional image to the user based on navigating the camera to the fourth location to simulate movement of the user to the fourth location in the geographic area, the fourth portion of the three-dimensional image presented from the pan and zoom perspective.

4. The method of claim 3, wherein the first input is based on a first control scheme associated with a first input device and the second input is based on a second control scheme associated with the first input device.
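Claims 2-4 describe toggling between the walking perspective and a pan and zoom perspective, with the same input device driving a different control scheme under each perspective. A minimal sketch of that toggle follows; the Perspective enum, the Viewer class, and the camera.walk/camera.pan methods are hypothetical names, not terminology from the patent.

```python
# Hypothetical illustration of claims 2-4: one input device, two control
# schemes, selected by the currently active perspective.
from enum import Enum, auto

class Perspective(Enum):
    WALKING = auto()
    PAN_AND_ZOOM = auto()

class Viewer:
    def __init__(self):
        self.perspective = Perspective.WALKING

    def toggle_perspective(self) -> None:
        """Handle a perspective toggle request from the user (claim 2)."""
        self.perspective = (Perspective.PAN_AND_ZOOM
                            if self.perspective is Perspective.WALKING
                            else Perspective.WALKING)

    def handle_input(self, stick_x: float, stick_y: float, camera) -> None:
        """Interpret the same gamepad axes under two control schemes (claim 4)."""
        if self.perspective is Perspective.WALKING:
            camera.walk(forward=stick_y, strafe=stick_x)  # terrain-following step
        else:
            camera.pan(dx=stick_x, dy=stick_y)            # straight-line movement (claim 3)
```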
5. The method of claim 1, further comprising determining a ground level of the three-dimensional image, the ground level corresponding to a terrain of the geographic area when the selection of the first perspective is a walking perspective; and wherein navigating the three-dimensional image based on an input received from the user comprises directing the camera to move along a path in the walking perspective that follows the terrain based on the first input and the three-dimensional image to simulate movement of the user along the path.

6. The method of claim 1, further comprising loading and executing a first-person perspective game engine configured to facilitate in generating the three-dimensional image, presenting the first and second portions of the three-dimensional image, and navigating the three-dimensional image.

7. The method of claim 1, wherein navigating the three-dimensional image based on a first input received from the user comprises navigating the three-dimensional image based on a first input received from a gamepad operated by the user.

8. A system comprising: an interface configured to receive light detection and ranging (LIDAR) data comprising a plurality of data points created from reflections from a LIDAR system, the plurality of data points arranged within a three-dimensional space associated with a geographic area, and to receive a selection of a first perspective for presenting a three-dimensional image; and a processor coupled to the interface and configured to: generate the three-dimensional image of the geographic area based on the plurality of data points of the LIDAR data arranged within a three-dimensional space and the selection of the first perspective for presenting the three-dimensional image; present at least a first portion of the three-dimensional image to a user based on the selection of the first perspective and a viewing area of a camera arranged to correspond to the user positioned at a first location in the geographic area, the first portion of the three-dimensional image presented from the selection of the first perspective; navigate the three-dimensional image based on a first input received from the user, the first input directing the camera to move along a path in the selection of the first perspective to simulate movement of the user in the geographic area based on the first input and the three-dimensional image; and present at least a second portion of the three-dimensional image to the user based on the selection of a second perspective and navigating the camera to a second location in the geographic area, the second portion of the three-dimensional image presented from the selection of the second perspective to simulate movement of the user to the second location in the geographic area as seen from the second perspective.

9. The system of claim 8, wherein the interface is further configured to receive a perspective toggle request from a user requesting the three-dimensional image be presented from a pan and zoom perspective for the second perspective.
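Claim 5 (and the parallel system and medium claims) determines a ground level corresponding to the terrain and keeps the walking-perspective camera on a path that follows it. One way to approximate that is sketched below, assuming the LIDAR data is an N x 3 NumPy array and that the lowest nearby return stands in for the terrain; the function names and the eye-height constant are illustrative, not from the patent.

```python
# Sketch of the ground-level handling in claim 5: estimate terrain height near
# the camera and keep the walking camera at a fixed eye height above it.
import numpy as np

EYE_HEIGHT = 1.7  # assumed walking-perspective eye height in metres

def ground_level(points: np.ndarray, x: float, y: float, radius: float = 1.0) -> float:
    """Estimate the terrain elevation near (x, y) from nearby LIDAR returns."""
    near = points[(np.abs(points[:, 0] - x) < radius) &
                  (np.abs(points[:, 1] - y) < radius)]
    return float(near[:, 2].min()) if near.size else 0.0

def walk_step(points: np.ndarray, camera_pos: np.ndarray,
              dx: float, dy: float) -> np.ndarray:
    """Move the camera along the path while following the terrain."""
    x, y = camera_pos[0] + dx, camera_pos[1] + dy
    return np.array([x, y, ground_level(points, x, y) + EYE_HEIGHT])
```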
10. The system of claim 9, wherein the processor is further configured to: present at least a third portion of the three-dimensional image to a user based on a camera at a third location in the geographic area, the third portion of the three-dimensional image presented from the pan and zoom perspective; navigate the three-dimensional image based on a second input received from the user, the second input directing the camera to move to a fourth location in the geographic area based on the second input to simulate movement of the user to the fourth location in the geographic area, the camera moving in the pan and zoom perspective in a relatively straight line from the third location to the fourth location; and present at least a fourth portion of the three-dimensional image to the user based on navigating the camera to the fourth location to simulate movement of the user to the fourth location in the geographic area, the fourth portion of the three-dimensional image presented from the pan and zoom perspective.

11. The system of claim 10, wherein the first input is based on a first control scheme associated with a first input device and the second input is based on a second control scheme associated with the first input device.

12. The system of claim 8, wherein: the processor is further configured to determine a ground level of the three-dimensional image, the ground level corresponding to a terrain of the geographic area when the selection of the first perspective is a walking perspective; and the processor configured to navigate the three-dimensional image based on an input received from the user is further configured to direct the camera to move along a path in the walking perspective that follows the terrain based on the first input and the three-dimensional image to simulate movement of the user along the path.

13. The system of claim 8, wherein the processor is further configured to load and execute a first-person perspective game engine configured to facilitate in generating the three-dimensional image, presenting the first and second portions of the three-dimensional image, and navigating the three-dimensional image.

14. The system of claim 8, wherein the processor configured to navigate the three-dimensional image based on a first input received from the user is further configured to navigate the three-dimensional image based on a first input received from a gamepad operated by the user.
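Claims 3, 10, and 17 have the pan and zoom camera move in a relatively straight line between two locations rather than following the terrain. A simple way to realize that is to interpolate the camera position linearly, as in the hypothetical sketch below; the step count and coordinates are arbitrary assumptions.

```python
# Sketch of the straight-line pan-and-zoom movement in claims 3/10/17.
import numpy as np

def pan_to(camera_pos: np.ndarray, target_pos: np.ndarray, steps: int = 30):
    """Yield intermediate camera positions along a straight line to the target."""
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * camera_pos + t * target_pos

# Example: glide from the third location to the fourth location.
third = np.array([10.0, 0.0, 40.0])    # pan-and-zoom camera sits above the scene
fourth = np.array([60.0, 25.0, 40.0])
for pos in pan_to(third, fourth):
    pass  # each intermediate position would be used to present a frame
```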
15. A non-transitory computer readable medium comprising instructions that, when executed by a processor, cause the machine to: receive light detection and ranging (LIDAR) data comprising a plurality of data points created from reflections from a LIDAR system, the plurality of data points arranged within a three-dimensional space associated with a geographic area; receive a selection of a first perspective for presenting a three-dimensional image; generate the three-dimensional image of the geographic area based on the plurality of data points of the LIDAR data arranged within a three-dimensional space and the selection of the first perspective for presenting the three-dimensional image; present at least a first portion of the three-dimensional image to a user based on the selection of the first perspective and a viewing area of a camera arranged to correspond to the user positioned at a first location in the geographic area, the first portion of the three-dimensional image presented from the selection of the first perspective; navigate the three-dimensional image based on a first input received from the user, the first input directing the camera to move along a path in the selection of the first perspective to simulate movement of the user in the geographic area based on the first input and the three-dimensional image; and present at least a second portion of the three-dimensional image to the user based on the selection of a second perspective and navigating the camera to a second location in the geographic area, the second portion of the three-dimensional image presented from the selection of the second perspective to simulate movement of the user to the second location in the geographic area as seen from the second perspective.

16. The computer readable medium of claim 15, further configured to receive a perspective toggle request from a user requesting the three-dimensional image be presented from a pan and zoom perspective for the second perspective.

17. The computer readable medium of claim 16, further configured to: present at least a third portion of the three-dimensional image to a user based on a camera at a third location in the geographic area, the third portion of the three-dimensional image presented from the pan and zoom perspective; navigate the three-dimensional image based on a second input received from the user, the second input directing the camera to move to a fourth location in the geographic area based on the second input to simulate movement of the user to the fourth location in the geographic area, the camera moving in the pan and zoom perspective in a relatively straight line from the third location to the fourth location; and present at least a fourth portion of the three-dimensional image to the user based on navigating the camera to the fourth location to simulate movement of the user to the fourth location in the geographic area, the fourth portion of the three-dimensional image presented from the pan and zoom perspective.

18. The computer readable medium of claim 17, wherein the first input is based on a first control scheme associated with a first input device and the second input is based on a second control scheme associated with the first input device.
19. The computer readable medium of claim 15, further configured to determine a ground level of the three-dimensional image, the ground level corresponding to a terrain of the geographic area when the selection of the first perspective is a walking perspective; and wherein the logic configured to navigate the three-dimensional image based on an input received from the user is further configured to direct the camera to move along a path in the walking perspective that follows the terrain based on the first input and the three-dimensional image to simulate movement of the user along the path.

20. The computer readable medium of claim 15, further configured to load and execute a first-person perspective game engine configured to facilitate in generating the three-dimensional image, presenting the first and second portions of the three-dimensional image, and navigating the three-dimensional image.

21. The computer readable medium of claim 15, wherein the logic configured to navigate the three-dimensional image based on a first input received from the user is further configured to navigate the three-dimensional image based on a first input received from a gamepad operated by the user.
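Claims 7, 14, and 21 take the navigation input from a gamepad operated by the user. The sketch below shows one way such input could be read and turned into a camera step, assuming pygame's joystick module as the input device; the axis indices, dead-zone value, and the walk_step function it feeds are assumptions, not part of the patent.

```python
# Hypothetical gamepad input for the walking navigation (claims 7, 14, 21),
# using pygame's joystick module; assumes a gamepad is connected.
import numpy as np
import pygame

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)   # first connected gamepad
pad.init()

DEAD_ZONE = 0.15                    # ignore small stick drift
camera_input = np.zeros(3)

pygame.event.pump()                 # refresh joystick state
strafe = pad.get_axis(0)            # left stick, left/right (assumed axis layout)
forward = -pad.get_axis(1)          # left stick, up/down (inverted)
if abs(strafe) > DEAD_ZONE or abs(forward) > DEAD_ZONE:
    camera_input = np.array([strafe, forward, 0.0])
# camera_input would then drive the navigation step, e.g. the walk_step sketch above.
```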