IPC Classification Information

Country / Status | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application Number | US-0064474 (2006-08-31)
Registration Number | US-8818076 (2014-08-26)
International Application Number | PCT/US2006/033966 (2006-08-31)
§371/§102 Date | 2008-02-22
International Publication Number | WO2007/027847 (2007-03-08)
Inventors / Address |
- Shenkar, Victor
- Harari, Alexander
Applicant / Address |
Agent / Address | Smith Risley Tempel Santos LLC
Citation Information | Times cited: 10; Cited patents: 16
Abstract
A method, a system, and a program for high-fidelity three-dimensional modeling of a large-scale urban environment, performing the following steps: acquiring imagery of the urban environment, containing vertical aerial stereo-pairs, oblique aerial images, street-level imagery, and terrestrial laser scans; acquiring metadata pertaining to performance, spatial location and orientation of the imaging sensors providing the imagery; identifying pixels representing ground control-points and tie-points in every instance of the imagery where the ground control-points and tie-points have been captured; co-registering the instances of the imagery using the ground control-points, the tie-points and the metadata; and referencing the co-registered imagery to a common, standard coordinate system. The referenced co-registration obtained enables extraction of ground coordinates for each pixel located in overlapping segments of the imagery, representing a 3D-point within the urban environment, and applying data pre-processing and 3D modeling procedures, to create the high-fidelity 3D model of a large-scale urban environment.
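The record does not disclose the mathematics behind the co-registration step. As an illustration only, the sketch below assumes the common least-squares similarity alignment (Umeyama method) of corresponding 3D ground control-points; the function name and array shapes are hypothetical, not taken from the patent.

```python
import numpy as np

def register_ground_control_points(src, dst):
    """Estimate a similarity transform (scale s, rotation R, translation t)
    mapping ground control-points observed in one imagery instance (src)
    onto the same points in a reference coordinate system (dst), so that
    dst ~= s * R @ src_i + t for each point. Classic Umeyama alignment.

    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance between the centred point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Reflection guard keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In practice each aerial, street-level, or laser-scan instance would be aligned to the shared frame this way (or by full bundle adjustment, which the abstract's mention of sensor metadata suggests), after which overlapping pixels can be intersected to recover 3D ground coordinates.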
Representative Claims
1. A method for three-dimensional (3D) modeling of an urban environment, said method comprising the steps of:
a) acquiring 3D aerial imagery of said urban environment;
b) acquiring 3D street-level imagery of said urban environment;
c) identifying pixels representing ground control-points and tie-points in every instance of said imagery, in which said ground control-points and tie-points have been captured; and
d) co-registering said instances of said 3D aerial imagery and said 3D street-level imagery using at least one of ground control-points and tie-points;
wherein said co-registered imagery enables extraction of ground coordinates for each pixel located in overlapping segments of aerial imagery and said street-level imagery representing a 3D-point within said urban environment and the ability to switch between overlapping segments.

2. A three-dimensional modeling method according to claim 1, additionally comprising the steps of:
e) determining ground coordinates for a collection of selected pixels of said aerial imagery;
f) forming a three-dimensional box-model of selected modeling units spanned over said ground coordinates of said collection of selected pixels; and
g) incorporating geometric details and textural details taken from said street-level imagery into said modeling units forming a high-fidelity 3D-model of said urban environment.

3. A three-dimensional modeling method according to claim 2, wherein said step of incorporating geometric details and textural details into said modeling units forming a high-fidelity 3D-model of said urban environment further comprises:
1) dividing said 3D-model into data-layers comprising:
i) a building-models data-layer comprising at least one 3D building model;
ii) a terrain-skin data-layer comprising a 3D terrain-skin model; and
iii) a street-level-culture data-layer comprising at least one 3D street-level-culture model;
wherein said building-models data-layer is at least partially based on said aerial imagery and wherein said terrain-skin model and street-level-culture model are at least partially based on said street-level imagery.

4. A three-dimensional modeling method according to claim 3, further comprising:
2) developing said 3D building models, said 3D terrain-skin models, and said 3D street-level-culture models;
3) creating several levels-of-detail for said data-layers; and
4) merging corresponding instances of said at least one 3D building model, terrain-skin model, and street-level-culture models, forming a high-fidelity 3D-model of said urban environment.

5. A three-dimensional modeling method according to claim 4, further comprising:
j) dividing said urban environment into city-block segments;
k) applying said steps (f) through (h) to each city-block segment to form a 3D model of said city-block segment, independently of other city-block segments; and
l) integrating said 3D models of said city-block segments to form an integrated 3D model of said urban environment.

6. A three-dimensional modeling method according to claim 1, wherein said vertical aerial imagery comprises a plurality of stereo-pairs of vertical images providing a continuous coverage of said urban environment and a plurality of oblique aerial images providing a continuous coverage of said urban environment in four main directions.

7. A three-dimensional modeling method according to claim 4, wherein said step of developing said 3D building models, said 3D terrain-skin model, and said 3D street-level-culture models comprises at least one of the steps selected from the group of steps comprising:
1) automatically importing selected segments of said imagery and associated metadata that are pertinent to modeling a specific city-block segment;
2) automatically identifying best image quality photographic textures of said selected segments;
3) automatically pasting said best image quality photographic textures onto corresponding 3D-geometry;
4) interactively determining spatial location of repetitive 3D-geometric elements;
5) automatically inserting said repetitive 3D-geometric elements into said 3D-model;
6) interactively determining at least one of: precise spatial location, physical size, and cross-sections of specific details extracted from said 3D street-level imagery;
7) automatically inserting said specific details into said 3D model of said urban environment;
8) interactively creating templates of building details;
9) selectively implanting said templates into at least one of said 3D building models, terrain-skin model and street-level-culture models; and
10) automatically generating a 3D-mesh representing said terrain-skin surface, using a predetermined set of 3D-points and 3D-lines ("polylines") belonging to said 3D terrain-skin model, without affecting existing 3D-details.
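The final step of claim 7 (automatic generation of a terrain-skin 3D-mesh from a set of 3D points) specifies no algorithm. The minimal sketch below assumes the simplest case, elevation samples on a regular grid split into two triangles per cell; the claim's polyline constraints and preservation of existing 3D-details are omitted, and all names are hypothetical.

```python
import numpy as np

def terrain_skin_mesh(heights):
    """Triangulate a terrain-skin layer from gridded elevation samples.

    heights: (H, W) array of z-values sampled on a unit x/y grid.
    Returns (vertices, triangles):
      vertices  - (H*W, 3) array of (x, y, z) points, row-major order;
      triangles - (2*(H-1)*(W-1), 3) array of vertex indices.
    """
    h, w = heights.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vertices = np.column_stack(
        [xs.ravel(), ys.ravel(), heights.ravel()]).astype(float)
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c  # top-left vertex of the grid cell
            # Split each quad cell into two triangles.
            tris.append((i, i + 1, i + w))
            tris.append((i + 1, i + w + 1, i + w))
    return vertices, np.array(tris)
```

A production pipeline would instead use a constrained Delaunay triangulation so that the predetermined polylines (road edges, curbs) become mesh edges, which is what the claim's wording implies.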
8. A computer program product, stored on one or more non-transitory computer-readable media, comprising instructions operative to cause a programmable processor to:
a) acquire imagery of said urban environment comprising:
1) 3D aerial imagery comprising at least one of:
i) stereo-pairs of vertical aerial images; and
ii) oblique aerial images taken in four main directions; and
2) 3D street-level imagery comprising:
i) a plurality of terrestrial photographs; and
ii) a plurality of terrestrial laser scans;
b) acquire metadata comprising information pertaining to performance, spatial location and orientation of imaging sensors providing said aerial imagery and street-level imagery;
c) identify pixels representing ground control-points and tie-points in every instance of said imagery, in which said ground control-points and tie-points have been captured;
d) co-register said instances of said imagery using said ground control-points, said tie-points and said metadata; and
e) reference said co-registration of said imagery to a common, standard coordinate system, enabling thereby extraction of ground coordinates for each pixel located in overlapping segments of said imagery, representing a 3D-point within said urban environment.

9. A computer program product according to claim 8, comprising additional instructions operative to cause said programmable processor to:
f) determine ground coordinates for a collection of selected pixels of said imagery;
g) form a three-dimensional box-model of selected modeling units spanned over said ground coordinates of said collection of selected pixels; and
h) incorporate geometric details and textural details into said modeling units while forming a high-fidelity 3D-model of said urban environment.

10. A computer program product according to claim 9, comprising additional instructions operative to cause said programmable processor to:
1) divide said 3D-model into data-layers comprising:
i) a building-models data-layer comprising at least one 3D building model;
ii) a terrain-skin data-layer comprising a 3D terrain-skin model; and
iii) a street-level-culture data-layer comprising at least one 3D street-level-culture model;
2) develop said 3D building models, said 3D terrain-skin models, and said 3D street-level-culture models;
3) create several levels-of-detail for said data-layers; and
4) merge corresponding instances of at least one said 3D building model, terrain-skin model, and street-level-culture models, forming a high-fidelity 3D-model of said urban environment.

11. A computer program product according to claim 10, comprising additional instructions operative to cause said programmable processor to:
i) store said 3D-model in a database.

12. A computer program product according to claim 11, comprising additional instructions operative to cause said programmable processor to:
j) divide said urban environment into city-block segments;
k) apply said steps (f) through (h) to each city-block segment to form a 3D model of said city-block segment, independently of other city-block segments; and
l) integrate said 3D models of said city-block segments to form an integrated 3D model of said urban environment.

13. A computer program product according to claim 10, comprising additional instructions operative to cause said programmable processor to perform at least one function selected from the group of functions consisting of:
1) import selected segments of said imagery and associated metadata that are pertinent to modeling a specific city-block segment;
2) identify best image quality photographic textures of said selected segments;
3) paste said best image quality photographic textures onto corresponding 3D-geometry;
4) enable a user to interactively determine spatial location of repetitive 3D-geometric elements;
5) insert said repetitive 3D-geometric elements into said 3D-model;
6) enable a user to interactively determine at least one of: precise spatial location, physical size, and cross-sections of specific details extracted from said 3D street-level imagery;
7) automatically insert said specific details into said 3D model of said urban environment;
8) enable a user to interactively create templates of building details;
9) selectively implant said templates into at least one of said 3D building models, terrain-skin model and street-level-culture models; and
10) automatically generate a 3D-mesh representing said terrain-skin surfaces, using a predetermined set of 3D-points and 3D-lines ("polylines") belonging to said 3D terrain-skin model, without affecting existing 3D-details.
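The data-layer division of claim 10 and the city-block integration of claim 12 describe a data organization rather than an algorithm. As a rough structural sketch only, the hypothetical classes below model the three data-layers per city-block segment and the merge of independently modeled blocks into one urban model; none of these names come from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DataLayer:
    """One data-layer of claim 10: a named collection of 3D models,
    represented here by placeholder model identifiers."""
    name: str
    models: List[str] = field(default_factory=list)

@dataclass
class CityBlockModel:
    """3D model of one city-block segment (claim 12), divided into the
    building-models, terrain-skin, and street-level-culture layers."""
    block_id: str
    buildings: DataLayer
    terrain_skin: DataLayer
    street_level_culture: DataLayer

def integrate(blocks: List[CityBlockModel]) -> Dict[str, List[str]]:
    """Merge independently modeled city-block segments into one
    integrated urban model, keyed by data-layer (steps j-l of claim 12)."""
    merged: Dict[str, List[str]] = {
        "building-models": [],
        "terrain-skin": [],
        "street-level-culture": [],
    }
    for b in blocks:
        merged["building-models"] += b.buildings.models
        merged["terrain-skin"] += b.terrain_skin.models
        merged["street-level-culture"] += b.street_level_culture.models
    return merged
```

Modeling each block independently, as the claim requires, is what allows large urban areas to be processed in parallel and re-integrated afterward.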