IPC Classification Information
Country/Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application Number | US-0266942 (2008-11-07)
Registration Number | US-8144954 (2012-03-27)
Inventors / Address |
- Quadling, Mark
- Tchouprakov, Andrei
- Severance, Gary
- Freeman, Glen
Applicant / Address |
Agent / Address |
Citation Information | Times cited: 4 / Patents cited: 9
Abstract
Methods, systems, and devices for generating textured 3D models are provided. The present disclosure describes methods, systems, and devices for combining multiple images onto a 3D model. In some instances, the textures of the images are applied to the 3D model dynamically so that the textured 3D model is viewable from different viewpoints in real time on a display. The present disclosure also describes methods, systems, and devices for selecting the images and, in particular, the portions of the selected images to map to defined portions of the 3D model. In addition, the present disclosure describes how to adjust the images themselves to remove the effects of directional lighting. Some aspects of the present disclosure are particularly useful in the context of 3D modeling of dental preparations. In some instances, a 3D digitizer is used to produce 3D models of dental preparations that are rendered on a display in real time and are fully three-dimensional, while accurately depicting the surface textures of the item(s) being digitized.
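As a rough illustration of the lighting adjustment the abstract describes, the sketch below assumes the light source's falloff has already been calibrated into a per-pixel intensity map; dividing the captured image by that map is one simple way to remove the directional-lighting effect so textures from different viewpoints blend consistently. The function name and the division-based model are illustrative assumptions, not the patent's disclosed implementation.

```python
import numpy as np

def compensate_lighting(image, intensity_map, eps=1e-6):
    """image, intensity_map: float arrays of the same shape, values in [0, 1].
    Returns the image with the calibrated light falloff divided out."""
    return np.clip(image / np.maximum(intensity_map, eps), 0.0, 1.0)

# Example: a pixel lit at only 50% of full intensity is scaled back up.
img = np.array([[0.4, 0.8]])
falloff = np.array([[0.5, 1.0]])
print(compensate_lighting(img, falloff))  # -> [[0.8 0.8]]
```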
Representative Claims
1. A method comprising: scanning a physical object with a 3D digitizer and attached light source from a plurality of viewpoints to obtain a 3D model and a plurality of images of the object; adjusting an intensity of at least one of the plurality of images using an intensity function that compensates for effects of the attached light source to create at least one intensity adjusted texture; and mapping textures from the plurality of images to corresponding points in the 3D model to generate a textured 3D model of the object, wherein each point in the 3D model corresponds to a texture from one of the plurality of images and at least one point corresponds to the at least one intensity adjusted texture.

2. The method of claim 1, wherein the mapping of textures from the plurality of images to the corresponding points in the 3D model is dependent on a viewpoint of the 3D model.

3. The method of claim 2, further comprising: ordering the plurality of images (1 to n) based on a relative proximity of the corresponding viewpoint of the image to the viewpoint of the 3D model, where image 1 is closest in proximity and image n is farthest in proximity to the viewpoint of the 3D model; and wherein mapping the textures comprises utilizing the texture of image 1 for a primary portion of the 3D model.

4. The method of claim 3, wherein mapping the textures comprises utilizing the textures of images 2 through m, where m is less than or equal to n, for secondary portions of the 3D model.

5. The method of claim 4, wherein the physical object comprises a dental item.

6. The method of claim 1, further comprising calibrating the 3D digitizer to determine the intensity function.

7. The method of claim 6, wherein calibrating the 3D digitizer comprises: imaging a surface of uniform color with the 3D digitizer from a plurality of distances to obtain a plurality of calibration images; determining an intensity map for each of the plurality of calibration images; and determining the intensity function by interpolating between the intensity maps of the plurality of calibration images.

8. The method of claim 1, wherein the attached light source includes a laser light source.

9. The method of claim 1, wherein the 3D digitizer is sized for intra-oral use and configured for obtaining a plurality of images of dental structures.

10. The method of claim 1, further comprising displaying the textured 3D model in real time on a display.

11. The method of claim 1, wherein mapping textures from the plurality of images to corresponding points in the 3D model further comprises determining which textures from the plurality of images to map to the corresponding points in the 3D model based on a current viewpoint of the 3D model.
12. The method of claim 11, wherein determining which textures from the plurality of images to map to the corresponding points in the 3D model comprises: ordering the plurality of images based on a relative proximity of the corresponding viewpoint of the image to the current viewpoint of the 3D model; determining which portions of the 3D model can be mapped using the image closest in proximity to the current viewpoint of the 3D model and mapping those portions of the 3D model with the corresponding textures of the image closest in proximity to the current viewpoint; determining which portions of the 3D model can be mapped using the image next closest in proximity to the current viewpoint of the 3D model and mapping those portions of the 3D model with the corresponding textures of the image next closest in proximity to the current viewpoint until all portions of the 3D model visible from the current viewpoint are textured.

13. A method comprising: scanning a physical object with a 3D digitizer having an attached light source from a plurality of viewpoints to obtain a 3D model and a plurality of images of the object; rendering the 3D model on a display from a first viewpoint, wherein rendering the 3D model comprises mapping textures from the plurality of images to corresponding points in the 3D model to generate a textured 3D model of the object, wherein mapping the textures from the plurality of images comprises: ordering the plurality of images based on a relative proximity of the viewpoint of the image to the first viewpoint of the 3D model, and applying the textures to the corresponding points in the 3D model using the images having the closest relative proximity to the first viewpoint.

14. The method of claim 13, wherein the 3D model is defined by a plurality of triangles, and wherein a texture from a single one of the plurality of images is applied to each of the plurality of triangles based on the ordering of the plurality of images.

15. The method of claim 13, wherein if textures from multiple images are available to map to a single corresponding point of the 3D model, then the image having the closest proximity to the first viewpoint is utilized.

16. The method of claim 13, wherein mapping the textures further comprises: determining which portions of the 3D model can be mapped using the image in closest proximity to the first viewpoint; mapping the portions of the 3D model that can be mapped using the image in closest proximity to the first viewpoint with a corresponding texture of the image in closest proximity; determining which unmapped portions of the 3D model can be mapped using the image second closest in proximity to the first viewpoint; mapping the unmapped portions of the 3D model that can be mapped using the image second closest in proximity to the first viewpoint with a corresponding texture of the image second closest in proximity.

17. The method of claim 16, wherein mapping the textures further comprises: determining which unmapped portions of the 3D model can be mapped using the image next closest in proximity to the first viewpoint; mapping the unmapped portions of the 3D model that can be mapped using the image next closest in proximity to the first viewpoint with a corresponding texture of the image next closest in proximity until all portions of the 3D model visible from the first viewpoint are textured.
18. The method of claim 13, further comprising: rendering the 3D model on a display from a second viewpoint, wherein rendering the 3D model comprises mapping textures from the plurality of images to corresponding points in the 3D model to generate a textured 3D model of the object, wherein mapping the textures from the plurality of images comprises: ordering the plurality of images based on a relative proximity of the viewpoint of the image to the second viewpoint of the 3D model, and applying the textures from the images having the closest relative proximity to the second viewpoint to the corresponding points in the 3D model.

19. The method of claim 13, wherein the attached light source includes a laser light source.

20. A method comprising: providing a 3D digitizer comprising an attached light source; imaging a calibration object with the 3D digitizer from a plurality of distances to obtain a plurality of images; determining an intensity map for each of the plurality of images; and determining an intensity function by interpolating between the intensity maps of the plurality of images, the intensity function compensating for effects caused by the attached light source.

21. The method of claim 20, wherein the calibration object has a substantially uniform color.

22. The method of claim 21, wherein the calibration object comprises at least one planar surface portion.

23. The method of claim 20, further comprising: imaging a subject object with the 3D digitizer to obtain a plurality of object images; and adjusting each of the plurality of object images using the determined intensity function to compensate for effects caused by the attached light source.

24. The method of claim 23, further comprising: generating a 3D model of the object based on the plurality of adjusted object images.

25. The method of claim 24, wherein generating the 3D model comprises applying textures from the plurality of adjusted object images to surfaces of the 3D model.

26. A method comprising: scanning a dental preparation with a 3D digitizer from a plurality of viewpoints to obtain 3D coordinate data and a plurality of images; viewing from a first viewpoint a textured 3D model of the dental preparation generated from the 3D coordinate data and at least some of the plurality of images, where textures from the at least some of the plurality of images are applied to the surfaces of the 3D model visible in the first viewpoint based on a relative proximity of the viewpoint of an image to the first viewpoint of the 3D model; identifying and marking a first portion of a margin on the textured 3D model from the first viewpoint, where at least the first portion of the margin is identified at least partially based on the textures of the surfaces of the 3D model; and designing a dental prosthetic device based on the identified and marked margin.

27. The method of claim 26, further comprising: viewing from a second viewpoint the textured 3D model of the dental preparation, where textures from the at least some of the plurality of images are applied to the surfaces of the 3D model visible in the second viewpoint based on a relative proximity of the viewpoint of an image to the second viewpoint of the 3D model; and identifying and marking a second portion of the margin on the 3D model from the second viewpoint.

28. The method of claim 27, further comprising rotating the textured 3D model from the first viewpoint to the second viewpoint via a user interface.
29. The method of claim 26, further comprising sending information regarding the designed dental prosthetic device to a milling machine suitable for making the designed dental prosthetic device from a mill blank.

30. The method of claim 26, wherein marking the first portion of the margin comprises selecting two or more coordinates of the textured 3D model via a user interface.

31. The method of claim 26, wherein scanning the dental preparation comprises scanning a dental preparation that has not been treated with a contrast agent.

32. The method of claim 31, wherein different materials are distinguishable in the textured 3D model.

33. The method of claim 32, wherein at least tooth structures and gums are distinguishable in the textured 3D model.

34. The method of claim 33, wherein blood and dental restorations are distinguishable in the textured 3D model.

35. A method comprising: receiving 3D coordinate data and a plurality of images obtained by scanning a dental preparation with a 3D digitizer from a plurality of viewpoints; generating a textured 3D model of the dental preparation from a first viewpoint using the 3D coordinate data and at least some of the plurality of images, where textures from the at least some of the plurality of images are applied to the surfaces of the 3D model visible in the first viewpoint based on a relative proximity of the viewpoint of an image to the first viewpoint of the 3D model; displaying the textured 3D model from the first viewpoint on a display; receiving an input from a user identifying a first portion of a margin on the textured 3D model from the first viewpoint; generating the textured 3D model of the dental preparation from a second viewpoint using the 3D coordinate data and at least some of the plurality of images, where textures from the at least some of the plurality of images are applied to the surfaces of the 3D model visible in the second viewpoint based on a relative proximity of the viewpoint of an image to the second viewpoint of the 3D model; displaying the textured 3D model from the second viewpoint on the display; and receiving an input from a user identifying a second portion of a margin on the textured 3D model from the second viewpoint.

36. The method of claim 35, further comprising rotating the textured 3D model from the first viewpoint to the second viewpoint based on a command input by a user.

37. The method of claim 35, wherein generating the textured 3D model from the first viewpoint comprises: ranking the at least some of the plurality of images based on a relative proximity of the viewpoint of the image to the first viewpoint of the 3D model; determining which portions of the 3D model can be mapped using the image in closest proximity to the first viewpoint; mapping the portions of the 3D model that can be mapped using the image in closest proximity to the first viewpoint with a corresponding texture of the image in closest proximity; determining which unmapped portions of the 3D model can be mapped using the image second closest in proximity to the first viewpoint; and mapping the unmapped portions of the 3D model that can be mapped using the image second closest in proximity to the first viewpoint with a corresponding texture of the image second closest in proximity.
38. The method of claim 37, wherein generating the textured 3D model from the second viewpoint comprises: ranking the at least some of the plurality of images based on a relative proximity of the viewpoint of the image to the second viewpoint of the 3D model; determining which portions of the 3D model can be mapped using the image in closest proximity to the second viewpoint; mapping the portions of the 3D model that can be mapped using the image in closest proximity to the second viewpoint with a corresponding texture of the image in closest proximity; determining which unmapped portions of the 3D model can be mapped using the image second closest in proximity to the second viewpoint; and mapping the unmapped portions of the 3D model that can be mapped using the image second closest in proximity to the second viewpoint with a corresponding texture of the image second closest in proximity.
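The viewpoint-proximity ordering recited in claims 2-4 (and again in claims 12-13) can be pictured with a short sketch. The representation below, where each capture viewpoint is a unit direction vector compared by angle to the current rendering direction, is an assumption for illustration only; the claims do not specify a proximity metric.

```python
import numpy as np

def order_images_by_viewpoint(image_dirs, view_dir):
    """image_dirs: (n, 3) array of unit view directions, one per captured image.
    view_dir: direction of the current rendering viewpoint.
    Returns image indices from closest (image 1) to farthest (image n)."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    cosines = image_dirs @ view_dir       # larger cosine = smaller angle
    return np.argsort(-cosines)           # closest viewpoint first

# Example: the current view is nearly aligned with the first capture.
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(order_images_by_viewpoint(dirs, np.array([0.9, 0.1, 0.0])))  # -> [0 2 1]
```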
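Claims 12, 16, and 17 describe a greedy pass over the ordered images: texture what the closest image can see, then fill the remaining portions from the next closest, until everything visible from the current viewpoint is textured. A minimal sketch, assuming a toy back-facing visibility test; `visible_from` is a hypothetical stand-in for a real occlusion check against the capture pose.

```python
import numpy as np

def visible_from(normals, cam_dir):
    # Toy visibility test: a triangle can be textured by a camera it faces.
    return normals @ cam_dir > 0.0

def assign_textures(triangle_normals, image_dirs, view_dir):
    """Returns, per triangle, the index of the image supplying its texture,
    or -1 where no image can see the triangle."""
    order = np.argsort(-(image_dirs @ view_dir))      # closest viewpoint first
    assignment = np.full(len(triangle_normals), -1)
    for img in order:
        paintable = visible_from(triangle_normals, image_dirs[img])
        assignment[paintable & (assignment == -1)] = img
        if (assignment != -1).all():
            break                                     # every triangle textured
    return assignment

# Two triangles facing different ways each take the image that sees them.
normals = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(assign_textures(normals, dirs, np.array([1.0, 0.2, 0.0])))  # -> [0 1]
```

This also realizes claim 14's constraint that each triangle takes its texture from a single image, chosen by the ordering.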
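Claims 7 and 20-24 recite calibrating the intensity function by imaging a uniform surface at several distances and interpolating between the resulting intensity maps. The sketch below assumes simple piecewise-linear interpolation over distance, which the claims leave unspecified.

```python
import numpy as np

def build_intensity_function(distances, intensity_maps):
    """distances: sorted 1-D array of calibration distances (len k >= 2).
    intensity_maps: (k, h, w) array, one intensity map per distance.
    Returns f(d): the map linearly interpolated at distance d (clamped)."""
    distances = np.asarray(distances, dtype=float)
    maps = np.asarray(intensity_maps, dtype=float)

    def intensity_at(d):
        i = int(np.clip(np.searchsorted(distances, d) - 1,
                        0, len(distances) - 2))
        t = np.clip((d - distances[i]) / (distances[i + 1] - distances[i]),
                    0.0, 1.0)
        return (1.0 - t) * maps[i] + t * maps[i + 1]

    return intensity_at

# Two calibration maps at 10 mm and 20 mm; query halfway between them.
f = build_intensity_function([10.0, 20.0], [[[1.0]], [[0.5]]])
print(f(15.0))  # -> [[0.75]]
```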
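Claim 30 marks the margin by selecting two or more coordinates on the textured model via a user interface. As a toy illustration only, the picked 3-D points below are chained into a polyline; the picking itself (ray casting into the rendered mesh) is assumed to happen in the front end and is not shown.

```python
import numpy as np

def margin_polyline_length(picked_points):
    """picked_points: (m, 3) array of user-selected model coordinates, m >= 2.
    Returns the total length of the polyline through the picked points."""
    pts = np.asarray(picked_points, dtype=float)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

print(margin_polyline_length([[0, 0, 0], [3, 4, 0], [3, 4, 12]]))  # -> 17.0
```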