Country / Type | United States (US) Patent, Granted
---|---
IPC (7th edition) | 
Application number | US-0774958 (2013-02-22)
Registration number | US-9378584 (2016-06-28)
Inventors / Address | 
Applicant / Address | 
Agent / Address | 
Citation information | Cited by: 0 / Cited patents: 305
Abstract

A computer-implemented method for rendering virtual try-on products is described. A first render viewpoint is selected of a virtual 3-D space that includes a 3-D model of at least a portion of a user generated from an image of the user and a 3-D polygon mesh of an object. Polygons of the 3-D polygon mesh are designated as backwards-facing polygons and front-facing polygons in relation to the first render viewpoint. A shadow texture map of the object is applied to the 3-D model of the user. A transparency texture map of the object is applied to the backwards-facing polygon of the 3-D polygon mesh of the object. A first color texture map of the object is applied to the result of the application of the transparency texture map to the backwards-facing polygon. The virtual 3-D space is rendered at the first render viewpoint.
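The abstract's first step, designating polygons of the object mesh as backwards-facing or front-facing with respect to the render viewpoint, is conventionally done by comparing each face normal with the direction from the face toward the viewpoint. The sketch below is an illustrative NumPy implementation under that assumption; the function and variable names are not taken from the patent.

```python
# Minimal sketch (not the patent's code): classify mesh triangles as
# front-facing or backwards-facing relative to a render viewpoint.
import numpy as np

def classify_faces(vertices, faces, viewpoint):
    """Return a boolean array: True where a face is front-facing from `viewpoint`.

    vertices : (V, 3) array of 3-D positions
    faces    : (F, 3) array of vertex indices (triangles with consistent winding)
    viewpoint: (3,) position of the render viewpoint
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)       # face normals from the winding order
    centers = (v0 + v1 + v2) / 3.0             # face centroids
    to_view = viewpoint - centers              # direction from each face to the viewpoint
    # A face whose normal points toward the viewpoint is front-facing;
    # otherwise it is treated as backwards-facing.
    return np.einsum("ij,ij->i", normals, to_view) > 0.0

# Example: a single triangle seen from +Z is front-facing.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2]])
print(classify_faces(verts, tris, np.array([0.0, 0.0, 5.0])))   # [ True]
```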
Claims

1. A computer-implemented method for rendering virtual try-on products, the method comprising: selecting, via a processor, a first render viewpoint of a virtual 3-D space, wherein the virtual 3-D space comprises a 3-D model of at least a portion of a user generated from an image of the user and a 3-D polygon mesh representative of a real world object, wherein the object comprises at least one of clothing, footwear, glasses, jewelry, accessories, and hair styles; designating, via the processor, a first set comprising at least one polygon of the 3-D polygon mesh of the object as a backwards-facing polygon in relation to the first render viewpoint; designating, via the processor, a second set comprising at least one polygon of the 3-D polygon mesh of the object as a front-facing polygon in relation to the first render viewpoint; applying, via the processor, a shadow texture map of the object to the 3-D model of the user; applying, via the processor, a transparency texture map of the object to the backwards-facing polygon of the 3-D polygon mesh of the object; applying, via the processor, a first color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygon; and rendering, via the processor, the virtual 3-D space at the first render viewpoint including both the 3-D model of at least a portion of a user and the 3-D polygon mesh of an object.

2. The method of claim 1, further comprising: applying the transparency texture map of the object to the front-facing polygon of the 3-D polygon mesh of the object; and applying the first color texture map of the object to the result of the application of the transparency texture map to the front-facing polygon.

3. The method of claim 1, further comprising: placing at least a portion of the 3-D polygon mesh of the object within a predetermined distance of at least one point on the 3-D model of the user, wherein the application of the shadow texture map is based on the position of the 3-D polygon mesh of the object in relation to the 3-D model of the user.

4. The method of claim 1, further comprising: detecting a shadow value of the object from a scan of the object; and creating the shadow texture map from the detected shadow value.

5. The method of claim 4, further comprising: mapping a 2-D coordinate of the shadow texture map to a point on the 3-D model of the user; and multiplying a value of the point on the 3-D model of the user by the shadow value.

6. The method of claim 1, further comprising: detecting a transparency value of the object from a scan of the object; and creating the transparency texture map from the detected transparency value.

7. The method of claim 6, further comprising: mapping a 2-D coordinate of the transparency texture map to a point on the 3-D model of the user and the 3-D polygon mesh of the object; and multiplying a value of the point on the 3-D model of the user by the transparency value.

8. The method of claim 7, further comprising: selecting a first scanning angle of a scan of an object, wherein the first scanning angle corresponds to the first render viewpoint; detecting a first color value of the object at the first scanning angle; creating the first color texture map from the detected first color value.
9. The method of claim 8, further comprising: mapping a 2-D coordinate of the first color texture map to the point on the 3-D model of the user and the 3-D polygon mesh of the object; and multiplying the resultant value of the point on the 3-D model of the user and the 3-D polygon mesh of the object by the first color value.

10. The method of claim 1, further comprising: selecting a second render viewpoint of the virtual 3-D space.

11. The method of claim 10, further comprising: selecting a second scanning angle of a scan of an object, wherein the second scanning angle corresponds to the second render viewpoint; detecting a second color value of the object at the second scanning angle; and creating a second color texture map from the detected second color value.

12. The method of claim 10, further comprising: applying the shadow texture map of the object to the 3-D model of the user at the second render viewpoint; applying the transparency texture map of the object to the backwards-facing polygon of the 3-D polygon mesh of the object at the second render viewpoint; and applying the second color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygon at the second render viewpoint.

13. The method of claim 10, further comprising: applying the transparency texture map of the object to the front-facing polygon of the 3-D polygon mesh of the object at the second render viewpoint; and applying the second color texture map of the object to the result of the application of the transparency texture map to the front-facing polygon at the second render viewpoint; rendering the virtual 3-D space at the second render viewpoint.

14. The method of claim 1, further comprising: dividing the 3-D polygon mesh of the object into two or more portions; determining an order to the portions of the divided 3-D polygon mesh of the object from furthest portion to closest portion relative to the determined render viewpoint of the virtual 3-D space; rendering the 3-D polygon mesh of the object from the furthest portion to the closest portion.

15. The method of claim 1, further comprising: determining whether a portion of the 3-D polygon mesh of the object is visible in relation to the 3-D model of the user based on the determined render viewpoint, wherein rendering the scene comprises rendering the scene based on a visible portion of the 3-D polygon mesh of the object.

16. The method of claim 1, further comprising: determining a first level and a second level of blur accuracy; determining a first level and a second level of blur intensity; and applying the first level of blur accuracy at the first level of blur intensity to the rendered depiction of the object.

17. The method of claim 16, further comprising: detecting an edge of the rendered depiction of the object; and applying the first level of blur accuracy at the second level of blur intensity to the rendered depiction of the object.

18. The method of claim 16, further comprising: upon receiving a user input to adjust the render viewpoint, applying the second level of blur accuracy to the rendered depiction of the object.
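Claims 4 through 7 above describe creating shadow and transparency texture maps from a scan of the object and multiplying a point's value by the texture value mapped to it through a 2-D coordinate. The sketch below shows that multiplicative lookup under an assumed nearest-neighbour UV sampling scheme; the patent does not specify how the texture is sampled, and the names here are illustrative only.

```python
# Hedged sketch of claims 4-7: look up a per-point shadow or transparency
# value from a 2-D texture map via the point's (u, v) coordinate and
# multiply it into the point's existing value.
import numpy as np

def sample_texture(texture, uv):
    """Nearest-neighbour lookup of an (H, W) texture at (u, v) coordinates in [0, 1]."""
    h, w = texture.shape
    x = np.clip((uv[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    y = np.clip((uv[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return texture[y, x]

def apply_multiplicative_map(point_values, point_uvs, texture):
    """Multiply each point's value by the texture value mapped to it.

    This is the pattern shared by the shadow map (darkening the user model)
    and the transparency map (attenuating the backwards-facing polygons).
    """
    return point_values * sample_texture(texture, point_uvs)

# Example: a 2x2 shadow map that darkens the right half of the model.
shadow_map = np.array([[1.0, 0.4],
                       [1.0, 0.4]])
values = np.ones(3)                                   # per-point brightness
uvs = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # per-point texture coordinates
print(apply_multiplicative_map(values, uvs, shadow_map))   # [1.  0.4 0.4]
```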
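Claims 8, 10 and 11 tie each color texture map to a scanning angle that corresponds to a render viewpoint. One plausible reading, sketched below, keeps several scans indexed by capture angle and picks the one whose angle is closest to the current viewpoint's angle. The angle convention and data layout are assumptions made for illustration, not details given by the claims.

```python
# Illustrative sketch: choose the scanned color texture whose capture angle
# best matches the render viewpoint (claims 8, 10-11). Angles are in degrees
# around the object.
import numpy as np

def select_color_texture(scans, render_angle_deg):
    """scans: list of (angle_deg, color_texture) pairs obtained from the object scan."""
    def angular_distance(angle):
        # Shortest angular difference, wrapped to [0, 180].
        return abs((angle - render_angle_deg + 180.0) % 360.0 - 180.0)

    angle, texture = min(scans, key=lambda s: angular_distance(s[0]))
    return angle, texture

# Example: three scans; a render viewpoint at 100 degrees picks the 90-degree scan.
scans = [(0.0, "tex_front"), (90.0, "tex_left"), (180.0, "tex_back")]
print(select_color_texture(scans, 100.0))             # (90.0, 'tex_left')
```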
19. A computing device configured to render virtual try-on products, comprising: a processor; memory in electronic communication with the processor; instructions stored in the memory, the instructions being executable by the processor to: select a first render viewpoint of a virtual 3-D space, wherein the virtual 3-D space comprises a 3-D model of at least a portion of a user generated from an image of the user and a 3-D polygon mesh of a real world object, wherein the object comprises at least one of clothing, footwear, glasses, jewelry, accessories, and hair styles; designate a first set comprising at least one polygon of the 3-D polygon mesh of the object as a backwards-facing polygon in relation to the first render viewpoint; designate a second set comprising at least one polygon of the 3-D polygon mesh of the object as a front-facing polygon in relation to the first render viewpoint; apply a shadow texture map of the object to the 3-D model of the user; apply a transparency texture map of the object to the backwards-facing polygon of the 3-D polygon mesh of the object; apply a first color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygon; and render the virtual 3-D space at the first render viewpoint including both the 3-D model of at least a portion of a user and the 3-D polygon mesh of an object.

20. The computing device of claim 19, wherein the instructions are executable by the processor to: apply the transparency texture map of the object to the front-facing polygon of the 3-D polygon mesh of the object; and apply the first color texture map of the object to the result of the application of the transparency texture map to the front-facing polygon.

21. The computing device of claim 19, wherein the instructions are executable by the processor to: detect a shadow value of the object from a scan of the object; and create the shadow texture map from the detected shadow value.

22. The computing device of claim 21, wherein the instructions are executable by the processor to: map a 2-D coordinate of the shadow texture map to a point on the 3-D model of the user; and multiply a value of the point on the 3-D model of the user by the shadow value.

23. The computing device of claim 19, wherein, upon determining the first application is a trusted application, the instructions are executable by the processor to: detect a transparency value of the object from a scan of the object; and create the transparency texture map from the detected transparency value.

24. The computing device of claim 23, wherein the instructions are executable by the processor to: map a 2-D coordinate of the transparency texture map to a point on the 3-D model of the user and the 3-D polygon mesh of the object; and multiply a value of the point on the 3-D model of the user by the transparency value.

25. The computing device of claim 24, wherein the instructions are executable by the processor to: select a first scanning angle of a scan of an object where the scanning angle corresponds to the first render viewpoint; detect a first color value of the object at the first scanning angle; create the first color texture map from the detected first color value.
26. The computing device of claim 25, wherein the instructions are executable by the processor to: map a 2-D coordinate of the first color texture map to the point on the 3-D model of the user and the 3-D polygon mesh of the object; and multiply the resultant value of the point on the 3-D model of the user and the 3-D polygon mesh of the object by the first color value.

27. The computing device of claim 19, wherein the instructions are executable by the processor to: select a second render viewpoint of the virtual 3-D space.

28. The computing device of claim 27, wherein the instructions are executable by the processor to: select a second scanning angle of a scan of an object, wherein the second scanning angle corresponds to the second render viewpoint; detect a second color value of the object at the second scanning angle; and create a second color texture map from the detected second color value.

29. The computing device of claim 27, wherein the instructions are executable by the processor to: apply the shadow texture map of the object to the 3-D model of the user at the second render viewpoint; apply the transparency texture map of the object to the backwards-facing polygon of the 3-D polygon mesh of the object at the second render viewpoint; and apply the second color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygon at the second render viewpoint.

30. The computing device of claim 27, wherein the instructions are executable by the processor to: apply the transparency texture map of the object to the front-facing polygon of the 3-D polygon mesh of the object at the second render viewpoint; apply the second color texture map of the object to the result of the application of the transparency texture map to the front-facing polygon at the second render viewpoint; and render the virtual 3-D space at the second render viewpoint.

31. The computing device of claim 19, wherein the instructions are executable by the processor to: divide the 3-D polygon mesh of the object into two or more portions; determine an order to the portions of the divided 3-D polygon mesh of the object from furthest portion to closest portion relative to the determined render viewpoint of the virtual 3-D space; render the 3-D polygon mesh of the object from the furthest portion to the closest portion.

32. The computing device of claim 19, wherein the instructions are executable by the processor to: determine whether a portion of the 3-D polygon mesh of the object is visible in relation to the 3-D model of the user based on the determined render viewpoint, wherein the instruction to render the scene comprises an instruction to render the scene based on a visible portion of the 3-D polygon mesh of the object.

33. The computing device of claim 19, wherein the instructions are executable by the processor to: determine a first level and a second level of blur accuracy; determine a first level and a second level of blur intensity; and apply the first level of blur accuracy at the first level of blur intensity to the rendered depiction of the object.

34. The computing device of claim 33, wherein the instructions are executable by the processor to: detect an edge of the rendered depiction of the object; and apply the first level of blur accuracy at the second level of blur intensity to the rendered depiction of the object.

35. The computing device of claim 33, wherein the instructions are executable by the processor to: upon receiving a user input to adjust the render viewpoint, apply the second level of blur accuracy to the rendered depiction of the object.
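Claims 14 and 31 order the portions of the divided object mesh from furthest to closest relative to the render viewpoint and render them in that order, so that nearer, partly transparent portions composite correctly over the ones behind them. The sketch below implements that painter's-algorithm-style ordering using portion centroids as the distance proxy, which is an assumption; the `draw` callback is a stand-in for whatever rasterizer the described system actually uses.

```python
# Illustrative sketch of claims 14/31: back-to-front rendering of mesh portions.
import numpy as np

def render_back_to_front(portions, viewpoint, draw):
    """portions: list of (V_i, 3) vertex arrays; viewpoint: (3,) render viewpoint position."""
    def distance(portion):
        centroid = portion.mean(axis=0)        # representative point of the portion
        return np.linalg.norm(centroid - viewpoint)

    # Furthest portions first, closest last.
    for portion in sorted(portions, key=distance, reverse=True):
        draw(portion)

# Example: two one-vertex "portions"; the one at z = -5 is drawn before z = -1.
parts = [np.array([[0.0, 0.0, -1.0]]), np.array([[0.0, 0.0, -5.0]])]
render_back_to_front(parts, np.zeros(3), lambda p: print("draw portion at", p[0]))
```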
36. A computer-program product for rendering virtual try-on products, the computer-program product comprising a non-transitory computer-readable medium storing instructions thereon, the instructions being executable by a processor to: select a first render viewpoint of a virtual 3-D space, wherein the virtual 3-D space comprises a 3-D model of at least a portion of a user generated from an image of the user and a 3-D polygon mesh of a real world object, wherein the object comprises at least one of clothing, footwear, glasses, jewelry, accessories, and hair styles; designate a first set comprising at least one polygon of the 3-D polygon mesh of the object as a backwards-facing polygon in relation to the first render viewpoint; designate a second set comprising at least one polygon of the 3-D polygon mesh of the object as a front-facing polygon in relation to the first render viewpoint; apply a shadow texture map of the object to the 3-D model of the user; apply a transparency texture map of the object to the backwards-facing polygon of the 3-D polygon mesh of the object; apply a first color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygon; apply the transparency texture map of the object to the front-facing polygon of the 3-D polygon mesh of the object; and apply the first color texture map of the object to the result of the application of the transparency texture map to the front-facing polygon; render the virtual 3-D space at the first render viewpoint including both the 3-D model of at least a portion of a user and the 3-D polygon mesh of an object.

37. The computer-program product of claim 36, wherein the instructions are executable by the processor to: select a second render viewpoint of the virtual 3-D space; select a first scanning angle of a scan of an object where the scanning angle corresponds to the first render viewpoint; detect a second color value of the object from a scan of the object; create a second color texture map from the detected second color value, wherein the second color texture map corresponds to the second render viewpoint; apply the shadow texture map of the object to the 3-D model of the user at the second render viewpoint; apply the transparency texture map of the object to the backwards-facing polygon of the 3-D polygon mesh of the object at the second render viewpoint; apply the second color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygon at the second render viewpoint; apply the transparency texture map of the object to the front-facing polygon of the 3-D polygon mesh of the object at the second render viewpoint; and apply the second color texture map of the object to the result of the application of the transparency texture map to the front-facing polygon at the second render viewpoint; and render the virtual 3-D space at the second render viewpoint.

38. The computer-program product of claim 36, wherein the instructions are executable by the processor to: determine a first level and a second level of blur accuracy; determine a first level and a second level of blur intensity; and apply the first level of blur accuracy at the first level of blur intensity to the rendered depiction of the object.
39. The computer-program product of claim 38, wherein the instructions are executable by the processor to: detect an edge of the rendered depiction of the object; and apply the first level of blur accuracy at the second level of blur intensity to the rendered depiction of the object.

40. The computer-program product of claim 38, wherein the instructions are executable by the processor to: upon receiving a user input to adjust the render viewpoint, apply the second level of blur accuracy to the rendered depiction of the object.
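Claims 16-18, 33-35 and 38-40 distinguish two levels of blur accuracy and two levels of blur intensity, with the second intensity level applied where an edge of the rendered object is detected and the second accuracy level applied while the user is adjusting the render viewpoint. The claims do not name a filter, so the sketch below models accuracy as the number of box-blur passes and intensity as the kernel radius; that mapping, and which level is "stronger", are placeholders chosen only for illustration.

```python
# Speculative sketch of the two blur-accuracy and blur-intensity levels.
import numpy as np

FIRST_ACCURACY_PASSES = 3     # first level of blur accuracy (static view)
SECOND_ACCURACY_PASSES = 1    # second level of blur accuracy (while the viewpoint moves)
FIRST_INTENSITY_RADIUS = 1    # first level of blur intensity
SECOND_INTENSITY_RADIUS = 3   # second level of blur intensity (e.g. near detected edges)

def box_blur(image, radius, passes):
    """Separable box blur: more passes ~ higher accuracy, larger radius ~ higher intensity."""
    out = image.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for _ in range(passes):
        padded = np.pad(out, radius, mode="edge")
        rows = np.stack([np.convolve(r, kernel, mode="valid") for r in padded])
        out = np.stack([np.convolve(c, kernel, mode="valid") for c in rows.T]).T
    return out

def blur_rendered_object(rendered, viewpoint_is_changing, near_edge=False):
    """Pick the blur levels per the claims: the second accuracy level while the
    viewpoint is being adjusted, the second intensity level near a detected edge."""
    passes = SECOND_ACCURACY_PASSES if viewpoint_is_changing else FIRST_ACCURACY_PASSES
    radius = SECOND_INTENSITY_RADIUS if near_edge else FIRST_INTENSITY_RADIUS
    return box_blur(rendered, radius, passes)

# Example: a bright pixel is spread over its 3x3 neighbourhood in the cheap mode.
img = np.zeros((7, 7)); img[3, 3] = 9.0
print(blur_rendered_object(img, viewpoint_is_changing=True)[3, 3])   # 1.0
```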