Country / Type | United States (US) Patent, Granted
---|---
International Patent Classification (IPC, 7th ed.) |
Application Number | US-0345119 (2012-01-06)
Registration Number | US-9736524 (2017-08-15)
Inventor / Address |
Applicant / Address |
Agent / Address |
Citation Information | Cited by: 0 · Patents cited: 1118
The present disclosure provides user interface methods of and systems for displaying at least one available action overlaid on an image, comprising displaying an image; selecting at least one action and assigning a ranking weight thereto based on at least one of (1) image content, (2) current device location, (3) location at which the image was taken, (4) date of capturing the image, (5) time of capturing the image, and (6) a user preference signature representing prior actions chosen by a user and content preferences learned about the user; and ranking the at least one action based on its assigned ranking weight.
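The weighting scheme described in the abstract can be illustrated with a minimal sketch. This is not the patented implementation; the signal names, scores, and weighted-sum combination below are illustrative assumptions only — the patent does not specify how the per-signal relevance or the combination is computed.

```python
def rank_actions(candidates, signals, weights):
    """Rank candidate actions by a weighted sum of per-signal relevance.

    candidates: dict mapping action name -> {signal name: relevance in [0, 1]}
    signals:    list of signal names considered (e.g. image content, location)
    weights:    dict mapping signal name -> importance weight

    Returns (action, total_weight) pairs sorted highest-ranked first.
    """
    scored = []
    for action, relevance in candidates.items():
        # Ranking weight = sum over signals of (signal weight x relevance).
        total = sum(weights[s] * relevance.get(s, 0.0) for s in signals)
        scored.append((action, total))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored


# Hypothetical example: two actions scored on two of the six signals.
candidates = {
    "call_number_in_image": {"image_content": 0.9, "device_location": 0.1},
    "purchase_item":        {"image_content": 0.4, "device_location": 0.8},
}
ranked = rank_actions(candidates,
                      ["image_content", "device_location"],
                      {"image_content": 0.7, "device_location": 0.3})
```

Here the action most relevant to the image content outranks the location-driven one because image content carries the larger assumed weight.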
1. A computer-implemented user interface method of displaying at least one available action overlaid on an image, the method comprising: generating for display, a live image and a visual guide overlaid on the live image; identifying an object of interest in the live image based on a proximity of the object of interest to the visual guide; identifying, by a processor, without receiving user input, a first plurality of actions of different types from a second plurality of actions for subsequent selection by a user, the first plurality of actions being identified automatically based at least in part on the object of interest and at least one of (1) current device location, (2) location at which the live image was taken, (3) date of capturing the live image, (4) time of capturing the live image, and (5) a user preference signature representing prior actions selected by a user and content preferences learned about the user associated with particular times or locations at which the prior actions were selected by the user; assigning a ranking weight to the first plurality of actions based on a non-textual portion of the identified object of interest; ranking the first plurality of actions based on its assigned ranking weight; and presenting the first plurality of actions to a user as selectable options.

2. The method of claim 1, wherein the presenting of the first plurality of actions to the user as selectable options includes displaying the first plurality of actions in an order based on the ranking.

3. The method of claim 2, further comprising updating the user preference signature to include information about the action chosen by the user from among the first plurality of actions.

4. The method of claim 1, wherein the live image is an image of a portion of an environment surrounding the user.

5. The method of claim 1, wherein the identifying a first plurality of actions and assigning a ranking weight thereto includes determining the ranking weight by a machine learning process.

6. The method of claim 1, further comprising selecting the highest ranked action within the first plurality of actions in response to activation of a hardware camera button.

7. The method of claim 1, further comprising analyzing the live image to learn about the object of interest.

8. The method of claim 7, further comprising using at least one of the location of the device and the location at which the live image was taken to augment the analyzing the live image to learn about the object of interest.

9. The method of claim 8, wherein the first plurality of actions includes an action to purchase an item corresponding to the object of interest from an online storefront corresponding to a physical storefront, if the device's location is proximate to the physical storefront.

10. The method of claim 7, wherein the analyzing the live image to learn about the object of interest includes comparing the live image against a collection of at least one sample image to determine the object of interest.

11. The method of claim 10, wherein the analyzing the live image to learn about the object of interest includes using optical character recognition to learn about textual image content.

12. The method of claim 10, wherein the analyzing the live image to learn about the object of interest includes analyzing at least one partial image selected based on a proximity of the at least one partial image to a visual field of interest for the user.

13. The method of claim 1, wherein the first plurality of actions are presented at a first point in time, further comprising: storing the live image to a memory along with data about at least one of the location of the device, the date at which the live image was captured, and the time at which the live image was captured; and presenting the first plurality of actions at a second point in time in an order based on the ranking when the user later acts upon the stored live image after the first point in time.

14. The method of claim 1, the method further comprising: subsequent to identifying the first plurality of actions, identifying a second object in the live image, wherein the second object is farther from the visual guide than the object of interest; identifying an alternate action associated with the second object; and presenting the alternate action to a user as a selectable option.

15. A system for displaying at least one available action overlaid on an image, the system comprising: a memory device that stores instructions; and processor circuitry that executes the instructions and is configured to: generate, for display, a live image and a visual guide overlaid on the live image; identify an object of interest in the live image based on the proximity of the object of interest to the visual guide; identify, without receiving user input, a first plurality of actions of different types from a second plurality of actions for subsequent selection by the user, the first plurality of actions being identified automatically based at least in part on the object of interest and at least one of (1) current device location, (2) location at which the live image was taken, (3) date of capturing the live image, (4) time of capturing the live image, and (5) a user preference signature representing prior actions selected by a user and content preferences learned about the user associated with particular times or locations at which prior actions were selected by the user; assign a ranking weight to the first plurality of actions based on a non-textual portion of the identified object of interest; rank the first plurality of actions based on its assigned ranking weight; and present the first plurality of actions to a user as selectable options.

16. The system of claim 15, wherein the processor circuitry is further configured to present the first plurality of actions to a user as selectable options by displaying the first plurality of actions in an order based on the ranking.

17. The system of claim 16, the processor circuitry further being configured to update the user preference signature to include information about the action chosen by the user from among the first plurality of actions.

18. The system of claim 15, wherein the live image is an image of a portion of an environment surrounding the user.

19. The system of claim 15, wherein the processor circuitry is further configured to determine the ranking weight by a machine learning process.

20. The system of claim 15, the processor circuitry being further configured to select the highest ranked action within the first plurality of actions in response to activation of a hardware camera button.

21. The system of claim 15, the processor circuitry being further configured to analyze the live image to learn about the object of interest.

22. The system of claim 21, the processor circuitry being further configured to use at least one of the location of the device and the location at which the live image was taken to augment the analyzing of the live image to learn about the object of interest.

23. The system of claim 21, wherein the processor circuitry is further configured to compare the live image against a collection of at least one sample image to determine the object of interest.

24. The system of claim 23, wherein the processor circuitry is further configured to use optical character recognition to learn about textual image content.

25. The system of claim 23, wherein the processor circuitry is further configured to analyze at least one partial image selected based on a proximity of the at least one partial image to a visual field of interest for the user.

26. The system of claim 15, wherein the processor circuitry is further configured to present the first plurality of actions at a first point in time, and the processor circuitry is further configured to: store the live image to a memory along with data about at least one of the location of the device, the date at which the live image was captured, and the time at which the live image was captured; and present the first plurality of actions again at a second point in time in an order based on the ranking when the user later acts upon the stored live image after the first point in time.
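Claims 1 and 14 both turn on selecting an object by its proximity to an overlaid visual guide: the nearest detected object becomes the object of interest, while farther objects can supply alternate actions. A minimal sketch of that selection step follows; the coordinate representation and Euclidean distance metric are assumptions for illustration, not details taken from the patent.

```python
import math

def objects_by_guide_proximity(objects, guide):
    """Order detected objects by distance to the visual guide's center.

    objects: list of (label, (x, y)) detected object centers
    guide:   (x, y) center of the visual guide overlaid on the live image

    The first entry is the object of interest (claim 1); later entries,
    being farther from the guide, are candidates for alternate actions
    associated with a second object (claim 14).
    """
    def distance(center):
        return math.hypot(center[0] - guide[0], center[1] - guide[1])

    return sorted(objects, key=lambda obj: distance(obj[1]))


# Hypothetical example: two objects detected in the live image,
# with the visual guide centered at the origin.
detected = [("movie_poster", (10, 10)), ("street_sign", (2, 1))]
ordered = objects_by_guide_proximity(detected, (0, 0))
```

The sign, being closer to the guide, is treated as the object of interest; the poster would drive any alternate action presented to the user.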
Copyright KISTI. All Rights Reserved.