System, method and apparatus of eye tracking or gaze detection applications including facilitating action on or interaction with a simulated object
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): G06T-019/00; G09G-005/377; G06Q-030/06; G06F-003/01; G02B-027/01; G06F-003/0488; G06K-009/00
Application Number: US-0863698 (filed 2018-01-05)
Registration Number: US-10127735 (granted 2018-11-13)
Inventor / Address: Spivack, Nova
Applicant / Address: Augmented Reality Holdings 2, LLC
Agent / Address: London Bridge Ventures
Citation Information: Cited by 0 patents; cites 101 patents
Abstract
Techniques are disclosed for facilitating action by a user on a simulated object in an augmented reality environment. In some embodiments, a method includes, detecting a gesture of the user in a real environment via a sensor of the device; wherein, the gesture includes, movement of eye ball or eye focal point of one or more eyes of the user. The gesture can be detected by tracking: a movement of one or more eyes of the user, a non-movement of one or more eyes of the user, a location of a focal point of one or more eyes of the user, and/or a movement of an eye lid of one or more eyes of the user. The gesture can be captured to implement the action on the simulated object in the augmented reality environment.
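The abstract's core mechanism, distinguishing a deliberate eye gesture from ordinary eye movement by the speed or velocity of the eyes and deriving the focal point from it, resembles velocity-threshold fixation identification (I-VT) from the eye-tracking literature. The following is an illustrative sketch of that general technique, not the patent's disclosed implementation; all names and the threshold value are hypothetical:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # gaze coordinates (screen or world units)
    y: float

def _centroid(group):
    # Centroid of a sample group plus its dwell duration.
    n = len(group)
    cx = sum(s.x for s in group) / n
    cy = sum(s.y for s in group) / n
    return (cx, cy, group[-1].t - group[0].t)

def classify_fixations(samples, velocity_threshold=100.0):
    """Velocity-threshold (I-VT) classification: runs of consecutive samples
    whose point-to-point speed stays below the threshold form one fixation.
    Returns a list of (centroid_x, centroid_y, duration) tuples; each
    centroid stands in for an "eye focal point" determined from velocity."""
    fixations = []
    current = []
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t - prev.t
        speed = hypot(cur.x - prev.x, cur.y - prev.y) / dt if dt > 0 else float("inf")
        if speed < velocity_threshold:
            if not current:
                current.append(prev)
            current.append(cur)
        elif current:
            # High speed marks a saccade: close out the current fixation.
            fixations.append(_centroid(current))
            current = []
    if current:
        fixations.append(_centroid(current))
    return fixations
```

A steady gaze cluster followed by a rapid jump to a new location would yield two fixations, whose centroids could then be hit-tested against simulated objects.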
Representative Claims
1. A method to facilitate action by a user via a device on a simulated object in an augmented reality environment, the method comprising: detecting a gesture of the user in a real environment via a sensor of the device; wherein the gesture includes one or more of: movement of one or more eyes of the user, and movement of eye focal point of the one or more eyes of the user; capturing the gesture to implement the action on the simulated object in the augmented reality environment; wherein the gesture is detected based on a given speed or velocity of the movement of the one or more eyes while the one or more eyes is in motion relative to the user; wherein the action to be performed on the simulated object is based on the gesture that is detected; wherein the gesture includes locating the eye focal point of the one or more eyes of the user to target the simulated object that is in the eye focal point; wherein the eye focal point is determined from the given speed or velocity.

2. The method of claim 1, wherein the user is associated with a physical location of the real environment; wherein the simulated object is associated with one or more of: the physical location, location ranges in or near the physical location, a physical entity in or near the physical location, and another user in or near the physical location.

3. The method of claim 1, wherein the user is associated with a physical location of the real environment; wherein the simulated object includes information or data associated with one or more of: the physical location, location ranges in or near the physical location, a physical entity in or near the physical location, and another user in or near the physical location.

4. The method of claim 1, wherein the gesture is detected via tracking, by the sensor, of one or more of: a movement of the one or more eyes of the user, a direction of the movement of the one or more eyes of the user, a non-movement of the one or more eyes of the user, and a location of the eye focal point of the one or more eyes of the user.

5. The method of claim 1, wherein the gesture is detected via tracking, by the sensor, a movement of an eye lid of the one or more eyes of the user.

6. The method of claim 1, wherein the gesture further includes blinking of the one or more eyes of the user.

7. The method of claim 6, wherein the action associated with the blinking of the one or more eyes of the user includes selection of the simulated object in the eye focal point of the one or more eyes.

8. The method of claim 6, wherein the action associated with the blinking of the one or more eyes of the user includes one or more of: taking a photo in the augmented reality environment and selecting a button in the augmented reality environment.

9. The method of claim 1, wherein the action includes one or more of: a selection mechanism of the simulated object and pointing to the simulated object.

10. The method of claim 1, wherein the action includes tracing an outline of the simulated object.

11. The method of claim 1, wherein the sensor includes one or more of: an image sensor, a motion sensor, and a user stimulus sensor.

12. The method of claim 1, further comprising performing a search based on the simulated object that is in the eye focal point.

13. The method of claim 1, wherein the gesture further includes a movement of one or more of a hand or a finger of the user.

14. The method of claim 1, wherein the gesture further includes a movement of one or more of a head, an arm, a limb, and a torso of the user.

15. The method of claim 1, wherein the simulated object includes an advertisement relevant to a physical entity or physical object in a physical location of the real environment.

16. The method of claim 1, wherein the simulated object that is targeted in the eye focal point is depicted in the augmented reality environment as a result of a search query.

17. A system to enable interaction with a simulated object by a user in a simulated environment, the system comprising: a processor; a memory having stored thereon instructions which, when executed by the processor, cause the system to: detect a gesture of the user in a real environment; wherein the gesture includes one or more of: eye ball movement of one or more eyes of the user, and movement of eye focal point of the one or more eyes of the user; capture the gesture to facilitate the interaction with the simulated object in the simulated environment; wherein the gesture is captured using a given speed or velocity of the movement of the one or more eyes while the one or more eyes is in motion relative to the user; wherein the interaction with the simulated object that is enabled in the simulated environment is based on the gesture of the user; wherein the gesture includes locating the eye focal point of the one or more eyes of the user to target the simulated object that is in the eye focal point; wherein the eye focal point is determined from the given speed or velocity.

18. The system of claim 17, wherein the simulated object includes multimedia content.

19. The system of claim 17, wherein the simulated object represents or is associated with a physical entity or physical object in a physical location of the real environment; wherein the physical entity or the physical object represented by or associated with the simulated object has been recognized or identified through recognition of the physical object or the physical entity using a recognition pattern, or identified through detection of physical characteristics of the physical object or the physical entity.

20. The system of claim 17, wherein a search is performed based on the simulated object that is in the eye focal point.

21. The system of claim 17, wherein the simulated object represents or is associated with a physical object in a physical location of the real environment; wherein the physical object represented by or associated with the simulated object has been recognized through detection or identification of the physical object using one or more of: an augmented reality marker, a barcode, and an RFID code.

22. The system of claim 17, wherein the gesture is detected via tracking one or more of: a movement of the one or more eyes of the user, a non-movement of the one or more eyes of the user, a location of the eye focal point of the one or more eyes of the user, and a movement of an eye lid of the one or more eyes of the user.

23. The system of claim 17, wherein the gesture further includes blinking of the one or more eyes of the user.

24. The system of claim 17, wherein the simulated object that is in the eye focal point is depicted in the simulated environment as a result of a search query.

25. The system of claim 17, wherein a search is performed based on the simulated object that is in the eye focal point; wherein advertisements are identified or detected based on the search that is performed.

26. An apparatus which facilitates action by a user on a virtual object in a digital environment, the apparatus comprising: a sensor which detects a gesture of the user in a real environment; wherein the gesture includes one or more of: movement of the eye ball of one or more eyes of the user, and movement of eye focal point of the one or more eyes of the user; the gesture being captured to implement the action to be performed on the virtual object in the digital environment; wherein the gesture is detected based on a given speed or velocity of the movement of the one or more eyes while the one or more eyes is in motion relative to the user; wherein the action to be performed on the virtual object is based on the gesture that is detected; wherein the gesture includes locating the eye focal point of the one or more eyes of the user to target the virtual object that is in the eye focal point; wherein the eye focal point is determined from the given speed or velocity of the movement of the one or more eyes.

27. The apparatus of claim 26, further comprising: an output module to present an updated rendering of the virtual object resulting from the action being performed on the virtual object; wherein the output module includes one or more of an eye piece, goggles, or a mobile display.

28. The apparatus of claim 26, wherein the virtual object that is in the eye focal point is depicted in the digital environment as a result of a search query.

29. The apparatus of claim 26, wherein a search is performed based on the virtual object that is in the eye focal point.

30. A non-transitory machine-readable storage medium having stored thereon instructions which, when executed by a processor, cause the processor to perform a method to facilitate a transaction, via an online environment, of a physical product in a real environment that is represented by a simulated object, the method comprising: capturing a gesture performed with respect to the simulated object; wherein the gesture is performed by a user of the real environment to initiate the transaction of the physical product associated with the simulated object in the online environment; wherein the gesture performed by the user with respect to the simulated object includes movement of one or more eyes of the user; wherein the gesture is captured using a given speed or velocity of the movement of the one or more eyes while the one or more eyes is in motion relative to the user; wherein the gesture includes locating an eye focal point of the one or more eyes of the user to target the simulated object that is in the eye focal point; wherein the eye focal point is determined from the given speed or velocity; and conducting the transaction of the physical product in the online environment according to the gesture performed by the user with respect to the simulated object associated with the physical product.
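Claims 6 through 8 describe pairing a located focal point with a blink to act on the targeted object (selecting it, pressing a button, taking a photo). The general pattern can be sketched as a hit test of the focal point against object bounds, with the blink acting as the trigger. This is an illustrative sketch only; the class and function names are hypothetical and not drawn from the patent:

```python
from dataclasses import dataclass

@dataclass
class SimulatedObject:
    name: str
    x: float  # axis-aligned bounding box: top-left corner
    y: float
    w: float  # width and height
    h: float

def object_at_focal_point(objects, fx, fy):
    """Return the first simulated object whose bounds contain the focal point."""
    for obj in objects:
        if obj.x <= fx <= obj.x + obj.w and obj.y <= fy <= obj.y + obj.h:
            return obj
    return None

def handle_gesture(objects, focal_point, blink_detected):
    """Map a (focal point, blink) pair to an action: a blink while an object
    is targeted selects it; gazing without blinking merely hovers."""
    target = object_at_focal_point(objects, *focal_point)
    if target and blink_detected:
        return ("select", target.name)
    if target:
        return ("hover", target.name)
    return ("none", None)
```

In a fuller system, the `("select", ...)` result would feed whatever action the claim contemplates, such as initiating the transaction of claim 30 for the physical product the object represents.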
Patents Cited by This Patent (101)
Michael J. Pelosi, 3D navigation system using complementary head-mounted and stationary infrared beam detection units.
Edecker, Ada Mae; Slyanko, Alex, Apparatus and method for creating a virtual three-dimensional environment, and method of generating revenue therefrom.
Elko David A. (Poughkeepsie NY) Frey Jeffrey A. (New Paltz NY) Helffrich Audrey A. (Poughkeepsie NY) Nick Jeffrey M. (Fishkill NY) Swanson Michael D. (Poughkeepsie NY), Authorization method for conditional command execution.
Sbisa, Daniel Charles; Khalil, Anis; Cope, Warren B.; Susbilla, John T.; Rangarajan, Ramaswami, Centralized service control for a telecommunication system.
Hatcherson, Robert Allen; Holt, Richard Keith; Tarter, Stephen Edward; Johnson, Jeremiah Jay; Fleury, Frederick Bryan; Estep, II, George William, Container-based architecture for simulation of entities in a time domain.
Brush, II, Abbott Purdy; Gage, Christopher Arnell Surtees; Lection, David Bruce; Schell, David Allen, Delivery of objects in a virtual world using a descriptive container.
Strubbe, Hugo J.; Eshelman, Larry J.; Gutta, Srinivas; Milanski, John; Pelletier, Daniel, Environment-responsive user interface/entertainment device that simulates personal interaction.
Lannert, Eric Jeffrey; Gobran, Timothy John; Smith, Karen Therese; Willow, Michael James; Conant, Jonathan Christian; Murphy, Scott Michael, Goal based system utilizing a time based model.
Faris, Sadeg M.; Hamlin, Gregory; Flannery, James P., INTERNET-BASED METHOD OF AND SYSTEM FOR MONITORING SPACE-TIME COORDINATE INFORMATION AND BIOPHYSIOLOGICAL STATE INFORMATION COLLECTED FROM AN ANIMATE OBJECT ALONG A COURSE THROUGH THE SPACE-TIME CONT.
Dobbins, Michael K.; Rondot, Pascale; Shone, Eric D.; Yokell, Michael R.; Abshire, Kevin J.; Harbor, Sr., Anthony Ray; Lovell, Scott; Barron, Michael K., Immersive collaborative environment using motion capture, head mounted display, and cave.
Huang, Ronald Keryuan; Mayor, Rob; Mahe, Isabel; Piemonte, Patrick, Interactive gaming with co-located, networked direction and location aware devices.
Bansal, Deepak; Kalia, Manish; Shorey, Rajeev, Media access control scheduling methodology in master driven time division duplex wireless Pico-cellular systems.
Budzinski, Robert L., Memory system for storing and retrieving experience and knowledge with natural language utilizing state representation data, word sense numbers, function codes, directed graphs and/or context memory.
White Carl J. (1869 Strum Ave. Walla Walla WA 99372), Method and apparatus for entrapment prevention and lateral guidance in passenger conveyor systems.
Cha, Inhyok; Shah, Yogendra C.; Ye, Chunxuan, Method and apparatus for securing location information and access control using the location information.
Dockter, Michael Jon; Farber, Joel Frank; Lynn, Ronald William; Richardt, Randal James, Method and system for controlling access to data resources and protecting computing system resources from unauthorized access.
Rosenberg, Louis B.; Brave, Scott B., Method for providing force feedback to a user of an interface device based on interactions of a controlled cursor with graphical elements in a graphical user interface.
Pea, Roy D.; Mills, Michael I.; Hoffert, Eric; Rosen, Joseph H.; Dauber, Kenneth, Methods and apparatus for interactive network sharing of digital video content.
Rhoads, Geoffrey B.; Rodriguez, Tony F.; Lord, John D.; MacIntosh, Brian T.; Rhoads, Nicole; Conwell, William Y., Methods and systems for content processing.
Nguyen, Tuan; Duck, Anthony Peter; Rawles, Ian; Mair, Thomas; Gray, Robert, Methods and systems for electronics assembly system consultation and sales.
Ruppert, Ryan; Atashband, Farshid; Singh, Saurabh; Arbogast, Christopher P.; Phillips, Randy; Lowell, Mark, Networked gaming system including a live floor view module.
Chesley, Harry R.; Kimberly, Greg; Gupta, Anoop; Vellon, Manuel; Drucker, Steven M., Presentation system with distributed object oriented multi-user domain and separate view and model objects.
Kusumoto, Laura Lee; Sacerdoti, Earl David; Sigler, Leila Janine; Sigler, Sonya Lee, System and method for consumer-selected advertising and branding in interactive media.
Basso, Andrea; Eleftheriadis, Alexandros; Kalva, Hari; Puri, Atul; Schmidt, Robert Lewis, System and method of organizing data to facilitate access and streaming.
Conner Michael H. (Austin TX) Coskun Nurcan (Austin TX), System for processing application programs including a language independent context management technique.
Lannert, Eric Jeffrey; Gobran, Timothy John; Smith, Karen Therese; Willow, Michael James; Conant, Jonathan Christian; Murphy, Scott Michael, System, method and article of manufacture for a goal based system utilizing a time based model.