Deriving input from six degrees of freedom interfaces
IPC Classification Information
Country/Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th ed.)
G06F-003/033
G09G-005/08
G06F-003/0354
G06F-003/01
Application No.
US-0199239
(2011-08-22)
Registration No.
US-9229540
(2016-01-05)
Inventors / Address
Mandella, Michael J.
Gonzalez-Banos, Hector H.
Alboszta, Marek
Applicant / Address
Electronic Scripting Products, Inc.
Agent / Address
Maiorana, PC, Christopher P.
Citation Information
Times cited: 25
Patents cited: 190
Abstract
The present invention relates to interfaces and methods for producing input for software applications based on the absolute pose of an item manipulated or worn by a user in a three-dimensional environment. Absolute pose in the sense of the present invention means both the position and the orientation of the item as described in a stable frame defined in that three-dimensional environment. The invention describes how to recover the absolute pose with optical hardware and methods, and how to map at least one of the recovered absolute pose parameters to the three translational and three rotational degrees of freedom available to the item to generate useful input. The applications that can most benefit from the interfaces and methods of the invention involve 3D virtual spaces including augmented reality and mixed reality environments.
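The abstract's central idea, mapping recovered absolute pose parameters (three translational plus three rotational degrees of freedom) to application input, can be sketched as follows. This is an illustrative reading of the claims, not code from the patent; the `AbsolutePose` type, its field names, and the `map_pose_to_input` helper are all hypothetical.

```python
import math
from dataclasses import dataclass


@dataclass
class AbsolutePose:
    """Absolute pose: position plus orientation in a stable world frame.

    Hypothetical illustration of the six degrees of freedom described in
    the abstract: three translational (x, y, z) and three rotational
    (yaw, pitch, roll) parameters.
    """
    x: float
    y: float
    z: float
    yaw: float    # radians
    pitch: float  # radians
    roll: float   # radians


def map_pose_to_input(pose, scale=(1.0, 1.0, 1.0, 1.0, 1.0, 1.0)):
    """One-to-one mapping of pose parameters to application input,
    with per-axis scaling (cf. claims 2-3).

    Each recovered pose parameter is scaled independently before being
    handed to the application.
    """
    params = (pose.x, pose.y, pose.z, pose.yaw, pose.pitch, pose.roll)
    return tuple(p * s for p, s in zip(params, scale))
```

A scale factor greater than one in a translational axis would let small item motions sweep a large on-screen range, which is one way to read the claimed "scaling" mapping.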
Representative Claims
1. An interface for producing an input from an absolute pose of an item associated with a user in a three-dimensional environment, said interface comprising: a) a unit on-board said item, said unit configured to receive non-collinear optical inputs presented by at least one stationary object in said three-dimensional environment, said at least one stationary object having at least one feature detectable via an electromagnetic radiation, said at least one feature presenting a sufficient number of said non-collinear optical inputs for establishing a stable frame in said three-dimensional environment;b) processing electronics employing a computer vision algorithm using a homography to recover said absolute pose of said item from a geometrical description of said non-collinear optical inputs in terms of absolute pose parameters in said stable frame and to generate a signal related to at least one of said absolute pose parameters;c) an application employing said signal in said input, wherein said absolute pose of said item comprises at least three translational degrees of freedom and at least three rotational degrees of freedom, said at least one absolute pose parameter is related to at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom by a mapping and at least one aspect of said application varies with said absolute pose of said item. 2. The interface of claim 1, wherein said mapping comprises a one-to-one mapping. 3. The interface of claim 1, wherein said mapping comprises a scaling in at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom. 4. The interface of claim 1, wherein said at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprises two translational degrees of freedom defining a plane in said three-dimensional environment. 5. 
The interface of claim 4, further comprising a display and wherein said plane is plane-parallel with said display. 6. The interface of claim 5, wherein said display is integrated in one stationary object from among said at least one stationary object, and said one stationary object is selected from the group consisting of televisions, computers, electronic picture frames, game consoles, electronic devices, tools and appliances. 7. The interface of claim 1, wherein said at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprises three translational degrees of freedom defining a volume in said three-dimensional environment. 8. The interface of claim 7, further comprising a three-dimensional display and wherein said volume corresponds to a virtual display volume of said three-dimensional display. 9. The interface of claim 1, wherein said at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprises a rotational degree of freedom defining an axis in said three-dimensional environment and said absolute pose parameter maps to rotation around said axis. 10. The interface of claim 9, wherein said axis corresponds to a mechanical axis of said item. 11. The interface of claim 1, wherein said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprise three mutually orthogonal translational degrees of freedom and three mutually orthogonal rotational degrees of freedom. 12. The interface of claim 11, wherein said three mutually orthogonal translational degrees of freedom comprise three orthogonal Cartesian axes. 13. 
The interface of claim 12, wherein said three orthogonal Cartesian axes are used as world coordinates for describing said stable reference frame and a predetermined point on said item is expressed in said world coordinates and defines a position of said item in said world coordinates. 14. The interface of claim 11, wherein said three mutually orthogonal rotational degrees of freedom are described in said application by a mathematical equivalent belonging to the group consisting of (i) three Euler angles, (ii) pitch, yaw, and roll angles, (iii) Tait-Bryan angles, (iv) quaternions, and (v) direction cosines. 15. The interface of claim 1, wherein said at least one absolute pose parameter in said signal maps to three of said at least three translational degrees of freedom and three of said at least three rotational degrees of freedom thereby comprising a full parameterization of said absolute pose of said item, and wherein said application employs said full parameterization in said input for said application. 16. The interface of claim 15, further comprising a feedback unit for providing feedback to said user in response to at least one portion of said full parameterization. 17. The interface of claim 16, wherein said feedback unit comprises a display associated with said application, and said feedback comprises visual information. 18. The interface of claim 17, wherein said visual information comprises at least an image rendered from a point of view of said item, said point of view being derived from said at least one portion of said full parameterization. 19. The interface of claim 18, wherein said feedback unit comprises a haptic feedback unit associated with said application, and said feedback comprises haptic information. 20. The interface of claim 19, wherein said haptic information comprises feedback to at least one body part of said user, said haptic feedback being derived from at least one portion of said full parameterization. 21. 
The interface of claim 15, wherein said at least one stationary object comprises a display and said full parameterization of said item is employed by said application to compute an intersection of a mechanical axis of said item with said display. 22. The interface of claim 21, wherein said unit comprises an optic defining an optical axis, and said optical axis is chosen as said mechanical axis of said item. 23. The interface of claim 21, wherein said application comprises a place-holder entity placed at said intersection of said mechanical axis of said item with said display. 24. The interface of claim 23, wherein said place-holder entity is selected from the group consisting of an insertion cursor, a feedback cursor, a control icon, a display icon, a visual feedback entity. 25. The interface of claim 1, further comprising a relative motion sensor on-board said item for producing data indicative of a change in at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom. 26. The interface of claim 1, wherein said at least one stationary object comprises a device selected from the group consisting of a game console, a television, a stereo, an electronic picture frame, a computer, a tablet, an RF transmitter unit, a set-top box, a base station, a portable user device having a display, a non-portable user device having a display and an appliance. 27. The interface of claim 1, wherein said non-collinear optical inputs are selected from the group consisting of point-like inputs, line-like inputs, area-like inputs and volume-like inputs. 28. The interface of claim 27, wherein said at least one feature comprises an emitter of said electromagnetic radiation and said non-collinear optical inputs comprise emissions from said emitter. 29. 
The interface of claim 27, wherein said at least one feature comprises a reflector of said electromagnetic radiation and said non-collinear optical inputs comprise reflected electromagnetic radiation from said reflector. 30. The interface of claim 29, wherein said item further comprises an emitter for emitting said electromagnetic radiation into said three-dimensional environment. 31. The interface of claim 29, further comprising an emitter of a pattern of radiation for emitting said electromagnetic radiation into said three-dimensional environment. 32. The interface of claim 1, wherein said three-dimensional environment is selected from the group of environments consisting of real space, a cyberspace, a virtual space, an augmented reality space and a mixed space. 33. The interface of claim 1, wherein said item comprises a manipulated item. 34. The interface of claim 33, wherein said manipulated item is selected from the group consisting of wands, remote controls, portable phones, portable electronic devices, medical implements, digitizers, hand-held tools, hand-held clubs, gaming controls, gaming items, digital inking devices, pointers, remote touch devices, TV remotes and magic wands. 35. The interface of claim 34, wherein said manipulated item is a portable phone and said input is used to control a user device selected from the group consisting of a game console, a television, a stereo, an electronic picture frame, a computer, a tablet, an RF transmitter unit, a set-top box, a base station, a portable user device having a display, a non-portable user device having a display and an appliance. 36. The interface of claim 1, wherein said item comprises a wearable item. 37. The interface of claim 36, wherein said wearable item is selected from the group consisting of items affixed on headgear, on glasses, on gloves, on rings, on watches, on articles of clothing, on accessories, on jewelry and on accoutrements. 38. 
The interface of claim 37, wherein said input is used to control a user device selected from the group consisting of a game console, a game object, a television, a stereo, an electronic picture frame, a computer, a tablet, an RF transmitter unit, a set-top box, a base station, a portable user device having a display, a non-portable user device having a display and an appliance. 39. A method for producing an input from an absolute pose of an item associated with a user in a three-dimensional environment, said method comprising: a) determining a placement of at least one stationary object having at least one feature in said three-dimensional environment, said feature presenting a sufficient number of non-collinear optical inputs detectable via an electromagnetic radiation to establish a stable frame in said three-dimensional environment;b) receiving by a unit on-board said item said non-collinear optical inputs;c) using with processing electronics employing a computer vision algorithm that uses a homography to recover said absolute pose of said item from a geometrical description of said non-collinear optical inputs in terms of absolute pose parameters in said stable frame;d) generating a signal related to at least one of said absolute pose parameters;e) communicating said signal via a link to an application for use in generating said input, wherein said absolute pose comprises at least three translational degrees of freedom and at least three rotational degrees of freedom, said at least one absolute pose parameter is related to at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom by a mapping and at least one aspect of said application varies with said absolute pose of said item. 40. The method of claim 39, wherein said mapping comprises a one-to-one mapping. 41. 
The method of claim 39, wherein said mapping comprises a scaling in at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom. 42. The method of claim 39, further comprising: a) constructing a subspace of said at least three translational degrees of freedom and said at least three rotational degrees of freedom;b) projecting said at least one absolute pose parameter into said subspace to obtain a projected portion of said absolute pose parameter; andc) communicating said projected portion to said application for use in said input. 43. The method of claim 42, wherein said subspace is selected from the group consisting of points, axes, planes and volumes. 44. The method of claim 39, wherein said three-dimensional environment is selected from the group of environments consisting of real space, a cyberspace, a virtual space, an augmented reality space and a mixed space. 45. The method of claim 39, further comprising processing said signal to compute a position of said item in said application. 46. The method of claim 45, further comprising providing a feedback to said user depending on said position. 47. The method of claim 39, further comprising processing said signal to compute an orientation of said item in said application. 48. The method of claim 47, further comprising providing a feedback to said user depending on said orientation. 49. The method of claim 39, further comprising providing a relative motion sensor on-board said object for producing data indicative of a change in at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom. 50. The method of claim 39, wherein said electromagnetic radiation is emitted from on-board said item. 51. The method of claim 39, wherein said electromagnetic radiation is emitted from an emitter located at a predetermined location in said three-dimensional environment. 52. 
The method of claim 51, wherein said electromagnetic radiation is emitted in a pattern. 53. The method of claim 39, wherein said feature emits said electromagnetic radiation. 54. The method of claim 53, wherein said electromagnetic radiation is further reflected from known objects in said three-dimensional environment in a pattern before being received by said on-board unit. 55. A method for controlling a controlled object based on an absolute pose of an item associated with a user in a three-dimensional environment, said method comprising: a) placing in said three-dimensional environment at least one stationary object having at least one feature, said at least one feature presenting a sufficient number of non-collinear optical inputs detectable via an electromagnetic radiation to establish a stable frame in said three-dimensional environment;b) receiving by a unit on-board said item said non-collinear optical inputs;c) recovering with processing electronics employing a computer vision algorithm that uses a homography said absolute pose of said item from a geometrical description of said non-collinear optical inputs in terms of absolute pose parameters in said stable frame;d) generating a signal related to at least one of said absolute pose parameters;e) communicating said signal to a control of said controlled object, wherein said absolute pose comprises at least three translational degrees of freedom and at least three rotational degrees of freedom, said at least one absolute pose parameter is related to at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom by a mapping and at least one aspect of said application varies with said absolute pose of said item. 56. The method of claim 55, wherein said mapping comprises a one-to-one mapping. 57. 
The method of claim 55, wherein said mapping comprises a scaling of said at least one absolute pose parameter to a corresponding at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom. 58. The method of claim 55, further comprising: a) constructing a subspace of said at least three translational degrees of freedom and said at least three rotational degrees of freedom;b) projecting said absolute pose parameter into said subspace to obtain a projected portion of said absolute pose parameter; andc) communicating said projected portion to said control for controlling said controlled object. 59. The method of claim 58, wherein said subspace is selected from the group consisting of points, axes, planes and volumes. 60. The method of claim 55, wherein said three-dimensional environment is selected from the group of environments consisting of real space, a cyberspace, a virtual space, an augmented reality space and a mixed space. 61. The method of claim 60, wherein said controlled object resides in one of said group of environments consisting of real space, a cyberspace, a virtual space, an augmented reality space and a mixed space. 62. The method of claim 55, further comprising providing a feedback to said control depending on said at least one absolute pose parameter.
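The computer-vision step named in claims 1, 39, and 55, recovering absolute pose from a homography established by non-collinear optical inputs, can be sketched for the simplest case. This is the textbook planar-homography decomposition, not code from the patent; it assumes a calibrated camera (intrinsics K = identity), a world plane at z = 0, and a noise-free H, and the function name is hypothetical.

```python
import math


def decompose_homography(H):
    """Recover an absolute pose (R, t) from a 3x3 planar homography.

    For a calibrated camera viewing a plane at z = 0, the homography
    satisfies H ~ [r1 | r2 | t], where r1, r2 are the first two columns
    of the rotation matrix R and t is the translation. H is given as a
    list of three rows.
    """
    # Extract the columns of H.
    h1 = [H[i][0] for i in range(3)]
    h2 = [H[i][1] for i in range(3)]
    h3 = [H[i][2] for i in range(3)]
    # A homography is defined only up to scale; fix the scale so that
    # the first rotation column has unit length.
    lam = 1.0 / math.sqrt(sum(v * v for v in h1))
    r1 = [lam * v for v in h1]
    r2 = [lam * v for v in h2]
    # The third rotation column completes the right-handed frame.
    r3 = [r1[1] * r2[2] - r1[2] * r2[1],
          r1[2] * r2[0] - r1[0] * r2[2],
          r1[0] * r2[1] - r1[1] * r2[0]]
    t = [lam * v for v in h3]
    R = [[r1[i], r2[i], r3[i]] for i in range(3)]  # columns r1, r2, r3
    return R, t
```

In practice H would itself be estimated from at least four non-collinear point correspondences (e.g. by direct linear transformation), and the recovered R re-orthonormalized, since measured homographies are noisy; this sketch only shows why non-collinear optical inputs suffice to pin down all six degrees of freedom.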
Patents Cited by This Patent (190)
Cipolla Roberto (Cambridge GBX) Okamoto Yasukazu (Chiba-ken JPX) Kuno Yoshinori (Osaka-fu JPX), 3D human interface apparatus using motion recognition based on dynamic image processing.
Nakadaira, Atsushi; Suzuki, Naobumi; Ochi, Daisuke, 3D pointing method, 3D display control method, 3D pointing device, 3D display control device, 3D pointing program, and 3D display control program.
Buermann,Dale H.; Gonzalez Banos,Hector H.; Mandella,Michael J.; Carl,Stewart R., Apparatus and method for determining an inclination of an elongate object contacting a plane surface.
Roberts Andrew F. (Charlestown MA) Sachs Emanuel M. (Somerville MA) Stoops David R. (Cambridge MA) Ulrich Karl T. (Belmont MA) Siler Todd L. (Cambridge MA) Gossard David C. (Andover MA) Celniker Geor, Computer aided drawing in three dimensions.
Yutaka Usuda JP; Ichirou Takeuchi JP; Sueo Amemiya JP; Jun Namiki JP; Naoki Miyauchi JP, Coordinate input pen, and electronic board, coordinate input system and electronic board system using the coordinate input pen.
Katsuyuki Omura JP; Kunikazu Tsuda JP; Makoto Tanaka JP, Coordinate position inputting/detecting device, a method for inputting/detecting the coordinate position, and a display board system.
Katsuyuki Omura JP; Takao Inoue JP, Coordinate position inputting/detecting device, a method for inputting/detecting the coordinate position, and a display board system.
Teufel Thomas,DEX ; Keller Gerhard,DEX, Data acquisition device for optical detection and storage of visually marked and projected alphanumerical characters, graphics and photographic picture and/or three dimensional topographies.
Fagin, Ronald; Megiddo, Nimrod; Morris, Robert John Tasman; Rosen, Hal Jervis; Rajagopalan, Sridhar; Zimmerman, Thomas Guthrie, Digital pen using speckle tracking.
McSheery Tracy D. ; Black John R. ; Nollet Scott R. ; Johnson Jack L. ; Jivan Vinay C., Distributed-processing motion tracking system for tracking individually modulated light points.
Kimber, Donald G.; Guo, Feng; Rieffel, Eleanor G.; Murai, Kazumasa; Omohundro, Stephen, Featured wands for camera calibration and as a gesture based 3D interface device.
Chris Dominick Kasabach ; John Michael Stivoric ; Francine Duskey Gemperle ; Christopher Pacione ; Eric Teller, Handheld apparatus for recognition of writing, for remote communication, and for user defined input templates.
Grajski Kamil A. (San Jose CA) Chow Yen-Lu (Saratoga CA) Lee Kai-Fu (Saratoga CA), Handwriting signal processing front-end for handwriting recognizers.
Baron Ehud (Haifa ILX) Prishvin Alexander (Jerusalem ILX) Bar-Itzhak Zeev (Haifa ILX) Korsensky Victor (Haifa ILX), Handwritting input apparatus for handwritting recognition using more than one sensing technique.
Carl,Stewart R.; Alboszta,Marek; Mandella,Michael J.; Gonzalez,Hector H; Hawks,Timothy, Implement for optically inferring information from a jotting surface and environmental landmarks.
Foxlin Eric M. (Cambridge MA), Inertial orientation tracker apparatus having automatic drift compensation for tracking human head and other similarly s.
Akira Morishita JP; Hiroshi Mizoguchi JP, Information input device, position information holding device, and position recognizing system including them.
Aoyagi Tetsuji (Kanagawa JPX) Miura Takeshi (Aomori JPX) Suzuki Hajime (Kanagawa JPX) Sanchez Russell I. (Seattle WA) Svancarek Mark K. (Redmond WA) Suzuki Toru (Kanagawa JPX) Paull Mike M. (Seattle , Input device for providing multi-dimensional position coordinate signals to a computer.
Howell David N. L. (Anel Gate ; Jubillee Drive Colwall ; Malvern ; Worcestershire WR13 6DQ GB2) Hilton Colin S. (20 Harrier House ; Falcon Road Battersea - London SW11 2NW GB2) Bridle John S. (14 Cra, Method and apparatus for capturing information in drawing or writing.
Hinckley, Kenneth P.; Sinclair, Michael J.; Szeliski, Richard S.; Conway, Matthew J.; Hanson, Erik J., Method and apparatus for computer input using six degrees of freedom.
Buermann,Dale H.; Mandella,Michael J.; Carl,Stewart R.; Zhang,Guanghua G.; Gonzalez Banos,Hector H., Method and apparatus for determining absolute position of a tip of an elongate object on a plane surface with invariant features.
Horton Mike A. (Berkeley CA) Newton A. Richard (Woodside CA), Method and apparatus for determining position and orientation of a moveable object using accelerometers.
Dolfing Jannes G. A.,NLX ; Hab-Umbach Reinhold,DEX, Method and apparatus for on-line handwriting recognition based on feature vectors that use aggregated observations derived from time-sequential frames.
Kasabach, Chris Dominick; Stivoric, John Michael; Gemperle, Francine Duskey; Pacione, Christopher; Teller, Eric, Method and apparatus for recognition of writing, for remote communication, and for user defined input templates.
Stork David G. ; Angelo Michael ; Wolff Gregory J., Method and apparatus for tracking a hand-held writing instrument with multiple sensors that are calibrated by placing the writing instrument in predetermined positions with respect to the writing sur.
Smithies Christopher P. K. (Wimborne GB2) Newman Jeremy M. (Somerset GB2), Method and system for the capture, storage, transport and authentication of handwritten signatures.
Smithies Christopher Paul Kenneth (Corfe Mullen ; Wimborne GB2) Newman Jeremy Mark (Frome ; Somerset GB2), Method and system for the verification of handwritten signatures.
Bi Depeng ; Cohen Gary Steven ; Cortopassi Michael ; George Jose T. ; Gladwin S. Christopher ; Hsiung Harry ; Lim Peng ; Parham John Allan ; Soucy Alan Joseph ; Voegeli Derick W. ; Wilson James Y., Mouse emulation with a passive pen.
Wright ; Jr. ; Sanford J. ; Anderson ; Peter T. ; Grimes ; Ralph S., Multi-modal data input/output apparatus and method compatible with bio-engineering requirements.
Sontag Heinz (Friedrichshafen DEX) Elias Hartmut (Meersburg DEX) Ludwig Wolfgang (Taegerwilen CHX) Fritzsch Walter (Markdorf DEX) Eschner Wolfgang (Daisendorf DEX) Hundhausen Rainer (Immenstaad DEX) , Optical position detection.
Lazzouni Mohamed (Worcester MA) Kazeroonian Ali Seyed (Framingham MA) Gholizadeh Dariush (Framingham MA) Ali Omar (Roslindale MA 4), Pen and paper information recording system.
Lazzouni Mohamed (Worcester MA) Yousaf Mohamed (Shrewsbury MA) Qureshi Rizwan A. (Worcester MA) Nazir Naveed A. (Shrewsbury MA), Pen and paper information recording system using an imaging pen.
Nam, Dong-kyung; Yoo, Ho-joon; Cho, Woo-jong; Jeong, Moon-sik; Kim, Chang-su; Hong, Sun-gi; Park, Yong-gook; Shim, Jung-hyun, Pointing input device, method, and system using image pattern.
Ohta, Keizo, Position calculation apparatus, storage medium storing position calculation program, game apparatus, and storage medium storing game program.
Hotelling, Steven Porter; King, Nicholas Vincent; Kerr, Duncan Robert; Low, Wing Kong, Remote control systems that can distinguish stray light sources.
Hotelling, Steven Porter; King, Nicholas Vincent; Kerr, Duncan Robert; Low, Wing Kong, Remote control systems that can distinguish stray light sources.
Horvitz Eric J. ; Markley Michael E. ; Sonntag Martin L., System and method for resizing an input position indicator for a user interface of a computer system.
Pittel,Arkady; Schiller,Ilya; Liberman,Sergey; Shleppi,Garry; Funk,Ethan A.; Subach,Vladimir V.; Goldman,Andrew M.; Reznik,Leonid; Selitsky,Simon; Stein,Mario A., Tracking motion of a writing instrument.
Skurnik, David; Sprague, Randall Brian; Jones, Geoffrey Hugh; Abbott, Eric Charles; Law, Waisiu, Very high speed photodetector system using a PIN photodiode array for position sensing.
Paull Mike M. ; Sanchez Russell I. ; Svancarek Mark K. ; Aoyagi Tetsuji,JPX, Wireless input device, for use with a computer, employing a movable light-emitting element and a stationary light-receiv.
Poulos, Adam G.; Tomlin, Arthur; Balan, Alexandru Octavian; Dulu, Constantin; Edmonds, Christopher Douglas, Optically augmenting electromagnetic tracking in mixed reality.