Apparatus and method for determining an absolute pose of a manipulated object in a real three-dimensional environment with invariant features
IPC Classification Information
Country / Type
United States (US) Patent
Registered (Granted)
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06F-003/033
Application Number
UP-0584402
(2009-09-03)
Registration Number
US-7826641
(2010-11-22)
Inventors / Address
Mandella, Michael J.
Gonzalez-Banos, Hector H.
Alboszta, Marek
Applicant / Address
Electronic Scripting Products, Inc.
Agent / Address
Alboszta, Marek
Citation Information
Cited by: 132
Cited patents: 147
Abstract
An apparatus and method for optically inferring an absolute pose of a manipulated object in a real three-dimensional environment from on-board the object with the aid of an on-board optical measuring arrangement. At least one invariant feature located in the environment is used by the arrangement for inferring the absolute pose. The inferred absolute pose is expressed with absolute pose data (φ, θ, ψ, x, y, z) that represents Euler rotated object coordinates expressed in world coordinates (Xo, Yo, Zo) with respect to a reference location, such as, for example, the world origin. Other conventions for expressing absolute pose data in three-dimensional space and representing all six degrees of freedom (three translational degrees of freedom and three rotational degrees of freedom) are also supported. Irrespective of format, a processor prepares the absolute pose data and identifies a subset that may contain all or fewer than all absolute pose parameters. This subset is transmitted to an application via a communication link, where it is treated as input that allows a user of the manipulated object to interact with the application and its output.
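The abstract's pose format (φ, θ, ψ, x, y, z) combines three Euler angles with a Cartesian position in world coordinates (Xo, Yo, Zo). As a minimal illustration of how such absolute pose data maps object coordinates into world coordinates, the sketch below assumes an illustrative z-y-x (yaw-pitch-roll) Euler convention; the patent itself notes that other conventions for all six degrees of freedom are equally supported, and the function names here are hypothetical, not from the patent.

```python
import math


def euler_to_matrix(phi, theta, psi):
    """Rotation matrix from Euler angles (phi, theta, psi).

    Illustrative z-y-x (yaw-pitch-roll) convention: R = Rz(phi) @ Ry(theta) @ Rx(psi).
    The patent supports other Euler conventions as well.
    """
    cph, sph = math.cos(phi), math.sin(phi)
    cth, sth = math.cos(theta), math.sin(theta)
    cps, sps = math.cos(psi), math.sin(psi)
    return [
        [cph * cth, cph * sth * sps - sph * cps, cph * sth * cps + sph * sps],
        [sph * cth, sph * sth * sps + cph * cps, sph * sth * cps - cph * sps],
        [-sth,      cth * sps,                   cth * cps],
    ]


def object_to_world(pose, point_obj):
    """Map a point from object coordinates into world coordinates (Xo, Yo, Zo).

    `pose` is absolute pose data (phi, theta, psi, x, y, z) relative to a
    reference location such as the world origin (0, 0, 0).
    """
    phi, theta, psi, x, y, z = pose
    R = euler_to_matrix(phi, theta, psi)
    px, py, pz = point_obj
    return tuple(R[i][0] * px + R[i][1] * py + R[i][2] * pz + t
                 for i, t in enumerate((x, y, z)))
```

With an identity orientation the mapping reduces to a pure translation; a 90° yaw (φ = π/2) rotates the object's x-axis onto the world y-axis before translating.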
Representative Claims
The invention claimed is: 1. An apparatus for processing absolute pose data derived from an absolute pose of a manipulated object in a real three-dimensional environment, said apparatus comprising: a) at least one invariant feature in said real three-dimensional environment; b) an optical measuring means for optically inferring said absolute pose from on-board said manipulated object using said at least one invariant feature and expressing said inferred absolute pose with absolute pose data (φ, θ, ψ, x, y, z) representing Euler rotated object coordinates expressed in world coordinates (Xo, Yo, Zo) with respect to a reference location; c) a processor for preparing said absolute pose data and identifying a subset of said absolute pose data; and d) a communication link for transmitting said subset to an application. 2. The apparatus of claim 1, wherein said reference location is a world origin (0, 0, 0) of said world coordinates (Xo, Yo, Zo). 3. The apparatus of claim 1, wherein said at least one invariant feature comprises at least one high optical contrast feature. 4. The apparatus of claim 3, wherein said at least one high optical contrast feature comprises at least one predetermined light source. 5. The apparatus of claim 4, wherein said at least one predetermined light source is selected from the group consisting of screens, display pixels, fiber optical waveguides and light-emitting diodes. 6. The apparatus of claim 5, wherein said light-emitting diode is an infrared emitting diode. 7. The apparatus of claim 3, wherein said at least one high optical contrast feature comprises a retro-reflector. 8. The apparatus of claim 1, wherein said application comprises an output displayed to a user and said subset comprises input for interacting with said output. 9. The apparatus of claim 8, wherein said output comprises at least one visual element. 10. 
The apparatus of claim 9, wherein said at least one visual element is selected from the group consisting of images, graphics, text and icons. 11. The apparatus of claim 9, further comprising a display for displaying said visual elements. 12. The apparatus of claim 11, wherein said display comprises a touch sensitive display. 13. The apparatus of claim 8, wherein said output comprises audio elements. 14. The apparatus of claim 13, wherein said audio elements are selected from the group consisting of tones, tunes, musical compositions and alert signals. 15. The apparatus of claim 13, further comprising a speaker for playing said audio elements. 16. The apparatus of claim 1, wherein said optical measuring means comprises a light-measuring component having a lens and an optical sensor. 17. The apparatus of claim 16, wherein said optical sensor is a photodetector for sensing light from said at least one invariant feature. 18. The apparatus of claim 17, wherein said at least one invariant feature comprises at least one predetermined light source and said photodetector is a position-sensing device for determining a centroid of light flux from said at least one predetermined light source. 19. The apparatus of claim 18, wherein said at least one predetermined light source comprises a plurality of infrared emitting diodes. 20. The apparatus of claim 19, wherein said plurality of infrared emitting diodes is modulated in a predetermined modulation pattern. 21. The apparatus of claim 1, wherein said optical measuring means comprises an active illumination component for projecting a predetermined light into said real three-dimensional environment and for receiving a scattered portion of said predetermined light from said at least one invariant feature. 22. The apparatus of claim 21, wherein said active illumination component comprises a scanning unit having a scanning mirror for directing said predetermined light into said real three-dimensional environment. 23. 
The apparatus of claim 1, further comprising an auxiliary motion detection component. 24. The apparatus of claim 23, wherein said auxiliary motion detection component is an inertial sensing device. 25. The apparatus of claim 24, wherein said inertial sensing device is selected from the group consisting of gyroscopes and accelerometers. 26. The apparatus of claim 23, wherein said auxiliary motion detection component is selected from the group consisting of an optical flow measuring unit, a magnetic field measuring unit and an acoustic field measuring unit. 27. The apparatus of claim 1, wherein said manipulated object is selected from the group consisting of pointers, wands, remote controls, three-dimensional mice, game controls, gaming objects, jotting implements, surgical implements, three-dimensional digitizers, digitizing styluses, hand-held tools and utensils. 28. The apparatus of claim 27, wherein said gaming objects are selected from the group consisting of golfing clubs, tennis rackets, squash rackets, guitars, guns, knives, swords, spears, balls, clubs, bats, steering wheels, joysticks and flying controls. 29. The apparatus of claim 1, wherein said application is selected from the group consisting of reality simulations, remote control applications, cyber games, virtual worlds, searches, photography applications, audio applications and augmented realities. 30. 
A method for processing absolute pose data derived from an absolute pose of a manipulated object, said method comprising: a) placing said manipulated object in a real three-dimensional environment with at least one invariant feature; b) optically inferring said absolute pose from on-board said manipulated object using said at least one invariant feature and expressing said inferred absolute pose with absolute pose data (φ, θ, ψ, x, y, z) representing Euler rotated object coordinates expressed in world coordinates (Xo, Yo, Zo) with respect to a reference location; c) preparing said absolute pose data using a processor; d) identifying a subset of said absolute pose data; and e) transmitting said subset to an application using a communication link. 31. The method of claim 30, wherein said manipulated object is undergoing a motion and said optical inferring is performed periodically at times ti, such that said absolute pose data describes said motion. 32. The method of claim 30, wherein said at least one invariant feature comprises at least one predetermined light source and said method comprises modulating said at least one predetermined light source. 33. The method of claim 30, wherein said subset comprises at least one orientation parameter. 34. The method of claim 33, wherein said at least one orientation parameter comprises at least one Euler angle. 35. The method of claim 30, wherein said subset comprises at least one position parameter. 36. The method of claim 35, wherein said at least one position parameter comprises at least one Cartesian coordinate. 37. The method of claim 30, wherein said subset comprises all absolute pose data (φ, θ, ψ, x, y, z) and said application comprises one-to-one motion mapping between said real three-dimensional space and a cyberspace of said application. 38. 
The method of claim 30, wherein said application provides an output and said subset is used for a user interaction selected from the group consisting of text input, modification of said output, re-arrangement of said output. 39. The method of claim 38, wherein said output comprises at least one element selected from the group consisting of audio elements and visual elements. 40. The method of claim 30, wherein said step of optically inferring comprises receiving light from said at least one invariant feature by an optical measuring component having an optical sensor and a lens. 41. The method of claim 30, wherein said step of optically inferring comprises projecting a predetermined light into said real three-dimensional environment containing said at least one invariant feature and receiving a scattered portion of said predetermined light beam from said at least one invariant feature. 42. The method of claim 41, wherein said step of projecting further comprises directing said predetermined light into said real three-dimensional environment by scanning. 43. The method of claim 30, further comprising interpolating said absolute pose with an auxiliary motion detection component. 44. The method of claim 43, wherein said auxiliary motion detection component comprises an inertial sensing device and said step of interpolating is performed when not optically inferring said absolute pose. 45. The method of claim 43, wherein said auxiliary motion detection component is selected from the group consisting of optical flow measuring units, magnetic field measuring units and acoustic field measuring units, and said step of interpolating is performed when not optically inferring said absolute pose. 46. The method of claim 30, wherein said application comprises representing a user by an avatar having a token corresponding to said manipulated object. 47. 
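Claims 43-45 describe interpolating the absolute pose with an auxiliary motion detection component (e.g. an inertial sensing device) between the periodic optical inferences of claim 31. A minimal dead-reckoning sketch of that idea follows; the function name, the velocity estimate, and the decision to leave orientation unchanged are all illustrative assumptions, not the patent's implementation.

```python
def interpolate_pose(last_optical_pose, velocity, dt):
    """Dead-reckon position between optical pose fixes (claims 43-44 sketch).

    `last_optical_pose` is the most recent optically inferred
    (phi, theta, psi, x, y, z); `velocity` is a hypothetical (vx, vy, vz)
    estimate integrated from an on-board inertial sensing device.
    Orientation interpolation is omitted for brevity.
    """
    phi, theta, psi, x, y, z = last_optical_pose
    vx, vy, vz = velocity
    return (phi, theta, psi, x + vx * dt, y + vy * dt, z + vz * dt)
```

In use, the optical arrangement would periodically overwrite the interpolated estimate with a fresh absolute pose, bounding the drift inherent in inertial integration.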
The method of claim 30, wherein said application superposes at least one virtual element onto said real three-dimensional environment. 48. The method of claim 47, wherein said at least one virtual element is rendered interactive with said manipulated object by said application. 49. The method of claim 47, wherein said virtual element comprises data about at least one aspect of said real three-dimensional environment.
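Steps c) and d) of claim 30 — preparing the absolute pose data and identifying a subset of it — can be sketched as selecting named parameters from the six-degree-of-freedom tuple. The class and helper below are hypothetical illustrations, assuming the subset is chosen by parameter name; the subset may contain all six parameters (one-to-one motion mapping, claim 37), only orientation parameters (claims 33-34), or only position parameters (claims 35-36).

```python
from dataclasses import dataclass
from typing import Dict, Sequence


@dataclass
class AbsolutePose:
    """Absolute pose data (phi, theta, psi, x, y, z): Euler rotated object
    coordinates expressed in world coordinates relative to a reference location."""
    phi: float
    theta: float
    psi: float
    x: float
    y: float
    z: float

    def as_tuple(self):
        return (self.phi, self.theta, self.psi, self.x, self.y, self.z)


PARAM_NAMES = ("phi", "theta", "psi", "x", "y", "z")


def identify_subset(pose: AbsolutePose, wanted: Sequence[str]) -> Dict[str, float]:
    """Select the absolute-pose parameters an application needs.

    The returned subset would then be transmitted to the application over a
    communication link (step e) and treated as user input.
    """
    data = dict(zip(PARAM_NAMES, pose.as_tuple()))
    return {name: data[name] for name in wanted}
```

For example, an application needing only orientation would request `("phi", "theta", "psi")`, while one-to-one motion mapping would request all of `PARAM_NAMES`.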
Patents cited in this patent (147)
Cipolla Roberto (Cambridge GBX) Okamoto Yasukazu (Chiba-ken JPX) Kuno Yoshinori (Osaka-fu JPX), 3D human interface apparatus using motion recognition based on dynamic image processing.
Buermann,Dale H.; Gonzalez Banos,Hector H.; Mandella,Michael J.; Carl,Stewart R., Apparatus and method for determining an inclination of an elongate object contacting a plane surface.
Yutaka Usuda JP; Ichirou Takeuchi JP; Sueo Amemiya JP; Jun Namiki JP; Naoki Miyauchi JP, Coordinate input pen, and electronic board, coordinate input system and electronic board system using the coordinate input pen.
Katsuyuki Omura JP; Kunikazu Tsuda JP; Makoto Tanaka JP, Coordinate position inputting/detecting device, a method for inputting/detecting the coordinate position, and a display board system.
Katsuyuki Omura JP; Takao Inoue JP, Coordinate position inputting/detecting device, a method for inputting/detecting the coordinate position, and a display board system.
Teufel Thomas,DEX ; Keller Gerhard,DEX, Data acquisition device for optical detection and storage of visually marked and projected alphanumerical characters, graphics and photographic picture and/or three dimensional topographies.
Fagin, Ronald; Megiddo, Nimrod; Morris, Robert John Tasman; Rosen, Hal Jervis; Rajagopalan, Sridhar; Zimmerman, Thomas Guthrie, Digital pen using speckle tracking.
McSheery Tracy D. ; Black John R. ; Nollet Scott R. ; Johnson Jack L. ; Jivan Vinay C., Distributed-processing motion tracking system for tracking individually modulated light points.
Chris Dominick Kasabach ; John Michael Stivoric ; Francine Duskey Gemperle ; Christopher Pacione ; Eric Teller, Handheld apparatus for recognition of writing, for remote communication, and for user defined input templates.
Grajski Kamil A. (San Jose CA) Chow Yen-Lu (Saratoga CA) Lee Kai-Fu (Saratoga CA), Handwriting signal processing front-end for handwriting recognizers.
Baron Ehud (Haifa ILX) Prishvin Alexander (Jerusalem ILX) Bar-Itzhak Zeev (Haifa ILX) Korsensky Victor (Haifa ILX), Handwritting input apparatus for handwritting recognition using more than one sensing technique.
Carl,Stewart R.; Alboszta,Marek; Mandella,Michael J.; Gonzalez,Hector H; Hawks,Timothy, Implement for optically inferring information from a jotting surface and environmental landmarks.
Akira Morishita JP; Hiroshi Mizoguchi JP, Information input device, position information holding device, and position recognizing system including them.
Aoyagi Tetsuji (Kanagawa JPX) Miura Takeshi (Aomori JPX) Suzuki Hajime (Kanagawa JPX) Sanchez Russell I. (Seattle WA) Svancarek Mark K. (Redmond WA) Suzuki Toru (Kanagawa JPX) Paull Mike M. (Seattle , Input device for providing multi-dimensional position coordinate signals to a computer.
Howell David N. L. (Anel Gate ; Jubillee Drive Colwall ; Malvern ; Worcestershire WR13 6DQ GB2) Hilton Colin S. (20 Harrier House ; Falcon Road Battersea - London SW11 2NW GB2) Bridle John S. (14 Cra, Method and apparatus for capturing information in drawing or writing.
Buermann,Dale H.; Mandella,Michael J.; Carl,Stewart R.; Zhang,Guanghua G.; Gonzalez Banos,Hector H., Method and apparatus for determining absolute position of a tip of an elongate object on a plane surface with invariant features.
Dolfing Jannes G. A.,NLX ; Hab-Umbach Reinhold,DEX, Method and apparatus for on-line handwriting recognition based on feature vectors that use aggregated observations derived from time-sequential frames.
Kasabach, Chris Dominick; Stivoric, John Michael; Gemperle, Francine Duskey; Pacione, Christopher; Teller, Eric, Method and apparatus for recognition of writing, for remote communication, and for user defined input templates.
Stork David G. ; Angelo Michael ; Wolff Gregory J., Method and apparatus for tracking a hand-held writing instrument with multiple sensors that are calibrated by placing the writing instrument in predetermined positions with respect to the writing sur.
Smithies Christopher P. K. (Wimborne GB2) Newman Jeremy M. (Somerset GB2), Method and system for the capture, storage, transport and authentication of handwritten signatures.
Smithies Christopher Paul Kenneth (Corfe Mullen ; Wimborne GB2) Newman Jeremy Mark (Frome ; Somerset GB2), Method and system for the verification of handwritten signatures.
Bi Depeng ; Cohen Gary Steven ; Cortopassi Michael ; George Jose T. ; Gladwin S. Christopher ; Hsiung Harry ; Lim Peng ; Parham John Allan ; Soucy Alan Joseph ; Voegeli Derick W. ; Wilson James Y., Mouse emulation with a passive pen.
Wright ; Jr. ; Sanford J. ; Anderson ; Peter T. ; Grimes ; Ralph S., Multi-modal data input/output apparatus and method compatible with bio-engineering requirements.
Sontag Heinz (Friedrichshafen DEX) Elias Hartmut (Meersburg DEX) Ludwig Wolfgang (Taegerwilen CHX) Fritzsch Walter (Markdorf DEX) Eschner Wolfgang (Daisendorf DEX) Hundhausen Rainer (Immenstaad DEX) , Optical position detection.
Lazzouni Mohamed (Worcester MA) Kazeroonian Ali Seyed (Framingham MA) Gholizadeh Dariush (Framingham MA) Ali Omar (Roslindale MA 4), Pen and paper information recording system.
Lazzouni Mohamed (Worcester MA) Yousaf Mohamed (Shrewsbury MA) Qureshi Rizwan A. (Worcester MA) Nazir Naveed A. (Shrewsbury MA), Pen and paper information recording system using an imaging pen.
Hotelling, Steven Porter; King, Nicholas Vincent; Kerr, Duncan Robert; Low, Wing Kong, Remote control systems that can distinguish stray light sources.
Hotelling, Steven Porter; King, Nicholas Vincent; Kerr, Duncan Robert; Low, Wing Kong, Remote control systems that can distinguish stray light sources.
Pittel,Arkady; Schiller,Ilya; Liberman,Sergey; Shleppi,Garry; Funk,Ethan A.; Subach,Vladimir V.; Goldman,Andrew M.; Reznik,Leonid; Selitsky,Simon; Stein,Mario A., Tracking motion of a writing instrument.
Skurnik, David; Sprague, Randall Brian; Jones, Geoffrey Hugh; Abbott, Eric Charles; Law, Waisiu, Very high speed photodetector system using a PIN photodiode array for position sensing.
Paull Mike M. ; Sanchez Russell I. ; Svancarek Mark K. ; Aoyagi Tetsuji,JPX, Wireless input device, for use with a computer, employing a movable light-emitting element and a stationary light-receiv.
Cho, Sanghyun; Kim, Joomin; Kim, Gyuseung; Lee, Janghee; Lee, Jaekyung; Lim, Youngwan; Kim, Sijin; Kwon, Youk; Lee, Kunsik, 3-D pointing device, DTV, method of controlling the DTV, and DTV system.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Adding information or functionality to a rendered document via association with an electronic counterpart.
King, Martin T.; Stephens, Redwood; Mannby, Claes-Fredrik; Peterson, Jesse; Sanvitale, Mark; Smith, Michael J.; Daley-Watson, Christopher J., Automatically providing content associated with captured information, such as information captured in real-time.
King, Martin Towle; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Quentin, Capturing text from rendered documents using supplement information.
Tretter, Daniel R; Kang, Jinman; Tan, Kar Han; Hong, Wei; Short, David Bradley; Sievert, Otto, Determining a segmentation boundary based on images representing an object.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Determining actions involving captured information and electronic content associated with rendered documents.
Bandt-Horn, Benjamin D., Device and user interface for visualizing, navigating, and manipulating hierarchically structured information on host electronic devices.
Bandt-Horn, Benjamin D., Device and user interface for visualizing, navigating, and manipulating hierarchically structured information on host electronic devices.
Langlois, Michael George; Rydenhag, Daniel Tobias; Kuo, Margaret Elizabeth; Johansson, Daniel, Electronic device and method of displaying information in response to a gesture.
Lazaridis, Mihal; Rydenhag, Daniel Tobias; Lindsay, Donald James; Hamilton, Alistair Robert; Lessing, Robert Simon; Griffin, Jason Tyler; Benedek, Joseph Eytan; Wood, Todd Andrew, Electronic device and method of displaying information in response to a gesture.
Lazaridis, Mihal; Rydenhag, Daniel Tobias; Lindsay, Donald James; Hamilton, Alistair Robert; Lessing, Robert Simon; Griffin, Jason Tyler; Benedek, Joseph Eytan; Wood, Todd Andrew, Electronic device and method of displaying information in response to a gesture.
Lazaridis, Mihal; Rydenhag, Daniel Tobias; Lindsay, Donald James; Hamilton, Alistair Robert; Lessing, Robert Simon; Griffin, Jason Tyler; Benedek, Joseph Eytan; Wood, Todd Andrew, Electronic device and method of displaying information in response to a gesture.
Lessing, Robert Simon; Langlois, Michael George; Rydenhag, Daniel Tobias; Benedek, Joseph Eytan; Lazaridis, Mihal; Andersson Reimer, Nils Roger; Lindsay, Donald James, Electronic device and method of displaying information in response to a gesture.
Lessing, Robert Simon; Langlois, Michael George; Rydenhag, Daniel Tobias; Benedek, Joseph Eytan; Lazaridis, Mihal; Andersson Reimer, Nils Roger; Lindsay, Donald James, Electronic device and method of displaying information in response to a gesture.
Bocking, Andrew Douglas; Lindsay, Donald James; Rydenhag, Daniel Tobias, Electronic device and method of providing visual notification of a received communication.
Bocking, Andrew Douglas; Lindsay, Donald James; Rydenhag, Daniel Tobias, Electronic device and method of providing visual notification of a received communication.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device.
Eberl, Roland H. C.; Mayer, Matthais; Eberl, Heinrich A.; Dickerson, David, Information system and method for providing information using a holographic element.
Zalewski, Gary M.; Marks, Richard; Mao, Xiadong, Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera.
Zalewski, Gary M.; Marks, Richard; Mao, Xiaodong, Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera.
Benhimane, Selim; Lieberknecht, Sebastian, Method for determining a parameter set designed for determining the pose of a camera and/or for determining a three-dimensional structure of the at least one real object.
Benhimane, Selim; Lieberknecht, Sebastian, Method for determining a parameter set designed for determining the pose of a camera and/or for determining a three-dimensional structure of the at least one real object.
Benhimane, Selim; Ulbricht, Daniel, Method for determining correspondences between a first and a second image, and method for determining the pose of a camera.
Benhimane, Selim; Ulbricht, Daniel, Method for determining correspondences between a first and a second image, and method for determining the pose of a camera.
Benhimane, Selim; Lieberknecht, Sebastian; Huber, Andrea, Method for estimating a camera motion and for determining a three-dimensional model of a real environment.
BenHimane, Selim; Ulbricht, Daniel, Method of determining reference features for use in an optical object initialization tracking process and object initialization tracking method.
Yen, Wei; Wright, Ian; Tu, Xiaoyuan; Reynolds, Stuart; Powers, III, William Robert; Musick, Charles; Funge, John; Dobson, Daniel; Bererton, Curt, Methods and systems for dynamic calibration of movable game controllers.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Methods and systems for initiating application processes by data capture from rendered documents.
Townsend, Reed L.; Tu, Xiao; Scott, Bryan D.; Torset, Todd A.; Geidl, Erik M.; Pradhan, Samir S.; Teed, Jennifer A., Multi-touch manipulation of application objects.
Townsend, Reed L.; Tu, Xiao; Scott, Bryan; Torset, Todd A.; Geidl, Erik M.; Pradhan, Samir S.; Teed, Jennifer A., Multi-touch manipulation of application objects.
Townsend, Reed L.; Tu, Xiao; Scott, Bryan; Torset, Todd A.; Geidl, Erik M.; Pradhan, Samir S.; Teed, Jennifer A., Multi-touch manipulation of application objects.
King, Martin T.; Stephens, Redwood; Mannby, Claes-Fredrik; Peterson, Jesse; Sanvitale, Mark; Smith, Michael J., Performing actions based on capturing information from rendered documents, such as documents under copyright.
King, Martin T.; Kushler, Clifford A.; Stafford-Fraser, James Q.; Grover, Dale L., Processing techniques for visual capture data from a rendered document.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Search engines and systems with handheld document data capture devices.
King, Martin Towle; Stafford-Fraser, James Quentin; Kushler, Clifford A.; Grover, Dale L., System and method for information gathering utilizing form identifiers.
Kawano, Yoichiro; Tu, Xiaoyuan; Musick, Jr., Charles; Powers, III, William Robert; Reynolds, Stuart; Wilkinson, Dana; Wright, Ian; Yen, Wei, Systems and methods for utilizing personalized motion control in virtual environment.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.