Methods for object recognition and related arrangements
IPC Classification Information
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition):
G06K-009/00
G06K-009/62
G06F-017/30
G06T-017/00
G06K-009/46
Application Number: US-0251229 (2014-04-11)
Registration Number: US-9269022 (2016-02-23)
Inventors / Address:
Rhoads, Geoffrey B.
Bai, Yang
Rodriguez, Tony F.
Rogers, Eliot
Sharma, Ravi K.
Lord, John D.
Long, Scott
MacIntosh, Brian T.
Eaton, Kurt M.
Applicant / Address:
Digimarc Corporation
Agent / Address:
Digimarc Corporation
Citation Information
Cited by: 13 patents
Patents cited: 25
Abstract
Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
Representative Claims
1. A method employing one or more computer processors to perform acts including:
storing, within an object registry, model data and object metadata corresponding to a plurality of physical reference objects, the model data for each physical reference object including data characterizing plural non-coplanar surface regions of different extents and locations, and hybrid P-feature data, the model data also including, for each of several different physical reference objects, multiple sets of feature information, each set of feature information being associated with a particular viewpoint towards a physical reference object;
obtaining query data representing a physical object-of-interest, wherein generation of the query data is initiated by a user, and said query data includes object profile data representing an edge of a silhouette of the physical object-of-interest;
performing an object recognition process on the query data, the object recognition process including processing the query data, in conjunction with the stored model data, to determine whether the object-of-interest corresponds to any of the plurality of physical reference objects, said object recognition process including identifying plural sets of said feature information that may correspond to the query data, thereby identifying a first candidate set of plural physical reference objects that possibly match said physical object-of-interest, and performing a clustering operation on viewpoints associated with said identified plural sets of feature information, to determine a preliminary candidate viewpoint towards a matching physical reference object;
the object recognition process further including, for each of said first candidate set of physical reference objects, obtaining reference object profile data corresponding to said preliminary candidate viewpoint, and for one or more additional viewpoints;
performing a profile matching operation to identify certain of said obtained reference object profile data that correspond to the object profile data representing the physical object-of-interest, thereby identifying a second candidate set of physical reference objects that possibly match said physical object-of-interest, said second candidate set being smaller than said first candidate set; and
upon determining that the object-of-interest corresponds to at least one of the physical reference objects, transmitting a result to a user device associated with the user, the result including object metadata associated with the at least one of the physical reference objects determined to correspond to the object-of-interest;
wherein the hybrid P-feature data is based on an accumulation of multiple profiles of the reference object from multiple viewpoints.

2. The method of claim 1, wherein the query data is obtained from the user device.

3. The method of claim 1, wherein the query data is obtained from a camera-equipped ambient device distinct from said user device.

4. The method of claim 1, wherein said object metadata associated with at least one of the physical reference objects determined to correspond to the object-of-interest includes object identifying information identifying the reference object.
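The core flow of claim 1 (feature matching to form a first candidate set, viewpoint clustering to pick a preliminary viewpoint, then silhouette-profile matching to reach a smaller second set) can be sketched in Python. Everything below is an illustrative assumption: the registry layout, the cosine feature match, the mean-viewpoint "clustering", and the profile distance are stand-ins, not Digimarc's method.

```python
import numpy as np

def cosine_match(a, b, thresh=0.8):
    """Stand-in per-viewpoint feature match (cosine similarity)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))) >= thresh

def profile_distance(p, q):
    """Stand-in silhouette-profile distance: mean L2 gap over samples."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n = min(len(p), len(q))
    return float(np.linalg.norm(p[:n] - q[:n])) / n

def narrow_candidates(query, registry, profile_thresh=5.0):
    """Claim-1-style narrowing: first candidate set -> smaller second set."""
    # Stage 1: match query features against each stored per-viewpoint
    # feature set; objects with any hit form the first candidate set.
    first = {}
    for obj_id, model in registry.items():
        hits = [vp for vp, feats in model["viewpoint_features"].items()
                if cosine_match(query["features"], feats)]
        if hits:
            first[obj_id] = hits

    # "Cluster" the matched viewpoints (here: their mean) to pick a
    # preliminary candidate viewpoint toward a matching reference object.
    second = []
    for obj_id, vps in first.items():
        prelim = np.mean(np.asarray(vps, float), axis=0)
        # Stage 2: profile-match at the preliminary viewpoint plus a few
        # additional nearby stored viewpoints.
        profiles = registry[obj_id]["profiles"]
        nearby = sorted(profiles,
                        key=lambda v: float(np.linalg.norm(np.subtract(v, prelim))))[:3]
        if any(profile_distance(query["profile"], profiles[vp]) < profile_thresh
               for vp in nearby):
            second.append(obj_id)
    return second  # the second candidate set, smaller than the first
```

Here `registry` is assumed to map object IDs to dicts with per-viewpoint feature vectors (`"viewpoint_features"`) and silhouette-contour samples (`"profiles"`), keyed by viewpoint tuples such as (azimuth, elevation).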
5. The method of claim 1, further comprising identifying model data corresponding to a sub-set of the plurality of physical reference objects based on auxiliary information, wherein processing the model data comprises processing the identified model data to determine whether the object-of-interest corresponds to any physical reference object within said sub-set of the plurality of physical reference objects.

6. The method of claim 1, wherein said act of performing an object recognition process includes three stages of operation, (b), (c), and (d), wherein stage (c) is performed after stage (b) and before stage (d), and wherein: stage (b) is based on said hybrid P-feature data; stage (c) is based on M-feature data; and stage (d) is based on I-feature data.

7. The method of claim 6, wherein stage (c) is based on the hybrid P-feature data, as well as on said M-feature data.

8. The method of claim 6, wherein stage (d) is based on the M-feature data and said hybrid P-feature data, as well as on said I-feature data.

9. The method of claim 6, wherein the object recognition process includes a stage (a), wherein stage (a) is performed before stage (b), and wherein stage (a) includes match searching based on color histogram data.

10. The method of claim 1, wherein performing the object recognition process includes receiving, at the user device, stored model data transmitted from a retail store, enabling the user device to recognize retail objects at said retail store.

11. The method of claim 1, in which said act of obtaining query data includes obtaining data from first and second cameras, the first camera comprising part of the user device, the second camera comprising an ambient camera.

12. The method of claim 11, in which said generation of the query data is initiated by a user gesture indicating the physical object-of-interest, wherein the method includes, in response to said gesture, and in response to sampled location or user device orientation information, identifying an ambient camera, from among plural ambient cameras, to serve as said second camera.

13. The method of claim 1, that includes sensing object data from one of said plurality of physical reference objects while illuminating such physical reference object with an optical system that produces collimated illumination, said physical reference object having a maximum dimension of N centimeters, and said optical system having an aperture greater than N centimeters.

14. The method of claim 13, wherein said optical system includes a light source that is tunable across the visible light spectrum.

15. The method of claim 1, wherein said model data for one of said physical reference objects defines more than 1 million planar surface components in a mesh form.

16. The method of claim 1, wherein said object recognition process includes sleuthing a projective viewpoint of the query data relative to the model data for one of said physical reference objects.

17. The method of claim 1, wherein said feature information comprises color histogram information.

18. The method of claim 1, that further includes: for each physical reference object in said second candidate set, determining a viewpoint that most closely corresponds to said query data, and generating a match score employing match metrics for profile, Morse, and image features associated with said reference object and said determined viewpoint; and identifying, as a final match to said physical object-of-interest, a physical reference object for which said match score is best.
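Claims 6 through 9 above describe a staged match search: a color-histogram stage (a), then stages (b) through (d) keyed to hybrid P-, M-, and I-feature data, run in order so each stage prunes the candidate list. Below is a minimal, hypothetical sketch of such a cascade; the generic `cascade_search` function, its `scorers` argument, and the pruning fraction `keep` are illustrative assumptions, not the patent's implementation.

```python
from typing import Callable, Dict, List

# A scorer compares the query against one candidate and returns a score.
Scorer = Callable[[Dict, Dict], float]

def cascade_search(query: Dict, candidates: List[Dict],
                   scorers: List[Scorer], keep: float = 0.5) -> List[Dict]:
    """Run match-search stages in order, e.g. (a) color histogram,
    (b) hybrid P-feature, (c) M-feature, (d) I-feature; each stage
    re-ranks the surviving candidates and keeps the best fraction."""
    for score in scorers:
        candidates = sorted(candidates,
                            key=lambda cand: score(query, cand),
                            reverse=True)
        # Retain at least one candidate so later stages have input.
        candidates = candidates[: max(1, int(len(candidates) * keep))]
    return candidates
```

Per claims 7 and 8, a later stage may also reuse earlier feature families; in this sketch, that would amount to supplying a scorer that combines several metrics.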
19. The method of claim 18, in which said match score takes the form of a polynomial equation:

a·Kp^d + b·Ki^e + c·Km^f

where a, b, and c are weighting factors; Kp, Ki, and Km are match metrics for the profile, image, and Morse features, respectively; and d, e, and f are corresponding exponential factors.

20. The method of claim 1, in which the act of obtaining query data includes obtaining query data across five different spectral bands.

21. The method of claim 1, in which said model data includes data characterizing reflectance for each of said surface regions.

22. The method of claim 1, in which: the act of obtaining query data includes applying a visual saliency model to imagery of the physical object-of-interest to identify visually-salient portions thereof; and the act of performing an object recognition process includes examining the stored model data to identify physical reference objects having portions that may be candidate matches to said visually-salient portions.

23. The method of claim 1, wherein said object metadata transmitted to said device comprises information about recycling the physical object-of-interest.

24. The method of claim 1, wherein said object metadata transmitted to said device comprises information identifying a location at which an instance of said physical object-of-interest was previously recognized.

25. The method of claim 5, in which the auxiliary information comprises information about a location of the user.

26. The method of claim 5, in which the auxiliary information comprises information about a location of the user, wherein said sub-set of physical reference objects comprises objects the user may encounter at said location.

27. The method of claim 5, in which the auxiliary information comprises information predicting or estimating a future location of the user, wherein said sub-set of physical reference objects comprises objects the user may encounter at said future location.

28. The method of claim 5, in which the auxiliary information comprises information about audio in an environment of the user, wherein said sub-set of physical reference objects comprises objects associated with said audio.

29. The method of claim 5, in which the auxiliary information comprises information about a previously-recognized physical object.

30. The method of claim 5, in which the auxiliary information comprises information about ambient lighting conditions within an environment of the physical object-of-interest.

31. The method of claim 1, wherein the hybrid P-feature data is based on an accumulation of multiple profiles of the reference object derived from multiple images of said object, captured by a fixed camera while said object rotated around an axis.

32. The method of claim 1, wherein the hybrid P-feature data represents a hybrid silhouette of the reference object that does not match any real physical silhouette of the reference object.
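Claim 19's polynomial can be transcribed directly into code. The unity defaults below are placeholders, since the patent leaves the weights a, b, c and exponents d, e, f unspecified.

```python
def match_score(Kp: float, Ki: float, Km: float,
                a: float = 1.0, b: float = 1.0, c: float = 1.0,
                d: float = 1.0, e: float = 1.0, f: float = 1.0) -> float:
    """Claim 19 match score: a*Kp**d + b*Ki**e + c*Km**f, where Kp, Ki,
    and Km are match metrics for the profile, image, and Morse features.
    Weights (a, b, c) and exponents (d, e, f) are unspecified in the
    patent; the unity defaults here are placeholders."""
    return a * Kp**d + b * Ki**e + c * Km**f
```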
Patents Cited by This Patent (25)
Trytten, Chad D.; Angerilli, Anthony J. V.; Mackay, Kenneth J., Application-independent and component-isolated system and system of systems framework.
Falkenstern, Kristyn R.; Reed, Alastair M.; Holub, Vojtech; Rodriguez, Tony F., Digital watermarking and data hiding with narrow-band absorption materials.
Prouty, Jeff; Ullyott, Logan; Canary, Grant, Forestry information management systems and methods streamlined by automatic biometric data prioritization.
Fathi, Habib; Serrano, Miguel M.; Gross, Bradden John; Ciprari, Daniel L., Systems and methods for extracting information about objects from scene information.