Method and system for identifying books on a bookshelf
IPC Classification

Country/Type: United States (US) Patent, Granted
IPC (7th ed.): G06K-009/72; G06K-009/00; G06K-009/32; G06K-009/46
Application No.: US-0745066 (filed 2015-06-19)
Registration No.: US-9977955 (granted 2018-05-22)
Inventors: Hudson, Peter Michael Bruce; Muja, Marius Constantin; McNair, Andrew Cameron; McCann, Sancho Juan Carlos Dean Rob Roy
Applicant: Rakuten Kobo, Inc.
Agent: Oblon, McClelland, Maier & Neustadt, L.L.P.
Citations: cited by 0 patents; cites 5 patents
Abstract

A method and system for identifying books located on a bookshelf. Photographs of the bookshelf are captured and processed to identify individual books. Processing involves segmenting the photograph into individual book spines and extracting and analyzing features of the book spines. Analysis may include database matching and/or optical character recognition. Book spines for which a match is not found are human labeled, and the label information is added to the database. User feedback is also used to update the database.
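The abstract's pipeline (segment spines, extract features, match against a database, fall back to human labeling) can be sketched as follows. This is a minimal illustration, not the patented implementation: the token database, the Jaccard-overlap score, and the `0.5` threshold are all assumptions standing in for the richer visual-feature matching the patent describes.

```python
# Hypothetical spine database: book id -> set of OCR tokens seen on its spine.
SPINE_DB = {
    "moby-dick": {"moby", "dick", "melville"},
    "dune": {"dune", "frank", "herbert"},
}

UNLABELED_QUEUE = []  # spines with no match are queued for human labeling


def match_spine(ocr_tokens, db=SPINE_DB, threshold=0.5):
    """Return the best-matching book id, or None if no match is close enough.

    Jaccard overlap of OCR tokens is a stand-in for the patent's matching
    operation over stored visual features."""
    best_id, best_score = None, 0.0
    for book_id, tokens in db.items():
        union = ocr_tokens | tokens
        score = len(ocr_tokens & tokens) / len(union) if union else 0.0
        if score > best_score:
            best_id, best_score = book_id, score
    return best_id if best_score >= threshold else None


def identify_spine(ocr_tokens):
    """Match a spine; on failure, queue the region for human labeling."""
    book = match_spine(ocr_tokens)
    if book is None:
        UNLABELED_QUEUE.append(ocr_tokens)
    return book
```

In the patent, a human-supplied label for a queued spine would then be written back into the stored data, so the next photograph of the same spine matches directly.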
Representative Claims
1. A method for identifying books located on a bookshelf, the method comprising: capturing one or more photographic images of the bookshelf; segmenting the photographic images into regions, each of the regions corresponding to a respective book spine; analyzing at least one of the regions to identify a book corresponding thereto, wherein analyzing the at least one of the regions comprises: extracting one or more visual features descriptive of the at least one of the regions, the one or more visual features including machine-recognized text and a location of the machine-recognized text contained within the at least one of the regions, wherein the machine-recognized text and the location of the machine-recognized text are used as analogues of visual features; performing a matching operation based on the one or more visual features, the matching operation performed against stored data associating plural book identities with corresponding visual features; when the matching operation returns one of the book identities sufficiently closely matched with the one or more visual features, identifying the at least one of the regions as representing said one of the book identities; when the matching operation fails to return one of the book identities sufficiently closely matched with the one or more visual features, initiating a further analysis of the at least one of the regions to identify the book corresponding thereto; and when the further analysis returns a further book identity sufficiently closely matched with the one or more visual features, identifying the at least one of the regions as representing the further book identity; and browsing another user's bookshelf, wherein browsing another user's bookshelf comprises: comparing a first book title list of a first bookshelf belonging to a first user with a second book title list of a second bookshelf belonging to a second user, wherein the first book title list and the second book title list include book titles identified as a result of analyzing the at least one of the regions; and enabling the first user to access the second book title list of the second bookshelf when there is at least a predetermined amount of overlap between the book titles of the first user's bookshelf and the book titles of the second user's bookshelf.

2. The method of claim 1, further comprising, when the matching operation returns said one of the book identities, updating the stored data to reflect association between said one of the book identities and the one or more visual features.

3. The method of claim 1, further comprising, when the further analysis returns the further book identity, updating the stored data to reflect association between the further book identity and the one or more visual features.

4. The method of claim 1, wherein the further analysis includes providing the at least one of the regions to a human labeller and receiving the further book identity from the human labeller.

5. The method of claim 1, further comprising prompting a user who captured said one or more photographic images of the bookshelf to mark the returned one of the book identities or the returned further book identity as being correct or incorrect.

6. The method of claim 5, further comprising, when the user marks the returned one of the book identities or the returned further book identity as being incorrect, prompting the user to provide a user-supplied book identity corresponding to the at least one of the regions, and, upon receipt of the user-supplied book identity, updating a training set to reflect association between the user-supplied book identity and the one or more visual features.

7. The method of claim 1, wherein the stored data comprises models stored in a training set.

8. The method of claim 1, wherein the matching operation comprises querying a database comprising records of book identities and visual features associated with said book identities.

9. The method of claim 1, wherein the one or more visual features further include one or more of: texture, colour and shape of the at least one of the regions.

10. The method of claim 1, wherein the matching operation comprises performing a naïve-Bayes inference over a categorical, bag-of-words occurrence model on the machine-recognized text to determine a plurality of high-probability-of-match candidate book identities, and a visual-feature-based nearest neighbor search performed on the high-probability-of-match candidate book identities.

11. The method of claim 1, wherein the matching operation comprises performing an approximate nearest neighbor search based simultaneously on all of the extracted one or more visual features.

12. The method of claim 1, wherein the matching operation comprises performing deep neural network similarity learning.

13. The method of claim 1, wherein the matching operation comprises performing a geometric consistency check on locations of the machine-recognized text relative to locations of text observed in training examples contained within the stored data, and wherein match closeness increases with geometric consistency.

14. The method of claim 1, further comprising determining which of the identified books on the bookshelf are associated with offers for corresponding digital assets, and presenting a user with said offers.

15. A system for identifying books located on a bookshelf, the system comprising: a mobile device configured to capture one or more photographic images of the bookshelf; a computer server configured to receive the captured one or more photographic images and to: segment the photographic images into regions, each of the regions corresponding to a respective book spine; analyze at least one of the regions to identify a book corresponding thereto, wherein the computer server is further configured, in furtherance of analyzing the at least one of the regions, to: extract one or more visual features descriptive of the at least one of the regions, the one or more visual features including machine-recognized text and a location of the machine-recognized text contained within the at least one of the regions, wherein the machine-recognized text and the location of the machine-recognized text are used as analogues of visual features; perform a matching operation based on the one or more visual features, the matching operation performed against stored data associating plural book identities with corresponding visual features; when the matching operation returns one of the book identities sufficiently closely matched with the one or more visual features, identify the at least one of the regions as representing said one of the book identities; when the matching operation fails to return one of the book identities sufficiently closely matched with the one or more visual features, initiate a further analysis of the at least one of the regions to identify the book corresponding thereto; and when the further analysis returns a further book identity sufficiently closely matched with the one or more visual features, identify the at least one of the regions as representing the further book identity; and browse another user's bookshelf, wherein browsing another user's bookshelf comprises: comparing a first book title list of a first bookshelf belonging to a first user with a second book title list of a second bookshelf belonging to a second user, wherein the first book title list and the second book title list include book titles identified as a result of analyzing the at least one of the regions; and enabling the first user to access the second book title list of the second bookshelf when there is at least a predetermined amount of overlap between the book titles of the first user's bookshelf and the book titles of the second user's bookshelf.

16. The system of claim 15, wherein the computer server is further configured, when the matching operation returns said one of the book identities, to update the stored data to reflect association between said one of the book identities and the one or more visual features.

17. The system of claim 15, wherein the computer server is further configured, when the further analysis returns the further book identity, to update the stored data to reflect association between the further book identity and the one or more visual features.

18. The system of claim 15, wherein the further analysis includes providing the at least one of the regions to a human labeller and receiving the further book identity from the human labeller.

19. The system of claim 15, wherein the mobile device is further configured to prompt a user who captured said one or more photographic images of the bookshelf to mark the returned one of the book identities or the returned further book identity as being correct or incorrect.

20. The system of claim 19, wherein the mobile device is further configured, when the user marks the returned one of the book identities or the returned further book identity as being incorrect, to prompt the user to provide a user-supplied book identity corresponding to the at least one of the regions, and, upon receipt of the user-supplied book identity, the system is configured to update a training set to reflect association between the user-supplied book identity and the one or more visual features.

21. The system of claim 15, wherein the stored data comprises models stored in a training set.

22. The system of claim 15, wherein the matching operation comprises querying a database comprising records of book identities and visual features associated with said book identities.

23. The system of claim 15, wherein the one or more visual features further include one or more of: texture, colour and shape of the at least one of the regions.

24. The system of claim 15, wherein the matching operation comprises performing a naïve-Bayes inference over a categorical, bag-of-words occurrence model on the machine-recognized text to determine a plurality of high-probability-of-match candidate book identities, and a visual-feature-based nearest neighbor search performed on the high-probability-of-match candidate book identities.

25. The system of claim 15, wherein the matching operation comprises performing an approximate nearest neighbor search based simultaneously on all of the extracted one or more visual features.

26. The system of claim 15, wherein the matching operation comprises performing deep neural network similarity learning.

27. The system of claim 15, wherein the matching operation comprises performing a geometric consistency check on locations of the machine-recognized text relative to locations of text observed in training examples contained within the stored data, and wherein match closeness increases with geometric consistency.

28. The system of claim 15, wherein the mobile device and the computer server are further cooperatively configured to determine which of the identified books on the bookshelf are associated with offers for corresponding digital assets, and to present a user with said offers.
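Claims 10 and 24 recite a naïve-Bayes inference over a categorical, bag-of-words occurrence model on the machine-recognized text, used to shortlist candidate book identities. A minimal sketch of that first stage, assuming a hypothetical per-book word-count table (`TRAIN`) and Laplace smoothing, could look like this; the patent then runs a visual-feature nearest-neighbor search over the returned candidates, which is omitted here.

```python
import math
from collections import Counter

# Hypothetical training data: word counts from OCR of each book's spine.
TRAIN = {
    "moby-dick": Counter({"moby": 3, "dick": 3, "melville": 2}),
    "dune": Counter({"dune": 4, "herbert": 2, "frank": 1}),
}


def nb_candidates(words, train=TRAIN, top_k=2, alpha=1.0):
    """Rank book identities by naive-Bayes log-likelihood of the OCR words.

    Uses a categorical (multinomial) bag-of-words model per book with
    Laplace smoothing `alpha`, and returns the `top_k` highest-scoring
    candidate identities."""
    vocab = set().union(*train.values())
    scored = []
    for book, counts in train.items():
        total = sum(counts.values())
        log_likelihood = 0.0
        for w in words:
            p = (counts.get(w, 0) + alpha) / (total + alpha * len(vocab))
            log_likelihood += math.log(p)
        scored.append((log_likelihood, book))
    scored.sort(reverse=True)
    return [book for _, book in scored[:top_k]]
```

Restricting the (more expensive) visual nearest-neighbor search to this shortlist is the design point of the claim: text narrows the database quickly, visual features then disambiguate.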
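The browsing step of claims 1 and 15 gates access to another user's shelf on "at least a predetermined amount of overlap" between the two title lists. A sketch of that check, with the overlap threshold expressed as a hypothetical count of shared titles (`min_overlap`), since the claim does not fix how the amount is measured:

```python
def can_browse(first_titles, second_titles, min_overlap=3):
    """Grant the first user access to the second user's book title list
    when the two shelves share at least `min_overlap` titles."""
    shared = set(first_titles) & set(second_titles)
    return len(shared) >= min_overlap
```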
Patents cited by this patent (5)
Shi, Lei; Tian, Mingjun, Bootstrapping text classifiers by language adaptation.
Hudson, Peter Michael Bruce; Muja, Marius Constantin, Methods and systems for verifying ownership of a physical work or facilitating access to an electronic resource associated with a physical work.