System and method for detecting objects in an image
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06K-009/32
G06K-009/22
G06K-009/46
G06K-009/66
G06N-003/04
G06N-003/08
G06T-003/40
G06T-005/00
G06K-009/36
Application Number
US-0344605
(2016-11-07)
Registration Number
US-9754163
(2017-09-05)
Inventors / Address
Segalovitz, Yair
Shoor, Omer
Lipman, Yaron
Tzemah, Nir
Verter, Natalie
Applicant / Address
PHOTOMYNE LTD.
Agent / Address
May Patents Ltd.
Citation Information
Times cited: 2
Patents cited: 109
Abstract
A method for cropping photo images, captured by a user, from an image of a page of a photo album is described. Corners in the page image are detected using a corner detection algorithm, or by detecting intersections of line segments (and their extensions) in the image using edge, corner, or line detection techniques. Pairs of the detected corners are used to define all potential quads, which are then qualified according to various criteria. A correlation matrix is generated for each potential pair of the qualified quads, and candidate quads are selected based on the eigenvector of the correlation matrix. The content of the selected quads is checked using a saliency map that may be based on a trained neural network, and the resulting photo images are extracted as individual files for further handling or manipulation by the user.
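The quad-selection step in the abstract (a pairwise correlation matrix whose leading eigenvector ranks the candidates) resembles spectral matching. A minimal NumPy sketch, assuming a caller-supplied symmetric similarity matrix — the patent says only "correlation matrix" and does not fix the similarity measure, so the toy values below are illustrative:

```python
import numpy as np

def select_quads(similarity: np.ndarray, k: int = 2) -> np.ndarray:
    """Pick the k quads with the largest components of the principal
    eigenvector of a pairwise quad-similarity matrix.

    similarity: symmetric (n, n) matrix; entry (i, j) scores how
    mutually consistent quads i and j are (an assumption -- the
    patent does not specify the correlation measure).
    """
    eigvals, eigvecs = np.linalg.eigh(similarity)   # symmetric -> eigh
    principal = eigvecs[:, np.argmax(eigvals)]      # leading eigenvector
    principal = np.abs(principal)                   # eigenvector sign is arbitrary
    return np.argsort(principal)[::-1][:k]          # indices of top-k components

# Toy example: quads 0 and 1 agree strongly; quad 2 is an outlier.
S = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
print(sorted(select_quads(S, k=2).tolist()))  # -> [0, 1]
```

The mutually consistent quads reinforce each other in the principal eigenvector, so thresholding or top-k selection on its components separates them from spurious candidates.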
Representative Claims
1. A method for detecting one or more rectangular-shaped object regions from a background in a captured image, the method comprising: obtaining the captured image by a digital camera; analyzing the captured image using a deep convolutional neural network for detecting the object regions; enhancing the image in each of the detected regions; and cropping or extracting from the captured image each of the enhanced detected regions into a respective file, wherein the objects are rectangular-based photographs, receipts, business cards, sticky notes, printed newspapers, or stamps, and wherein the neural network is further trained to recognize or classify the rectangular-based object regions in the captured image.

2. The method according to claim 1, further comprising training the neural network to detect the rectangular-based object regions in the captured image.

3. The method according to claim 1, wherein the neural network is further trained to recognize or classify the objects in the captured image and having multiple stages or layers, and wherein the analyzing of the content of each of the regions uses an output of an intermediate stage or layer in the neural network.

4. The method according to claim 3, wherein the neural network is ImageNet having 26 stages or layers, the intermediate stage or layer is an eighth stage or layer, the output includes 256 saliency maps, and wherein the analyzing of the content comprises generating an output map calculated by a weighted average of the 256 saliency maps.

5. A non-transitory tangible computer readable storage media comprising code to perform the steps of the method of claim 1.

6. A device housed in a single enclosure and comprising in the single enclosure the digital camera, a memory for storing computer executable instructions, and a processor for executing the instructions, the processor configured by the memory to perform acts comprising the method of claim 1.

7. The device according to claim 6, wherein the single enclosure is a portable or a hand-held enclosure and the device is battery-operated and consists of, comprises, or is part of, a notebook, a laptop computer, a media player, a cellular phone, a tablet, a Personal Digital Assistant (PDA), or an image-processing device.

8. The method according to claim 1, wherein the background comprises a pattern and the captured image comprises an entire or part of a page of a photo album.

9. The method according to claim 1, wherein the obtaining of the captured image comprises capturing the captured image by the digital camera, and wherein the digital camera is part of, or comprises, a single enclosure that is a portable or a hand-held enclosure that includes a battery for powering the digital camera, and the single enclosure further comprises a notebook, a laptop computer, a media player, a cellular phone, a tablet, a Personal Digital Assistant (PDA), or an image-processing device.

10. The method according to claim 1, wherein the obtaining of the captured image comprises fetching the captured image from a volatile memory or a non-volatile memory that consists of, or comprises, a Hard Disk Drive (HDD), a Solid State Drive (SSD), RAM, SRAM, DRAM, TTRAM, Z-RAM, ROM, PROM, EPROM, EEROM, Flash-based memory, CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE, CD-ROM, BD-ROM, and DVD-ROM.

11. The method according to claim 1, wherein the captured image is a single image file that is in a compressed format that is according to, based on, or consists of, Portable Network Graphics (PNG), Graphics Interchange Format (GIF), Joint Photographic Experts Group (JPEG), Exchangeable image file format (Exif), or Tagged Image File Format (TIFF).

12. The method according to claim 1, wherein the captured image is a color image using a color model that is according to, or based on, the RGB color space.

13. The method according to claim 1, further comprising capturing a first image by the digital camera, and forming the captured image by converting the first image from the color model to a grayscale format, wherein the converting of the captured image to a grayscale format is according to, based on, or using, linearly encoding grayscale intensity from linear RGB.

14. The method according to claim 1, further comprising capturing a first image by the digital camera, and downscaling the first image to form the captured image using a downscaling algorithm, wherein the captured image has less than 10%, 7%, 5%, 3%, or 1% of the pixels of the first image, or wherein the captured image has less than 10,000, 5,000, 2,000, or 1,000 pixels.

15. The method according to claim 14, wherein the downscaling algorithm is according to, is based on, or uses, an adaptive or non-adaptive image interpolation algorithm.

16. The method according to claim 15, wherein the non-adaptive image interpolation algorithm consists of, comprises, or is part of, nearest-neighbor replacement, bilinear interpolation, bi-cubic interpolation, Spline interpolation, Lanczos interpolation, or a digital filtering technique.

17. The method according to claim 1, wherein the enhancing of the region image comprises generating a color-balanced image by correcting the color balance of the image.

18. The method according to claim 17, wherein the region image color space is RGB whereby each pixel is defined by (r, g, b), and the correcting of the color balance comprises: obtaining gray reference pixel values (rref, gref, bref); calculating average pixel values of the region image (ravg, gavg, bavg); calculating a color shift (rsft, gsft, bsft) of the region image according to, or based on, (rsft, gsft, bsft) = (ravg, gavg, bavg) − (rref, gref, bref); and calculating the color-balanced image having pixel values (rc, gc, bc), where each pixel value is calculated as (rc, gc, bc) = (r, g, b) − (rsft, gsft, bsft).

19. The method according to claim 18, wherein the obtaining of the gray reference pixel values is based on, or equal to, an average of pixel values in multiple images.

20. The method according to claim 19, wherein the obtaining of the gray reference pixel values is based on, or equal to, an average of pixel values in all of the extracted or cropped regions.

21. The method according to claim 17, wherein the enhancing of the image comprises generating a color-balanced image by correcting the color balance of the region image followed by enhancing the contrast in the color-balanced image.

22. The method according to claim 21, wherein the enhancing of the contrast comprises, uses, or is based on, a linear contrast enhancement that is Min-Max Linear Contrast Stretch, Percentage Linear Contrast Stretch, or Piecewise Linear Contrast Stretch.

23. The method according to claim 21, wherein the enhancing of the contrast comprises, uses, or is based on, a non-linear contrast enhancement that is Histogram Equalization, Adaptive Histogram Equalization, Unsharp Mask, or Homomorphic Filter.

24. The method according to claim 1, wherein the analyzing of the captured image further comprises producing a list of quads in the captured image.

25. The method according to claim 24, for use with a plurality of detected corners in the captured image, wherein each of the quads is a quadrilateral having two or more vertices that are selected from the detected corners.

26. The method according to claim 25, further comprising detecting the corners in the captured image, wherein each corner is defined by a point location and two directions from the point.

27. The method according to claim 26, wherein the detecting of the corners is according to, is based on, or consists of, a corner detection algorithm.

28. The method according to claim 27, wherein the detecting of the corners comprises detecting straight-line segments in the captured image according to, or based on, a pattern recognition algorithm.

29. The method according to claim 24, further comprising: estimating, for each pair of quads selected from the list, a correlation value corresponding to underlying similarities of the quads pair; selecting quads from the list based on the estimated correlation values; analyzing content of each of the regions defined by the selected quads; and determining that the regions defined by the selected quads correspond to the object regions.

30. The method according to claim 1, wherein the analyzing of the captured image comprises generating a saliency map identifying salient pixels or a saliency region in the captured image.
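Claim 18 spells out its color-balance arithmetic explicitly, so it can be sketched almost verbatim. A minimal NumPy rendering, assuming floating-point RGB arrays and a caller-supplied gray reference; the final clip to [0, 255] is our addition, since the claim defines only the subtraction:

```python
import numpy as np

def color_balance(region: np.ndarray, gray_ref) -> np.ndarray:
    """Per-channel color-balance correction in the style of claim 18.

    region:   (H, W, 3) RGB image as floats.
    gray_ref: (rref, gref, bref) gray reference pixel values.
    Computes (rsft, gsft, bsft) = (ravg, gavg, bavg) - (rref, gref, bref)
    and subtracts the shift from every pixel.
    """
    avg = region.reshape(-1, 3).mean(axis=0)         # (ravg, gavg, bavg)
    shift = avg - np.asarray(gray_ref, dtype=float)  # per-channel color shift
    return np.clip(region - shift, 0.0, 255.0)       # (rc, gc, bc) per pixel

# Toy example: a uniform image averaging (130, 120, 110) against a
# neutral gray reference (120, 120, 120) -- the shift is (10, 0, -10).
img = np.full((2, 2, 3), [130.0, 120.0, 110.0])
out = color_balance(img, (120.0, 120.0, 120.0))
print(out[0, 0])  # -> [120. 120. 120.]
```

Per claims 19 and 20, the gray reference itself may be obtained as the average pixel value over multiple images or over all extracted regions, which this sketch leaves to the caller.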
Patents cited by this patent (109)
Ji, Shuiwang; Xu, Wei; Yang, Ming; Yu, Kai, 3D convolutional neural networks for automatic human action recognition.
Tzur, Meir; Shaick, Ben-Zion; Dvir, Itsik; Pinto, Victor, Detecting objects in an image being acquired by a digital camera or other electronic image acquisition device.
Buhr John D. ; Goodwin Robert M. ; Koeng Frederick R. ; Rivera Jose E., Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing.
Roberts Marc K. (Burke VA) Chikosky Matthew A. (Springfield VA) Speasl Jerry A. (Vienna VA), Electronic still video camera with direct personal computer (PC) compatible digital format output.
Steffens Johannes Bernhard ; Elagin Egor Valerievich ; Nocera Luciano Pasquale Agostino ; Maurer Thomas ; Neven Hartmut, Face recognition from video images.
Parulski Kenneth A. (Rochester NY) Hamel Robert H. (Walworth NY) Acello John J. (East Rochester NY), Hand-manipulated electronic camera tethered to a personal computer.
DeBan Abdou F. (Dayton OH) Xu Tianning (Dayton OH) Tumey David M. (Huber Heights OH) Arndt Craig M. (Dayton OH), Identification and verification system.
Steinberg, Eran; Corcoran, Peter; Prilutsky, Yury, Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts.
Spence Clay Douglas ; Pearson John Carr ; Sajda Paul, Method and apparatus for training a neural network to learn hierarchical representations of objects and to detect and classify objects with uncertain training data.
Matraszek, Tomasz A.; Fedorovskaya, Elena A.; Endrikhovski, Serguei; Parulski, Kenneth A., Method for creating and using affective information in a digital imaging system.
Huang, Yong; Hui, Lucas; Wang, Haiyun; Lebowsky, Fritz, System and process for image rescaling using adaptive interpolation kernel with sharpness and overshoot control.
Mashimo Yukio (Tokyo JA) Sakurada Nobuaki (Kanagawa JA) Ito Tadashi (Kanagawa JA) Ito Fumio (Kanagawa JA) Shinoda Nobuhiko (Tokyo JA), System for exposure measurement and/or focus detection by means of image senser.
Mashimo Yukio (Tokyo JPX) Sakurada Nobuaki (Kanagawa JPX) Ito Tadashi (Kanagawa JPX) Ito Fumio (Kanagawa JPX) Shinoda Nobuhiko (Tokyo JPX), System for exposure measurement and/or focus detection by means of image sensor.
Kojima Kazuaki (Nagaokakyo JPX) Kuno Tetsuya (Nagaokakyo JPX) Sugiura Hiroaki (Nagaokakyo JPX) Yamada Takeshi (Nagaokakyo JPX), Video signal processor for detecting flesh tones in am image.
Yang, Ruiduo; Bi, Ning; Yang, Sichao; Wu, Xinzhou; Guo, Feng; Ren, Jianfeng, Object detection using location data and scale space representations of image data.