Method and apparatus for combining panoramic image
IPC classification
Country / Type
United States(US) Patent
Registered
International Patent Classification (IPC, 7th edition)
G06K-009/36
G06T-003/20
G06T-003/40
G06T-007/00
Application number
US-0882273
(2011-04-14)
Registration number
US-9224189
(2015-12-29)
Priority information
CN-201010528052 (2010-11-02)
International application number
PCT/CN2011/072820
(2011-04-14)
§371/§102 date
2013-04-29
International publication number
WO2012/058902
(2012-05-10)
Inventor / Address
Liu, Dongmei
Applicant / Address
ZTE CORPORATION
Agent / Address
Cantor Colburn LLP
Citation information
Cited by: 0
Cited patents: 6
Abstract
The disclosure provides a method and an apparatus for combining a panoramic image. The method includes: obtaining multiple original images of the same scene, performing folding change and coordinate transformation on the multiple original images, and determining an overlapping area of the multiple original images; establishing a mathematical model of the multiple original images, aligning the overlapping area of the multiple original images, and transforming the multiple original images to the coordinate system of a reference image; and obtaining the space transformation relationship among the multiple original images according to the coordinate system of the reference image, selecting an appropriate image combining strategy, and completing the combining of the images. The solution can obtain a scene picture with a large field of view without reducing image resolution.
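The coordinate transformation described in the abstract maps each image into the reference image's coordinate system through a 3×3 projection (homography) matrix. A minimal NumPy sketch of that mapping step, not the patent's implementation; the function name and example matrix are illustrative:

```python
import numpy as np

def warp_points(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 projection matrix H."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]               # back to Cartesian

# A pure translation by (5, 7) shifts the point accordingly
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 7.0],
              [0.0, 0.0, 1.0]])
print(warp_points(H, [[10.0, 20.0]]))   # → [[15. 27.]]
```

Inverse mapping (iterating over output pixels and sampling the source through the inverted matrix) avoids holes in the warped result, which is why the claims transform images by inverse mapping rather than forward projection.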
Representative claim
1. A method for combining panoramic image, which is applied to a camera of a mobile device, comprising:

obtaining multiple original images of a same scene, performing folding change and coordinates transformation to the multiple original images, and determining an overlapping area of the multiple original images;

establishing a mathematical model of the multiple original images, aligning the overlapping area of the multiple original images, and transforming the multiple original images to a coordinate system of a reference image; and

obtaining a space transformation relationship among the multiple original images according to the coordinate system of the reference image, selecting an image combining strategy, and completing the combining of the images;

wherein aligning the overlapping area of the multiple original images, and transforming the multiple original images to the coordinate system of the reference image comprises:

extracting feature points of the multiple original images in a specific way;

using a similarity measure Normalized Cross Correlation (NCC) to extract initial feature point pair(s) through a Bidirectional Greatest Correlative Coefficient (BGCC) matching algorithm;

getting rid of pseudo feature point pair(s) through a Random Sample Consensus (RANSAC) algorithm to obtain exactly matching feature point pair(s); and

performing an inverse mapping transformation to the multiple original images according to a projection transformation matrix, transforming the multiple original images to the coordinate system of the reference image, and performing image registration according to the exactly matching feature point pair(s).

2.
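The NCC/BGCC step of claim 1 keeps a corner pair only when each corner is the other's best correlation match. A minimal sketch of that bidirectional idea, assuming grey-level images as NumPy arrays and (row, col) corner coordinates; function names and the window radius `n` are illustrative, not the patent's code:

```python
import numpy as np

def ncc(I1, I2, p1, p2, n):
    """Normalized cross-correlation of (2n+1)x(2n+1) windows centred at p1, p2."""
    w1 = I1[p1[0]-n:p1[0]+n+1, p1[1]-n:p1[1]+n+1].astype(float)
    w2 = I2[p2[0]-n:p2[0]+n+1, p2[1]-n:p2[1]+n+1].astype(float)
    a, b = w1 - w1.mean(), w2 - w2.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom else 0.0

def bgcc_match(I1, I2, corners1, corners2, n=2):
    """Keep only pairs where each corner is the other's greatest-correlation match."""
    best12 = {i: max(range(len(corners2)),
                     key=lambda j: ncc(I1, I2, corners1[i], corners2[j], n))
              for i in range(len(corners1))}
    best21 = {j: max(range(len(corners1)),
                     key=lambda i: ncc(I1, I2, corners1[i], corners2[j], n))
              for j in range(len(corners2))}
    return [(i, j) for i, j in best12.items() if best21[j] == i]
```

On two identical images the bidirectional check pairs each corner with itself; in practice the search would be restricted to the dl×dh search area described in claim 5 rather than all corners.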
The method according to claim 1, wherein performing folding change and coordinates transformation to the multiple original images, and determining the overlapping area of the multiple original images comprises: performing a basic image processing operation to the multiple original images, establishing a matching template of the images, performing a predetermined transformation to the images, extracting a set of feature points of the images, and determining the overlapping area of the multiple original images.

3. The method according to claim 2, wherein establishing the mathematical model of the multiple original images comprises: obtaining a corresponding position of the matching template or the set of the feature points of the multiple original images in the reference image, calculating each parameter value in the mathematical model according to the position, and establishing the mathematical model of the multiple original images.

4. The method according to claim 1, wherein the feature points are corners, and the specific way is a corner detection algorithm; and extracting the feature points of the multiple original images in the specific way comprises:

calculating a lateral first derivative and a longitudinal first derivative of each point in each original image of the multiple original images, and calculating a product of the lateral first derivative and the longitudinal first derivative to obtain three new images corresponding to the each original image, performing convolution with the each original image by using a 3×3 convolution kernel to obtain a partial derivative of the each point of the each original image, and performing Gaussian filtering to the three new images;

calculating an R value of each corresponding pixel of the each original image according to a corner response function, formula 1:

    R = Det(M) / (Trace(M) + ε),   (formula 1)

wherein Det(M) = λ1·λ2, Trace(M) = λ1 + λ2, M is a 2×2 symmetric matrix, λ1 and λ2 are the two eigenvalues of M, and ε is a number with small value;

selecting a proper window in the each original image, retaining the pixel with the maximum interest value in the window, deleting other pixels in the window, moving the window to perform screening on the whole each original image, selecting one or more point(s) with the maximum interest value(s) as local extreme point(s) according to a preset threshold, and using a boundary template to remove the corner(s) on boundary with low matching effect; and

performing sub-pixel locating of the corners by using a quadratic polynomial ax² + by² + cxy + dx + ey + f = R(x, y).

5. The method according to claim 4, wherein using the similarity measure NCC to extract the initial feature point pair(s) through the BGCC matching algorithm comprises:

establishing the similarity measure NCC according to formula 2:

    C_ij = Σ_{k=-n..n} Σ_{l=-n..n} [I1(u_i1+k, v_i1+l) − Ī1(u_i1, v_i1)] × [I2(u_j2+k, v_j2+l) − Ī2(u_j2, v_j2)]
           / [ (2n+1)(2n+1) · √( σ_i²(I1) × σ_j²(I2) ) ],   (formula 2)

wherein I1 and I2 are grey levels of two images, n×n is the size of the window; setting that the corners in a first image are d_i, wherein i = 1 … m, and the corners in a second image are d_j, wherein j = 1 … n, then (u_i1, v_i1) and (u_j2, v_j2) are respectively the ith feature point and the jth feature point to be matched in the two images; Ī(u, v) is the average grey level value of a corner window area, wherein

    Ī(u, v) = Σ_{i=-n..n} Σ_{j=-n..n} I(u+i, v+j) / [(2n+1)(2n+1)],

and the standard deviation σ of the window area is

    σ = √( Σ_{i=-n..n} Σ_{j=-n..n} I²(u+i, v+j) / [(2n+1)(2n+1)] − Ī²(u, v) );

selecting a related window with size of n×n centring on any corner in image I1, selecting a rectangular search area with size of dl×dh centring on the pixel in image I2 with the same coordinates as the given corner in the image I1, then calculating the correlation coefficient C_ij of the given corner in the image I1 and each corner in the search window area in the image I2, and taking the corner with the maximum correlation coefficient as a matching point of the given corner in the image I1, so as to obtain a set of the matching points;

selecting any corner in the image I2 as a given corner in the image I2, and searching for the corner with the maximum correlation coefficient in a corresponding window area in the image I1 as a matching point of the given corner in the image I2, so as to obtain another set of the matching points; and

searching for a pair of same matching corners in the obtained two sets of the matching points, and completing initial matching of the corners when confirming that the pair of same matching corners are matching and corresponding to each other.

6. The method according to claim 5, wherein before establishing the similarity measure NCC according to formula 2, the method further comprises: smoothing the images with a 7×7 median filter, and taking the result obtained from subtraction of the original images and the filtered images as the object for operation.

7.
The method according to claim 1, wherein the RANSAC algorithm comprises:

repeating random sampling for N times, wherein N is greater than or equal to 1;

randomly selecting 4 pairs of matching points, and linearly calculating the projection transformation matrix, wherein the 4 pairs of matching points ensure that any three points in the sample are not on a same straight line;

calculating a distance from each matching point to a corresponding matching point after transformation of the projection transformation matrix; and

calculating inliers of the projection transformation matrix based on a principle that the distance of the inliers is less than a distance threshold t, selecting a point set which includes the most inliers, and recalculating the projection transformation matrix in this inlier area, wherein the inliers are the points that meet an estimated parameter.

8. The method according to claim 1, wherein obtaining the space transformation relationship among the multiple original images according to the coordinate system of the reference image, selecting the image combining strategy, and completing the combining of the images comprises: calculating grey level values f(x, y) of pixels in the overlapping area of two images according to formula 3 and formula 4:

    f(x, y) = f1(x, y),                          (x, y) ∈ f1;
              f1(x, y),                          (x, y) ∈ (f1 ∩ f2), |f1 − f2| > door, d1 > d2;
              d1 × f1(x, y) + d2 × f2(x, y),     (x, y) ∈ (f1 ∩ f2), |f1 − f2| ≤ door;
              f2(x, y),                          (x, y) ∈ (f1 ∩ f2), |f1 − f2| > door, d1 ≤ d2;
              f2(x, y),                          (x, y) ∈ f2,   (formula 3)

wherein door is a grey level difference threshold, and d1 and d2 are the weighting coefficients of the two images in the overlapping area.
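The piecewise blending rule of formula 3 can be sketched per pixel. A minimal sketch assuming d1 and d2 are normalized weights summing to 1 and `door` is the grey-level difference threshold; the function name and argument order are illustrative, not the patent's code:

```python
def blend_pixel(f1, f2, d1, d2, door):
    """Blend one pixel of the overlapping area.

    Similar grey levels (|f1 - f2| <= door) are distance-weighted averaged;
    dissimilar ones keep the pixel from the image with the larger weight,
    avoiding ghosting where the two exposures disagree.
    """
    if abs(f1 - f2) <= door:
        return d1 * f1 + d2 * f2          # smooth transition inside the overlap
    return f1 if d1 > d2 else f2          # hard choice where images disagree
```

For example, with weights 0.7/0.3 and threshold 20, grey levels 100 and 110 average to 103, while 100 and 200 exceed the threshold and the heavier-weighted pixel wins.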
Patents cited by this patent (6)
Katayama Tatsushi (JP); Takiguchi Hideo (JP); Yano Kotaro (JP); Hatori Kenji (JP), Apparatus and method for combining a plurality of images.