The purpose of this study is to propose an optimal fusion method for aerial multi-sensor data to improve the accuracy of land cover classification. Recently, in the fields of environmental impact assessment and land monitoring, high-resolution image data have been acquired for many regions using aerial multi-sensors for quantitative land management, but most of these data are used only for the purpose of the original project. Hyperspectral sensor data, which are mainly used for land cover classification, offer high classification accuracy, but it is difficult to classify the land cover state accurately because only visible and near-infrared wavelengths are acquired and the spatial resolution is low. Therefore, there is a need for research that improves the accuracy of land cover classification by fusing hyperspectral sensor data with multispectral sensor and aerial laser sensor data. As fusion methods for aerial multi-sensor data, we propose a pixel ratio adjustment method, a band accumulation method, and a spectral graph adjustment method. Fusion parameters such as the fusion ratio, the number of accumulated bands, and the spectral graph expansion ratio were selected according to the fusion method, and fused data were generated and their land cover classification accuracy calculated while applying incremental changes to the fusion variables. Optimal fusion variables for hyperspectral, multispectral, and aerial laser data were derived by considering the correlation between land cover classification accuracy and the fusion variables.
Proposed Method
Since airborne LiDAR data consist of a single band, data fusion was performed for 6 cases, from 2 accumulations (2 bands in total) to 12 accumulations (12 bands in total), in order to obtain the optimal variable (Nl), and effective variables were derived. For the spectral graph height adjustment parameter (Sl) between the hyperspectral and airborne LiDAR data, data fusion was performed with the ratio of the standard deviation of the hyperspectral data to that of the airborne LiDAR data varied from 25 % to 175 %, and effective variables were derived.
In the case of multispectral data, since the base data consist of 3 bands, data fusion was performed for 6 cases, from 1 accumulation (3 bands in total) to 6 accumulations (18 bands in total), in order to obtain the optimal cumulative fusion parameter (Nm), and effective variables were derived. For the spectral graph height adjustment parameter (Sm) between the hyperspectral and multispectral data, data fusion was performed with 7 ratios of the standard deviation of the hyperspectral data to that of the multispectral data, from 25 % to 175 %, and effective variables were derived.
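The cumulative band fusion described above can be sketched in a few lines. This is a minimal illustration assuming the accumulation simply stacks the auxiliary sensor's bands onto the hyperspectral cube along the band axis; all names (`band_accumulation_fusion`, `hyper`, `aux`, `n`) are hypothetical, not from the paper.

```python
import numpy as np

def band_accumulation_fusion(hyper, aux, n):
    """Stack the auxiliary sensor's bands onto the hyperspectral cube n times.

    hyper : (rows, cols, B_h) hyperspectral cube
    aux   : (rows, cols, B_a) auxiliary data (1 band for LiDAR, 3 for multispectral)
    n     : number of times the auxiliary bands are accumulated
    """
    repeated = np.concatenate([aux] * n, axis=2)      # (rows, cols, n * B_a)
    return np.concatenate([hyper, repeated], axis=2)  # fused cube

# Example: a 1-band LiDAR layer accumulated 2 times onto a 10-band hyperspectral cube
hyper = np.zeros((4, 4, 10))
lidar = np.ones((4, 4, 1))
fused = band_accumulation_fusion(hyper, lidar, 2)
print(fused.shape)  # (4, 4, 12)
```

Under this reading, Nl = 12 would yield a 22-band cube for the LiDAR case, and Nm = 6 an equivalent 18 accumulated multispectral bands.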
For the spectral graph height adjustment, the multispectral data and the airborne LiDAR data are normalized to the statistical distribution and standard deviation of the hyperspectral data. Taking the spectral graph height of the normalized data as 100 %, the height (and hence the slope) of the spectral graph is reduced or enlarged, and the classification accuracy and correlation of the fused data are analyzed for each adjusted graph.
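A minimal sketch of this normalization and height adjustment, assuming the adjustment rescales each auxiliary band to the hyperspectral mean while scaling the standard deviation by the graph-height ratio S; the function and variable names are illustrative only:

```python
import numpy as np

def spectral_graph_adjust(aux_band, hyper_mean, hyper_std, s_ratio):
    """Normalize an auxiliary band to the hyperspectral statistics, then
    shrink or enlarge the spectral graph height by s_ratio (e.g. 0.25 to 1.75).

    aux_band   : 2D array from the multispectral or LiDAR sensor
    hyper_mean : target mean taken from the hyperspectral data
    hyper_std  : target standard deviation taken from the hyperspectral data
    s_ratio    : graph height ratio; 1.0 keeps the normalized height (100 %)
    """
    z = (aux_band - aux_band.mean()) / aux_band.std()  # zero mean, unit std
    return hyper_mean + z * hyper_std * s_ratio        # adjusted graph height

band = np.array([[10.0, 20.0], [30.0, 40.0]])
adjusted = spectral_graph_adjust(band, hyper_mean=100.0, hyper_std=5.0, s_ratio=1.5)
print(round(adjusted.std(), 2))  # 7.5, i.e. hyper_std * s_ratio
```

With s_ratio = 1.0 the auxiliary band matches the hyperspectral distribution exactly; values below or above 1.0 flatten or steepen the spectral graph, which is what the 25 % to 175 % sweep varies.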
Land cover classification was performed for each fusion method, and optimal variables were derived by considering classification accuracy and visual evaluation. To obtain the optimal parameters of the pixel ratio adjustment fusion of the hyperspectral and airborne LiDAR data, data fusion was performed with 7 fusion ratio variables (Rh:Rl), from 2:8 to 8:2, and effective variables were derived.
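The pixel ratio adjustment fusion can be sketched as a per-pixel weighted blend. Treating the ratio Rh:Rl as normalized blending weights is an assumption, and the names below are illustrative:

```python
import numpy as np

def pixel_ratio_fusion(hyper_band, lidar_band, r_h, r_l):
    """Blend two co-registered bands with the fusion ratio Rh:Rl (e.g. 2:8 ... 8:2)."""
    total = r_h + r_l
    return (r_h * hyper_band + r_l * lidar_band) / total

# Toy example: constant-valued 2x2 bands blended at an 8:2 ratio
h = np.full((2, 2), 100.0)
l = np.full((2, 2), 50.0)
print(pixel_ratio_fusion(h, l, 8, 2)[0, 0])  # 0.8*100 + 0.2*50 = 90.0
```

Sweeping (r_h, r_l) over (2, 8), (3, 7), ..., (8, 2) reproduces the 7 fusion ratio cases described above.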
In this paper, an optimal fusion method for aerial multi-sensor data, including hyperspectral, multispectral, and aerial laser sensor data, was studied. From the study, the following conclusions could be drawn.
Kyle et al. (2011) extracted candidate mosquito habitat sites by fusing multispectral and airborne LiDAR sensor data in order to prevent mosquito-borne diseases.
Second, optimal fusion methods and variables were derived by applying the three fusion methods to the hyperspectral data and aerial laser data.
Study Data
The study area is Ewangdong, Jung-gu, Incheon, covering an area of about 320,000 square meters. It has coastal terrain features such as beach sand, rock, and gravel, and inland terrain features such as buildings, roads, and forests. Hyperspectral, multispectral, and aerial laser sensor data were acquired over the same area.
Theory/Model
Reflection intensity data, a DTM (Digital Terrain Model), and height data of trees and buildings were extracted from the airborne LiDAR data, and a vegetation index was calculated from the multispectral data and converted into image data. Each image was used as an input band in the CART (Classification And Regression Tree) image classification method. When applied to land cover classification, this approach was useful for distinguishing vegetation and shaded areas.
Dalponte (2008) studied land cover classification of complex forest areas through the fusion of hyperspectral data, airborne LiDAR reflection intensity data, and a DTM. Land cover classification was performed on the hyperspectral data fused with the airborne LiDAR data using the SVM (Support Vector Machines) and GML (Gaussian Maximum Likelihood) classification methods.
Performance/Effects
Fig. 7 and Table 6 show the result of applying data fusion by band accumulation of the hyperspectral and multispectral data, namely the change of classification accuracy as the bands of the multispectral data are accumulated. The accuracy of AS and RG tended to increase gradually, and all items except WS, GR, and GL showed high accuracy of more than 90 %. The accuracy of WS, GR, and GL decreased, with a sharp drop of more than 20 %p for WS.
In the case of WS, the accuracy increased up to the 100 % ratio and then decreased. The accuracy of RG and GR showed a tendency to decrease sharply by more than 16 %p, while the accuracy of the other classification items generally increased. The classification accuracy of vegetation-related items increased, while that of non-vegetation-related items generally decreased.
Taking a correlation index above 0.9 as strong correlation, the CF, BF, and GR items showed strong correlation in the pixel ratio adjustment fusion, and the BF, DS, RG, WS, and GL items showed strong correlation in the band accumulation fusion. The WS item showed strong correlation in the spectral graph adjustment fusion.
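This strong-correlation screening can be reproduced with a Pearson correlation coefficient between the fusion-variable settings and the per-class accuracies; the numbers below are invented for illustration, not taken from the paper's tables:

```python
import numpy as np

# Hypothetical accuracies (%) of one class at increasing fusion-variable settings
fusion_var = np.array([1, 2, 3, 4, 5, 6], dtype=float)   # e.g. accumulation count
accuracy   = np.array([81.0, 84.5, 86.0, 88.2, 90.1, 91.5])

r = np.corrcoef(fusion_var, accuracy)[0, 1]  # Pearson correlation coefficient
print(r > 0.9)  # classes with correlation above 0.9 count as strongly correlated
```

A class whose accuracy tracks the fusion variable this closely is one for which tuning that variable reliably changes classification accuracy, which is what the strong-correlation label flags.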
In the case of pixel ratio adjustment fusion, the accuracy decreased overall. Taking a correlation index above 0.9 as strong correlation, the CF, BF, DS, RG, AS, and OB items showed strong correlation, and the CF, BF, DS, RG, AS, OB, and GR items showed strong correlation in the band accumulation fusion. The RG, OB, and GR items showed strong correlation in the spectral graph adjustment fusion.
As a result of the land cover classification of the fused data, the classification accuracy was lower than that of the other fusion results. Considering the band accumulation fusion and spectral graph adjustment fusion methods, accuracy improvement was expected for CF, DS, AS, and GL, but low accuracy was estimated for GR.
Fourth, as a result of analyzing the correlation of the accuracy with the fusion method and fusion variable, a correlation of more than 0.9 was shown for deciduous forest, dry sand, rock gravel, wet sand, asphalt, and building. This shows that the classification accuracy of land cover can be improved with a proper choice of fusion method and variables.
Fig. 6 and Table 5 show the results of applying pixel ratio adjustment fusion to the hyperspectral and multispectral data, namely the change of classification accuracy by item as the fusion ratio of the multispectral data increases. RG increased by more than 30 %p; the increase in AS was small up to the 5:5 fusion ratio but rose rapidly at 3:7; and WA, BB, and OB showed more than 90 % accuracy regardless of the fusion ratio. BF and GL showed a gradually decreasing classification accuracy.
Second, the land cover classification accuracy decreased with the pixel ratio adjustment method, while it improved with the band accumulation method and the spectral graph adjustment method. Therefore, to improve the accuracy of land cover classification, a band-based adjustment method should be used rather than a pixel-based adjustment method.
As the fusion ratio of the airborne LiDAR data increased, the accuracy of BF and GR decreased rapidly. The accuracy of AS and WS increased overall, and WA, BB, and OB showed more than 90 % accuracy regardless of the fusion ratio. The changes for RG and DS were small, but their accuracy gradually decreased, and the accuracy of both vegetation- and non-vegetation-related items tended to decrease.
The higher the ratio of the multispectral data, the larger the change in classification accuracy. The accuracy of RG increased sharply by more than 18 %p, while DS showed a sharp decrease of more than 27 %p. The accuracy of AS and GL was high, but that of WS decreased; the classification items related to vegetation generally increased in accuracy, while non-vegetation-related items generally decreased.
By fusing the airborne LiDAR data with the hyperspectral data and classifying with the SVM (Support Vector Machines) and GML (Gaussian Maximum Likelihood) methods, the classification accuracy increased by more than 2.6 % on average compared to the result before fusion. In addition, the classification accuracy between similar classification items could be improved by using hyperspectral data.
Future Research
A fusion method based on the results of this study will enable quality improvement of land cover mapping projects. The accuracy of land cover classification can be improved further through follow-up research, including methods to improve the accuracy of the vegetation-related land cover items whose accuracy decreased, analysis of seasonal fusion characteristics, and fusion methods for mid-infrared and thermal infrared data.
References (12)
Ali, S. S., Dare, P. and Jones, S. D. (2008), Fusion of remotely sensed multispectral imagery and LiDAR data for forest structure assessment at the tree level, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 37, No. 2, pp. 1089-1094.
Ashmawy, N., Shaker, A. and Yan, W. Y. (2011), Pixel vs object-based image classification techniques for LiDAR intensity data, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVIII-5/W12, pp. 43-48.
Dalponte, M. (2008), Fusion of hyperspectral and LiDAR remote sensing data for classification of complex forest areas, IEEE Transactions on Geoscience and Remote Sensing, Vol. 46, No. 5, pp. 1416-1427.
Elaksher, A. (2008), Fusion of hyperspectral images and LiDAR based DEMs for coastal mapping, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 38, pp. 725-730.
Jang, S.J. (2006), A Study of Automated Production and Update Method for Land Cover/Land Use Using Hyperspectral Satellite Image, Ph.D. dissertation, Kyunghee University, Seoul, Korea, 111p.
Land cover classification using aerial hyperspectral imagery, Master's Thesis, Kumoh National Institute of Technology, Gyeongsangbuk-do, Korea, 82p.
KHOA (2011), Report on the coastal survey in the west sea and islands 11-1611234-000206-0, Ministry of Land, Transport and Maritime Affairs, KHOA (Korea Hydrographic and Oceanographic Agency), Incheon, Korea, pp. 348-365.
Kim, S.H. (2013), A Study on the Improvement of Aerial Hyperspectral Image Classification Accuracy Using PCA, Master's thesis, Kyonggi University, Gyeonggi-do, Korea, 43p.
Kwon, O.S. (2014), Improvement of Land Cover Classification Accuracy by Optimal Fusion of Aerial Multi-Sensor Data, Ph.D. dissertation, Incheon National University, Incheon, Korea, 180p.
Kwon, O.S., Kim, S.S. and Back, S.Y. (2014), A Study on Hyperspectral Image Classification Accuracy Improvement using Multispectral Data Fusion, Korean Society for GeoSpatial Information Science, 15-16 May, Jeju, Korea, pp. 119-120.
Kyle, A. H., Katheryn, I. L. and Willem, J. D. (2011), Fusion of high resolution aerial multispectral and LiDAR data: land cover in the context of urban mosquito habitat, Remote Sensing, Vol. 3, No. 11, pp. 2364-2383.
Lee, J. H. (2013), Necessity and Implementation Plan of Coastal Waters, The Hydrographic Society of Korea, Vol. 2, No. 2, pp. 3-14.