Method for extracting information from UAV RGB images by combining matched point clouds with HSI color components

    • Abstract: Unmanned aerial vehicles (UAVs) typically carry visible-band sensors that acquire Red-Green-Blue (RGB) images. Because UAV RGB images contain only a few spectral bands, extracting land-cover information from them is difficult. This study proposes an information extraction method for UAV RGB images that combines a matched point cloud with the color components of the Hue-Saturation-Intensity (HSI) color space. First, a saturation-red ratio index was constructed from the saturation component and the red band; it was then combined with the visible-band difference vegetation index and the terrain features derived from the matched point cloud to classify the orthophoto of the study area. The experimental results show that the proposed method achieved an overall classification accuracy of 91.11% with a Kappa coefficient of 0.895, demonstrating that combining a matched point cloud with HSI color components is a feasible way to extract information from UAV RGB images with high accuracy. Compared with traditional methods based only on spectral features, the matched point cloud allows land-cover types with obvious elevation differences to be extracted simply and efficiently, while the HSI color components effectively compensate for the limited spectral information of UAV RGB images.
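The two color indexes named above can be sketched directly from an RGB array. The snippet below is a minimal NumPy illustration, not the paper's implementation: the HSI saturation component uses the standard formulation S = 1 - 3·min(R,G,B)/(R+G+B), the visible-band difference vegetation index uses its commonly published form (2G - R - B)/(2G + R + B), and, because the exact formula of the saturation-red ratio index is not stated in the abstract, a plain ratio S/R is assumed purely for illustration.

```python
import numpy as np

def hsi_saturation(rgb):
    """Saturation component of the HSI color space for an RGB image
    (array of shape (H, W, 3), values scaled to [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-6                              # avoid division by zero
    return 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / total

def vdvi(rgb):
    """Visible-band difference vegetation index: (2G - R - B) / (2G + R + B)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2 * g - r - b) / (2 * g + r + b + 1e-6)

def saturation_red_ratio(rgb):
    """Saturation-red ratio index. The abstract only says it is built from the
    saturation component and the red band, so a plain ratio S / R is assumed
    here as an illustration."""
    s = hsi_saturation(rgb)
    r = rgb[..., 0]
    return s / (r + 1e-6)
```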

       

      Abstract: Unmanned Aerial Vehicle (UAV) remote sensing is widely used to capture large-area, highly overlapping, high-spatial-resolution images of the ground at low cost. However, UAV sensors usually cover only the visible range (red, green, and blue), so only Red-Green-Blue (RGB) images are obtained, and the small number of bands makes it difficult to extract land-cover information from them. Taking a UAV RGB orthophoto as the research object, an information extraction method was proposed that integrates a matched point cloud with the color components of the Hue-Saturation-Intensity (HSI) color space. Firstly, highly overlapping images of the ground were acquired by UAV, a matched point cloud was generated by dense matching, and a digital surface model (DSM) was produced from it. Ground and non-ground points in the study area were then separated by cloth simulation filtering, the ground points were interpolated to generate a digital elevation model (DEM), and a normalized digital surface model (nDSM) was obtained as the difference between the DSM and the DEM. Secondly, a saturation-red ratio index was established from the color components of the HSI color space and the visible bands, and bare land was extracted with it to verify the index. Thirdly, an expansion index was constructed by multiplying the saturation-red ratio index by the standard deviation of the red band in order to extract sediment-laden water. Finally, two models and three indexes, namely the DEM, the nDSM, the visible-band difference vegetation index, the saturation-red ratio index, and the expansion index, were used to classify the UAV RGB orthophoto of the study area, and an experiment was carried out to verify the extraction. The results showed that the saturation-red ratio index achieved high accuracy in extracting bare land from UAV RGB images, and the nDSM interpolated from the matched point cloud provided the key feature for distinguishing trees from grassland. The river and the bare land were successfully extracted using the saturation-red ratio and expansion indexes derived from the HSI color components, together with the DEM features obtained from the matched point cloud. The overall accuracy of the rule-based classification was 91.11%, with a Kappa coefficient of 0.895; the highest producer's and user's accuracies were obtained for grassland and river, reaching 100% and 99.15%, respectively. Moreover, the high-precision DEM, DSM, and nDSM clearly captured the elevation differences in the scene, and the saturation-red ratio and expansion indexes, composed of the visible bands and the saturation component, were particularly effective for extracting bare land and sediment-laden water. Consequently, combining a matched point cloud with the color components of the HSI color space is a feasible way to extract information from UAV RGB orthophotos.
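The classification step combines the two elevation models derived from the matched point cloud with the three indexes. The sketch below is a minimal NumPy illustration of how that could be wired together, not the paper's implementation: it computes the nDSM as DSM minus DEM and applies a simple set of threshold rules. The thresholds, the rule order, the label set, and the use of a scene-wide red-band standard deviation in the expansion index are all assumptions, since the abstract does not give the actual decision rules.

```python
import numpy as np

def normalized_dsm(dsm, dem):
    """nDSM = DSM - DEM: per-pixel object height above ground (co-registered rasters)."""
    return dsm - dem

def rule_based_classify(ndsm, vdvi, srr, red_band,
                        height_thr=2.0, veg_thr=0.05, bare_thr=1.0, water_thr=1.5):
    """Hypothetical rule-based labelling combining the nDSM with the color
    indexes (VDVI and the saturation-red ratio index, passed in as rasters).
    Thresholds and rule order are illustrative only.
    Labels: 0 other, 1 tree, 2 grass, 3 bare land, 4 sediment-laden water."""
    expansion = srr * red_band.std()            # expansion index: SR ratio x red-band std. dev.

    labels = np.zeros(ndsm.shape, dtype=np.uint8)
    veg = vdvi > veg_thr
    labels[veg & (ndsm >= height_thr)] = 1      # tall vegetation -> trees
    labels[veg & (ndsm < height_thr)] = 2       # low vegetation  -> grass
    labels[~veg & (srr > bare_thr)] = 3         # bright bare land
    labels[~veg & (expansion > water_thr)] = 4  # sediment-laden water
    return labels
```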

       
