An improved YOLOv8 model for segmentation of adhesive rice images

    Abstract: Rice appearance quality inspection based on machine vision is widely applied, but in practice grain adhesion often makes segmentation results unstable and inaccurate; existing methods are prone to over-segmentation or blurred boundaries on naturally scattered, adhered rice grain images. To address this, an improved adhesive-rice segmentation model, GT-YOLOv8, is proposed based on YOLOv8. A new backbone, GTNet, is designed with a larger spatial receptive field and dynamic convolution capability to strengthen edge-feature extraction and overall segmentation efficiency. A multi-scale attention mechanism is also integrated, and the WIoU loss function is introduced to further improve feature extraction and global information modeling. To validate the model, an adhesive-rice instance segmentation dataset, STR-900, was constructed, covering multiple rice varieties, various lighting conditions, and different degrees of adhesion. Experimental results show that the GTNet backbone achieves performance comparable to a ConvNeXt backbone while using substantially fewer parameters, demonstrating a favorable balance between lightweight design and efficiency. Notably, the GT-YOLOv8 model built on GTNet performs strongly on instance segmentation of adhered rice grains, improving Mask mAP by 5.93 percentage points over the original YOLOv8 and markedly enhancing segmentation accuracy and edge clarity for adhered grains. Overall, GT-YOLOv8 achieves high segmentation accuracy and strong generalization under complex adhesion conditions, indicating promising application potential.
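The abstract names WIoU as the bounding-box regression loss. As an illustration only, the sketch below implements WIoU v1 (Tong et al., 2023) for axis-aligned boxes in plain Python; the abstract does not specify which WIoU version the paper uses, and the function names here are the author's own, not from the paper. In v1, the plain IoU loss is scaled by a distance-based focusing factor computed from the box centers and the smallest enclosing box (whose size is detached from the gradient in the original formulation; irrelevant here since no autograd is involved).

```python
import math

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def wiou_v1(pred, target):
    """WIoU v1 loss: R_wiou * (1 - IoU).

    R_wiou = exp(((cx_p - cx_t)^2 + (cy_p - cy_t)^2) / (Wg^2 + Hg^2)),
    where (Wg, Hg) is the size of the smallest box enclosing both inputs.
    """
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_t, cy_t = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    wg = max(pred[2], target[2]) - min(pred[0], target[0])
    hg = max(pred[3], target[3]) - min(pred[1], target[1])
    r = math.exp(((cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2) / (wg ** 2 + hg ** 2))
    return r * (1.0 - iou(pred, target))
```

For perfectly overlapping boxes the focusing factor is 1 and the loss is 0; for misaligned boxes the factor exceeds 1, so WIoU penalizes center offset more heavily than the plain IoU loss does, which is the property the paper leverages for tightly adhered grains.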

       
