Aug 26 – 30, 2024
The Couvent des Jacobins
Europe/Paris timezone

Deep learning with limited data availability: self-supervised learning for crop yield prediction using RGB drone imagery

Aug 30, 2024, 10:00 AM
15m
Les Horizons (2nd floor) (The Couvent des Jacobins)

Les Horizons (2nd floor)

The Couvent des Jacobins

Rennes, France
Oral | Synergies of technologies | Digital & AI

Speaker

Stefan Stiller (Research Platform “Data Analysis & Simulation”, Leibniz Centre for Agricultural Landscape Research (ZALF), 15374 Müncheberg, Germany; Environment and Natural Sciences, Brandenburg University of Technology Cottbus‐Senftenberg, 03046 Cottbus, Germany)

Description

  1. Introduction:
    Deep learning-based methods have shown success in predicting crop yield. However, it remains challenging to train a deep learning model that predicts crop yield effectively from only a few labeled observations, especially across small, highly heterogeneous agricultural fields. Self-supervised learning (Liu et al., 2021) is an emerging technique for addressing this challenge, but its potential for crop yield prediction has not yet been examined. This study therefore investigates the synergistic potential of self-supervised learning and RGB drone imagery for yield prediction across multiple crop types.

  2. Materials and Methods:
    Our study was conducted at the patchCROP agricultural landscape lab in Brandenburg, Germany, examining four summer crops (lupine, maize, soy, and sunflower) in 2020 (Grahmann et al., 2024). The research utilized high-resolution (~2.2 cm) RGB images from UAVs, capturing the crops at growth stages from late fruit development to ripening, across diverse small field arrangements (0.5 ha each). In each field, a combine harvester recorded around 120 yield points. We employed the self-supervised learning algorithm VICReg (Bardes et al., 2022) to train a deep learning model on a dataset comprising the four crop types and multiple fields (number of fields: lupine = 3, maize = 6, soy = 2, sunflower = 3), learning key morphological patterns without labels. Subsequently, we adapted the same model to the task of crop yield prediction. Its prediction performance was compared with that of a conventional supervised baseline model.
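
    A minimal PyTorch sketch of this pretraining-then-adaptation workflow follows; the ResNet-18 backbone, expander dimensions, augmentations, loss weights, and regression head are illustrative assumptions, not the exact configuration used in the study.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F
      from torchvision import models, transforms

      def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
          n, d = z_a.shape
          # Invariance: two augmented views of the same patch should map to similar embeddings.
          sim = F.mse_loss(z_a, z_b)
          # Variance: keep the std of each embedding dimension above 1 to prevent collapse.
          std_a = torch.sqrt(z_a.var(dim=0) + eps)
          std_b = torch.sqrt(z_b.var(dim=0) + eps)
          var = F.relu(1.0 - std_a).mean() + F.relu(1.0 - std_b).mean()
          # Covariance: push off-diagonal covariances toward zero to decorrelate dimensions.
          za, zb = z_a - z_a.mean(dim=0), z_b - z_b.mean(dim=0)
          cov_a, cov_b = (za.T @ za) / (n - 1), (zb.T @ zb) / (n - 1)
          off_diag = lambda m: m - torch.diag(torch.diag(m))
          cov = off_diag(cov_a).pow(2).sum() / d + off_diag(cov_b).pow(2).sum() / d
          return sim_w * sim + var_w * var + cov_w * cov

      # Backbone and expander (projector dimensions are illustrative).
      backbone = models.resnet18(weights=None)
      backbone.fc = nn.Identity()  # 512-d features from RGB drone patches
      expander = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 1024))

      # Random augmentations applied to unlabeled patches (all crop types pooled, no yield labels).
      augment = transforms.Compose([
          transforms.RandomResizedCrop(224),
          transforms.RandomHorizontalFlip(),
          transforms.ColorJitter(0.4, 0.4, 0.4),
      ])

      # One self-supervised pretraining step on a dummy batch standing in for drone patches.
      x = torch.rand(8, 3, 224, 224)
      loss = vicreg_loss(expander(backbone(augment(x))), expander(backbone(augment(x))))

      # Adaptation: reuse the pretrained backbone and train a small regression head
      # on the ~120 labeled yield points available per field.
      yield_head = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1))
      predicted_yield = yield_head(backbone(x))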

  3. Results:
    A key empirical finding was that self-supervised learning could distinguish between crop types based on morphological features without any labels. For crop yield prediction, we evaluated model performance in two ways. When observations and predictions were compared across all crop types pooled together, the self-supervised model showed high prediction performance (Pearson's r = 0.83). When performance was evaluated for each crop type separately, it decreased (lupine, r = 0.40; maize, r = 0.78; soy, r = 0.15; sunflower, r = 0.43) but remained substantially better for three of the four crop types than that of the conventional supervised model (lupine, r = 0.07; maize, r = 0.90; soy, r = 0.07; sunflower, r = 0.24). Overall, the median r was 0.42 for the self-supervised model and 0.16 for the supervised model.
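
    A minimal sketch of the two evaluation modes (pooled and per-crop Pearson's r) on synthetic stand-in data follows; the column names and generated values are illustrative placeholders, not the study's measurements.

      import numpy as np
      import pandas as pd
      from scipy.stats import pearsonr

      # Synthetic stand-in for the yield-point table (observed vs. predicted yield per crop).
      rng = np.random.default_rng(0)
      crop = np.repeat(["lupine", "maize", "soy", "sunflower"], 30)
      observed = rng.normal(5.0, 2.0, size=crop.size)              # placeholder yields
      predicted = observed + rng.normal(0.0, 1.0, size=crop.size)  # placeholder predictions
      df = pd.DataFrame({"crop": crop, "observed": observed, "predicted": predicted})

      # Evaluation 1: pooled Pearson's r over all observations, regardless of crop type.
      r_pooled, _ = pearsonr(df["observed"], df["predicted"])

      # Evaluation 2: Pearson's r per crop type, plus the median over crop types.
      r_per_crop = {c: pearsonr(g["observed"], g["predicted"])[0] for c, g in df.groupby("crop")}
      r_median = float(np.median(list(r_per_crop.values())))

      print(f"pooled r = {r_pooled:.2f}, per-crop r = {r_per_crop}, median r = {r_median:.2f}")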

  4. Discussion:
    Our study demonstrates the promising potential of self-supervised learning in diversified agriculture. We showed that self-supervised learning can exploit large, unlabeled image datasets combined across different crop and management types to discover key morphological patterns, and that the resulting model can then predict crop yield across crop types with good accuracy. This finding is important because deep learning models are currently developed independently for each crop and management type. Self-supervised learning can unify these efforts into a more generalized model applicable across a variety of cases. However, we also identified inconsistencies in prediction accuracy across and within crop types, emphasizing the importance of careful model evaluation and further development. Our findings advocate the use of self-supervised learning to overcome data limitations and improve predictive modeling in small-scale agriculture.

  5. References:
    Bardes, A., Ponce, J., and LeCun, Y. (2022). VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning. doi: 10.48550/arXiv.2105.04906
    Grahmann, K., Reckling, M., Hernández-Ochoa, I., Donat, M., Bellingrath-Kimura, S., and Ewert, F. (2024). Co-designing a landscape experiment to investigate diversified cropping systems. Agricultural Systems 217, 103950. doi: 10.1016/j.agsy.2024.103950
    Liu, X., Zhang, F., Hou, Z., Wang, Z., Mian, L., Zhang, J., et al. (2021). Self-supervised Learning: Generative or Contrastive. IEEE Trans. Knowl. Data Eng., 1–1. doi: 10.1109/TKDE.2021.3090866

Keywords: Deep Learning; Small Data; Yield Prediction; UAV; Self-Supervised

Primary author

Stefan Stiller (Research Platform “Data Analysis & Simulation”, Leibniz Centre for Agricultural Landscape Research (ZALF), 15374 Müncheberg, Germany; Environment and Natural Sciences, Brandenburg University of Technology Cottbus‐Senftenberg, 03046 Cottbus, Germany)

Co-author

Prof. Masahiro Ryo (Research Platform “Data Analysis & Simulation”, Leibniz Centre for Agricultural Landscape Research (ZALF), 15374 Müncheberg, Germany; Environment and Natural Sciences, Brandenburg University of Technology Cottbus‐Senftenberg, 03046 Cottbus, Germany)

Presentation materials

There are no materials yet.