

Legume Genomics and Genetics, 2025, Vol. 16, No. 4
Received: 28 Jun., 2025 Accepted: 15 Aug., 2025 Published: 30 Aug., 2025
Peas, as a globally significant legume crop, play a crucial role in food production and human nutrition. However, stresses such as field pests and diseases, drought, high temperature, and nutrient deficiency seriously affect pea yield and quality, so timely and efficient stress monitoring is of great significance for safeguarding pea production. This study reviews the advantages of unmanned aerial vehicle (UAV) remote sensing platforms in farmland stress monitoring, including high-throughput phenotypic acquisition and the roles of multispectral, RGB, and thermal infrared sensors. It also analyzes the applicability of commonly used deep learning models (CNN, RNN, Transformer) in crop stress detection, reviews the application progress of image classification, object detection, and semantic segmentation in identifying crop stress types, and constructs a technical framework combining UAVs with deep learning. The data collection and annotation process, data preprocessing and augmentation methods, and the pipeline for fusing multi-source images with deep learning models are described. The identification, severity grading, and spatial distribution visualization of physiological stresses (drought, nutrient deficiency, high temperature) and biological stresses (diseases, pests) are the main focus. A case study of field stress detection in peas verifies the recognition performance of deep learning models and their application value in field management decisions. The combination of UAV remote sensing and deep learning can achieve precise detection of field stress in peas; this study aims to provide strong support for precision agricultural management.
1 Introduction
Pea (Pisum sativum L.) is one of the world's important legume crops and holds a significant position in the human diet. It is rich in protein, vitamins, and minerals, supplying valuable plant protein and a variety of nutrients. Peas are grown in almost all countries, with Canada, China, Russia, and India having the highest production. In China, peas are not only an important food and vegetable crop but are also widely used in food processing and feed, giving them high economic and nutritional value (Yue et al., 2023). However, with the expansion of planting area and the intensification of climate change, pea production faces many challenges; in particular, the risk of yield loss caused by various field stresses has become increasingly prominent (Chaloner et al., 2021). Studies show that climate change increases the risk of disease and pest outbreaks, which in turn threatens the yields of major crops. Therefore, monitoring and mitigating biotic and abiotic stresses in pea fields is of great significance for ensuring food security and enhancing agricultural production.
During the growth of peas, various stresses are often encountered; common ones include pests and diseases, drought, high temperature, and nutrient deficiency. Pests and diseases can cause spots or yellowing on leaves and, in severe cases, early leaf drop or pod deformation, ultimately affecting yield and quality. For instance, fungal diseases can disrupt photosynthesis, leading to poor grain development, while aphids and other pests suck sap and can also spread viruses, reducing the number of pods (Mahajan et al., 2018). Drought can cause plants to wilt and their growth to be hindered; in severe cases, yield may drop by more than half. High temperatures can affect flowering and pollination, increasing the number of empty pods. Insufficient nitrogen leads to stunted plants, yellowing leaves, and reduced grain protein content and yield. Field experiments have found that different stresses affect yield differently, but drought and salinization are often the main factors restricting pea growth. Moreover, multiple stresses often occur simultaneously, making the losses even more severe (Mazhar et al., 2023). In the context of precision agriculture, more efficient methods are needed to achieve early detection and accurate localization of stress. In recent years, the development of unmanned aerial vehicle (UAV) remote sensing has made large-scale, low-cost field monitoring possible, and combined with deep learning image recognition, the accuracy and degree of automation of stress detection can be greatly improved (Subeesh and Chauhan, 2025). The combination of UAVs and deep learning has now been widely applied to monitoring pests, diseases, and environmental stress, becoming a hot topic in agricultural research. This trend is of great significance for enhancing the stress resilience and refined management of crops such as peas.
This study explores the feasibility and effectiveness of combining UAV remote sensing and deep learning to detect field stresses in peas (including pests, diseases, and drought), attempts to construct a corresponding technical system, and analyzes its application prospects in smart agriculture. It introduces the advantages of the UAV remote sensing platform in field monitoring, such as high-throughput collection of phenotypic data and acquisition of multi-sensor information. It summarizes the application progress of deep learning in crop stress detection and compares the applicability of different models and image-analysis methods. A technical framework combining UAVs with deep learning is proposed, covering data collection and preprocessing through model training and fusion. For different stress types, the study also discusses the key points of identifying and monitoring physiological stresses such as drought, nutrient deficiency, and high temperature, as well as biological stresses such as diseases and pests, including severity classification and spatial-distribution visualization. Through case studies, it demonstrates the practical effect of this technology in pea field stress detection and its supporting role in field management and decision-making. The study aims to provide a reference for field stress monitoring of peas and other crops and to promote the application of UAVs and artificial intelligence in smart agriculture.
2 Advantages of UAV Remote Sensing Platforms in Field Monitoring
2.1 Application characteristics of UAVs in high-throughput crop phenotyping
Unmanned aerial vehicle (UAV) remote sensing is an emerging method for crop phenotypic monitoring, featuring flexibility, high resolution, and relatively low cost. Compared with traditional manual surveys or satellite remote sensing, UAVs can fly at low altitude multiple times during the crop growing season to quickly obtain canopy information over large areas of farmland (Yue et al., 2023). For instance, with a flight route planned in advance, a UAV can image an experimental field in just a few minutes, yielding data such as leaf area index, canopy coverage, and plant height with high accuracy. This high-throughput capacity enhances the efficiency of breeding and field management and enables farmers and researchers to monitor crop growth in near real time. UAVs can also fly at appropriate times and angles as needed, such as during the high-incidence period of a disease or at midday when shadows are minimized, thereby enhancing the monitoring effect.
In recent years, UAVs have been widely used in crop phenomics research, such as rapid screening of stress-resistant varieties and high-throughput measurement of plant traits. In stress monitoring especially, UAVs can sensitively capture the early phenotypic changes of crops under stress, buying precious time for timely intervention. It should be noted that data acquired from the UAV platform also provide rich training samples for deep learning models, helping the models learn the characteristics of crop stress against complex field backgrounds. The application potential of UAVs in high-throughput phenotyping and stress monitoring is therefore huge, laying a foundation for precise and efficient crop phenotypic monitoring.
2.2 Role of multispectral, RGB, and thermal infrared sensors in data acquisition for pea fields
Drones can be equipped with various sensors, which enables them to collect information from multiple aspects. RGB cameras are the most commonly used. They can take color photos and directly reflect the color and shape of leaves. For instance, when peas are sick or lack nutrients, their leaves will turn yellow or develop spots. RGB images can record these situations. However, RGB images are easily affected by lighting and cannot be quantitatively analyzed well. Multispectral cameras can capture light in specific bands, such as blue light, red light and near-infrared light. Through these bands, vegetation indices can be calculated to quantify the health status of plants. NDVI and red edge positions are sensitive to nitrogen levels and growth status and can be used to detect nutrient deficiency or early diseases (Osco et al., 2020). In pea fields, multispectral imaging can help identify areas with poor growth or abnormal leaves, suggesting possible stress. Thermal infrared cameras are mainly used to monitor moisture conditions. During drought, stomata close, transpiration decreases, and the temperature of leaves rises. Thermal imaging can clearly detect this change. In orchards and farmlands, thermal infrared remote sensing has been used to monitor moisture and guide irrigation. For peas, thermal imaging can also detect drought signs in a timely manner, enabling diagnosis without contact (Meerdink et al., 2025).
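As a minimal illustration of how such vegetation indices are derived from multispectral bands, NDVI can be computed pixel-wise as (NIR − Red)/(NIR + Red). The sketch below is a generic NumPy implementation with hypothetical reflectance values, not data from any cited study:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Guard against division by zero over very dark (shadow) pixels
    return np.where(denom > 0, (nir - red) / np.where(denom > 0, denom, 1.0), 0.0)

# Hypothetical 2x2 reflectance patches: healthy canopy reflects strongly in NIR
nir = np.array([[0.60, 0.55], [0.20, 0.05]])
red = np.array([[0.10, 0.12], [0.15, 0.05]])
print(np.round(ndvi(nir, red), 2))  # healthy pixels score high (~0.71), stressed/bare pixels near 0
```

Water- or red-edge-sensitive indices follow the same pattern with different band pairs.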
2.3 Advantages and limitations of UAV monitoring in spatial and temporal resolution
In terms of spatial and temporal resolution, UAVs have obvious advantages over traditional methods. Spatially, UAVs can fly at low altitudes and obtain centimeter-level images, so small patches of disease or pest damage in the field can be observed, whereas satellite images often lack sufficient resolution. For instance, a poorly growing plot or a few scattered diseased plants can be clearly identified in UAV images. Temporally, UAVs can take off at almost any time without waiting for a satellite overpass; they can conduct flight monitoring immediately during critical growth stages or after adverse weather, and can even fly multiple times a day to track changes in stress conditions. This is particularly useful when diseases or pests spread rapidly.
But UAVs also have drawbacks. First, coverage is limited: a single flight usually covers only tens to hundreds of hectares, unlike satellites that can cover tens of thousands of hectares at once, so large-scale applications require multiple flights or multiple UAVs. Second, UAVs are strongly affected by the weather: strong winds and rainfall can restrict flight, and changing illumination even on clear days can affect image quality, which must be corrected in data processing (Sandino et al., 2022). For instance, direct sunlight and shadows can interfere with soil-moisture monitoring in thermal infrared images, requiring an illumination correction model. Third, the data processing volume is large: a single flight may generate hundreds of high-resolution images that require stitching, correction, and analysis. With the development of computer vision and deep learning, however, this problem is gradually being solved, as new algorithms can process such big data faster and more accurately.
3 Applications of Deep Learning Techniques in Crop Stress Detection
3.1 Suitability analysis of common deep learning models (CNN, RNN, Transformer)
The success of deep learning in computer vision also provides a powerful tool for crop stress detection. The commonly used deep learning models at present mainly include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and the more recent Transformer architecture. These models each have their own structural characteristics and play different roles in the analysis of agricultural remote sensing images. CNNs have become one of the preferred models for crop stress recognition owing to their excellent image feature extraction ability. A CNN can automatically learn spatial features in images, such as lesion shape, texture patterns, and color distribution, and has achieved remarkable results in pest and disease image recognition. For example, for leaf lesion detection, a CNN can extract low-level features such as lesion edges and size, as well as more abstract disease patterns, thereby achieving high-precision classification of healthy versus diseased leaves. In pea stress detection, CNNs can be expected to identify leaf wilting and curling caused by drought, or abnormal color and texture due to disease (Kamarudin and Ismail, 2022). RNNs are adept at handling time-series information; typical structures include long short-term memory networks (LSTMs) and gated recurrent units (GRUs). In farmland stress monitoring, RNNs are often used to analyze multi-temporal remote sensing data or time-series indicators. For instance, by feeding vegetation-index sequences extracted from UAV images at different phases of the growing season into an RNN model, the onset time and trend of stress can be dynamically predicted.
Some studies have successfully achieved early detection of soybean damping-off disease by fusing multi-temporal satellite images with a gated recurrent unit (GRU) model. Similarly, if multiple rounds of UAV monitoring data are obtained for pea fields, an RNN model can help capture how stress evolves over time, improving recognition accuracy and stability. It should be noted that the RNN's ability to capture long-term dependencies gives it a unique advantage in disaster early warning and growing-season analysis. The Transformer is a deep learning architecture that has emerged in recent years; based on the self-attention mechanism, it can efficiently model global correlations within a sequence. The Vision Transformer (ViT) applies the Transformer to image analysis and has demonstrated potential in tasks such as plant disease detection. Compared with the local convolutions of a CNN, the Transformer can attend to global feature relationships across the image, which helps identify stress patterns over a larger extent. For instance, in crop disease classification, Transformer-based models can achieve results comparable to or even better than CNNs with limited samples, with stronger interpretability. For stress patches distributed against the complex background of pea fields, the Transformer model can be expected to better distinguish crops from soil, shadows, and other elements, improving segmentation and classification accuracy (Subeesh and Chauhan, 2025).
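To make the self-attention mechanism concrete, the sketch below implements generic single-head scaled dot-product attention over a handful of hypothetical patch embeddings in NumPy; it illustrates the operation itself, not a model from the cited studies:

```python
import numpy as np

def self_attention(x: np.ndarray, wq: np.ndarray, wk: np.ndarray, wv: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention over a patch sequence.

    x: (n_patches, d_model) embeddings, e.g. flattened image patches.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # pairwise patch affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: each patch attends to all patches
    return weights @ v                                 # globally mixed representation

rng = np.random.default_rng(0)
d = 8
patches = rng.normal(size=(4, d))                      # 4 hypothetical patch embeddings
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(patches, wq, wk, wv)
print(out.shape)  # (4, 8)
```

Because every patch attends to every other patch, a stressed region can borrow context from distant canopy, soil, or shadow pixels, which is the global-modeling property discussed above.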
3.2 Applications of image classification, object detection, and semantic segmentation in stress identification
The main task forms of deep learning in imaging include image classification, object detection, and semantic segmentation, each with its own strengths in crop stress recognition. Image classification assigns an entire image to a predefined type; for crop stress detection, it can be used to determine whether the crop in a field photo is under a certain stress. For instance, a CNN classifier can be trained to classify images as "healthy" or "stressed", or further subdivide them into "drought stress", "disease stress", and so on. Image classification has a simple pipeline and high computational efficiency, and is suitable for rapidly assessing overall stress occurrence over large areas. In pea field applications, a whole-field image can be divided into small blocks for classification via sliding-window or block analysis, thereby assessing stress risk in different areas (Zou et al., 2024). However, pure classification methods provide no positional information; they can only judge whether stress is present. Object detection adds target localization to classification, outputting bounding boxes and categories for target objects in the image. Classic object detection algorithms such as Faster R-CNN and the YOLO series have been applied in agriculture to identify disease spots or individual pests on crops. For instance, some research has taken flying pests (such as locusts) as detection targets and, by training detection models, achieved automatic marking of pest-occurrence areas in UAV images.
In pea pest and disease monitoring, object detection can be used to mark the locations of plants with obvious lesions in the field or to identify plant clusters affected by pests, facilitating the targeting of key control areas. Compared with image classification, object detection provides localization and recognition simultaneously and is of greater practical value. However, it is demanding about target size and shape: for spreading patches such as diseases, the boundaries are sometimes indistinct and a detection box may not outline them precisely. Semantic segmentation classifies each pixel in an image and is highly suitable for extracting the precise shape of disease or stress areas in farmland stress detection. Typical semantic segmentation models include the fully convolutional network (FCN), U-Net, and the improved DeepLab. On crop disease images, semantic segmentation models can distinguish lesion pixels from healthy leaf pixels to obtain a fine distribution map of disease patches (Rani et al., 2024). For example, for wheat stripe rust, irregularly shaped lesion areas have been accurately segmented and identified with an improved U-Net model (such as Ir-U-Net with an added infrared channel), improving detection accuracy and robustness. Semantic segmentation is also applicable to leaf spot, powdery mildew, and other pea diseases, generating map layers of disease coverage on the canopy and providing a basis for assessing disease severity. In remote sensing identification of pest stress, segmentation models can also play a significant role: if rice in a paddy field is damaged by caterpillar pests, the central leaves wither and form "dead-heart" seedlings, and UAV image segmentation can separate the withered-heart areas from healthy rice clusters.
Some studies have successfully achieved automatic segmentation of rice leaf roller pest-stress areas by fusing UAV RGB images with digital surface model (DSM) data through a multimodal segmentation network. The results show that after fusing the height information, the model maintained a stable recognition rate throughout the growth period, and the pest incidence it recognized was highly correlated with manual survey results. This indicates that semantic segmentation can provide pixel-level recognition of crop stress, supporting site-specific management in precision agriculture (Wang et al., 2024).
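Segmentation performance in such studies is commonly scored with intersection-over-union (IoU) averaged over classes (mIoU); a minimal sketch of its computation on toy label maps (the masks below are hypothetical):

```python
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> float:
    """Mean intersection-over-union between predicted and reference label maps."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 masks: 0 = healthy canopy, 1 = stressed patch
truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1   # slightly over-segmented
print(round(mean_iou(pred, truth, 2), 2))  # 0.75
```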
3.3 Performance of model training, validation, and transfer learning in different field scenarios
The quality of a deep learning model largely depends on the quantity and quality of the training data. To train a reliable crop stress detection model, high-quality, accurately labeled field image data are needed. During training, the data are usually divided into a training set, a validation set, and a test set so as to examine the model's performance on new data. A stress detection model must be strictly evaluated; commonly used indicators include accuracy, precision, recall, and the F1 score, which are used to judge whether the model remains stable in different situations.
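These metrics can be computed directly from counts of correct and incorrect decisions; the sketch below uses hypothetical counts for a stress classifier:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple:
    """Precision, recall, and F1 from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical evaluation: 80 stressed plots detected, 10 false alarms, 20 missed
p, r, f = precision_recall_f1(tp=80, fp=10, fn=20)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.889 0.8 0.842
```

High precision means few false alarms; high recall means few missed stress areas. The F1 score balances the two, which matters when stressed and healthy plots are imbalanced.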
The field environment is very complex, and model performance varies greatly across scenarios. For instance, a model trained on pea disease data from one region may perform worse when used in another field; this is the problem of insufficient generalization. Transfer learning can alleviate this: models pre-trained on large datasets (such as ImageNet) can be reused, and models trained on one crop can be transferred to another crop or scene, reducing the impact of insufficient data and distribution shift (Abdalla et al., 2024). In agriculture, a common practice is to extract features with a pre-trained CNN and then fine-tune on data from the target crop. This approach usually requires less data and a shorter training time, yet often yields more accurate results. For instance, in a study on weed identification in cotton fields, a detection model pre-trained on the COCO dataset was fine-tuned on cotton field images with very good results. Similarly, in pea stress detection, stress images from other crops can be borrowed for transfer training, which can improve initial accuracy and accelerate training.
In addition to transfer learning, increasing data diversity is also very important. If the training set contains data from different years, locations, and varieties, the features learned by the model will be more general. For instance, Xie et al. (2024), studying water stress in summer maize, combined multi-year, multi-region data to train a BP neural network; the resulting model was stable under various conditions, with R² reaching 0.84. This indicates that the more diverse the data, the better the model adapts to new environments. For peas, if the training data include both drought scenarios in the north and disease scenarios in the south, the model will adapt better to conditions in different fields.
Finally, before it is actually put into use, the model still needs to undergo local validation and adjustment in the target scenario. A small number of new samples can be used for retraining or calibration to reduce environmental differences. When applied in real time in the field, it can also continuously collect new data to update the model, enabling it to learn new types or manifestations of stress. In this way, the model can gradually achieve "adaptive" improvement.
4 Technical Framework for Integrating UAVs and Deep Learning
4.1 Design of data acquisition and annotation workflows
The first step in establishing a stress detection system combining UAVs and deep learning is to design a sound process for data collection and annotation. Only with high-quality training data in which the type and degree of stress are accurately marked can the model learn reliably. For data collection, the UAV flight plan should be arranged around the growth stages of peas and the patterns of stress occurrence; it is generally advisable to capture imagery during critical periods (such as flowering or pod filling) or when symptoms are obvious. For instance, disease monitoring is best served by collecting data multiple times, when lesions first appear and during their expansion, while drought monitoring is carried out after several consecutive days of water shortage, which makes differences easier to capture (Osman, 2015). During the day, it is best to fly when the light is relatively stable, such as mid-morning or mid-afternoon, avoiding the strong light at noon and the low light of early morning and evening to prevent overexposure or long shadows. Flight altitude and image overlap should be determined from the camera resolution and the size of the field, so that the whole field is covered while the resolution remains high enough to resolve individual-plant characteristics. For multispectral or thermal infrared cameras, reflectance reference panels and temperature calibration plates should also be placed in the field, and reference data should be collected before and after each flight for radiometric correction and temperature calibration, so that data from different dates are comparable (Dong et al., 2024; Wang, 2024). After the flight is completed, a large number of images will have been obtained.
These images need to be stitched and georeferenced to generate an orthomosaic with geographic information covering the entire field, laying the foundation for subsequently extracting samples by plot or area.
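The trade-off between flight altitude and image detail mentioned above is usually quantified by the ground sampling distance (GSD), the ground size covered by one pixel; a sketch using hypothetical camera parameters:

```python
def ground_sampling_distance(sensor_width_mm: float, focal_length_mm: float,
                             image_width_px: int, altitude_m: float) -> float:
    """Ground sampling distance (cm per pixel) for a nadir-pointing camera."""
    return sensor_width_mm * altitude_m * 100.0 / (focal_length_mm * image_width_px)

# Hypothetical 1-inch-sensor RGB camera: 13.2 mm sensor width, 8.8 mm focal length, 5472 px wide
for altitude in (30, 60, 120):
    gsd = ground_sampling_distance(13.2, 8.8, 5472, altitude)
    print(f"{altitude} m -> {gsd:.2f} cm/px")  # roughly 0.82, 1.64, 3.29 cm/px
```

Halving the altitude halves the GSD but also halves the ground footprint per image, which is why altitude and overlap must be planned together.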
Data annotation is the next important step. It is usually experts or trained personnel who identify and label the stress conditions in the images. A common practice is to mark the stress areas on the stitched orthomosaic based on field records or on-site surveys, adding labels such as disease type or drought level. For instance, in pea disease experiments, if certain plots were inoculated with specific pathogens, they can be directly marked as "disease stress", with symptom severity noted. For naturally occurring pests, survey data are needed to delineate the affected areas on the images. A uniform standard should be maintained when labeling so that areas of the same type are marked consistently, and the results of different annotators should be cross-checked to reduce human error.
In semantic segmentation tasks, pixel-level annotation is required. Researchers often use professional tools to gradually outline the lesion or withered area after magnifying the image. Although it is time-consuming, it can make the training data more accurate and help the model learn clearer features. To enhance efficiency, an active learning approach can also be adopted: first, a preliminary model is used for prediction, and then the erroneous parts are corrected manually. By repeatedly iterating in this way, not only can annotation be accelerated, but also the performance of the model can be improved. After the annotation is completed, the dataset needs to be divided into a training set, a validation set and a test set. When dividing, attention should be paid to the uniform distribution of data in different plots and on different dates to avoid overfitting or result bias in the model (Chen et al., 2020).
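One practical way to keep the split unbiased is to divide by plot rather than by individual image, so that images of the same plot never straddle the training and test sets; a minimal sketch with hypothetical plot identifiers:

```python
import random

def split_by_group(group_ids, ratios=(0.7, 0.15, 0.15), seed=42):
    """Assign each sample to train/val/test such that all samples sharing a
    group (e.g. a field plot) land in the same subset, avoiding spatial leakage."""
    groups = sorted(set(group_ids))
    random.Random(seed).shuffle(groups)
    n_train = int(len(groups) * ratios[0])
    n_val = int(len(groups) * ratios[1])
    train, val = set(groups[:n_train]), set(groups[n_train:n_train + n_val])
    return ["train" if g in train else "val" if g in val else "test"
            for g in group_ids]

plots = [f"plot_{i // 5:02d}" for i in range(100)]   # 20 plots, 5 images each
subsets = split_by_group(plots)
print({s: subsets.count(s) for s in ("train", "val", "test")})  # {'train': 70, 'val': 15, 'test': 15}
```

The same idea applies to splitting by acquisition date, so that the test set truly represents unseen conditions.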
4.2 Data preprocessing and augmentation methods (illumination, angle, noise handling)
Before field UAV images are input into deep learning models, they usually need a series of preprocessing and data augmentation steps to improve image quality and model robustness. The first step in preprocessing is radiometric and color correction. Because lighting conditions may change during UAV flights, using raw images directly can cause the model to learn features tied to brightness and contrast rather than the essential characteristics of the stress. To this end, methods such as histogram equalization and color-channel standardization can be adopted to make the brightness and hue distributions of different image batches more consistent. For multispectral data, digital numbers should be converted into reflectance to eliminate the influence of solar elevation and exposure parameters; thermal infrared data need to be converted from gray values to true temperatures using calibration plates so that temperatures on different dates are comparable. The second step is geometric correction, which includes removing lens distortion and image registration. Lens distortion is corrected using the camera calibration parameters so that plant shapes are not distorted; then, using high-precision differential GPS and ground control points, multi-temporal and multispectral images are geometrically registered so that pixels at the same position align across dates and bands. This is crucial for subsequent multi-source data fusion and time-series analysis. After these basic corrections, image cropping and noise filtering may also be required. Cropping removes irrelevant areas of the field (such as ridges and open spaces), leaving only the crop areas to minimize interference.
Noise filtering, in turn, applies smoothing or denoising algorithms to address sensor noise and motion blur in UAV images; images taken in the early morning or evening, in particular, may contain more noise due to insufficient light, and methods such as convolutional denoising networks can be considered to improve clarity. After preprocessing, data augmentation techniques are often used to expand the data volume and enhance the model's adaptability to different situations. Augmentation of agricultural images can be designed around the characteristics of field scenes. Basic operations include rotation, translation, and flipping, simulating UAV shots from different angles and positions; since the same stress in the field usually has no strong directionality, rotation and flipping provide additional equivalent samples (Peng et al., 2023). Brightness and contrast adjustment is another important method: randomly changing image brightness lets the model adapt to different light intensities, while contrast changes simulate the shift in image contrast between overcast and sunny conditions. Considering shadows in the field, gamma correction can be used to randomly alter local light-dark relationships, or "occlusion" can be simulated during training, for example by randomly superimposing shadow shapes on the image so that the model still judges correctly when some plants are shaded. For multispectral images, noise perturbation or band recombination can also be applied to increase the model's tolerance of sensor errors.
Some advanced data augmentation methods, such as Cutout (covering an image at a random position) and Mixup (mixing two images in a certain proportion), can also increase data diversity and help prevent overfitting of the model.
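The basic geometric and photometric augmentations described above can be sketched as a generic NumPy routine (the probabilities and jitter range are illustrative, not from any cited pipeline):

```python
import numpy as np

def augment(tile: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random flips, 90-degree rotation, and brightness jitter for an HxWxC tile
    with values in [0, 1]."""
    if rng.random() < 0.5:
        tile = tile[:, ::-1]                       # horizontal flip
    if rng.random() < 0.5:
        tile = tile[::-1, :]                       # vertical flip
    tile = np.rot90(tile, k=int(rng.integers(4)))  # random multiple of 90 degrees
    gain = rng.uniform(0.8, 1.2)                   # simulate changing light intensity
    return np.clip(tile * gain, 0.0, 1.0)

rng = np.random.default_rng(1)
tile = rng.random((64, 64, 3))                     # hypothetical RGB tile
out = augment(tile, rng)
print(out.shape, float(out.min()) >= 0.0, float(out.max()) <= 1.0)  # (64, 64, 3) True True
```

Applying such a routine on the fly during training effectively multiplies the dataset without storing extra images.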
4.3 Integration pathways and pipeline construction of UAV imagery with deep learning models
The core link in the entire technical process is to feed UAV remote sensing imagery into a deep learning model effectively and to convert the model output into interpretable agronomic information. Generally, the pipeline integrating UAVs and deep learning consists of three stages: data reading, model inference, and result visualization. The first is the data reading and preprocessing module. After basic processing and augmentation, field image data must be read in the format and size the model requires. For convolutional neural network models, large field images are usually cropped into small blocks of fixed size (such as 256×256 pixels) to fit GPU memory and increase the sample count (Xie et al., 2024). When cropping, a sliding-window method can be used to cover the whole image with a certain overlap between blocks, preventing a target that falls exactly on an edge from losing information. Meanwhile, image pixel values are normalized or standardized (subtracting the mean and dividing by the standard deviation) to accelerate model convergence and improve numerical stability. For multispectral/thermal infrared data, fusion or separate input can be chosen according to the application. For instance, the visible-light RGB image can be concatenated in the channel dimension with the multispectral reflectance and multi-temporal indices of the corresponding pixels to form a multi-channel tensor input to the CNN, enabling the model to learn color/texture and spectral features simultaneously. This multi-source fusion approach has been verified to improve pest stress detection performance: fusing RGB with a digital surface model (DSM) increased the mIoU of paddy-field pest segmentation by 7.8%.
Building on this, in pea stress detection we also attempted to fuse visible-light and near-infrared band information as input, with the aim of enhancing the recognition of latent stress. The next stage is the model inference module, which uses a pre-trained deep learning model to analyze the input images. Depending on the task, the inference interface of an image classification, detection, or segmentation model can be called within the pipeline: for classification models, each block is fed into the CNN to obtain a stress probability; for detection models, the entire image is fed into the network to obtain a set of candidate boxes around lesion or pest areas; for segmentation models, a per-pixel category prediction map is output.
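For the classification case, per-tile probabilities can be painted back onto the field mosaic to form a coarse stress map. The sketch below uses a hypothetical stand-in classifier (`toy_model`, flagging dark tiles) purely to make the plumbing runnable; a real pipeline would call the trained CNN here.

```python
import numpy as np

def stress_probability_map(tiles, origins, image_shape, model, tile=256):
    """Classify each tile and paint its stress probability back onto the mosaic,
    averaging where overlapping tiles disagree."""
    prob_map = np.zeros(image_shape[:2], dtype=np.float32)
    count = np.zeros(image_shape[:2], dtype=np.float32)
    for t, (y, x) in zip(tiles, origins):
        p = model(t)                              # scalar stress probability per tile
        prob_map[y:y + tile, x:x + tile] += p
        count[y:y + tile, x:x + tile] += 1
    return prob_map / np.maximum(count, 1)

# Hypothetical stand-in classifier: "stressed" if the tile is dark overall
toy_model = lambda t: float(t.mean() < 0.4)

tiles = [np.full((256, 256, 3), 0.2), np.full((256, 256, 3), 0.8)]
origins = [(0, 0), (0, 256)]
pmap = stress_probability_map(tiles, origins, (256, 512, 3), toy_model)
```

The resulting `pmap` is georeferenced implicitly through the tile origins, so it can later be overlaid on the orthomosaic for visualization.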
5 Identification and Precision Monitoring of Stress Types
5.1 Detection of physiological stresses: drought, nutrient deficiency, and heat stress
Physiological stress refers to adverse crop conditions caused by abiotic factors, including drought, waterlogging, high temperature, low temperature, and nutrient deficiency. In pea cultivation, drought stress and nutrient (especially nitrogen) deficiency are the most common, and high-temperature stress can also affect pea production in some regions during summer. Drought stress causes pea plants to lose water and wilt, with leaves curling and turning grayish-green; in severe cases, plants senesce prematurely and turn yellow. Unmanned aerial vehicle (UAV) multispectral and thermal infrared remote sensing provide effective means for drought monitoring. On the one hand, multispectral images can be used to calculate drought-sensitive vegetation indices, such as the Normalized Difference Water Index (NDWI) and the Normalized Difference Drought Index (NDDI), which reflect changes in vegetation water content; when peas suffer drought, these index values drop significantly. On the other hand, thermal infrared imaging can measure canopy temperature and achieve quantitative diagnosis through indicators such as the Crop Water Stress Index (CWSI), since leaf temperature rises when pea plants are short of water. Studies have shown that CWSI obtained by UAV thermal imaging can accurately characterize the degree of water stress in crops such as maize, and is highly correlated with leaf water potential and stomatal conductance (Liu et al., 2023).
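Both indicators are simple band or temperature ratios. The sketch below uses Gao's NDWI formulation (NIR and SWIR bands) and the standard empirical CWSI with wet/dry reference temperatures; the reference values in the example are illustrative, not measured data from the cited studies.

```python
import numpy as np

def ndwi(nir, swir):
    """Gao's NDWI for vegetation water content: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-8)

def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index: 0 = well watered, 1 = fully stressed.
    t_wet / t_dry are the non-stressed and non-transpiring reference temperatures."""
    return np.clip((t_canopy - t_wet) / (t_dry - t_wet + 1e-8), 0.0, 1.0)
```

Applied per pixel to co-registered multispectral and thermal rasters, these functions yield index maps whose low NDWI or high CWSI regions flag likely water stress.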
Nutrient deficiencies most often involve macronutrients such as nitrogen, phosphorus and potassium, among which nitrogen deficiency has a particularly pronounced impact on pea growth. Nitrogen deficiency causes the lower leaves of peas to yellow and stunts the plants, which in turn affects pod development. UAV multispectral remote sensing can rapidly identify differences in nitrogen nutrition levels across a field (Nannim and Asabar, 2018). Nitrogen deficiency lowers chlorophyll content, which appears in imagery as increased green-light reflectance, weakened red-light absorption, and declining NDVI values. The red-edge and near-infrared bands are sensitive to chlorophyll changes, and some studies have built models relating red-edge position parameters to nitrogen concentration for use in nitrogen nutrition diagnosis. Based on UAV multispectral images and deep learning algorithms, scholars have also established whole-field nitrogen nutrition monitoring methods. For instance, some studies have proposed a deep learning method combined with image segmentation that classifies the nutritional status of fields and automatically detects nutrient-deficient areas.
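A minimal version of this diagnosis is an NDVI map thresholded into a nitrogen-deficiency mask. The threshold of 0.5 below is purely illustrative; in practice it would be calibrated against field measurements for the crop and growth stage.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); declines as chlorophyll content falls."""
    return (nir - red) / (nir + red + 1e-8)

def flag_n_deficient(ndvi_map, threshold=0.5):
    """Boolean mask of pixels whose NDVI falls below an assumed calibration threshold."""
    return ndvi_map < threshold
```

The resulting mask can feed directly into the variable-rate fertilization maps discussed later in the text.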
5.2 Detection of biotic stresses: early diagnosis of diseases and pests
Biotic stress mainly refers to crop stress caused by pathogenic microorganism infection (diseases) and insect damage (pests). Common pea diseases include powdery mildew, root rot and rust, while pests include aphids and pod borer. These stresses often spread rapidly and cause serious damage, so early diagnosis is crucial for prevention and control. The combination of UAV remote sensing and deep learning provides a new approach to pest and disease monitoring, breaking through the limitations of manual visual inspection to enable rapid, large-scale surveys with precise positioning. For disease detection, changes in vegetation spectral and texture features are the main basis. Take pea powdery mildew as an example: white powdery mold spots appear on the surface of affected leaves, after which the leaves yellow and senesce prematurely.
On UAV visible-light images, powdery mildew spots appear as bright gray patches that contrast with green, healthy tissue. Deep learning image recognition models can learn these mottled texture features and automatically recognize and segment the lesion areas. Meanwhile, multispectral data can reveal abnormal spectral reflectance in lesion areas, such as increased reflectance in the red band and decreased reflectance in the near-infrared band, which are typical spectral signatures of disease. Researchers have successfully used UAV multispectral imaging to detect areas affected by rust and rice blast in crops such as wheat and rice. For instance, the Ir-UNet model proposed by Zhang et al. (2021) for wheat stripe rust combines UAV visible-light and infrared imagery to segment rust patches, handling the irregular shapes of lesions while achieving accurate positioning over large fields. This method should also be applicable to the detection of rust or leaf spot disease in peas. Once a deep learning model is trained, suspected disease areas can be highlighted on the aerial imagery, and farmers can precisely target pesticide spraying on that basis, minimizing the amount and extent of pesticide application.
Early disease symptoms are sometimes hard to detect with the naked eye, but multispectral images may capture early physiological changes in vegetation, such as a decrease in chlorophyll content. This gives UAV monitoring the opportunity to detect abnormalities at the early stage of disease. Take pea rust as an example: before visible spore masses appear, infected leaves may already show weak spectral changes. If a deep learning model is carefully trained, it can be expected to identify these subtle differences and provide warning earlier than the human eye (Abdulridha et al., 2023). For pest detection, the main approach is to exploit the damage pests inflict on crop morphology and spectra. Sucking pests such as pea aphids often cause leaf curling and yellowing and spread viruses, while leaf-eating pests such as pea pod borer cause leaf holes or pod damage. In UAV images, large-scale pest infestations appear as sparser vegetation or color changes. For instance, if the tender leaves at the top of pea plants curl and yellow due to aphid damage, a deep learning object detection model can be trained to identify and mark these curled shoots. Similarly, when pod borer larvae feed on pods and cause premature senescence and yellowing, the imagery shows a distribution of small, mottled withered patches, and a segmentation model can outline the extent of these withered plants. Compared with diseases, pests often break out in spatial clusters (such as large "pest patches"), and the large-scale top-down perspective of UAVs is well suited to revealing this patchy pattern.
5.3 Stress severity grading and spatial distribution visualization
When peas are found to be under stress in the field, grading the degree of stress and mapping its spatial distribution is very helpful for agricultural decision-making. The outputs of deep learning models can not only determine whether stress is present, but also quantify its intensity and generate a distribution map. A common grading method is based on the proportion of affected plants or the degree of deviation of indicators. For instance, for disease stress, grades can be set as mild (less than 10% of leaves diseased), moderate (10%-30%), and severe (more than 30%). By using the segmentation model's output to calculate the lesion pixel ratio for each plant or plot, disease severity can be quantified. Compared with visual estimation by surveyors, this UAV-based quantification is more objective and detailed, and can cover the entire field without omissions.
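Given a segmentation mask, the grading rule above reduces to a ratio and two thresholds. The sketch below uses the mild/moderate/severe cut-offs quoted in the text (10% and 30%); everything else is an illustrative assumption.

```python
import numpy as np

def grade_severity(lesion_mask, plant_mask):
    """Grade stress from the lesion-pixel ratio within plant pixels:
    mild < 10%, moderate 10-30%, severe > 30% (thresholds from the text)."""
    if not plant_mask.any():
        return 0.0, "mild"
    ratio = float(lesion_mask[plant_mask].mean())
    if ratio < 0.10:
        grade = "mild"
    elif ratio <= 0.30:
        grade = "moderate"
    else:
        grade = "severe"
    return ratio, grade
```

Running this per plot over the segmentation output yields the per-plot severity table that a distribution map is drawn from.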
For continuous stresses such as drought or nutrient deficiency, severity can be classified by the magnitude of decline in the relevant index. For example, drought can be graded as mild (index decline of less than 20%), moderate (20%-40%), or severe (more than 40%) based on the decrease in UAV-derived vegetation indices (Dong et al., 2024). Nutrient deficiency can likewise be graded by the reduction in leaf nitrogen content or greenness index. If the deep learning model outputs a regression value, it can also directly predict continuous indicators (such as leaf nitrogen content or the decrease in soil moisture), which are then binned into grades.
Spatial distribution visualization uses maps to show where stress occurs and how severe it is. UAV images carry geographical coordinates, so results can be displayed intuitively and accurately. Combined with GIS, the identified results can be overlaid onto field coordinates to create a stress distribution map. A common approach is a heat map, with colors indicating the degree: for example, green for normal, yellow for mild, and red for severe. In this way, the health condition of the entire pea field can be seen at a glance.
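Rendering such a map amounts to indexing a small color palette with the per-cell severity grade. The sketch below is a minimal version with the green/yellow/red scheme mentioned in the text; the specific RGB values are illustrative.

```python
import numpy as np

# Severity grades mapped to display colors: 0 = normal, 1 = mild, 2 = severe
PALETTE = np.array([[0, 170, 0],      # green  = normal
                    [255, 210, 0],    # yellow = mild
                    [220, 30, 30]],   # red    = severe
                   dtype=np.uint8)

def severity_heatmap(grade_grid):
    """Turn a 2-D grid of integer severity grades into an RGB image for map overlay."""
    return PALETTE[grade_grid]
```

The resulting RGB array can be georeferenced with the orthomosaic's coordinates and overlaid semi-transparently in a GIS viewer.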
Through multi-temporal UAV photography, animations can also demonstrate how stress changes over time, helping to understand its expansion or mitigation. For instance, by playing drought distribution maps in chronological order, one can visually observe how drought spreads from a local area to a large one, or eases after rainfall. This kind of dynamic visualization is of great reference value for irrigation scheduling and pest and disease control. It is worth noting that the spatial distributions of different stresses often differ: diseases may occur in sporadic spots, pests tend to break out over large areas, and nutrient deficiency is usually tied to soil conditions, presenting as a problem confined to fixed areas.
6 Case Study: Application of UAV-Based Stress Detection in Pea Fields
6.1 Case background: establishment of experimental fields and multi-temporal UAV imagery acquisition
In the past five years, many researchers in China and abroad have conducted field experiments applying UAV remote sensing and deep learning to pea stress detection. In the research of Liu et al. (2025), the team set up normal areas and salt stress treatment areas in an experimental field, and used UAVs equipped with RGB cameras and multispectral sensors to conduct high-throughput monitoring across the entire pea growth period (Figure 1). They extracted structural features such as plant height and canopy coverage from multi-temporal images, and also analyzed texture and spectral information to study the impact of salt stress on growth (Figure 2). In a North American case, researchers flew UAVs equipped with high-resolution RGB cameras, LiDAR and multispectral imaging systems multiple times at different pea growth stages to monitor the crop stereoscopically (Bazrafkan et al., 2024). For instance, during the flowering and pod-filling stages they acquired multi-source data and generated orthomosaics and three-dimensional point clouds, used to meticulously record growth and the development of stress symptoms. For drought stress, some studies have employed thermal infrared cameras to monitor canopy temperature, combined with multispectral vegetation indices (such as NDVI and PRI) to draw farmland water stress distribution maps that clearly identify drought conditions in different regions. For pest and disease monitoring, researchers often conduct repeated aerial surveys during the high-incidence period to record the dynamic expansion of disease spots and pest occurrence. These studies indicate that by designing experimental fields, equipping different sensors and acquiring multi-temporal UAV imagery, a solid data foundation can be provided for field stress detection in peas.
Figure 1 The layout of the experimental field (Adapted from Liu et al., 2025)
Figure 2 Workflow diagram. PH: plant height; CC: canopy coverage; VIs: vegetation indices; AGB: aboveground biomass (Adapted from Liu et al., 2025)
6.2 Performance of deep learning models in pea disease recognition and stress detection
Applying deep learning to UAV images can significantly enhance the efficiency and accuracy of stress detection in pea fields. For disease identification, Girmaw and Muluneh (2024) collected 1,600 photos of healthy and diseased leaves and used transfer learning to build a DenseNet-121 convolutional neural network classifying three major leaf diseases (coarse spot disease, powdery mildew, and leaf spot disease). The model's accuracy on the test set exceeded 98%, a very strong result. Such deep CNN methods can substantially reduce the time and error of manual field inspection.
In stress severity detection, Chinese research teams have combined machine learning with deep learning. For instance, in salt stress research, researchers used features extracted from multi-source UAV images to train ensemble models such as CatBoost and LightGBM to predict the aboveground biomass (AGB) and leaf SPAD values of peas. The coefficients of determination (R²) reached 0.70 and 0.60, respectively, with prediction errors within 15%, indicating that learning models combined with remote sensing data can estimate crop nutritional status fairly accurately. Based on these results, the team also established a Pea Salt Tolerance Score (PSTS), whose predictions agreed closely with measured values.
Compared with traditional machine learning, deep learning has greater advantages in handling complex image features. For instance, in field lesion detection, object detection models such as YOLO can quickly locate disease areas in UAV images and identify fungal spots or Fusarium wilt symptoms on leaves in real time. If a more detailed delineation of lesion distribution is required, semantic segmentation networks such as U-Net can classify each pixel as healthy or affected tissue. In wheat stripe rust research, a multispectral U-Net model trained on high-resolution UAV images achieved a recognition accuracy of 97.13%. Different models have their own trade-offs: YOLO is fast and suitable for large-scale inspection but slightly less accurate, whereas segmentation or classification networks such as U-Net are more accurate and suitable for lesion localization and quantitative analysis, at a higher computational cost. Deep learning also performs well in pest detection: studies have shown that deep neural networks can automatically extract texture and spectral features from UAV images, accurately distinguish pest-infested plants from healthy ones, and identify the extent and severity of infestations. In detecting diseases and various stresses in peas, these models likewise achieve very high accuracy, often exceeding 95%. Through multi-source data fusion, quantitative assessment of stresses such as drought and nutrient deficiency is also possible. The differences among models suggest choosing the appropriate method for the actual need, striking a balance between speed and accuracy.
6.3 Application value of model predictions in field management and decision-making
Applying the detection results obtained by combining UAVs with deep learning to agricultural management can significantly enhance the precision of pea production management. In pest and disease control, early warning based on UAV field patrols enables farmers and technicians to promptly grasp the distribution of pests and diseases. The model can mark concentrated areas of lesions and pests, generating precise pesticide prescription maps for "on-demand spraying". Plant protection UAVs or ground sprayers then spray only at designated points in the affected areas rather than over the entire field. This reduces pesticide use and cost while also lowering environmental pollution and crop damage. Disease early warning combined with localized control helps contain the spread of pests and diseases at an early stage, stabilizing yield and quality. In drought management, intelligent irrigation systems combining UAV remote sensing and deep learning have also delivered results. For instance, a precision irrigation system developed by the Chinese Academy of Agricultural Sciences can detect crop water-shortage signals from UAV images, then analyze field water conditions and pea water requirements through an Internet of Things platform and support vector machine models. Once a water shortage is diagnosed in an area, the system generates a differentiated irrigation plan and remotely controls sprinklers to replenish water quantitatively. Reported results show the system shortened the irrigation cycle for 300-500 mu (about 20-33 ha) of land to 3 days, saved more than 20% of water, and significantly improved water use and labor efficiency (Zhou et al., 2021).
In nutrient management, nitrogen content distributions retrieved from UAV multispectral images can directly guide variable-rate fertilization. Once the model identifies nitrogen-deficient plots, farmers can apply fertilizer in a targeted manner, avoiding both waste and over-fertilization. It is reported that even in fields where wheat, broad beans and peas are intercropped, such intelligent fertilization systems can distinguish the nutrient requirements of different crops and supply fertilizer precisely. The role of deep learning combined with remote sensing in breeding and agronomic decision-making is also becoming increasingly prominent. For instance, in salt tolerance research, researchers used UAVs to establish a pea salt tolerance scoring system (PSTS) and successfully screened salt-tolerant varieties, with results highly consistent with field performance. This provides a practical tool for promoting salt-tolerant varieties on saline-alkali land and accelerates stress-resistance breeding. Overall, combining UAVs and deep learning for stress detection not only enhances monitoring accuracy but, more importantly, translates the results into specific measures such as precise spraying, differentiated irrigation, intelligent fertilization, and variety screening. These practices embody the principle of prescribing the right remedy for the diagnosed problem, significantly enhancing the scientific rigor and efficiency of pea field management, and are of great significance for stable production and sustainable agriculture.
7 Conclusion
This study reviews the application prospects of UAVs carrying multispectral, RGB and thermal infrared sensors, combined with deep learning models, for field stress detection in peas. In the future, multimodal data fusion may be an important direction for improving monitoring accuracy. Beyond carrying multiple sensors, UAVs can also combine hyperspectral remote sensing data with ground sensor data to acquire more comprehensive crop information. Hyperspectral imaging offers continuous bands and high spectral resolution, capturing subtle differences in plant physiological and biochemical states. For instance, hyperspectral data can distinguish leaf yellowing caused by different factors, such as drought versus nutrient deficiency. Combining hyperspectral sensors with UAVs achieves high spatial resolution while exploiting rich spectral features, which is very helpful for research and precise monitoring. However, hyperspectral data processing is complex and costly, and is currently used mainly in scientific experiments. With technological advances, if hyperspectral cameras become smaller and cheaper they may be applied more widely; hyperspectral, RGB and multispectral data could then be input into deep learning models together, enabling the models to learn features of more dimensions and thereby improving the accuracy of recognizing complex stresses.
Although deep learning models perform well in some experiments, they often perform poorly when transferred to other regions or environments. The agricultural environment is complex, and the manifestations of stress vary greatly, so a model may be unstable in new scenarios. For instance, climate and soil conditions differ significantly among regions: in humid areas, peas are more prone to disease, while in arid regions water shortage is the main problem. A model trained only on data from arid areas may misjudge diseases as other conditions in humid areas. Soil color also matters: red soil, black soil and sandy soil produce different image tones, and the model needs exposure to a sufficient variety of data to adapt. Differences in varieties and growth stages likewise hinder generalization. For instance, disease-resistant varieties may show only a few lesions when infected, while susceptible varieties show extensive yellowing and withering; a model that has only seen susceptible varieties will struggle to identify mild symptoms. Data from different varieties and growth stages should therefore be incorporated during training. Lighting and capture parameters are also an issue. Solar elevation and light intensity vary greatly among regions, producing different brightness and shadow conditions; if the model is highly sensitive to these differences, its generalization will be poor. There are two remedies: one is to incorporate data augmentation under various lighting conditions during training; the other is to standardize images at deployment so they are closer to the training distribution.
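The two remedies can be sketched with simple intensity transforms. This is an illustrative numpy sketch: the jitter ranges and reference statistics are assumptions, and real pipelines would typically use a library such as Albumentations or torchvision instead.

```python
import numpy as np

def jitter_illumination(image, rng, brightness=0.2, gamma=0.3):
    """Training-time augmentation: random brightness and gamma changes
    mimic different solar elevations and light intensities."""
    b = 1.0 + rng.uniform(-brightness, brightness)
    g = 1.0 + rng.uniform(-gamma, gamma)
    return np.clip((image * b) ** g, 0.0, 1.0)

def match_reference(image, ref_mean, ref_std):
    """Deployment-time standardization: shift an image's intensity distribution
    toward the mean/std observed on the training data."""
    return (image - image.mean()) / (image.std() + 1e-8) * ref_std + ref_mean
```

Augmentation widens the distribution the model sees during training, while standardization narrows the gap at inference time; the two are complementary.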
The ultimate goal of integrating unmanned aerial vehicle (UAV) remote sensing and deep learning detection into agricultural management is to establish an intelligent agricultural decision-making system and achieve digital and precise management. Based on this research, such a system can be envisioned: The farm collects data in real time through drones and sensors, uploads it to the cloud, and the deep learning model automatically analyzes it to generate crop health and stress early warning reports, guiding farmers' decision-making. This will change the way traditional agriculture is managed. Although there are still challenges at present, such as model generalization and application costs, the trend is already quite obvious. In the next 5 to 10 years, with the development of 5G/6G, the Internet of Things, big data and artificial intelligence, such smart agricultural systems may become widespread. At that time, drone operators, data analysts and agronomists will work together to serve agriculture, and farmers will only need to use a mobile phone APP to view the "health report" of their crops in real time and receive scientific advice. Agriculture will thus become more efficient, more environmentally friendly and more sustainable.
Acknowledgments
I thank Mr. Z. Wu from the Institute of Life Science, Jiyang College of Zhejiang A&F University, for reading the manuscript and providing revision suggestions.
Conflict of Interest Disclosure
The author affirms that this research was conducted without any commercial or financial relationships that could be construed as a potential conflict of interest.
References
Abdalla A., Wheeler T.A., Dever J., Lin Z., Arce J., and Guo W., 2024, Assessing fusarium oxysporum disease severity in cotton using unmanned aerial system images and a hybrid domain adaptation deep learning time series model, Biosystems Engineering, 237: 220-231.
https://doi.org/10.1016/j.biosystemseng.2023.12.014
Abdulridha J., Min A., Rouse M., Kianian S., Isler V., and Yang C., 2023, Evaluation of stem rust disease in wheat fields by drone hyperspectral imaging, Sensors, 23(8): 4154.
https://doi.org/10.3390/s23084154
Bazrafkan A., Navasca H., Worral H., Oduor P., Delavarpour N., Morales M., Bandillo N., and Flores P., 2024, Predicting lodging severity in dry peas using UAS-mounted RGB, LiDAR, and multispectral sensors, Remote Sensing Applications: Society and Environment, 34: 101157.
https://doi.org/10.1016/j.rsase.2024.101157
Chaloner T.M., Gurr S.J., and Bebber D.P., 2021, Plant pathogen infection risk tracks global crop yields under climate change, Nature Climate Change, 11(8): 710-715.
https://doi.org/10.1038/s41558-021-01104-8
Chen P., Xiao Q.X., Zhang J., Xie C.J., and Wang B., 2020, Occurrence prediction of cotton pests and diseases by bidirectional long short-term memory networks with climate and atmosphere circulation, Computers and Electronics in Agriculture, 176: 105612.
https://doi.org/10.1016/j.compag.2020.105612
Dong H., Dong J., Sun S., Bai T., Zhao D., Yin Y., Shen X., Wang Y., Zhang Z., and Wang Y., 2024, Crop water stress detection based on UAV remote sensing systems, Agricultural Water Management, 303: 109059.
https://doi.org/10.1016/j.agwat.2024.109059
Girmaw D.W., and Muluneh T.W., 2024, Field pea leaf disease classification using a deep learning approach, PLOS ONE, 19(4): e0307747.
https://doi.org/10.1371/journal.pone.0307747
Kamarudin M.H., and Ismail Z., 2022, Lightweight deep CNN models for identifying drought stressed plant, IOP Conference Series: Earth and Environmental Science, 1091: 012043.
https://doi.org/10.1088/1755-1315/1091/1/012043
Liu Q., Zhang Z., Liu C., Jia J., Huang J., Guo Y., and Zhang Q., 2023, Improved method of crop water stress index based on UAV remote sensing, Transactions of the Chinese Society of Agricultural Engineering, 2: 68-77.
https://doi.org/10.11975/j.issn.1002-6819.202210136
Liu Z.H., Jiang Q.Y., Ji Y.S., Liu L., Liu H.Q., Ya X.X., Liu Z.X., Wang Z.R., Jin X.L., and Yang T., 2025, Assessment of salt tolerance in peas using machine learning and multi-sensor data, Plant Stress, 17: 100902.
https://doi.org/10.1016/j.stress.2025.100902
Mahajan R., Dar A., Mukthar S., Zargar S., and Sharma S., 2018, Pisum improvement against biotic stress: current status and future prospects, Pulse Improvement, 6: 109-136.
https://doi.org/10.1007/978-3-030-01743-9_6
Mazhar M.W., Ishtiaq M., Maqbool M., Ullah F., Sayed S., and Mahmoud E.A., 2023, Seed priming with iron oxide nanoparticles improves yield and antioxidant status of garden pea (Pisum sativum L.) grown under drought stress, South African Journal of Botany, 162: 577-587.
https://doi.org/10.1016/j.sajb.2023.09.047
Meerdink S.K., Roberts D.A., King J.Y., Roth K., Gader P.D., and Caylor K.K., 2025, Using hyperspectral and thermal imagery to monitor stress of Southern California plant species during the 2013-2015 drought, ISPRS Journal of Photogrammetry and Remote Sensing, 220: 580-592.
https://doi.org/10.1016/j.isprsjprs.2025.01.015
Nannim S., and Asabar K., 2018, Response of yield and yield components of field pea (Pisum sativum L.) to application of nitrogen and phosphorus fertilizers, International Journal of Scientific and Research Publications, 8(8): 14-19.
https://doi.org/10.29322/IJSRP.8.8.2018.P8004
Osco L., Junior J., Ramos A., Furuya D., Santana D., Teodoro L., Gonçalves W., Baio F., Pistori H., da Silva Junior C., and Teodoro P., 2020, Leaf nitrogen concentration and plant height prediction for maize using UAV-based multispectral imagery and machine learning techniques, Remote Sensing, 12(19): 3237.
https://doi.org/10.3390/rs12193237
Osman H.S., 2015, Enhancing antioxidant-yield relationship of pea plant under drought at different growth stages by exogenously applied glycine betaine and proline, Annals of Agricultural Sciences, 60(2): 389-402.
https://doi.org/10.1016/j.aoas.2015.10.004
Peng Y., Jin T., Wang L., Wu Y., Yang H., and Du X., 2023, An improved YOLOv5 model based on data augmentation and coordinate attention for crop pest detection in the field, Computers and Electronics in Agriculture, 208: 107785.
https://doi.org/10.1016/j.compag.2023.107785
Rani N., Krishna A.S., Sunag M., Sangamesha M.A., and Pushpa B.R., 2024, Infield disease detection in citrus plants: integrating semantic segmentation and dynamic deep learning object detection model for enhanced agricultural yield, Neural Computing and Applications, 36: 22485-22510.
https://doi.org/10.1007/s00521-024-10451-4
Sandino J., Castano A., Jiménez J.J., Castañeda A., and Forero M.G., 2022, Review of drone use in precision agriculture for crop monitoring: present and future trends, Computers and Electronics in Agriculture, 198: 107034.
https://doi.org/10.1016/j.compag.2022.107034
Subeesh A., and Chauhan N., 2025, Deep learning based abiotic crop stress assessment for precision agriculture: a comprehensive review, Journal of Environmental Management, 381: 125158.
https://doi.org/10.1016/j.jenvman.2025.125158
Wang H.M., 2024, The application and progress of deep learning in bioinformatics, Computational Molecular Biology, 14(2): 76-83.
https://doi.org/10.5376/cmb.2024.14.0009
Wang X.M., Sun G.H., Xu H.D., Liu C.Y., and Wang Y.P., 2024, The impact of marker-assisted selection on soybean yield and disease resistance, Bioscience Methods, 15(6): 255-263.
https://doi.org/10.5376/bm.2024.15.0026
Xie P.L., Zhang Z.T., Ba Y.L., Dong N., Zuo X.Y., Yang N., Chen J.Y., Cheng Z.K., Zhang B., and Yang X.F., 2024, Diagnosis of summer maize water stress based on UAV image texture and phenotypic parameters, Transactions of the Chinese Society of Agricultural Engineering, 40(10): 136-146.
https://doi.org/10.11975/j.issn.1002-6819.202311031
Yue X.J., Song Q.K., Li Z.Q., Zheng J.Y., Xiao J.Y., and Zeng F.G., 2023, Research status and prospect of crop information monitoring technology in field, Journal of South China Agricultural University, 44(1): 43-56.
https://doi.org/10.7671/j.issn.1001-411X.202209042
Zhang T., Xu Z., Su J., Yang Z., Liu C., Chen W.H., and Li J., 2021, Ir-UNet: irregular segmentation U-shape network for wheat yellow rust detection by UAV multispectral imagery, Remote Sensing, 13(19): 3892.
https://doi.org/10.3390/rs13193892
Zhou J., Fu Z., Zhang H., and Wu M., 2021, SVM-based precision irrigation decision-making model driven by UAV remote sensing and IoT, Sustainability, 13(22): 12629.
https://doi.org/10.3390/su132212629
Zou K., Shan Y., Zhao X., Ran D.C., and Che X., 2024, A deep learning image augmentation method for field agriculture, IEEE Access, 12: 37432-37442.
https://doi.org/10.1109/ACCESS.2024.3373548