PhD student, College of Science and Engineering, James Cook University
Giving weed control robots sharper eyes
Environmental weeds are plants that invade native ecosystems and adversely affect the survival of indigenous flora and fauna. They include foreign plants introduced accidentally or deliberately, as well as native plants that have become invasive through inappropriate management or by spreading beyond their natural range.
In pastoral lands, weeds invade crops, smother pastures and occasionally poison livestock. In a 2012 survey conducted by Landcare Australia, weed and pest control was ranked as the most significant land management problem by nearly half of Australia’s primary producers.
Weed species recognition remains a major obstacle to the development and industry acceptance of robotic weed control technology. All weed control robots need to find weeds in order to kill them. The focus of my research is to enhance the effectiveness of weed spraying robots by developing new image recognition algorithms and technologies to improve their ability to detect weeds under realistic rangeland conditions.
Detecting weeds using machine vision is simple in the highly controlled environment of intensive cropping, where the land is flat, the vegetation is homogeneous, and the light conditions can be controlled with external lighting, shading, or the choice of operating time. In rangeland and rough pastures, however, the problem is far more difficult. Many different species of weeds and native plants may be present in the same scene, all at varying distances from the camera, all experiencing different levels of lighting and shading, and with some weeds partially occluded. This presents a number of issues for imaging and identification, including the depth-of-field and dynamic-range limitations of camera systems.
Past experience in these difficult environments has shown that conventional image analysis techniques based on leaf colour, shape or texture are not sufficient, and that new systems are required. The goal of my research is to develop fully tested recognition systems, using a range of imaging and spectrometric properties, that can be applied to any robotic platform.
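To make the limitation concrete, below is a minimal sketch of the kind of conventional colour-index baseline referred to here: an Excess Green (ExG) index followed by a global Otsu threshold. This is a generic textbook technique, not the detection method developed in this research, and the file name is a placeholder.

```python
# Minimal sketch of a conventional colour-index baseline (Excess Green + Otsu),
# the kind of approach that struggles under rangeland conditions.
# Not the detection pipeline developed in this research.
import cv2
import numpy as np

def segment_vegetation(bgr_image):
    """Return a binary vegetation mask using the Excess Green index."""
    b, g, r = cv2.split(bgr_image.astype(np.float32) / 255.0)
    exg = 2.0 * g - r - b  # Excess Green: 2G - R - B
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu picks one global threshold -- exactly the step that breaks down
    # under the mixed lighting, depth and occlusion described above.
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# mask = segment_vegetation(cv2.imread("paddock.jpg"))  # placeholder file name
```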
The main contributions of this research will be: the publication of methods to reliably detect significant Australian weeds, adaptable to any agricultural vehicle or terrain; and the creation of the first public image dataset of some important Australian weed species for testing new detection methods in the future.
Abstract: We use deep convolutional neural networks to identify the plant species captured in a photograph and evaluate different factors affecting the performance of these networks. Three powerful and popular deep learning architectures, namely GoogLeNet, AlexNet, and VGGNet, are used for this purpose. Transfer learning is used to fine-tune the pre-trained models, using the plant task datasets of LifeCLEF 2015. To decrease the chance of overfitting, data augmentation techniques are applied based on image transforms such as rotation, translation, reflection, and scaling. Furthermore, the networks’ parameters are adjusted and different classifiers are fused to improve overall performance. Our best combined system achieved an overall accuracy of 80% on the validation set and an overall inverse rank score of 0.752 on the official test set. A comparison of our results against those of the LifeCLEF 2015 plant identification campaign shows that we improved the overall validation accuracy of the top system by 15 percentage points and its overall inverse rank score on the test set by 0.1, while outperforming the top three competition participants in all categories. The system recently obtained a very close second place in PlantCLEF 2016.
Pub.: 11 Jan '17, Pinned: 31 Jul '17
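The fine-tuning recipe this abstract describes is standard enough to sketch. The following is a hedged illustration in PyTorch, assuming VGG16 as the backbone; the dataset wiring, learning rate, and augmentation magnitudes are placeholder choices, not values from the paper.

```python
# Sketch of transfer learning with the augmentations named in the abstract:
# rotation, translation, reflection (horizontal flip), and scaling.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_SPECIES = 1000  # the LifeCLEF 2015 plant task covers 1000 species

train_tfms = transforms.Compose([
    transforms.RandomAffine(degrees=30, translate=(0.1, 0.1), scale=(0.8, 1.2)),
    transforms.RandomHorizontalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Load ImageNet-pre-trained VGG16 and replace only the final classifier layer.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, NUM_SPECIES)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step (data loader construction omitted for brevity):
# for images, labels in loader:
#     loss = criterion(model(images), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```

The paper's combined system additionally fuses several such networks; a simple fusion would average their per-class scores before taking the argmax.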
Abstract: Plant classification has broad application prospects in agriculture and medicine, and is especially significant to biodiversity research. As plants are vitally important for environmental protection, it is all the more important to identify and classify them accurately. Plant leaf classification is a technique in which a leaf is classified based on its morphological features. The goal of this paper is to provide an overview of different aspects of texture-based plant leaf classification and related topics. Finally, we conclude by identifying the most efficient method, i.e. the one that gives better performance than the other methods compared.
Pub.: 18 Jun '13, Pinned: 31 Jul '17
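As an illustration of the texture-based methods such a survey covers, here is a short sketch using grey-level co-occurrence matrix (GLCM) features with an off-the-shelf SVM. The feature set and classifier are generic choices, not a method recommended by the paper.

```python
# GLCM texture features for leaf classification: a common texture-based
# baseline. Distances, angles and the SVM kernel are illustrative choices.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_leaf):
    """Extract contrast/homogeneity/energy/correlation from an 8-bit image."""
    glcm = graycomatrix(gray_leaf, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X = np.array([glcm_features(img) for img in leaf_images])
# clf = SVC(kernel="rbf").fit(X, species_labels)
```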
Abstract: Plant classification based on leaf images is an important and difficult task. For the leaf classification problem, this paper presents a new weight measure and proposes a dimensionality reduction algorithm named semi-supervised orthogonal discriminant projection (SSODP). SSODP makes full use of both labeled and unlabeled data, constructing the weights by incorporating reliability information, the local neighborhood structure, and the class information of the data. Experimental results on two public plant leaf databases demonstrate that SSODP is more effective in terms of plant leaf classification rate.
Pub.: 01 Nov '16, Pinned: 31 Jul '17
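The paper's exact SSODP weight measure is not reproduced in the abstract, but a generic semi-supervised weight construction in the same spirit (local neighbourhood structure from a Gaussian kernel, sharpened by class labels where they exist) can be sketched as follows; treat it as an illustrative stand-in only.

```python
# Generic semi-supervised graph weights: Gaussian-kernel similarity on a
# k-NN graph, boosted for same-class pairs and severed for different-class
# pairs. An illustrative stand-in, not the paper's SSODP measure.
import numpy as np
from sklearn.metrics import pairwise_distances

def semi_supervised_weights(X, y, sigma=1.0, k=5):
    """y holds class ids for labelled points and -1 for unlabelled ones."""
    D = pairwise_distances(X)
    W = np.exp(-(D ** 2) / (2 * sigma ** 2))  # local neighbourhood structure
    # keep only each point's k nearest neighbours (symmetrised)
    keep = np.zeros_like(W, dtype=bool)
    nn = np.argsort(D, axis=1)[:, 1:k + 1]
    keep[np.repeat(np.arange(len(X)), k), nn.ravel()] = True
    W *= (keep | keep.T)
    # class information: strengthen same-class pairs, cut different-class pairs
    labelled = y != -1
    both = labelled[:, None] & labelled[None, :]
    same = (y[:, None] == y[None, :]) & both
    diff = (y[:, None] != y[None, :]) & both
    W[same] = np.maximum(W[same], 1.0)
    W[diff] = 0.0
    return W
```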
Abstract: Spatial features of hyperspectral imagery (HSI) have gained increasing attention in recent years. Since a deep convolutional neural network (CNN) can extract a hierarchy of increasingly complex spatial features, this paper proposes an HSI reconstruction model based on a deep CNN to enhance spatial features. The framework introduces, for the first time, a new spatial-features-based strategy for band selection to define training labels with rich information. The hyperspectral data are then used to train a deep CNN, building a model with optimized parameters suitable for HSI reconstruction. Finally, the reconstructed image is classified by an efficient extreme learning machine (ELM) with a very simple structure. Experimental results indicate that the framework built on the CNN and ELM provides competitive performance with a small number of training samples. Specifically, using the reconstructed image improves the average accuracy of the ELM by as much as 30.04%, while running tens to hundreds of times faster than state-of-the-art classifiers.
Pub.: 18 Oct '16, Pinned: 31 Jul '17
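The ELM classifier stage is simple enough to show in full: a random, untrained hidden layer followed by a closed-form least-squares solve for the output weights. The hidden size and activation below are illustrative choices, not the paper's settings.

```python
# A minimal extreme learning machine (ELM): random input weights, closed-form
# output weights. Illustrative sizes, not the paper's configuration.
import numpy as np

class ELM:
    def __init__(self, n_hidden=512, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        # random, untrained input weights -- the defining trait of an ELM
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # hidden-layer activations
        T = np.eye(n_classes)[y]           # one-hot targets
        self.beta = np.linalg.pinv(H) @ T  # closed-form least-squares solve
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```

The absence of iterative training on the output layer is what makes the ELM tens to hundreds of times faster than conventionally trained classifiers, as the abstract reports.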
Abstract: Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify orientation estimation and path prediction and to improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce a spherical camera for scene capturing, which enables 360° fisheye panoramas as training samples and the generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing approaches in realistic applications.
Pub.: 13 Jun '17, Pinned: 31 Jul '17
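The core "navigation via classification" idea reduces to mapping one panorama to a discrete heading class, which a few lines of PyTorch can illustrate. The tiny network and five-way class split below are stand-ins, not the architecture or label set from the paper.

```python
# "Navigation via classification": an end-to-end CNN maps a single spherical
# frame to one of a few discrete heading classes. Stand-in architecture only.
import torch
import torch.nn as nn

N_HEADINGS = 5  # e.g. hard-left / left / straight / right / hard-right

class HeadingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, N_HEADINGS)

    def forward(self, x):  # x: (batch, 3, H, W) panorama
        return self.head(self.features(x).flatten(1))

# logits = HeadingCNN()(torch.randn(1, 3, 256, 512))
# heading = logits.argmax(1)  # predicted direction class
```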