Premise: Quantitative plant traits play a crucial role in biological research. However, traditional methods for measuring plant morphology are time-consuming and have limited scalability. We present LeafMachine2, a suite of modular machine learning and computer vision tools that can automatically extract a base set of leaf traits from digital plant datasets.
Methods: LeafMachine2 was trained on 494,766 manually prepared and expert-reviewed annotations from 5597 herbarium images obtained from 288 institutions and representing 2663 species. It employs object detection and segmentation algorithms to isolate individual leaves and petioles, and its landmarking network identifies and measures nine pseudo-landmarks that occur on most broadleaf taxa. Archival processing algorithms prepare labels for optical character recognition and interpretation, and reproductive organs are scored.
Results: LeafMachine2 can extract trait data from at least 245 angiosperm families and calculate pixel-to-metric conversion factors for 26 commonly used ruler types.
Discussion: LeafMachine2 is a highly efficient tool for generating large quantities of plant trait data, even from occluded or overlapping leaves, field images, and non-archival datasets. Our project, along with similar initiatives, has made significant progress in removing the bottleneck in plant trait data acquisition from herbarium specimens and has shifted the focus toward the crucial task of data revision and quality control.
Weaver, W. N., and S. A. Smith. 2023. From leaves to labels: Building modular machine learning networks for rapid herbarium specimen analysis with LeafMachine2. Applications in Plant Sciences. e11548. doi:10.1002/aps3.11548
Weaver, W. N., J. Ng, and R. G. Laport. 2020. LeafMachine: Using machine learning to automate leaf trait extraction from digitized herbarium specimens. Applications in Plant Sciences 8(6): e11367. doi:10.1002/aps3.11367
Weaver, W. N., and S. A. Smith. 2023. FieldPrism: A system for creating snapshot vouchers from field images using photogrammetric markers and QR codes. Applications in Plant Sciences 11(5): e11545. doi:10.1002/aps3.11545
Weaver, W. N., B. R. Ruhfel, K. J. Lough, and S. A. Smith. 2023. Herbarium specimen label transcription reimagined with large language models: capabilities, productivity, and risks. American Journal of Botany. doi:10.1002/ajb2.16256
Weaver, W. N., K. Lough, S. A. Smith, and B. Ruhfel. 2023. The Future of Natural History Transcription: Navigating AI advancements with VoucherVision and the Specimen Label Transcription Project (SLTP). Biodiversity Information Science and Standards 7: e113067. doi:10.3897/biss.7.113067
Figure 1: LeafMachine2 workflow. A batch of images is processed by the plant component detection (PCD, 2) and archival component detection (ACD, 6) networks. (2) Bounding boxes identifying predicted plant components. Each bounding box identifies a unique component, directing it to the appropriate processing pipeline. (3) The PCD produces cropped images of each plant component. (4) Individual cropped leaves undergo instance segmentation by the Detectron2 network, producing leaf outline masks for ideal leaves (green) and optionally partial leaves (blue). The first set of images shows individual leaves, while the second set shows the compilation of the individual leaves back onto the full specimen image. (5) Cropped ideal leaves are processed by the PLD and individual landmarks are measured. Please see Figure 2 for a description of each landmark annotation. (6) Bounding boxes identifying predicted archival components. (7) Cropped archival components from the ACD are processed and cleaned into binary images for downstream applications, like optical character recognition (OCR) or interpretation by Large Language Models (LLMs). (8) The cropped ruler image is processed by our scanline or template matching algorithms to identify unit markers. Located tick marks are shown as colored dots. Green and cyan lines indicate the converted one- and five-centimeter distances for quality control purposes. For more information about pixel-to-metric conversion, please see Appendices S2 and S3. (9) The final overlay image shows all machine-derived masks, measurements, and identified components. All the visuals in this figure are sourced directly from the output files produced by LeafMachine2.
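The scanline step in panel (8) can be illustrated with a minimal sketch: given a binarized row of pixels sampled across the ruler, locate the center of each dark run (a tick mark) and take the median spacing between adjacent ticks as the pixel-to-unit conversion factor. The function names below are hypothetical and simplified for illustration; they are not LeafMachine2's actual implementation.

```python
# Hypothetical sketch of scanline tick detection for pixel-to-metric conversion.
from statistics import median

def tick_centers(scanline):
    """Return the center pixel of each dark run (tick mark) in a 0/1 scanline."""
    centers, start = [], None
    for i, v in enumerate(list(scanline) + [0]):  # sentinel 0 closes a trailing run
        if v and start is None:
            start = i
        elif not v and start is not None:
            centers.append((start + i - 1) / 2)
            start = None
    return centers

def pixels_per_unit(scanline):
    """Median spacing between adjacent tick centers = conversion factor (px/unit)."""
    c = tick_centers(scanline)
    if len(c) < 2:
        return None  # too few ticks to estimate a spacing
    return median(b - a for a, b in zip(c, c[1:]))
```

Using the median spacing rather than the mean makes the estimate robust to an occasional missed or spurious tick, which is why spacing-based conversion is a natural fit for noisy herbarium rulers.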
Figure 2: Qualitative performance of LeafMachine2 by family and task, across 341 plant families as identified by the home herbaria. We visually inspected LeafMachine2’s quality control summary images for the 831 species/images in the D-3FAM test dataset produced with default settings and a PCD confidence of 50%. We followed a power ranking scheme to assign qualitative ratings to families with more than one image, conservatively rounding down in the case of split ratings between the images. For leaf segmentation, a “good” rating indicates that most leaf masks are high-quality, a “marginal” rating indicates that usable masks are present but require manual filtering, and a “poor” rating indicates that no usable masks are present. For landmarks, a “good” rating indicates that at least one usable and accurate landmark skeleton was present, a “marginal” rating indicates that only partial landmark skeletons were present, and a “poor” rating means that no landmarks could be identified. For component identification, a “good” rating means that LeafMachine2 scored the presence of all non-laminar organs, but not necessarily all instances of each organ. A “marginal” rating indicates that some non-laminar organs were not identified, while “poor” means that LeafMachine2 misidentified or failed to identify most non-laminar organs. Bolded families were included in the LeafMachine2 training dataset. (A) An image of Lauraceae Umbellularia californica as an example of “good” ratings in all categories. (B) An image of Myricaceae Morella cerifera as an example of “marginal” ratings in all categories. (C) An image of Sarcobataceae Sarcobatus vermiculatus as an example of “poor” ratings in all categories.
Figure 3: Leaf detection with archival and non-archival datasets at varying PCD confidence. The left column is the original image. Full-image masks of ideal leaves (or leaflets) are ordered by decreasing PCD confidence from left to right. (A) Herbarium voucher of Fagaceae Quercus coccinea. (B) Herbarium voucher of Apodanthaceae Pilostyles blanchetii. (C) Herbarium voucher of Stilbaceae Brookea tomentosa. (D) FieldPrism-processed field image of Fagaceae Quercus havardii, courtesy of the Morton Arboretum. (E) Leafscan image of Sapindaceae Koelreuteria paniculata. (F) iNaturalist-style photograph of Nyssaceae Nyssa sylvatica, photo credit William Weaver.
Figure 4: Segmentation and pseudo-landmark examples. All leaves are from the D-3FAM dataset and were not part of the segmentation or landmarking training datasets. Ideal leaves, as predicted by the PCD, are shown as green masks, while partial leaves are blue masks. (Leaves A-Q) A sample of leaves demonstrating segmentation performance when leaves have complex outlines, are obstructed by mounting tape, overlap other leaves, or present a combination of obstructions, notably leaves L, P, and Q. (Leaves R-V) A sample of leaves showing pseudo-landmark performance. In the landmark overlay images: the red line is lamina width; the cyan line traces the petiole; the solid black line traces the midvein; the dotted white line is the line of best fit for the points that comprise the midvein; the solid white line is the base-to-tip length; blue bullseye points are lobe tips; pink angles are less than 180 degrees; orange angles are reflex angles greater than 180 degrees; the green dot is the lamina tip; and the solitary red dot is the lamina base. Green bounding boxes are the minimal rotated bounding boxes. Petioles are either pink or orange masks, and holes are purple. Leaf V shows bounding boxes around fruit and buds.
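Once pseudo-landmarks such as the lamina base, tip, and width endpoints are located in pixel coordinates, converting them to metric traits is a matter of Euclidean distances scaled by the ruler's conversion factor. The sketch below is a simplified, hypothetical illustration of that final step; the function name and dictionary keys are ours, not LeafMachine2's.

```python
# Hypothetical sketch: converting pseudo-landmark pixel coordinates into
# metric leaf traits using a pixel-to-centimeter conversion factor (CF).
from math import dist

def leaf_traits(base, tip, width_left, width_right, cf_px_per_cm):
    """All landmark arguments are (x, y) pixel coordinates."""
    length_px = dist(base, tip)               # base-to-tip length (solid white line)
    width_px = dist(width_left, width_right)  # lamina width (red line)
    return {
        "length_cm": length_px / cf_px_per_cm,
        "width_cm": width_px / cf_px_per_cm,
        "aspect_ratio": length_px / width_px,  # unitless, CF-independent
    }
```

Note that shape ratios such as length-to-width need no ruler at all, which is why some traits remain recoverable even when the conversion factor cannot be determined.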
Figure 5: Ruler conversion performance. (A) The 37 ruler types that our ruler classifier was trained to recognize, arranged from best performing to worst, left to right. Rulers 30-37 are block-based rulers that can be identified but not yet converted; they are, however, well suited to our template-matching procedures and will be supported in future iterations. The colored boxes below each ruler correspond to the conversion factor (CF) determination success rate within the R-CLASS dataset. The numerator is the number of conversions visually assessed to be correct based on the quality control output (see Appendix S2, images 1-38), and the denominator is the total number of rulers of that class present in R-CLASS. Rulers with a zero can be identified by the ruler classifier but were not present in R-CLASS. Colored shape identifiers are placed above each ruler image for the ruler classes present in both the R-CLASS and D-3FAM datasets. (B) A t-test between manually obtained and autonomously generated CFs for 708 rulers in the test dataset D-3FAM. The y-value of each point is the percent difference from the manually converted CF (left y-axis). Points are sorted by the pooled standard deviation of the autonomous CFs, lower values to the left and higher values to the right (right y-axis): inconsistently converted rulers have higher index values, while consistently converted rulers have lower index values. Accurate autonomous conversions fall between the average RSD dotted lines. The two recommended ruler types (rulers 2 and 7) are denoted by green star-shaped markers.
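The per-ruler comparison underlying panel (B) reduces to a signed percent difference between each autonomous CF and its manual reference. A minimal sketch, with hypothetical helper names and a placeholder tolerance (not the paper's thresholds):

```python
# Hypothetical sketch of the manual-vs-autonomous CF comparison in Figure 5B.

def percent_diff(auto_cf, manual_cf):
    """Signed percent difference of the autonomous CF from the manual CF."""
    return 100.0 * (auto_cf - manual_cf) / manual_cf

def flag_inaccurate(cf_pairs, tolerance_pct=5.0):
    """Return indices of (auto, manual) pairs deviating beyond the tolerance.

    tolerance_pct is a placeholder; the publication derives its accuracy
    band from the average relative standard deviation (RSD) instead.
    """
    return [i for i, (a, m) in enumerate(cf_pairs)
            if abs(percent_diff(a, m)) > tolerance_pct]
```

Flagged indices point a curator directly at the rulers whose quality control overlays deserve a manual check, which is the revision-and-review workflow the Discussion emphasizes.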
This dataset was used to train all LeafMachine2 algorithms. For more details, please see the 2023 publication.
Estimated Release: Here! Used in the 2023 publication
A larger training dataset with more taxonomic diversity. Highlights include:
Estimated Release: Fall 2023
Our first add-on module will be an armature detector capable of locating and measuring prickles, thorns, and spines.
Modules are designed to be more taxonomically focused, extending the detection and measurement support of the base LeafMachine2 capabilities.
Estimated Release: 2024
FieldPrism creates curated snapshot vouchers for quantitative trait collection. This release will apply FieldPrism image processing methods when LeafMachine2 detects a FieldPrism FieldSheet in an image, streamlining the workflow.
Check out FieldPrism!
Estimated Release: Fall 2023