This summer I’m starting a new project that aims to measure flowering phenology in a new and exciting way: extracting flower counts using deep learning from high-resolution aerial imagery collected with a small UAV. This has brought me to the Rocky Mountain Biological Laboratory in Gothic, Colorado, which is home to one of North America’s longest-running phenology monitoring programs.
This project has been bouncing around in my head for a while: how can we see and understand the landscape like a pollinator does? The main obstacle is a mismatch between the spatial scales of the processes that we are interested in (pollinator foraging over large landscapes), and the data that we can actually collect (floral abundance and pollinator visitation in tiny plots or transects). If we could develop a way to monitor the timing and abundance of flowering efficiently over large extents, though, we might be able to piece together how the landscapes of floral resources for pollinators change over the course of the season, or even how longer-term trends like climate change are likely to alter these spatial patterns.
Before the development of modern quadcopter drones with high-resolution cameras, it wasn’t feasible to collect imagery at a high enough resolution to see individual flowers over a large enough extent to say anything about landscape patterns. Essentially, flowers are extremely small (2 mm to 5 cm) and the landscape is extremely large. Even with a 12-megapixel camera (standard on consumer drones until recently), we would have to fly extremely slowly at 5 m above the ground to capture imagery with a high enough ground resolution (3 mm) to see most flowers. Luckily, over the past few years the technology has finally matured enough to make this more feasible: a drone with a 20-megapixel camera can fly at 10 m and still capture 3 mm imagery mosaics. Flight times of up to 30 minutes give each flight a survey area of about 1 hectare.
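The altitude-versus-resolution tradeoff above comes from a standard ground sample distance (GSD) calculation. As a rough sketch, here is that arithmetic in Python; the sensor parameters are my own assumptions for illustration (roughly a 1-inch, 20-megapixel sensor of the kind found on recent survey drones), not figures from the surveys themselves:

```python
def gsd_m(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sample distance: the real-world width of one pixel, in meters."""
    # Similar triangles: ground footprint / altitude = sensor width / focal length
    footprint_m = altitude_m * sensor_width_mm / focal_length_mm
    return footprint_m / image_width_px

# Assumed example: 13.2 mm sensor, 8.8 mm lens, 5472-px-wide image, flown at 10 m
print(round(gsd_m(10, 13.2, 8.8, 5472) * 1000, 1))  # ~2.7 mm per pixel
```

Halving the altitude halves the GSD, which is why an older 12-megapixel camera would need to fly around 5 m to reach the same per-pixel detail.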
Even with good imagery, counting flowers by hand wouldn’t be feasible over large areas, because there can be many tens of thousands of flowers in a hectare of meadow. This is where machine learning comes in. Flowers (or inflorescences) are discrete, compact objects with contrasting colors against a soil or vegetation background. These objects are great candidates for being accurately counted using new computer vision tools called convolutional neural networks. I plan to use frameworks developed by Planet Labs and others to automate this classification task.
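To make the counting task concrete, here is a deliberately simple toy baseline, not the CNN approach itself: treat flowers as bright, compact blobs on a darker background, threshold the pixels, and count connected groups. The synthetic "image" and threshold are invented for illustration; a real pipeline would learn these distinctions from labeled imagery rather than relying on a fixed brightness cutoff.

```python
def count_blobs(image, threshold):
    """Count connected groups of pixels brighter than `threshold`
    (4-connectivity), using an iterative flood fill."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                count += 1                      # found a new blob
                stack = [(r, c)]
                while stack:                    # flood-fill the whole blob
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and image[y][x] > threshold and not seen[y][x]):
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Synthetic 6x8 "meadow": 0 = vegetation, 9 = bright flower pixels.
meadow = [
    [0, 9, 9, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 9, 0, 0],
    [0, 0, 0, 0, 0, 9, 9, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [9, 0, 0, 0, 0, 0, 9, 9],
    [9, 0, 0, 0, 0, 0, 0, 0],
]
print(count_blobs(meadow, threshold=5))  # 4 distinct blobs
```

A threshold rule like this breaks down with shadows, overlapping flowers, and species whose colors blend into the background, which is exactly where a trained convolutional network should earn its keep.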
Drones provide a great top-down view of ecosystems, but they can’t see through dense vegetation. How well can photo-based counts of flowers align with more traditional plot-based data? This is what has brought me to Colorado, where researchers have been collecting field data on wildflower phenology in a network of 30 plots since 1973! I’m coordinating with the project’s field lead, Jane Ogilvie, to make sure my drone imagery lines up with the timing of their field surveys. How well can we do? Time will tell.