By Jaro Uljanovs, Sharper Shape
If you are in charge of a powerline transmission network, the chances are that errant vegetation is one of the major banes of your life.
Utilities are forced to spend huge sums annually keeping lines clear of encroaching vegetation that could damage or bring down powerlines in bad weather, risking outages, expense and even fire. Like ‘painting the Forth Bridge’, it is an intensely manual task that is already overdue by the time it’s finished.
However, with the application of cutting-edge artificial intelligence (AI) techniques, combined with state-of-the-art data collection, it is now possible to aggressively streamline the process, saving time and money while lowering risk.
Digitalizing vegetation management
Vegetation management feels very analogue. It brings to mind images of trees and forests, a far cry from the artificial imagery we associate with technology. And indeed, the traditional approach is very analogue: workers ‘walk the line’, looking for vegetation that appears to be getting too close to powerlines, then deciding what needs to be done depending on how fast-growing the species in question is.
It may be all fresh air and forest, but in reality this is a data problem. First, the worker must identify instances where, say, a tree branch is getting too close to a powerline. Maybe ‘too close’ is defined in a fuzzy, human way, but that’s a datapoint. Second, that worker must go through their mental database of candidate tree species and combine that initial location data with what they know about that tree. Is it fast-growing? Is it prone to breakage in high winds? Is it deciduous or evergreen? Finally, they must combine and weigh that data to come to a decision: cut it back or leave it be.
Broken down that way, does it seem so far-fetched a computer could do this?
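To make the point concrete, here is a toy sketch in Python of that three-step decision. The species attributes, clearance thresholds and inspection cycle are all invented for illustration – nothing here reflects a real utility's standards:

```python
# A caricature of the line walker's decision as data plus rules.
# All numbers and species attributes below are illustrative assumptions.

SPECIES_DB = {
    # species: (growth in feet per year, prone to wind breakage?)
    "silver maple": (3.0, True),
    "white oak": (1.0, False),
}

def trim_decision(clearance_ft, species, inspection_cycle_years=2):
    """Return 'cut back' if the tree could breach a minimum clearance
    before the next inspection, else 'leave it be'."""
    growth, brittle = SPECIES_DB[species]
    # project the clearance forward to the next inspection
    projected = clearance_ft - growth * inspection_cycle_years
    min_clearance = 10.0           # illustrative clearance standard
    if brittle:
        min_clearance += 5.0       # extra margin for wind-breakage risk
    return "cut back" if projected < min_clearance else "leave it be"

print(trim_decision(12.0, "silver maple"))   # fast-growing and brittle
print(trim_decision(20.0, "white oak"))      # slow-growing, well clear
```

The point is not the rules themselves but that every input – clearance, growth rate, breakage risk – is a datapoint a machine can hold and weigh.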
Good results require good data
Garbage in, garbage out – the old tech adage is as true as ever. Just as we wouldn’t trust the judgement of a line walker who had left their glasses in a truck, if we are to digitize vegetation management, we need high-quality data as an input.
For vegetation management, we can specify three datasets necessary to do a good job.
First, we have LiDAR – or light detection and ranging – data. LiDAR sensors create a geospatial point cloud, essentially compiling a 3D map of objects in a surrounding area. LiDAR won’t tell us much about what is there, but at any given point it will tell us whether an object is present or not. We can therefore tell that a tree-shaped object is x feet away from our powerline, as well as its dimensions.
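At its core, that clearance check is a point-to-segment distance calculation over the point cloud. The sketch below models a powerline span as a straight 3D segment between two poles; the coordinates are made up:

```python
import numpy as np

def point_to_segment(points, a, b):
    """Shortest distance from each 3D point to the segment a-b."""
    ab = b - a
    # parameter t in [0, 1] of the closest point along the segment
    t = np.clip(((points - a) @ ab) / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(points - closest, axis=1)

a = np.array([0.0, 0.0, 30.0])      # one pole's attachment point (ft)
b = np.array([100.0, 0.0, 30.0])    # the next pole
tree = np.array([[50.0, 8.0, 25.0],    # a crown point off to the side
                 [50.0, 3.0, 29.0]])   # a closer, higher one
print(point_to_segment(tree, a, b))    # distances in feet
```

A real conductor sags between poles rather than running straight, so a production system would model the catenary – but the principle is the same.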
Second, we add RGB (red, green, blue) imaging data, which gives the point cloud human-visible color and a finer level of detail, before adding the third layer to the picture: hyperspectral imagery.
Hyperspectral sensors measure parts of the electromagnetic spectrum invisible to the human eye. Combined with RGB data, this gives a richer, more informative view of a given object than a human observer will get, resulting in higher-fidelity data. Crucially, different species of vegetation reflect sunlight with different amplitudes at different wavelengths, meaning that hyperspectral imagery combined with an accurate reference database can even identify species from the foliage. Tests have shown this to be at least as accurate as identification by an informed human, and at scales impractical for human labor.
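As a sketch of how that matching might work, the snippet below compares a measured reflectance spectrum against a tiny invented reference library using the spectral angle between spectra. Real libraries hold hundreds of narrow bands per species, so treat every number here as a placeholder:

```python
import numpy as np

# Invented five-band reflectance signatures for two species.
REFERENCE = {
    "birch":  np.array([0.05, 0.12, 0.09, 0.55, 0.48]),
    "spruce": np.array([0.10, 0.06, 0.20, 0.25, 0.15]),
}

def identify(spectrum):
    """Return the reference species whose spectrum makes the smallest
    angle with the measured one (a spectral-angle-mapper style match)."""
    def angle(u, v):
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos, -1.0, 1.0))
    return min(REFERENCE, key=lambda s: angle(spectrum, REFERENCE[s]))

measured = np.array([0.05, 0.11, 0.08, 0.50, 0.45])
print(identify(measured))
```

Using the angle rather than raw distance makes the match insensitive to overall brightness, which varies with illumination.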
So, by combining these three data sets, we have a geospatial understanding of where vegetation is in relation to powerlines, plus an understanding of what that particular piece of vegetation is and, therefore, how it might behave. All the inputs needed for our decision.
As we know though, the intelligent part of artificial intelligence isn’t collecting data – it’s knowing what to do with it. In other words: making the decision.
Getting to a point where a machine can make that decision is not simple, quick, or easy. However, hard work and technological advancement have now allowed us to get there.
First, we can start with some simpler AI tasks. Returning to the breakdown above, we can, for example, develop specific neural network algorithms that use the LiDAR point cloud data to recognize the shapes of certain objects, e.g. a transformer, a pole, a tree. Separately, we could develop algorithms to classify different trees and subspecies of tree using hyperspectral data, prioritizing the most and least dangerous.
This is already a highly sophisticated approach to vegetation management, capable of saving hundreds of man-hours in the field. Right now, we first analyze the LiDAR point cloud data to create a canopy height model, discarding vegetation under a certain height as non-threatening (as long as it is not directly under the powerlines). On the remaining points, we overlay the hyperspectral data for species classification to make a risk assessment.
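That two-stage filter can be caricatured in a few lines. The height threshold, species list and risk labels below are all illustrative assumptions, not the production model:

```python
# Stage 1: canopy height model discards low, off-corridor vegetation.
# Stage 2: species classification risk-ranks whatever survives stage 1.

HEIGHT_THRESHOLD_FT = 15.0
SPECIES_RISK = {"silver maple": "high", "white oak": "low"}  # assumed labels

def assess(candidates):
    """candidates: list of (height_ft, directly_under_line, species)."""
    flagged = []
    for height, under_line, species in candidates:
        # keep anything tall enough to matter, or directly under the line
        if height >= HEIGHT_THRESHOLD_FT or under_line:
            flagged.append((species, SPECIES_RISK.get(species, "unknown")))
    return flagged

print(assess([
    (8.0,  False, "white oak"),      # short, off to the side: discarded
    (8.0,  True,  "white oak"),      # short but under the line: kept
    (40.0, False, "silver maple"),   # tall: kept, high-risk species
]))
```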
However, recent advances in graph neural networks (GNNs) have expanded what is possible for machine learning and AI, even with sparse point clouds. Using a combination of open-source and proprietary technology, it will not be too long before our AI can combine the datasets into a single input automatically, and rapidly classify vegetation according to species, health and risk, before making an automated recommendation on how to proceed.
Imagine that: instant, automatic insight into vegetation danger-spots on the network. Resources could be deployed more intelligently and economically to manage vegetation, and the risk of damage, outages and wildfires would be drastically lower – all based on data which itself could be harvested automatically by drone in the not-too-distant future. It’s a future where AI has upended the challenge of vegetation management – and it’s tantalizingly close.
No silver bullets
Of course, big promises have been made for all sorts of technologies, and we must be careful not to overhype AI-based vegetation management. There is a significant upfront investment required in expert human time to collect, pre-process, label and annotate high-quality data for the machine learning algorithms to learn from. This can be difficult for utilities with tight budgets that are hostile to upfront costs. However, thanks to our previous projects, we have a wealth of LiDAR data labeling key infrastructure components to work from, and dedicated experts who can sift through terabytes of data, resulting in clean, error-free datasets to train algorithms on.
However, once that stage is completed, ongoing operational costs can be extremely low. The vegetation management program could be repeated ad infinitum so long as data is kept up to date, and planners could even use it to run ‘what-if’ scenarios for events such as major storms. What’s more, unlike many transmission utility investments, there are no ‘buried’ costs. If you install a new insulator, for example, it will sit there for years and upgrading it probably means replacing it and extensive labor – so you want to be 100 percent sure before going ahead with it. With software, upgrades are continual and non-disruptive, meaning an investment that improves with time, not one superseded by better options.
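A ‘what-if’ scenario can be as simple as re-running a clearance check with a storm-condition sway allowance added to each tree's effective reach. The distances and allowance below are invented:

```python
def breaches(trees, clearance_ft, sway_ft=0.0):
    """trees: list of (name, distance_to_line_ft).
    Return the trees whose effective distance, reduced by storm sway,
    falls inside the required clearance."""
    return [name for name, dist in trees if dist - sway_ft < clearance_ft]

trees = [("T1", 14.0), ("T2", 22.0)]
print(breaches(trees, clearance_ft=10.0))               # calm conditions
print(breaches(trees, clearance_ft=10.0, sway_ft=8.0))  # major-storm scenario
```

The same data, queried under different assumptions, turns a compliance record into a planning tool.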
Based on the clear business need and the rapid improvement of technology in the space, I am confident that AI-based vegetation management will become the gold-standard for advanced transmission utilities in a short span of years.
About the Author
Jaro Uljanovs, Director of Cognitive Algorithms at Sharper Shape, is a machine learning and data specialist with experience in a variety of fields. He completed his master’s degree in physics at the University of York, UK, where he applied machine learning techniques to disruption prediction in nuclear fusion reactors. Having worked with the Joint European Torus (JET) in Oxfordshire in collaboration with Aalto University, he is no stranger to big data analysis, large-scale collaborative efforts and problem solving. His current focus lies in artificial intelligence and its applications to automated data analysis. Non-standard applications of neural networks are his main interest: graph neural networks, few-shot learning, spatial-spectral convolutions. These areas have helped Sharper Shape excel at key AI application areas such as automated LiDAR segmentation, automated component detection and assessment, and deep hyperspectral data analysis.