Earlier this year, the city of Los Angeles put out a call for help to the California Institute of Technology (Caltech). The last survey of the city's trees was done over 20 years ago. At that point there were about 700,000 street trees, but a major drought in recent years has undoubtedly affected urban green spaces and the city wanted an update.
That's where Caltech comes in. A typical tree survey would require hiring people to walk every city street to identify and count trees, which would cost about $3 million. The city's Bureau of Street Services figured there had to be a better, cheaper way using technology, and it learned about a big project going on at Caltech.
The university has been working on a new machine learning program that can create a tree inventory by using data from satellites and street level images from Google Maps.
The team at Caltech is led by Pietro Perona, a professor of electrical engineering and a leader in the field of computer vision and pattern recognition, a major part of machine learning. His team works with experts at Cornell on a project called "Visipedia" that recently developed an algorithm that can identify North American bird species from a single picture. They eventually want Visipedia to be able to identify any living thing from an image.
The team started focusing on trees when they noticed the effects of the California drought on the trees around Pasadena. As water restrictions were put into place, people stopped watering their yards and trees were dying. The team wanted to know if the trees that were dying were just non-native species that required more water or whether native species were being equally affected and a massive change was taking place.
They created an artificial neural network and began "training" it to recognize trees in satellite and street level views of Pasadena from Google Maps by showing it hundreds of examples of each type of tree.
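The article doesn't describe the network's architecture, but the "training" step it refers to amounts to showing the model labeled examples and nudging its weights to reduce classification error. As a rough illustration only, here is a toy gradient-descent loop for a one-layer "tree vs. not-tree" classifier; the feature vectors and dimensions are invented, and a real system would use a deep convolutional network on image pixels:

```python
import numpy as np

# Toy stand-in for training on labeled examples. Synthetic "tree"
# feature vectors cluster around +1, "non-tree" around -1; all data
# here is invented for illustration.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 1.0, (100, 4)),
               rng.normal(-1.0, 1.0, (100, 4))])
y = np.array([1] * 100 + [0] * 100)

w = np.zeros(4)
b = 0.0
lr = 0.1

for _ in range(200):                       # the "training" loop
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted probability of "tree"
    grad_w = X.T @ (p - y) / len(y)        # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                       # adjust weights to reduce error
    b -= lr * grad_b

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

The same show-examples-and-adjust loop, scaled up to millions of weights and hundreds of labeled images per tree type, is what "training" the network means here.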
Once the algorithm could recognize trees in a photo, the team then had to train it to identify the species of each one. The team used data from a 2013 tree inventory done in Pasadena that included the species, measurements and locations of each of the 80,000 trees in the city to show the algorithm examples of the different species.
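In practice, using a survey like Pasadena's as training data means pairing each inventory record's location with the imagery available there to produce labeled examples. A minimal sketch of that pairing step, with invented records and a hypothetical image-lookup function standing in for the real imagery query:

```python
# Turn inventory records into (image, species) training examples.
# The records and the image_ids_for() lookup are invented for
# illustration; a real pipeline would fetch Google Maps satellite
# and street-level imagery near each tree's coordinates.
inventory = [
    {"species": "Coast live oak",   "lat": 34.1466, "lon": -118.1445},
    {"species": "Mexican fan palm", "lat": 34.1478, "lon": -118.1520},
]

def image_ids_for(lat, lon):
    # Hypothetical: one satellite view and one street-level view
    # per location.
    return [f"sat_{lat:.4f}_{lon:.4f}", f"street_{lat:.4f}_{lon:.4f}"]

training_examples = [
    (img, rec["species"])
    for rec in inventory
    for img in image_ids_for(rec["lat"], rec["lon"])
]
print(training_examples[0])
```

Each (image, species) pair then serves as one labeled example shown to the algorithm during training.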
The team trained the algorithm to identify the 18 most common of the 200 species in Pasadena and then compared the algorithm's ability to identify images of those trees against the data from the 2013 survey. The algorithm was shown four images of each tree taken from different angles and distances and then it gave a list of possible species with a percentage of certainty for each one. The result was that the algorithm could identify the tree species from Google Maps images with 80 percent accuracy.
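One simple way to turn four per-view score lists into the single ranked prediction described above is to average each species' certainty across the views. The article doesn't say how the team combined views, so this is only a sketch; the species names and scores are invented:

```python
# Combine per-view species scores for one tree by averaging, then rank.
# The four score dictionaries are invented for illustration; a real
# system would get them from a classifier run on each Google Maps view.
views = [
    {"Mexican fan palm": 0.70, "Canary Island pine": 0.20, "Coast live oak": 0.10},
    {"Mexican fan palm": 0.55, "Coast live oak": 0.30, "Canary Island pine": 0.15},
    {"Coast live oak": 0.45, "Mexican fan palm": 0.40, "Canary Island pine": 0.15},
    {"Mexican fan palm": 0.80, "Canary Island pine": 0.15, "Coast live oak": 0.05},
]

species = sorted({s for v in views for s in v})
avg = {s: sum(v.get(s, 0.0) for v in views) / len(views) for s in species}

ranked = sorted(avg.items(), key=lambda kv: kv[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.0%} certainty")
```

Averaging lets a confident view outvote an ambiguous one, which is one reason multiple angles and distances help the overall prediction.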
"This was much better than we had expected, and it showed that our method can produce similar results to a tree survey done by humans," said Steve Branson, a postdoctoral scholar in electrical engineering. "A human tree expert can identify species at a higher accuracy than our algorithm, but when these large city tree surveys are done they can't be 100 percent accurate either. You need lots of people to spread out around the city and there will be mistakes."
The goal is to get the vision software to the point where a city could keep a continual log of its trees. The software would collect new data every time satellite and street-level images were updated, which happens every few months. That way a city's leadership could make more informed urban planning decisions and better protect urban forests.
Caltech is working on the algorithm for Los Angeles. The current L.A. and Pasadena demo catalogs are available to browse online.