Artificial Intelligence in Wildlife Camera Trapping
Just a small sampling of the many camera trap projects hosted on Zooniverse!
If you've ever checked out the Zooniverse Projects Page, you are well aware that there are dozens upon dozens of research initiatives that rely on camera trapping to capture information about animal behavior. Technological advances in camera trap hardware over the last two decades have allowed scientists to collect previously undreamed-of amounts of ecological data. The bottleneck we face now is extracting information from these large datasets: I once calculated that if I had to classify all of the images used in my PhD dissertation myself (at one photo every 20 seconds, working eight-hour days, seven days a week, without breaks or holidays), it would have taken me over 42 straight years just to reach the point where I could begin asking questions of my data! Citizen science has been a HUGE boon to camera trap studies, using the power of the multitudes to unlock the data trapped inside our photographs. It is hard to overstate how much research has been made possible by the growth of the citizen science movement.
The very first online citizen science camera trap project, Snapshot Serengeti, spawned a new era in camera trap data processing!
However, a guiding principle for us in using citizen scientists is that we don't misuse volunteers' time and effort by setting them on projects that could be accomplished any other way. Machine learning is now emerging as a new tool to help us move through the ever-increasing streams of data that we're gathering on the natural world. Computer vision algorithms for camera trap photos have now been developed that can identify species, count individuals, and even guess at what behaviors the animals are performing! This is not to say that these algorithms will be replacing human volunteers any time soon: if you've looked at even a handful of camera trap pictures, you'll know that distance from the camera, lighting, weather, and a whole host of other issues can make certain identifications incredibly tricky for even the human brain. While we still have a long way to go in getting computers as 'clever' as volunteers at accurate animal classifications, one of our goals for the upcoming year is to combine the powers of machine learning and citizen science in ways that open up new realms of data processing and analysis.
Output from a machine learning algorithm trained on Snapshot Serengeti camera trap data by Norouzzadeh et al. (2017)
The machine learning algorithms we'll be using are "deep" neural networks: we aren't pre-programming them to recognize specific, hand-picked features in an image to key in on what animal they're seeing. Rather, the system operates more like a human brain and actually teaches itself (learns!) which features it needs to identify particular objects. In fact, these algorithms are called "artificial neural networks" because of their brain-like design. The "neurons" in these algorithms are arranged in a series of hierarchical layers, with each successive layer taking input from the previous one to build up a complete picture of what is being "seen".
Norouzzadeh et al.'s explanation of how the layers of neurons in an artificial neural network recognize animals in an image
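To make the "layers of neurons" idea concrete, here is a minimal sketch of a convolutional neural network in PyTorch. This is purely illustrative, not the architecture Norouzzadeh et al. actually used: the `TinyAnimalNet` name, its layer sizes, and the species count are all made-up assumptions.

```python
# A minimal sketch of a convolutional neural network (NOT the actual
# Norouzzadeh et al. architecture). It just illustrates how successive
# layers each build on the output of the previous one.
import torch
import torch.nn as nn

class TinyAnimalNet(nn.Module):
    def __init__(self, num_species: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Early layers pick out low-level features (edges, textures).
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            # Deeper layers combine those into larger patterns
            # (ears, legs, body shapes).
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # The final layer maps the learned features to one score per species.
        self.classifier = nn.Linear(32 * 56 * 56, num_species)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # hierarchical feature extraction
        x = torch.flatten(x, 1)
        return self.classifier(x)  # raw scores ("logits") per species

# One 224x224 RGB camera trap image -> scores for each candidate species.
scores = TinyAnimalNet()(torch.randn(1, 3, 224, 224))
```

Real camera trap classifiers are far deeper than this toy, but the principle is the same: each layer's "neurons" take input only from the layer below.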
Our neural network is fed a set of training examples (hundreds to thousands of images that already have the correct labels associated with them) in order to learn which features it needs to extract to make a correct classification. Running the trained algorithm on new images then produces a classification of what is likely in each image, along with an associated probability: a measure of how certain the network is that this is the correct ID.
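Here is a similarly hedged sketch of the two halves of that process: a training step that learns from labeled examples, and a classification step that turns the network's raw output into a species guess plus a confidence. It assumes the toy `TinyAnimalNet` from the sketch above is in scope; `training_step` and `classify` are illustrative names, not functions from any real camera trap pipeline.

```python
# Sketch of training on labeled examples and classifying new images,
# assuming the toy TinyAnimalNet class from the previous sketch.
import torch
import torch.nn.functional as F

model = TinyAnimalNet(num_species=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One pass over a batch of (image, correct-label) training examples."""
    logits = model(images)
    loss = F.cross_entropy(logits, labels)  # how wrong were the guesses?
    optimizer.zero_grad()
    loss.backward()   # trace each weight's contribution to the error
    optimizer.step()  # nudge the weights toward better answers
    return loss.item()

def classify(image: torch.Tensor) -> tuple[int, float]:
    """Return (predicted species index, probability) for one new image."""
    with torch.no_grad():
        probs = F.softmax(model(image.unsqueeze(0)), dim=1)
    confidence, species = probs.max(dim=1)
    return species.item(), confidence.item()
```

The softmax step at the end is what converts the network's raw scores into the "probability or certainty" attached to each ID.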
In the new year, we'll be working to integrate a machine learning algorithm created by a group of researchers in Colorado. They trained their algorithm on camera trap images of North American wildlife gathered from projects across five US states: California, Colorado, Florida, South Carolina, and Texas. We're not going to start, however, by having the computer try to ID all of our pictures: this task would simply be too hard, and the accuracy too low at the moment to be worthwhile! Our plan is to first have the algorithm identify and retire all images that don't contain any animals, or that are populated by cars, humans, or deer. By removing these easy (and, to be honest, some of the most boring; see our post about cars from a few days ago!) images, we can focus all of the human brain power on evaluating the most difficult, the trickiest, and the most unusual images we have. This means more canids, more raccoons, and more interesting puzzles for everyone! Bringing together human and machine will help us keep pace with all of the exciting new imagery coming our way from Cedar Creek: Eyes on the Wild!
Too tough for computers, but not impossible for the human brain!
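To show how that human/machine division of labor might look in code, here is a small sketch of the routing logic described above. The categories to auto-retire come straight from our plan (empty frames, cars, humans, and deer), but the function name and the 0.95 confidence cutoff are illustrative assumptions, not the project's actual settings.

```python
# Sketch of the planned routing: the machine retires only the "easy"
# images it is very sure about; everything else goes to volunteers.
AUTO_RETIRE = {"empty", "car", "human", "deer"}
CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff, not a real project setting

def route_image(label: str, confidence: float) -> str:
    """Decide whether the machine's answer is safe to act on by itself."""
    if label in AUTO_RETIRE and confidence >= CONFIDENCE_THRESHOLD:
        return "retired by machine"
    return "sent to volunteers"  # the tricky, interesting images

# A very confident "empty" frame is retired automatically, while an
# uncertain canid detection still goes to human eyes:
print(route_image("empty", 0.99))  # -> retired by machine
print(route_image("canid", 0.60))  # -> sent to volunteers
```

The key design choice is the high confidence threshold: we would rather send an easy image to a volunteer unnecessarily than let the machine quietly discard a photo it got wrong.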