Hi everyone, it’s been a while but here’s another update on what’s been going on with the camera trap stuff…
Session(s) at 2022 Gathering in Panama!
@Harold, @hikinghack, @laola and I hope to run a session or two at our Gathering in Panama this week. Here are some ideas:
- Harold will show and tell a setup for linking two camera traps together to obtain stereo images. See earlier in this thread for details.
- Laura will brainstorm with us what kinds of artistic creations could come from camera traps.
I’ll do my best to hack together some code to turn image pairs from Harold’s stereo camera setup into a depth map, so we can estimate the distance of objects in the scene.
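To make the idea concrete, here is a toy sketch of the classic block-matching approach behind stereo depth maps: for each pixel in the left image, slide a small patch along the same row of the right image, take the horizontal shift with the best match as the disparity, and convert disparity to depth via depth = focal_length × baseline / disparity. This is not Harold’s code or any particular library’s API, just a minimal pure-NumPy illustration (function names and parameters are my own); for real camera-trap images you’d want calibrated, rectified pairs and a proper matcher such as OpenCV’s StereoBM/StereoSGBM.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Naive stereo block matching on rectified grayscale images.

    For each left-image pixel, search leftward along the same row of the
    right image and pick the shift (disparity) that minimizes the
    sum-of-absolute-differences between patches.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_sad, best_d = np.inf, 0
            # Candidate disparities, limited so the right patch stays in-bounds.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp

def disparity_to_depth(disp, focal_px, baseline_m):
    """depth = f * B / d, with zero disparity mapped to infinity."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1e-9), np.inf)

# Tiny synthetic check: a random left image, and a right image that is the
# left image shifted 4 px (i.e. a fronto-parallel scene at constant depth).
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(20, 40), dtype=np.uint8)
d_true = 4
right = np.empty_like(left)
right[:, :-d_true] = left[:, d_true:]
right[:, -d_true:] = left[:, -1:]  # pad the border by replication

disp = block_match_disparity(left, right, block=5, max_disp=8)
depth = disparity_to_depth(disp, focal_px=700.0, baseline_m=0.1)
```

With a 700 px focal length and a 10 cm baseline, a 4 px disparity corresponds to 700 × 0.1 / 4 = 17.5 m, which gives a feel for how small the disparities get at typical camera-trap distances.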
Other developments
Through Wildlabs, I tuned into a series of recent talks on camera traps and artificial intelligence. There’s a research group in Germany that has been publishing their work on distance estimation from camera trap data, including using videos from just one camera, or building their own stereo camera trap from scratch. Looks pretty amazing, though I don’t understand the details of the artificial intelligence techniques they used.
That said, I really like Harold’s approach: most ecologists/scientists don’t have the skills, time, or resources to manufacture stereo camera traps from scratch, whereas Harold’s setup of linking two off-the-shelf traps is much easier to reproduce, especially if the idea of a common interface board could be realised.
And who knows, maybe one day we can try Joshua’s idea of letting a camera trap wear binocular “glasses”, too!
Also, I’m noting here that there was a brief thread with @jmwright about wireless connections for mobile sensors. Not directly related to depth-sensing but wanted to put it here so I don’t lose track of it.