I’ve been slowly moving this forward and here’s a loooooong overdue update on where things are from my perspective.
Open Hardware Researchers group meeting June 2022
On 2022-06-09, I presented the ideas we discussed in this thread to the Open Hardware Researchers group hosted by @jarancio, thanks for letting me share! Here’s a summary from that meeting based on these notes.
Possible resources/sources of support:
- https://conservationxlabs.com/
- About - Prototype Fund Hardware (German partner needed)
- https://jogl.io/
- https://dmf-lab.co.uk/
Stereo lens add-on??
Instead of an electronics-based solution as we’ve discussed so far, @jpearce also suggested adding “binoculars” in front of the camera lens to give it stereo vision. Essentially the single camera lens would “see” two side-by-side images, which you can then post-process with something like OpenCV (see the sketch after these links). There are even clip-on stereo lenses for smartphones like these:
- https://www.amazon.ca/Artshu-Smartphone-Stereoscopic-Camera-Fisheye/dp/B07JZJBYVF (warning: Jeff Bezos link)
- https://www.amazon.com/3D-Lens-Canon-Digital-Camera/dp/B003V1NS9A (warning: Jeff Bezos link)
- “Record 3D VR videos with Clip-on Lens on any phone!” (warning: YouTube link)
- “REVIEW: Remon 3D VR Camera Lens for Smartphones?!” (warning: YouTube link)
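As a first processing step, here’s a minimal, untested sketch of how one of those side-by-side frames could be split into left/right views with OpenCV. The filename is hypothetical, and a real clip-on lens won’t divide the frame exactly at the centre, so the split point would need per-lens tuning:

```python
import cv2

# Hypothetical frame from a camera fitted with a clip-on stereo lens:
# one image containing the left and right views side by side.
frame = cv2.imread("trap_frame.jpg")

# Split down the middle into the two views. A real lens won't split
# exactly at the centre, so these bounds would need per-lens tuning.
w = frame.shape[1]
left_view = frame[:, : w // 2]
right_view = frame[:, w // 2 :]

# Save the halves for the rectification/depth-map steps further down.
cv2.imwrite("left.png", left_view)
cv2.imwrite("right.png", right_view)
```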
Also, @jpearce mentioned that lenses can now be 3D-printed with great precision:
- Resource: Creating 3D Printed Lenses and a 3D Printed Camera with Stereolithography
- https://www.diyphotography.net/3d-printing-lenses-is-now-a-thing-and-you-can-make-them-yourself/
I haven’t tried this approach yet, but I think @jmwright got one of those clip-on lenses and a few components. From what I can tell, the challenge is that these clip-on lenses are not perfectly aligned, so the image seen by the camera sensor can be blurry, a bit off, and have various other artefacts. I suspect this could be solved by “personalising” the binocular lenses to match the dimensions of a camera trap, but this will have to be done for each camera trap model. Happy to hear what @jmwright thinks. IMO this binocular add-on approach and the stereo camera add-on approach can be pursued in parallel. And if we all get to meet in Panama (see below), then we can compare notes and tinker in person!
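On the misalignment point: my understanding is that OpenCV can correct the geometric part of it in software (though not optical blur), given a one-off chessboard calibration per lens/camera combination. Here’s a rough, untested sketch of that rectification step, where every number below is a placeholder rather than a real calibration result:

```python
import cv2
import numpy as np

# Placeholder calibration data. In practice K1/K2 (camera matrices),
# d1/d2 (distortion) and R/T (rotation/translation between the views)
# would come from cv2.calibrateCamera + cv2.stereoCalibrate runs on
# chessboard photos taken through each half of the clip-on lens.
K1 = K2 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
d1 = d2 = np.zeros(5)
R = np.eye(3)
T = np.array([[-60.0], [0.0], [0.0]])  # ~6 cm baseline, a guess
size = (640, 480)

# stereoRectify computes transforms that row-align the two views so a
# stereo matcher only needs to search along horizontal lines.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

# Build and apply the remapping for the left view (right is analogous).
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
left_rect = cv2.remap(cv2.imread("left.png"), map1x, map1y, cv2.INTER_LINEAR)
```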
Other notes from the meeting
GOSH forum threads (including this one):
- Depth Sensing Technologies for Camera Traps
- New Time of Flight (ToF) camera for accurate 3D depth measurement
Wildlabs thread where I asked @akiba about their BoomBox work, including how they hacked into cameras:
After the stereo images are captured, OpenCV can be used to process them into depth maps (this is where I can try to help by hacking the code), e.g.:
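For example, here’s a minimal, untested sketch using OpenCV’s basic block matcher, assuming the left/right views have already been split and rectified as in the sketches above (the tuning values are starting points, not tested settings):

```python
import cv2

# Rectified left/right views from the paired (or split) captures.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo correspondence. numDisparities must be a
# multiple of 16 and blockSize an odd number; both need tuning.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Scale the raw disparity map to 0-255 for visual inspection.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
cv2.imwrite("depth_preview.png", vis)
```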
Field test
@hikinghack has generously offered to hack together a pair of side-by-side camera traps to get them to trigger simultaneously (see earlier posts in this thread for details). I’ve been in touch with a supplier, NatureSpy, who has kindly offered me a discount.
@Harold also kindly offered to field test in Singapore where - even if wildlife is not within easy reach - human or canine subjects could be used to get a depth image.
I think what I need to do now is procure enough cameras so we can at least get the ball rolling on hacking them! I also need to figure out exactly which camera trap to buy from them, and consider the international shipping costs/import taxes.
Add-on interface module
I’d like to reiterate my love for @Harold’s concept of creating an add-on interface module for camera traps (see this post above). This module would tie into the camera’s triggering mechanism so that we could either trigger the camera from an external source (including another camera) or use the camera to trigger something else; both would be mediated by this add-on.
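To make the “mediated by the add-on” idea concrete, here’s a purely illustrative sketch of the pass-through logic, assuming a Raspberry Pi stands in for the add-on during prototyping. The pin numbers, trigger polarity, and pulse width are all guesses, not measurements from a real camera trap:

```python
import time
import RPi.GPIO as GPIO  # assuming a Raspberry Pi as a stand-in add-on

TRIGGER_IN = 17   # hypothetical pin wired to the host camera's trigger line
TRIGGER_OUT = 27  # hypothetical pin driving the second camera's trigger input

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_IN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(TRIGGER_OUT, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        # Wait for the host camera's trigger line to go high...
        GPIO.wait_for_edge(TRIGGER_IN, GPIO.RISING)
        # ...then mirror the pulse to the second camera so both fire
        # (near-)simultaneously.
        GPIO.output(TRIGGER_OUT, GPIO.HIGH)
        time.sleep(0.05)  # pulse width is a guess; depends on the camera
        GPIO.output(TRIGGER_OUT, GPIO.LOW)
finally:
    GPIO.cleanup()
```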
@Harold has already sketched out how such a PCB could be designed, and has examined a camera trap’s internal circuitry a bit.
Also see this post about how the BoomBox project examined the circuitry.
Workshop at Panama Gathering!?
After recent email threads with @Harold and @hikinghack, I think we’re hoping to start prototyping now and do a show-and-tell about where we are with this stereo camera trap idea at the GOSH Gathering in Panama in late October 2022. Additionally, @Harold is happy to demo hacking into the triggering mechanism of camera traps as part of this workshop.
Eventually write this up to be published somewhere…
The academic in me can’t help but think about publishing our results in one or more peer-reviewed papers. Off the top of my head, I can envision the following putative papers:
- Conceptual review that looks at existing depth-sensing tech and describes the need for it in ecological monitoring with camera traps (see this post, which @laola helped me think through). This review could end by presenting the concepts discussed in this thread?
- Alternatively, our novel concepts could be presented in a separate opinion piece/perspective article of the kind some scientific journals have…
- Short paper, maybe in the Journal of Open Hardware/Hardware X (???), describing @Harold’s interface add-on once we’ve made it.
- If we can rig more of these stereo camera traps together to run a proper ecological survey, then we can probably publish the results, comparing them against other survey methods to see if we get comparable data. This is potentially publishable in an ecological journal.
- If we manage to develop and fabricate the stereo camera add-on, then it might also be publishable in the Journal of Open Hardware/Hardware X?
I’d be curious what the academics among us think of this…
What do you think?
Any feedback is appreciated. It’d be cool if we can do something in Panama together!
There’s probably more stuff I forgot to include in this post, please remind me!
Thanks @hikinghack, @Harold, @jarancio, @jpearce, @jmwright, @laola.