Depth Sensing Technologies for Camera Traps

@hpy had asked me for a quick review of potential technologies that could be used to incorporate depth sensing capabilities into camera traps. The idea is that if camera trappers have decent depth information from their cameras, they can automatically do a lot more with it (like estimate the size of passing animals with greater accuracy).

I figured I might as well cross-post this quick little list I made in case it inspires anyone, or in case anyone has other ideas to toss into this arena!

Reminder also that there are lots of fun ideas for new camera traps out there, but a huge difficulty always seems to be making good cases that can deal with lots of abuse from people, transportation, weather, and animals.

Here’s a quick and dirty list of technologies and possible ideas I talked about with my friends @juul and Matt Flagg:

Active:

ToF arrays (e.g. this 8×8 array from SparkFun: Qwiic ToF Imager - VL53L5CX, SEN-18642)

  • Autonomous Low-power mode with interrupt programmable threshold to wake up the host
  • Up to 400 cm ranging
  • 60 Hz frame rate capability
  • Emitter: 940 nm invisible light vertical cavity surface emitting laser (VCSEL) and integrated analog driver

IR pattern projection (e.g. Kinect, RealSense)

  • Limits - some have difficulty in direct sunlight

Calibrated laser speckle projection

  • Could flash a really bright laser speckle pattern and photograph it
  • Could be visible in daylight, or use filters for specific channels
  • Could be very sensitive to vibration if the laser shifts and falls out of calibration

Structured light projection

  • Limits - very slow; can’t really work for moving subjects

LIDAR scanners

  • Limits - VERY expensive (like $600+)

AI Prediction Based

Single-view depth prediction (e.g. https://www.cs.cornell.edu/projects/megadepth/)
Results are a machine-learning inference, not actual depth sensing, and would require lots of calibrated training data.
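To give a flavour of what this looks like in practice, here’s a minimal sketch using the MiDaS model via torch.hub (a different single-view model from MegaDepth, but easy to try); the file name is a placeholder, and note the output is relative depth, not metric distance:

```python
# Hedged sketch: single-view depth inference with MiDaS via torch.hub.
import cv2
import torch

model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("trap_image.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = model(transform(img))
    # Resize the raw prediction back to the input resolution.
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()
# 'depth' is inverse relative depth: bigger values = closer objects.
# Turning it into metres would need per-site calibration, as noted above.
```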

Passive:

Photogrammetry
Personally, the passive methods of depth estimation make me the most excited: just using 2D camera images doesn’t add much new hardware into the mix, and it helps future-proof designs, since photogrammetric techniques (like https://colmap.github.io/) can keep improving and still use old 2D images.

Pre-calibrated Stereo Depth

  • Passive stereo depth (no active illumination); accuracy depends on adequate lighting and on the texture of objects/scenes. Typical accuracy is approximately 3% of distance, but varies with the object and the actual distance.
  • Accuracy drops as distance increases (see the back-of-envelope sketch below).
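To make the “accuracy drops with distance” point concrete, here’s a tiny back-of-envelope sketch using the standard pinhole stereo model (all numbers are illustrative placeholders, not any particular camera):

```python
# Stereo depth error grows with the square of distance:
#   z = f * B / d   =>   dz ~= z**2 * dd / (f * B)
f_px = 800.0        # focal length in pixels (placeholder)
baseline_m = 0.12   # camera separation in metres (placeholder)
disp_err_px = 0.5   # plausible sub-pixel matching error

for z in (1.0, 5.0, 10.0):  # object distance in metres
    dz = z**2 * disp_err_px / (f_px * baseline_m)
    print(f"at {z:4.1f} m: +/-{dz:.2f} m ({100 * dz / z:.1f}% of distance)")
```

With these placeholder numbers the error is about 0.5% of distance at 1 m but over 5% at 10 m, which matches the “roughly 3%, worse with range” rule of thumb above.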

Off-the-shelf kits

OpenCV AI Kit Lite - stereo grayscale cameras

Min depth perception: ~18 cm (using extended disparity search stereo mode)
Max depth perception: ~18 m

Multi-camera arrays
(This is my favorite idea, so I even drew some pictures)

  • There are SUPER cheap ESP32 cameras available for like $6-$20 USD
    Like this one for $14, which has a display we wouldn’t need: https://www.tindie.com/products/lilygo/lilygo-ttgo-t-camera/ or this one for $20 that even has a case and nicer specs: https://www.tindie.com/products/lilygo/lilygo-ttgo-t-camera-mini/

  • You can put these ESP32 boards into “hibernation” (deep sleep) mode, which draws only about 3-5 microamps (meaning they could last months on a battery); see the MicroPython sketch after this list

  • Get 5-10 of these cameras and set them up as an array (this could cost about the same as a single off-the-shelf camera trap)

  • The whole array could be connected to a single unit attached to a tree, with telescoping arms

  • Or several cameras could be independently positioned around an area the animal might pass through

  • Then the cameras could be woken from hibernation by simple PIR motion detectors, grab images, and transfer them to a central node camera

  • Finally, the array of photos could be processed through something like COLMAP to get a 3D reconstruction of each shot

  • A person may need to walk through the target area after setting up the cameras with something like a chessboard for calibration to make the 3D reconstruction easier
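To make the hibernation/PIR idea concrete, here’s a hedged MicroPython sketch for one array node. The pin number and the `camera` module are board-specific assumptions (e.g. the third-party ESP32-CAM MicroPython port), not something every $6 board will have:

```python
# One array node: deep-sleep until the PIR fires, grab a frame, sleep again.
import esp32
import machine
from machine import Pin

pir = Pin(33, Pin.IN, Pin.PULL_DOWN)  # PIR output on an RTC-capable GPIO
esp32.wake_on_ext0(pin=pir, level=esp32.WAKEUP_ANY_HIGH)

if machine.wake_reason() == machine.PIN_WAKE:  # woken by the PIR (EXT0)
    import camera                              # assumption: ESP32-CAM port
    camera.init()
    frame = camera.capture()
    # ...hand 'frame' to the central node over Wi-Fi/ESP-NOW here...
    camera.deinit()

machine.deepsleep()                            # back to a few-microamp sleep
```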

Other camera modules are also available if you want fancier optics than the 2 MP ones on these boards.

4 Likes

This is not something that I work on, but I got curious whether something had been implemented for OpenMV (which I quite like as an open project). This came out of a quick Google search; I hope it’s useful: https://openaccess.thecvf.com/content_CVPRW_2020/papers/w28/Peluso_Enabling_Monocular_Depth_Perception_at_the_Very_Edge_CVPRW_2020_paper.pdf

1 Like

The multi-camera array reminds me a lot of a Bullet Time implementation. Other sillinesses are possible.

I think stereoscopy would probably be the simplest to employ. Not necessarily as complicated as Intel’s RealSense or Microsoft’s Kinect (and there are other RGB+D cameras out there), but just 2 regular cameras a distance apart, triggered at the same time. @akiba has been working on an API to trigger regular camera traps; this would be useful since conservationists likely already have camera traps on hand.

1 Like

Sorry I’m a bit late to a topic that I initiated. :sweat_smile: I didn’t know single-view depth prediction existed; amazing stuff.

Thank you so much to @hikinghack for pulling this together! Not to mention cross-posting this to the Wildlabs forum. Awesome to see a response from @Freaklabs there.

I just read this again, and think it might be helpful to think about this from the following angles.

0. Which ecological/conservation questions might depth-sensing data answer?

Some off the top of my head:

  • Estimating wildlife populations - This is a big one. I know there are existing mathematical models that can make use of animal observation data, but only if there’s a good way to get depth information. Right now it’s a super labor-intensive process that’s not practical (I can explain more if there’s interest).
  • Measuring the size of animals - This can be a proxy for age, which provides demographic information about the species in question.
  • Movement speed - If you can take images in burst mode (e.g. 3-4 photos per second) with depth data, you can estimate how fast an animal is moving (see the sketch after this list).
  • What else?
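On the movement-speed bullet, the arithmetic is just distance over time once you have depth. A toy sketch with hypothetical camera-frame coordinates (in metres), two frames of a 3 fps burst apart:

```python
import math

def speed_m_per_s(p1, p2, dt):
    """Straight-line speed between two 3-D positions, in m/s."""
    return math.dist(p1, p2) / dt

pos_t0 = (0.4, 0.0, 6.2)  # (x, y, z) of the animal in frame 1 (hypothetical)
pos_t1 = (1.1, 0.0, 6.0)  # same animal one frame later (hypothetical)
print(f"{speed_m_per_s(pos_t0, pos_t1, 1 / 3):.1f} m/s")  # ~2.2 m/s
```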

1. What ecological/conservation questions can each tech help answer?

For example, since structured light projection is slow and can’t work on moving things, that constrains the type of data you get. Or: LIDAR scanners are powerful but expensive, so it might be hard to deploy a large number of them in an array, and you wouldn’t get as much spatially distributed data. What are the implications of each?

2. Common evaluation criteria for each tech

Such as:

  • Resolution
  • Range
  • Power needs
  • Response time
  • Spatial scalability (i.e. how feasible it is to deploy an array of these devices in the field)
  • Cost $$$
  • Can it sync with or replace existing camera trap images
  • Technical complexity for building a camera trap out of it

How does the above sound? Is there a better approach? Or maybe I should post this to the Wildlabs forum instead. :woman_shrugging:

Eventually, I think it would be super cool to develop an open source hardware camera trap. But like @hikinghack said, even the case would be a challenge, not to mention other things like a quick triggering system, power requirements, etc. Still, I think it’s a worthwhile endeavour, especially if we can bring new technology to the table like depth-sensing, something multiple ecologists have dreamed about but don’t have the ability to create.

Maybe a first step is to see if @Freaklabs’ BoomBox system can be adapted?

I know @laola has also indicated an interest in this, so please chime in!

Just adding a possible ESP32 board here: the TinyPICO, which is released under the CERN OHL 1.2 license!

I’ve just been playing with a RealSense 415. It’s about 250 USD. It’s fun, but too expensive for deploying all over in the wild. Kinects can be bought second-hand all over, sometimes super cheap, but they all need a computer attached.

Also, I bought 2 of those ESP32 TTGO camera modules, but haven’t played with them intensively yet. I also wanted to make camera traps with them to monitor the dormouse population in my basement.

I am in for trying to do this more in-depth. Maybe a focus call about this soon?

m

1 Like

Hi @dusjagr, that’s super cool! I’m impressed by the high resolution. Who knows, it might be possible to adapt a Kinect into an add-on module for a camera trap, or something like that. And even without a full Kinect, if we can figure out what sensor modules are in it, there might be a way to just buy the sensors?? IIRC @Freaklabs from the Wildlabs forum once posted something about how to tap into a camera trap’s triggering mechanism for this purpose; I can try to dig it up.

Yes, I think it would be great to have an initial, exploratory call about all this.

Is anyone else interested? If so, I can set up a poll to find a date soon-ish.

We briefly chatted about this thread’s topic in today’s GOSH Community Call. Thank you everyone, plus @hikinghack, @laola, and @dusjagr for coming!

@hikinghack shared a link to the ESP-32 based DIY camera trap (archived link).

@dusjagr’s group is thinking of a real-time wild bear detection system. The idea is that a camera trap would trigger a warning for a village when a bear comes into range. I suggested that since this requires a real-time response, the camera trap needs to send its images to a separate device that does onboard automatic recognition of the animal’s species. If it’s a bear, it would then send a warning to the village.

Interesting ideas!

That said, I’d love to hear more of your feedback on how to design a camera trap with depth-sensing abilities. With it, we can use the spatial data from these images to estimate animal populations.

3 Likes

Hi Pen,

I just watched the presentation you gave on camera traps during the May community call. Very interesting! I think we can do some work together.

I’m also interested in interfacing with camera traps, but the other way around: I would like my acoustic recorder to be able to trigger the camera trap. In my application there would be just 1 camera trap; in your application, there would be 2 (for stereo photography). I think there are several advantages to this backwards approach:

  • Multiple sensors can be used, with their outputs logically ORed, and different types of sensors can be used concurrently (e.g. endotherms may need to be treated differently).

  • Sensors can be positioned at some distance from the camera, thereby extending the detection range. For fast-moving animals along a trail, 2 sensors can be placed before and after the camera traps.

  • Since the cameras are triggered from the same source(s), the images captured will be automatically synchronised.

  • Camera traps can already do IR out of the box.

  • A relatively dumb sensor system can be used; probably no need even for a microcontroller.

  • The native motion sensor on the camera can be blindfolded instead of being disabled permanently, or it can be used in parallel with other deployed motion sensors*

  • Camera inter-ocular distance can be varied over a wide range.

I fully agree with your point about not reinventing the camera trap! And I fully agree with your point about ruggedising the final product! We can take advantage of the fact that the camera trap is already a weatherproof optical instrument.

Hacking the camera trap does not seem too difficult. There is a lot of space in it. I’ve been able to bring the trigger signal out to light an LED, and I’ve also been able to parallel a pushbutton switch to inject a trigger signal manually and at will.

I envision inserting a small circuit (on a custom PCB) into the camera trap, which circuit intercepts the trigger signal, and which accepts multiple signal sources.

Inserting this circuit into the camera will take some skill, but it won’t be more difficult than regular board-level repair.

  * I need to verify that this is true for other camera traps.
1 Like

Hi @Harold!!! Thank you so much for your extensive feedback! :pray: It would be awesome to work with you.

I’ll try to respond to each topic you raised:

Acoustic trigger for camera traps

I think your general idea sounds good in terms of acoustic sensor placement and connections to the camera trap.

Are there particular animal species that you’d like to observe with camera traps? And in what cases do you envision acoustic triggers being more desirable than a camera trap’s built-in IR trigger?

I think if we can narrow down the scientific use case for an acoustically-triggered camera trap, that will help define the engineering constraints.

Also, what about using the open source AudioMoth module as the acoustic trigger? There is a lot of background noise in the natural environment, so one also needs to consider the threshold above which the trigger will be activated. And what kinds of sounds should the sensor recognise? Would some sort of on-board audio identification be needed?

UPDATE: By the way, I was just looking at this paper on acoustic surveys, might be of interest to @Harold: https://doi.org/10.1111/2041-210X.13873

Stereo camera trap

What do you/others think of the Freaklabs BoomBox add-on module for camera traps? The design files are in this repository. Does the design look amenable to being modified to trigger a stereo camera instead of speakers? How big of a modification would it be? Who would be willing to work on this hardware?

(of course, we can try to contact Freaklabs as well, but I don’t think they are on the GOSH forum yet… @hikinghack any ideas?)

If the end goal is a field-ready stereo camera trap, my approach would be to start with 2 commercial camera traps and trigger them simultaneously, with 1 camera being the master and the other the slave. This seems the simplest to implement, and the kit is already weatherproof. It does require some hackery, however. The hackery is unavoidable but can be minimised, which is what my talk of a custom PCB was about.

The BoomBox can be put to use, but I don’t see the need. Consider a PIR motion detector that turns on a lamp for a few seconds. This can turn on your stereo camera instead of your lamp.

If you want to also avail yourself of the other features provided by a regular camera trap, like automatically switching to night photography, configurability, ruggedness, low shutter delay, temperature logging etc, I would use 2 regular camera traps as above.

My interest in an acoustic recorder + camera is mostly to detect noisy hot rods on local roads. An acoustic recorder with SPH0645 digital microphones (or similar) produces an output that is calibrated and trivially convertible to dB.
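For what it’s worth, a minimal sketch of that dB conversion: RMS of a block of samples relative to digital full scale, shifted by the mic’s calibration point. The full-scale and sensitivity values below are assumptions; check the SPH0645 datasheet for the real figures.

```python
import math

def spl_db(samples, full_scale=2**17, sens_dbfs=-26.0, ref_db=94.0):
    """Rough SPL from raw samples (assumed 18-bit data, -26 dBFS @ 94 dB SPL)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    dbfs = 20 * math.log10(rms / full_scale)  # level re. digital full scale
    return ref_db + (dbfs - sens_dbfs)        # apply the mic's calibration

samples = [9000] * 512                        # stand-in for one audio block
if spl_db(samples) > 80.0:                    # e.g. only trigger above 80 dB
    print("loud event: fire the camera trigger")
```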

2 Likes

Oh I see @Harold! That’s simpler than I thought. If it’s hot rods you want to photograph, you can set the trigger threshold to a very high dB level, thereby preventing most false triggers. And since the camera traps will be deployed roadside, probably with sufficient sun exposure and relatively easy to service, power consumption is probably not a problem. So I think what you’re suggesting is almost like a speed camera, but acoustically instead of optically triggered…

Just curious: Do you think modding an AudioMoth into the acoustic trigger would help? Or just building something from scratch using the SPH0645 microphone?

As for stereo camera traps…

Indeed, the motion sensors in camera traps are custom-designed to have optimised trigger thresholds and detection angles. Several people I know have tried off-the-shelf PIR motion detectors, but they are usually waaaaaay too sensitive and you get a mountain of false-triggered images. So as a first step, I think building on an existing camera trap’s trigger would be better.

I did briefly consider setting up two camera traps side-by-side to obtain stereo images, but I think you identified the critical point, which is for both to be triggered simultaneously by the motion sensor on one camera trap.

In any case, I think the core concept behind both of our ideas (plus the BoomBox) is to tap into the triggering circuitry of existing camera traps. Right?

The closest thing I’m aware of for understanding that circuitry is this documentation of the BoomBox. Pages 12 & 13 link to Freaklabs’ documentation on how to tap into the motion sensors in 4 different models of camera traps. I’ve used camera traps extensively in the field, but don’t have the electronics expertise to fully understand whether the information in that documentation is enough for us to plan next steps.

Are you or someone able to assess this documentation???

Excited to continue this conversation. I’m already learning lots from you @Harold!


[…] So I think what you’re suggesting is almost like a speed camera, but acoustically instead of optically triggered…

Yes that’s right!

Just curious: Do you think modding an AudioMoth into the acoustic trigger would help? Or just building something from scratch using the SPH0645 microphone?

I think the AudioMoth could probably be modified to be an acoustic trigger, but I don’t think it will be easy to integrate an I2S mic. The native mic is a MEMS PDM unit, and it also has an analog port. If the mic – regardless of output type – is calibrated, then it can be used. But then, something like https://sea.banggood.com/Voice-Detection-Sensor-Module-Sound-Recognition-Module-High-Sensitivity-Sensor-Microphone-Module-DC-3_3V-5V-p-1357680.html could also be used, if one were to go to the trouble of calibrating it.

As for stereo camera traps…

[…]

In any case, I think the core concept behind both of our ideas (plus the BoomBox) is to tap into the triggering circuitry of existing camera traps. Right?

The closest thing I’m aware of for understanding that circuitry is this documentation of the BoomBox. Pages 12 & 13 link to Freaklabs’ documentation on how to tap into the motion sensors in 4 different models of camera traps. I’ve used camera traps extensively in the field, but don’t have the electronics expertise to fully understand whether the information in that documentation is enough for us to plan next steps.


I had not seen this doc! That is exactly correct: the 2 cameras’ PIR sensor output pins need to be combined. Usually the PIR’s output pin goes to a diode or transistor, and it is the outputs of these diodes or transistors that should be wired together (and the grounds of both cameras need to be connected too). Then the slave unit can have its PIR sensor blindfolded so it never triggers. I do believe that’s all: stereo camera trap!

1 Like

Thank you @Harold, a few more questions if I may. :slight_smile:

1.

You mentioned creating a small PCB, one that might even fit inside a camera trap’s casing. Might there be value in turning it into a generalised interface board between the triggering circuitry in a camera trap and external components like your acoustic trigger, other camera traps, stereo cameras, speakers, etc.?

So something like this:

 ┌─────────────────┐        ┌───────────────────┐        ┌─────────────────────┐
 │                 │        │                   │        │                     │
 │ camera trap     │◄───────┤ trigger interface │◄───────┤                     │
 │                 │        │                   │        │ external components │
 │ trigger circuit ├───────►│ board             ├───────►│                     │
 │                 │        │                   │        │                     │
 └─────────────────┘        └───────────────────┘        └─────────────────────┘

The board would expose pins or other ports to serve as a standard interface for external components to either trigger, or be triggered by, a connected camera trap. So, you would connect your acoustic detector to this board and cover the camera trap’s PIR sensor. And for me, I would connect another camera trap to this board.

In other words, this board would give a camera trap the generalised ability to accept expansions. Would you be interested in figuring this out?

2.

Also, based on what you’ve seen in the BoomBox documentation, do you think hacking into the PIR sensor circuits in a camera trap is a difficult job? I used to do electronics, but haven’t soldered anything in 15+ years! I am comfortable opening the case and drilling holes as described in the documentation, but don’t know if my rusty soldering skills (or lack thereof) are up to the job…

3.

Is there a difference in where to tap into a camera trap’s circuitry depending on if you want to use the camera as a trigger vs triggering it externally?

You mentioned creating a small PCB, one that might even fit inside a camera trap’s casing. Might there be value in turning it into a generalised interface board between the triggering circuitry in a camera trap and external components like your acoustic trigger, other camera traps, stereo cameras, speakers, etc.?

[…]

In other words, this board would give a camera trap the generalised ability to accept expansions. Would you be interested in figuring this out?

This is exactly what I planned to do, since it will allow my acoustic recorder to trigger one or more camera traps simultaneously. I was thinking that a 2.5mm audio TS jack would make a suitable signal interface. The interface PCB I mentioned would be glued on the CT’s PCB anywhere convenient, and leads brought out to the CT’s circuitry. It may also do some pulse conditioning and prevent continuous triggering.

Any kind of device can listen in on the trigger signal, and they could flash lights or make sounds like BoomBox does. One post on Wildlabs describes using flailing inflatable tube advertising humanoids as scarecrows to address human-wildlife conflict issues. I’ve heard of swinging thorny branches around for the same purpose.

2.

Also, based on what you’ve seen in the BoomBox documentation, do you think hacking into the PIR sensor circuits in a camera trap is a difficult job? I used to do electronics, but haven’t soldered anything in 15+ years! I am comfortable opening the case and drilling holes as described in the documentation, but don’t know if my rusty soldering skills (or lack thereof) are up to the job…

No, I don’t think it will be difficult. The skill is not hard to pick up.

3.

Is there a difference in where to tap into a camera trap’s circuitry depending on if you want to use the camera as a trigger vs triggering it externally?

No, no difference. The same surgery is performed on all CTs. The signal is active low and is pulled high by a resistor, i.e. it idles high (let’s say; or we could flip the polarity). Any 1 or more CTs or external triggers can pull it low simultaneously, sequentially, or at any time, and this causes ALL CTs to trigger on the falling edge. The only difference between a master and a slave CT is that the slave will have its PIR window physically blacked out or otherwise disabled so it cannot be a triggering source. In practice I think there should be only 1 master CT, because I don’t know how multiple CTs will behave if 2 triggers occur very close together. That cannot happen with 1 master, but it could with 2 or more.
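If a microcontroller ever sits on that line (strictly optional; as noted above, a dumb sensor system works too), a hedged MicroPython sketch of the wired-OR behaviour might look like this (the GPIO number is a placeholder):

```python
from machine import Pin
import time

# Open-drain: writing 1 releases the line (the pull-up takes it high),
# writing 0 actively pulls it low. Idles high, triggers on the falling edge.
line = Pin(4, Pin.OPEN_DRAIN, value=1)

def fire_trigger(pulse_ms=100):
    """Pull the shared line low briefly; every CT sees the falling edge."""
    line.value(0)
    time.sleep_ms(pulse_ms)
    line.value(1)

# Any node can also just listen for someone else's trigger:
line.irq(trigger=Pin.IRQ_FALLING, handler=lambda p: print("triggered"))
```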

The trigger signal can also be broadcast wirelessly. The repeater kit would work like a wireless doorbell. There may be uses for this long range trigger signal.

I’m hampered by not having access to a wide variety of camera traps. I can deduce what all CT trigger circuits ought to look like, and I can verify it against my own cheap CT, but I can’t verify it generally. I can use some help here.

1 Like

@hikinghack and I talked about the camera trap work after the GOSH Community Council meeting today. He’s happy to help do some of the surgery we’ve discussed, and also some field testing! Thanks @hikinghack!

We also talked about how OpenCV can help once we obtain the images, namely:

This is exactly what I planned to do, since it will allow my acoustic recorder to trigger one or more camera traps simultaneously. I was thinking that a 2.5mm audio TS jack would make a suitable signal interface. The interface PCB I mentioned would be glued on the CT’s PCB anywhere convenient, and leads brought out to CT’s circuitry. It may also do some pulse conditioning and prevent continuous triggering.

Great! My electronics knowledge is not up to par, but let me know if or how I can assist here.

No, I don’t think it will be difficult. The skill is not hard to pick up.

Good to know.

The trigger signal can also be broadcast wirelessly. The repeater kit would work like a wireless doorbell. There may be uses for this long range trigger signal.

Which, I guess, would be connected to this putative PCB that you’re proposing!

No, no difference. The same surgery is performed on all CTs.

Also, great. Thanks for the explanation.

I’m hampered by not having access to a wide variety of camera traps. I can deduce what all CT trigger circuits ought to look like, and I can verify it against my own cheap CT, but I can’t verify it generally. I can use some help here.

At the discussion today, I expressed a willingness to pitch in and buy a few camera traps for this effort. I will probably buy the cheapest one listed in Freaklabs’ BoomBox documentation. Would it help you @Harold if we send you some close-up photos of the circuitry once the cases are opened? Or should I send one of the camera traps to you?

While I would certainly love to get my hands on more kit, another option is to have existing owners of camera traps examine their own equipment with a cheap multimeter.

I think I’ll make a simple video explaining how to do this. Basically it’s about locating the output pin of the PIR sensor and tracing (with a multimeter) where it goes. I expect we’ll trade a few closeup photos as we work through this process together (over the forum?) but at the end of this we should have a well-annotated photo that will aid in modifying that particular model of camera trap.

This info will also help in deciding the polarity of the trigger signal. Open collector active low is the obvious choice, but most PIR output pins are active high. This is easy to invert, but it’s easier to not have to invert.

Once that is decided, the interface PCB can be made.

I’m in the middle of field trials right now so the earliest that I can possibly get around to starting this is next week.

1 Like

I’ve been slowly moving this forward and here’s a loooooong overdue update on where things are from my perspective.

Open Hardware Researchers group meeting June 2022

On 2022-06-09, I presented the ideas we discussed in this thread to the Open Hardware Researchers group hosted by @jarancio, thanks for letting me share! Here’s a summary from that meeting based on these notes.

Possible resources/sources of support:

Stereo lens add-on??

Instead of an electronics-based solution as we’ve discussed so far, @jpearce suggested also trying to add “binoculars” in front of the camera lens to give it stereo vision. Essentially the single camera lens would “see” two side-by-side images, which you can then post-process with something like OpenCV. There are even clip-on stereo lenses for smartphones like these:

Also, @jpearce mentioned that lenses can now be 3D-printed with great precision:

I haven’t tried this approach yet, but I think @jmwright got one of those clip-on lenses and a few components. From what I can tell, the challenge is that these clip-on lenses are not perfectly aligned, so the image seen by the camera sensor can be blurry, a bit off, or have various other artefacts. I suspect this could be solved by “personalising” the binocular lenses to match the dimensions of a camera trap, but this would have to be done for each camera trap model. Happy to hear what @jmwright thinks. IMO this binocular add-on approach and the stereo camera add-on approach can be pursued in parallel. And if we all get to meet in Panama (see below), then we can compare notes and tinker in person!
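If anyone does try the clip-on route, the first post-processing step would presumably be splitting the single frame into its two half-views before calibration, something like this (the file name is a placeholder):

```python
import cv2

frame = cv2.imread("binocular_frame.jpg", cv2.IMREAD_GRAYSCALE)
h, w = frame.shape
left, right = frame[:, : w // 2], frame[:, w // 2 :]
# 'left'/'right' then need per-model calibration/rectification to correct
# the misalignment noted above, before going to a stereo matcher (see the
# OpenCV sketch further down this post).
```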

Other notes from the meeting

GOSH forum threads (including this one):

Wildlabs thread where I asked @akiba about their BoomBox work, including how they hacked into cameras:

After the stereo images are captured, OpenCV can be used to process them into depth maps (this is where I can try to help by hacking the code), e.g.:
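Something along these lines: a minimal OpenCV sketch, assuming the pair is already rectified and using placeholder calibration numbers:

```python
import cv2
import numpy as np

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder files
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be a multiple of 16
    blockSize=5,
)
# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

f_px, baseline_m = 800.0, 0.12          # placeholder stereo calibration
depth_m = np.zeros_like(disparity)
valid = disparity > 0
depth_m[valid] = f_px * baseline_m / disparity[valid]  # depth = f*B/d
```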

Field test

@hikinghack has generously offered to hack together a pair of side-by-side camera traps to get them to trigger simultaneously (see earlier posts in this thread for details). I’ve gotten in touch with a supplier, NatureSpy, who has kindly offered to give me a discount.

@Harold also kindly offered to field test in Singapore where - even if wildlife is not within easy reach - human or canine subjects could be used to get a depth image.

I think what I need to do now is procure enough cameras so we can at least get the ball rolling on hacking them! I also need to figure out exactly which camera trap to buy from them, and consider the international shipping costs/import taxes.

Add-on interface module

I’d like to reiterate my love for @Harold’s concept of an add-on interface module for camera traps (see this post above). This module would tie into the camera’s triggering mechanism so that we can either trigger the camera from an external source (including from another camera) or use the camera to trigger something else; both would be mediated by this add-on.

@Harold has already sketched out how such a PCB could be designed and examined a camera trap’s internal circuitry a bit.

Also see this post about how the BoomBox project examined the circuitry.

Workshop at Panama Gathering!?

After recent email threads with @Harold and @hikinghack, I think we’re hoping to start doing some stuff now, and do a show and tell about where we are with this stereo camera trap idea at the GOSH Gathering in Panama in late October 2022. Additionally, @Harold is happy to demo hacking into the triggering mechanism of camera traps as part of this workshop.

Eventually write this up to be published somewhere…

The academic in me can’t help but think about publishing our results in a peer-reviewed paper(s). Off the top of my head, I can envision the following putative papers:

  • Conceptual review that looks at existing depth-sensing tech and describes the need for it in ecological monitoring with camera traps (see this post, which @laola helped me think through). This review could end with presenting the concepts discussed in this thread?
    • Alternatively, presenting our novel concepts could be in a separate opinion piece/perspective article of the kind some scientific journals have…
  • Short paper, maybe in the Journal of Open Hardware/Hardware X (???), describing @Harold’s interface add-on once we’ve made it.
  • If we can rig more of these stereo camera traps together to run a proper ecological survey, then we can probably publish the results and compare them to other survey methods to see if we get comparable results. This is potentially publishable in an ecological journal.
  • If we manage to develop and fabricate the stereo camera add-on, then it might also be publishable in the Journal of Open Hardware/Hardware X?

I’d be curious what the academics among us think of this…

What do you think?

Any feedback is appreciated. It’d be cool if we can do something in Panama together!

There’s probably more stuff I forgot to include in this post, please remind me!

Thanks @hikinghack, @Harold, @jarancio, @jpearce, @jmwright, @laola.

2 Likes

Hi everyone, it’s been a while but here’s another update on what’s been going on with the camera trap stuff…

Session(s) at 2022 Gathering in Panama!

@Harold, @hikinghack, @laola and I hope to run a session or two at our Gathering in Panama this week. Here are some ideas:

  1. Harold will show and tell a set up for linking two camera traps together to obtain stereo images. See earlier in this thread for details.
  2. Laura will brainstorm with us what kinds of artistic creations could come from camera traps.

I’ll do my best to hack together some code to turn images from Harold’s stereo camera set up into a depth map with which we can judge the distance of objects in it.

Other developments

Through Wildlabs, I tuned into a series of recent talks on camera traps and artificial intelligence. There’s a research group in Germany that has been publishing their work on distance estimation from camera trap data, including using videos from just one camera, or building their own stereo camera trap from scratch. Looks pretty amazing, though I don’t understand the details of the artificial intelligence techniques they used.

That said, I really like Harold’s approach: most ecologists/scientists don’t have the skills, time, or resources to manufacture camera traps at scale, and Harold’s setup is much easier, especially if the idea of a common interface board could be realised.

And who knows, maybe one day we can try Joshua’s idea of letting a camera trap wear binocular “glasses”, too!

Also, I’m noting here that there was a brief thread with @jmwright about wireless connections for mobile sensors. Not directly related to depth-sensing but wanted to put it here so I don’t lose track of it.

1 Like

Regarding LoRa: I’ve seen multiple articles and products that combine camera traps and LoRa since I posted that question, e.g. the Covert LoRa Trail Camera System (LoRa LB-V3) from Verizon.

1 Like