Helpful discussion on design + optics (please read!)

Visualization is a good question… there are some ways we could work it using PhotosynQ that I could try (or maybe @Andriy since he’s familiar with that also).

That’s an amazing paper! I wonder where that guy is now? We should find him and get his two cents.

Greg

Yeah, and there are some good ideas in there for building the rotating mask system. Using large-diameter bearings is an excellent idea.
The bigger question I have, I guess, is about the actual data processing. How do we take multiple spectrometer measurements and turn them into an image? I’ve been reading up on compressive sensing and I understand the concept, but everything I’ve read digresses into complex math pretty quickly. It seems like the right mask design depends heavily on how the processing is going to happen…

I’m hoping that @rbowman has some skills in this arena.

I was super impressed with that paper, though rebuilding the image required >5000 measurements, which was more than I was hoping for based on other papers (which cited more like 2000 images).

I need to read through it, but I’m wondering why he made the slit so small - it seems like a larger slit would allow fewer images. But again, I need to read it thoroughly.

With that design there were also relatively few points for each sensor measurement (exposure?). In the description of the InView with the Digital Micromirror system, they say they have 50% coverage for each measurement. It makes intuitive sense that if you have a randomized pattern that obscures 50% of your field of view, you’d need fewer exposures than a system that obscures 95%.

My thought exactly. But also he obviously put a lot of thought into it, so I’d be curious to ask him a few questions about why he chose this design path.

Articles about image reconstruction:
https://search.epfl.ch/web.action?q=image+reconstruction+&f=web&lang=en&pageSize=10&sort=

Hey,
I’ve certainly done a little here, though I should be clear that I’ve only ever programmed fairly basic algorithms (you can get much better performance with more sophisticated routines).

The classic way to explain it is to pose it as a matrix equation: if you represent each mask as a vector (yes, a 1D vector - often you take a 2D image and flatten it to a vector), then each measurement b_i is the dot product of the mask vector m_i and the target t. So you need two things: the vector of measurements you acquire (b) and the matrix formed by stacking together all the mask vectors (m). That gives a linear matrix equation, b = m*t, where b and m are known and t is unknown.

The simple way to reconstruct is to use a least squares method to fit t. That’s prone to noise if t has more elements than b, though - and can fall over entirely. The “cunning” methods employ some sort of “regularisation” (essentially adding a penalty for noisy images, which pushes the reconstruction towards smoother, i.e. higher-quality, images). The “very cunning” methods use computational tricks to make this faster.
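To make that concrete, here is a minimal numpy sketch of exactly that formulation - the image size, mask count, and noise level are made up, and plain least squares plus simple Tikhonov regularisation stand in for the more sophisticated routines:

```python
import numpy as np

rng = np.random.default_rng(0)
side = 16
n_px = side * side        # target image flattened to a 256-element vector t
n_meas = 400              # more measurements than pixels -> overdetermined

t_true = rng.random(n_px)                                   # unknown scene (simulated)
m = rng.integers(0, 2, size=(n_meas, n_px)).astype(float)   # each row is one flattened mask
b = m @ t_true + rng.normal(0, 0.01, n_meas)                # one detector reading per mask

# Plain least squares: find t minimising ||m*t - b||^2
t_ls, *_ = np.linalg.lstsq(m, b, rcond=None)

# Simple Tikhonov regularisation: penalise large pixel values to damp the noise
lam = 0.1
t_reg = np.linalg.solve(m.T @ m + lam * np.eye(n_px), m.T @ b)

img = t_reg.reshape(side, side)   # un-flatten back to a 2D image
```

With fewer masks than pixels, the plain least squares solution is the first to fall apart; the regularised one degrades more gracefully.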

Can’t remember if I posted this before, but these two papers:

SPIE article

IEEE article

are definitely worth reading. They don’t have a recipe for the reconstruction method, but they do reference papers that go into it in more detail. They also give a good explanation of how it all works, albeit in a somewhat technical manner. I think it’s worth working through these if you want to understand how a single pixel camera works, and particularly how the number of images affects the noisiness of the result.

Also, I stumbled across a pyrunner post which looks quite promising.


Image Reconstruction with Matlab

"One of the central tenets of signal processing is the Shannon/Nyquist sampling theory: the number of samples needed to capture a signal is dictated by its bandwidth. Very recently, an alternative theory of “compressive sampling” has emerged. By using nonlinear recovery algorithms (based on convex optimization), super-resolved signals and images can be reconstructed from what appears to be highly incomplete data. Compressive sampling shows us how data compression can be implicitly incorporated into the data acquisition process, a gives us a new vantage point for a diverse set of applications including accelerated tomographic imaging, analog-to-digital conversion, and digital photography. "

https://statweb.stanford.edu/~candes/l1magic/
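
For a taste of what those “nonlinear recovery algorithms (based on convex optimization)” actually do, here is a hedged basis-pursuit sketch - minimise ||x||_1 subject to A*x = b, recast as a linear programme - using only numpy/scipy rather than the l1-magic Matlab code. The sizes and the sparse test signal are made up:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k, n_meas = 128, 5, 40            # signal length, sparsity, measurements

x_true = np.zeros(n)                 # sparse test signal: k nonzero entries
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(n_meas, n)) / np.sqrt(n_meas)   # random sensing matrix
b = A @ x_true                                       # compressed measurements

# Split x into positive/negative parts, x = u - v with u, v >= 0, so that
# ||x||_1 = sum(u) + sum(v) and basis pursuit becomes a standard LP.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_rec = res.x[:n] - res.x[n:]

print("max reconstruction error:", np.abs(x_rec - x_true).max())
```

Far fewer measurements than signal elements, yet the sparse signal comes back - which is the “highly incomplete data” claim in the quote.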

Quite daunting… I was kind of shutting my eyes and keeping my fingers crossed for an easy library to do the work :slight_smile: Perhaps we need a more intentional development path on this piece… I think I can add a lot on the hardware side, especially mechanical design, but not so much here.

Do you guys think we should divide and conquer on the work? Actually this project lends itself to that approach. I see a few key domains which are largely independent of each other:

mechanical design <-- get random-enough masks using the spinning disk; line up the optics reproducibly and easily, in a way that everyone involved can build
disk design + math <-- design the disk and do the math on the data that comes out over USB serial to reconstruct the image from the raw data
firmware <-- get data collected, in concert with the stepper / laser / spinner, and output data in a usable format over USB serial
visualization <-- take the post-math data and display the image in a way that is easy for users and quick to iterate through
testing + application development <-- try it out, see what works, test applications + compare to existing alternatives

I think I’m best used on the firmware and mechanical design piece.

Maybe something can be adapted starting from this? Or even contacting the authors to see if they would like to contribute?

I think that before we work out how to do the image reconstruction, we need a way of making the “data cube” for spectral sampling.
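
As a starting point, here is a sketch of what that data cube could look like, assuming each mask position yields one full C12880MA spectrum (288 channels): run the same linear reconstruction once per wavelength channel and stack the results. All names and sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_masks, side, n_lambda = 400, 16, 288   # 288 = C12880MA channel count
n_px = side * side

masks = rng.integers(0, 2, size=(n_masks, n_px)).astype(float)

# Simulated scene: every pixel has its own spectrum (random stand-in data)
scene = rng.random((n_px, n_lambda))
spectra = masks @ scene                   # one full spectrometer read per mask

cube = np.empty((side, side, n_lambda))
for ch in range(n_lambda):                # reconstruct each channel independently
    t_ch, *_ = np.linalg.lstsq(masks, spectra[:, ch], rcond=None)
    cube[:, :, ch] = t_ch.reshape(side, side)
# cube[y, x, :] is the recovered spectrum at each image pixel
```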

Hi Greg @gbathree, I know that the PhotosynQ firmware has code for working with a Hamamatsu sensor. Will this code work with the Hamamatsu C12880MA, or do we need to adapt it?

We can definitely use it - it’ll mostly work, and it’ll output our standard JSON format, which is nice. However, we need to use the updated library from @cversek, which references pedvide’s new Teensy library to get faster reads. Not a huge update, but it needs to be walked through. We may also want to check PhotosynQ’s GitHub page for any recent updates relating to the CoralspeQ firmware and implement those.
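
On the receiving end, collecting that JSON output over USB serial from Python might look something like this - the port name and the “spectrum” key are assumptions, not the actual PhotosynQ schema:

```python
import json
import serial  # pyserial

# Port name and JSON key are assumptions - check the actual device + schema.
with serial.Serial("/dev/ttyACM0", 115200, timeout=5) as port:
    line = port.readline().decode("utf-8").strip()  # one JSON object per line
    reading = json.loads(line)
    spectrum = reading["spectrum"]   # hypothetical key: 288 C12880MA counts
    print(len(spectrum), "channels")
```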

Hey, that looks really interesting - I will have a read (in reply to Kina’s post about spiral masks)

Hey guys
Is this single pixel camera project dead? Was there a working solution at the end?

There is a new single pixel camera project on Kickstarter. Instead of moving the sensor around with motors, it moves two mirrors (X and Y directions) to scan the scene. I think it’s quite a simple and fast solution…
Did a similar solution come up here during the design phase?

Not much news or trials recently with single pixel camera projects, except for this one Kickstarter. Is it already time to forget about the single pixel concept?

Dan


That’s fantastic! I sent him a gushing text via Kickstarter - I’ll ping you if he replies :slight_smile: