$50k fast funding program for low-cost science tools

Hi Marc,

I would be interested in more details on who does what, and especially how this would differ from the OpenFlexure one.

Let me know.



Thanks everyone for taking the time and sharing all this great, very detailed feedback about the experiment.com experience!



This is my first experience with the Experiment platform, and first I would like to thank David for making this possible, as well as the reviewers and everyone else who participated in this endeavor for their diligent work.

To clarify the context, I collected this feedback based on two projects I have been involved in and several other projects, all part of the Low Cost campaign. My impressions varied, and I hope this will help improve the platform. I'll try to be exhaustive while concentrating on things that could improve. The overall impression was positive, so please don't take the suggested improvement areas as negative. I submitted by the original deadline; however, some of the other projects I base my feedback on were submitted during the extension.

  1. The experience with the UI and flow of the application was excellent.

  2. I liked the fact that the application is pretty slim, with fields restricted to several hundred characters. I have seen other processes that required hundreds of pages.

  3. While I still like the idea of a minimalist application, based on this experience I think the character limit introduced some major issues.

  4. In my case, one of the projects I submitted was a complex project designed to offer a full solution. It has several components that could have been standalone projects. The problem is that the character limits made it pretty much impossible for the reviewers to understand the project. Adding one explanation meant taking another one out, and pleasing one reviewer seemed to do the contrary for the other. The result was an extensive back and forth that lasted a long time. This delay seems to have cost me the funding. I am very disappointed with that, because it should not be impossible for complex projects to go through Experiment. Suggestion: Keep the original short fields, but introduce a detail section for each of them so the reviewers and donors can see the full information behind the short abstracts and understand complex topics.
    This is probably the most important suggestion, and I understand it would take work to implement should you decide to go with it.

  5. Many replies from Experiment took a long time to arrive (even weeks), and some emails were ignored. I think the reviewers are probably overwhelmed. Suggestion: Perform a fast 30-second read of each email when it arrives, followed by a fast personalized one-line reply, and indicate when a full response should be expected. Managing expectations is a good thing.

  6. Awarding of funds for accepted projects is not done based on the date of application, and some projects submitted after the original deadline were awarded before projects submitted in time. Suggestion: Awarding of funds should be based on the date of submission, especially in cases where the original deadline was extended.

  7. All the rules should be published from the beginning, not introduced mid-stream.

  8. There were cases where one reviewer marked a section as great while the other reviewer marked the same section as totally unacceptable. Of course reviewer agreement is never perfect, yet total disagreement on too many sections should not occur. Suggestion: Reviewers should consult the experts before dismissing sections of a project.

  9. Reviewers did not consult the experts in areas they were not familiar with before giving initial feedback. Suggestion: Reviewers should consult the subject experts in the first phase of review. That will avoid confusion and speed up the process. Consulting experts only at the end fosters misunderstanding and introduces delays.

  10. There seems to be a bug in the character count, where the same submission will show slightly different character counts when repeated revisions are performed. Suggestion: Somebody should try to reproduce that bug on the first page in different browsers.

  11. The email exchange is pretty impersonal, and I can understand how in some cases replies could be misconstrued as condescending. I saw feedback from GOSH members mentioning that. Suggestion: Pay special attention to that aspect.


When funding comes through Experiment, the procedure should require adequate screening. That is very important, as Experiment has to make sure the funds go to worthy causes.

The main work is to screen out requests that would use the funds for other purposes.

a) The process should ask whether the group has had previous grants and check whether the grant work was actually performed. That could be done by asking for the contact info of the granting organization and the repository of the work product.

b) The process should ask whether the group has done any prior work in the area of research.

b’) The process should ask whether the project will be done under a university umbrella.

c) The process should screen out requests that do not match the campaign. For instance, requests for education funds should not be made through Low Cost Tools. I have seen projects that seem to do just that. Funds for education should be requested through education campaigns.

d) The process should screen out requests that employ bait-and-switch techniques. All the members that participate in a project should be named. I have seen Campaign Experiment projects that seem to do just that. Simple checks should be able to catch those.

e) The process should screen out or try to avoid requests for salaries (or other monetary compensations) for unnamed persons. I have seen Campaign Experiment projects that seem to do just that.

f) Experiment should be able to provide transparent and accessible feedback on participants in funded projects to other organizations that grant funds.

g) The process should screen out requests for general ideas, especially when those ideas already exist and no prior work with concrete deliverables has been done by the requestor. I have seen Campaign Experiment projects that seem to do just that. Projects that cannot itemize concrete deliverables will probably not deliver anything.

h) The process should screen out requests for funds for general costs of organizations unless the Campaign is designed for that. I have seen Campaign Experiment projects that seem to do just that.

i) The process should require careful consideration before granting funds where a large share of the funds is taken by an umbrella organization. Experiment should suggest making those projects independent. I have seen discussions of Campaign Experiment projects where a university took up to 50% of the funds.

j) The process should require careful consideration before granting funds where there is no clear or concrete research or scientific outcome. I have seen Campaign Experiment projects that seem to have exactly this problem.

We are all aware of the overhead of university research. A SpaceX project costs about one fifth of a comparable NASA project, so one can estimate the overhead of bureaucracy.
Many crucial scientific discoveries were not made under an academic umbrella. While the state allocates ample funding for academic research, there is almost none for independent research.
Based on discussions in this thread and other considerations, I think many people mistakenly regard Experiment as an extension of academia. I am happy to hear David clarifying that it is not. I think it's important to support genuine low-overhead scientific research, and Experiment is one of the few organizations that can fund that.

I would like to end by thanking everyone at Experiment for enabling research and ideas to become reality. My thanks go especially to David, who created this hidden gem and made it a reality. I cannot even imagine the difficulties he faced in making the financial part possible, and the work he puts into securing funding for the campaigns. Outstanding work!

Thank you,


Update: the project is now live! Open Source Radio Imager for Measuring Plant Health | Experiment