$50k fast funding program for low-cost science tools

Awesome!

1 Like

An update on my proposal: after making three successive rounds of modifications requested by the reviewers on Experiment, my project was approved today. I hope it’s not too late.

2 Likes

Good luck @TOKO!

I see a few GOSH projects are about to run out of time in the next week or two while sitting in the 80%-funded region. I think quite a few of us managed to pull in a few backers, but not enough to take us over the line. It is hard to see exactly how many projects have applied, as the challenge page doesn’t list the projects.

...

It is confusing that it tells you to click on the challenge to see more projects, but none are listed. I tried searching “low cost tools” as @davidtlang showed us, however it only ever finds 8 projects (though it finds a different 8 if I sort differently :woozy_face:)

Is the plan to extend the deadline and allow a new round of submissions if the $150k isn’t spent?

3 Likes

Do I see correctly that this opportunity is still open?
The website now says 15 August.

We (@gaudi @dusjagr) are experimenting with a new concept for a low-cost DIY microscope using an ESP32 and camera module, a hacked lens, and a simple stage made from PCB material that can be assembled into a solid stage structure.

Anybody interested in joining the development?

We think the “old” approach with webcams attached to a laptop is a bit outdated.

Using the ESP32 and camera makes things easier globally, where people usually only have a smartphone: we can view the stream in the phone’s web browser and record higher-resolution images to the onboard SD card.
Adding some simple tools for measurement and calibration should be easy with a web app.
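
To give an idea of how little firmware the capture side needs, here is a minimal, untested sketch, assuming the common AI-Thinker ESP32-CAM pinout and the Arduino ESP32 core (esp_camera / SD_MMC libraries). It just grabs one JPEG and writes it to the onboard SD card; the phone-browser streaming could reuse the stock CameraWebServer example. This is a sketch of the idea, not our actual firmware.

```cpp
// Minimal capture-to-SD sketch for an AI-Thinker ESP32-CAM board (Arduino ESP32 core).
// Assumes the standard AI-Thinker pin mapping; adjust the pins for other modules.
#include "esp_camera.h"
#include "FS.h"
#include "SD_MMC.h"

void setup() {
  Serial.begin(115200);

  camera_config_t config = {};
  config.ledc_channel = LEDC_CHANNEL_0;
  config.ledc_timer   = LEDC_TIMER_0;
  config.pin_d0 = 5;   config.pin_d1 = 18;  config.pin_d2 = 19;  config.pin_d3 = 21;
  config.pin_d4 = 36;  config.pin_d5 = 39;  config.pin_d6 = 34;  config.pin_d7 = 35;
  config.pin_xclk = 0; config.pin_pclk = 22; config.pin_vsync = 25; config.pin_href = 23;
  config.pin_sscb_sda = 26; config.pin_sscb_scl = 27;
  config.pin_pwdn = 32; config.pin_reset = -1;
  config.xclk_freq_hz = 20000000;
  config.pixel_format = PIXFORMAT_JPEG;   // JPEG straight off the sensor
  config.frame_size   = FRAMESIZE_UXGA;   // 1600x1200, higher than a typical webcam stream
  config.jpeg_quality = 10;
  config.fb_count     = 1;

  if (esp_camera_init(&config) != ESP_OK) { Serial.println("camera init failed"); return; }
  if (!SD_MMC.begin())                    { Serial.println("SD card mount failed"); return; }

  camera_fb_t *fb = esp_camera_fb_get();  // grab one frame
  if (!fb) { Serial.println("capture failed"); return; }

  File file = SD_MMC.open("/capture.jpg", FILE_WRITE);
  if (file) {
    file.write(fb->buf, fb->len);         // save the JPEG to the SD card
    file.close();
    Serial.println("saved /capture.jpg");
  }
  esp_camera_fb_return(fb);
}

void loop() {}
```

The real firmware would add the web server for the phone stream plus the measurement/calibration web app on top of this.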

2 Likes

I’m interested

2 Likes

Thanks for pointing this out, Marc. The deadline was June 30th and David is fixing it.

2 Likes

I’m also interested.

I am launching a campaign on experiment.com for the Low Cost Science Tools fast funding program: https://experiment.com/projects/open-source-radio-imager-for-measuring-plant-health

Our team is working on an RF imaging system to characterize plant biomass in field settings. This will allow researchers and, eventually, crop advisors and growers to measure things like crop load/yield, plant phenology (such as leaf-out and the seasonal evolution of biomass), and the interior structure of trunks, branches, and fruit. The goal is to enable the production of systems, including RF transceivers, antenna arrays, and mounting hardware for tractors and other farm vehicles, for a cost of less than $1,000 per unit. Schematics, design files, firmware, and other elements will be made available under an open source license (i.e., one of the CERN open source hardware licenses and an OSI-approved software license).

We conducted lab measurements in late 2019 that yielded some promising results, but the project was shelved indefinitely as the world went through some things for the next few years. I would like to restart the project so we can pursue larger research funding opportunities in 2024. We propose to build one demonstration system and test it in vineyards and tree nut orchards in October 2023. We may also be able to conduct postharvest data collection, time and resources permitting.

At seven days, the timeline for this campaign is dreadfully short, but if we want to get into the field this year, we need to get moving as soon as possible. Any support that gets us toward this goal is enthusiastically welcome.

Please consider sharing this outside the GOSH Forum if you know any colleagues who might be interested in collaborating on design or research projects. Please feel free to contact me via the experiment.com platform or in the GOSH Forum with any questions. Thank you!

2 Likes

Hello,

First of all, I’d like to express deep gratitude to the Experiment team including @davidtlang, and the conveners of the Low-Cost Tools Challenge @shannond and @jcm80, for funding our camera trap project. I am also so thankful, and I’m sure @jpearce and @jmwright are as well, for support from the wider GOSH community including @julianstirling, @moritz.maxeiner, @Juliencolomb, @KerrianneHarrington, @hikinghack, and @rafaella.antoniou, plus other generous contributors. We look forward to continuing this work and keeping everyone updated on the Experiment.com page or in the ongoing forum thread.

Looking at what’s already been posted in this thread, I think there’s plenty that we are collectively learning about the Challenge and the Experiment platform. I took notes during the process of proposing our project for the Low-Cost Tools Challenge, and would like to share my observations here. Please note, in addition to gratitude, my comments here are shared in the spirit of mutual learning, being constructive, improving the user experience (UX) of Experiment, and probably helping other research funding initiatives as well. So here they are, in no particular order:


  1. I really like that a digital object identifier (DOI) is associated with projects. Great idea!
  2. It has been hard for me to search for and view projects associated with the different Challenges. I think this has been noted in previous comments in this thread.
  3. Many Challenges are listed under “Current Grants” or “Requests for Experiments” even though their stated deadlines have passed. What does this mean? Are they still accepting projects or not?
  4. The Experiment.com documentation states: “Experiment charges a 8% platform fee and our payment processor charges an additional fee of roughly 3-5%. The payment processor charges 2.9% + $0.30 on each successful charge.”
    • I get the gist of this statement. However, my university requires any funding to go through them instead of coming directly to me, and they are very uncomfortable with not having enough information to calculate exactly how much money will be coming in. I know the final charges might depend on how many individual transactions are made, but these complications make it difficult for university researchers (at least in my experience in the UK) to utilise Experiment. (See the rough fee illustration after this list.)
  5. I actually submitted a separate project to a different Challenge that was rejected. This is totally fine. However, I learned of the rejection from one of the Challenge conveners before my project went live! What’s the review process for Challenges? When do they start looking at projects??? My project was rejected even before my endorser posted their endorsement. Technically, it’s possible that the endorsement might affect the funding decision of the Challenge conveners. More clarity on exactly when your project will be evaluated would be very helpful.
  6. I carefully read the description of the Low-Cost Tools Challenge in June 2023. After reading it and the discussions in the thread, I got the understanding that the Challenge leads would decide whether to fund projects in full, after which the projects could optionally crowdfund beyond the campaign goal. However, for our project, and several others in the challenge, I see that they received support for a portion of their campaign goal, implying that they are expected and required to raise additional funds to meet their goals. This might look like a subtle difference, but would greatly affect prospective projects and their planning. I went into our project expecting to receive either 0% or 100% from the Challenge, did not plan to have to crowdfund, and was OK with the possibility that we might receive 0%. In the end, the Challenge (very generously, thank you!) gave us 80% of our goal, and I had to scramble in the final days to crowdfund the rest. As an academic, it is extremely difficult to organise sudden, unplanned tasks like this. While our project was ultimately successfully funded, it is very unclear to me from the Challenge description that crowdfunding is required instead of optional. I strongly suggest revising the wording of Challenge descriptions to at least state that they might partially fund projects instead of just 0% or 100%.
  7. Unclear what the “deadlines” listed in Challenge descriptions mean:
    • Is that when I need to hit “submit” or for the project to be live or when the campaign needs to end by?
    • What’s the difference between “Submission Deadline” and “Campaign Launch” on a Challenge page when they seem to be the same date???
  8. Project pre-launch reviews by Experiment do not allow us to directly respond to a reviewer’s comments/questions. This is even less functional than the already-dysfunctional peer review process in academic journals. Some of the reviewer comments take the form of “Can you do x?”. Without being able to respond to the reviewer, I feel I’m being forced to say yes to the question. If the reviewer requires me to do something, then say so and don’t present it as a question that I am not allowed to answer.
  9. This manifested as a real problem for our project, for which I used a real camera trap photo of a wild roe deer as our banner image. By definition, camera trap photos include text at the top and bottom showing metadata such as time, date, temperature, moon phase, etc. One of the Experiment reviewers required us to crop that metadata text out of the photo. If you visit our project page now, you see the deer photo without that information. This presentation is misleading: visitors would not know at all that it’s a camera trap picture, and the link to our project topic is completely severed. In other words, I feel I’m being forced into misleading potential backers because there’s no way to respond to reviewer comments. Also, we were required to crop our image this way during the second review round. Why did the first reviewer not ask us to do this?
  10. Speaking of which, our project went through multiple rounds of review by Experiment, but on the second round they asked me to do things that could totally have been done the first time. This really feels like a big waste of time for everyone involved, and pushed back the launch of our project beyond the original stated deadline. (also see point above regarding my confusion on what the stated deadlines actually mean)
  11. A few other technical problems:
    • The user interface (UI) is contradictory on whether I need to be “verified” before getting an endorsement.
    • The UI requires a date of birth for verification, which I almost certainly cannot obtain from a finance person at my university. How did others deal with this? Is this documented somewhere?
    • It’s unclear at the beginning of creating a project what’s required for it to officially launch, such as needing to secure an endorsement or verifying identity. These additional steps came as surprises, and threw off my timeframe. I know there is some documentation, but it’s still unclear to me exactly what will be asked of me and required at which steps of the process. A detailed diagram that lays out the exact steps from project inception to successful funding, with clear indicators of what’s needed at each stage, definitions of terms, and other key “stage gates” would be very helpful.
  12. By the way, I got a message from someone who claims they can provide services to allow my campaign to succeed. In my opinion there’s nothing inherently wrong with this, but just want to share it FYI.
  13. Lastly, several of my emails to Experiment went completely unanswered. I later learned that there was some delay due to illness, which is understandable. But even then several of my questions went unanswered, and it was unclear what would happen if, because of this, our project couldn’t be launched by the deadline. Would we no longer qualify to be considered for funding from a Challenge?
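
To illustrate point 4 above, here is a rough back-of-the-envelope sketch of why the final payout is hard to predict in advance. The 8% and 2.9% + $0.30 figures come from the documentation quoted above; the $5,000 total and the backer counts are made up for illustration only.

```cpp
// Rough illustration (made-up backer numbers, not Experiment's real data) of why the
// payout is hard to predict: the processor's $0.30 per pledge depends on backer count.
#include <cstdio>

int main() {
    const double raised = 5000.0;          // hypothetical campaign total, USD
    const double platform_fee = 0.08;      // Experiment's stated 8%
    const double processor_pct = 0.029;    // processor's 2.9% per charge
    const double processor_flat = 0.30;    // plus $0.30 per charge
    for (int backers : {10, 50, 200}) {
        double net = raised * (1.0 - platform_fee - processor_pct)
                     - processor_flat * backers;
        std::printf("%3d backers -> net payout $%.2f (%.1f%% of total)\n",
                    backers, net, 100.0 * net / raised);
    }
    return 0;
}
```

So the effective fee drifts with the number (and size) of individual pledges, which is exactly the number a university finance office cannot know ahead of time.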

OK, the above is a summary of my notes on the experience. They are shared with the intention of being helpful and archiving learnings, and with peace and love for such a worthwhile endeavor. :heart:

Many thanks again to @davidtlang and the Experiment team, and @shannond and @jcm80 for leading a very important Challenge! I’d love to hear their experiences as well, including what worked well or not!

4 Likes

This is very helpful feedback. Thank you @hpy.

As mentioned before, this funding program was a bit of a hack on the existing Experiment platform, so we’re still learning where all the rough patches are. This feedback will help us in the ongoing redesign.

I’ll address some of the points directly:

  1. Cool! I agree that this is an underappreciated feature. I would love to see Experiment projects cited more often and, in general, bring awareness to the idea of open grant proposals: Open Grant Proposals · Series 1.2: Business of Knowing

  2. Agree. That should be fixed in the new design.

  3. I think this is already fixed/updated.

  4. Interesting. I haven’t heard of this specific problem yet.

  5. Point taken. We will try to add more clarity and guidance on this. I encourage all the science leads to reach out to folks early. They usually start looking once a project has been submitted for review, but they can also see when folks have started a draft.

  6. This was clearly a mistake on my/our part in not explaining the process in enough detail. Can I ask if this description improves the clarity? Experiment

What else should we add?

  7. That’s a design remnant from a previous grant design. Agreed, this needs to be fixed in an updated design.

  8. Good feedback. I’ll bring this up with the team. Note: Experiment review is not an attempt to replace peer review or serve that same purpose.

  9. Consistency among reviewers is actually an insanely hard problem that we’re working to improve.

  10. See above.

  11. Noted and sending to the team.

  12. Ugh. I try to block these folks when I see it happening. You can report it as spam if it feels like that.

  13. THIS is a problem. We heard this feedback from another project creator too, and we think we found the problem with our customer service email system: emails were getting routed the wrong way. We apologize if any were missed. Hopefully this is fixed now.

Thanks again for the feedback.

2 Likes

I would also like to start by thanking everyone for sponsoring, facilitating, and making the fast funding of low-cost science tools a success. I particularly appreciate the flexibility at the end on Experiment’s side to make my projects fundable. There are two main points @hpy made that, as a community of scientists, we should consider in the future, because they divert resources from open hardware development.

  1. This results in a de facto 10% tax for running funding through the Experiment platform, on top of whatever overhead is forced out of scientists at their own institutions, whether as part of F&A, mandatory training, fees, etc. If doing so provides more value than 10% of the initial funding minus the real administration costs, then it is economically rational. If it does not, then future GOSH funders may want to consider the flat-rate cost of hiring someone to perform the administrative functions. This should be calculable at the end of this round and I am quite curious to see the result.

5, 8, 10 & 13. These four points are all part of the same issue: the pre-launch reviews were numerous, always delayed by long periods, and for the most part irrelevant to open hardware development, which is what this CFP was about. Many scientists who posted shared their frustration that they had to invest inordinate amounts of time cramming their hardware development proposals into the Experiment mold of an ‘experiment’.

This administrative preprocessing is now common among funding agencies. For example, although I work in Canada, I can point out that in the US the NSF changes the format and the bio requirements frequently, forcing everyone to revamp even their basic templates every time they submit a new proposal. In addition, the NSF forces scientists to spend absurd amounts of time pushing information like current and pending grants into their format, the same with COIs and other non-scientific mandatory parts of a proposal. This means more productive and more collaborative scientists (like most of GOSH) are punished with more administrative work per proposal than lone wolves. This also results in a non-scientific screening of grant proposals, which in some countries can be political (and potentially really dangerous). Active scientists give up control of the peer-review process as administrators can effectively end proposals on failure to comply (at many universities this is even dictated by non-scientific research staff internally before scientists are allowed to click the submit button).

The last time I checked, the award rate for single-PI NSF proposals was ~7%, which means 93% of scientists wasted their time writing proposals that were not funded, and what appears to be a large fraction of them were cut for non-scientific reasons (and that does not include the proposals that were blocked at the university level). The end result is that any scientist who wants to be successful is forced into investing dozens of hours cutting and pasting into fields for each proposal…or hiring someone else to do it. This is a sub-optimal use of resources no matter how the scientists comply.

Experiment has the potential to upend some of the serious problems with scientific funding - but not if they follow the bad habits that have developed in the funding agencies. In Experiment’s case I would recommend making two categories: 1) “Gold star compliant” (or something similar), which follows your normal process, and 2) “Regular”, where scientists can submit projects in whatever way they see fit after only a single round of recommendations, not mandates. This would cut down on Experiment staff’s investment, which hopefully would result in faster response times and maybe even lower overhead rates, as well as providing more flexibility and efficiency to the whole process.

2 Likes

I take issue with this comment. On the Experiment & Experiment Foundation side, we have worked really hard to build a tool and structure a grant program that can fund folks and projects that the other systems sometimes miss, including doing all the legal work required to support those who are working outside Universities. For many folks, this is the first scientific grant they’ve ever gotten. We’re proud of our work. To casually call that a “tax” or imply that it’s not “real” administration is frustrating.

If you think you can run a grant program for less than that to prove me wrong, by all means, please do.

I stand by the Experiment team. I think most of the projects were improved by the feedback. Could we improve the process? Of course. But the demeaning language is unhelpful.

2 Likes

As we are in feedback mode I’ll jump in too.

Payment fees

Firstly, the payment fees. Personally I was pleasantly surprised that they came out a little lower than we first budgeted for. As a freelancer I received ~90% of the money I asked for. If I had used a fiscal sponsor for a grant I would be happy to get 90%. If I were at a university I would expect them to take up to 50%.

Something I have learned since leaving the university is that I am more efficient without having to buy through a slow, bureaucratic procurement department, and more efficient without having to fight IT to do something basic on my own computer. The 50% overhead the university used to take from grants was for the corner of a dark office in an asbestos-ridden building, a tiny lab space, considerable bureaucracy and other responsibilities, and a fancy name. Bad value for money for my kind of work. Experiment’s 10% for this project was for the web platform, the payment system, raising awareness, and helping me get the investment in the project. This seems fair.

@jpearce perhaps we should point our pitchforks towards the universities rather than @davidtlang?

Pen’s point #4

University bureaucracy around payments is a huge hassle. I get that they have quite strict accounting and normally fairly inflexible systems for recording it. I am actually impressed that Pen has managed to get the University of Bristol to accept the money at all. My experience at Bath was that proposing to bring in money without their expected high overhead rate was really difficult, even if the fees and rates are fixed.

From my experience at Bath I would never have applied for my salary via Experiment, as I assume that I would spend more time negotiating with the university than doing science. It would be great to see this problem solved, as more options to extend precarious post-doc contracts would be very helpful. Probably, the university would jump at the option of 13% going to Experiment (the worst-case estimate) to have certainty on their numbers. Doing something non-standard with universities is fine if you are bringing in a few million, but for a few thousand they are unhelpful. Good luck with this.

Reviews

I would say peer review is, as always, the thing that scientists hold up as the best part of science, yet in practice it is actually the worst part. In academia peer review is a secret judgement by others who often have a vested interest in the result. It rarely makes work better; loads of bad results get into the literature anyway, and it just slows things down.

I can see why experiment.com reviews projects internally. And I know, having talked to @davidtlang, that the process is intended as a conversation. However, as came up in the discussion above, we didn’t feel able to have that conversation through the platform. It would be great to be able to query, explain, and respond to reviews within the platform.

UI wishlist

UI is hard. Looking at the UI I have created for projects one might question whether those in glass houses should really be throwing stones. However, here are a few things that I think would help:

  • Spellchecking in lab notes - Lab notes, and I think some other places on the website, have a custom editor that stops the browser’s built-in spell checker from working. As a person who can’t spell very well, I have to spend a lot of time copying the notes elsewhere to check my spelling.
  • Better link between challenge pages and projects - This has already been discussed
  • Improved search - Project search always seems to max out at 8 items.
3 Likes

David - Apologies - I did not mean in any way to demean you or your team by calling the admin fee a tax. Where I come from a tax is not necessarily a bad thing as long as it is used to provide value.

To be more clear: ignoring overhead, science scales with funds. $100k roughly pays for a master’s student. $200k pays for two of them. Two master’s students will normally do twice the work of one… @julianstirling is right that the normal university 50% overhead means 50% less. You need $200k at 50% overhead to get one master’s student graduated and fully funded.

That said, 10% in admin fees means 10% less science. It seems these fees should be flat fees, not a percentage. @davidtlang, I fail to understand how administrative tasks scale with the size of the grant, and maybe this is something you can explain.

Here is an example with order-of-magnitude numbers to keep our math easy: If the grant is $10k and the admin work is 10 hours @ $100/hr, the admin cost is $1k, which is 10%. Then if GOSH were funding 100 such grants by investing $1m, there would be 100x the admin work – and the overall overhead would be $100k, enough for an FTE. That part scales linearly and makes sense to me, but it could be accomplished by charging a flat fee of $1k per grant. 10 hours @ $100/hr seems expensive, but I don’t know the real costs – Experiment should know those numbers exactly, including servers, etc., and could offer a flat fee.

Here is the part I don’t understand: what if GOSH decided to take the same $1m in funding and award it to one team instead of 100? In that case, the admin effort would still be 10 hours @ $100/hr = $1k, not $100k. Losing $99k to the 10% admin fee in that case approximately loses a master’s student to science. If a funder can instead hire someone for 10 hours @ $100/hr = $1k to do the admin work, they would be able to fund one more master’s student. Is that correct, or did I miss something?
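
For concreteness, here is the same toy comparison as a few lines of code. The numbers are the purely illustrative ones from the paragraphs above, not Experiment’s actual costs:

```cpp
// Toy comparison of a 10% percentage fee vs. a flat $1k-per-grant admin fee,
// using the purely illustrative numbers from the paragraphs above.
#include <cstdio>

int main() {
    const double pool = 1'000'000.0;        // total funding pool, USD
    const double flat_per_grant = 1000.0;   // 10 hours @ $100/hr
    const double pct_fee = 0.10;            // 10% admin fee

    for (int n_grants : {100, 1}) {
        double pct_overhead  = pct_fee * pool;            // $100k regardless of grant count
        double flat_overhead = flat_per_grant * n_grants; // scales with the number of grants
        std::printf("%3d grants: percentage fee costs $%.0f, flat fee costs $%.0f\n",
                    n_grants, pct_overhead, flat_overhead);
    }
    return 0;
}
```

With 100 grants the two come out the same; with one large grant the percentage fee costs $100k while the flat fee would cost $1k, which is the gap I am asking about.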

1 Like

The thought exercise works in theory (and in a specific university setting), but not in practice. Here are some places where it breaks down:

Mostly, this falls into the trap of thinking PIs with labs and grad student labor are the only source of good ideas or the only ones with the capacity to contribute.

Only a few projects in this grant program are paying grad students to do the work. Most are doing the work directly. We can fund grad students directly to work on their own projects, and in many cases we are. I much prefer to fund the people doing the actual work.

Grant writing is part of the scientific process–it’s defining the question. Honing your question in a way that you can communicate it to others is a valuable exercise, even though many scientists lament it. Of course, the tedium of many granting bodies makes this unreasonable. Our goal is to help project creators improve their clarity at this stage. Our reporting requirements are completely reasonable and doable and help to communicate the science in addition to fulfilling the non-profit legal requirements.

Also, if someone hires 9 people ($100k per master’s student) then they are basically a full-time manager (or need to hire a manager). So the admin is going to be much closer to $100k in either example.

I would much rather have 100 people working directly on projects they are passionate about than 1 PI with 9 employees working for them. And I would think the first example fits the GOSH ethos much more than the latter. It’s in the Manifesto: “1,000 heads are better than 1.”

Not always. I think this group epitomizes the idea that science could scale with reduced costs. What should science cost? - by David Lang

1 Like

Hello everyone!

I adhere fully to the previous feedback, as I went through a very similar experience. Thanks for taking the time to write it in the way you did, and thanks David for answering.

I am also very thankful to everyone working to make this happen.

+1

Here is my overall feedback:

  • The whole application process was both uncomfortable and confusing from start to finish, beginning with the “low-cost tool development is not an experiment” debate and ending at #6. I’ve applied to only a few grants, but never experienced this.
  • The “How does this work?” button on project pages does not seem to work. The UI looks really nice, but for these purposes I’d rather use the ugly, in-house, static HTML sites from my university, which work perfectly well. In my short experience, UI designers care about pretty, charge you for it, and leave nasty bugs under the rug.

The first error appears on its own on page load, the second one after clicking the buggy button.

All in all, I think that GOSH’s CDP was a much better “grant experience” for me. The application process was clearly stated, it used this forum as a platform, and it charged 0% overhead. There were also complaints and a lot of room for improvement, but those popped up here as well.

I think Experiment can learn from the CDP’s simplicity.

I’m sorry to learn that universities in other countries keep 50% of the grants. As a PhD student, I only had to pay a 5% fee to use the faculty’s fiscal sponsor (whose work does scale with the size of the grants). Living in Argentina had to have some advantage. (?)

Just a note on that: science will scale with reduced costs in tools (which is the focus of the challenge) and not in people (which was being discussed).

I’m inclined to agree with Joshua. I did not expect a >10% fee from Experiment, but would have expected a fixed rate instead. This is because I can not see why Experiment’s work would scale with the project’s budget, while my work does.

Those fees are, however, irrelevant to the application; I did not organise the challenge nor secure funding for it (thanks everyone for that :slight_smile:). I simply increased the project’s budget to account for the fee. It’s not up to me if less work gets funded because of it; I can only try to estimate my costs accurately, and omit charging for my time (which is entirely a donation to OScH development).

I hope the rest of my experience with Experiment will be as good as with the CDP.

Best,
Nico

2 Likes

Hi Marc,

I would be interested to find out more details on who does what, and especially how this would differ from the OpenFlexure one.

Let me know.

Adrian

1 Like

Thanks everyone for taking the time and sharing all this great, very detailed feedback about the experiment.com experience!

2 Likes

Hello,

This is my first experience with the Experiment platform, and first I would like to thank David for making this possible, and to thank the reviewers and the rest of the people who participated in this endeavor for their diligent work.

To clarify the context, I collected this feedback based on two projects I have been involved in and several other projects, all part of the Low Cost campaign. My impressions were varied and I hope this will help improve the platform. I’ll try to make it exhaustive while concentrating on things that could improve. The overall impression was positive, so don’t take the suggested improvement areas as a negative. I submitted by the original deadline; however, some of the other projects I base my feedback on were submitted during the extension.

  1. The experience with the UI and flow of the application was excellent.

  2. I liked the fact that the application is pretty slim, restricted to several hundred characters. I have seen other processes that required hundreds of pages.

  3. While I still like the idea of a minimalist application, based on this experience I think the character limit introduced some major issues.

  4. In my case, one of the projects I submitted was a complex project designed to offer a full solution. It has several components that could have been standalone projects. The problem is that the character limits made it pretty much impossible for the reviewers to understand the project. Adding an explanation meant taking another one out, and pleasing one reviewer seemed to do the opposite for the other. The result was an extensive back and forth that lasted a long time. This delay seems to have cost me the funding. I am very disappointed with that, because it should not be impossible for complex projects to go through Experiment. Suggestion: Leave the original short fields and introduce a detail section for all of them so the reviewers and donors can really see the full information behind the short abstracts and understand complex topics.
    This is probably the most important suggestion and I understand it would take work to implement should you decide to go with it.

  5. Many replies from Experiment took a long time (even weeks) to arrive, and some of the emails were ignored. I think the reviewers are probably overwhelmed. Suggestion: Perform a fast 30-second read of each email followed by a quick, personalized one-line reply when it is received, and indicate when a full response should be expected. Managing expectations is a good thing.

  6. Awarding of funds for accepted projects is not done based on the date of application, and some projects submitted after the original deadline were awarded before projects submitted in time. Suggestion: Awarding of funds should be done based on the date of submission, especially in cases where the original deadline was extended.

  7. All the rules should be published from the beginning, not introduced mid-stream.

  8. There were cases where one reviewer marked a section as great while the other reviewer marked the same section as totally unacceptable. Of course reviewer agreement is never perfect, yet total disagreement on too many sections should not occur. Suggestion: Reviewers should consult the experts before dismissing sections of a project.

  9. Reviewers did not consult the experts in areas they were not familiar with before giving initial feedback. Suggestion: Reviewers should consult the subject experts in the first phase of review. That will avoid confusion and speed up the process. Consulting experts only at the end fosters misunderstanding and introduces delays.

  10. There seems to be a bug in the character count, where the same submission will show slightly different numbers of characters when repeated revisions are performed. Suggestion: Somebody should try to reproduce that bug on the first page in different browsers.

  11. The email exchange is pretty impersonal, and I can understand how in some cases some replies could be misconstrued as condescending. I saw some feedback from GOSH members mentioning that. Suggestion: Pay special attention to that aspect.

Campaigns

When funding comes through Experiment, the procedure should require adequate screening. That is very important, as Experiment has to make sure the funds go to worthy causes.

The main work is to try to screen out requests that would use the funds for alternative purposes.

a) The process should ask if the group has had previous grants and check whether the grant work was actually performed. That could be done by asking for the contact info of the granting organization and the repository of the work product.

b) The process should ask if the group has done any prior work in the area of research.

b’) The process should ask if the project will be done under an University umbrella.

c) The process should screen out requests that do not match the campaign. For instance, requests for education funds should not be made through Low Cost Tools. I have seen projects that seem to do just that. Funds for education should be sought through education campaigns.

d) The process should screen out requests that employ bait-and-switch techniques. All the members who participate in the project should be named. I have seen Experiment campaign projects that seem to do just that. Simple checks should be able to catch those.

e) The process should screen out, or try to avoid, requests for salaries (or other monetary compensation) for unnamed persons. I have seen Experiment campaign projects that seem to do just that.

f) Experiment should be able to provide transparent and accessible feedback on participants in funded projects to other organizations that grant funds.

g) The process should screen out requests for general ideas, especially when those ideas already exist and no prior work with concrete deliverables has been done by the requestor. I have seen Experiment campaign projects that seem to do just that. Projects that cannot itemize concrete deliverables will probably not deliver anything.

h) The process should screen out requests for funds for the general costs of organizations unless the Campaign is designed for that. I have seen Experiment campaign projects that seem to do just that.

i) The process should require careful consideration before granting funds where a lot of the funds are taken by an umbrella organization. Experiment should suggest making those projects independent. I have seen discussions of Experiment campaign projects where a university took up to 50% of the funds.

j) The process should require careful consideration before granting funds where there is no clear or concrete research or scientific outcome. I have seen Experiment campaign projects that seem to do just that.

We are all aware of the overhead of university research. A SpaceX project costs about one fifth of a comparable NASA project, so one can estimate the overhead of bureaucracy.
Many crucial scientific discoveries were not made under an academic umbrella. While the state allocates ample funding for academic research, there is none for independent research.
Based on discussions in this thread and other considerations, I think many people mistakenly regard Experiment as an extension of academia. I am happy to hear David clarifying that. I think it’s important to support genuine low-overhead scientific research. Experiment is one of the few organizations that can fund that.

I would like to end by thanking everyone at Experiment for enabling research and ideas to become reality. My thanks go especially to David, who created this hidden gem and made it a reality. I cannot even imagine the difficulties he had to overcome to make the financial part possible, and the work he puts into securing funding for the campaigns. Outstanding work!

Thank you,
Adrian

2 Likes

Update: the project is now live! Open Source Radio Imager for Measuring Plant Health | Experiment

3 Likes