$50k fast funding program for low-cost science tools

Fantastic! Has a preferred communication channel been suggested? I don’t seem to be able to republish the draft on the experiment.com website.

PS: Sorry to be asking so many questions after missing the meeting. I’d be glad to watch the recording when it is available.

Perhaps make a new one? Or email Nicole or David? You could also ping @davidtlang on here.

1 Like

Hi @hikinghack @hugobiwan
In my opinion, the project is really cool.
For scientific applications, I can imagine all the uses that need touch-sensitive documents. I’m thinking of:

  • automated generation of maps from OpenStreetMap
  • touch-sensitive transcriptions of paintings
    And all the applications in pedagogy for visually impaired students.

Best regards

2 Likes

Hi David @davidtlang I was wondering if there are new guidelines or advice for updating the applications.

I wrote to Nicole several times but have not received a response.

Best,
Nico

1 Like

I know Denny and Nicole are busy working through reviews and emails today! A lot came in over the weekend.

4 Likes

Do you know if there is a link where we can browse and share all the different things people are sharing in this category?

I can go here

which explains the challenge, but I want to see all the different cool projects folks submitted!

1 Like

Eventually!

The challenge is in “request for proposals” mode, but we’ll switch it over to a browsable view at some point. Then it will look more like this:

-David

2 Likes

Another option is to search “Tools” in the search bar.

2 Likes

Hi everyone,

Two quick updates:

  • The funding size of the challenge has increased to $150,000
  • The timeline has been extended to June 30, 2023. If you still have a draft project or a new idea that you’d like to submit, you have a bit of extra time.

Shannon

3 Likes

Awesome!

1 Like

I have an idea for a proposal. After making three successive rounds of modifications requested by the reviewers on Experiment, my project was finally validated today. I hope it’s not too late.

2 Likes

Good luck @TOKO!

I see a few GOSH projects are about to run out of time in the next week or two, sitting at around 80% funded. I think quite a few of us managed to pull in a few backers, but not enough to take us over the line. It is hard to see exactly how many projects have applied, as the challenge doesn’t list the projects.

...

It is confusing that it tells you to click on the challenge to see more projects, but none are listed. I tried searching “low cost tools” as @davidtlang showed us, however it only ever finds 8 projects (though it finds a different 8 if I sort differently :woozy_face:).

Is the plan to extend the deadline and allow a new round of submissions if the $150k isn’t spent?

3 Likes

Do I see correctly that this opportunity is still open?
The website now says until 15 August.

We are experimenting (@gaudi @dusjagr) with a new concept for a low-cost DIY microscope using an ESP32 and camera module, a hacked lens, and a simple stage setup made from PCB materials that can be assembled into a solid stage structure.

Anybody interested in joining the development?

We think the “old” approach with webcams attached to a laptop is a bit outdated.

Using the ESP32 & cam is easier globally, where people usually only have a smartphone: we can look at the stream in the phone’s web browser and record higher-resolution images to the onboard SD card.
Adding some simple tools for measurements and calibration should be easy with a webapp.
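
As a rough illustration of what such a measurement/calibration tool could do, here is a minimal Python sketch of just the math (the real version would live in the webapp, and the function names and numbers are made up): image a stage micrometer or any object of known size once, store the µm-per-pixel scale, then convert later pixel measurements.

```python
# Sketch of the calibration math only; the actual tool would run in the
# browser. Function names and numbers are illustrative, not a real API.

def microns_per_pixel(known_length_um, px_a, px_b):
    """Scale factor derived from a reference feature of known physical length."""
    dx, dy = px_b[0] - px_a[0], px_b[1] - px_a[1]
    return known_length_um / ((dx * dx + dy * dy) ** 0.5)

def measure_um(scale_um_per_px, px_a, px_b):
    """Convert a pixel-space distance into microns using the stored scale."""
    dx, dy = px_b[0] - px_a[0], px_b[1] - px_a[1]
    return scale_um_per_px * ((dx * dx + dy * dy) ** 0.5)

# Example: a 100 µm micrometer division spans 250 px in the camera image,
# giving 0.4 µm/px; a feature spanning 500 px then measures 200 µm.
scale = microns_per_pixel(100.0, (120, 300), (370, 300))
print(round(measure_um(scale, (100, 100), (600, 100)), 1))  # 200.0
```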

2 Likes

I’m interested

2 Likes

Thanks for pointing this out, Marc. The deadline was June 30th and David is fixing it.

2 Likes

I’m also interested.

I am launching a campaign on experiment.com for the Low Cost Science Tools fast funding program: https://experiment.com/projects/open-source-radio-imager-for-measuring-plant-health

Our team is working on an RF imaging system to characterize plant biomass in field settings. This will give researchers and, eventually, crop advisors and growers the ability to measure things like crop load/yield, plant phenology (such as leaf-out and seasonal evolution of biomass), and the interior structure of trunks, branches, and fruit. The goal is to enable the production of systems, including RF transceivers, antenna arrays, and mounting hardware for tractors and other farm vehicles, at a cost of less than $1,000 per unit. Schematics, design files, firmware, and other elements will be made available under open source licenses (i.e., one of the CERN open source hardware licenses and OSI-approved software licenses).

We conducted lab measurements in late 2019 that yielded some promising results, but the project was shelved indefinitely as the world went through some things for the next few years. I would like to restart the project so we can pursue larger research funding opportunities in 2024. We propose to build one demonstration system and test it in vineyards and tree nut orchards in October 2023. We may also be able to conduct postharvest data collection, time and resources permitting.

At seven days, the timeline for this campaign is dreadfully short, but if we want to get into the field this year, we need to get moving as soon as possible. Any support that gets us toward this goal is enthusiastically welcome.

Please consider sharing this outside the GOSH Forum if you know any colleagues who might be interested in collaborating on design or research projects. Please feel free to contact me via the experiment.com platform or in the GOSH Forum with any questions. Thank you!

2 Likes

Hello,

First of all, I’d like to express deep gratitude to the Experiment team including @davidtlang, and the conveners of the Low-Cost Tools Challenge @shannond and @jcm80, for funding our camera trap project. I am also so thankful, and I’m sure @jpearce and @jmwright are as well, for support from the wider GOSH community including @julianstirling, @moritz.maxeiner, @Juliencolomb, @KerrianneHarrington, @hikinghack, and @rafaella.antoniou, plus other generous contributors. We look forward to continuing this work and keeping everyone updated on the Experiment.com page or in the ongoing forum thread.

Looking at what’s already been posted in this thread, I think there’s plenty that we are collectively learning about the Challenge and the Experiment platform. I took notes during the process of proposing our project for the Low-Cost Tools Challenge, and would like to share my observations here. Please note, in addition to gratitude, my comments here are shared in the spirit of mutual learning, being constructive, improving the user experience (UX) of Experiment, and probably helping other research funding initiatives as well. So here they are, in no particular order:


  1. I really like that a digital object identifier (DOI) is associated with projects. Great idea!
  2. It has been hard for me to search for and view projects associated with the different Challenges. I think this has been noted in previous comments in this thread.
  3. Many Challenges are listed under “Current Grants” or “Requests for Experiments” even though their stated deadlines have passed. What does this mean? Are they still accepting projects or not?
  4. The Experiment.com documentation states: “Experiment charges a 8% platform fee and our payment processor charges an additional fee of roughly 3-5%. The payment processor charges 2.9% + $0.30 on each successful charge.”
    • I get the gist of this statement. However, my university requires any funding to go through them instead of coming directly to me, and they are very uncomfortable with not having enough information to calculate exactly how much money is coming in. I know the final charges might depend on how many individual transactions are made, but these complications make it difficult for university researchers (at least from my experience in the UK) to utilise Experiment. A rough worked example of this arithmetic follows at the end of this list.
  5. I actually submitted a separate project to a different Challenge that was rejected. This is totally fine. However, I learned of the rejection from one of the Challenge conveners before my project went live! What’s the review process of Challenges? When do they start looking at projects??? My project was rejected even before my endorser posted their endorsement. Technically, it’s possible that the endorsement might affect the funding decision of the Challenge conveners. More clarity on exactly when a project will be evaluated would be crucial.
  6. I carefully read the description of the Low-Cost Tools Challenge in June 2023. After reading it and the discussions in the thread, I got the understanding that the Challenge leads would decide whether to fund projects in full, after which the projects could optionally crowdfund beyond the campaign goal. However, for our project, and several others in the challenge, I see that they received support for a portion of their campaign goal, implying that they are expected and required to raise additional funds to meet their goals. This might look like a subtle difference, but would greatly affect prospective projects and their planning. I went into our project expecting to receive either 0% or 100% from the Challenge, did not plan to have to crowdfund, and was OK with the possibility that we might receive 0%. In the end, the Challenge (very generously, thank you!) gave us 80% of our goal, and I had to scramble in the final days to crowdfund the rest. As an academic, it is extremely difficult to organise sudden, unplanned tasks like this. While our project was ultimately successfully funded, it is very unclear to me from the Challenge description that crowdfunding is required instead of optional. I strongly suggest revising the wording of Challenge descriptions to at least state that they might partially fund projects instead of just 0% or 100%.
  7. Unclear what the “deadlines” listed in Challenge descriptions mean:
    • Is that when I need to hit “submit”, when the project needs to be live, or when the campaign needs to end by?
    • What’s the difference between “Submission Deadline” and “Campaign Launch” on a Challenge page when they seem to be the same date???
  8. Project pre-launch reviews by Experiment do not allow us to directly respond to a reviewer’s comments/questions. This is even less functional than the already-dysfunctional peer review process in academic journals. Some of the reviewer comments take the form of “Can you do x?”. Without being able to respond to the reviewer, I feel I’m being forced to say yes to the question. If the reviewer requires me to do something, then say so and don’t present it as a question that I am not allowed to answer.
  9. This manifested as a real problem for our project, where I used a real camera trap photo of a wild roe deer as our banner image. By definition, camera trap photos include text on their top and bottom showing metadata such as time, date, temperature, moon phase, etc. One of the Experiment reviewers required us to crop that metadata text out of the photo. If you visit our project page now, you see the deer photo without that information. This presentation is misleading: visitors would not know at all that it’s a camera trap picture, and the link to our project topic is completely severed. In other words, I feel I’m being forced into misleading potential backers because there’s no way to respond to reviewer comments. Also, we were required to crop our image this way during the second review round. Why did the first reviewer not ask us to do this?
  10. Speaking of which, our project went through multiple rounds of review by Experiment, but on the second round they asked me to do things that could totally have been done the first time. This really feels like a big waste of time for everyone involved, and pushed back the launch of our project beyond the original stated deadline. (also see point above regarding my confusion on what the stated deadlines actually mean)
  11. A few other technical problems:
    • The user interface (UI) is contradictory on whether I need to be “verified” before getting an endorsement.
    • The UI requires a date of birth for verification, which I almost certainly cannot obtain from a finance person at my university. How did others deal with this? Is this documented somewhere?
    • It’s unclear at the beginning of creating a project what’s required for it to officially launch, such as needing to secure an endorsement or verify identity. These additional steps came as surprises and threw off my timeframe. I know there is some documentation, but it’s still unclear to me exactly what will be asked of me and required at which steps of the process. A detailed diagram that lays out the exact steps from project inception to successful funding, with clear indicators of what’s needed at each stage, definitions of terms, and other key “stage gates”, would be very helpful.
  12. By the way, I got a message from someone who claims they can provide services to make my campaign succeed. In my opinion there’s nothing inherently wrong with this, but I just want to share it FYI.
  13. Lastly, several of my emails to Experiment went completely unanswered. I later learned that there was some delay due to illness, which is understandable. But even then several of my questions went unanswered, and it was unclear what would happen if, because of this, our project couldn’t be launched by the deadline. Would we no longer qualify to be considered for funding from a Challenge?
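
To make point 4 concrete, here is the kind of arithmetic I would have liked to hand to my university’s finance office up front. It is only a rough sketch based on the 8% + 2.9% + $0.30 figures quoted from the documentation; the function and the numbers are illustrative, not an official fee calculator.

```python
# Rough sketch (not an official calculator) of why the net amount received
# depends on how many individual pledges make up the total, using the
# 8% platform fee and 2.9% + $0.30 per charge quoted in the documentation.

def estimated_net(total_raised, num_pledges,
                  platform_fee=0.08, processor_pct=0.029, processor_flat=0.30):
    fees = (total_raised * platform_fee
            + total_raised * processor_pct
            + num_pledges * processor_flat)
    return total_raised - fees

# The same $5,000 total nets a different amount depending on pledge count:
print(round(estimated_net(5000, 10), 2))   # 10 large pledges  -> 4452.0
print(round(estimated_net(5000, 200), 2))  # 200 small pledges -> 4395.0
```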

OK, the above is a summary of my notes on the experience. They are shared with the intention of being helpful and archiving learnings, and with peace and love for such a worthwhile endeavor. :heart:

Many thanks again to @davidtlang and the Experiment team, and @shannond and @jcm80 for leading a very important Challenge! I’d love to hear their experiences as well, including what worked well or not!

4 Likes

This is very helpful feedback. Thank you @hpy.

As mentioned before, this funding program was a bit of a hack on the existing Experiment platform, so we’re still learning where all the rough patches are. This feedback will help us in the ongoing redesign.

I’ll address some of the points directly:

  1. Cool! I agree that this is an underappreciated feature. I would love to see Experiment projects cited more often and, in general, bring awareness to the idea of open grant proposals: Open Grant Proposals · Series 1.2: Business of Knowing

  2. Agree. That should be fixed in the new design.

  3. I think this is already fixed/updated.

  4. Interesting. I haven’t heard of this specific problem yet.

  5. Point taken. We will try to add more clarity and guidance on this. I encourage all the science leads to reach out to folks early. They usually start looking once a project has been submitted for review, but they can also see when folks have started a draft.

  6. This was clearly a mistake on my/our part in not explaining the process in enough detail. Can I ask if this description improves the clarity? Experiment

What else should we add?

  7. That’s a remnant from a previous grant design. Agree, it needs to be fixed in an updated design.

  8. Good feedback. I’ll bring this up with the team. Note: Experiment review is not an attempt to replace peer review or serve that same purpose.

  9. Consistency among reviewers is actually an insanely hard problem that we’re working to improve.

  10. See above.

  11. Noted and sending to the team.

  12. Ugh. I try to block these folks when I see it happening. You can report it as spam if it feels like spam.

  13. THIS is a problem. We heard this feedback from another project creator too, and we think we found the problem in our customer service email system: emails were getting routed the wrong way. We apologize for any that were missed. Hopefully this is fixed now.

Thanks again for the feedback.

2 Likes

I would also like to start by thanking everyone for sponsoring, facilitating and making the fast funding of low-cost science tools a success. I particularly appreciate the flexibility shown at the end on Experiment’s side to make my projects fundable. There are two main points @hpy made that we, as a community of scientists, should consider in the future, because they divert resources from open hardware development.

  1. This results in a de facto 10% tax on running funding through the Experiment platform, on top of whatever overhead is forced out of scientists at their own institutions, whether as part of F&A, mandatory training, fees, etc. If doing so saves more in real administration costs than the roughly 10% of the initial funding it consumes, then it is economically rational. If it does not, then future GOSH funders may want to consider the flat-rate cost of hiring someone to perform the administrative functions. This is something that should be calculable at the end of this round, and I am quite curious to see the result; a back-of-the-envelope version of the comparison is sketched below.
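
Here is the kind of back-of-the-envelope comparison I have in mind; it is only a sketch, and the hours, rates, and effective fee are placeholders rather than data from this round.

```python
# Placeholder comparison only; the real hours, rates, and effective fees
# would have to come from the actual accounting at the end of this round.

def platform_take(total_funding, effective_fee=0.10):
    """Money consumed by platform + payment-processor fees (~10% assumed)."""
    return total_funding * effective_fee

def hired_admin_cost(hours, hourly_rate):
    """Flat-rate cost of paying someone to do the same administration."""
    return hours * hourly_rate

total = 150_000
print(platform_take(total))         # 15000.0
print(hired_admin_cost(300, 35.0))  # 10500.0 (e.g. 300 h at $35/h, made up)
# Whichever number is lower is the economically rational option once the
# real administration hours and fees from this round are plugged in.
```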

5, 8, 10 & 13. These four points are all part of the same issue: the pre-launch reviews were numerous, always delayed by long periods, and for the most part irrelevant to open hardware development, which is what this CFP was about. Many scientists who posted shared their frustration that they had to invest inordinate amounts of time cramming their hardware development proposals into the Experiment mold of an ‘experiment’.

This administrative preprocessing is now common among funding agencies. For example, although I work in Canada, I can point out that in the US the NSF changes the format and bio requirements frequently, forcing everyone to revamp even their basic templates every time they submit a new proposal. In addition, the NSF forces scientists to spend absurd amounts of time pushing information like current and pending grants into their format, the same with COIs and other non-scientific mandatory parts of a proposal. This means more productive and more collaborative scientists (like most of GOSH) are punished with more administrative work per proposal than lone wolves. It also results in a non-scientific screening of grant proposals, which in some countries can be political (and potentially really dangerous). Active scientists give up control of the peer-review process, as administrators can effectively end proposals for failure to comply (at many universities this is even dictated by non-scientific research staff internally, before scientists are allowed to click the submit button).

The last time I checked, the award rate for single-PI NSF proposals was ~7%, which means 93% of scientists wasted their time writing proposals that were not funded, and what appears to be a large fraction of them were cut for non-scientific reasons (and that does not include the proposals that were blocked at the university level). The end result is that any scientist who wants to be successful is forced into investing dozens of hours cutting and pasting into fields for each proposal…or hiring someone else to do it. This is a sub-optimal use of resources no matter how the scientists comply.

Experiment has the potential to upend some of the serious problems with scientific funding - but not if it follows the bad habits that have developed in the funding agencies. In Experiment’s case I would recommend creating two categories: 1) “Gold star compliant” (or something similar), which follows your normal process, and 2) “Regular”, where scientists can submit projects in whatever way they see fit after only a single round of recommendations, not mandates. This would cut down on Experiment staff’s investment, which hopefully would result in faster response times and maybe even lower overhead rates, as well as providing more flexibility and efficiency to the whole process.

2 Likes