Do you know is there a link where we can browse and share all the different things people are sharing in this category?
I can go here
which explains the challenge, but I'm wanting to see all the different cool projects folks submitted!
Eventually!
The challenge is in "request for proposals" mode, but we'll switch it to view at some point. Then it will look more like this:
-David
Another option is to search "Tools" in the search bar.
Hi everyone,
Two quick updates:
Shannon
Awesome!
I have an idea for a proposal. After making three successive rounds of modifications proposed by the reviewers on Experiment, my project was validated today. I hope it's not too late.
Good luck @TOKO!
I see a few GOSH projects are about to run out of time in the next week or two, within the 80%-funded region. I think quite a few of us managed to pull in a few backers, but not enough to take us over the line. It is hard to see exactly how many projects have applied, as the challenge doesn't list the projects.
It is confusing that it tells you to click on the challenge to see more projects, but none are listed. I tried searching "low cost tools" as @davidtlang showed us, however it only ever finds 8 projects (though it finds a different 8 if I sort differently).
Is the plan to extend the deadline and allow a new round of submissions if the $150k isn't spent?
Do I see correctly that this opportunity is still open?
The website says until 15 August now.
We are experimenting (@gaudi @dusjagr) with a new concept for a low-cost DIY microscope: an ESP32 with a camera module, a hacked lens, and a simple stage made from PCB materials that can be assembled into a solid stage structure.
Anybody interested to join the development?
We think the "old" approach of webcams attached to a laptop is a bit outdated.
Using the ESP32 and camera is easier globally, where people usually only have a smartphone: we can view the stream in the phone's web browser and record higher-resolution images to the onboard SD card.
Adding some simple tools for measurement and calibration should be easy with a webapp; see the sketch below.
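For the calibration part, here's a minimal Python sketch of the idea, assuming an object of known size (e.g. a stage micrometer) is in view; the names and numbers are just placeholders, not a finished design:

```python
def calibrate(known_length_um: float, measured_pixels: float) -> float:
    """Return microns per pixel from a reference object of known size."""
    return known_length_um / measured_pixels

def measure(pixel_distance: float, um_per_pixel: float) -> float:
    """Convert a distance drawn on the image (in pixels) to microns."""
    return pixel_distance * um_per_pixel

if __name__ == "__main__":
    # Example: a 100 um scale bar spans 250 pixels in the captured image.
    scale = calibrate(known_length_um=100.0, measured_pixels=250.0)
    print(f"Calibration: {scale:.3f} um/pixel")        # 0.400 um/pixel
    # A feature spanning 80 pixels is then:
    print(f"Feature size: {measure(80.0, scale):.1f} um")  # 32.0 um
```

The same two functions could sit behind the webapp served from the ESP32, with the user clicking two points on the image instead of typing pixel counts.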
I'm interested
Thanks for pointing this out, Marc. The deadline was June 30th and David is fixing.
I'm also interested.
I am launching a campaign on experiment.com for the Low Cost Science Tools fast funding program: https://experiment.com/projects/open-source-radio-imager-for-measuring-plant-health
Our team is working on an RF imaging system to characterize plant biomass in field settings. This will give researchers and, eventually, crop advisors and growers the ability to measure things like crop load/yield, plant phenology - such as leaf-out and seasonal evolution of biomass - and the interior structure of trunks, branches, and fruit. The goal is to enable the production of systems, including RF transceivers, antenna arrays, and mounting hardware for tractors and other farm vehicles, for a cost of less than $1,000 per unit. Schematics, design files, firmware, and other elements will be made available under open source licenses (i.e., one of the CERN open source hardware licenses and OSI-approved software licenses).
We conducted lab measurements in late 2019 that yielded some promising results, but the project was shelved indefinitely as the world went through some things for the next few years. I would like to restart the project so we can pursue larger research funding opportunities in 2024. We propose to build one demonstration system and test it in vineyards and tree nut orchards in October 2023. We may also be able to conduct postharvest data collection, time and resources permitting.
At seven days, the timeline for this campaign is dreadfully short, but if we want to get into the field this year, we need to get moving as soon as possible. Any support that gets us toward this goal is enthusiastically welcome.
Please consider sharing this outside the GOSH Forum if you know any colleagues who might be interested in collaborating on design or research projects. Please feel free to contact me via the experiment.com platform or in the GOSH Forum with any questions. Thank you!
Hello,
First of all, I'd like to express deep gratitude to the Experiment team, including @davidtlang, and the conveners of the Low-Cost Tools Challenge, @shannond and @jcm80, for funding our camera trap project. I, and I'm sure @jpearce and @jmwright as well, am also so thankful for support from the wider GOSH community including @julianstirling, @moritz.maxeiner, @Juliencolomb, @KerrianneHarrington, @hikinghack, and @rafaella.antoniou, plus other generous contributors. We look forward to continuing this work and keeping everyone updated on the Experiment.com page or in the ongoing forum thread.
Looking at what's already been posted in this thread, I think there's plenty that we are collectively learning about the Challenge and the Experiment platform. I took notes during the process of proposing our project for the Low-Cost Tools Challenge, and would like to share my observations here. Please note, in addition to gratitude, my comments here are shared in the spirit of mutual learning, being constructive, improving the user experience (UX) of Experiment, and probably helping other research funding initiatives as well. So here they are, in no particular order:
OK, the above is a summary of my notes on the experience. They are shared with the intention of being helpful and archiving learnings, and with peace and love for such a worthwhile endeavor.
Many thanks again to @davidtlang and the Experiment team, and @shannond and @jcm80 for leading a very important Challenge! I'd love to hear their experiences as well, including what worked well or not!
This is very helpful feedback. Thank you @hpy.
As mentioned before, this funding program was a bit of a hack on the existing Experiment platform, so weâre still learning where all the rough patches are. This feedback will help us in the ongoing redesign.
I'll address some of the points directly:
Cool! I agree that this is an underappreciated feature. I would love to see Experiment projects cited more often and, in general, bring awareness to the idea of open grant proposals: Open Grant Proposals · Series 1.2: Business of Knowing
Agree. That should be fixed in the new design.
I think this is already fixed/updated.
Interesting. I haven't heard of this specific problem yet.
Point taken. We will try to add more clarity and guidance on this. I encourage all the science leads to reach out to folks early. They usually start looking once a project has been submitted for review, but they can also see when folks have started a draft.
This was clearly a mistake on my/our part in not explaining the process in enough detail. Can I ask if this description improves the clarity? Experiment
What else should we add?
That's a remnant from a previous grant design. Agreed, it needs to be fixed in an updated design.
Good feedback. I'll bring this up with the team. Note: Experiment review is not an attempt to replace peer review or serve that same purpose.
Consistency among reviewers is actually an insanely hard problem that we're working to improve.
See above.
Noted and sending to the team.
Ugh. I try to block these folks when I see it happening. You can report it as spam if it feels like spam.
THIS is a problem. We heard this feedback from another project creator too, and we think we found the problem with our customer service email system. We apologize if emails were missed. It's because they were getting routed the wrong way. Hopefully this is fixed.
Thanks again for the feedback.
I would also like to start by thanking everyone for sponsoring, facilitating, and making the fast funding of low-cost science tools a success. I particularly appreciate the flexibility at the end on Experiment to make my projects fundable. @hpy made two main points that, as a community of scientists, we should consider in the future, because they divert resources from open hardware development.
5, 8, 10 & 13. These four points are all part of the same issue: the pre-launch reviews were numerous, always delayed by long periods, and for the most part irrelevant to open hardware development, which is what this CFP was about. Many scientists who posted shared their frustration that they had to invest inordinate amounts of time cramming their hardware development proposals into the Experiment mold of an "experiment".
This administrative preprocessing is now common among funding agencies. For example, although I work in Canada, I can point out that in the US the NSF changes the format and the bio requirements frequently, forcing everyone to revamp even their basic templates every time they submit a new proposal. In addition, the NSF forces scientists to spend absurd amounts of time pushing information like current and pending grants into their format, same with COIs and other non-scientific mandatory parts of a proposal. This means more productive and more collaborative scientists (like most of GOSH) are punished with more administrative work per proposal than lone wolves. This also results in a non-scientific screening of grant proposals, which in some countries can be political (and potentially really dangerous). Active scientists give up control of the peer-review process, as administrators can effectively end proposals on failure to comply (at many universities this is even dictated by non-scientific research staff internally before scientists are allowed to click the submit button).
The last time I checked, the award rate for single-PI NSF proposals was ~7%, which means 93% of scientists wasted their time writing proposals that were not funded, and what appears to be a large fraction of them were cut for non-scientific reasons (and that does not include the proposals that were blocked at the university level). The end result is that any scientist who wants to be successful is forced into investing dozens of hours cutting and pasting into fields for each proposal, or hiring someone else to do it. This is a sub-optimal use of resources no matter how the scientists comply.
Experiment has the potential to upend some of the serious problems with scientific funding - but not if they follow the bad habits that have developed in the funding agencies. In Experiment's case I would recommend creating two categories: 1) "Gold star compliant" (or something similar), which follows your normal process, and 2) "Regular", where scientists can submit projects in whatever way they see fit after only a single round of recommendations, not mandates. This would cut down on Experiment staff's investment, which hopefully would result in faster response times and maybe even lower overhead rates, as well as providing more flexibility and efficiency to the whole process.
I take issue with this comment. On the Experiment & Experiment Foundation side, we have worked really hard to build a tool and structure a grant program that can fund folks and projects that the other systems sometimes miss, including doing all the legal work required to support those who are working outside universities. For many folks, this is the first scientific grant they've ever gotten. We're proud of our work. To casually call that a "tax" or imply that it's not "real" administration is frustrating.
If you think you can run a grant program for less than that to prove me wrong, by all means, please do.
I stand by the Experiment team. I think most of the projects were improved by the feedback. Could we improve the process? Of course. But the demeaning language is unhelpful.
As we are in feedback mode, I'll jump in too.
Firstly, the payment fees. Personally, I was pleasantly surprised that they came out a little lower than we first budgeted for. As a freelancer I received ~90% of the money I asked for. If I had used a fiscal sponsor for a grant, I would be happy to get 90%. If I were at a university, I would expect them to take over 50%.
Something I have learned since leaving the university is that I am more efficient without having to buy through a slow, bureaucratic procurement department, and more efficient without having to fight IT to do something basic on my own computer. The 50% overhead the university used to take from grants paid for a corner of a dark office in an asbestos-ridden building, a tiny lab space, considerable bureaucracy and other responsibilities, and a fancy name. Bad value for money for my kind of work. Experiment's 10% for this project paid for the web platform, the payment system, raising awareness, and help getting the investment in the project. This seems fair.
@jpearce perhaps we should point our pitchforks towards the universities rather than @davidtlang?
University bureaucracy around payments is a huge hassle. I get that they have quite strict accounting and normally fairly inflexible systems for recording it. I am actually impressed that Pen has managed to get the University of Bristol to accept the money at all. My experience at Bath was that proposing to bring in money without their expected high overhead rate was really difficult, even if the fees and rates are fixed.
From my experience at Bath, I would never have applied for my salary via Experiment, as I assume I would spend more time negotiating with the university than doing science. It would be great to see this problem solved, as more options to extend precarious post-doc contracts would be very helpful. The university would probably jump at the option of 13% going to Experiment (the worst-case estimate) to have certainty on their numbers. Doing something non-standard with universities is fine if you are bringing in a few million, but for a few thousand they are unhelpful. Good luck with this.
I would say peer review is the thing that scientists always hold up as the best part of science, yet in practice it is actually the worst part. In academia, peer review is a secret judgement by others who often have a vested interest in the result. It rarely makes work better, loads of bad results get into the literature anyway, and it just slows things down.
I can see why experiment.com reviews projects internally. And I know, having talked to @davidtlang, that the process is intended as a conversation. However, as happened in the argument above, we didn't feel able to have the conversation through the platform. It would be great to be able to query, explain, and respond to reviews within the platform.
UI is hard. Looking at the UI I have created for projects, one might question whether those in glass houses should really be throwing stones. However, here are a few things that I think would help:
David - Apologies - I did not mean in any way to demean you or your team by calling the admin fee a tax. Where I come from a tax is not necessarily a bad thing as long as it is used to provide value.
To be more clear: ignoring overhead, science scales with funds. $100k roughly pays for a master's student. $200k pays for two of them. Two master's students will normally do twice the work of one... @julianstirling is right that the normal university 50% overhead means 50% less. You need $200k at 50% overhead to get one master's student graduated and fully funded.
That said, a 10% admin fee means 10% less science. It seems these fees should be flat fees, not a percentage. @davidtlang, I fail to understand how administrative tasks scale with the size of the grant; maybe this is something you can explain.
Here is an example with order-of-magnitude numbers to keep our math easy: if the grant is $10k and the admin work is 10 hours @ $100/hr, the admin cost is $1k, which is 10%. Then if GOSH were funding 100 such grants by investing $1m, there would be 100x the admin work, and the overall overhead would be $100k - enough for an FTE. That part scales linearly and makes sense to me, but it could be accomplished by charging a flat fee of $1k per grant. 10 hours @ $100/hr seems expensive, but I don't know the real costs - Experiment should know those numbers exactly, including servers, etc., and could offer a flat fee.
Here is the part I don't understand: what if GOSH decided to take the same $1m in funding and award it to one team instead of 100? In that case, the admin effort would still be 10 hours @ $100/hr = $1k, not $100k. Losing $99k to the 10% admin fee in that case loses approximately a master's student to science. If a funder can hire someone for 10 hours @ $100/hr = $1k to do the admin work, they could fund one more master's student. Is that correct, or did I miss something? The sketch below makes the comparison concrete.
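To keep the math explicit, here is the same comparison as a quick Python sketch; the $100/hr rate and 10 hours of admin per grant are my rough assumptions, not Experiment's actual costs:

```python
# Compare a 10% percentage fee to a flat per-grant admin fee.
ADMIN_HOURS_PER_GRANT = 10
ADMIN_RATE = 100                                 # assumed $/hr
FLAT_FEE = ADMIN_HOURS_PER_GRANT * ADMIN_RATE    # $1k per grant
PCT_FEE = 0.10                                   # 10% of the funds

def overhead(total_funds: float, n_grants: int) -> tuple[float, float]:
    """Return (percentage-fee overhead, flat-fee overhead) in dollars."""
    return total_funds * PCT_FEE, n_grants * FLAT_FEE

for funds, n in [(1_000_000, 100), (1_000_000, 1)]:
    pct, flat = overhead(funds, n)
    print(f"${funds:,} across {n} grant(s): "
          f"10% fee = ${pct:,.0f}, flat fee = ${flat:,.0f}")
# $1,000,000 across 100 grant(s): 10% fee = $100,000, flat fee = $100,000
# $1,000,000 across 1 grant(s): 10% fee = $100,000, flat fee = $1,000
```

The two schemes only agree when every grant happens to be $10k; for one big grant, the percentage fee charges $99k more than the assumed admin effort.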
The thought exercise works in theory (and in a specific university setting), but not in practice. Here are some places where it breaks down:
Mostly, this falls into the trap of thinking PIs with labs and grad student labor are the only source of good ideas or the only ones with the capacity to contribute.
Only a few projects in this grant program are paying grad students to do the work. Most are doing the work directly. We can fund grad students directly to work on their own projects, and in many cases we are. I much prefer to fund the people doing the actual work.
Grant writing is part of the scientific process: it's defining the question. Honing your question in a way that you can communicate it to others is a valuable exercise, even though many scientists lament it. Of course, the tedium of many granting bodies makes this unreasonable. Our goal is to help project creators improve their clarity at this stage. Our reporting requirements are completely reasonable and doable and help to communicate the science in addition to fulfilling the non-profit legal requirements.
Also, if someone hires 9 people ($100k per master's student), then they are basically a full-time manager (or need to hire a manager). So the admin is going to be much closer to $100k in either example.
I would much rather have 100 people working directly on projects they are passionate about than 1 PI with 9 employees working for them. And I would think the first example fits the GOSH ethos much more than the latter. It's in the Manifesto: "1,000 heads are better than 1."
Not always. I think this group epitomizes the idea that science could scale with reduced costs. What should science cost? - by David Lang
Hello everyone!
I fully agree with the previous feedback, as I went through a very similar experience. Thanks for taking the time to write it up the way you did, and thanks David for answering.
I am also very thankful to everyone working to make this happen.
+1
Here is my overall feedback:
The first error appears on its own on page load, the second one after clicking the buggy button.
All in all, I think that GOSH's CDP was a much better "grant experience" for me. The application process was clearly stated, it used this forum as a platform, and it charged 0% overhead. There were also complaints and a lot of room for improvement, but those popped up here as well.
I think Experiment can learn from the CDPâs simplicity.
I'm sorry to learn that universities in other countries keep 50% of the grants. As a PhD student, I only had to pay a 5% fee to use the faculty's fiscal sponsor (whose work does scale with the size of the grants). Living in Argentina had to have some advantage. (?)
Just a note on that: science will scale with reduced costs in tools (which is the focus of the challenge) and not in people (which was being discussed).
I'm inclined to agree with Joshua. I did not expect a >10% fee from Experiment; I would have expected a fixed rate instead. This is because I cannot see why Experiment's work would scale with the project's budget, while my work does.
Those fees are, however, irrelevant to the application; I did not organise the challenge nor secure funding for it (thanks everyone for that). I simply increased the project's budget to account for the fee. It's not up to me if less work gets funded because of it; I can only try to estimate my costs accurately and omit charging for my time (which is entirely a donation to OScH development).
I hope the rest of my experience with Experiment will be as good as with the CDP.
Best,
Nico