Hello,
First of all, I’d like to express deep gratitude to the Experiment team including @davidtlang, and the conveners of the Low-Cost Tools Challenge @shannond and @jcm80, for funding our camera trap project. I’m sure @jpearce and @jmwright join me in also being deeply thankful for support from the wider GOSH community including @julianstirling, @moritz.maxeiner, @Juliencolomb, @KerrianneHarrington, @hikinghack, and @rafaella.antoniou, plus other generous contributors. We look forward to continuing this work and keeping everyone updated on the Experiment.com page or in the ongoing forum thread.
Looking at what’s already been posted in this thread, I think there’s plenty that we are collectively learning about the Challenge and the Experiment platform. I took notes during the process of proposing our project for the Low-Cost Tools Challenge, and would like to share my observations here. Please note that, in addition to gratitude, my comments are shared in the spirit of mutual learning and being constructive, with the hope of improving the user experience (UX) of Experiment and probably helping other research funding initiatives as well. So here they are, in no particular order:
- I really like that a digital object identifier (DOI) is associated with projects. Great idea!
- It has been hard for me to search for and view projects associated with the different Challenges. I think this has been noted in previous comments in this thread.
- Many Challenges are listed under “Current Grants” or “Requests for Experiments” even though their stated deadlines have passed. What does this mean? Are they still accepting projects or not?
- The Experiment.com documentation states: “Experiment charges a 8% platform fee and our payment processor charges an additional fee of roughly 3-5%. The payment processor charges 2.9% + $0.30 on each successful charge.”
  - I get the gist of this statement. However, my university requires any funding to go through them instead of coming directly to me, and they are very uncomfortable with not having enough information to calculate exactly how much money will arrive. I know the final charges depend on how many individual transactions are made, but these complications make it difficult for university researchers (at least in my experience in the UK) to utilise Experiment. (See the fee sketch after this list for an illustration.)
- I actually submitted a separate project to a different Challenge that was rejected. This is totally fine. However, I learned of the rejection from one of the Challenge conveners before my project even went live! What is the review process for Challenges? When do they start looking at projects? My project was rejected even before my endorser posted their endorsement, and technically the endorsement might have affected the conveners’ funding decision. More clarity on exactly when a project will be evaluated would be crucial.
- I carefully read the description of the Low-Cost Tools Challenge in June 2023. From it and the discussions in the thread, I understood that the Challenge leads would decide whether to fund projects in full, after which the projects could optionally crowdfund beyond the campaign goal. However, our project, and several others in the Challenge, received support for only a portion of their campaign goals, implying that they were expected and required to raise additional funds to meet them. This might look like a subtle difference, but it greatly affects prospective projects and their planning. I went into our project expecting to receive either 0% or 100% from the Challenge, did not plan to crowdfund, and was OK with the possibility that we might receive 0%. In the end, the Challenge (very generously, thank you!) gave us 80% of our goal, and I had to scramble in the final days to crowdfund the rest. As an academic, it is extremely difficult to organise sudden, unplanned tasks like this. While our project was ultimately funded, the Challenge description did not make clear that crowdfunding might be required rather than optional. I strongly suggest revising the wording of Challenge descriptions to at least state that projects might be partially funded rather than just 0% or 100%.
- Unclear what the “deadlines” listed in Challenge descriptions mean:
  - Is that when I need to hit “submit”, when the project needs to be live, or when the campaign needs to end?
  - What’s the difference between “Submission Deadline” and “Campaign Launch” on a Challenge page when they seem to be the same date?
- Project pre-launch reviews by Experiment do not allow us to directly respond to a reviewer’s comments/questions. This is even less functional than the already-dysfunctional peer review process in academic journals. Some of the reviewer comments take the form of “Can you do x?”. Without being able to respond to the reviewer, I feel forced to say yes to the question. If the reviewer requires me to do something, then say so, and don’t present it as a question that I am not allowed to answer.
  - This manifested as a real problem for our project, where I used a real camera trap photo of a wild roe deer as our banner image. Camera trap photos characteristically include text along their top and bottom showing metadata such as time, date, temperature, moon phase, etc. One of the Experiment reviewers required us to crop that metadata text out of the photo. If you visit our project page now, you see the deer photo without that information. This presentation is misleading: visitors would not know at all that it’s a camera trap picture, and the link to our project topic is completely severed. In other words, I feel forced into misleading potential backers because there is no way to respond to reviewer comments. Also, we were required to crop our image this way during the second review round. Why did the first reviewer not ask us to do this?
- Speaking of which, our project went through multiple rounds of review by Experiment, but on the second round they asked me to do things that could easily have been raised the first time. This feels like a big waste of time for everyone involved, and pushed the launch of our project back beyond the originally stated deadline (also see the point above regarding my confusion about what the stated deadlines actually mean).
- A few other technical problems:
  - The user interface (UI) is contradictory on whether I need to be “verified” before getting an endorsement.
  - The UI requires a date of birth for verification, which I almost certainly cannot obtain from a finance person at my university. How did others deal with this? Is it documented somewhere?
  - At the beginning of creating a project, it’s unclear what’s required for it to officially launch, such as needing to secure an endorsement or verify identity. These additional steps came as surprises and threw off my timeline. I know there is some documentation, but it’s still unclear exactly what will be asked of me and required at which steps of the process. A detailed diagram that lays out the exact steps from project inception to successful funding, with clear indicators of what’s needed at each stage, definitions of terms, and other key “stage gates”, would be very helpful.
- By the way, I got a message from someone claiming they could provide services to make my campaign succeed. In my opinion there’s nothing inherently wrong with this, but I just want to share it FYI.
- Lastly, several of my emails to Experiment went completely unanswered. I later learned that there was some delay due to illness, which is understandable. But even then, several of my questions went unanswered, and it was unclear what would happen if, because of this, our project couldn’t be launched by the deadline. Would we no longer qualify to be considered for funding from the Challenge?
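To illustrate the fee point above, here is a minimal sketch of the arithmetic. It assumes the 8% platform fee and the 2.9% processor rate both apply to the gross pledge total, and that the $0.30 applies per successful charge. That is my reading of the documentation, not Experiment’s confirmed accounting, so treat it purely as an illustration of why the final amount depends on how many individual pledges arrive:

```python
def net_payout(total_pledged, n_pledges,
               platform_fee=0.08, proc_rate=0.029, proc_flat=0.30):
    """Rough estimate of funds received after fees.

    Assumptions (mine, from my reading of the docs):
    - the 8% platform fee applies to the gross total
    - the processor takes 2.9% of each charge plus $0.30 per charge
    """
    fees = (platform_fee + proc_rate) * total_pledged + proc_flat * n_pledges
    return total_pledged - fees

# The same $5,000 goal nets a different amount depending on backer count:
for n in (10, 100, 500):
    print(f"{n:>3} pledges -> ${net_payout(5000, n):,.2f}")
#  10 pledges -> $4,452.00
# 100 pledges -> $4,425.00
# 500 pledges -> $4,305.00
```

Under these assumptions, a fully funded $5,000 campaign could arrive as anything from roughly $4,300 to $4,450, and the exact figure is only knowable after the campaign closes. This is precisely the uncertainty a university finance office struggles to accommodate.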
OK, the above is a summary of my notes on the experience. They are shared with the intention of being helpful and archiving learnings, and with peace and love for such a worthwhile endeavour.
Many thanks again to @davidtlang and the Experiment team, and @shannond and @jcm80 for leading a very important Challenge! I’d love to hear their experiences as well, including what worked well or not!