Call for inputs: Global Consultation on the Draft Principles of Open Science Monitoring

Per the UNESCO announcement:

As part of the ongoing efforts to advance open science monitoring in alignment with the 2021 UNESCO Recommendation on Open Science, UNESCO is calling for inputs and comments from all the regions and interested stakeholders on the Draft Principles for Open Science Monitoring.
Deadline for inputs: 30 November 2024

Open source hardware is currently part of the UNESCO Recommendation on Open Science, so it would probably be a good thing to give some feedback on how to achieve effective monitoring of open science hardware!

3 Likes

Great, let's organise a working group and jump on a call?

I think that any suggestions for the recommendation need to take into account that a depressingly large proportion of academics will do the exact minimum to technically comply. More complex hoops to jump through which are still binary (i.e. pass/fail) can often end up catching out truly open projects that work in a new or innovative way, while still being easy to short-cut.

This is particularly important as certain domains are overlooked and most scientists live in a field-specific bubble.

I am a strong believer that we need real career incentives within science for people to work openly (see opinion paper). I also hold the somewhat incompatible view best summarised by Goodhart's law:

“When a measure becomes a target, it ceases to be a good measure”

What I would push for is for UNESCO to encourage governments and institutions to put less weight on metrics that are antithetical to open science, rather than to establish yet more metrics. For example, publication metrics are key to academic success. If academics are incentivised to out-publish their peers at all costs, then there is an incentive to withhold data, details, and tools. Open science requirements can be sprinkled on top, but there is an incentive to do the bare minimum to pass the threshold of “open science” while keeping all other key information in-house.

One other thing I would suggest is that when we monitor science we need more nuance and less counting. Citation counting, again, poorly reflects the usefulness of open hardware. A citation is a citation. You get one “point” if someone mentions your work in passing because it’s cool, one “point” if someone says your result is wrong or controversial, one “point” if they back up a fact they already knew by citing your paper, and one “point” if they meticulously follow your carefully written, painstaking instructions to build an instrument that is the foundation of the work they are publishing. This is not an incentive to write good, meticulous open source hardware documentation; it is an incentive to get as many flashy, controversial results into the literature as possible.

This might sound like quite a negative response from a bitter individual (because it is!). But it is also a reminder that institutions actively encourage scientists to maximise statistics by gaming the system. We need to be aware of this when suggesting the rules of the game.

2 Likes

Thanks for sharing this; it would be really nice if we could give our feedback, considering that UNESCO is keen on having diverse inputs. We (@uddhavi2009, @e-clin and @ratiranjan) can also add some feedback from a South-East Asian perspective.

Sidenote: as we are encouraging Open Data in the NIHR Global RESPIRE project, our team is considering giving comments on Open Data being “as open as possible” and on achieving effective monitoring of Open Data in line with the FAIR and CARE principles, so that it helps protect the confidentiality and rights of people, including Indigenous peoples (e.g. tribal communities).

1 Like

I already sent my own feedback by email; happy to work it out in this space and send a new email. Here is my email:

In the introduction, please add open source hardware, as indicated in the UNESCO open science recommendation:
opening up or sharing of research data, publication of software used in research under open source licences
→
opening up or sharing of research data, publication of software and hardware source used in research under open source licences

As a more general remark, I would like to propose a clause to minimise the risk of misuse. We have seen both misuse of metrics (the JIF, for instance) and competing interests in open science monitoring (the same company both creating metrics and selling publication services).

In Part 2 (Transparency and reproducibility) I would propose adding:

Objectives and competing interests. To reduce the risk of misuse, the objective for which the measure was created should be stated (to prevent a metric being used for a purpose it was not designed for). In addition, the authors of the metric should be known and any competing interests should be declared.
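To make the proposed clause concrete, here is a minimal, hypothetical sketch of what such a declaration could look like if a monitoring initiative published it in machine-readable form. The class and field names below are purely my own illustration, not anything from the draft:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetricDeclaration:
    """Hypothetical metadata record a monitoring initiative could publish
    alongside each indicator (all field names are illustrative)."""
    name: str                       # what the indicator measures
    objective: str                  # the purpose it was designed for
    authors: List[str]              # who defined the metric
    competing_interests: List[str] = field(default_factory=list)  # declared conflicts; empty if none

# Illustrative values only, not taken from the draft:
example = MetricDeclaration(
    name="share of instruments with openly licensed hardware documentation",
    objective="track adoption of open hardware practices; not intended for ranking institutions",
    authors=["monitoring working group"],
)
print(example)
```

The point is simply that the objective, authorship and competing interests would travel with the metric itself, so they cannot be quietly dropped when the metric is reused for something else.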

PS: the authors are very experienced (and clever); I think they considered what @julianstirling means, and the principles reflect that in Part 3 (especially the “avoid rankings” part).

2 Likes

It’s certainly nice to see “Avoid rankings” in the document. I read that as meaning no new rankings of how open the science is? It would be even better to suggest removing existing rankings that harm open science, but I suppose that may be out of scope.

1 Like

I read Julian’s linked opinion piece, and in Section 3, the examples of openness during the Victorian era (1845!) –

  • publishing details of a mechanism needed for replication of an experiment,
  • creating the first standards for screw threads to speed innovation, and
  • rejecting patents

– make the perverse incentives of today’s academic science practices plainly evident.

I’d like to see his conclusion calling for carrots (aligning vision) rather than sticks (requirements that can be gamed) inform a group letter to UNESCO.

2 Likes

Just for context, from the UNESCO draft (Part 3, items 2 and 3):

Monitor for improvement: monitoring should be used to incentivise rather than punish lack of practices.
Continuous assessment of the monitoring initiative: as any indicator could potentially facilitate gaming attempts, there is a need for continuous assessment of indicators and methodologies, and the potential shift of practices in unintended directions.

From the folks I know in the contributor list, I have every reason to believe that the people who wrote these draft principles are open to discussing improvements and suggestions, and that letters from any individual or group will be well received and fully considered. I would just note that the scope of the document is Open Science monitoring initiatives (which already exist and aren’t going away) and not research evaluation.

Cheers! :slight_smile:


1 Like