New code of conduct for researchers: be cautious with AI
Support open science where possible, don’t publish in predatory journals, and be careful with AI. The Royal Netherlands Academy of Arts and Sciences (KNAW) has presented a draft of the new research integrity code, which is now open for public comment.
The current code of scientific integrity is only seven years old. Yet an update is already needed, a KNAW committee decided last year.
The committee felt the code should better reflect the practice-oriented research at universities of applied sciences. In addition, the use of artificial intelligence has skyrocketed, and due to international tensions, knowledge security has also become a much bigger issue.
Politics
And then there was the pandemic. In the draft code released today, the KNAW notes that “political involvement” in scientific research has increased in recent years. Think, for example, of the lockdowns, which raised the question for many: was it the scientists or the politicians who decided schools should close?
Scientists must be clear when they are engaging in politics. And politicians should not interfere with the fact-finding role of scientists. “For everyone to properly fulfill their societal role, all parties must respect each other’s distinctiveness and autonomy,” the new code advises. Moreover, science provides not just facts but also “methodical doubt.”
Predatory journals
The explosive growth of predatory ‘scientific’ journals has also led to a new addition: don’t get involved with them. Publish only in journals that maintain serious quality standards.
Based on the integrity code, anyone with doubts about the integrity of scientific research can file a complaint with the university. On appeal, cases can be brought before the Netherlands Board on Research Integrity (LOWI).
The KNAW is inviting everyone to weigh in on the draft text. Feedback is welcome until October 20. Is the wording appropriate? Is something missing? Are the consequences of certain rules well considered?
Open science and AI
Some of the new additions are cautiously worded. The principle of open science has been included in the code, for instance: data and publications should be freely accessible “where possible and desirable.”
The new guidelines on generative artificial intelligence are intentionally phrased broadly, the authors explain in an accompanying note. Developments are moving quickly, and the rules must also apply to technologies that do not yet exist.
One rule reads: “Use only technologies whose functionality is known and scientifically validated.” That may not be easy. AI tools such as ChatGPT, Perplexity, and Mistral generate text and images on the basis of statistics and data, and it is often hard to determine which sources were used or how that data was processed.
Not easy
Another rule states: “Do not use technologies that hinder compliance with the principles and standards in this code.” The committee urges employers to support their staff in this regard and to develop clear guidelines.
The new code of conduct applies to both universities and universities of applied sciences. The latter are not mentioned very explicitly, except in matters such as the funding of professorships and lectureships, which, according to the code, must be transparent.
The updated Netherlands Code of Conduct for Research Integrity is expected to take effect early next year.
This article was translated using AI-assisted tools and reviewed by an editor.