Read more
The European Code of Conduct for Research Integrity (The ALLEA code, 2023)
EU guidelines on AI: Living guidelines on the RESPONSIBLE USE OF GENERATIVE AI IN RESEARCH
The use of generative AI is widespread in research. It is crucial to be aware that the principles for AI-assisted research are the same as the general principles for research integrity, and as such they follow the standards of the European Code of Conduct for Research Integrity. The key principles for the responsible use of generative AI in research are:
Reliability in ensuring the quality of research, reflected in the design, methodology, analysis and use of resources. This includes aspects related to verifying and reproducing the information produced by the AI for research. It also involves being aware of possible equality and non-discrimination issues in relation to bias and inaccuracies.
Honesty in developing, carrying out, reviewing, reporting and communicating research transparently, fairly, thoroughly and impartially. This principle includes disclosing that generative AI has been used.
Accountability for the research from idea to publication, for its management and organisation, for training, supervision and mentoring, and for its wider societal impacts. This includes responsibility for all output a researcher produces, underpinned by the notion of human agency and oversight.
Respect for colleagues, research participants, research subjects, society, ecosystems, cultural heritage and the environment. Responsible use of generative AI should take into account the limitations of the technology, its environmental impact and its societal effects (bias, diversity, non-discrimination, fairness and prevention of harm). This includes the proper management of information, respect for privacy, confidentiality and intellectual property rights, and proper citation.
When considering the use of generative artificial intelligence (AI) tools for the preparation of an EU proposal, it is imperative to exercise caution and careful consideration. The AI-generated content should be thoroughly reviewed and validated by the applicants to ensure its appropriateness and accuracy, as well as its compliance with intellectual property regulations. Applicants are fully responsible for the content of the proposal (even those parts produced by the AI tool) and must be transparent in disclosing which AI tools were used and how they were utilized.
Specifically, applicants are required to:
Verify the accuracy, validity, and appropriateness of the content and any citations generated by the AI tool and correct any errors or inconsistencies.
Provide a list of sources used to generate content and citations, including those generated by the AI tool. Double-check citations to ensure they are accurate and properly referenced.
Be conscious of the potential for plagiarism where the AI tool may have reproduced substantial text from other sources. Check the original sources to be sure you are not plagiarizing someone else’s work.
Acknowledge the limitations of the AI tool in the proposal preparation, including the potential for bias, errors, and gaps in knowledge.
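As a practical aid to the citation checks above, the sketch below shows a hypothetical helper (not part of the EU guidelines) that flags which AI-generated references contain a DOI-shaped identifier worth resolving. It is only a syntactic pre-check: generative AI can fabricate well-formed DOIs, so every flagged identifier still has to be resolved and the source read before it is cited.

```python
import re

# Minimal syntactic pre-check for DOI-shaped strings in citations.
# Passing this check does NOT mean the reference exists: AI tools can
# invent plausible DOIs, so each match must still be verified manually.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+\b")

def find_doi(citation: str):
    """Return the first DOI-shaped substring in a citation, or None."""
    match = DOI_PATTERN.search(citation)
    return match.group(0) if match else None

if __name__ == "__main__":
    ai_generated = [
        "Smith, J. (2021). Trust in machines. doi:10.1234/trust.2021.001",
        "Jones, A. (2020). A made-up paper with no identifier at all.",
    ]
    for ref in ai_generated:
        doi = find_doi(ref)
        if doi:
            print("verify manually:", doi)
        else:
            print("no DOI found, check source directly:", ref)
```

A helper like this only sorts references into "has an identifier to resolve" and "needs a manual source hunt"; the actual verification remains the applicant's responsibility.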
Merian Skouw Haugwitz-Hardenberg-Reventlow, Chief Adviser, Research Security, Department of Research, Advisory and Innovation. Mobile: 25320325, mehau@dtu.dk
Limitations of AI tools
Generative AI tools predict plausible text rather than retrieve verified facts. This means AI can generate output, e.g. references or information, that sounds convincing but does not exist, is incomplete, or is factually incorrect.