ChatGPT used in peer reviews of Australian Research Council grant applications

Academics have alleged that some peer reviews of grant applications are being written with ChatGPT, prompting the Australian Research Council (ARC) to warn academics that feeding their peers’ work into generative AI models could be a breach of confidentiality.

ARC_Tracker tweeted on Friday that it had received reports “that some ARC discovery projects assessor reports have been produced with ChatGPT.”

Discovery Projects are multi-year research programs vying for up to $500,000 in government grants. Typically only 15 to 20 percent of applicants are funded, and the review process is rigorous.

But ARC_Tracker said some research teams had received assessor reports that were just a “generic regurgitation” of their applications, with at least one containing the text “regenerate response” – a telltale sign that it was generated by ChatGPT.

The Twitter user behind the ARC_Tracker account – who is a researcher at an Australian university – told iTnews that it appeared some assessment reports were AI-generated.

“Researchers who’ve led proposals have sent me (all or parts of) some expert assessments they received and they read exactly like a simple ChatGPT summary of a proposal without any critique, opinion, insight or assessment at all,” the ARC_Tracker account owner told iTnews.

“The researchers tell me the phrases used in these assessments are very simple rearrangements of the phrases already in their grant proposals.

“They have also commented that they put their own grant proposals into ChatGPT and got very similar summaries as they see in the assessment…

“One assessment, from one researcher, had the tell-tale ‘regenerate response’ text at the bottom of (one section of) the assessment. This is what ChatGPT shows at the bottom of its response page.

“It’s smoking gun evidence that ChatGPT was used to generate their assessment text.” 

The ARC has since released a statement advising peer reviewers not to use AI as part of their assessments.

“Release of material that is not your own outside of the closed research management system, including into generative AI tools, may constitute a breach of confidentiality,” the council said.

It added that it would update guidance on AI use “in the near future.”

An ARC spokesperson told iTnews that although “generative artificial intelligence (AI) tools such as ChatGPT are not explicitly named” in policies that apply to peer reviews, “common principles of confidentiality apply across both existing and emerging channels through which confidential information may be inappropriately disclosed.

“The ARC is not alone in considering a range of issues regarding the use of generative AI that uses algorithms to create new content and that may present confidentiality and security challenges for research and for grant program administration…Any concerns raised by applicants will be considered and responded to as per our policies,” the spokesperson said.

The spokesperson did not comment on how common the use of ChatGPT in peer reviews is, nor on whether the ARC is considering using AI detection models to identify the use of ChatGPT by academics, as some universities, such as the University of Melbourne, have done to identify use by students.

ARC_Tracker said that the factors underlying ChatGPT-assisted peer reviews of grant applications included academics’ unmanageable workloads and the ARC taking too long to release clear policies about the use of AI.

“There’s been a long-running problem of ARC grant proposals being extremely long,” the account owner said.

“If there’s just three or four investigators on the grant, it can easily run to 100-plus pages long. I’ve assessed ones that are 150 pages long.

“Assessors often aren’t given time to review anything in their academic workload model at universities (which have many problems themselves) and so the peer review process is generally under a lot of pressure.”

ARC_Tracker said that another reason peer reviewers may have resorted to ChatGPT is that “there’s nothing in any of the ARC’s policies explicitly about AI generative text engines” that prohibits or restricts their use.

ARC_Tracker added that the statement ARC released did not explicitly address the use of generative AI. 

“Sure, the ARC’s statement…says there are general requirements, under their confidentiality policy, not to upload other people’s grant text to external websites,” the account owner said.

“But take a read of that policy. Is it clear and simple, and does it definitely – without any doubt – warn people not to use ChatGPT or similar AI services? I don’t think so. I’ve read it – it’s pretty difficult to parse.

“The ARC do this all the time: when something goes wrong they rely on some worn old policy, that no one can read easily, instead of pre-empting problems that the community warns them about well in advance.”

