Dennis Alexis Valin Dittrich

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation
https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/

"Roughly two-thirds of the retrieved papers were found to have been produced, at least in part, through undisclosed, potentially deceptive use of GPT. The majority (57%) of these questionable papers dealt with policy-relevant subjects (i.e., environment, health, computing), susceptible to influence operations. Most were available in several copies on different domains (e.g., social media, archives, and repositories).

Two main risks arise from the increasingly common use of #GPT to (mass-)produce #fake, scientific #publications. First, the abundance of fabricated "studies" seeping into all areas of the #research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record. A second risk lies in the increased possibility that convincingly scientific-looking content was in fact deceitfully created with #AI tools and is also optimized to be retrieved by publicly available academic search engines, particularly #GoogleScholar. However small, this possibility and awareness of it risks undermining the basis for #trust in #scientificKnowledge and poses serious societal risks."

#science #AIEthics