<?xml version="1.0"?>
<!DOCTYPE ArticleSet PUBLIC "-//NLM//DTD PubMed 2.0//EN" "http://www.ncbi.nlm.nih.gov/entrez/query/static/PubMed.dtd">
<ArticleSet>
  <Article>
    <Journal>
      <PublisherName>Barw</PublisherName>
      <JournalTitle>Barw Medical Journal</JournalTitle>
      <Issn>2960-1959</Issn>
      <PubDate PubStatus="epublish">
        <Year>2024</Year>
        <Month>11</Month>
        <Day>01</Day>
      </PubDate>
    </Journal>
    <ArticleTitle>Assessment of Chat-GPT, Gemini, and Perplexity in Principle of Research Publication: A Comparative Study</ArticleTitle>
    <ELocationID EIdType="doi">10.58742/bmj.v2i4.140</ELocationID>
    <Language>eng</Language>
    <AuthorList>
      <Author>
        <FirstName>Ameer M.</FirstName>
        <LastName>Salih</LastName>
        <Affiliation>Civil Engineering Department, College of Engineering, University of Sulaimani, Sulaymaniyah, Iraq. ameer.salih@univsul.edu.iq</Affiliation>
      </Author>
      <Author>
        <FirstName>Jaafar Omer</FirstName>
        <LastName>Ahmed</LastName>
        <Affiliation>Psychology Department, Faculty of Art, Soran University, Soran, Iraq. jaafar.ahmed@gmail.com</Affiliation>
      </Author>
      <Author>
        <FirstName>Dilan S.</FirstName>
        <LastName>Hiwa</LastName>
        <Affiliation>Scientific Affairs Department, Smart Health Tower, Madam Mitterrand Street, Sulaymaniyah, Iraq. dilan.sarmad.hiwa@gmail.com</Affiliation>
      </Author>
      <Author>
        <FirstName>Abdulwahid M.</FirstName>
        <LastName>Salih</LastName>
        <Affiliation>Scientific Affairs Department, Smart Health Tower, Madam Mitterrand Street, Sulaymaniyah, Iraq. abdulwahd.salih@gmail.com</Affiliation>
      </Author>
      <Author>
        <FirstName>Rawezh Q.</FirstName>
        <LastName>Salih</LastName>
        <Affiliation>Scientific Affairs Department, Smart Health Tower, Madam Mitterrand Street, Sulaymaniyah, Iraq. rawezh.salih@gmail.com</Affiliation>
      </Author>
      <Author>
        <FirstName>Hemn A.</FirstName>
        <LastName>Hassan</LastName>
        <Affiliation>Kscien Organization for Scientific Research (Middle East office), Hamdi Street, Sulaymaniyah, Iraq. hemn.ali@gmail.com</Affiliation>
      </Author>
      <Author>
        <FirstName>Yousif M.</FirstName>
        <LastName>Mahmood</LastName>
        <Affiliation>Scientific Affairs Department, Smart Health Tower, Madam Mitterrand Street, Sulaymaniyah, Iraq. yousuf.smarthealth@gmail.com</Affiliation>
      </Author>
      <Author>
        <FirstName>Shvan H.</FirstName>
        <LastName>Mohammed</LastName>
        <Affiliation>Xzmat Polyclinic, Rizgari, Kalar, Sulaymaniyah, Iraq. shvanh80@gmail.com</Affiliation>
      </Author>
      <Author>
        <FirstName>Bander A.</FirstName>
        <LastName>Abdalla</LastName>
        <Affiliation>Scientific Affairs Department, Smart Health Tower, Madam Mitterrand Street, Sulaymaniyah, Iraq. bander.abdalla@gmail.com</Affiliation>
      </Author>
    </AuthorList>
    <History>
      <PubDate PubStatus="received">
        <Year>2024</Year>
        <Month>10</Month>
        <Day>03</Day>
      </PubDate>
    </History>
    <Abstract>Introduction

Many researchers utilize artificial intelligence (AI) to aid their research endeavors. This study seeks to assess and compare the performance of three sophisticated AI systems, namely ChatGPT, Gemini, and Perplexity, when applied to an examination focused on knowledge of research publication.

Methods

Three AI systems (ChatGPT-3.5, Gemini, and Perplexity) were evaluated using an examination of fifty multiple-choice questions covering various aspects of research, including research terminology, literature review, study design, research writing, and publication-related topics. The questions were written by a researcher with an h-index of 22, were later tested on two other researchers with h-indices of 9 and 10 in a double-blinded manner, and were revised extensively to ensure their quality before being administered to the three AI systems.

Results

In the examination, ChatGPT scored 38 (76%) correct answers, while Gemini and Perplexity each scored 36 (72%). Notably, each AI system selected certain answer options correctly at a statistically significant rate: ChatGPT chose option (C) correctly 88.9% of the time, Gemini selected option (D) correctly 78.9% of the time, and Perplexity picked option (C) correctly 88.9% of the time. ChatGPT exhibited significant concordance (81-83%) with the researchers' performance, whereas the other AI tools showed only minor agreement that lacked statistical significance.

Conclusion

ChatGPT, Gemini, and Perplexity perform adequately overall on research-related questions, but, depending on the AI in use, improvement is needed in certain research categories. The involvement of an expert in the research publication process remains a fundamental cornerstone for ensuring the quality of the work.
</Abstract>
  </Article>
</ArticleSet>
