Non-Recommended Publishing Lists: Strategies for Detecting Deceitful Journals
Abstract
The rapid growth of open access publishing (OAP) has significantly improved the accessibility and dissemination of scientific knowledge. However, this expansion has also contributed to the emergence of non-recommended journals (NRJs) and non-recommended publishers (NRPs) that exploit researchers by offering rapid publication without maintaining proper peer-review standards. Such practices threaten the credibility of scholarly communication and may lead to the dissemination of low-quality or unreliable research. In response to this challenge, several initiatives have developed lists and monitoring systems to help researchers identify potential NRJs. This study reviews major NRJ and NRP evaluation platforms, including Beall’s List, Cabells’ Predatory Reports, Kscien’s List, Predatory Reports, Academic Journal Predatory Checking (AJPC), the Early Warning List of International Journals, Kanalregister, the Open Access Journal List, the Journal Insights Predatory List, and the International Journals Blacklist. The review examines their objectives, operational approaches, strengths, and limitations in identifying deceptive publishing practices. The findings indicate that while these lists serve as useful preliminary screening tools, they vary in transparency, accessibility, evaluation criteria, and governance structures. Some rely on expert assessment, whereas others apply automated or bibliometric methods. Despite their usefulness, no single list can comprehensively identify all NRJs, and misclassification may occur. Therefore, researchers should use these tools cautiously and combine them with independent evaluation of journal credibility to ensure responsible publishing and maintain the integrity of scientific communication.
Introduction
Many scholars agree that one of the main ways to evaluate an academic’s performance is by looking at their published research in scientific journals and conferences [1]. In recent years, academic publishing has increasingly shifted toward open access publishing (OAP). This model uses digital platforms to make research freely available to everyone. Readers can access, download, copy, and share the work without restrictions. Because open access articles are widely available, they often receive more attention and citations. For this reason, OAP is attractive to both authors and journals. However, in most cases, researchers must pay article processing charges (APCs) to publish their work [2]. Although OAP has many advantages and is expected by many to become the standard model for academic publishing, its rapid growth has also led to serious problems. One major issue is the rise of non-recommended journals (NRJs) and non-recommended publishers (NRPs). The term “predatory” originally comes from biology and refers to harming or exploiting others for personal gain. According to the Merriam-Webster dictionary, it involves intentional exploitation for profit. In 2010, Jeffrey Beall first used the term to describe journals and publishers that claim to conduct peer review but either do not perform it properly or skip it entirely [3].
Today, many journals are run by unethical publishers whose main goal is to collect fees from researchers who are eager to publish their work [4]. Non-recommended journals harm researchers, damage the integrity of science, and weaken the communication of reliable knowledge. Under the open access model, publishers earn more money for every article they publish. However, a proper peer review process often results in the rejection of weak or low-quality manuscripts. This conflicts with the financial interests of NRPs. As a result, they may bypass or rush the peer review process and ignore reviewers’ comments to accept more papers and increase their profits [5]. This weakness in the OAP system has led to the publication of a large number of low-quality papers that now fill the scientific literature. It has also encouraged the growth of publishers who focus more on making money than on maintaining scientific standards. In some cases, it has become difficult to clearly distinguish between real scientific research and pseudoscience [5].
In response to these challenges, scholars have sought more precise terminology to describe problematic journals. The term “non-recommended journals” was first introduced by Kakamad et al. during the 18th meeting of the European Association of Science Editors held in Oslo, Norway. In that context, the authors argued that this terminology provides a broader and more practical framework for classifying journals that do not meet acceptable standards of scientific publishing. Unlike the term “predatory journals,” which implies clear misconduct, “non-recommended journals” encompasses the expanding gray zone of journals that lie between fully legitimate and clearly predatory entities: those with variable or minimal peer review and questionable editorial practices. Additionally, the adoption of NRJs was proposed as a strategic approach to reduce potential legal challenges, as it avoids direct accusations while still allowing institutions and researchers to make informed and cautious publication decisions [6].
The present study aims to review existing lists that identify NRJs. It will analyze their strengths and weaknesses and evaluate how effective and comprehensive they are in detecting such journals. Through this review, the study seeks to provide useful insights into the value of these lists and to support ongoing efforts to reduce the impact of NRPs in academia.
Non-Recommended Publishing Lists
Non-recommended publishing lists have emerged as strategic tools to identify and discourage journals and publishers that engage in deceptive scholarly practices. These lists were developed in response to the rapid expansion of OAP and the increasing exploitation of researchers through inadequate peer review, fabricated metrics, and misleading editorial policies [7]. Table 1 provides a comparative overview of the major NRP lists.
Table 1. Comparative overview of major non-recommended publishing lists.

| List / Platform | Year Introduced | Responsible Entity | Type | Accessibility | Evaluation Method | Main Focus | Key Strength | Main Limitation |
|---|---|---|---|---|---|---|---|---|
| Beall’s List | 2008 | Jeffrey Beall, University of Colorado Denver | Blacklist | Public (archived/replicated sites) | Manual evaluation using qualitative criteria | Predatory publishers, standalone journals, hijacked journals, misleading metrics | First systematic attempt to document predatory publishers | Criticized for subjectivity and lack of transparent methodology |
| Cabells’ List | 2017 | Cabells International | Commercial blacklist | Subscription-based | Structured evaluation using 70+ violation criteria | Predatory journals and unethical publishing practices | Formalized, criteria-based assessment system | Limited accessibility due to subscription and opaque review process |
| Kscien’s List | 2020 | Kscien Organization (research NGO) | Public blacklist | Open access | Committee-based evaluation and research-driven criteria | Non-recommended publishers, journals, hijacked journals, conferences | Focus on protecting researchers in developing countries | Criteria still evolving and coverage may be incomplete |
| Predatory Reports | Unknown | Anonymous volunteer researchers | Public blacklist | Open access | Compilation of publicly available evidence | Predatory publishers, standalone journals | Freely accessible and community-driven | Limited transparency regarding governance and evaluation procedures |
| Academic Journal Predatory Checking (AJPC) | 2023 | Academic researchers | Automated detection system | Public tool | Machine learning and natural language processing | Identification of predatory journal websites | Scalable and data-driven evaluation approach | Dependent on training datasets and may produce classification errors |
| Early Warning List of International Journals | 2020 | Chinese Academy of Sciences (NSL) | Risk-alert system | Public access | Bibliometric indicators and expert review | Predatory journals | Institutional authority and data-driven evaluation | Limited transparency of weighting methodology |
| Kanalregister (Norwegian Register) | 2010 | Norwegian Directorate for Higher Education and Skills | National registry | Public access | NA | Predatory publishers, predatory journals | Strong institutional governance and quality control | Designed primarily for the Norwegian research evaluation system |
| Open Access Journal List | 2017 | Undisclosed private platform | Public blacklist | Open access | Website-based reporting | Journals suspected of predatory practices | Easily accessible awareness tool | Limited transparency of inclusion criteria |
| Journal Insights Predatory List | 2024 | Journals Insights platform | Public blacklist | Open access | Website-based reporting | Journals with questionable publishing practices | Simple and publicly available reference | Methodology and governance details are limited |
| International Journals Blacklist | Unknown | Journal Index platform | Public blacklist | Open access | Website-compiled reports | Journals suspected of deceptive practices | Easy access and quick verification | Lack of detailed evaluation methodology |
These initiatives employ different strategies to identify questionable journals and publishers. Some systems rely primarily on expert evaluation and manual assessment, while others apply structured criteria, bibliometric indicators, or automated computational techniques to detect irregular publishing patterns [7,8]. Table 2 summarizes the main methodological approaches used by these systems.
Table 2. Methodological approaches used by non-recommended journal detection systems.

| System / List | Detection Approach | Key Indicators Used | Level of Automation | Transparency of Criteria |
|---|---|---|---|---|
| Beall’s List | Expert judgment | Editorial board legitimacy, peer-review practices, indexing claims, APC transparency | Manual | Moderate (criteria published but subjective) |
| Cabells’ List | Structured criteria-based evaluation | 74 violation indicators including editorial misconduct, plagiarism, fake metrics | Manual with structured framework | High (defined violation criteria) |
| Kscien’s List | Committee-based evaluation | Journal conduct, peer-review integrity, sting operations, fabricated quality indicators | Manual (committee review) | High (criteria evolving) |
| Predatory Reports | Evidence compilation | Public reports, suspicious editorial practices, deceptive metrics | Manual | Limited |
| Academic Journal Predatory Checking (AJPC) | Machine learning / AI detection | Website structure, textual features, metadata analysis | Automated | High (algorithm described in research) |
| Early Warning List of International Journals | Bibliometric risk assessment | Citation patterns, publication growth, self-citation rates, retractions | Semi-automated + expert review | Moderate |
| Kanalregister | Expert disciplinary evaluation | Editorial quality, peer review, academic relevance | Manual (expert committees) | High |
| Open Access Journal List | Website reporting | Misleading metrics, solicitation practices, unclear editorial policies | Manual | Limited |
| Journal Insights Predatory List | Online compilation | Questionable peer review, editorial transparency issues | Manual | Limited |
| International Journals Blacklist | Online reporting system | Misleading indexing claims, editorial misconduct | Manual | Limited |
Beall’s List
Jeffrey Beall first began compiling what later became known as Beall’s List in 2008 after receiving numerous unsolicited emails inviting him to serve on editorial boards of unfamiliar journals. Initially, the list attracted limited attention; however, by mid-2010, it had gained widespread recognition within the academic community. Beall, an American librarian and associate professor at the University of Colorado Denver, developed the list with the stated objective of helping researchers identify recommended scholarly journals and avoid deceptive publishers [7,9].
Beall’s List consisted of four principal categories. The first included publishers considered to be of low quality or classified as NRPs. The second comprised standalone journals suspected of engaging in predatory practices. The third category addressed “hijacked journals,” referring to fraudulent journals that deliberately adopted names resembling established and reputable titles in order to mislead authors [7]. The fourth category, titled Misleading Metrics, identified companies and journals that promoted fabricated or unreliable journal impact measures. Non-recommended publishers were aware that academics generally prefer to publish in journals with recognized Impact Factors (IF), which are calculated annually and reported in Journal Citation Reports®. Since obtaining an authentic Impact Factor requires sustained performance and may take several years, some NRPs circumvented this process by inventing false metrics [10]. Beall documented such entities under the category of misleading metrics [7].
The criteria Beall used to identify NRJs were extensive and included unclear, duplicated, or absent editors; lack of transparent editorial board information; unclear peer-review processes; undisclosed article processing charges; fabricated indexing claims; plagiarism; image manipulation; absence of retraction policies; and misleading journal titles [11,12]. These warning signs were intended to help authors distinguish between recommended journals and deceptive publishing practices.
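Warning signs of this kind lend themselves to a simple screening checklist. As a hedged illustration only, the sketch below encodes a few Beall-style red flags and a naive decision rule; the item wording and the two-flag threshold are our simplification, not Beall’s published criteria:

```python
# Simplified checklist screening inspired by Beall-style warning signs.
# The items and the decision rule are illustrative assumptions, not
# Beall's actual criteria or weighting.

WARNING_SIGNS = [
    "editorial board not identified",
    "peer-review process undisclosed",
    "article processing charges hidden",
    "indexing claims unverifiable",
    "journal title mimics an established journal",
]

def screen_journal(observed_signs: set[str]) -> tuple[int, bool]:
    """Return (number of warning signs present, whether to flag for review).

    Here two or more matched signs trigger a flag; real screening would
    weigh the evidence and always involve human judgment.
    """
    hits = sum(1 for sign in WARNING_SIGNS if sign in observed_signs)
    return hits, hits >= 2
```

A single red flag is rarely conclusive on its own, which is why the sketch only flags a journal when multiple signs co-occur.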
Although Beall’s blog provided significant assistance to scholars and contributed meaningfully to safeguarding academic integrity, it also attracted substantial criticism. Some inclusion decisions were viewed as subjective and lacking standardized parameters [12]. Critics argued that the evaluation process relied heavily on Beall’s personal judgment rather than a transparent scoring framework [12,13]. In early 2017, Beall removed his list without fully explaining the reasons for this decision [13]. The discontinuation occurred at a time when awareness of non-recommended publishing was increasing and many researchers relied on the list for guidance. Subsequent initiatives were introduced to address perceived methodological limitations and to improve transparency in journal evaluation processes [12,13].
Cabells’ List
Cabells’ List was introduced in 2017 by Cabell Publishing Co., a U.S.-based company, following the discontinuation of Beall’s List [14]. The initiative was designed to provide a more structured and systematic approach to identifying NRJs. Unlike Beall’s independently maintained blog, Cabells operates as a commercial product consisting of two databases: Journalytics, a whitelist of verified journals, and Predatory Reports, a blacklist of journals identified as engaging in deceptive practices associated with NRJs [14,15]. Cabells developed a formal evaluation framework based on 74 defined violation criteria intended to assess non-recommended publishing behaviors [14]. These criteria include misleading metrics, editorial misconduct, lack of transparent peer review, plagiarism, false indexing claims, absence of digital preservation policies, and unethical APC practices [14,16]. Access to Predatory Reports is subscription-based, and the identities of the evaluators responsible for journal inclusion decisions are not publicly disclosed, particularly for journals classified as NRJs [14].
One of the primary strengths of Cabells’ List is its structured, criteria-based methodology. By publishing a detailed set of violation indicators, Cabells aimed to reduce the subjectivity associated with earlier blacklists and to provide a reproducible framework for journal assessment [14]. The presence of both whitelist and blacklist databases allows institutions to evaluate journals from both positive and negative perspectives [15]. Additionally, Cabells’ database is periodically updated and has been adopted by many academic libraries and research institutions as a formal decision-support tool [15,17].
Despite these advantages, Cabells’ List has been subject to critical evaluation in the scholarly literature. Studies have argued that a substantial portion of the 74 criteria may be redundant or may assess journal quality rather than clear evidence of practices associated with NRJs [7]. Further analysis has revealed inconsistencies in violation assignments and cases where journals listed as NRJs demonstrated characteristics typical of recommended journals, including valid digital object identifiers (DOIs) [17]. Another major limitation is restricted accessibility due to its subscription model, which may disadvantage researchers from low- and middle-income countries [18].
Kscien’s List
The pressure to operate within academic systems that reward researchers primarily for publication quantity, often with limited institutional guidance, particularly in developing countries, has led many scholars to unintentionally submit their work to NRJs. These journals commonly rely on aggressive email solicitation and present themselves in misleading ways to attract submissions [4]. Jeffrey Beall emphasized that such journals undermine scientific integrity by neglecting or fabricating the peer-review process, which he regarded as fundamental to scholarly communication [7]. As NRJs continuously modify their strategies to avoid detection, initiatives such as Kscien’s List have become increasingly necessary.
Kscien Organization is a non-profit, non-governmental initiative established by young researchers with the aim of strengthening research culture in developing countries. Its mission is to discourage NRJs and promote high-quality scientific communication, particularly in regions where researchers are more vulnerable to deceptive publishing practices. Evidence suggests that scholars from developing countries are disproportionately targeted by such journals compared with those in more established research environments. Kscien’s core values include scientific rigor, respect, excellence, collaboration, quality, and constructive competition. Its long-term vision focuses on advancing the standard of scientific research in developing nations [2].
To achieve these objectives, Kscien formed a specialized body known as the NRJs Committee, composed of 23 trained researchers. The committee is responsible for maintaining and updating the list while monitoring emerging tactics used by NRJs and NRPs. Current identification criteria emphasize journal conduct, fabrication of quality indicators, and inadequate or questionable peer-review practices. Ongoing efforts seek to develop more objective, evidence-based standards in response to earlier criticisms directed at subjective evaluation methods. Similar to Beall’s framework, Kscien’s List is structured into four primary categories: NRPs, non-recommended standalone journals, hijacked journals, and misleading metrics. A distinguishing feature, however, is its commitment to refining inclusion criteria through committee-based assessment and research-driven improvements [7]. The list is publicly accessible through the Kscien website (https://kscien.org/about-non-recommended-journals-lists/), allowing researchers worldwide to verify journal status.
As NRJs adopt increasingly sophisticated strategies, such as developing professional websites, obtaining indexing in recognized databases, securing sponsorship from recommended institutions, offering free publication for hidden motives, fabricating publication archives, and implementing superficial plagiarism checks, Kscien has expanded its scope accordingly. In addition to its original four categories, it introduced two independent lists: the “Conference List” and the “Cumulative List.” The Conference List identifies non-recommended conferences (NRCs), whether organized independently or under institutional sponsorship. Previously, one limitation of Kscien’s framework was its focus on publishers without clearly distinguishing the individual journals operating under them, which occasionally caused confusion among researchers. The Cumulative List was therefore established to separately catalogue journals affiliated with NRPs, thereby improving clarity and usability of the classification system [19].
Predatory Reports
Predatory Reports operates as an anonymous initiative, and publicly available information regarding its organizational structure and date of establishment remains limited. The platform maintains two primary databases: the Predatory Journal List and the Predatory Publisher List. According to its description, the initiative is supported by volunteer researchers who have experienced the adverse effects of deceptive publishing practices and seek to assist scholars in identifying trustworthy journals and publishers [20].
The organization states that its resources are provided free of charge and that the website operates without advertisements or external corporate sponsorship. It reports being self-funded and explains that its anonymity is motivated by concerns about potential legal actions from entities with aggressive practices. Predatory Reports clarifies that it does not claim authoritative status; rather, it aims to collect, organize, and present publicly accessible information for academic use [20].
Furthermore, the website indicates that referenced materials are included to enable users to assess the evidence independently and draw informed conclusions regarding the NRJs and NRPs listed [20].
Academic Journal Predatory Checking
Academic Journal Predatory Checking (AJPC) represents a technology-driven approach to identifying potentially deceptive journals. Unlike traditional blacklists that rely primarily on manual evaluation, AJPC was developed as an automated system utilizing machine learning techniques to classify journals as NRJs or recommended journals. The system was introduced in 2023 through a peer-reviewed publication describing its design, methodology, and predictive performance [8].
The AJPC operates by extracting features from journal websites and analyzing textual and structural characteristics through natural language processing and algorithmic modeling. The system was trained on datasets derived from previously identified NRJs and recommended journals. Based on these inputs, it generates a predictive classification intended to assist researchers in evaluating journal credibility. By applying computational methods, AJPC seeks to provide a scalable and efficient screening mechanism capable of processing large volumes of journal data. One notable strength of AJPC is its automation capacity. Traditional evaluation methods often require extensive manual review, which can be time-consuming and subject to individual interpretation. In contrast, AJPC offers rapid assessment through a structured algorithmic framework. Additionally, the system’s development and methodology were described in a peer-reviewed scientific article, enhancing transparency compared to informal or anonymously managed lists [8,21].
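The published AJPC model itself is not reproduced here, but the general idea of scoring journal websites from textual features can be sketched in plain Python. Everything in this sketch is an illustrative assumption: the phrase list, weights, and threshold are invented for demonstration, whereas AJPC uses trained machine-learning classifiers over much richer features:

```python
# Illustrative sketch of feature-based journal-site screening.
# The phrases, weights, and threshold below are invented assumptions;
# they are NOT the actual AJPC model, which learns its decision
# boundary from labeled training data.

SUSPICIOUS_PHRASES = {
    "rapid publication": 2.0,      # promises of unrealistically fast review
    "guaranteed acceptance": 3.0,  # no legitimate journal guarantees acceptance
    "pay after acceptance": 1.5,   # fee structure used as a lure
    "impact factor": 0.5,          # weak signal; legitimate journals mention IF too
}

def extract_features(page_text: str) -> dict:
    """Count occurrences of each suspicious phrase in the page text."""
    text = page_text.lower()
    return {phrase: text.count(phrase) for phrase in SUSPICIOUS_PHRASES}

def risk_score(page_text: str) -> float:
    """Weighted sum of phrase counts; higher means more suspicious."""
    feats = extract_features(page_text)
    return sum(SUSPICIOUS_PHRASES[p] * n for p, n in feats.items())

def classify(page_text: str, threshold: float = 3.0) -> str:
    """Binary label from the score; a real system learns this boundary."""
    return "non-recommended" if risk_score(page_text) >= threshold else "unclear"
```

In a trained system such as AJPC, both the features and the threshold are fitted to labeled examples rather than hand-set, which is precisely why dataset quality dominates its accuracy.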
However, the tool is not without limitations. As with any machine-learning model, the accuracy of AJPC depends heavily on the quality, representativeness, and currency of its training datasets. If the underlying data contain biases or outdated classifications, predictive errors such as false positives or false negatives may occur. Furthermore, automated systems may not fully capture nuanced editorial misconduct or evolving deceptive strategies that require expert judgment. Therefore, AJPC should be considered a supplementary screening instrument rather than a definitive authority in journal classification.
Overall, AJPC reflects an important methodological shift toward computational detection of non-recommended publishing practices (NRPPs). While promising in scalability and efficiency, its outputs should be interpreted cautiously and complemented by critical human evaluation.
Early Warning List of International Journals
The Early Warning List of International Journals is an evaluative initiative developed under the guidance of the National Science Library (NSL) of the Chinese Academy of Sciences (CAS). First introduced in 2020, the list was designed to promote research integrity and support informed publishing decisions among researchers, particularly within China’s academic community. According to the official platform, the initiative aims to provide risk alerts regarding journals that may exhibit irregular publishing behaviors or questionable editorial standards, thereby safeguarding the credibility of scientific output [22].
The Early Warning List does not function as a traditional blacklist; rather, it categorizes journals into graded risk levels (typically low, medium, and high risk) based on comprehensive bibliometric and qualitative indicators. These indicators include publication growth rates, citation patterns, self-citation frequency, peer-review duration, retraction records, article processing charges, and international authorship diversity. By integrating multiple measurable parameters, the system seeks to identify abnormal publication trends that may indicate compromised editorial practices or insufficient quality control mechanisms [8].
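The NSL does not publish its exact weighting, so the following is purely an illustration of how such indicators might be combined into a graded label. The indicator names, cutoffs, and point scheme are hypothetical assumptions, not the actual methodology:

```python
# Hypothetical sketch of graded risk assessment from bibliometric indicators.
# All thresholds and the point scheme are invented for illustration; the
# NSL's actual weighting methodology is not publicly documented.

def risk_points(self_citation_rate: float,
                yearly_growth: float,
                retraction_count: int) -> int:
    """Accumulate one point per indicator exceeding an (assumed) cutoff."""
    points = 0
    if self_citation_rate > 0.30:  # assumed cutoff for excessive self-citation
        points += 1
    if yearly_growth > 0.50:       # assumed cutoff for abnormal output growth
        points += 1
    if retraction_count >= 5:      # assumed cutoff for retraction record
        points += 1
    return points

def risk_grade(points: int) -> str:
    """Map accumulated points onto the three published grade labels."""
    return {0: "low", 1: "medium"}.get(points, "high")
```

Even in this toy form, the graded output shows why such a system can signal risk without issuing a binary predatory/legitimate verdict.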
Governance of the list is institutionally structured. The evaluation process is reportedly conducted by expert panels supported by bibliometric data analysis, ensuring that the assessment combines quantitative evidence with disciplinary expertise. Updates are released periodically, reflecting dynamic changes in journal performance metrics. Importantly, the official documentation emphasizes that inclusion in the Early Warning List does not necessarily imply misconduct, but rather signals potential risks requiring careful consideration by authors [22].
One of the principal strengths of the Early Warning List lies in its institutional legitimacy and policy integration. As an initiative supported by a major national research body, it carries significant authority and has influenced research evaluation practices. Its reliance on measurable indicators provides a structured and data-driven framework for journal risk assessment. Furthermore, the graded classification system allows nuanced interpretation rather than binary labeling [8].
Nevertheless, several limitations have been noted in scholarly discussions. The full methodological weighting of evaluation indicators is not entirely transparent in public documentation, which may restrict independent verification. Additionally, certain bibliometric patterns used as warning signals, such as rapid publication growth or high citation concentration, may also occur in recommended journals undergoing expansion or specialization. Consequently, while the Early Warning List serves as an important advisory mechanism, its classifications should be interpreted cautiously and supplemented with independent journal evaluation.
Kanalregister
Kanalregister is an online registry developed in Norway to document and monitor scholarly publication channels used within national research evaluation systems. The registry is maintained by the Norwegian Directorate for Higher Education and Skills (HK-dir) and is closely associated with the Norwegian Register for Scientific Journals, Series, and Publishers. Its primary objective is to provide an official overview of recognized scholarly publication outlets, including journals, book series, and academic publishers, thereby supporting transparency and standardization in research assessment [23].
The registry classifies publication channels according to a structured evaluation system that distinguishes between recognized scholarly outlets and those that do not meet the required academic standards. Journals and publishers included in the database are assessed based on criteria such as editorial quality, peer-review procedures, academic relevance, and disciplinary recognition. Within the Norwegian model, publication channels are typically categorized into levels reflecting their academic prestige and influence. This classification framework plays a significant role in Norway’s research funding and evaluation system, where academic publications contribute to institutional performance indicators [23,24].
Governance of the registry is organized through national disciplinary committees composed of academic experts. These committees evaluate journals and publishers within their respective fields and make decisions regarding their inclusion or exclusion from the registry. The database is regularly updated, and researchers can freely access the platform to verify whether a specific journal or publisher is recognized within the Norwegian scholarly communication system [23].
One of the major strengths of Kanalregister is its institutional oversight and transparent governance structure. Because the registry operates under a national research policy framework and involves expert review panels, it provides a structured and authoritative reference for identifying credible scholarly publication channels. Furthermore, its integration with Norway’s research funding model encourages researchers to publish in recognized academic outlets [24].
However, the registry also has certain limitations. As it was primarily designed to support the Norwegian research evaluation system, its classifications may not fully represent the diversity of global publishing practices. In addition, the absence of a journal from the registry does not necessarily imply that it is an NRJ; rather, it may simply fall outside the scope of the Norwegian evaluation framework. Therefore, while Kanalregister serves as a useful verification tool, it should be used alongside other sources when assessing the credibility of publication venues [24].
Open Access Journal List
The Open Access Journal List (OAJL) is an online resource that compiles NRJs within academic publishing. The platform aims to raise awareness of NRPPs and assist researchers in identifying journals that may not follow standard scholarly publishing principles. According to the website, the list includes journals suspected of demonstrating problematic practices such as misleading journal metrics, unclear peer-review procedures, aggressive email solicitation, and insufficient editorial transparency [25].
The platform provides its information in an openly accessible format, allowing researchers to review the listed journals without subscription or registration. Such publicly available lists can serve as practical reference tools for researchers, particularly early-career scholars who may lack adequate guidance when selecting appropriate publication venues. By highlighting journals that may exhibit deceptive or irregular publishing behavior, the OAJL attempts to support more informed decision-making within the academic community [16,25].
One of the primary strengths of the OAJL is its accessibility. The information is freely available online, allowing researchers to quickly review journals that may have NRPPs [25]. Public lists of NRJs can help increase awareness of NRPPs and assist authors in avoiding problematic outlets [16].
Despite its usefulness as a preliminary reference tool, the OAJL has several limitations. The criteria and evaluation methods used to identify listed journals are not clearly explained, which may reduce transparency and make independent verification difficult [25]. Furthermore, scholars have noted that blacklist-based approaches, in general, may face challenges related to inconsistent inclusion criteria, limited transparency in governance, and the potential for misclassifying journals [26].
Journal Insights Predatory List
The Journal Insights Predatory List is an online resource that compiles journals suspected of engaging in NRPPs. The platform aims to assist researchers in identifying journals that may not follow accepted standards of scholarly publishing. According to the website, the list highlights journals that may demonstrate problematic behaviors such as misleading impact metrics, questionable peer-review processes, aggressive email invitations to authors, and limited transparency regarding editorial policies. By providing this information, the platform seeks to raise awareness about non-recommended publishing and help authors make more informed decisions when selecting journals for manuscript submission. The website presents the information in a publicly accessible format, allowing users to review the listed journals without requiring registration or subscription. The platform organizes its content as a reference list of journals considered potentially problematic, thereby functioning as an awareness tool for researchers who wish to verify the credibility of publication outlets before submitting their work [27].
One of the main strengths of the Journal Insights Predatory List is its open accessibility. The information is freely available online, which allows researchers to quickly consult the list and identify journals that may have been associated with NRPPs. Publicly accessible resources such as this can contribute to increasing awareness of NRPPs and help researchers, particularly early-career authors, recognize warning signs when evaluating potential journals.
Despite its usefulness as an informational resource, the Journal Insights Predatory List has certain limitations. The website provides limited details on the methodology used to determine journal inclusion and does not fully explain the evaluation criteria. Additionally, information about the governance structure or editorial oversight behind the list is not provided in detail [27]. Because of these limitations, the list should be used cautiously, preferably alongside other journal evaluation tools, when assessing the credibility of scholarly publication venues.
International Journals Blacklist
The International Journals Blacklist, hosted on the Journal Index platform, is an online resource that compiles journals suspected of demonstrating characteristics commonly associated with NRPPs. The platform aims to inform researchers about journals that may lack transparent editorial policies, credible peer-review procedures, or reliable publication standards. According to the website, the blacklist includes journals that have been reported for issues such as misleading indexing claims, unclear editorial oversight, aggressive solicitation of manuscripts, and questionable publication metrics [28].
The International Journals Blacklist is presented as a publicly accessible reference list that researchers can consult when evaluating potential publication venues. By providing information about journals suspected of engaging in deceptive publishing behavior, the platform attempts to raise awareness of NRPPs and support researchers in identifying potentially problematic journals [28]. Public resources that highlight questionable publication outlets can contribute to broader efforts within the academic community to recognize and avoid NRJs [16].
However, several limitations should also be considered when using the International Journals Blacklist. The platform provides limited information about the criteria and methodology used to determine journal inclusion, which may reduce transparency and make independent verification difficult [28]. In addition, studies have indicated that blacklist-based approaches may face challenges related to inconsistent evaluation standards and the potential misclassification of journals. Consequently, such lists should be used cautiously and preferably in combination with other journal evaluation tools when assessing the credibility of scholarly publication venues [26].
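The recommendation to consult several lists in combination rather than relying on any single one can be illustrated with a short script. The sketch below assumes a researcher has saved local copies of a few lists as simple collections of journal titles; all list contents and journal names shown here are hypothetical examples, not the actual entries of Beall's List, Kscien's List, or any other resource.

```python
# Minimal sketch: cross-checking one journal title against several NRJ lists.
# All journal titles and list contents below are hypothetical examples.

def normalize(title):
    """Lowercase and collapse whitespace so trivial variants still match."""
    return " ".join(title.lower().split())

def check_journal(title, lists):
    """Return the names of every consulted list on which the title appears."""
    key = normalize(title)
    return [name for name, titles in lists.items()
            if key in {normalize(t) for t in titles}]

# Hypothetical local copies of three lists (real lists hold thousands of titles).
LISTS = {
    "Beall's List": {"Journal of Advanced Everything"},
    "Kscien's List": {"Journal of Advanced Everything",
                      "Global Reports in All Sciences"},
    "Predatory Reports": {"Global Reports in All Sciences"},
}

hits = check_journal("journal of  advanced everything", LISTS)
print(hits)  # the title matches two of the three consulted lists
```

Because no single list is comprehensive and misclassification occurs, an empty result here would not establish that a journal is legitimate; it only means the consulted lists did not flag it, and independent evaluation remains necessary.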
Future Directions
Future efforts to address NRPPs should focus on improving transparency, standardization, and collaboration among journal evaluation initiatives. One important direction is the development of clearer and more consistent criteria for identifying NRJs. Establishing standardized evaluation frameworks could reduce subjectivity and improve comparability among different blacklist systems.
Another promising direction involves integrating technological tools, such as machine learning and automated detection systems, to help identify suspicious publishing patterns. These computational approaches may help analyze large numbers of journals more efficiently and detect emerging NRJ behaviors that may not be immediately visible through manual evaluation alone.
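To make the idea of automated screening concrete, a minimal rule-based scorer over journal metadata might look like the following sketch. The red-flag criteria, field names, weights, and threshold are illustrative assumptions only; operational systems such as AJPC use far richer features and trained models rather than hand-set rules.

```python
# Minimal rule-based sketch of automated NRJ screening.
# The flags and weights below are illustrative assumptions, not the
# criteria of AJPC or any other real detection system.

RED_FLAGS = [
    # (description, predicate over a journal-metadata dict, weight)
    ("claims an unverifiable impact factor",
     lambda j: j.get("impact_factor_claimed") and not j.get("impact_factor_verified"), 3),
    ("promises peer review in under a week",
     lambda j: j.get("review_days", 999) < 7, 2),
    ("sends bulk solicitation emails",
     lambda j: j.get("mass_emails", False), 2),
    ("editorial board not publicly listed",
     lambda j: not j.get("editorial_board_public", True), 3),
]

def screen(journal, threshold=5):
    """Sum the weights of triggered flags; at or above threshold = suspicious."""
    score = sum(w for _, pred, w in RED_FLAGS if pred(journal))
    return score, score >= threshold

# Hypothetical metadata for one journal.
example = {
    "impact_factor_claimed": True,
    "impact_factor_verified": False,
    "review_days": 3,
    "mass_emails": True,
    "editorial_board_public": False,
}
score, suspicious = screen(example)
print(score, suspicious)  # → 10 True
```

A machine-learning variant would replace the hand-set weights with parameters learned from labeled examples, but the pipeline shape (extract journal features, score, flag for human review) is the same, which is why such tools are best treated as triage aids rather than final verdicts.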
Greater collaboration between academic institutions, libraries, research organizations, and publishing experts is also needed to strengthen monitoring systems and ensure that journal assessment methods remain reliable and up to date. Institutional support and training programs can play a crucial role in educating researchers, particularly early-career scholars, about responsible publishing practices and the evaluation of journal credibility.
In addition, improving accessibility to reliable journal evaluation resources is essential, especially for researchers in developing countries who may have limited access to subscription-based databases. Expanding open, publicly accessible verification tools could help reduce researchers' vulnerability to NRPPs.
Finally, future research should continue to examine the effectiveness of existing NRJ lists and explore new approaches for monitoring the rapidly evolving scholarly publishing landscape. By combining transparent evaluation frameworks, technological innovation, and global collaboration, the academic community can develop more effective strategies to safeguard the quality and credibility of scientific communication.
Conclusion
Non-recommended publishing remains a growing challenge in the academic community. The lists reviewed in this study can help researchers recognize potentially deceptive journals, but they should be used carefully and alongside critical evaluation of journal practices. Ultimately, maintaining research integrity requires awareness, careful journal selection, and continued efforts to improve transparency in scholarly publishing.
Declarations
Conflicts of interest: The authors have declared that no competing interests exist.
Ethical approval: Not applicable.
Patient consent (participation and publication): Not applicable.
Funding: The present study received no financial support.
Acknowledgements: None to be declared.
Authors' contributions: FHK and BAA conceived and designed the study and performed the literature search. BAA drafted the manuscript. DAH, AMM, HAN, and MMA contributed to data interpretation and critical revision. SHM, ZDH, SHK, SSO, and KAN contributed to the literature review, study design, critical revision, and table preparation. DAO, MQM, and AMS contributed to the literature review and critical revision. All authors approved the final manuscript.
Use of AI: ChatGPT Plus (OpenAI, version 5.3) was utilized exclusively for language editing and clarity enhancement. The authors independently reviewed and validated all content and take full responsibility for the accuracy, integrity, and originality of the manuscript.
Data availability statement: The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
References
- Asadi A. Invitation to speak at a conference: The tempting technique adopted by predatory conferences' organizers. Science and Engineering Ethics. 2019;25(3):975-9. doi:10.1007/s11948-018-0038-0
- Kakamad FH. Cardiothoracic and vascular surgery is not immune to predatory publishing. Edorium J Cardiothorac Vasc Surg. 2017;4(1):1-3. doi:10.5348/C04-2017-11-ED-3
- Ayeni PO, Adetoro N. Growth of predatory open access journals: implication for quality assurance in library and information science research. Library Hi Tech News. 2017;34(1):17-22. doi:10.1108/LHTN-10-2016-0046
- Al-Khatib A. Protecting authors from predatory journals and publishers. Publishing Research Quarterly. 2016;32(4):281-5. doi:10.1007/s12109-016-9474-3
- Beall J. Best practices for scholarly authors in the age of predatory journals. The Annals of The Royal College of Surgeons of England. 2016;98(2):77-9. doi:10.1308/rcsann.2016.0056
- Hussein Kakamad F, Abdallla BA, Mohammed SH. Shifting from 'Predatory Journals' to 'Non-Recommended Journals': A proposal to reduce conflicts and promote ethical discourse. In: 18th EASE General Assembly and Conference; 2025 May 14. ScienceOpen. doi:10.14293/EASE.2025.038
- Kakamad FH, Abdalla BA, Abdullah HO, Omar SS, Mohammed SH, Ahmed SM, et al. Lists of predatory journals and publishers: a review for future refinement. European Science Editing. 2024;50:e118119. doi:10.3897/ese.2024.e118119
- Chen LX, Su SW, Liao CH, Wong KS, Yuan SM. An open automation system for predatory journal detection. Scientific Reports. 2023;13(1):2976. doi:10.1038/s41598-023-30176-z
- Beall J. Predatory journals: Ban predators from the scientific record. Nature. 2016;534(7607):326.
- Carroll CW. Spotting the wolf in sheep's clothing: predatory open access publications. Journal of Graduate Medical Education. 2016;8(5):662-4. doi:10.4300/JGME-D-16-00128.1
- Beall J. Beall's list of predatory publishers 2013. Scholarly Open Access. 2012;34(3):379-88. Available from: https://beallslist.net/
- Richtig G, Berger M, Koeller M, Richtig M, Richtig E, Scheffel J, et al. Predatory journals: Perception, impact and use of Beall's list by the scientific community–A bibliometric big data study. PLoS ONE. 2023;18(7):e0287547. doi:10.1371/journal.pone.0287547
- Murphy JA. Predatory publishing and the response from the scholarly community. Serials Review. 2019;45(1-2):73-8. doi:10.1080/00987913.2019.1624910
- Cabells. Cabells Predatory Report Criteria v1.1. 2019 Mar 20. Available from: https://blog.cabells.com/2019/03/20/predatoryreport-criteria-v1-1/
- Bisaccio M. Cabells' Journal Whitelist and Blacklist: Intelligent data for informed journal evaluations. Learned Publishing. 2018;31(3):243-8. doi:10.1002/leap.1164
- Shamseer L, Moher D, Maduekwe O, Turner L, Barbour V, Burch R, et al. Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Medicine. 2017;15(1):28. doi:10.1186/s12916-017-0785-9
- Dony C, Raskinet M, Renaville F, Simon S, Thirion P. How reliable and useful is Cabell's Blacklist? A data-driven analysis. LIBER Quarterly: The Journal of the Association of European Research Libraries. 2020;30(1):1-38. doi:10.18352/lq.10339
- Saloojee H, Pettifor JM. Maximizing Access and Minimizing Barriers to Research in Low- and Middle-Income Countries: Open Access and Health Equity. Calcified Tissue International. 2024;114(2):83-5. doi:10.1007/s00223-023-01151-7
- Kscien Organization. Predatory publishing. 2018. Available from: https://kscien.org/predatory-publishing/
- Predatory Journals. The list. 2023 [cited 2025 Jan 11]. Available from: https://www.predatoryjournals.org/
- Krawczyk F, Kulczycki E. How is open access accused of being predatory? The impact of Beall's lists of predatory journals on academic publishing. The Journal of Academic Librarianship. 2021;47(2):102271. doi:10.1016/j.acalib.2020.102271
- National Science Library, Chinese Academy of Sciences. Early Warning List of International Journals. 2023 [cited 2025 Jan 11]. Available from: https://earlywarning.fenqubiao.com/#/en/README
- Norwegian Directorate for Higher Education and Skills (HK-dir). Norwegian Register for Scientific Journals, Series and Publishers (Kanalregister). 2024. Available from: https://kanalregister.hkdir.no/
- Sivertsen G. Publication-based funding: The Norwegian model. In: Research assessment in the humanities: Towards criteria and procedures. Cham: Springer International Publishing; 2016. p. 79-90. doi:10.1007/978-3-319-29016-4_7
- Open Access Journal. Predatory Journals List. 2017. Available from: https://www.openacessjournal.com/blog/predatory-journals-list/
- Cobey KD, Lalu MM, Skidmore B, Ahmadzai N, Grudniewicz A, Moher D. What is a predatory journal? A scoping review. F1000Research. 2018;7:1001. doi:10.12688/f1000research.15256.2
- Journals Insights. Predatory Journals List. 2024. Available from: https://journalsinsights.com/predatory-journals-list
- Journal Index. International Journals Blacklist. Available from: https://journal-index.org/index.php/international-journals-blacklist#gsc.tab=0

This work is licensed under a Creative Commons Attribution 4.0 International License.
