Abstract
In academic publishing, there is a need to be able to discern scholarly from unscholarly, deceptive, and/or predatory journals. Predatory journals are not expected among highly ranked journals in reputable databases. SCImago Journal Rank (SJR), which ranks journals into four quartiles (Q1–Q4), acts as a whitelist or safelist for journal selection. Q1 SJR-ranked journals are likely not “predatory.” An artificial intelligence (AI)-based tool, the Academic Journal Predatory Checking (AJPC) system, launched in February 2023, claims to differentiate suspected predatory journals (SPJs) from normal journals. In a 2 June 2023 assessment, the AJPC system classified 27 (or 42%) of the 64 Q1 SJR-ranked library and information science journals as SPJs, the largest share (48%) of which are published by Taylor & Francis. This output is unlikely to be accurate and suggests that this free online AI-driven tool, whose output can be independently verified by anybody, may be providing erroneous, and thus misleading, information.
1 Why do Academics Need to Screen Journals to Assess their Scholarly Nature?
It is generally understood that academics would want to select the best journals in which to represent their work. For many reasons (e.g., insufficiently robust analysis, small sample sizes or insufficient replicates, poor experimental design, insufficient novelty), it might not always be possible for an academic to publish in their desired target journal, so they might seek alternative journals in which to publish their work, particularly in response to one or more desk rejections (Dwivedi et al., 2022). Since competition for space in journals can be strong and given that high barriers to entry exist in some high-ranked journals, it is not uncommon for some authors to experience multiple desk rejections among such journals before their work eventually finds its way into a lower-ranked journal, which may exercise lower standards of screening, have a higher level of acceptance, or a unique editorial view on the work, thereby allowing publication. In some cases, academics might select journals that do not exercise sufficiently stringent academic standards (Frandsen, 2019; Mertkan, Aliusta, & Suphi, 2021). Even though the assessment of work based on the perceived “quality” of research, as reflected in the journal’s impact factor or similar journal-based metrics (JBMs), carries flaws, and should thus not be used in official decision-making (Paulus, Cruz, & Krach, 2018), researchers nonetheless try to publish in the highest-ranked journals, typically first quartile (Q1) journals, because they might receive more benefits, such as funding, than if they were to publish in lower-ranked journals (e.g., in Brazil; McManus & Baeta Neves, 2021), and because papers in such journals tend to gather more citations and/or attention from the peer community (Miranda & Garcia-Carpintero, 2019). Even though researchers should be free to select their target journal of choice, their decision is often determined by their employer’s (e.g., research institute or university) objectives and prerogatives, which tend to place pressure on publication in higher-ranked journals, thereby stimulating the publish-or-perish culture (Grimes, Bauch, & Ioannidis, 2018). That choice might also be biased toward open access (OA) journals, if the perception is that there is a citation advantage (Bautista-Puig, Lopez-Illescas, de Moya-Anegon, Guerrero-Bote, & Moed, 2020). This article does not aim to resolve these issues.
Jeffrey Beall, a US librarian, established a blog with blacklists of what he perceived to be “predatory” OA journals that took advantage of this publishing model to generate as much revenue as possible (Beall, 2017). However, those lists have become outdated following the closure of his blog in 2017, and due to reliability issues, researchers should refrain from using or relying on them for advice or other academic purposes, including the selection of venues in which to publish (or not) their work (Teixeira da Silva & Kendall, 2023a). Despite those efforts, there is little clarity about what a “predatory journal” actually is, and since many facets may enter the realm of deception (Eriksson & Helgesson, 2017), journals in a “gray” zone of scholarly behavior display borderline activity that lies between predation and legitimacy (IAP, 2022; Siler, 2020).
2 Are SJR Q1 Journals Likely to Be “Predatory”? A Generalist’s View
There is ample discussion in the academic literature arguing that the use of JBMs to assess journal quality is fraught with limitations and that more factors need to be benchmarked when deciding the choice of publishing venue (Björk & Holmström, 2006). The topics of JBMs, ranking, indexing, and journal quality are often interlinked and are central to academic performance and librarianship, the former because academics are judged and rewarded based on the venues where they publish, the latter because a journal’s prestige and reputation depend on reliable bibliometric classification, coverage, and ranking in order to attract prospective authors. Coverage of journals among different journal ranking lists (JRLs) can differ, so some academics have attempted to find ways to assess whether journal rankings across such lists are consistent or overlapping (Anderson, Elliott, & Callahan, 2021; Jaafar, Pereira, Saab, & El-Kassar, 2021). Cognizant of a wealth of literature on such comparisons, this article focuses on only one such JRL, the SCImago Journal Rank (SJR), which draws information about journals that are indexed in Elsevier’s Scopus, which itself is a curated database (Baas, Schotten, Plume, Côté, & Karimi, 2020), and one of the largest and most prominent databases, although it is neither error-free nor perfect (Pranckutė, 2021). Moreover, some of the lower-ranked journals may exercise poor scientific standards (Frandsen, 2017). There is a high (>99%) overlap between Scopus-indexed journals and those that are indexed in the Web of Science (Singh, Singh, Karmakar, Leta, & Mayr, 2021). Given its reliability, generally speaking, SJR is used for rewarding researchers’ academic achievement (Garner, Hirsch, Albuquerque, & Fargen, 2018).
SJR is a JRL that ranks journals based on citations (Brown & Gutman, 2019), then classifies them into four groups or quartiles; for example, Q1 represents the top 25%, or first quartile (Ahmad et al., 2019). If one considers an unscholarly journal, i.e., one that does not apply scholarly principles to its publishing model, then it can be argued that Q1-ranked journals in SJR are highly likely to contain fewer such journals than Q4 journals, if any at all, by virtue of the fact that a double-filter system of quality control is in place: the first by the ranked journal itself, the second by an independent team of experts[1] who monitor, filter, and evaluate journals for inclusion in SJR (Sonntag, 2023). Seasoned academics who publish widely and who have experience with journal content and even a journal’s handling of papers, such as peer review, might broadly agree that Q1 journals tend to be more academically stringent than Q4 journals because they might exercise more quality control, although this is a loose generalization because listing and ranking in Q1–Q4 is based on metrics such as citations, and not on quality control. However, robustly published science papers may attract more citations, thereby feeding this quality = rank analogy. This does not always hold, as evidenced by the (sometimes high) citation of retracted and invalidated papers (Frandsen, 2017), but for the purpose of this discussion, and for simplicity’s sake, the rank of a journal (e.g., in SJR) is considered to be proportional to its level of quality control.
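For readers unfamiliar with quartile-based ranking, the minimal sketch below illustrates how journals ordered by a citation-based indicator can be assigned to Q1–Q4. The journal names, scores, and banding rule are illustrative assumptions, not SCImago’s actual procedure.

```python
# Minimal sketch (not SCImago's actual procedure): assign quartiles (Q1 = top 25%)
# to journals ordered by a citation-based indicator such as an SJR-like score.
# Journal names and scores below are illustrative placeholders.
def assign_quartiles(journals: dict[str, float]) -> dict[str, str]:
    """Map each journal to Q1-Q4; higher indicator values rank higher."""
    ranked = sorted(journals, key=journals.get, reverse=True)
    n = len(ranked)
    return {name: f"Q{min(4, i * 4 // n + 1)}" for i, name in enumerate(ranked)}


if __name__ == "__main__":
    scores = {"Journal A": 4.2, "Journal B": 1.1, "Journal C": 0.6, "Journal D": 0.2}
    print(assign_quartiles(scores))  # {'Journal A': 'Q1', ..., 'Journal D': 'Q4'}
```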
The SJR list related to library and information science (LIS) journals (SJR, 2023) is a list of reasonably reliable – in the author’s opinion, and based on experience and indicators in the literature – scholarly LIS journals that academics can submit to, and that tend to exercise acceptable – even high – standards of rigorous screening and peer review. Stated another way, seasoned academics would probably not consider Q1 LIS journals indexed in SJR to be “predatory.” The bibliometric performance of a set of journals blacklisted by Beall was lower than that of an SJR-selected set of journals that were not blacklisted (Moed, Lopez-Illescas, Guerrero-Bote, & de Moya-Anegon, 2022). Can LIS researchers reliably distinguish a predatory from a non-predatory LIS journal?
Academics’ experiences and motivations may differ considerably (Mills & Inouye, 2021), so there is a need to remove human subjectivity from decision-making related to journal quality (Yamada & Teixeira da Silva, 2022), and one potentially effective way to achieve this would be to rely on artificial intelligence (AI). However, to the author’s knowledge, only five studies have proposed AI to resolve the issue of predatory publishing: Adnan et al. (2018), Ateeq and Al-Khalifa (2023), Bedmutha and Bedmutha (2022), Chen, Su, Liao, Wong, and Yuan (2023), and Gaffney and Townsend (2022). This article focuses on the study by Chen et al. (2023).
3 Is Academic Journal Predatory Checking (AJPC) system an Effective AI Tool for Differentiating “Predatory” Journals?
In February of 2023, a ground-breaking paper published in Springer Nature’s Scientific Reports (#11 Q1 journal in SJR’s Multidisciplinary field) announced the development of a simple AI-based tool, the AJPC system, which the authors claimed was able to effectively distinguish suspected predatory journals (SPJs) from normal journals (NJs) (Chen et al., 2023). According to Chen et al., the AJPC system functions by screening a journal’s website for language-based signals, verifying the journal against two blacklists or watchlists (Beall’s Lists and a derivative of Beall’s Lists) and/or one whitelist or safelist (BIH Quest), and classifying it as either an NJ or an SPJ. Prospective authors can verify this classification at AJPC system, via a one-click step after pasting the URL of their target journal into an online prompt. This tool is free and open, so it is an attractive solution. In recent years, academics have relied heavily on two watchlists, Beall’s Lists[2] and Cabells’ Predatory Reports,[3] the former free and the latter pay-walled, but both blacklists have sensitivity issues caused by some unclear criteria, meaning that some journals or publishers are likely incorrectly classified as “predatory,” or vice versa (Teixeira da Silva et al., 2022; Teixeira da Silva, Moradzadeh, Yamada, Dunleavy, & Tsigaris, 2023a). For this reason, AJPC system would be a practical solution for academics around the globe to identify predatory journals, provided it is able to do so effectively.
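To make the described workflow concrete, the following is a minimal conceptual sketch of a URL-based journal checker of the kind Chen et al. describe. It is not the AJPC system’s actual code or criteria: the blacklist, whitelist, and signal phrases are placeholders, and only the overall sequence (list lookup plus language-based screening of the journal’s homepage) mirrors the published description.

```python
# Conceptual sketch only: NOT the AJPC system's actual code, criteria, or lists.
# It mirrors the general workflow described by Chen et al. (list lookup plus
# language-based screening of a journal's homepage); every list entry and
# signal phrase below is a placeholder.
from urllib.parse import urlparse

import requests

BLACKLISTED_DOMAINS = {"example-predatory-publisher.com"}   # placeholder
WHITELISTED_DOMAINS = {"example-reputable-publisher.com"}   # placeholder
SIGNAL_PHRASES = ["guaranteed acceptance", "rapid publication", "pay to publish"]


def classify_journal(url: str) -> str:
    """Return 'SPJ' (suspected predatory journal) or 'NJ' (normal journal)."""
    domain = urlparse(url).netloc.lower()
    if domain in WHITELISTED_DOMAINS:
        return "NJ"
    if domain in BLACKLISTED_DOMAINS:
        return "SPJ"
    # Fall back to crude language-based signals scraped from the homepage.
    html = requests.get(url, timeout=30).text.lower()
    hits = sum(phrase in html for phrase in SIGNAL_PHRASES)
    return "SPJ" if hits >= 2 else "NJ"


if __name__ == "__main__":
    print(classify_journal("https://www.example.com/journal"))
```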
As part of an ongoing exercise to assess the reliability of this AI-driven tool, AJPC system was put to the test, using the methodology of Teixeira da Silva, Tsigaris, and Moussa (2023b). In the past few months, three important tests and findings were made and published. First, the classification of Scientific Reports, the journal in which the introduction of AJPC system by Chen et al. (2023) was published, was modified from an SPJ to an NJ within about 2 weeks (Teixeira da Silva & Daly, 2023). Second, a web-scraping approach was used to assess about 17,000 journals, including 2,500 SJR-ranked journals, and also 15,000 prominent journals of leading (in terms of market size, journal volume, and/or academic status, standing or prominence) publishers (Elsevier, Frontiers, MDPI, Springer Nature, Taylor & Francis, Wiley), finding, among the most implausible results, that AJPC system classified 100% of Elsevier journals (n = 4,756) as SPJs (Teixeira da Silva & Kendall, 2023b). In that study, using OMICS as an external benchmark, and given its classification as a predatory publisher in a US court of law (Manley, 2019), it was found that about 7% of its 705 journals were classified as NJs. Third, using an approach similar to the one followed in this article, but testing the famous Financial Times top 50 (FT50) journals in finance, marketing, and related fields, an implausible 88% were classified as SPJs (Teixeira da Silva et al., 2023b), despite knowledge that their quality control measures are robust (Fassin, 2021).
4 AJPC system Put to the Test: Q1 SJR LIS Journals
The Q1-ranked LIS journals in SJR (SJR, 2023) – 64 in total – were selected, in the belief that these represent the crème-de-la-crème of LIS journals. Admittedly, it is reasonable to expect that one or two journals that might not exercise pristine scholarly principles could be identified, as was argued earlier, but overall, the expectation was that none of these top Q1-ranked journals would be classified as SPJs. Starting from this premise, the URLs of the home pages of all 64 journals were manually entered into AJPC system to assess the outcome, and the classifications were verified again on 2 June 2023 (Suppl. file). Surprisingly, 27 (or 42%) of these 64 Q1 LIS journals were classified as SPJs, including some journals that, to the author’s knowledge and/or experience, have very rigorous peer review and editorial policies in place, aspects that would not typically be associated with “predatory” behavior. AJPC system indicates that the largest share (13/27, or 48%) of these SPJs are published by Taylor & Francis.
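The summary statistics above can be reproduced from the recorded classifications. The sketch below assumes a hypothetical CSV export of the Supplementary file with “journal”, “publisher”, and “classification” columns; the column names and filename are assumptions, not the actual file layout.

```python
# Minimal sketch for tallying recorded AJPC output. It assumes a hypothetical
# CSV export of the Supplementary file with "journal", "publisher", and
# "classification" columns; the column names and filename are assumptions.
import csv
from collections import Counter


def summarize(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as fh:
        rows = list(csv.DictReader(fh))
    spjs = [r for r in rows if r["classification"] == "SPJ"]
    print(f"{len(spjs)}/{len(rows)} journals classified as SPJs "
          f"({100 * len(spjs) / len(rows):.0f}%)")
    # Which publishers account for the SPJ classifications?
    for publisher, count in Counter(r["publisher"] for r in spjs).most_common():
        print(f"  {publisher}: {count}")


if __name__ == "__main__":
    summarize("q1_lis_ajpc_2june2023.csv")  # hypothetical filename
```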
What might explain this highly unlikely output? At least six possibilities might explain this observation: (1) an inaccurate tool (AJPC system) that is not able to clearly distinguish an NJ from an SPJ because it relies on a fundamentally flawed (inaccurate) watchlist, Beall’s Lists; (2) an outdated dataset (the AJPC system website indicates that the training data reflect an 11 November 2020 date; Figure 1a and b); (3) an over-simplistic binary output (NJ or SPJ) that does not accompany that classification with any list of indicators or properties that could inform users of the scholarly characteristics that led a journal to be classified as an NJ, or, more importantly, the unscholarly characteristics that led it to be classified as an SPJ; (4) the lack of criteria used to differentiate an SPJ from an NJ; (5) the relative lack of open and public accountability by the authors and Scientific Reports to resolve these discrepancies; and (6) poorly trained AI with erroneous input parameters or data.

Figure 1: AJPC system classified Wiley’s Journal of the Association for Information Science and Technology, a Q1 journal ranked 27th by SJR (Supplementary file), as an SPJ on 18 March 2023 (a), but as an NJ on 2 June 2023 (b).

What happened in the interim? The author submitted a report to the Journal of the Association for Information Science and Technology. It is conceivable that the journal challenged its classification as an SPJ, sharing the data submitted by the author to that journal, and that Chen et al. modified the code and underlying data so that an NJ classification would be reflected sometime between March and June 2023. No public explanation for this change can be found.
Of potentially greater concern is the change in classification of journals that were once (in March 2023) classified as SPJs to NJs (by June 2023) (see one example in Figure 1). Although in March 2023 the author initially analyzed the AJPC system classification of only 50 of the 64 Q1 SJR-ranked LIS journals, at that time, 54% of the journals were classified as SPJs, including the top six. Comparing the 50 classifications made in March 2023 (reflecting a 2021 SJR rank) with the overlapping June 2023 classifications (reflecting a 2022 SJR rank), the classification of 18 journals changed from SPJ to NJ, while in eight cases the classification changed from NJ to SPJ (see yellow and blue entries, respectively, in the second spreadsheet in Suppl. file). Chen et al. provide no public explanation for these changes.
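Such before/after comparisons are straightforward to tabulate. The sketch below compares two classification snapshots and counts the SPJ-to-NJ and NJ-to-SPJ transitions; the journal names and labels are placeholders, not the actual supplementary data.

```python
# Minimal sketch for comparing two classification snapshots (e.g., March vs
# June 2023). The dictionaries map journal name -> "NJ" or "SPJ"; the entries
# shown are placeholders, not the actual supplementary data.
march = {"Journal A": "SPJ", "Journal B": "NJ", "Journal C": "SPJ"}
june = {"Journal A": "NJ", "Journal B": "SPJ", "Journal C": "SPJ"}

# Only journals assessed in both months can be compared.
common = march.keys() & june.keys()
spj_to_nj = sorted(j for j in common if march[j] == "SPJ" and june[j] == "NJ")
nj_to_spj = sorted(j for j in common if march[j] == "NJ" and june[j] == "SPJ")

print(f"SPJ -> NJ: {len(spj_to_nj)} journals {spj_to_nj}")
print(f"NJ -> SPJ: {len(nj_to_spj)} journals {nj_to_spj}")
```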
Did any journal or publisher put pressure on Chen et al. to modify AJPC system’s classification?
5 Is AJPC system a Viable Assistant for Differentiating Scholarly from Predatory Journals?
In the world of academic publishing, the risk of a journal’s misclassification can cause heavy – and in extreme cases, irreversible – reputational damage, both to the journal that is classified as such and to academics who may select it for publishing their work and who then receive benefits based on their publishing performance (Hulsey et al., 2023). Consequently, if a journal is characterized as “predatory” (or in this case, as an SPJ) when it is not (in this case, an NJ), this could “cost” the journal and publisher if the tool (in this case, AJPC system) becomes widely used. Research institutes and university authorities thus need to rethink their reward policies for publishing, offering better guidance to their researchers, especially students and early career researchers, and ensure that those who publish do so in journals that truly follow best publishing practices (Khan, Vieira Armond, Ghannad, & Moher, 2022). LIS researchers, following such advice, and confident in their perception that it is safe to submit to a Q1 SJR-ranked LIS journal (SJR, 2023), but turning to AJPC system to verify their choice, might be confused and surprised to learn that 42% of these journals are classified as SPJs. In the absence of any additional explanations by AJPC system, Chen et al. (2023), or Scientific Reports, and not wishing to take any reputational risks, they might then turn to another Q1 SJR-ranked LIS journal that is, using the same two-step process, classified as an NJ by AJPC system. In this example, LIS researchers would likely have been misled by AJPC system. Moreover, these top Q1 SJR-ranked LIS journals might lose an opportunity of peer reviewing and publishing these researchers’ papers. For the publishers of these journals, the loss of each paper is not only the loss of indexed intellect and copyright, but also, in the case of gold OA journals, the loss of an article processing charge (APC). Cumulatively, if AJPC system gains traction over time and becomes a popular AI tool among students and seasoned academics to screen journals to determine if they are safe to publish in, this might cause reputational damage to those journals that are classified as SPJs, when they are not. In addition, assuming that a substantial mass of papers might be “lost” due to failed submissions caused by hesitation, this might “cost” the publisher subscriptions and APCs, i.e., both intellectual and financial costs. As one concrete example, how many authors have the Taylor & Francis LIS journals, which account for the largest share of AJPC system’s SPJ classifications, lost because those authors turned away and submitted to another journal classified as an NJ?
6 Conclusion, Limitations, and Suggestions for Improvement
The classification of such a large proportion of Q1 SJR-ranked LIS journals as SPJs is likely a serious classification error. The creators of AJPC system (Chen et al., 2023) and Scientific Reports need to address these odd classifications urgently and publicly, explaining what characteristics led the journals classified as SPJs to be classified as such. There also needs to be some clarification as to whether AJPC system was trained on Beall’s Lists, which were focused exclusively on OA (Krawczyk & Kulczycki, 2021). In the absence of journal-by-journal evidence (which currently does not exist and is not provided by Chen et al.), AJPC system cannot – in its current state of development – be considered to be a trustworthy AI-based tool and is destined to be relegated to the history books, similar to Beall’s Lists, unless the six aforementioned limitations are fully addressed. Any output by AJPC system should not consist only of a simplistic binary NJ/SPJ classification, but should also be supplemented by a list of the scholarly and unscholarly characteristics that led to that classification, because predation is not binary, but rather a gradient of negative characteristics (IAP, 2022; Yamada & Teixeira da Silva, 2022). In the absence of accountability, were AJPC system to become a popular or widely used online tool for academics to screen journals before submission, allowing them to discern NJs from SPJs, and then make a selection for manuscript submission based on AJPC system’s classification, this may threaten the reputation of truly scholarly LIS journals, throwing them into disrepute. There is a silver lining, however: AI can be trained if accurate information is fed to it by its human managers, so AJPC system could still become a promising tool to assist academics in journal selection, provided that the underlying data that have led to the classification of a journal as an NJ or an SPJ are open and public. Although Chen et al. provide the code in OA at GitHub, what data have changed and the dates when changes have been made to reflect manipulated classifications (Figure 1a and b; Teixeira da Silva & Daly, 2023; Yamada & Teixeira da Silva, 2023) are neither clear nor open.
Of course, there is one additional possibility (less plausible, in the author’s opinion, and based on the lack of published evidence), namely that AJPC system’s assessment is accurate while that of SJR is not, i.e., that there are in fact predatory journals in the ranks of SJR-ranked journals. However, this possibility is slim. Even Chen et al. recognize that AJPC system has flaws and errors, and is limited to knowledge known until the end of 2020, i.e., almost 3 years out of date.[4]
This article only presents findings for SJR’s Q1 journals in the LIS category. Interested researchers are encouraged to expand the analysis to encompass Q2–Q4, keeping in mind that as the quartile number increases, scholarly quality or editorial scrutiny might decrease. Despite the relative simplicity of this analysis, journals in other fields of study also need to be tested, and evidence (i.e., the AJPC system output) needs to be archived via screenshots (e.g., Figure 1) because the output cannot be archived in the Internet Archive. Those findings should be recorded and published in academic journals for later appreciation of how the AJPC system has fared over time.
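Because the tool’s output cannot be captured by the Internet Archive, dated screenshots are the most practical form of evidence. A minimal sketch follows, assuming Selenium with a local Chrome installation; the result-page URL and filename label are placeholders, since AJPC system exposes no documented API.

```python
# Minimal sketch, assuming Selenium with a local Chrome installation, for
# archiving a dated screenshot of a tool's result page. The URL and label are
# placeholders: AJPC system exposes no documented API, so the page to capture
# is whatever the user sees after submitting a journal URL.
from datetime import date

from selenium import webdriver


def archive_screenshot(page_url: str, label: str) -> str:
    driver = webdriver.Chrome()
    try:
        driver.get(page_url)
        filename = f"{label}_{date.today().isoformat()}.png"
        driver.save_screenshot(filename)  # PNG evidence stamped with capture date
        return filename
    finally:
        driver.quit()


if __name__ == "__main__":
    print(archive_screenshot("https://example.org/ajpc-result", "journal_check"))
```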
Funding information: No funding was received by the author for this research.
Author contributions: The author contributed to all aspects of the paper.
Conflict of interest: The author states no conflict of interest.
Data availability statement: The URLs of the 64 Q1 SJR-ranked LIS journals used for input into AJPC system may be found in the Supplementary file.
References
Adnan, A., Anwar, S., Zia, T., Razzaq, S., Maqbool, F., & Rehman, Z. U. (2018). Beyond Beall’s blacklist: Automatic detection of open access predatory research journals. In 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS) (pp. 1692–1697). Exeter, UK. doi: 10.1109/HPCC/SmartCity/DSS.2018.00274.
Ahmad, S., Sohail, M., Waris, A., Abdel-Magid, I. M., Pattukuthu, A., & Azad, M. S. (2019). Evaluating journal quality: A review of journal citation indicators and ranking in library and information science core journals. Collnet Journal of Scientometrics and Information Management, 13(2), 345–363. doi: 10.1080/09737766.2020.1718030.
Anderson, V., Elliott, C., & Callahan, J. L. (2021). Power, powerlessness, and journal ranking lists: The marginalization of fields of practice. Academy of Management Learning & Education, 20(1), 89–107. doi: 10.5465/amle.2019.0037.
Ateeq, W. M. B., & Al-Khalifa, H. S. (2023). Intelligent framework for detecting predatory publishing venues. IEEE Access, 11, 20582–20618. doi: 10.1109/ACCESS.2023.3250256.
Baas, J., Schotten, M., Plume, A., Côté, G., & Karimi, R. (2020). Scopus as a curated, high-quality bibliometric data source for academic research in quantitative science studies. Quantitative Science Studies, 1(1), 377–386. doi: 10.1162/qss_a_00019.
Bautista-Puig, N., Lopez-Illescas, C., de Moya-Anegon, F., Guerrero-Bote, V., & Moed, H. F. (2020). Do journals flipping to gold open access show an OA citation or publication advantage? Scientometrics, 124(3), 2551–2575. doi: 10.1007/s11192-020-03546-x.
Beall, J. (2017). What I learned from predatory publishers. Biochemia Medica, 27(2), 273–278. doi: 10.11613/BM.2017.029.
Bedmutha, P. S., & Bedmutha, M. S. (2022). SciPred: An end to end approach to classify predatory journals. In 5th Joint International Conference on Data Science & Management of Data (9th ACM IKDD CODS and 27th COMAD) (CODS-COMAD 2022) (pp. 326–327). New York, NY, USA: Association for Computing Machinery. doi: 10.1145/3493700.3493764.
Björk, B. C., & Holmström, J. (2006). Benchmarking scientific journals from the submitting author’s viewpoint. Learned Publishing, 19(2), 147–155. doi: 10.1087/095315106776387002.
Brown, T., & Gutman, S. A. (2019). Impact factor, eigenfactor, article influence, Scopus SNIP, and SCImage journal rank of occupational therapy journals. Scandinavian Journal of Occupational Therapy, 26(7), 475–483. doi: 10.1080/11038128.2018.1473489.
Chen, L.-X., Su, S.-W., Liao, C.-H., Wong, K.-S., & Yuan, S.-M. (2023). An open automation system for predatory journal detection. Scientific Reports, 13(1), 2976. doi: 10.1038/s41598-023-30176-z.
Dwivedi, Y. K., Hughes, L., Cheung, C. M. K., Conboy, K., Duan, Y-Q., Dubey, R., … Viglia, G. (2022). How to develop a quality research article and avoid a journal desk rejection. International Journal of Information Management, 62, 102426. doi: 10.1016/j.ijinfomgt.2021.102426.
Eriksson, S., & Helgesson, G. (2017). Time to stop talking about ‘predatory journals’. Learned Publishing, 31(2), 181–183. doi: 10.1002/leap.1135.
Fassin, Y. (2021). Does the Financial Times FT50 journal list select the best management and economics journals? Scientometrics, 126(7), 5911–5943. doi: 10.1007/s11192-021-03988-x.
Frandsen, T. F. (2017). Are predatory journals undermining the credibility of science? A bibliometric analysis of citers. Scientometrics, 113(3), 1513–1528. doi: 10.1007/s11192-017-2520-x.
Frandsen, T. F. (2019). Why do researchers decide to publish in questionable journals? A review of the literature. Learned Publishing, 32(1), 57–62. doi: 10.1002/leap.1214.
Gaffney, S. G., & Townsend, J. P. (2022). Jot: Guiding journal selection with suitability metrics. Journal of the Medical Library Association, 110(3), 376–380. doi: 10.5195/jmla.2022.1499.
Garner, R. M., Hirsch, J. A., Albuquerque, F. C., & Fargen, K. M. (2018). Bibliometric indices: Defining academic productivity and citation rates of researchers, departments and journals. Journal of Neurointerventional Surgery, 10(2), 102–106. doi: 10.1136/neurintsurg-2017-013265.
Grimes, D. R., Bauch, C. T., & Ioannidis, J. P. A. (2018). Modelling science trustworthiness under publish or perish pressure. Royal Society Open Science, 5(1), 171511. doi: 10.1098/rsos.171511.
Hulsey, T., Carpenter, R., Carter-Templeton, H., Oermann, M. H., Keener, T. A., & Maramba, P. (2023). Best practices in scholarly publishing for promotion or tenure: Avoiding predatory journals. Journal of Professional Nursing, 45, 60–63. doi: 10.1016/j.profnurs.2023.01.002.
IAP (The Interacademy Partnership). (2022). Combatting Predatory Academic Journals and Conferences. https://www.interacademies.org/publication/predatory-practices-report-English (March, 2022; last accessed: 2 June, 2023).
Jaafar, R., Pereira, V., Saab, S. S., & El-Kassar, A.-N. (2021). Which journal ranking list? A case study in business and economics. EuroMed Journal of Business, 16(4), 361–380. doi: 10.1108/EMJB-05-2020-0039.
Khan, H., Vieira Armond, A. C., Ghannad, M., & Moher, D. (2022). Disseminating biomedical research: Predatory journals and practices. Indian Journal of Rheumatology, 17(Suppl 2), S328–S333. doi: 10.4103/0973-3698.364675.
Krawczyk, F., & Kulczycki, E. (2021). How is open access accused of being predatory? The impact of Beall’s lists of predatory journals on academic publishing. The Journal of Academic Librarianship, 47, 102271. doi: 10.1016/j.acalib.2020.102271.
Manley, S. (2019). Predatory journals on trial. Allegations, responses, and lessons for scholarly publishing from FTC v. OMICS. Journal of Scholarly Publishing, 50(3), 183–200. doi: 10.3138/jsp.50.3.02.
McManus, C., & Baeta Neves, A. A. (2021). Funding research in Brazil. Scientometrics, 126(1), 801–823. doi: 10.1007/s11192-020-03762-5.
Mertkan, S., Aliusta, G. O., & Suphi, N. (2021). Profile of authors publishing in ‘predatory’ journals and causal factors behind their decision: A systematic review. Research Evaluation, 30(4), 470–483. doi: 10.1093/reseval/rvab032.
Mills, D., & Inouye, K. (2021). Problematizing ‘predatory publishing’: A systematic review of factors shaping publishing motives, decisions, and experiences. Learned Publishing, 34(2), 89–104. doi: 10.1002/leap.1325.
Miranda, R., & Garcia-Carpintero, E. (2019). Comparison of the share of documents and citations from different quartile journals in 25 research areas. Scientometrics, 121(1), 479–501. doi: 10.1007/s11192-019-03210-z.
Moed, H. F., Lopez-Illescas, C., Guerrero-Bote, V. P., & de Moya-Anegon, F. (2022). Journals in Beall’s list perform as a group less well than other open access journals indexed in Scopus but reveal large differences among publishers. Learned Publishing, 35(2), 130–139. doi: 10.1002/leap.1428.
Paulus, F. M., Cruz, N., & Krach, S. (2018). The impact factor fallacy. Frontiers in Psychology, 9, 1487. doi: 10.3389/fpsyg.2018.01487.
Pranckutė, R. (2021). Web of Science (WoS) and Scopus: The titans of bibliographic information in today’s academic world. Publications, 9, 12. doi: 10.3390/publications9010012.
Siler, K. (2020). Demarcating spectrums of predatory publishing: Economic and institutional sources of academic legitimacy. Journal of the Association for Information Science and Technology, 71(11), 1386–1401. doi: 10.1002/asi.24339.
Singh, V. K., Singh, P., Karmakar, M., Leta, J., & Mayr, P. (2021). The journal coverage of Web of Science, Scopus and Dimensions: A comparative analysis. Scientometrics, 126(6), 5113–5142. doi: 10.1007/s11192-021-03948-5.
Sonntag, D. (2023). Avoid predatory journals. Künstliche Intelligenz, 37(1), 1–3. doi: 10.1007/s13218-023-00805-w.
SJR. (2023). Scimago Journal & Country Rank. Social Sciences. Library and Information Sciences. https://www.scimagojr.com/journalrank.php?area=3300&category=3309 (last accessed: 2 June, 2023).
Teixeira da Silva, J. A., & Daly, T. (2023). The diagnostic accuracy of AI-based predatory journal detectors: An analogy to diagnosis. Diagnosis (in press). doi: 10.1515/dx-2023-0039.
Teixeira da Silva, J. A., & Kendall, G. (2023a). Academia should stop using Beall’s Lists and review their use in previous studies. Central Asian Journal of Medical Hypotheses and Ethics, 4(1), 39–47. doi: 10.47316/cajmhe.2023.4.1.04.
Teixeira da Silva, J. A., & Kendall, G. (2023b). Mis(-classification) of 17,721 journals by an artificial intelligence predatory journal detector. Publishing Research Quarterly (in press). doi: 10.1007/s12109-023-09956-y.
Teixeira da Silva, J. A., Moradzadeh, M., Adjei, K. O. K., Owusu-Ansah, C. M., Balehegn, M., Faúndez, E. I., … Al-Khatib, A. (2022). An integrated paradigm shift to deal with “predatory” publishing. The Journal of Academic Librarianship, 48(1), 102481. doi: 10.1016/j.acalib.2021.102481.
Teixeira da Silva, J. A., Moradzadeh, M., Yamada, Y., Dunleavy, D. J., & Tsigaris, P. (2023a). Cabells’ Predatory Reports criteria: Assessment and proposed revisions. The Journal of Academic Librarianship, 49(1), 102659. doi: 10.1016/j.acalib.2022.102659.
Teixeira da Silva, J. A., Tsigaris, P., & Moussa, S. (2023b). Can AI detect predatory journals? The case of FT50 journals. SSRN (preprint, not peer reviewed). doi: 10.2139/ssrn.4391108.
Yamada, Y., & Teixeira da Silva, J. A. (2022). A psychological perspective towards understanding the objective and subjective gray zones in predatory publishing. Quality & Quantity, 56(6), 4075–4087. doi: 10.1007/s11135-021-01307-3.
Yamada, Y., & Teixeira da Silva, J. A. (2023). A measure to quantify predatory publishing is urgently needed. Accountability in Research (in press). doi: 10.1080/08989621.2023.2186225.
© 2023 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.