There are many definitions of fraud in medical writing.4–8 The Royal College of Physicians of Edinburgh defines research fraud as “the behavior by a researcher, intentional or not, that falls short of good ethical and scientific standard.”5 The United Kingdom's Committee on Publication Ethics describes fraud as the “intention to cause others to regard as true that which is not true.”6 The US Office of Research Integrity defines fraud using the fabrication, falsification, and plagiarism model as follows: “Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.”7 According to Protti,8 fraud may also be defined from a legal point of view: “Scientific fraud is a deliberate misrepresentation by someone who knows the truth.” Fraud may arise in writing (involving editors and authors) and in publishing (involving editors and publishers); fraud in publishing is not the subject of this article.
When an article is submitted to a journal, the review process begins. The article is assigned to a deputy or associate editor or perhaps to a section editor. The editor quickly reviews the article to determine whether it merits peer review, falls within the journal's scope, and conforms to the rules of scientific conduct. If these requirements are met, the editor assigns the article to reviewers. If the topic is unimportant, the information is outdated, the scientific methods are faulty, the article is poorly structured and written, or bias or other ethical concerns exist, then the article is rejected without being assigned to reviewers.9
Editors are engaging in fraud if they (1) explicitly request that authors add citations to the editor's journal to a submitted article's reference list, (2) weigh citations to their journal when deciding whether to accept an article, (3) insist that authors remove or reduce citations to competing journals, (4) use their journal to promote their own work, (5) publish their own articles effortlessly (ie, without the rigorous review applied to other submissions), or (6) publish too many introductory articles in their journal. Editors are not engaging in fraud, and are being accountable to their journal and authors, if they (1) bring to authors' attention published articles related to their work and suggest references, (2) provide structural comments and suggest up-to-date citations, and (3) write state-of-the-art articles on important topics, based on their experience and scientific skills, with the aim of raising awareness of and interest in their journal.
The first scientific journal appeared in 1665, and the citation of manuscripts began in 1752.1 Thousands of scientific journals now exist, and ranking them is difficult.
In 1955, Garfield10 described the impact factor (IF) for peer-reviewed scientific journals. The IF is the number of citations in the current year to articles a journal published in the 2 preceding years, divided by the number of citable items the journal published in those same 2 years.10 Source items (original research papers, technical notes, reviews, and papers presented as proceedings) compose the denominator of the IF equation.11,12 Nonsource items (letters, news stories, abstracts, book reviews, and editorials) are excluded from the denominator but may contribute citations to the numerator.13–16 Editors and journals can take advantage of the simple calculation method of the IF.15–19 One editorial manipulation is to increase the number of nonsource items that attract citations (inflating the numerator) while limiting the total number of articles, or the number of original papers, and shifting the mix toward reviews and technical articles, which are more likely to be cited (restraining the denominator). The arbitrary selection of a 2-year reference period has been the subject of much debate.20 The practice by some journals of releasing details of accepted articles ahead of print may allow citation before those articles are formally published, increasing the immediacy of impact.18,21
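To make the calculation concrete, consider a hypothetical journal evaluated in 2015 (all figures below are invented for illustration):

\[
\mathrm{IF}_{2015} = \frac{\text{citations received in 2015 to items published in 2013--2014}}{\text{citable (source) items published in 2013--2014}} = \frac{600}{300} = 2.0
\]

Because the numerator counts citations to any item while the denominator counts only source items, citations to nonsource items raise the IF without enlarging its denominator.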
There is a poor correlation between the IF of the journal in which an article is published and the number of future citations to that article.22–24 Major limitations of the IF include self-citation (ie, citation of articles from the same journal), citation density (ie, the number of references listed in a journal), the quality of a journal's citations (which come mainly from English-language publications), the types of articles published (eg, topical papers or review articles), ease of access to journals, and publication immediacy.21–30 In the IF calculation, a citation from a large, important journal counts no more than a citation from a smaller journal.31,32 Journals not listed in the Science Citation Index (SCI) database are often described as having no IF. Finally, the IF is often misused to evaluate individual scientists, influencing decisions regarding awards, grants, scholarships, and fellowships.21,33,34 Such evaluation is best performed by experts in the subject matter.
Bibliometricians have introduced alternative metrics for ranking journals, based on publications or Internet use. The SCImago Journal Rank (SJR) indicator represents the prestige awarded per article in the analyzed year.25 It is calculated over a 3-calendar-year window: a complex formula computes the prestige transferred to a journal by all citations that its articles from the past 3 years received, and this total is divided by the number of articles the journal published during the same 3-year period. The amount of prestige each journal transfers to another journal in the network is computed from the percentage of the former journal's citations that are directed to articles of the latter journal.25,35 A major advantage of the SJR indicator is that it estimates a journal's prestige without the influence of self-citations, because prestige can be transferred to a journal by other journals but not by itself.25,35 Other advantages include the greater number of journals and languages covered by its database and its unrestricted (open) access.25,36 A major shortcoming of the SJR indicator is the sophisticated methodology used to calculate it. Further, it divides the prestige a journal gains through citations of its articles by the total number of articles it published rather than by the number of citable articles.25
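In simplified form (a schematic restatement of the description above, not the exact published algorithm, which is computed iteratively), the prestige awarded per article can be written as

\[
\mathrm{SJR}_j \;\propto\; \frac{\sum_{i \neq j} P_i \, c_{i \to j} / C_i}{A_j},
\]

where \(P_i\) is the prestige of citing journal \(i\), \(c_{i \to j}/C_i\) is the fraction of journal \(i\)'s citations directed to articles that journal \(j\) published in the past 3 years, and \(A_j\) is the total number of articles journal \(j\) published in that period. Excluding the term \(i = j\) from the sum is what prevents self-citations from transferring prestige.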
The Eigenfactor score is based on direct citation counts.26,27,31 In essence, it is a ratio of the number of citations to the total number of articles. Unlike the IF, the Eigenfactor score counts citations to journals in both the sciences and the social sciences, eliminates self-citations by discounting every reference from one article in a journal to another article in the same journal, and weights each reference according to a stochastic measure of the amount of time researchers spend reading the journal.37 The frequency with which a researcher visits each journal gives a measure of the journal's prestige within its network of academic citations; this frequency, expressed as a percentage, is essentially the journal's Eigenfactor score.26,27,31 The Article Influence score also reflects a journal's prestige. It is a journal's Eigenfactor score divided by the fraction of all articles published by that journal.26,27,31,37
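The “time spent reading” weighting can be pictured as a random walk on the citation network, computed by power iteration. The sketch below, in Python, uses an invented 3-journal citation matrix; the real Eigenfactor algorithm adds further normalizations and a teleportation term to guarantee convergence, which are omitted here.

    import numpy as np

    # Invented citation counts: cites[i, j] = citations from journal i to journal j.
    cites = np.array([[20.0,  5.0,  2.0],
                      [ 4.0, 30.0,  6.0],
                      [ 1.0,  3.0, 10.0]])

    np.fill_diagonal(cites, 0.0)                   # discard self-citations, as Eigenfactor does
    P = cites / cites.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

    # Power iteration: the stationary distribution is the fraction of time a
    # hypothetical reader, endlessly following references, spends at each journal.
    v = np.full(3, 1.0 / 3.0)
    for _ in range(200):
        v = v @ P

    eigenfactor_like = 100 * v                     # visit frequencies as percentages
    article_share = np.array([0.5, 0.3, 0.2])      # invented fraction of all articles per journal
    article_influence_like = eigenfactor_like / (100 * article_share)
    print(eigenfactor_like)                        # Eigenfactor-like scores
    print(article_influence_like)                  # prestige per article

The last lines mirror the Article Influence definition given above (the Eigenfactor score divided by the journal's share of articles); the scaling constants used in the published scores are omitted.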
Recently, altmetrics (alternative metrics) have been introduced for the evaluation of publications and researchers.38 Altmetrics are based on citation counts drawn from scientific and social media, and they also capture other aspects of the impact of scholarly work (eg, how many databases refer to it and the numbers of article views, downloads, and mentions in social and news media). One proposed classification of altmetrics includes the categories “viewed,” “discussed,” “saved,” “cited,” and “recommended.”39 Altmetrics, although in their infancy, might be worth trying. However, many believe that all scientific metrics are eventually abused.40 According to Goodhart's law, when a feature of the economy is picked as an indicator of the economy, it inexorably ceases to function as such, because people soon start to game it.40