Imagine pouring years of research into a scientific paper, only to discover that the peer-review process – the bedrock of scientific integrity – might be compromised. That is exactly what happened when questions arose about the work of a professor and journal editor, sparking serious concerns about potentially fabricated research and compromised peer review. Is this an isolated incident, or a symptom of a larger problem within the scientific community?
The story begins with a neuroscientist in Germany who was asked to review a manuscript focused on modeling the indusium griseum, a thin layer of gray matter in the brain. Something felt off. The reviewer, an expert in the field, found the paper strangely devoid of substance. The figures were confusing, the MATLAB functions provided seemed irrelevant to the research, and the discussion felt disconnected from the results, reading more like a literature review than an analysis of original findings.
Crucially, she questioned whether the resolution of the MRI data used in the study was even high enough to visualize the delicate structure of the indusium griseum in the first place. To confirm her suspicions, she consulted a colleague with expertise in analyzing brain images. His assessment? The resolution was indeed too low. (Both researchers requested anonymity, fearing professional repercussions.) The reviewer recommended rejecting the manuscript.
But the story doesn't end there. In an unusual twist, just weeks later, the colleague who had confirmed the MRI resolution issue received an invitation to review the same paper, this time for a different journal, Scientific Reports. Intrigued, he accepted. What he found was even more baffling. One figure, supposedly depicting the indusium griseum, showed a simple sine wave. “You look at that and think, well, this is not looking like an anatomical structure,” he explained. Like his colleague, he also felt the text read like technobabble, reminiscent of content generated by a large language model – full of scientific-sounding phrases but ultimately lacking in genuine meaning.
The two reviewers compared notes and discovered that the author had attempted to address the first review by swapping out the irrelevant MATLAB functions. However, the core issues – the flawed results and problematic images – remained unchanged. As the second reviewer put it, “Nice try, my friend, but forget it.”
At the end of November, the same reviewer was again asked to review a manuscript by the same author: Associate Professor Eren Öğüt of Istanbul Medeniyet University. This new paper, submitted to Neuroinformatics, focused on a different brain structure, but the abstract geometric shapes presented were eerily familiar. Growing increasingly suspicious, the reviewer decided to investigate Öğüt's publication record online.
The findings were astonishing. In 2025 alone, Öğüt had published a staggering 25 papers, predominantly in Springer Nature journals, 12 of them single-authored. More remarkable still, Öğüt also managed to review nearly 650 papers that year, according to Clarivate’s Web of Science. His total review count exceeded 1,400, with significant contributions to Elsevier, Wolters Kluwer Health, and Springer Nature. Is it humanly possible to produce such a large volume of high-quality research and reviews?
Öğüt also held editorial positions at several journals from major publishers, including serving as an associate editor of Springer Nature’s European Journal of Medical Research. He teaches anatomy and neuroanatomy and claims membership in Sigma Xi, a scientific honor society. Could the pursuit of academic prestige and influence be driving this apparent over-commitment? Is the pressure to publish and review leading to shortcuts and compromised quality?
To the two reviewers in Germany, Öğüt's productivity seemed superhuman, suggesting the possible misuse of generative AI. One piece of evidence supporting this theory is the average length of Öğüt's reviews: 364 words, just one word longer than the average review length calculated from a massive dataset of 11 million reviews. This raises a critical question: is the peer-review process being undermined by the use of AI to generate superficial reviews, allowing substandard research to slip through the cracks?
Extensive review activity can enhance a researcher's reputation, potentially leading to preferential treatment for their own submissions. This creates a potential conflict of interest: could prolific reviewers be leveraging their position to boost their own publication record?
Öğüt defended his publications, stating that some had been in development for years, and attributed his review activity to a team effort. He claimed that the simultaneous publication of new and previously developed studies was coincidental, and that all manuscripts underwent rigorous editorial handling and peer review. “We use AI tools for editing or improving sentence clarity, just as many other researchers do,” he added. “In fact, in some manuscripts, in line with editorial and reviewer recommendations, we explicitly state that AI was used for editing purposes.”
He emphasized his role as a reviewer and editor for numerous journals, striving to meet deadlines promptly with the support of a dedicated team. But while AI can be a valuable editing tool, is it being used responsibly and ethically in research and peer review? At what point does AI assistance become a substitute for genuine scientific rigor and critical thinking?
Öğüt also expressed concern about the discussion of his unpublished work “outside the formal peer-review process,” deeming it a potential “ethical violation.” Shortly after our outreach, however, his profiles on Google Scholar, ORCID, and Frontiers’ Loop disappeared.
The reviewers detailed their concerns in an email to John Van Horn, editor of Neuroinformatics, highlighting the consistent template used in Öğüt's single-authored papers, including similar title structures, redundant figures, irrelevant MATLAB functions, and discussions resembling literature reviews rather than result analyses. They also noted the absence of overlaid structures on real MRI images, data sharing, or code availability.
Van Horn stated that Neuroinformatics had also developed “concurrent” concerns about Öğüt’s work and referred his past and current manuscript submissions to the Springer Nature Research Integrity Group for detailed examination. Tim Kersjes, head of Research Integrity, Resolutions at Springer Nature, confirmed the ongoing investigation but refrained from sharing specifics, emphasizing the seriousness with which they are treating the matter.
One of the questioned papers, “Integrated 3D Modeling and Functional Simulation of the Human Amygdala: A Novel Anatomical and Computational Analyses,” purports to use elastic shape analysis, a method developed by mathematician Anuj Srivastava and his colleagues. Öğüt's paper cites this work and strangely claims to replicate a specific quantitative result (38%) from Srivastava's 2022 paper, representing the “lateral bulging” of the amygdala in people with post-traumatic stress disorder. However, this number does not appear in Srivastava’s original article. This discrepancy raises a fundamental question: How can researchers ensure the accuracy and integrity of their citations and prevent the misrepresentation of previous findings?
Öğüt clarified that the 38% value was intended as a representative magnitude of variance, not an exact numerical identity with Srivastava's work. He acknowledged that explicitly stating this as an approximation would have avoided potential confusion.
Srivastava, on initial review, deemed Öğüt’s paper “sub standard,” citing missing methodological details, superficial information, mis-typed equations, and a lack of reproducibility. He emphasized that while the paper repeatedly mentions their method (“Elastic Shape Analysis”), it fails to provide evidence of its actual application. Jennifer S. Stevens, a coauthor of Srivastava’s 2022 paper, also expressed concerns about the vagueness and confusing nature of Öğüt’s work.
Srivastava put the critical question plainly: “I am wondering how this paper got accepted in the first place.” This case highlights the crucial role of peer review in maintaining scientific integrity. But it also raises deeper questions about the pressures within academia, the potential for misuse of AI, and the ongoing need for vigilance and accountability. Do you believe the current peer-review system is adequate, or does it need significant reform?