A new AI-powered scientific manuscript review tool called Rigorous has launched with promises to make academic publishing faster and more transparent. The cloud-based service offers free manuscript analysis, delivering comprehensive PDF reports within 1-2 working days. However, early users are encountering significant technical problems, and the academic community remains divided on AI's role in peer review.
Technical Problems Plague Early Launch
Users attempting to upload manuscripts to the Rigorous platform are experiencing a range of technical difficulties. One researcher reported encountering HTTP 413 (Payload Too Large) errors and JSON parsing failures when trying to submit a 9.4 MB research paper. The error messages suggest server timeout issues and file size limitations that may be affecting the service's reliability during its testing phase.
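The two failure modes reported above often go together: when a server rejects an oversized upload or times out, it frequently returns an HTML or plain-text error page, which a client expecting JSON then fails to parse. The sketch below illustrates this pattern with a hypothetical client-side guard; the 10 MB cap and all function names are assumptions for illustration, not documented behavior of the Rigorous service.

```python
import json

# Assumed server-side cap for illustration only; the actual limit that
# triggered the reported HTTP 413 errors has not been published.
MAX_UPLOAD_BYTES = 10 * 1024 * 1024

def check_upload_size(file_size_bytes: int) -> bool:
    """Return True if the file is small enough to attempt an upload."""
    return file_size_bytes <= MAX_UPLOAD_BYTES

def parse_response_safely(body: str) -> dict:
    """Parse a server response, tolerating non-JSON error pages."""
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        # Servers commonly return HTML or plain text for 413/504 errors,
        # which surfaces as a JSON parsing failure in a naive client.
        return {"error": "non-JSON response", "raw": body[:200]}
```

A client that checks the size up front and handles non-JSON bodies gracefully would turn both reported failures into readable error messages rather than crashes.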
The developers acknowledge these are early-stage problems, explaining that the core workflow currently takes about 8 minutes locally but could be optimized to 1-2 minutes. They attribute the current 1-2 day turnaround time to manual review steps and cost control measures while they work out technical issues.
Current Performance Metrics:
- Local processing time: ~8 minutes (optimizable to 1-2 minutes)
- Cloud service turnaround: 1-2 working days
- File size limitations: Issues reported with 9.4 MB files
- Current status: Free testing phase
Community Raises Quality and Trust Concerns
The academic community has expressed mixed reactions to AI-assisted peer review. Some researchers worry about the quality of AI-generated feedback, particularly given the recent proliferation of AI-written reviews on academic platforms. Critics argue that meaningful peer review requires deep expertise and insights that current AI systems cannot provide.
As one commenter put it: "The point of reviews is to get deep insights/comments from industry experts who have knowledge ABOVE the LLMs. The bar is very low, I know, but we should do better as the research community."
Trust issues have also emerged around the platform's transparency. Users have noted the lack of clear privacy policies, contact information, and details about data handling practices. The developers responded by acknowledging that the service is in an early MVP stage and promised to add proper contact information and policies.
Limited Training Data Hampers AI Development
A significant challenge facing AI peer review systems is the scarcity of quality training data. Historical peer review reports were rarely published, and only recently have some journals begun making review reports public. This lack of comprehensive training data may limit the effectiveness of AI systems in providing meaningful feedback.
The developers are exploring alternative approaches, including analyzing differences between preprints and final published versions to understand what changes peer review typically drives. However, they acknowledge that even this approach may not capture the best possible feedback, as human reviewer comments can sometimes be inconsistent or unreasonable.
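The preprint-versus-published comparison the developers describe can be sketched as a text diff: changes that appear between the two versions serve as a rough proxy for edits that peer review drove. The sketch below uses Python's standard difflib; the sample sentences are invented for illustration, and real pipelines would need alignment and filtering far beyond a line diff.

```python
import difflib

def review_driven_changes(preprint: list[str], published: list[str]) -> list[str]:
    """Return lines that were added or replaced between the two versions."""
    diff = difflib.unified_diff(preprint, published, lineterm="")
    # Keep '+' lines (additions/replacements), skipping the '+++' file header.
    return [line[1:] for line in diff
            if line.startswith("+") and not line.startswith("+++")]

# Invented example: a hedged claim introduced between preprint and final version.
preprint = ["We observe a strong effect.", "Methods are described in brief."]
published = ["We observe a moderate effect (p = 0.04).", "Methods are described in brief."]
print(review_driven_changes(preprint, published))
```

Aggregated over many paper pairs, such diffs could hint at what reviewers typically ask for, though, as the developers concede, they cannot distinguish reasonable reviewer requests from inconsistent or unreasonable ones.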
Conclusion
While Rigorous aims to address real problems in academic publishing by making peer review faster and more accessible, its early launch reveals the challenges facing AI-powered academic tools. Technical issues, community skepticism, and fundamental questions about AI's ability to provide meaningful scholarly feedback suggest that widespread adoption may require significant improvements in both technology and trust-building measures.
The tool's developers appear responsive to feedback and are working to address concerns, but the academic community's cautious reception highlights the high standards expected for systems that could influence scientific publishing decisions.
Reference: Rigorous - AI-Powered Scientific Manuscript Analysis