Vetting AI vs. Peer Review Process (Vetting AI Series #2)

Uncover the surprising parallels between vetting AI and the peer review process in research, and learn how both safeguard accuracy, ethics, and innovation in education.

Welcome back, fellow academics!

Today we're exploring the parallels between vetting artificial intelligence (AI) and the peer review process in academic research. From adaptive learning platforms to generative AI to AI-driven analytics, the potential for bias or inaccuracies in AI usage requires careful examination and scrutiny. So how does this examination compare with the peer review process in educational research?

To begin this exploration, let's watch the video, "Vetting AI and the Peer Review Process." (Length 2:32)

As academics, we routinely fact-check sources and vet technology. It's not just about how well a tool functions or how often a source is cited; it's about establishing that both align with our purpose, values, and integrity. Imagine employing an AI tool in your research only to find that biased data interpretation skewed its results. Or trying a new technology tool in class, only to have the proxy server block students from using it. These examples underscore the importance of vetting technology with the same care we apply to a new hypothesis, theoretical framework, or research methodology.

In the academic world, vetting AI shares similarities with the rigor of peer-reviewing a research paper. Peer review is a cornerstone of academic integrity in research publishing. Before findings are shared with the academic community, they must undergo rigorous evaluation by experts to verify their validity, methodology, and adherence to ethical standards. This process is essential for quality control and validation of journal submissions (Zimba & Gasparyan, 2021). Peer review is crucial for establishing high-quality research, training future researchers, and promoting broader knowledge in the academic community (Bailly, 2016). The peer review process confirms that educational research is reliable, contributing to evidence-based practices that enhance teaching and learning.

Visual representation of described peer review research process

The above infographic outlines the peer review research process. It includes:

  • Manuscript Submission
  • Evaluation
  • Feedback
  • Revision and Re-Evaluation
  • Acceptance for Publication

Fact-checking AI also requires critical thinking: assessing factual accuracy and recognizing the biases and viewpoints embedded in responses, much as we do when evaluating and peer-reviewing written work. Though AI models do not hold opinions of their own, they are trained on datasets filled with human biases and viewpoints, which can influence their outputs. AI responses are therefore not necessarily neutral, and the viewpoints and potential biases underlying their answers warrant thoughtful consideration (University of Maryland, n.d.). Vetting AI can help combat the spread of misinformation (Lin, 2023), assist researchers in discovering new knowledge, and help individuals understand scientific evidence (Vladika & Matthes, 2023).

Visual representation of described AI vetting process

The above infographic outlines the AI vetting process. It includes:

  • Design and Application
  • Evaluating Output
  • Test AI
  • Improvements and Changes
  • Integration
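One lightweight way to make the vetting stages above concrete is to track each AI tool against a simple checklist, with integration gated on every earlier stage passing. This is only an illustrative sketch: the tool name, criteria labels, and pass/fail framing are assumptions, not part of any standard vetting instrument.

```python
from dataclasses import dataclass, field

@dataclass
class VettingRecord:
    """Tracks one AI tool through illustrative vetting stages."""
    tool: str
    # Each criterion maps to True (stage passed) or False (still open).
    criteria: dict = field(default_factory=lambda: {
        "design_fits_purpose": False,      # Design and Application
        "output_accuracy_checked": False,  # Evaluating Output
        "tested_with_real_tasks": False,   # Test AI
        "bias_issues_addressed": False,    # Improvements and Changes
    })

    def ready_to_integrate(self) -> bool:
        # Integration is the final stage: it happens only after
        # every earlier stage has passed review.
        return all(self.criteria.values())

record = VettingRecord("ExampleTutorBot")  # hypothetical tool name
record.criteria["design_fits_purpose"] = True
record.criteria["output_accuracy_checked"] = True
record.criteria["tested_with_real_tasks"] = True
print(record.ready_to_integrate())  # False: bias review still pending
```

The point of the gate is that "Integration" is not a parallel step but a consequence of the others, mirroring how a manuscript is only published after review, feedback, and revision are complete.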

AI also has the potential to assist the peer review process itself, supporting quality control, quantified performance metrics as engagement incentives, and certification and reputation (Tennant et al., 2017). AI can help reveal correlations between decision processes and quality proxy measures, uncovering potential biases in the peer review process (Checco, 2021). Though AI vetting and peer review serve similar ends, they face distinct challenges: AI in education must be vetted for biases in training data, while peer review must grapple with the subtle differences among educational theories and practices. Both, however, are united in their goal to enhance educational outcomes and uphold the highest standards of ethics and accuracy.

Visual representation of the described peer review and AI vetting Venn diagram

The above Venn diagram compares the processes of Peer Review and Vetting AI, highlighting their similarities and differences:

  • Peer Review (left side): The steps for evaluating a research manuscript: Manuscript Submission, Review, Feedback, Revision, Acceptance, and Publication.
  • Vetting AI (right side): The steps for evaluating an AI tool or system: Design and Intention, Set Criteria, Testing, Improvements, and Integration.
  • Shared Components (center): Both processes emphasize Evaluation, Feedback, Iteration, and Quality Assurance.

The parallels with AI vetting are clear: both processes aim to safeguard the quality and integrity of their output. Looking ahead, integrating AI into educational research and practice presents exciting possibilities. From automating administrative tasks to providing personalized learning experiences, the potential is vast (Chen et al., 2020). Yet the ethical landscape still demands care. As we navigate the complexities of AI integration in academia, ensuring the accuracy and integrity of AI usage through vetting and peer review is crucial. Next time, we'll discuss how to vet AI tools for education and research.

Additional information related to AI can be found in the Discover AI: Your Guide to Understanding Artificial Intelligence and its Place in Higher Education resource.
