AI-assisted prioritization for Unjournal evaluation — Prototype, March 2026
About this tool
Very early prototype (21 March 2026). We have not yet invested significant compute in scoring these papers.
The results shown use lightweight models on a small sample of sources. Scores, explanations, and coverage will improve substantially
as we scale up model depth, expand sources, and incorporate human feedback. See the vision note below.
Papers are auto-discovered from NBER, arXiv (econ), and CEPR, then scored by AI models against Unjournal's prioritization criteria.
Scores are suggestive, not definitive. We welcome both team and public feedback.
Each paper is scored on five criteria (0–10 scale), including the following (a toy score sheet is sketched after the list):
Decision Relevance — Informs high-value global welfare decisions?
Timing Value — Working paper stage where feedback is actionable?
Methodological Potential — Is rigorous methodology feasible given the field and question?
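As a rough illustration, a paper's score sheet and its aggregation might look like the sketch below. The criterion names come from the list above; the dataclass, the unweighted mean, and the example numbers are assumptions for illustration, not the tool's actual scoring method.

```python
from dataclasses import dataclass

# Hypothetical score sheet for one paper. Criterion names follow the
# list above (only the three shown there); each score is on the 0-10 scale.
@dataclass
class PaperScores:
    decision_relevance: float
    timing_value: float
    methodological_potential: float

def overall_score(scores: PaperScores) -> float:
    """Unweighted mean on the 0-10 scale. The aggregation rule is an
    assumption; the real tool may weight criteria differently."""
    values = [scores.decision_relevance, scores.timing_value,
              scores.methodological_potential]
    return sum(values) / len(values)

example = PaperScores(decision_relevance=8.0, timing_value=6.5,
                      methodological_potential=7.0)
print(f"Overall: {overall_score(example):.1f} / 10")  # Overall: 7.2 / 10
```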
Four-stage pipeline (the stage gates are sketched in code after the list):
Suggesting — A paper is suggested (by AI or human), with a 0–100 percentile rating and discussion of relevance
Assessing — A second team member gives an independent rating and discussion (the assessor should not read the suggester's rating first)
Voting — If average rating ≥ 65%, the field group votes (Strong Yes to Strong No). Positive votes + strong case → moves to evaluation
Evaluation — An evaluation manager commissions 2+ public evaluations via PubPub
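A minimal sketch of these stage gates, assuming a five-point vote scale. The 65% threshold and the Strong Yes / Strong No endpoints come from the description above; the intermediate vote labels, their numeric encoding, and the function names are hypothetical, and the "strong case" judgment is a human call not captured in code.

```python
# Hypothetical vote encoding: only the "Strong Yes" and "Strong No"
# endpoints appear in the process description; the middle labels and
# numeric values are illustrative assumptions.
VOTE_VALUES = {"Strong Yes": 2, "Yes": 1, "Neutral": 0,
               "No": -1, "Strong No": -2}

def advance_to_vote(suggester_pct: float, assessor_pct: float) -> bool:
    """Suggesting/Assessing -> Voting: proceed if the average of the
    two independent 0-100 ratings is at least 65%."""
    return (suggester_pct + assessor_pct) / 2 >= 65

def positive_vote_balance(votes: list[str]) -> bool:
    """Voting -> Evaluation: a positive vote balance is necessary;
    the "strong case" judgment remains a human decision."""
    return sum(VOTE_VALUES[v] for v in votes) > 0

if advance_to_vote(72, 64) and positive_vote_balance(
        ["Strong Yes", "Yes", "No"]):
    print("Evaluation manager commissions 2+ public evaluations")
```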
Prioritization = expected value of evaluation, not quality endorsement.
Read more.
Comment directly on this page using the Hypothes.is sidebar (look for the < tab on the right edge of the page). Highlight any text and add your annotation — visible to all Hypothes.is users. You can also use the feedback buttons on each paper card.
Vision: How this tool will work
We are building an efficient, AI-augmented prioritization pipeline:
AI discovery & preliminary rating — The tool finds, vets, and suggests research from multiple sources (NBER, arXiv, SSRN, EA Forum, etc.), giving a preliminary score and adding it to the prioritization database.
Human suggestions — Team members and the public can also add research directly as a "suggester" or "submitter," in which case the AI provides an additional analysis report.
Notifications — Sign up for alerts when new high-potential research in your area is added.
Team assessment — Team members review suggestions, find those of most interest, and give independent ratings. These may be used to continually train and improve the AI recommendation model.
Voting & decisions — The team votes (as in our current process), moving papers forward for commissioned evaluation.
The AI uses Unjournal's core principles and previous prioritization decisions as context.
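As a concrete, hypothetical illustration of that last point, past decisions could be folded into the rating model's prompt as few-shot context. The prompt layout, field names, and example records below are all assumptions for illustration, not the tool's actual implementation.

```python
# Hypothetical sketch: folding Unjournal principles and past decisions
# into a rating prompt. All names, fields, and examples are assumed.
def build_prompt(abstract: str, principles: str,
                 past_decisions: list[dict]) -> str:
    examples = "\n".join(
        f"- {d['title']}: rated {d['rating']}/100, outcome: {d['outcome']}"
        for d in past_decisions
    )
    return (
        f"Unjournal core principles:\n{principles}\n\n"
        f"Previous prioritization decisions:\n{examples}\n\n"
        "Rate the following paper 0-100 for evaluation priority, "
        "with a short justification:\n"
        f"{abstract}"
    )

# Placeholder records, invented purely for this sketch.
prompt = build_prompt(
    abstract="We estimate the welfare effects of ...",
    principles="Prioritize decision-relevant, rigorous research "
               "on global welfare.",
    past_decisions=[
        {"title": "Example paper A", "rating": 78, "outcome": "evaluated"},
        {"title": "Example paper B", "rating": 31, "outcome": "declined"},
    ],
)
```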
We welcome your thoughts on this workflow — use the Hypothes.is sidebar or email
contact@unjournal.org.