What do research crowdfunding, citizen science, and blockchain technologies have in common? They represent a fundamental shift away from centralized authority toward distributed decision-making. This same shift is now reaching the heart of academic governance: how we evaluate research.
I attended a symposium at Kyoto University on their new research evaluation framework, COMON. As an outsider—a researcher studying peer review systems—I witnessed something that goes far beyond the “responsible metrics” discourse of DORA or the collaborative spirit of CoARA. This is not another call for better measurement. It is a restructuring of institutional relationships, built on the recognition that universities must help research communities create entirely new resources rather than compete for a fixed pie.
From Discovery to Audit: The First Transformation
Scattered comparisons of publication counts began around 1910 [Narin, 1976], crystallizing into scientometrics as a formal discipline by the 1960s. Before the 1980s, evaluation served a clear purpose: discovering excellent work and maintaining scholarly quality. Peer review served as a self-regulatory mechanism. Evaluation occurred within disciplines, from an academic perspective, by those who understood the work.
The 1980s brought the first turning point. Research activity expanded dramatically while public funding growth slowed amid economic recession. The 1990s introduced New Public Management principles into research governance [Georghiou & Rossner, 2000; Irvine & Martin, 1984]. Evaluation pivoted from discovery to accountability. Taxpayers demanded explanations. Policymakers demanded metrics. The question changed from “How do we find great research?” to “How do we prove we’re spending wisely?”
The consequences are now visible globally. The Trump administration’s assault on university autonomy starkly revealed this audit paradigm’s vulnerabilities. In Japan, incorporating national universities as independent administrative agencies appeared to reduce government intervention, yet in practice universities remained dependent on block grants while reporting burdens exploded. In the EU, Horizon Europe struggles to balance excellence metrics with geographic and institutional diversity—revealing tensions between accountability demands and research reality. China’s centralized S&T planning faces growing recognition that top-down control stifles the innovation it seeks to foster. Autonomy in name; surveillance in practice.
The Long Struggle: Fleeing the Metrics Tsunami
The 2010s brought growing recognition of these limitations. DORA (the San Francisco Declaration on Research Assessment) emerged as a response to metric abuse—but remained firmly within the accountability framework. It called for responsible metrics, not rejection of the audit paradigm itself. This is crucial: responsible metrics represent the second stage’s attempt at self-correction, not transcendence.
Multiple forces converged to make this insufficient. Change accelerated. Uncertainty rose. Multi-year review cycles like the UK’s Research Excellence Framework struggled to keep pace. The limits of top-down national research strategies became increasingly apparent in unpredictable environments.
Interdisciplinarity complicated matters further. When projects span multiple fields, no single researcher can comparatively evaluate them. Who judges collaborations among climate scientists, AI researchers, and political economists against traditional disciplinary standards? UNESCO’s Open Science declaration acknowledged this: different fields produce different outputs. Publications dominate some disciplines. Others value practical software, conference presentations, or scholarly monographs. Imposing a single evaluation axis invites dysfunction.
Yet universities faced a paradox. Without common metrics, how can they allocate block funding across faculties? This tension drove the emergence of CoARA (the Coalition for Advancing Research Assessment) as a bridge. While rooted in concerns about metric diversity, it gestured toward something more fundamental: recognition that research outcomes are inherently diverse and should be celebrated as such. CoARA’s collaborative exploration, involving numerous universities, has been developing new evaluation paradigms. But it remains largely within existing accountability structures.
From Audit to Dialogue: COMON’s Radical Vision
COMON represents these efforts crystallizing into something genuinely different. Kyoto University’s framework refuses to use evaluation results for funding allocation—a deliberate break from the accountability paradigm of the second stage. Instead, it shifts institutional relationships from control to capacity-building.
One researcher from Kyoto University explained COMON through the Paris Agreement framework. The climate accord emerged from lessons learned: the Kyoto Protocol’s failure came from imposing uniform targets on nations with vastly different circumstances, producing resentment and resistance. The Paris Agreement enables voluntary participation. Each nation sets its own commitments while contributing to a collective goal. Unity through diversity, coordination without coercion.
COMON adopts this structure, but goes further. It defines quality research along five dimensions, including diversity and research that is not yet visible. Crucially, COMON explicitly prohibits using these evaluations for funding allocation: indicators inevitably invite gaming when tied to resources.
But the deeper transformation lies in institutional relationships. The university’s role shifts from allocation and audit to capacity-building. It provides minimum funding and focuses on helping each faculty and research project attract resources independently—through grants, industry partnerships, even crowdfunding. This support extends beyond money to researcher development, academic society formation, and communications capacity.
Here is what makes this radical: the goal is not optimizing distribution of a fixed resource pool. In an era of increasing complexity and diversity, the aim is transforming each organizational unit—each faculty, each project—into an entity capable of creating new resources rather than competing for limited existing ones. This requires each unit to reflect on where it currently stands and to build an evaluation framework suited to its specific context. The university enables this self-directed development through resources like University Research Administrators (URAs).
This is what “dialogue” means in COMON’s vision. It is not simply about emphasizing qualitative assessment over quantitative metrics for individual projects. It is about fostering disciplinary consensus and a sense of belonging—something that cannot emerge from centralized metrics. When faculties and projects collaboratively determine what they need to do, how they define success, and how they will build capacity, they reconstruct the shared purpose that audit culture eroded. This path forward is uncertain and will likely be difficult for all involved. But unlike superficial reforms, it addresses the root cause.
This is not a rosy vision of smooth transition. What Kyoto University proposes is difficult, uncertain work. It means faculties and projects must develop new capabilities. It means researchers must think differently about how they build support and demonstrate value. It means universities must radically reconceive their role from gatekeepers to enablers.
This matters because research evaluation shapes research itself. When competition for fixed resources dominates, collaboration suffers.
COMON asks whether research communities can define and pursue excellence when given tools, resources, and freedom rather than surveillance. Whether universities can shift from controlling to enabling. Whether decentralized evaluation can maintain quality and accountability. These questions have no guaranteed answers. Some faculties and projects will struggle or fail. Universities will face pressure to reassert central control when outcomes seem unclear.
This is not a quiet paradigm shift. It is a loud one—if you know what to listen for.