Governing Through “Mock Precision”: Why It Matters to Federal Contractors

In “Governing through guesstimates: mock precision in international organisations,” Lukas Linsi, Seiki Tanaka, Francesco Giumelli, and Leonard Seabrooke investigate why international organisations (IOs) routinely present uncertain statistics as if they were exact and what this practice does in global policy debates. Their core claim is that IOs operate within a professional “ecology” that rewards attention, the signaling of technical competence, and internal professional status; in that ecology, deceptively precise numbers thrive—even when underlying measurement quality is low. The authors study this through a mixed-methods design: document analysis, 29 elite interviews with IO statisticians and officials, an expert survey of 148 economists and IO practitioners, and a public-opinion experiment with 1,840 U.S. respondents, comparing hard-to-measure illicit-trade statistics with better-measured merchandise-trade statistics.

Three mechanisms drive the production of mock precision. First, attention-grabbing: policymakers and communications teams expect numbers—often single point estimates—to elevate issue salience with media and funders. Second, competence-signaling: precise-looking figures project scientific authority and organisational capability. Third, professional consolidation: within IO bureaucracies, experts navigate resource constraints, board oversight, and peer status, and supplying precise-looking numbers becomes the path of least resistance. Notably, IOs typically aggregate data collected elsewhere and set measurement standards, which places them at the center of the politics of quantification even when they do not directly measure phenomena.

A striking empirical finding is that mock precision itself does not meaningfully change public perceptions. In the survey experiment, presenting a record-high figure in text, as a rounded number, or as a highly precise number did not significantly alter respondents’ views of the issue’s importance, the credibility of the statement, or the organisation’s competence. Likewise, IO policy officials in the expert survey believed numbers garner attention and convey competence, but they did not see mock-precise figures as more effective than rounded ones. These results point away from mass persuasion and toward internal organisational incentives as the key engine behind mock precision’s persistence.

For federal government contractors, this matters in at least four ways. First, policy rationales, thresholds, and compliance benchmarks—covering areas such as illicit financial flows, supply-chain integrity, and trafficking in persons—often cite IO statistics. When those figures embody mock precision, contractors face rules and expectations built on estimates that are less certain than they appear. Second, the article’s evidence suggests that “precision performance” can influence how issues are prioritized within agencies and among international partners; contractors should therefore interrogate numerical claims in solicitations, statements of work, and performance metrics, asking for ranges, methods, and sensitivity analyses. Third, proposal strategy and program risk registers should explicitly treat IO-sourced numbers as uncertain inputs, modeling contingencies in which estimates shift materially after award. Fourth, ethics, reporting, and audit defensibility depend on demonstrating methodological literacy: acknowledging uncertainty while complying with the letter of program guidance positions contractors as sophisticated, trustworthy partners who can implement evidence-informed controls without overstating what the numbers can bear. Together, these implications align with the authors’ ecological account: mock precision serves IOs’ internal and inter-organisational dynamics more than it enlightens external audiences, which means contractors need to exercise disciplined due diligence whenever those numbers cascade into federal policy, compliance duties, or performance oversight.

Ultimately, Linsi, Tanaka, Giumelli, and Seabrooke argue that governing through guesstimates is not a mere pathology but a durable feature of how IOs secure resources and authority under uncertainty. For contractors engaging in U.S. and international programs, the prudent response is to treat IO-based statistics as structured approximations: request confidence intervals or methodological notes where feasible, reflect uncertainty in pricing and risk, and avoid relying on spurious exactness when making compliance attestations or forecasting outcomes.
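The closing advice—treat IO-based statistics as structured approximations and reflect that uncertainty in pricing and risk—can be made concrete with a minimal sketch. Everything below is a hypothetical illustration, not from the article: the cited figure, the assumed plausible range, and the `program_cost` exposure model are all made-up assumptions standing in for whatever a real methodological annex and pricing model would supply.

```python
# Minimal sketch: propagate an IO-sourced point estimate as a range
# through a (hypothetical) program-cost model via simple Monte Carlo.
import random

random.seed(42)

# Hypothetical: an IO report cites "$1.6 trillion" in illicit flows, but its
# methodological notes imply a plausible range of roughly $1.0-2.4 trillion.
LOW, MODE, HIGH = 1.0, 1.6, 2.4  # trillions; illustrative numbers only


def program_cost(flow_estimate: float, base: float = 10.0,
                 sensitivity: float = 2.5) -> float:
    """Illustrative program cost (in $M) as a function of the cited figure."""
    return base + sensitivity * flow_estimate


# Draw from a triangular distribution over the assumed range instead of
# pricing against the single headline number.
draws = [random.triangular(LOW, HIGH, MODE) for _ in range(10_000)]
costs = sorted(program_cost(d) for d in draws)

# Report percentiles rather than one falsely precise cost.
p10, p50, p90 = (costs[int(q * len(costs))] for q in (0.10, 0.50, 0.90))
print(f"cost P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f} ($M)")
```

The design point is the output shape: a P10/P50/P90 band for a risk register or pricing contingency, rather than a single number that inherits the source statistic's spurious exactness.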

Disclaimer of accuracy: This post summarizes the cited authors’ research and interpretations in a good-faith effort to be accurate and current, but it may omit nuances. Nothing here constitutes legal advice. Readers should consult the original publication and applicable regulations before relying on any statements.
