Why this question matters

When your library signs a transformative agreement (TA), you’re no longer paying only for access; you’re also funding your authors’ ability to publish open access (OA). That changes how we think about value. The sticker price of the deal is just the starting point. To understand whether a TA is “worth it,” you’ll want a clear view of (1) the money, (2) how the included journals are used and how much your authors publish in them, and (3) the staff time required to make it all work. This article lays out a concise, replicable framework across three metric families—financial, journal package & usage, and labor—so you can produce defensible answers for collection committees, research administrators, and faculty.

1) Financial metrics: follow the money

These metrics quantify the direct monetary aspects of your agreement. Most can be computed with data you already collect.

A. Baseline & premium

Prior spend vs. TA spend. Compare the TA cost to the last read-only subscription price (adjusted for normal inflation).

Why it matters: isolates the OA premium—what you’re paying specifically to enable publishing.
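The arithmetic is simple enough to script. A minimal sketch, assuming illustrative figures and function names (none come from a real agreement):

```python
def oa_premium(ta_cost, prior_spend, inflation_rate):
    """OA premium = TA cost minus the inflation-adjusted read-only baseline."""
    adjusted_baseline = prior_spend * (1 + inflation_rate)
    return ta_cost - adjusted_baseline

# Example: a 120k TA replacing a 100k subscription, assuming 4% normal inflation.
premium = oa_premium(ta_cost=120_000, prior_spend=100_000, inflation_rate=0.04)
```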

B. Output-adjusted costs

OA articles published (count). Number of institution-affiliated, TA-eligible articles published OA during the period. Average cost per OA article = (TA cost attributable to “publish”) ÷ (OA articles).

Why it matters: lets you compare TAs to other OA funding routes (e.g., standalone APCs, discretionary funds).
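Expressed as code (a sketch; the guard for zero output is my addition, since the ratio is undefined in a period with no eligible articles):

```python
def avg_cost_per_oa_article(publish_spend, oa_articles):
    """(TA cost attributable to "publish") divided by the OA article count."""
    if oa_articles == 0:
        return None  # no eligible output this period: the ratio is undefined
    return publish_spend / oa_articles
```

This per-article figure is the number to set alongside your standalone APC payments or discretionary-fund awards.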

C. APC cost avoidance

APC list price sum avoided = Σ (list APC per article) across OA articles covered by the TA. Optionally compute net APC avoidance using the agreed APC (if different from list), or after applying waivers/discounts.

Why it matters: translates library spend into direct savings for researchers, departments, or the university.
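The gross and net versions differ only in which per-article price you sum. A hedged sketch (a uniform discount rate is a simplifying assumption; real agreements may discount per title or per article):

```python
def apc_avoidance(apcs, discount=0.0):
    """Sum of per-article APCs, optionally net of a uniform discount/waiver rate."""
    return sum(apc * (1 - discount) for apc in apcs)

list_apcs = [3_200, 2_900, 3_200, 2_500]       # list APC per covered OA article
gross = apc_avoidance(list_apcs)               # researcher-facing savings
net = apc_avoidance(list_apcs, discount=0.15)  # closer to what the library "bought"
```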

D. Read/Publish allocation (ex-post)

Because actual author output is uncertain at the start, estimate the effective split after the year ends. Publish value realized ≈ APC avoidance (or agreed APC totals). Effective read spend = TA invoice − publish value realized. Then re-compute cost-per-use (CPU) for reading using the effective read spend (see next section).

Why it matters: shows how publishing activity reduces the true cost of reading, improving the apparent value of the package.
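As a sketch (flooring at zero when publish value exceeds the invoice is my assumption for heavy-publishing years, not a prescribed rule):

```python
def effective_read_spend(ta_invoice, publish_value_realized):
    """TA invoice minus publish value realized; floored at zero."""
    return max(ta_invoice - publish_value_realized, 0.0)
```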

E. Campus-level value mapping (optional but persuasive)

If you serve multiple campuses/colleges, apportion APC avoidance and/or OA outputs by campus/department and compare to their library contribution.

Why it matters: demonstrates tangible return to each stakeholder (e.g., “OA publishing savings equaled ~4% of Campus A’s library contribution”).
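One way to apportion, using OA article counts as the allocation key (the key itself is a local policy choice, and the campus names are illustrative):

```python
def apportion_by_campus(total_avoidance, oa_articles_by_campus):
    """Split total APC avoidance across campuses in proportion to their OA output."""
    total_articles = sum(oa_articles_by_campus.values())
    return {campus: total_avoidance * n / total_articles
            for campus, n in oa_articles_by_campus.items()}

shares = apportion_by_campus(90_000, {"Campus A": 40, "Campus B": 50, "Campus C": 10})
```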

Tip: Decide upfront whether to use list APCs or contracted APCs in your “publish value realized.” List APCs speak to researcher savings; contracted APCs may better reflect what the library actually “bought.”

2) Journal package & usage metrics: keep the classics, add an OA lens

These metrics adapt familiar e-resource practices (usage, CPU) to the TA context.

A. Title lists are foundational

Maintain clean, versioned read and publish title lists (ISSNs, journal IDs, subject tags). Normalize publisher-supplied lists into a standard structure.

Why it matters: lets you map usage, publications, and citations to the package, and see subject coverage (e.g., HSS vs. STEM).
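A minimal normalization sketch. The publisher column names in COLUMN_MAP are hypothetical; in practice you would maintain one mapping per supplier export:

```python
import csv
import io

# Hypothetical source headers -> standard fields; adjust per publisher export.
COLUMN_MAP = {"Journal Title": "title", "Online ISSN": "issn", "Subject": "subject"}

def normalize_title_list(raw_csv_text, list_type):
    """Return rows in a standard structure, tagged as a "read" or "publish" list."""
    rows = []
    for raw in csv.DictReader(io.StringIO(raw_csv_text)):
        row = {std: raw.get(src, "").strip() for src, std in COLUMN_MAP.items()}
        row["list_type"] = list_type
        rows.append(row)
    return rows

sample = "Journal Title,Online ISSN,Subject\nActa Test,1234-5678,STEM\n"
rows = normalize_title_list(sample, "publish")
```

Versioning these normalized lists (one file per agreement year) is what makes year-over-year comparisons possible later.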

B. Usage and effective CPU for reading

COUNTER usage (e.g., full-text requests) for titles in the read list. Effective reading CPU = (Effective read spend from §1D) ÷ (package usage).

Why it matters: shows how active OA publishing lowers the cost of reading.
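Combining §1D with COUNTER usage (a sketch; the variable names are assumptions):

```python
def effective_reading_cpu(effective_read_spend, counter_usage):
    """Effective read spend divided by package full-text requests."""
    if counter_usage == 0:
        return None  # no recorded usage: CPU is undefined
    return effective_read_spend / counter_usage
```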

C. Caps & pacing

If the TA includes an article cap, track monthly consumption vs. cap.

Why it matters: prevents mid-year surprises; supports negotiation on top-ups or next-year caps.
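A simple linear projection catches most mid-year surprises. It assumes output is roughly even across months, which it often isn't (submission seasons, year-end rushes), so treat it as an early-warning heuristic rather than a forecast:

```python
def project_cap_usage(monthly_counts, cap, months_in_year=12):
    """Project year-end article count from the months observed so far."""
    used = sum(monthly_counts)
    projected = used / len(monthly_counts) * months_in_year
    return {"used": used, "projected": projected, "over_cap": projected > cap}

status = project_cap_usage([12, 15, 18], cap=150)  # three months into the year
```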

D. Subject alignment

Compare subject mix in the publish list to your institutional publishing and usage by discipline.

Why it matters: reveals mismatches (e.g., heavy HSS output but a STEM-skewed TA), which inform renewal priorities.
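Comparing subject shares makes the mismatch explicit. A sketch, assuming subject tags come from your normalized title lists and institutional publication data:

```python
from collections import Counter

def subject_shares(labels):
    """Proportion of items per subject tag."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {s: n / total for s, n in counts.items()}

def alignment_gap(publish_list_subjects, institutional_subjects):
    """Per-subject share difference: positive means the TA over-weights it."""
    ta = subject_shares(publish_list_subjects)
    inst = subject_shares(institutional_subjects)
    return {s: ta.get(s, 0.0) - inst.get(s, 0.0) for s in set(ta) | set(inst)}

gap = alignment_gap(["STEM"] * 8 + ["HSS"] * 2, ["STEM"] * 5 + ["HSS"] * 5)
```

A large positive gap on STEM paired with a negative gap on HSS is exactly the renewal-priority signal described above.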

E. Publish/read overlap

Flag journals with both high usage and high OA output from your authors.

Why it matters: identifies “anchor” titles where the TA is delivering value on both sides.
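Flagging anchors is a two-threshold filter (the thresholds are local choices, and the data shapes and journal names here are assumptions):

```python
def anchor_titles(usage_by_journal, oa_output_by_journal, min_usage, min_output):
    """Journals above both a usage threshold and an OA-output threshold."""
    return sorted(j for j, u in usage_by_journal.items()
                  if u >= min_usage and oa_output_by_journal.get(j, 0) >= min_output)

anchors = anchor_titles({"J. Alpha": 5_000, "J. Beta": 200},
                        {"J. Alpha": 12, "J. Beta": 9},
                        min_usage=1_000, min_output=5)
```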

3) Labor metrics: the “unseen costs”

TAs add workflows: eligibility checks, author support, data reconciliation, reporting. Even modest per-article tasks add up.

A. Time accounting (lightweight but consistent)

Track staff hours per activity category (e.g., author helpdesk, data wrangling, reporting, license/admin). Roll up to monthly and per-article views.
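A lightweight rollup over logged (category, hours) entries (a sketch; the categories are the examples from the text, and the log format is an assumption):

```python
from collections import defaultdict

def rollup_hours(entries, article_count):
    """Sum hours by activity category; add total and per-article views."""
    by_category = defaultdict(float)
    for category, hours in entries:
        by_category[category] += hours
    total = sum(by_category.values())
    per_article = total / article_count if article_count else None
    return {"by_category": dict(by_category), "total": total,
            "per_article": per_article}

log = [("author helpdesk", 6.5), ("data wrangling", 4.0), ("author helpdesk", 2.5)]
view = rollup_hours(log, article_count=26)
```

Consistency matters more than precision here: a coarse log kept every month beats a detailed one kept twice a year.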

B. FTE & role clarity

Estimate the FTE fraction supporting TAs (dedicated vs. shared roles).

Why it matters: supports staffing cases and succession planning.

C. Manual vs. automated

Note which processes are manual (spreadsheets, ad-hoc emails) vs. automated (APIs, dashboards, scheduled data pulls).

Why it matters: helps justify investments in tooling to reduce recurring labor.

D. Consortial leverage

If you’re in a consortium, record what is handled upstream (title lists, reporting, negotiations).

Why it matters: clarifies your local scope and the true local labor cost.

Reality check: Many libraries haven’t fully “cracked” labor metrics. Even a qualitative register of recurring tasks and pinch points (e.g., “eligibility verification is our bottleneck”) is actionable and valuable.

Bottom line

Answering “what does it cost to support open access?” means looking past the invoice and accounting for publish value realized, effective reading costs, and the labor that makes it all happen. With a small, disciplined metric set, you can show—clearly and credibly—how a TA converts subscription spend into open scholarship, how that affects traditional cost-per-use, and what it takes operationally to deliver the service. That evidence is exactly what collection councils, research leadership, and faculty advisors need to make confident decisions about renewing, expanding, or re-balancing your OA strategy.