Killing Sacred Cows, Part 2: We Can’t Train What We Can’t Define

Killing Sacred Cows is a series exploring myths and outdated practices in academic research that need to be critically examined, debated, and put on the chopping block. Previously: Killing Sacred Cows, Part 1: Staff Should Not Report to Faculty
P.U. Research Administration Job Descriptions Stink!
Everyone agrees that research administration needs better training. And yet, no matter how many new modules, certifications, or onboarding programs we roll out, it never feels like enough. The problems don’t go away; they just reappear in a new form — another policy misunderstood and misapplied, another system glitch that requires a ridiculous workaround, another department reinventing the wheel.
It’s exhausting. It’s never-ending. And it’s largely because we’re expecting people to do a job that isn’t accurately captured on paper.
Quiet truth #2: we can’t build effective training programs until we fix the foundation. The job itself is poorly defined, and that starts with the job descriptions we put out into the labor market.
Job descriptions in research administration are often vague to the point of absurdity — collapsing dozens of specialized, high-stakes responsibilities into a single bullet or stuffing a pile of unwritten tasks into a generic “other duties as assigned” line at the end. That vagueness doesn’t just confuse new hires; it undervalues the work, keeps pay low, and perpetuates the idea that anyone with “good organizational skills and the ability to work in a fast-paced environment” can manage highly regulated sponsored projects.
When institutions can’t even describe what research administrators actually do, they can’t measure workload, design training, or build sustainable staffing models. The result? Burnout, turnover, a workforce constantly reinventing the wheel, and, worse yet, compliance risks and lapses.
Before we can talk about improving training, we have to get honest about what the job really is — and all the invisible labor we’ve normalized as “other duties as assigned.”
And that conversation has to start at the entry point. New research administrators are the canaries in the coal mine — the first to feel every broken system, unclear policy, and impossible expectation. They arrive eager to learn, but we hand them chaos disguised as opportunity. If we can’t build clarity and competence at that level, everything that follows is damage control.
Most research administration job descriptions are works of fiction. They collapse a dozen specialized, high-stakes compliance responsibilities into a few vague sentences that could describe almost any administrative role on campus. It’s like handing someone a pair of skis, pointing them toward a double black diamond, and saying, “You’ll learn as you go, it will be finnnnneeee.”
But skiing — like research administration — has fundamentals. You don’t start by navigating trees and moguls; you start by learning balance, edging, speed control, and how to fall without breaking something. Those are the core competencies. They’re teachable, measurable, and necessary before anyone should be sent down the mountain.
Instead, we shove new administrators into the trees on their very first run, rationalizing it because they supposedly have “transferable skills.” They crash. They get hurt. And in the process, they take out everyone around them — faculty, finance, compliance, anyone downstream of the work. Then we act surprised when they burn out, quit, or trigger a compliance event.
Entry-level administrators should have ZERO “other duties as assigned.” Their roles should be crystal clear: what they’re responsible for, what they’re learning, and what success looks like. Only when the fundamentals are solid should the slope get steeper.
The good news is that the job isn’t impossible to define. We just don’t do it.
Redefining the Foundation: Training That Starts Before the Job
If we want consistent, high-quality research administration, we have to stop treating training as an afterthought and start treating it as infrastructure. Every institution says it wants “qualified candidates,” but we’ve done almost nothing to define what “qualified” means — or to create pipelines that teach it.
That’s what makes Virginia Commonwealth University’s new Research Ecosystems minor so notable. Launching in Fall 2025, it’s one of the first undergraduate programs to treat research administration and management as part of the broader research enterprise — not just paperwork that happens as an afterthought behind the science.
The program recognizes something higher ed has long ignored: research doesn’t run on good intentions and grant writers. It runs on people who understand how funding, regulation, ethics, compliance, and operations all fit together. Those skills don’t just appear on day one of the job; they have to be taught, cultivated, and valued.
The Research Ecosystems minor does exactly that. It introduces students to the operational and regulatory foundations of research, from grant lifecycles and compliance to project management and team science. It pairs classroom learning with internships across research administration, regulatory affairs, and research finance — grounding theory in the realities of institutional systems.
It’s an early but essential step toward professionalizing a field that has been allowed to evolve and define itself through “other duties as assigned.”
Until we build consistent baseline knowledge — shared vocabulary, shared expectations, shared competency standards — no amount of on-the-job training will fix the churn. You can’t professionalize a workforce whose core skills are invisible on paper.
Take pre-award. I developed a Pre-Award Blueprint that outlines what it actually takes to become competent in the role — not the fairy tale version written into a job posting, but the real, measurable path to mastery. It includes sections like Roles and Responsibilities, Managing the Pre-Award Lifecycle, NIH Budget Basics, and Technical Submission Processes, all supported by a Knowledge, Skills, and Competencies framework and a Task Responsibility Matrix.
It’s not glamorous, but it’s honest and clocks in at 10 pages. It shows what most research units fail to recognize or admit out loud: there is a learning curve, and it’s steep. Realistically, it takes 18–24 months to become truly fluent in pre-award. That’s normal. What’s not normal is pretending a few weeks of shadowing and an onboarding checklist can produce expertise.
And if institutions want to attract people capable of climbing that curve, they need to start treating research administration as a discipline — not a fallback career you stumble into.
That’s the deeper value of VCU’s Research Ecosystems minor: it’s one of the first real attempts to build a consistent knowledge base before people even enter the profession.
If we want research administration to work, we have to stop pretending it’s something you can pick up on the fly. Start with clarity. Start with fundamentals. Give new administrators a real roadmap, like a blueprint or structured curriculum, and don’t let them wander into the trees alone. Teach them the craft before you pile on.
We can’t shortcut competence, and we can’t expect a new administrator to plug holes and absorb tasks we offload simply because we’ve run out of capacity. We can’t outsource judgment to shadowing or “good organizational skills.” And we can’t keep hiding the invisible labor behind vague job descriptions and “other duties as assigned.”
Do this right at the entry point — define the role, teach the fundamentals, provide real-world context — and you’ll have staff who can not only survive the mountain, but ski it with skill, confidence, and a little joy. Everything else flows from that.