In Practice Brief | HBCUs as Institutional Stewardship Infrastructure for AI Governance
In-Practice Briefs are TL Advisory’s applied governance series—concise, research-based analyses connecting institutional leadership, technology, and accountability.
Each brief examines how governance evolves through Perspective, Practice, and Proof—interpreted through Inquiry, Integrity, and Impact. Together, these stages form a continuous cycle that translates insight into accountability and accountability into measurable trust.
This brief examines how institutional stewardship at HBCUs functions as visible governance infrastructure for AI-era institutional design.
Executive Summary
Artificial intelligence is reshaping instructional delivery, enrollment management, research administration, and student support across higher education. Institutions are forming task forces, piloting vendor tools, and issuing policy statements. The central question, however, is structural: where authority resides, how accountability is organized, and how institutional design sustains trust.
HBCUs have long governed through explicit accountability to defined constituencies—students, alumni, trustees, and surrounding communities. That mission-centered oversight provides governance architecture directly relevant to AI adoption.
This In Practice Brief argues that visible stewardship—where accountability relationships shape authority, oversight, and executive decision-making—functions as infrastructure for AI governance. HBCUs illuminate how longstanding commitments translate into contemporary institutional design.
Leadership in this moment is expressed through institutional design.
I. Perspective | Public Accountability as Governance Infrastructure
HBCUs operate at the nexus of reputational, stakeholder, and structural visibility. Institutional decisions are interpreted through national narratives about equity and access; stakeholder relationships are proximate and identity-linked; and governance commitments are publicly articulated through mission and board stewardship. Accountability, in this context, is highly legible.
Stewardship is a universal obligation across higher education. What differs is the visibility of stewardship within governance architecture—how clearly accountability relationships shape authority, oversight, and executive decision-making. Where stewardship is visible in institutional design, emerging technologies can be governed through recognizable structures rather than newly constructed mechanisms.
As AI systems reshape operational workflows, public accountability becomes a governance signal. The question shifts from “What tool should we adopt?” to “What decision structure governs adoption?”
AI governance concentrates institutional risk in ways that make stewardship visible at the presidential level.
II. Practice | Executive Governance Domains
In institutions where accountability is highly visible, AI governance concentrates in four structural domains:
1. Authority & Oversight Design
Where does AI decision authority reside?
Is there a formal review body?
Does oversight report to the provost, president, or board?
Clarity of authority reduces fragmentation and protects executive accountability.
2. Data Governance & Custody
AI systems process student records, research data, and institutional analytics. Presidents remain accountable under FERPA and related statutes regardless of vendor architecture.¹ Executive review should address:
Who authorizes data access
What datasets train institutional models
What audit mechanisms exist
What retention and deletion standards apply
3. Vendor & Dependency Risk
Enterprise AI agreements—particularly discounted or grant-supported arrangements—can embed long-term leverage and infrastructure dependency. Executive scrutiny should evaluate:
Termination and exit rights
Data ownership and derivative use
Model transparency
Strategic reliance over time
4. Equity & Impact Assurance
Algorithmic systems can influence admissions analytics, advising tools, and financial aid modeling. Institutions retain full accountability under Title VI of the Civil Rights Act and related federal statutes for outcomes produced by AI-enabled systems, including decisions mediated through external technologies.² Governance requires:
Impact review prior to deployment
Bias testing where feasible
Clear escalation and remediation pathways
Equity becomes operational when tied to executive authority.
III. Proof | Institutional Alignment
Morgan State University’s Center for Equitable Artificial Intelligence and Machine Learning Systems (CEAMLS) formalizes AI research within an equity-centered institutional structure, situating technological development inside a defined governance framework.³
North Carolina A&T State University has established a comprehensive AI Accelerator that integrates training, enterprise-approved tools, collaborative innovation spaces, and data governance guidance aligned with institutional information security policy.⁴
Howard University has launched a university-wide AI Initiative structured around four pillars—ethics and societal benefit, research and innovation, education and workforce development, and operational efficiency—and established a President’s Artificial Intelligence Advisory Council to guide AI implementation across academic programs, administrative units, and strategic planning.⁵
Florida A&M University has established the Cyber Policy Institute (CyPI), an interdisciplinary center organized around research, workforce development, civil discourse, industry partnership, and public impact pillars—embedding AI governance and digital ethics within a formal institutional framework.⁶
Beyond individual campuses, ecosystem organizations intensify this visibility. The UNCF Institute for Capacity Building has advanced digital transformation and technology modernization initiatives across member campuses, while the Thurgood Marshall College Fund has partnered with industry leaders to expand workforce preparation and AI-related training opportunities.⁷ ⁸ These initiatives elevate governance expectations across participating institutions by linking funding, modernization, and executive accountability.
These efforts differ in emphasis, but their common feature is structural alignment: AI initiatives are embedded within established mission, board, research, and executive governance pathways. The alignment is architectural, so emerging technologies operate inside visible accountability frameworks.
IV. Recommendation | Algorithmic Due Diligence
Applying TL Advisory’s Algorithmic Due Diligence Model, presidents and provosts can formalize AI accountability through stakeholder visibility, structured oversight, and documented public review.
Perspective | Stakeholder & Authority Mapping: Identify affected constituencies and formally document decision rights, reporting lines, and escalation pathways. Governance begins with visibility.
Practice | Framework Translation & Structured Oversight: Translate legal obligations and institutional commitments into written AI standards. Designate a governance body with authority to review AI deployments and report directly to executive leadership.
Proof | Continuous Review & Public Accountability: Conduct periodic impact assessments, document determinations, communicate governance posture, and adjust oversight structures as technologies evolve.
Due diligence renders governance visible, documented, and defensible over time.
V. Conclusion | Stewardship Made Visible
For HBCUs, AI governance is not a departure from institutional identity. It is the formalization of mission accountability within emerging technological systems. Mission-centered oversight and public accountability provide the structural foundation. Formalized governance processes determine durability.
Institutions that formalize AI governance now will define the accountability standards others later follow.
Sources & Citations
1 Family Educational Rights and Privacy Act (FERPA), 20 U.S.C. § 1232g, https://www.law.cornell.edu/uscode/text/20/1232g.
2 Civil Rights Act of 1964, Title VI, 42 U.S.C. § 2000d, https://www.law.cornell.edu/uscode/text/42/2000d.
3 Morgan State University, Center for Equitable Artificial Intelligence and Machine Learning Systems (CEAMLS) (2024–2025), https://www.morgan.edu/ceamls.
4 North Carolina A&T State University, AI Accelerator, Information Technology Services (2024–2025), https://hub.ncat.edu/administration/its/ai/index.php.
5 Howard University, AI Initiative and President’s Artificial Intelligence Advisory Council (2024–2025), https://howard.edu/ai.
6 Florida A&M University, Cyber Policy Institute (CyPI) (2024–2025), https://www.famu.edu/academics/cypi/about-the-famu-cypi.php.
7 UNCF, Institute for Capacity Building — digital transformation initiatives (2024–2025), https://uncficb.org/.
8 Thurgood Marshall College Fund (TMCF), workforce and innovation partnership initiatives (2024–2025), https://www.tmcf.org/.