Decoded - AI Barriers
Six Things Standing Between Child Welfare and Meaningful AI Adoption
The field faces a 2026 convergence of federal mandates, generative AI potential, and unfinished infrastructure. Here’s what’s actually in the way — and what would help.
By Kurt Heisler
The Trump Administration’s child welfare executive order indicates the federal government plans to push hard on AI adoption in child welfare.
The order, signed in November 2025, directs HHS to “expand States’ use of technological solutions, including predictive analytics and tools powered by artificial intelligence.”
HHS has 180 days to develop guidance—which means something should be coming in May 2026. Most state agencies aren't ready.
That's not because they lack interest or capacity, but because they're navigating a fog of conflicting definitions, outdated infrastructure, and risk-aversion that treats all AI as equally dangerous.
What follows are six barriers between the federal push and state readiness.
They're specific, addressable, and grounded in what's actually happening. Leaders who focus on solving them now will maximize optionality and impact, regardless of what May brings.
The Barriers
1. The Conflation Problem
When most child welfare professionals hear “AI,” they picture one thing: predictive analytics.
That’s understandable: algorithms trained on historical data to predict risk, screen hotline calls, or flag cases for review are high-stakes, ethically complex, and have generated a decade of legitimate debate about bias, transparency, and consent.
It also represents a sliver of what AI can actually do in this field.
There’s a second, fundamentally different use case that gets far less attention in child welfare: generative AI for workflow, productivity, and administrative tasks.
Drafting a service plan. Summarizing a case history that spans a decade. Generating a first-pass AFCARS data validation report.
Asking an AI assistant how to merge three Excel workbooks is not the same thing as using an algorithm to screen hotline calls.
Internalizing that distinction is essential: it prevents risk-aversion toward one use case from suppressing adoption of all the others.
What would help:
Clearer communication that differentiates AI use cases by risk tier.
Federal policymakers, national organizations, and innovative state and local leaders can name and repeat the distinction.
That will also mean helping practitioners see that embracing generative AI for administrative tasks doesn’t require first resolving complex, foundational disagreements about predictive analytics.
2. The Awareness Gap
Many child welfare professionals simply don’t know what’s possible because nobody’s provided permission and guidance.
I regularly meet data analysts who still manually copy and paste data across three Excel spreadsheets because automating the task seems out of bounds.
The solution is almost absurdly simple: open your preferred AI tool and type, “How can I easily merge three Excel workbooks?”
But if you don’t know that’s an option, don’t know whether it’s allowed, or don’t have 20 minutes to experiment, you keep copy-pasting, spending two hours every week on a repetitive task that could be done in five seconds.
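To make that concrete, here is the kind of script such a prompt typically yields. This is a minimal sketch, assuming three workbooks with matching columns; the file names are hypothetical.

```python
# Minimal sketch: stack three Excel workbooks into one file.
# File names are hypothetical, and all three are assumed to
# share the same column layout. Requires pandas and openpyxl.
import pandas as pd

files = ["caseload_q1.xlsx", "caseload_q2.xlsx", "caseload_q3.xlsx"]

# Read each workbook's first sheet and stack the rows.
merged = pd.concat(
    [pd.read_excel(f) for f in files],
    ignore_index=True,
)

merged.to_excel("caseload_merged.xlsx", index=False)
print(f"Merged {len(files)} workbooks into {len(merged)} rows.")
```

The point isn’t this particular script; it’s that an assistant can write it, explain it, and adapt it on demand.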
This isn’t a criticism of staff. It’s a system design failure mode. Our field hasn’t yet built channels for practitioners and staff to discover, try, and learn these tools in a low-pressure way.
What would help:
Building capacity to experiment and learn.
Leaders in state and local agencies can create low-pressure sandbox environments where staff can try AI tools on non-sensitive tasks.
Federal policymakers and national organizations can invest in short, practical AI-for-child-welfare tutorial libraries—not webinars about “the future of AI,” but concrete walkthroughs of what you can do today. Five-minute videos. Real examples from real agencies.
3. The Capacity Crisis
Child welfare staff are stretched thin.
Caseworkers are drowning in documentation. Supervisors are backfilling caseloads. Data teams are consumed by federal reporting deadlines, audits, consent decrees, and data emergencies.
Nobody has the bandwidth to deeply learn, test, and develop AI workflows—even when they suspect those tools could save them time.
The people who most need AI’s help are the ones with the least time to learn it.
What would help:
Creating structure and investment that make innovation and implementation testing a routine and ongoing practice.
State and local agency leaders can offer protected time for experimentation when the bandwidth exists, or create the bandwidth for it. The upfront cost is marginal; the compounding interest it pays is unignorable.
Alternatively, an agency can bring in external technical assistance to carry the learning curve, letting TA providers or consultants develop workflows that staff can then adopt, rather than building everything from scratch.
Philanthropy and public policy can also finance innovation. Even small investments of $25,000 to $50,000 would let a CQI team, program office, or data unit spend a quarter piloting AI tools on a defined use case.
Federal policymakers can help agencies through the first implementation. The investment doesn’t have to be enormous; it has to be targeted.
4. The Privacy and Security Fog
Real data privacy and security concerns exist—but they’re often invoked broadly without distinguishing between what’s genuinely risky and what’s manageable.
And when kept abstract, these conversations can imply that avoiding AI altogether is a real option, when in reality not making a decision simply invites understandable, informal experimentation.
Many agencies operate under a blanket “we can’t use AI” posture when what they actually need is clear guidance on which AI tools are permissible for which tasks.
The landscape of local LLMs, sandboxed enterprise environments, and cloud options with strong data governance has matured rapidly.
Agencies can run AI tools that never send case data to the open internet. But if nobody in the agency understands that, the default is prohibition across the board.
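As one illustration, here is a minimal sketch of that pattern, assuming an Ollama-style model server running on the agency’s own machine; the model name and prompt are placeholders, and nothing in the exchange touches the open internet.

```python
# Minimal sketch: query a locally hosted LLM so no text ever
# leaves the agency's own machine. Assumes an Ollama-style
# server on localhost; model and prompt are placeholders.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # any model pulled to the local server
        "prompt": "Draft a checklist for reviewing a service plan.",
        "stream": False,      # return one complete response
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```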
What would help:
Clear, tiered policy frameworks that match use cases to appropriate risk management.
IT and program staff can jointly develop AI use policies that distinguish between tools that touch personally identifiable information and tools that don’t.
Federal guidance can provide a model AI acceptable-use framework that states adapt to their own legal and regulatory environments, complete with use cases ranked by risk, complexity, and other metrics.
5. The Organizational Architecture Question
Even agencies that want to adopt AI often don’t know where it should live organizationally.
In the IT shop? The risk is that adoption becomes too narrowly technical—driven by infrastructure rather than program needs.
In a CQI division? Closer to the use cases, but often under-resourced.
A dedicated AI team? Most agencies can’t justify the headcount.
Relying solely on contractors? Possible, but it creates dependency and knowledge drain when contracts end.
There’s no standard model yet, and that ambiguity stalls action.
What would help:
Sharing, promoting, and testing best practices in AI adoption.
National organizations can convene peer learning where agencies further along share not just what they're doing but how they're organized to do it.
Federal policymakers or national organizations could publish a brief on organizational models for AI adoption in child welfare.
6. The Skill Perception Gap
“We don’t have the skills” is an increasingly outdated argument.
Generative AI has dramatically lowered the skill barrier. You don’t need to know Python or R to automate a data merge—you need to know how to describe what you want in plain English.
We now have PhD-level assistants at our disposal in virtually every domain. The barrier isn’t skill; it’s knowing the assistant exists and being given the permission and time to use it effectively.
What would help:
Creating a culture of AI literacy.
Leaders across the field can reframe the conversation from “building technical capacity” to “building AI literacy.” Technical capacity—hiring data scientists, standing up infrastructure—takes years.
AI literacy—knowing what to ask for, how to evaluate outputs, when to trust and when to verify—can be built in hours, not months.
Agencies don’t have to become technology organizations; they have to become organizations that know how to use technology.
Workflow: The Low-Stakes Proving Ground
If the barriers feel daunting, the path forward is actually straightforward: start with the work that carries the least risk and the most immediate payoff.
The field doesn’t need to resolve every question about predictive analytics and ethics before it starts using AI for administrative tasks. Those are separate decisions on separate timelines.
The proving ground is workflow.
Draft a service plan. Summarize complex cases. Generate a first-pass AFCARS data validation report. Build a caseworker’s daily agenda. Automate a monthly caseload dashboard.
These are real time savings, immediately demonstrable, with minimal risk to families.
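To make “first-pass” concrete, here is a minimal sketch of a validation report; the column names and rules are hypothetical stand-ins, not actual AFCARS element specifications.

```python
# Minimal sketch of a first-pass data validation report.
# Column names and rules are hypothetical illustrations, not
# actual AFCARS specifications. Assumes dates parse as datetimes.
import pandas as pd

df = pd.read_excel("afcars_extract.xlsx")  # hypothetical file

checks = {
    "Missing date of birth":
        df["date_of_birth"].isna(),
    "Removal date before date of birth":
        df["removal_date"] < df["date_of_birth"],
    "Discharge date without a discharge reason":
        df["discharge_date"].notna() & df["discharge_reason"].isna(),
}

# Count flagged records per rule and print a simple report.
report = pd.DataFrame(
    [(name, int(mask.sum())) for name, mask in checks.items()],
    columns=["check", "records_flagged"],
)
print(report.to_string(index=False))
```

An analyst who has never written a line of Python can ask an assistant for exactly this and have a working draft in minutes.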
Other sectors are already showing the way.
In education, teachers are using generative AI to dramatically speed up lesson planning, rubric development, and grading—not to replace pedagogical judgment, but to reclaim hours for actual instruction.
In law, attorneys are using AI to summarize case law, draft briefs, and manage discovery documents—not to argue cases, but to reduce the administrative load that keeps them from client-facing work.
Child welfare can follow the same path: let AI handle the documentation and administrative burdens so practitioners can focus on families.
What 2026 Makes Unignorable for Decision Makers
The Fostering the Future executive order’s 180-day clock hits in May 2026, and it will produce some form of federal direction on technology modernization.
The field can't control what that direction entails, but it can control the posture with which it receives it.
The broader AI opportunity in child welfare is waiting on the other side of these six barriers, and none of them is insurmountable. Closing the awareness gap, building AI literacy, developing responsible use policies, and piloting low-risk use cases that demonstrate value are all within the field’s reach today.
———
Kurt Heisler is the owner of ChildMetrix, a Virginia-based child welfare consulting company. He provides data analytics, dashboards, performance measurement, and continuous quality improvement services to state child welfare agencies.