From Data to Decisions: Leveraging AI for Mission Impact
As federal agencies move deeper into 2026, leaders are operating under tighter budgets, rising expectations for measurable outcomes, and an increasingly complex mission environment. Data volume is not the limiting factor. The challenge is turning scattered information into decisions that improve readiness, reduce risk, and strengthen program performance.
Leadership Connect’s February 26 webinar, From Data to Decisions: Leveraging AI for Mission Impact, hosted in partnership with Denodo, focused on what it really takes to move beyond dashboards and into day-to-day decision advantage. Panelists shared perspectives from defense, health and human services, oversight, and enterprise data architecture. The throughline was consistent: AI can accelerate mission impact, but only when trust, data foundations, and operating models are built deliberately.
Couldn’t attend the session live? Watch the whole webinar here and make sure to follow our events page to get in on the next conversation. Below are the key themes that shaped the discussion!
Why “data to decisions” is still difficult
The discussion opened with a familiar reality across government: agencies have more data than ever, yet outcomes often stall at reporting and analysis. One barrier is trust. Leaders and staff are still building confidence in AI outputs, and many organizations are cautious about using AI in environments where mistakes carry real consequences. That caution is compounded by limited training. Even motivated teams can struggle to use AI tools effectively if they do not understand how to test, validate, or apply them to their work.
Decentralization adds a second layer of friction. Within large departments, data is frequently stored in different systems, tagged in different ways, and governed by different rules. Even when teams want to share, questions about what can be shared, how it can be shared securely, and how to make it usable across divisions slow progress. The panel emphasized that AI readiness depends on data quality and interoperability, not simply access to large volumes of information.
Oversight work adds a third challenge. In investigative and audit environments, data may exist but remain hard to operationalize because access is complex, formats vary, and specialized tools do not connect cleanly to core operational systems. When insights are not delivered in a usable way, the risk is not only inefficiency. It can also mean cases go uninvestigated, audits miss issues, or organizations fail to see emerging patterns early enough to act.
Finally, the shift from traditional machine learning to generative AI has raised the bar on data discipline. Generative systems rely more heavily on context and language. They are less deterministic than earlier analytics workflows, and that makes it more important to manage semantics, security, and the way information is presented to the model. Fragmented data is not just hard to analyze. It can also produce inconsistent outputs when routed through modern AI tooling.
Trust, security, and the “mission stack” reality
A core portion of the conversation focused on how trust plays out in mission environments. Panelists discussed the “mission stack” as a way to understand where decision risk accumulates: the weapon or mission system, operational technology, information technology, installation infrastructure, and the defense industrial base that builds and services the capability. When trust breaks down in any part of that chain, it can undermine decision quality and increase vulnerability.
Data sharing constraints are not always a matter of preference. In defense environments, compartmentalization and classification levels are built into how systems operate. That makes it difficult to implement broad AI solutions that require unified access across domains. The result is a practical tension: leaders want speed and scale, but must preserve security boundaries that are essential to national security.
Health and human services face a different version of the same trust problem. Data silos can be driven by organizational behavior as much as by policy. The panel noted how “my data” mindsets can slow collaboration, and how teams need to be convinced that information can be brought together for AI safely, without leaking or misusing sensitive data. Building that confidence requires clear safeguards and communication with stakeholders across the organization.
What actually works in practice
When the conversation moved from barriers to solutions, the panel returned to a consistent principle: start small, prove value, and scale with intention.
At USPS OIG, progress has come through a crawl-walk-run approach grounded in real operational questions. Instead of starting with broad enterprise ambitions, teams begin with concrete investigative or audit needs and build small solutions that deliver usable insights. Those early wins build credibility, clarify requirements, and create a foundation for expansion. Importantly, the panel emphasized that “doing nothing” also carries risk. The goal is to select manageable use cases, learn quickly, and mature capabilities responsibly.
Leadership sponsorship was presented as a key differentiator. When senior leaders are invested, they can support the cost decisions that AI requires and help the organization stay focused on learning and scaling instead of getting stuck in extended pilots.
Defense-oriented perspectives reinforced the value of measured steps. In high-consequence environments, AI should function as decision support, not decision replacement. Human-in-the-loop approaches remain essential, particularly when AI begins to influence operational decisions. The panel used familiar analogies from other industries to make the point: even when automation is advanced, humans often remain responsible for critical outcomes, especially when safety and trust are at stake.
The foundation: semantics, architecture, and expectations
Scaling AI requires more than successful pilots. The panel highlighted foundational work that must happen for AI to deliver consistent value across teams.
A recurring concept was universal semantics, meaning shared definitions and vocabulary for data across the organization. When information is made available in a way that aligns with agency language and shared understanding, teams can reuse datasets more effectively and reduce time spent translating or reconciling meaning. This matters even more in generative AI contexts, where subtle differences in language, context, or data labeling can influence results.
The panel also emphasized reducing friction for the teams doing the AI work. AI researchers and developers should not be spending disproportionate time wrangling data, hunting for sources, or navigating avoidable security complexity. Organizations that separate responsibilities clearly and build predictable data pathways allow technical teams to focus on value creation rather than data scavenging.
Change management showed up as a practical requirement, not a soft add-on. Iteration works only when expectations are managed at each stage. As systems evolve, stakeholders’ assumptions about what AI will do can drift. Keeping expectations aligned is part of responsible scaling.
Operating models that sustain momentum
As the discussion moved into collaboration, the focus shifted from technical readiness to organizational design.
At HHS, a key priority is visibility into what tools and systems exist across divisions, paired with a push toward shared services and streamlined purchasing. The panel described a “1HHS” approach aimed at reducing redundant investments, improving coordination, and reinforcing common goals. Shared tools also create opportunities for shared learning, which can accelerate adoption and reduce duplication across teams working on similar problems.
USPS OIG shared a complementary model: an executive-level AI Council to drive strategic direction and communication, paired with a working-level AI Impact Team to identify needs and align priorities across functional areas. One notable insight was the value of a constructive posture. Rather than treating AI as something to block, leadership framed it as something to enable with oversight and validation. The discussion emphasized that governance structures can support adoption by making risk management explicit and giving teams clarity on what is permitted and how to proceed.
Budgets, tradeoffs, and mission value
AI adoption does not occur in a vacuum. Budget constraints and tradeoffs shape what can be pursued and when. The panel discussed prioritization as a “hard problem,” particularly in large departments where every decision has second- and third-order effects.
A key theme was that AI can support prioritization itself. When developed and applied well, AI can help leaders understand trade space, model scenarios, and explore likely and dangerous futures. This is especially relevant when planning for complex adversaries and trying to avoid building for outdated assumptions. Even so, the panel was clear that none of this happens without sustained investment and disciplined decision making about where AI delivers mission value.
What changes over the next two to three years
The final segment looked ahead at what will shift as AI becomes more embedded in government work.
Panelists anticipated continued focus on large language models and adjacent capabilities, alongside greater maturity in how organizations “stage” and model information for accuracy. Meaning and shared understanding will become more important, not less, as AI is used across larger groups with different terminology and operating contexts.
There was also a strong expectation that adoption will expand through practical efficiency gains. Reducing administrative burden, streamlining paperwork, and improving routine production work were framed as realistic early wins that can free time for higher value analysis and mission planning.
In health contexts, the panel pointed to a longer runway for frontline decision use, paired with sustained emphasis on trust, ethical and legal frameworks, and responsible deployment strategies that protect patients and maintain confidence.
In oversight and operational environments, the panel described a mindset shift where baseline AI tools become normal and expected, paired with continued vigilance about new threat patterns. As AI improves logistics, analytics, and workflow speed, it will also create new opportunities for fraud and exploitation that oversight teams must anticipate.
Action-oriented takeaways leaders can apply now
Several practical actions emerged repeatedly across the conversation.
Leaders can start by selecting narrow, high value use cases tied to real operational questions, then building confidence through measured pilots and clear success metrics. Trust grows faster when teams can validate results in context and refine the approach iteratively.
Organizations should invest in foundations that make scaling possible: shared semantics, reliable data pathways, and security and access models that preserve necessary boundaries while enabling legitimate sharing.
Governance and operating structures matter as much as technology. Councils, working groups, and shared services models can reduce duplication and help teams align around priorities, budgets, and accountability.
Finally, workforce readiness is a multiplier. Raising baseline AI and data literacy, pairing training with hands-on adoption support, and building documentation and onboarding into every solution help tools become usable, not just available.
Continue the Conversation
Watch the on-demand webinar to hear the full discussion and explore additional Leadership Connect resources on AI adoption, data strategy, and public sector mission delivery. Stay connected to upcoming events as we continue convening leaders across government and industry to share practical lessons on scaling innovation responsibly.
To learn more about Leadership Connect and access additional insights from government and industry leaders, visit our website and explore our products!