The Hidden Risk in Federal Software Supply Chains and How to Fix It

Federal agencies rely heavily on software to build, secure, and operate modern digital systems. But software is no longer a single product that can be scanned once and trusted forever. It is made up of open-source packages, third-party components, libraries, container images, vendor tools, and dependencies that can introduce risk at every stage of the lifecycle.

That reality is changing how agencies think about software supply chain security. The focus is moving from broad trust in vendors or products to a more detailed understanding of what is actually inside the software agencies buy, build, and deploy.

That shift shaped Leadership Connect’s webinar, “The Hidden Risk in Federal Software Supply Chains and How to Fix It,” hosted in partnership with Chainguard on April 30, 2026. The discussion brought together leaders from the National Cybersecurity Center of Excellence at the National Institute of Standards and Technology, the U.S. Department of Transportation, the Department of Defense Cyber Crime Center, and Chainguard to examine how agencies can identify and reduce software supply chain risk in practice.

The conversation moved through the current software supply chain landscape, operational pressure points, secure software development frameworks, DevSecOps, zero trust, vulnerability disclosure, information sharing, and the future of automated defense. Across each topic, the same themes kept resurfacing: visibility, trust, verification, collaboration, and the need to make security part of the process from the beginning.

Couldn’t attend live? View the event here and make sure to follow our events page to join the next conversation. Below are the key themes that shaped the discussion.

Agencies are moving from perimeter trust to software pedigree

The conversation opened with a clear point: software supply chain risk is now something agencies are actively focused on, not a secondary concern. The old model of scanning software, bringing it into the environment, and then trusting it once it crosses the perimeter is no longer enough.

Instead, agencies are moving toward a model based on pedigree. They need to know what ingredients make up a piece of software, where those ingredients came from, how they behave, and whether they can be trusted. That is where software bills of materials, or SBOMs, and attestations come in.

The discussion framed SBOMs as more than documentation. They are part of a broader effort to understand software “top to bottom, inside out.” Agencies want to know what vendors are selling them, what components are inside, and what risk those components may introduce.
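Because SBOMs are machine-readable, even a few lines of code can enumerate what a product actually contains. Here is a minimal sketch that assumes a CycloneDX-style JSON SBOM (the file name is hypothetical); SPDX documents expose comparable fields under different names.

```python
import json

def list_components(sbom_path: str) -> None:
    """Print each component recorded in a CycloneDX-style JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)

    # CycloneDX stores the "ingredients" under a top-level "components"
    # array; each entry carries at least a name and version, often a purl.
    for component in sbom.get("components", []):
        name = component.get("name", "<unknown>")
        version = component.get("version", "<unversioned>")
        purl = component.get("purl", "")
        print(f"{name} {version} {purl}")

list_components("vendor-product.cdx.json")  # hypothetical file name
```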

This reflects a larger change in mindset. Software is not one singular thing. It is many pieces of software combined into one product. Once leaders look at software that way, the surface area they need to protect becomes much larger.

Software supply chain risk is a whole-ecosystem problem

Another early theme was that software supply chain risk is systemic. It involves the full ecosystem of developers, vendors, agencies, open-source communities, commercial products, third-party components, and operational environments.

A single vulnerability, even one introduced through a low-level utility or small component, can have wide impact. That is why the conversation kept returning to guidance, frameworks, and practices that help organizations identify, assess, and mitigate cybersecurity risks throughout the supply chain.

The discussion highlighted several NIST resources and concepts, including cybersecurity supply chain risk management guidance, zero trust architecture, and the Secure Software Development Framework. Together, these approaches support a shift from assumption-based trust to evidence-based trust.

That shift is especially important because modern software often depends on third-party components. If an agency does not know what is inside those components, it may not understand the risks it is inheriting. The panel connected that issue to major supply chain incidents, including SolarWinds, as an example of why component-level visibility matters.

Trust has to be earned, verified, and revisited

Trust was one of the strongest throughlines of the webinar.

The conversation emphasized that agencies need to move away from trusting software based only on vendor reputation. Instead, they need verifiable evidence at the component level. That evidence can include SBOMs, attestations, cryptographic signatures, provenance, integrity checks, vulnerability data, and continuous monitoring.
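The simplest of those evidence types is the integrity check: recompute an artifact's cryptographic hash and compare it to the digest the vendor published or attested to. A minimal sketch, with hypothetical file and digest values:

```python
import hashlib

def verify_digest(artifact_path: str, expected_sha256: str) -> bool:
    """Recompute an artifact's SHA-256 and compare it to a published digest."""
    digest = hashlib.sha256()
    with open(artifact_path, "rb") as f:
        # Stream in chunks so large artifacts do not exhaust memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()

# Hypothetical usage: the expected digest would come from a vendor's
# release notes, signed attestation, or SBOM entry.
# if not verify_digest("vendor-tool.tar.gz", expected):
#     raise SystemExit("digest mismatch: do not deploy")
```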

This idea aligned closely with zero trust. Trust should not be assumed simply because a vendor is known, a tool is already inside the environment, or a system has been approved in the past. Trust must be earned and verified.

That also applies to the defense industrial base. The discussion noted that many of the software and network issues seen across those environments come back to understanding what is actually on the network, what is inside the software, and how those systems are secured.

The takeaway was simple but important: trust is not a one-time decision. It is an ongoing process supported by visibility, evidence, and validation.

Teams need more help turning goals into practice

As the conversation moved into operational pressure points, the panel discussed a common challenge: many organizations agree with the goals of secure software development, but they need more guidance on how to achieve them.

The Secure Software Development Framework is goal-based and intentionally flexible. That makes it useful across many types of environments, but it also means teams often need help understanding what artifacts, processes, or evidence demonstrate that they are meeting those goals.

The panel pointed to several open questions organizations are working through. How should teams prove what they have attested to? How should they manage dependencies at scale, especially with open-source components? How can they safely incorporate AI into the software development process?

These are not small questions. They sit at the center of modern software delivery. Agencies need practical examples, implementation support, and feedback loops that help them translate broad security goals into daily engineering and acquisition decisions.

Assurance and speed are competing pressures

Another major pressure point was the balance between assurance and speed.

Agencies are under pressure to modernize quickly, deliver products faster, consolidate systems, and shift toward product-oriented operating models. At the same time, they need confidence that the software they are building or buying is secure and that risks have been identified, decomposed, and mitigated.

That balance is difficult. Moving too slowly can delay modernization. Moving too quickly without enough assurance can create security and operational risks that are expensive and time-consuming to clean up.

The conversation pointed to SolarWinds as an example of how a single incident can create a long tail of response work. The immediate event may be disruptive, but the months of forensic work and recovery that follow can be even more painful.

The lesson was that agencies need security practices that are built into delivery. To move at speed without leaving themselves exposed, teams need inline, integrated processes that make assurance part of modernization instead of a separate checkpoint at the end.

There is still a maturity gap around software supply chain security

From the industry side, the conversation highlighted a maturity and awareness gap.

Many teams understand traditional cybersecurity. They may think about endpoint protection, virus scans, routine patching, and basic vulnerability management. But software security is another layer, and software supply chain security is another layer beyond that.

That gap matters because software supply chain attacks are becoming more visible and more frequent, especially in open-source ecosystems such as npm, the JavaScript package registry. As these attacks become more common, agencies and technical teams need a clearer understanding of how risk enters through dependencies, build pipelines, libraries, and artifacts.

The panel emphasized the importance of education. Teams need to understand the scope of the problem before they can manage it well. That education also needs to happen in partnership, because government, industry, and technical teams all see different parts of the problem.

SSDF is meant to show goals, not prescribe one path

The webinar then turned to solutions and best practices, starting with the NCCoE DevSecOps project and its connection to the Secure Software Development Framework.

One key point was that SSDF was written to be agnostic to the software development lifecycle. It does not tell organizations exactly how to build software. Instead, it defines the goals organizations should be working toward as they move through the development process.

That flexibility is valuable, but it can also make implementation challenging. The NCCoE DevSecOps project is designed to provide more concrete examples of how a software factory can meet SSDF goals. The project uses a DevSecOps CI/CD pipeline model, with major components in dedicated cloud environments, and demonstrates how different technology components and processes can support SSDF objectives.

The goal is not to create one prescriptive model for every organization. It is to show how one software development enterprise can be decomposed and mapped to SSDF goals so other organizations can compare that approach to their own environments.

Secure software development requires a culture shift

The conversation made clear that SSDF is not just a technical framework. It requires a culture shift.

Panelists discussed the idea of “shifting left,” but also pushed beyond the phrase itself. The real point is to move security earlier into the product lifecycle and make it part of how software is built, rather than treating it as a bolt-on step or a gate near the end.

That means software development teams, product managers, project managers, and security teams all need to understand that security is part of modernization. It is not separate from delivery. It helps teams avoid costly rework, reduce risk earlier, and support faster outcomes over time.

This is a change in how teams think about their work. Security has to be part of the product, part of the cycle, and part of the decision-making process from the beginning.

RMF helps teams decide what not to fix

The Risk Management Framework came up as another practical decision-making tool.

The panel emphasized that RMF should not be used only as a compliance checklist. Its purpose is to help organizations manage risk. Used well, it can help teams decide not only what to fix, but also what not to fix right now.

That distinction matters because vulnerability backlogs can be enormous. Agencies cannot always address every finding at once. Teams need to understand where risk actually lives in their environment, how a vulnerability could be exploited, what the blast radius would be, and whether the issue deserves immediate time and resources.

This is a more mature way to manage security work. Instead of treating every vulnerability the same, teams can use risk context to focus on what matters most.
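To illustrate what risk context can look like in code, a triage score might weight raw severity by asset criticality, known exploitation, and exposure. The fields and weights below are illustrative assumptions, not anything prescribed by RMF:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    severity: float           # e.g., CVSS base score, 0-10
    exploited_in_wild: bool   # e.g., listed in CISA's KEV catalog
    asset_criticality: int    # 1 (lab system) to 5 (mission-stopping)
    internet_facing: bool

def risk_score(f: Finding) -> float:
    """Weight raw severity by real-world exploitability and blast radius."""
    score = f.severity * f.asset_criticality
    if f.exploited_in_wild:
        score *= 2.0   # known exploitation outweighs theoretical severity
    if f.internet_facing:
        score *= 1.5
    return score

def triage(findings: list[Finding], budget: int) -> list[Finding]:
    """Fix the top `budget` findings now; consciously defer the rest."""
    return sorted(findings, key=risk_score, reverse=True)[:budget]
```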

Zero trust is ongoing, not a checkbox

Zero trust was also discussed as a practical strategy, especially for today’s hybrid environments.

The conversation framed zero trust as a way to minimize implicit trust zones and secure resources where they are located. It relies on continuous verification of access based on attributes and telemetry such as identity, device health, context, and resource sensitivity. It also draws on principles and technologies such as least privilege, identity governance, micro-segmentation, software-defined perimeters, secure access service edge (SASE), monitoring, and analytics.
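As a toy illustration of that per-request evaluation, consider a policy function that grants access only when identity and device evidence check out and the session's risk is acceptable for the resource's sensitivity. The attribute names and thresholds are hypothetical; a real deployment would delegate this decision to a policy engine:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool    # e.g., phishing-resistant MFA passed
    device_healthy: bool       # e.g., patched, endpoint agent reporting
    resource_sensitivity: int  # 1 (public) to 5 (mission-critical)
    session_risk: float        # 0.0-1.0, derived from telemetry/analytics

def authorize(req: AccessRequest) -> bool:
    """Evaluate every request on its own evidence; nothing is trusted by default."""
    if not (req.identity_verified and req.device_healthy):
        return False
    # More sensitive resources tolerate less session risk (least privilege).
    max_risk = 1.0 - req.resource_sensitivity * 0.15
    return req.session_risk <= max_risk
```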

The panel was clear that zero trust is not something an organization turns on once. It is not a checkbox. Like SSDF and RMF, it has to be revisited as missions change, technologies evolve, new systems come online, and new vulnerabilities are discovered.

That was a repeated point across the webinar: cybersecurity decisions have to be ongoing. Frameworks and strategies are most valuable when they help organizations adapt as risk changes.

Vulnerability disclosure is shifting from reactive to proactive

The discussion then moved to vulnerability disclosure and risk reduction across the defense industrial base.

The Department of Defense Cyber Crime Center’s vulnerability disclosure program grew out of the Hack the Pentagon effort. The program uses white hat security researchers to identify vulnerabilities across public-facing infrastructure. More recently, a pilot program with about 50 defense industrial base companies allowed researchers to examine public-facing websites and infrastructure through a framework agreement.

The process does not stop at identifying vulnerabilities. There is also an expectation that the vulnerability will be mitigated, whether through patching or another process, and then validated to confirm the issue was addressed.

The important shift is from reactive to proactive. Instead of waiting for researchers to find issues or incidents to occur, teams are beginning to conduct campaigns and targeted hunts around critical vulnerabilities identified across the community.

That proactive mindset is key to reducing risk earlier.

Developers need secure starting points

The conversation also focused on where organizations can reduce risk earlier in the software lifecycle.

A major opportunity is with developers. Developers write the software and often pull in dependencies, libraries, base images, and components. If those inputs are already vulnerable, risk enters the system early.

That is why secure-by-default approaches matter. The discussion covered hardened base images, secure libraries, minimal container images, stronger build pipelines, and faster remediation. The goal is to make secure software easier to build from the start.

A useful point from the conversation was that teams cannot patch their way out of a container with hundreds of CVEs. If vulnerabilities are not there in the first place, teams can move faster and spend less time remediating old issues.
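One way to make that point visible is to tally a scanner's findings for a conventional base image against a minimal, hardened one. The sketch below assumes a Trivy-style JSON report, where findings sit under Results[].Vulnerabilities[]; the file names are hypothetical:

```python
import json
from collections import Counter

def severity_counts(report_path: str) -> Counter:
    """Tally findings by severity from a Trivy-style JSON scan report."""
    with open(report_path) as f:
        report = json.load(f)
    counts: Counter = Counter()
    for result in report.get("Results", []):
        # Some scanners emit null instead of an empty list for clean targets.
        for vuln in result.get("Vulnerabilities") or []:
            counts[vuln.get("Severity", "UNKNOWN")] += 1
    return counts

print("standard base :", severity_counts("scan-standard-base.json"))
print("hardened base :", severity_counts("scan-hardened-base.json"))
```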

The broader message was to stop patching the past and start building the future. Secure defaults can give developers time back and reduce risk before software reaches production.

Trusted information sharing can raise security across the ecosystem

Collaboration and information sharing were another major part of the discussion.

The panel highlighted a collaborative information-sharing environment with roughly 1,350 defense industrial base companies. Through that model, organizations share threat intelligence voluntarily and anonymously across the partnership.

The goal is to raise security across the ecosystem, especially for small and medium-sized businesses that may not have large technical teams. In some cases, the same person may be acting as CEO, CIO, and CISO. In other cases, organizations may rely on basic tools and have limited cybersecurity capacity.

For that model to work, trust is essential. Organizations need to believe that sharing information will help them and others without increasing their own risk exposure. Building that trust takes time, repeated engagement, analyst-to-analyst exchanges, technical exchanges, regional partner engagement, and clear communication about how information will be used.

The takeaway was that sharing only works when organizations feel safe enough to participate.

Collaboration turns guidance into something usable

The panel also discussed why collaboration is important for turning cybersecurity guidance into practical implementation.

At the National Cybersecurity Center of Excellence, collaboration between the public and private sectors is central to how projects are designed. The center identifies difficult cybersecurity problems, brings together public and private sector partners, and uses real-world technologies to solve those problems.

That helps bridge the gap between best practices and real-world guidance. It also forces guidance developers to interpret their own documents from the perspective of people who actually have to use them.

This kind of feedback loop matters. Cybersecurity guidance can be comprehensive, but practitioners need to understand how it works in real environments. Collaboration helps make that translation possible.

Playbooks and lessons learned may matter more than short-lived indicators

The conversation also examined what kinds of information sharing are most valuable.

Indicators of compromise can help with immediate detection and blocking, but they are often short-lived. Attackers can change IP addresses, infrastructure, or other indicators quickly, which means an IOC can expire in minutes.

That does not make IOCs useless, but it does limit their long-term value. The panel emphasized that implementation playbooks, mitigation guidance, and lessons learned can often be more durable. If one agency or organization has already solved a problem, others can benefit from knowing what worked, what did not, and how the solution was implemented.
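That short shelf life can even be modeled directly: tag each indicator with a first-seen timestamp and age it out on a type-appropriate lifetime. The TTL values below are illustrative assumptions, not community standards; network indicators rot fast, while file hashes last much longer:

```python
from datetime import datetime, timedelta, timezone

# Illustrative shelf lives only; tune these to observed attacker behavior.
IOC_TTL = {
    "ip": timedelta(hours=1),
    "domain": timedelta(days=7),
    "file_hash": timedelta(days=365),
}

def still_actionable(ioc_type: str, first_seen: datetime) -> bool:
    """Drop indicators that have likely outlived the attacker infrastructure.

    `first_seen` must be timezone-aware.
    """
    ttl = IOC_TTL.get(ioc_type, timedelta(hours=1))
    return datetime.now(timezone.utc) - first_seen <= ttl
```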

The conversation also touched on post-incident learning. There is often fear of being in the headlines, but the panel emphasized the value of no-fault or learning-oriented postmortems that help the broader community improve. There still needs to be accountability when negligence or bad practices are involved, but organizations also need ways to share lessons faster.

Different organizations need different kinds of help

Another important point was that software supply chain security cannot be approached as if every organization has the same resources.

Large organizations may have the staff, tools, and funding to implement advanced practices at scale. Smaller agencies and smaller businesses may not. They still have important missions, but they often have to make harder tradeoffs about where time and money go.

The panel discussed the need for partnerships that reach into those organizations and assist them. That may include shared services, inherited controls, cloud services, platform approaches, FedRAMP-related models, or support from larger agencies and partners.

The discussion also emphasized that information has to be usable. Providing a smaller organization with highly technical data may not help if they do not have the staff to interpret it or act on it. Effective partnership means giving organizations information and support they can actually use.

Partnerships have to be two-way

The panel was clear that collaboration cannot be one-sided.

Effective partnerships require feedback, shared value, and practical support. Too often, organizations are asked to provide information without receiving useful insight in return. That kind of model does not build trust.

For partnerships to work, especially with smaller organizations, the information being shared must be understandable, actionable, and useful. Automated solutions may help, but the core point is that partnerships have to be real partnerships.

The same theme applied to government-industry collaboration. Vendors and industry partners can help agencies make small improvements, get started, and build iteratively rather than trying to pull organizations too far into the future at once.

Industry can help build standards, tools, and momentum

The industry perspective added another layer to the collaboration discussion.

Government often works on longer timelines, especially when developing standards, policy, or statutes. Industry, particularly software companies, may move much faster. That difference can create challenges, but it also creates opportunity.

The panel discussed the role of industry in building tools, contributing to standards, and supporting implementation. Examples included software supply chain standards, open-source security efforts, and tools for attesting and verifying software artifacts.

The point was not that industry should replace government guidance. It was that standards and tools can be developed in partnership. Government brings mission needs, policy direction, and public trust. Industry brings implementation speed, engineering capacity, and technical tooling.

That partnership can help move software supply chain security from principle to practice.

The first practical step is knowing what you have

As the webinar closed, panelists were asked what practical step organizations can take now to reduce software supply chain risk.

One answer was inventory.

Organizations need to know what software they have, especially the 10 or 15 systems that would stop the mission if they were compromised. Without that knowledge, teams cannot prioritize, protect, or recover effectively.

Inventory may sound basic, but it is foundational. You cannot protect what you do not know you have. It is also the starting point for more advanced capabilities such as autonomous defense, self-healing networks, policy as code, and deeper defense-in-depth strategies.
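Even a spreadsheet-grade inventory can be captured in a few lines and then queried for the systems that matter most. A minimal sketch, with hypothetical fields:

```python
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    name: str
    owner: str
    mission_critical: bool   # would the mission stop without it?
    components: list[str] = field(default_factory=list)  # SBOM refs, images

def crown_jewels(inventory: list[SystemRecord]) -> list[SystemRecord]:
    """Surface the handful of systems that would stop the mission if compromised."""
    return [s for s in inventory if s.mission_critical]
```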

Use SSDF as a guide for finding gaps

Another practical recommendation was to read and use the Secure Software Development Framework if your organization produces software.

The panel suggested using SSDF as a structure for describing your own process, identifying gaps, and finding opportunities for improvement. Organizations can also provide feedback and use resources that map guidance to existing NIST documents.

This reflects a broader theme from the webinar: frameworks are useful when they help organizations ask better questions. Where are the gaps? What evidence exists? What practices are already in place? What needs to improve?

Focus on third-party components

The discussion also returned to the importance of third-party components, including commercial and open-source components.

One practical step is to collect SBOMs from vendors, scan them for known vulnerabilities, and validate that components have not been tampered with. This helps organizations understand what is included in the software they use and where known risks may exist.
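As a sketch of what that step can look like, the code below checks each package URL (purl) in a CycloneDX-style SBOM against the public OSV.dev vulnerability database. The endpoint and response shape follow OSV's documented v1 API; error handling, rate limiting, and tamper checks are omitted:

```python
import json
import urllib.request

OSV_QUERY = "https://api.osv.dev/v1/query"

def known_vulns(purl: str) -> list[str]:
    """Return IDs of known vulnerabilities affecting a versioned purl."""
    body = json.dumps({"package": {"purl": purl}}).encode()
    req = urllib.request.Request(
        OSV_QUERY, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return [v["id"] for v in json.load(resp).get("vulns", [])]

def scan_sbom(sbom_path: str) -> None:
    """Check every purl recorded in a CycloneDX-style SBOM against OSV."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    for component in sbom.get("components", []):
        purl = component.get("purl")
        if purl and (hits := known_vulns(purl)):
            print(f"{purl}: {', '.join(hits)}")
```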

This is where the future appears to be heading: continuous verification to achieve trust in every component. As environments become more complex, much of that work will likely become increasingly automated through advanced tools and AI-assisted analysis.

What leaders can apply now

Taken together, the discussion offered a clear set of practical lessons for agencies and organizations working to reduce software supply chain risk.

The first is to know what is in the environment. Organizations need a reliable inventory of their most important software, systems, components, and dependencies.

The second is to move from implicit trust to verified trust. SBOMs, attestations, provenance, integrity checks, and continuous monitoring help agencies understand what they are using and whether it can be trusted.

The third is to use frameworks as decision-making tools. SSDF, zero trust, and RMF are most useful when they guide ongoing security decisions, not when they are treated only as compliance checklists.

The fourth is to reduce risk earlier. Secure-by-default artifacts, minimal images, hardened libraries, and stronger build pipelines can help prevent vulnerabilities from entering the software lifecycle in the first place.

The fifth is to collaborate in ways that are trusted and usable. Threat intelligence, playbooks, vulnerability data, lessons learned, and industry partnerships all matter, but they need to be shared in ways organizations can act on.

Finally, leaders should prepare for a future built around continuous verification. Software supply chain security is not a one-time review. It is an ongoing effort to understand what is in the software, verify that it can be trusted, and adapt as threats, systems, and missions change.

Continue the conversation

Watch the on-demand webinar to hear the full discussion on software supply chain risk, secure software development, zero trust, vulnerability management, and cross-sector collaboration. You can also explore additional Leadership Connect resources as we continue convening leaders across government and industry to share practical lessons on strengthening public sector cybersecurity.

Thank you again to Chainguard for partnering with Leadership Connect on this conversation. For more on secure-by-default software and public sector software supply chain risk reduction, explore Chainguard’s resources:

  • Secure-by-default: Chainguard customers unaffected by the Trivy supply chain attack: view here!
  • Learn more about Chainguard’s public sector work: learn more!

To learn more about Leadership Connect and access additional insights from government and industry leaders, visit our website and explore our products!