The Future of Federal Contracting: Innovation, Oversight, and Industry Partnerships
Federal contracting is entering a period of accelerated change. Agencies are working through acquisition reform, contract consolidation, emerging AI use cases, cybersecurity requirements, and growing expectations around transparency, accountability, and efficiency.
At the same time, the basics still matter. Agencies need to buy the right tools, manage risk, work with trusted partners, and deliver mission outcomes without getting slowed down by unnecessary complexity.
That reality shaped Leadership Connect’s Cocktails & Convos event, “The Future of Federal Contracting: Innovation, Oversight, and Industry Partnerships,” held on April 23 at Leadership Connect’s Washington, D.C. office. The discussion brought together perspectives from government, industry, and the broader contracting community to explore what is changing, what still feels stuck, and what leaders should be watching next.
The conversation moved from acquisition reform and AI to contract consolidation, cybersecurity, zero trust, protests, education, and the role of human judgment in an increasingly automated environment. It stayed grounded in practical experience, with a clear focus on what these changes mean for agencies, vendors, and mission delivery.
Couldn’t attend live? View the event here and make sure to follow our events page to join the next conversation. Below are the key themes that shaped the discussion.
Acquisition reform is moving, but the process still feels slow
The conversation opened with a familiar tension in federal contracting: everyone knows change is needed, but meaningful change takes time.
The panel discussed updates to the Federal Acquisition Regulation (FAR) and broader acquisition reform efforts. While new guidance is starting to move through the system, the conversation made clear that guidance alone does not immediately change behavior. Contracting culture has been shaped by years of rules, risk concerns, legal requirements, and established processes.
Even when reforms are intended to create more agility, contracting officers and program teams still have to operate carefully. They are responsible for following the law, protecting fairness, and avoiding mistakes that could create legal or mission risk.
That means acquisition reform is not only a policy issue. It is also a culture issue. Agencies may receive new guidance, but teams still need time, clarity, and confidence before those changes show up in everyday procurement decisions.
Contract consolidation is one of the clearest areas of progress
One of the strongest examples of practical modernization was contract consolidation.
The conversation highlighted the problem of agencies maintaining multiple contracts for similar tools or services. In one example, a department had 12 separate ServiceNow contracts. That kind of duplication drives unnecessary costs and creates extra work for contracting officers, program managers, contracting officer's representatives, and technical teams.
The takeaway was simple: agencies can often improve efficiency by reducing redundant contracts and managing common tools at the enterprise level.
This is especially important in large, federated agencies where bureaus or offices have historically made separate IT decisions. Over time, that structure can create overlapping contracts, inconsistent tools, and fragmented management. Moving toward enterprise IT and consolidated contracts can help agencies reduce costs, strengthen oversight, and make technology environments easier to manage.
For industry, this shift matters too. Vendors may need to think less about isolated office-level sales and more about how their solutions fit broader enterprise needs.
AI could reduce acquisition workload, especially in proposal review
AI came up quickly and remained a major part of the conversation.
One of the most practical use cases discussed was proposal review. Today, acquisition teams often have to gather groups of people to evaluate proposals, review documentation, score submissions, and compare responses against stated criteria. That process can take significant time and pull people away from their normal work.
The conversation explored how AI could support this process by reviewing proposals, comparing them against evaluation criteria, identifying strengths and weaknesses, and helping teams understand which submissions best align with cost, value, and capability requirements.
The point was not that AI should make final award decisions on its own. Rather, AI could reduce manual burden and make evaluation more efficient. It could help teams move faster, especially when much of the process involves reading, organizing, and comparing large amounts of information.
Still, the discussion acknowledged that human judgment remains part of the process. AI can assist with analysis, but people still define the criteria, understand the mission, evaluate relationships, and remain accountable for final decisions.
AI expectations are high, but not every promise is real yet
The conversation also made clear that while AI has real potential, expectations can get ahead of reality.
Several examples focused on vendors claiming that AI can automate complex work simply by ingesting agency data. One area discussed was governance, risk, and compliance (GRC). In theory, AI could help review controls, identify risks, and support compliance workflows. In practice, some tools are not yet able to deliver what vendors claim.
That gap matters for agencies and contractors. AI can be powerful, but it is not magic. It still depends on good data, clear instructions, defined goals, and proper oversight.
The conversation reflected a shift many leaders are experiencing. Early generative AI tools may have seemed like a more advanced version of search. Now, with agentic AI and workflow automation, the technology feels more capable and more consequential. But agencies still need to test tools against real use cases before assuming they can replace existing processes.
The practical takeaway is to embrace AI, but stay grounded. Leaders should ask what the tool can actually do today, what data it needs, what risk it introduces, and where human review still matters.
Accountability and oversight remain hard to define
Another important theme was accountability.
In large acquisition and technology decisions, many people may review, approve, or sign off on a contract. But the conversation raised a practical question: who is ultimately accountable?
That question becomes even more important after a product or service enters the environment. From a cybersecurity and operational standpoint, leaders need to understand who accepts the risk, who manages implementation, and who is responsible if something goes wrong.
This matters in contracting because the award is not the end of the process. Once a tool is purchased, agencies still need to implement it securely, manage access, monitor performance, and evaluate risk over time.
The takeaway was that innovation cannot be separated from accountability. Agencies can move faster and adopt new tools, but they still need clear ownership for risk decisions and implementation outcomes.
Pre-competed vehicles and internal storefronts can help agencies move faster
The conversation included several examples of ways agencies can make procurement easier without waiting for the entire acquisition system to change.
One example was the Continuous Diagnostics and Mitigation program, commonly known as CDM. For cybersecurity tools, CDM can give agencies access to pre-approved vendors and products, helping them move faster than they might through a traditional acquisition process.
Another example was an internal hardware “storefront” model. Instead of having different offices purchase laptops, phones, or tablets through separate contracts, users could select from a pre-approved list of devices. The request would then move through supervisor approval, funding, and fulfillment.
These examples point to a broader lesson: agencies can reduce friction by standardizing common purchases and using pre-competed options where possible.
This does not remove the need for oversight. It creates a more efficient process for recurring needs, especially when agencies already know what tools or hardware are approved and widely used.
Government-industry partnerships work best when they are real partnerships
The conversation also focused on what strong government-industry partnerships look like in practice.
A strong partnership is not just a vendor delivering a product. It requires shared understanding, respect, and a clear view of the mission. Agencies rely on industry partners for technical expertise, implementation support, and additional capacity. Industry partners, in turn, need to understand agency constraints, including procurement rules, budget limitations, cybersecurity requirements, and oversight obligations.
The discussion emphasized that government cannot do everything alone. In areas like zero trust, cybersecurity, and IT modernization, industry partners often play a major role in helping agencies move from strategy to implementation.
The most effective partnerships are built over time. They depend on trust, honesty, and a shared commitment to solving real problems, not just selling a tool.
Contract protests are a major planning factor
The conversation also touched on a common frustration in federal contracting: protests.
Protests play an important role in preserving fairness and accountability. But the discussion highlighted how frequently agencies and vendors expect protests to happen, especially on major contracts. In some cases, teams plan for a protest period because they assume one will occur.
That expectation can slow down mission delivery. Even when an agency ultimately prevails, the process adds time, uncertainty, and administrative burden.
The takeaway was not that protests should disappear. Rather, the conversation raised a question about whether the current process leaves too much room for delay when a vendor simply does not like the outcome.
This is another example of the balance federal contracting has to strike. The system needs fairness and due process, but it also needs to deliver needed tools and services on a realistic timeline.
AI is changing the competitive landscape
Audience questions expanded the discussion into how AI may reshape the vendor market.
The conversation compared today’s AI boom to the early internet era. Many startups are entering the market with new ideas, new tools, and major funding. Some may become long-term leaders. Others may disappear as the market matures.
For federal contracting, that creates both opportunity and uncertainty. AI may allow smaller or newer vendors to build powerful tools and compete in areas that once required much larger teams. At the same time, the market may eventually consolidate around a smaller number of dominant players.
This creates a challenge for agencies. They need to stay open to innovation, but they also need to evaluate vendors carefully. New tools may be promising, but agencies still need to understand security, performance, reliability, and long-term support.
Relationships still matter, even with AI in the process
One audience question focused on the role of relationships in a world where AI may help evaluate proposals or support acquisition decisions.
The discussion acknowledged what many in federal contracting already know: relationships still matter. Government and industry teams build trust over time. Agencies may prefer working with vendors who understand their mission, have delivered successfully before, or have shown they can solve problems in a responsible way.
AI may help compare proposals against criteria, but it does not remove the human context around trust, past performance, communication, and mission fit.
The discussion also raised an important point about where human judgment may move in the process. If AI helps with later-stage review, humans may need to spend even more time earlier in the process shaping requirements, defining success, and setting evaluation criteria.
In other words, AI may change the workflow, but it does not remove the need for thoughtful relationship-building and strong acquisition planning.
AI is useful, but it reflects the information it learns from
Another major part of the audience discussion focused on AI bias, truth, and representation.
AI systems learn from available information. If certain perspectives are overrepresented online and others are underrepresented, AI outputs can reflect that imbalance. That matters for government, where decisions and tools need to serve broad and diverse communities.
The conversation made clear that AI does not automatically produce truth. It produces outputs based on patterns in the information it has learned from. If the available information is incomplete, biased, or misleading, the output can be too.
This is especially important when AI is used in public sector environments. Agencies and contractors need to understand the limits of AI-generated answers and avoid treating them as neutral or complete.
The takeaway was that AI requires both technical controls and human awareness. Leaders need to ask where the data came from, whose perspectives may be missing, and how outputs should be reviewed before they are used.
Sensitive data and intellectual property remain major concerns
The discussion also explored the risk of putting sensitive information into AI tools.
This issue is especially relevant for companies and agencies that need to protect intellectual property, procurement-sensitive information, cybersecurity data, or internal documents. Once information is entered into certain AI tools, organizations may lose control over how that information is stored, used, or exposed.
The conversation framed this as a risk management issue. Organizations can choose to block tools, allow limited use, or create policies and technical controls that guide what employees can and cannot enter into AI systems.
There is no risk-free option. Blocking AI may reduce exposure, but it can also limit productivity and experimentation. Allowing AI use can create efficiency, but only if employees understand the boundaries.
The practical lesson is that organizations need clear rules, not vague warnings. Employees need to know what is acceptable, what is prohibited, and why those boundaries matter.
AI creates workforce and education questions that go beyond contracting
Audience questions also pushed the conversation beyond procurement and into education.
The discussion raised concerns about whether the U.S. is preparing future leaders with enough data science, computational thinking, and AI literacy. If students are not exposed to these skills early enough, the future workforce may struggle to keep pace with the technology.
This matters for federal contracting because agencies and industry will both need people who understand AI, cybersecurity, data, and risk. Procurement teams do not need to become engineers, but they do need enough technical understanding to evaluate tools, ask better questions, and manage risk.
The takeaway was that AI readiness is not only about buying tools. It is also about building the workforce needed to use them responsibly.
Robots, automation, and AI are raising bigger questions about the future of work
The conversation also moved into broader questions about robotics and automation.
While this was a lighter part of the discussion, it connected to a serious theme: AI is no longer limited to written outputs or analysis. It is increasingly tied to physical systems, automated workflows, cybersecurity response, and tools that may act with less direct human involvement.
That raises bigger questions about what work will look like, how much autonomy systems should have, and where organizations need safeguards.
The discussion did not try to answer every question about robotics or future automation. Instead, it reinforced the broader theme of the evening: powerful tools need thoughtful controls, clear purpose, and responsible implementation.
Zero trust still matters in an AI-enabled environment
The event closed with a discussion of zero trust and whether AI changes the zero trust model.
The conversation emphasized that AI does not replace zero trust. Instead, AI can help support it.
Zero trust starts from the idea that organizations should not automatically trust users, devices, applications, or systems just because they are inside the network. Agencies need to understand what assets they have, how applications communicate, and what access is truly needed.
That approach remains critical as AI tools become more embedded in agency environments. If anything, AI makes visibility and access control more important. More systems, services, and automated tools are interacting with data, which means agencies need stronger controls around what can communicate, what can act, and what should be limited.
AI may help identify application flows, monitor activity, and enforce policies faster. But the underlying zero trust principles remain the same: visibility, least privilege, segmentation, and continuous monitoring.
What leaders can apply now
Taken together, the themes of the evening offered a practical look at where federal contracting is headed.
The first lesson is that acquisition reform will take more than updated guidance. Agencies need cultural change, workforce support, and clearer confidence in how to move faster while staying compliant.
The second is that consolidation can create real value. Reducing duplicate contracts and standardizing common purchases can save money, reduce workload, and improve enterprise oversight.
The third is that AI has major potential in acquisition, cybersecurity, compliance, and proposal review, but leaders should stay grounded about what it can do today. AI works best when the goals, criteria, data, and controls are clear.
The fourth is that relationships still matter. Strong government-industry partnerships depend on trust, mission understanding, and a shared commitment to solving real problems.
Finally, leaders should treat oversight as part of innovation, not a barrier to it. Federal contracting can move faster, but only if agencies and contractors maintain accountability, protect sensitive information, and manage risk as they modernize.
To learn more about Leadership Connect and access additional insights from government and industry leaders, visit our website and explore our products!