
DoD releases new AI adoption strategy building on industry advancements

“Meanwhile, as commercial tech companies and others continue to push forward the frontiers of AI, we're making sure we stay at the cutting edge with foresight, responsibility and a deep understanding of the broader implications for our nation,” Deputy Defense Secretary Kathleen Hicks said.

Low-poly brain, symbolizing artificial intelligence: an abstract blue wireframe illustration of a human brain on a dark background. (Getty Images)

WASHINGTON — The Pentagon today released a new strategy to accelerate the department’s adoption of artificial intelligence capabilities, one which accounts for industry advancements in federated environments, decentralized data management and generative AI tools like large language models.

But relying on commercial capabilities means the technologies available may not yet be compatible with the department’s own ethical AI principles, Deputy Secretary of Defense Kathleen Hicks acknowledged to reporters at the Pentagon. 

“Unlike some of our strategic competitors, we don’t use AI to censor, constrain, repress or disempower people,” Hicks said. “By putting our values first and playing to our strengths, the greatest of which is our people, we’ve taken a responsible approach to AI that will ensure America continues to come out ahead.

“Meanwhile, as commercial tech companies and others continue to push forward the frontiers of AI, we’re making sure we stay at the cutting edge with foresight, responsibility and a deep understanding of the broader implications for our nation,” she added.

The “2023 Data, Analytics, and AI Adoption Strategy” [PDF] is the first update to DoD’s AI Strategy since the 2018 edition. That older strategy designated the now-defunct Joint AI Center as the “focal point” for carrying out its vision. The JAIC was subsumed into the Chief Digital and AI Office (CDAO) — which was stood up last year as the Pentagon’s central hub for all things AI — along with the Defense Digital Service and ADVANA teams. 

“In 2018, the then-JAIC focused on building a centralized AI/[machine learning] pipeline and that makes a lot of sense for 2018 because even industry hadn’t yet figured out how to deliver that as a product to customers,” Chief Digital and Artificial Intelligence Office head Craig Martell said in a briefing with reporters today.

“But in 2022, every one of the major vendors deliver[ed] a robust and industrial scale MLOps pipeline. So there’s really no need for us to build that internally … And so our view now is let’s let any component use whichever MLOps pipeline they need as long as they’re abiding by the patterns of behavior that we need them to abide by.”

That will include things like how the AI/ML model is monitored and evaluated, and how data is labeled and made accessible, he added.

According to the strategy, DoD will focus on several strategic efforts that support the “AI Hierarchy of Needs,” which starts with quality data as its foundation, followed by analytics and metrics, with responsible AI at the top. The pyramid will help assess DoD’s AI readiness.

Artificial Intelligence Ethics

DoD last year unveiled its long-awaited Responsible AI (RAI) Strategy and Implementation Pathway, which acknowledged the Pentagon wouldn’t be able to maintain a competitive advantage without transforming itself into an AI-ready and data-centric organization that holds RAI as a prominent feature. Prior to the RAI strategy and implementation pathway, DoD adopted five broad principles for the ethical use of AI: responsible, equitable, traceable, reliable and governable. 

Hicks said today that DoD is “mindful of the potential risks and benefits offered by large language models and other generative AI tools,” and that Task Force Lima, established in August, will aim to responsibly adopt and implement those technologies.

“Candidly, most commercially available systems enabled by large language models aren’t yet technically mature enough to comply with our ethical AI principles, which is required for responsible operational use,” she said. “But we have found over 180 instances where such generative AI tools could add value for us with oversight, like helping to debug and develop software faster, speeding analysis of battle damage assessments … not all of these use cases are notional.”

The Pentagon in 2021 stood up the AI and Data Acceleration initiative, under which flyaway teams of data and AI technical experts are sent to the military’s 11 combatant commands to help them better understand their data and create AI tools to streamline decision-making. 

“All of this and more is helping realize Combined Joint All Domain Command and Control, CJADC2,” Hicks said. “To be clear, CJADC2 isn’t a platform or single system we’re buying. It’s a whole set of concepts, technologies, policies and talent that are advancing a core US warfighting function — the ability to command and control forces.”

The AI strategy will be followed by an implementation plan that will be released in the next couple of months and will not look like a “traditional” implementation plan, Martell said.

“Each of the services have wildly different needs,” he said. “And they’re at wildly different points in their journey and they have wildly different infrastructure. So we’re going to insist on patterns of shareability, patterns of accessibility, patterns of discoverability and how those are implemented we’re going to allow a lot of variance for.”

Today’s announcement follows Monday’s executive order from the White House that was hailed by the Biden administration as one of the “most significant actions ever taken by any government to advance the field of AI safety” in order to “ensure that America leads the way” in managing risks posed by the technology.

The executive order directed DoD to establish a pilot program to identify how AI can find vulnerabilities in critical software and networks and to develop plans to attract more AI talent, among other things. Analysts have raised concerns about whether the executive order could stifle DoD innovation, as it directs that “developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.”

“I think the biggest implication for DoD is how this will impact acquisition because … anybody who’s developing AI models and wanting to do business with the DoD is going to have to adhere to these new standards,” Klon Kitchen, the head of the global technology policy practice at Beacon Global Strategies, told Breaking Defense Monday.