Artificial intelligence is rapidly becoming one of the most important assets in global competition, including AI-assisted autonomy and decision-making in battlefield applications. However, today's AI models are vulnerable to novel cyberattacks and could be exploited by adversaries.
Moreover, the models are not sufficiently robust and dependable to orchestrate and execute inherently human-centric, mission-critical decisions.
"AI and autonomous vehicles have great potential to let our military operate in contested environments without having to needlessly put our brave men and women in harm's way, as long as we can trust the AI," said U.S. Rep. Chuck Fleischmann. "ORNL and Vanderbilt University have the infrastructure and expertise to develop solutions that will give national security leaders the confidence that these AI systems are secure, reliable and dependable."
Under a new partnership announced this week during the Tennessee Valley Corridor 2024 National Summit in Nashville, Vanderbilt and ORNL will build on complementary research and development capabilities to create science-based AI assurance methods that:
- Ensure AI-enabled systems deployed for national security missions are able to function in the most challenging and contested environments.
- Test and evaluate the resilience and performance of AI tools at large scales in mission-relevant environments.
- Provide decision-makers with the confidence to rapidly adopt and deploy AI-enabled technologies to maintain U.S. competitive advantage.
Vanderbilt's basic and applied research in the science and engineering of learning-enabled cyber-physical systems, particularly through the renowned Vanderbilt Institute for Software Integrated Systems, provides a foundation for AI assurance research.
Building on expertise in high-performance computing, data sciences and national security sciences, ORNL recently established the Center for Artificial Intelligence Security Research, or CAISER, to address emerging AI threats. CAISER leads AI security research and AI evaluation at scale and is capable of training and testing the largest AI models.
The partnership will initially focus on enabling the U.S. Air Force to fully utilize autonomous vehicles, such as the AI-enabled X-62A VISTA that recently took Air Force Secretary Frank Kendall for a flight featuring simulated threats and combat maneuvers without human intervention.
Together, Vanderbilt and ORNL will provide evidence-based assurance that enables Air Force systems to meet DoD's requirements for Continuous Authorization to Operate in vital national security roles.
"The growth in AI applications is breathtaking, most notably in the commercial marketplace but increasingly in the national defense space as well. While all users of AI are concerned about security and trust of these systems, none is more concerned than the DoD, which is actively developing processes to ensure their appropriate use," stated Mark Linderman, chief scientist at Air Force Research Laboratory's Information Directorate. "This partnership will advance the science to enable the U.S. Air Force to confidently field autonomous vehicles, such as the AI-enabled X-62A VISTA, improve situation awareness and accelerate human decision making."
Autonomous vehicles capable of operating truly independently could be a game-changer for the U.S. military.
The collaborative new research program at Vanderbilt and ORNL continues Tennesseeβs tradition of helping the U.S. maintain global leadership.