NSF Funds $10 M for AI Security

Recent advances in machine learning have vastly improved the capabilities of computational reasoning, exceeding human-level performance in many tasks. However, significant vulnerabilities remain: image recognition systems can be easily deceived, malware detection models can be evaded, and even the models meant to catch these problems can themselves be attacked and manipulated.

The NSF (https://www.nsf.gov) “Secure and Trustworthy Cyberspace” (SaTC) program includes a $78.2 million portfolio of more than 225 new projects in 32 states, spanning a broad range of research topics including artificial intelligence, cryptography, network security, privacy, and usability.

The $10 million award will fund the new NSF Frontier project “Center for Trustworthy Machine Learning” (CTML, https://ctml.psu.edu), a consortium of seven universities including the University of California San Diego.

The grant will be led by Principal Investigator Patrick McDaniel and William L. Weiss at Pennsylvania State University. Researchers at Stanford University, the University of Virginia, the University of Wisconsin, and the University of California, Berkeley will also work on the project. The team will work to understand the risks inherent in machine learning and then develop the tools, metrics, and methods to manage and mitigate those risks.

“This research is important because machine learning is becoming more pervasive in our daily lives, powering technologies we interact with including services like e-commerce and internet searches as well as the use of devices such as internet-connected smart speakers,” said Kamalika Chaudhuri, who will lead the UC San Diego portion of the research.

According to Jim Kurose, Assistant Director for Computer and Information Science and Engineering at NSF, “The NSF Frontier CTML project will develop an understanding of vulnerabilities in today’s machine learning approaches, along with developing methods for mitigating against these vulnerabilities to strengthen future machine learning-based technologies and solutions.”

The five-year CTML project will focus on three interconnected, parallel machine learning research thrusts:

  • Investigating methods to defend a trained model from adversarial inputs
  • Exploring rigorously grounded measures of model and training data robustness
  • Identifying ways adversaries may abuse generative machine learning models and developing countermeasures to defend against such attacks
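
To give a sense of the first thrust, the sketch below shows how an "adversarial input" can flip the output of a toy classifier. This is a minimal illustration using the well-known fast gradient sign method on a hypothetical linear model, not code from the CTML project; all weights and data are made-up assumptions.

```python
import numpy as np

# Toy logistic-regression "model" with illustrative random weights
# (an assumption for this sketch, not a real deployed system).
rng = np.random.default_rng(0)
w = rng.normal(size=10)
b = 0.0

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=10)  # a benign input

# Fast gradient sign method: for cross-entropy loss with true label 1,
# the loss gradient w.r.t. x points along -w, so perturbing the input
# by -eps * sign(w) increases the loss and lowers the class-1 score.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the adversarial score is strictly lower
```

Even this tiny perturbation, invisible as noise in a high-dimensional input like an image, reliably degrades the model's confidence; defending trained models against such inputs is exactly the kind of problem the first thrust targets.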


For more information, email Patrick McDaniel at mcdaniel@cse.psu.edu.