Apostol Vassilev
Research Manager, Computer Security Division, NIST
Biography
Apostol Vassilev is a research manager in the Computer Security Division at NIST. His group's research agenda covers topics in Trustworthy and Responsible AI, with a focus on Adversarial Machine Learning and Robust AI for Autonomous Vehicles. Vassilev works closely with academia, industry, and government agencies on the development and adoption of standards in AI. He holds a Ph.D. in mathematics. Vassilev has been awarded a bronze medal by the U.S. Commerce Department, and his work has been profiled in The Wall Street Journal, Politico, VentureBeat, Fortune, Forbes, and The Register, as well as in podcasts and webinars. Apostol frequently speaks at conferences.
Presentation
AI Risks and Rewards: Calculus for the Future
Artificial intelligence (AI) systems have been on a global expansion trajectory for several years, with the pace of their development and adoption accelerating worldwide. These systems are being widely deployed into the economies of numerous countries, leading to the emergence of AI-based services for people to use in many spheres of their lives, both real and virtual. Based on their capabilities, AI systems fall into two broad classes: Predictive AI (PredAI) and Generative AI (GenAI). Although industrial applications of AI are still dominated by PredAI systems, GenAI systems are beginning to be adopted in business. When adopted responsibly, GenAI systems can also improve worker productivity and quality of service.
As these systems permeate the digital economy and become inextricably essential parts of daily life, the need for their secure, robust, and resilient operation grows.
However, despite the significant progress AI has made, these technologies are also vulnerable to attacks that can cause spectacular failures with dire consequences. In this talk, we will provide an overview of the main sources of risk and the categories of attacks on AI systems, and propose directions for increasing their robustness.