The ETSI Securing Artificial Intelligence Industry Specification Group (SAI ISG) has released its first Group Report, ETSI GR SAI 004, which gives an overview of the problem statement for securing AI. ETSI SAI is the first standardization initiative dedicated to securing AI.
The Report describes the problem of securing AI-based systems and solutions, with a focus on machine learning, and the challenges relating to confidentiality, integrity and availability at each stage of the machine learning lifecycle. It also points out some of the broader challenges of AI systems, including bias, ethics and explainability. A number of different attack vectors are outlined, along with several real-world use cases and attacks.
To identify the issues involved in securing AI, the first step was to define AI itself. For the ETSI group, artificial intelligence is the ability of a system to handle representations, both explicit and implicit, and procedures to perform tasks that would be considered intelligent if performed by a human. This definition still covers a broad spectrum of possibilities. However, a limited set of technologies is now becoming feasible, largely driven by the evolution of machine learning and deep learning techniques, and by the wide availability of the data and processing power required to train and implement such technologies.