There are many ways to attack machine learning systems, and most data science teams are not equipped with the skills to secure them. In this talk, we will cover the different ways these systems can be attacked and share relevant strategies to protect them.
Designing and building machine learning systems requires a great deal of skill, time, and experience. Data scientists, developers, and ML engineers work together to build ML systems and pipelines that automate different stages of the machine learning process. Once an ML system has been set up, it needs to be properly secured to prevent it from being hacked and compromised.
Some attacks are customized to take advantage of vulnerabilities present in certain libraries, while others exploit weaknesses in the custom code written by ML engineers. There are many ways to attack machine learning systems, and most data science teams are not equipped with the skills required to secure the systems they build. In this talk, we will discuss the cybersecurity attack chain in detail and how it affects a company's strategy when setting up different layers of security. We will cover the different ways ML systems can be attacked and compromised, and along the way we will share relevant strategies to mitigate these attacks. Finally, we will discuss the different types of attacks on data privacy and ML model privacy, which include the membership inference attack, the model extraction attack, and the attribute inference attack.
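To give a flavor of the privacy attacks mentioned above, here is a minimal sketch of a confidence-thresholding membership inference attack. Everything in it (the toy 1-nearest-neighbor "model", the names, the threshold) is an illustrative assumption, not material from the talk itself: the point is only that an overfit model is suspiciously confident on its training points, and an attacker can threshold that confidence to guess whether a record was in the training set.

```python
import random

def train_1nn(train):
    """Return a toy 1-NN 'model' that memorizes its training set.

    Extreme overfitting on purpose: confidence is highest (1.0) exactly
    on the memorized training points, which is what the attack exploits.
    """
    def confidence(x):
        # Confidence decays with distance to the nearest training point.
        d = min(abs(x - t) for t in train)
        return 1.0 / (1.0 + d)
    return confidence

def membership_inference(confidence, x, threshold=0.99):
    """Guess 'member' when the model is suspiciously confident on x."""
    return confidence(x) >= threshold

random.seed(0)
members = [random.uniform(0, 100) for _ in range(20)]      # training records
non_members = [random.uniform(0, 100) for _ in range(20)]  # never seen by the model
model = train_1nn(members)

hits = sum(membership_inference(model, x) for x in members)
false_alarms = sum(membership_inference(model, x) for x in non_members)
print(f"flagged {hits}/20 members, {false_alarms}/20 non-members")
```

Real membership inference attacks work the same way against the confidence scores of deployed classifiers, which is why mitigations such as regularization, confidence masking, and differential privacy come up when securing ML models.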