AI systems can be targets of malicious actors. This risk is known as what?

Multiple Choice

AI systems can be targets of malicious actors. This risk is known as what?

Explanation:
AI systems face the risk of cyberattacks by malicious actors. This category, cyberattacks on AI, covers attempts to manipulate, disrupt, or steal from AI systems—such as poisoning training data to change behavior, injecting prompts to extract or corrupt outputs, tampering with model parameters, or introducing backdoors. It specifically describes intentional threats aimed at compromising AI integrity, confidentiality, or availability.

Training data issues refer to problems inherent in the data itself (quality, bias, privacy) rather than deliberate attacks on the system. Data governance deals with policies, roles, and controls for managing data, not the threat landscape. Leading indicators are metrics used to monitor risk or performance, not threats to AI systems.

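To make the data-poisoning attack mentioned above concrete, here is a minimal sketch (not part of the ISACA material) using a toy nearest-centroid classifier. All names and data in it are invented for illustration: an attacker who can inject mislabeled samples into the training set shifts the "benign" class centroid toward the malicious region, so a sample the clean model flags as malicious slips through as benign.

```python
# Toy illustration of training-data poisoning: an attacker injects
# mislabeled samples so the trained model misclassifies an input.
# The classifier, features, and labels here are all hypothetical.

def centroid(points):
    # Mean of a list of equal-length feature tuples
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    # data: list of (features, label); model = one centroid per class
    classes = {}
    for x, y in data:
        classes.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in classes.items()}

def predict(model, x):
    # Assign x to the class with the nearest centroid (squared distance)
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist(model[y], x))

# Clean training set: two well-separated classes
clean = [((0.0, 0.0), "benign"), ((0.1, 0.2), "benign"),
         ((1.0, 1.0), "malicious"), ((0.9, 1.1), "malicious")]

# Poison: attacker adds malicious-looking points mislabeled "benign",
# dragging the benign centroid toward the malicious region
poison = [((1.0, 1.0), "benign")] * 4

sample = (0.8, 0.8)
print(predict(train(clean), sample))           # clean model: "malicious"
print(predict(train(clean + poison), sample))  # poisoned model: "benign"
```

The attacker never touches the model itself, only the training data, which is why the explanation above distinguishes this deliberate attack from accidental training-data quality issues.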
