Which technique is foundational for training large language models through predicting masked tokens?


Multiple Choice

Which technique is foundational for training large language models through predicting masked tokens?

Explanation:
The technique, commonly called masked language modeling, teaches the model by hiding parts of the input and training it to predict the missing pieces. This masking objective forces the model to use the surrounding context to infer each hidden token, which builds strong representations of language that capture syntax and semantics. It is a form of self-supervised learning because the supervision comes from the text itself, not from external labels. While self-supervised learning covers many possible objectives, the specific method described—masking tokens and predicting them—is the direct training signal used here. Supervised learning relies on labeled data for each target, which isn't required in this setup, and Generative AI is a broad category that includes many techniques beyond this masking approach.
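A minimal sketch of how the masking objective turns raw text into training pairs, assuming BERT-style conventions (a `[MASK]` token id, a fixed masking probability, and an ignore value so the loss is computed only at masked positions). The ids, rate, and helper name are illustrative, not taken from any specific library:

```python
import random

MASK_ID = 103    # hypothetical id for a [MASK] token (BERT happens to use 103)
IGNORE = -100    # label value skipped by the loss at unmasked positions

def mask_tokens(token_ids, mask_prob=0.15, seed=0):
    """Hide a fraction of tokens; the model must predict the originals.

    Returns (inputs, labels): inputs have some ids replaced by MASK_ID;
    labels hold the original id at masked positions and IGNORE elsewhere,
    so the training loss is computed only where tokens were hidden.
    """
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in token_ids:
        if rng.random() < mask_prob:
            inputs.append(MASK_ID)
            labels.append(tok)      # supervision comes from the text itself
        else:
            inputs.append(tok)
            labels.append(IGNORE)   # no loss at visible positions
    return inputs, labels

# Example: mask a short sequence of (made-up) token ids.
tokens = [7592, 1010, 2088, 999, 2023, 2003, 1037, 7953]
inputs, labels = mask_tokens(tokens, mask_prob=0.3, seed=42)
```

Note that no human annotation is involved: the "labels" are just the original tokens, which is exactly what makes this self-supervised rather than supervised.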

