Approximate domain unlearning: Enabling safer and more controllable vision-language models

Vision-language models (VLMs) are a core technology of modern artificial intelligence (AI), capable of handling images across different forms of visual expression, such as photographs, illustrations, and sketches.