Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme Detection

Abstract

Hateful meme detection is a challenging multimodal task that requires not only comprehension of both vision and language but also of cross-modal interactions. Recent studies have tried fine-tuning pre-trained vision-language models (PVLMs) for this task. However, as model sizes grow, it becomes important to leverage powerful PVLMs more efficiently rather than fine-tuning them. Recently, researchers have attempted to convert meme images into textual captions and prompt language models for predictions. This approach performs well but suffers from non-informative image captions. Considering the two factors above, we propose a probing-based captioning approach that leverages PVLMs in a zero-shot visual question answering (VQA) manner. Specifically, we prompt a frozen PVLM with questions related to hateful content and use the answers as image captions (which we call Pro-Cap), so that the captions contain information critical for hateful content detection. The strong performance of models with Pro-Cap on three benchmarks validates the effectiveness and generalization of the proposed method.
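The sketch below illustrates the probing-based captioning idea described above: a frozen PVLM is queried in zero-shot VQA style with several hateful-content-related questions, and the answers are concatenated into a Pro-Cap caption. It is a minimal illustration, not the paper's reference implementation; BLIP-2 via Hugging Face `transformers` stands in for the frozen PVLM, and the probing questions are hypothetical examples rather than the exact set used in the paper.

```python
# Minimal sketch of probing-based captioning (Pro-Cap) with a frozen PVLM.
# Assumptions: BLIP-2 is used as the frozen vision-language model, and the
# probing questions below are illustrative placeholders.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b").to(device)
model.eval()  # the PVLM stays frozen; it is only prompted, never fine-tuned

# Hateful-content-related probing questions (hypothetical examples).
PROBING_QUESTIONS = [
    "What is shown in the image?",
    "What race or ethnicity do the people in the image appear to be?",
    "What gender do the people in the image appear to be?",
    "What religion is suggested by the image?",
    "Is any disability shown in the image?",
]

def pro_cap(image: Image.Image) -> str:
    """Ask the frozen PVLM each probing question and join the answers into a caption."""
    answers = []
    for question in PROBING_QUESTIONS:
        prompt = f"Question: {question} Answer:"
        inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=20)
        answers.append(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
    return " ".join(answers)

# Example usage: the resulting caption would then be combined with the meme text
# and fed to a text-only classifier for hateful/non-hateful prediction.
# caption = pro_cap(Image.open("meme.png").convert("RGB"))
```

In this setup only the downstream text classifier would need training; the captioning step reuses the PVLM as-is, which is the efficiency argument made in the abstract.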

Type
Publication
In Proceedings of the 31st ACM International Conference on Multimedia