PromptMTopic: Unsupervised Multimodal Topic Modeling of Memes using Large Language Models

Abstract

The proliferation of social media has given rise to a new form of communication: memes. Memes are multimodal, typically combining text and visual elements to convey meaning, humor, and cultural significance. While meme analysis has been an active area of research, little work has been done on unsupervised multimodal topic modeling of memes, which is important for content moderation, social media analysis, and cultural studies. We propose PromptMTopic, a novel multimodal prompt-based framework that learns topics from both the text and visual modalities by leveraging the language modeling capabilities of large language models. Our framework extracts and clusters topics from memes while accounting for the semantic interaction between the two modalities. Extensive experiments on three real-world meme datasets demonstrate that PromptMTopic outperforms state-of-the-art topic modeling baselines in learning descriptive topics from memes. Additionally, our qualitative analysis shows that PromptMTopic identifies meaningful and culturally relevant topics from memes. Our work contributes to the understanding of the topics and themes of memes, a crucial form of communication in today’s society.
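To make the prompt-based idea concrete, the sketch below shows one hypothetical way an LLM could be prompted to elicit topic labels for a single meme. This is not the paper's actual method or prompts: the function `extract_meme_topics`, the prompt wording, and the assumption that the meme's overlaid text (e.g., from OCR) and a visual caption are already extracted are all illustrative, and `llm` stands in for any text-completion callable.

```python
# Illustrative sketch only (not the authors' implementation): per-meme topic
# elicitation via an LLM prompt. Assumes the meme's overlaid text and a
# caption of its visual content have already been extracted upstream.

def extract_meme_topics(ocr_text: str, image_caption: str, llm) -> list[str]:
    """Ask an LLM for short topic labels covering a meme's text and visuals."""
    prompt = (
        "A meme contains the following overlaid text and visual content.\n"
        f"Overlaid text: {ocr_text}\n"
        f"Visual description: {image_caption}\n"
        "List up to three short topic labels, one per line, describing "
        "what this meme is about."
    )
    response = llm(prompt)
    # Treat each non-empty line of the completion as one topic label.
    return [line.strip() for line in response.splitlines() if line.strip()]


if __name__ == "__main__":
    # Stub model so the sketch runs without committing to a specific LLM API.
    fake_llm = lambda p: "online exams\nstudent life"
    print(extract_meme_topics("me during finals week",
                              "a tired cat sitting at a laptop",
                              fake_llm))
```

Hiding the model behind a plain callable keeps the sketch self-contained and runnable; in practice any chat or completion endpoint could be dropped in, with the resulting per-meme labels then clustered into corpus-level topics.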

Type
Conference paper

Publication
In Proceedings of the 31st ACM International Conference on Multimedia