# large-vision-language-models

48 public repositories match this topic.

A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates surveys, blog posts, and research papers that explore how LMMs represent, transform, and align multimodal information internally.

  • Updated Jun 19, 2025
