
Mitigating Hallucinations in Large Vision-Language Models: A Latent Space Steering Approach
Hallucination remains a significant challenge in deploying Large Vision-Language Models (LVLMs), as these models often generate text misaligned with their visual inputs. Unlike hallucination in LLMs, which arises from linguistic inconsistencies, hallucination in LVLMs stems from cross-modal discrepancies, […]
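
To make the titular idea concrete, below is a minimal sketch of inference-time latent space steering, under the assumption that "steering" here means shifting a layer's hidden states along a precomputed direction during generation. The helper `register_steering_hook`, the toy transformer block, and the way the direction vector is obtained are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: steer a transformer layer's hidden states at inference
# by adding a fixed direction vector via a forward hook (PyTorch).
import torch
import torch.nn as nn


def register_steering_hook(layer: nn.Module, direction: torch.Tensor, alpha: float = 1.0):
    """Shift `layer`'s output hidden states by alpha * direction at inference.

    `direction` is assumed to be a unit vector in hidden-state space, e.g. a
    mean difference between hidden states under clean vs. perturbed visual
    inputs (one hypothetical way to derive it; the source does not specify).
    """
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * direction.to(hidden.device, hidden.dtype)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered

    return layer.register_forward_hook(hook)


if __name__ == "__main__":
    # Toy stand-in for one LVLM decoder layer; shapes are illustrative.
    d_model = 64
    block = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)

    direction = torch.randn(d_model)
    direction = direction / direction.norm()  # normalize to a unit direction

    handle = register_steering_hook(block, direction, alpha=2.0)
    x = torch.randn(1, 10, d_model)  # (batch, sequence, hidden)
    with torch.no_grad():
        y = block(x)                 # hidden states are shifted by the hook
    handle.remove()                  # detach the hook to restore the layer
    print(y.shape)
```

Because the hook only perturbs activations at inference, this family of interventions requires no fine-tuning of the model weights, which is typically the appeal of steering-based mitigation.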