Multi-modal Large Language Models (MLLMs) have shown remarkable capabilities across a wide range of vision-language tasks. However, due to restricted input resolutions, MLLMs struggle to precisely understand and localize visual details in high-resolution images, particularly when dealing with extra-small objects embedded in cluttered contexts. To address this issue, we propose FINERS, a two-stage MLLM-based reinforcement learning framework for jointly reasoning about and segmenting extremely small objects in high-resolution scenes. FINERS adopts a coarse-to-fine pipeline comprising Global Semantic Exploration (GSE) and Localized Perceptual Refinement (LPR). Specifically, GSE performs instruction-guided reasoning to generate a textual response and a coarse target region, while LPR refines this region to produce an accurate bounding box and segmentation mask. To couple the two stages, we introduce a locate-informed retrospective reward, in which LPR's outputs are used to optimize GSE for more robust coarse-region exploration. Additionally, we present FINERS-4k, a new dataset for evaluating MLLMs on attribute-level reasoning and pixel-level segmentation of subtle, small-scale targets in complex high-resolution scenes. Experimental results on FINERS-4k and public datasets demonstrate that our method consistently outperforms state-of-the-art MLLM-based approaches on both instruction-guided segmentation and visual reasoning tasks.
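To make the retrospective coupling concrete, here is a minimal sketch of one way a locate-informed retrospective reward could be computed, assuming the reward for GSE is the IoU between the box LPR recovers from the coarse region and the ground-truth box, gated on the coarse region actually overlapping the target; the function names and the exact reward form are illustrative assumptions, not the paper's implementation.

def box_iou(a, b):
    # Intersection-over-union of two boxes in (x0, y0, x1, y1) format.
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def retrospective_reward(coarse_box, refined_box, gt_box):
    # Hypothetical locate-informed retrospective reward: GSE's coarse
    # region is scored by the quality of the box the already-optimized
    # LPR recovers from it, so GSE learns to propose regions that LPR
    # can refine well. The zero-gate on overlap is an assumption.
    if box_iou(coarse_box, gt_box) == 0.0:
        return 0.0  # coarse region missed the target entirely
    return box_iou(refined_box, gt_box)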
Visual results on Open-ended VQA (OVQA), Multiple-choice VQA (MVQA), and Instruction-guided Segmentation (IS). The images are sampled from the FINERS-4k test set.
(a) The innermost ring shows three instruction types. The middle ring presents the mask size distribution within each type. The outermost ring breaks each type down into four attribute categories: color, shape, position, and others. (b) Six representative examples, each illustrating one of the attribute categories (color, shape, position, others) across the three instruction types shown in (a).
Results are generated using FINERS.
(a) During training, we design task-specific reward functions for GSE and LPR: LPR is optimized first and then used to form a retrospective reward that improves GSE's coarse-region accuracy. (b) During inference, GSE takes a high-resolution image and a user instruction as input and produces an answer together with a coarse region containing the referred objects. LPR then processes the instruction and the coarse region to generate the object box, and SAM2 converts the box into the final mask. (c) To unify VQA and segmentation in a single MLLM, we design a multi-task reward pool and assign its items to supervise GSE and LPR.
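The inference path in (b) can be sketched as follows; gse_infer and lpr_infer are hypothetical wrappers around the two MLLM stages, and the SAM2 call follows the public sam2 package (the exact predictor API should be checked against the installed version).

import numpy as np
from sam2.sam2_image_predictor import SAM2ImagePredictor

def finers_inference(image, instruction, gse_infer, lpr_infer, predictor):
    # Stage 1 (GSE): reason over the full high-resolution image and return
    # a textual answer plus a coarse region likely to contain the target.
    answer, coarse_box = gse_infer(image, instruction)

    # Stage 2 (LPR): zoom into the coarse region and regress a tight box
    # in the crop's local coordinates.
    x0, y0, x1, y1 = map(int, coarse_box)
    local_box = lpr_infer(image[y0:y1, x0:x1], instruction)

    # Map the local box back to full-image coordinates.
    box = np.array([local_box[0] + x0, local_box[1] + y0,
                    local_box[2] + x0, local_box[3] + y0])

    # Prompt SAM2 with the refined box to obtain the final mask.
    predictor.set_image(image)
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    return answer, box, masks[0]

A predictor can be built with, e.g., SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large"); the checkpoint name here is an assumption based on the publicly released SAM2 models.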
@article{zhang2025,
  title={Fine-grained Reasoning and Segmentation of Small Objects with Reinforcement Learning},
  author={Zhang, Lu and Yu, Jiazuo and Xiong, Haomiao and Hu, Ping and Zhuge, Yunzhi and Lu, Huchuan and He, You},
  year={2025}
}