Visual explanation methods have proven effective for interpreting the outputs of object detectors by highlighting the image regions most relevant to each model prediction. However, existing approaches largely overlook inter-object relationships, in particular the relative importance of each pixel across different objects, a property we refer to as Object Discrimination (OD). In this paper, we propose difference maps, a novel visual explanation technique designed to enhance the interpretability of object detectors with respect to OD. Serving as a complement to existing instance-specific heat maps, difference maps strengthen the ability of such heat maps to isolate the impact of features specific to an individual object on the model output. Our qualitative and quantitative evaluations show that the proposed difference maps effectively distinguish features specific to the target object, capturing the relative importance of each pixel across different predictions within the same scene. The method is applicable to a wide range of object detectors, including one-stage, two-stage, and transformer-based architectures. Furthermore, it enhances existing heat-map-based visual explanation methods by sharpening their focus on the detected object. These results demonstrate the utility of our approach in improving model transparency and interpretability across diverse detection architectures and explanation techniques.
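To convey the intuition behind a difference map, the sketch below shows one plausible way such a map could be formed from per-instance heat maps: the importance attributed to other detected objects is subtracted from the target object's heat map, leaving only the evidence specific to the target. This is a minimal, hypothetical illustration; the function name `difference_map`, the max-aggregation of the other objects' maps, and the subtract-then-clip formulation are assumptions for exposition, not the exact construction defined in the paper.

```python
import numpy as np

def difference_map(target_heatmap: np.ndarray,
                   other_heatmaps: list[np.ndarray]) -> np.ndarray:
    """Hypothetical sketch: keep pixels important to the target object
    but not to the other detected objects in the same scene.

    Assumes each heat map is a 2-D array normalized to [0, 1].
    """
    if not other_heatmaps:
        return target_heatmap.copy()
    # Aggregate, per pixel, the importance attributed to all other objects.
    others = np.max(np.stack(other_heatmaps), axis=0)
    # Retain only the importance specific to the target object.
    return np.clip(target_heatmap - others, 0.0, None)

# Usage: per-instance heat maps (e.g., produced by an existing
# heat-map-based explanation method for each detection) in one image.
rng = np.random.default_rng(0)
maps = [rng.random((64, 64)) for _ in range(3)]
dmap = difference_map(maps[0], maps[1:])
print(dmap.shape, dmap.min(), dmap.max())
```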