Abstract: Object detection models are markedly vulnerable to adversarial patches, posing serious safety risks to applications such as autonomous driving and security surveillance. Although transfer-based black-box attacks have made progress, they often suffer from poor cross-model transferability and uneven suppression across multi-scale detection heads. To address these issues, we propose MSBR for adversarial patch attacks. During patch training, MSBR explicitly regularizes the variance of confidence outputs across the different detection scales, thereby enforcing consistent suppression of targets at every scale, mitigating scale-wise imbalance, and substantially improving cross-model transferability. Experiments on several mainstream detectors show that our method maintains strong attack success rates while outperforming representative approaches (e.g., T-SEA) in black-box transfer performance, demonstrating the practical effectiveness of MSBR. This work provides a new perspective for designing adversarial patch attacks against complex multi-scale detection architectures.
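The abstract describes the scale-variance regularizer only in words. As a rough illustration of the idea, the PyTorch sketch below shows one plausible way such a term could be computed; the function name, the weighting scheme, and the assumption that per-scale confidence outputs are available as separate tensors are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def scale_variance_loss(scale_confidences):
    """Hypothetical sketch of a multi-scale balance regularizer.

    scale_confidences: list of tensors, each holding the objectness/confidence
    scores produced by one detection head (e.g., the three scales of a
    YOLO-style detector) for the attacked targets.
    """
    # Mean suppression level achieved at each detection scale.
    per_scale_means = torch.stack([c.mean() for c in scale_confidences])
    # Standard suppression term: push confidences down at all scales.
    suppression = per_scale_means.mean()
    # Balance term: penalize variance across scales so that no single
    # detection head remains under-attacked.
    balance = per_scale_means.var(unbiased=False)
    return suppression, balance


# Illustrative patch-training step (names and weighting are assumptions):
# conf_s, conf_m, conf_l = detector_scale_outputs(image_with_patch)
# suppression, balance = scale_variance_loss([conf_s, conf_m, conf_l])
# loss = suppression + lambda_balance * balance
# loss.backward()
```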