Laser & Optoelectronics Progress, Vol. 59, Issue 18, 1815015 (2022)
Object Detection Based on Semantic Sampling and Localization Refinement
Training-sample selection and localization refinement are two key techniques in object detection. To address the unreasonable distribution of positive and negative samples and to obtain better classification features and localizations, this study presents an accurate and efficient single-stage anchor-free object detection algorithm. The algorithm consists of three modules: semantic-based positioning, adaptive feature enhancement, and efficient localization refinement. First, the positioning module uses a semantic-based sampling method that distinguishes foreground from background regions according to the semantic characteristics of the object, selects positive and negative samples accordingly, and preferentially takes foreground regions rich in semantic information as positive samples. Second, the feature-enhancement module uses the object semantic probability map and bounding-box offsets to adjust the classification features pixel by pixel, increasing the proportion of foreground features, and adaptively adjusts the feature-encoding range according to object size. Finally, localizations are refined in parallel, and the classification loss is computed for localizations both before and after refinement, which improves localization performance at almost no extra cost while ensuring feature alignment and consistency. On the MS COCO dataset, the proposed algorithm achieves an average precision of 42.8% with a single-image detection time of 78 ms, striking a balance between detection accuracy and speed.
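The semantic-based sampling idea described above can be illustrated with a minimal sketch: positives are taken from locations with high semantic probability (foreground rich in semantic information), negatives from low-probability background. The thresholds, the top-k cap, and the function name below are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def semantic_sample(prob_map, pos_thresh=0.6, neg_thresh=0.3, top_k=64):
    """Pick positive/negative sample indices from a semantic probability map.

    prob_map: 2-D array of per-location foreground semantic probabilities.
    Thresholds and top_k are hypothetical hyperparameters for illustration.
    """
    flat = prob_map.ravel()
    # Sort locations by semantic probability, highest first
    order = np.argsort(flat)[::-1]
    # Positives: up to top_k locations whose probability exceeds pos_thresh,
    # i.e. foreground regions with the largest amount of semantic information
    pos = order[flat[order] > pos_thresh][:top_k]
    # Negatives: clearly-background locations below neg_thresh
    neg = np.flatnonzero(flat < neg_thresh)
    return pos, neg
```

Locations falling between the two thresholds are simply ignored, which mirrors the common practice of excluding ambiguous regions from the sampler.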
Yu Li, Shaoyan Gai, Feipeng Da, Ru Hong. Object Detection Based on Semantic Sampling and Localization Refinement[J]. Laser & Optoelectronics Progress, 2022, 59(18): 1815015
Category: Machine Vision
Received: Jul. 23, 2021
Accepted: Aug. 31, 2021
Published Online: Sep. 5, 2022
Author email: Da Feipeng (qxxymm@163.com)