| Category | Sub-category | Authors | Year/Source | Main idea |
| --- | --- | --- | --- | --- |
| Spatial domain method | Pixel-based method | Raman et al. [6] | 2009/EUROGRAPHICS | Bilateral filter based compositing |
| | | Li et al. [7] | 2012/IEEE | Fusion with a median filter and a recursive filter |
| | | Lee et al. [8] | 2018/IEEE | Adaptive weighting that reflects relative pixel intensity and global gradients |
| | | Xu et al. [9] | 2020/Optik | Using patterns of oriented edge magnitudes to extract local contrast |
| | | Ulucan et al. [10] | 2021/Signal Processing | Using linear embeddings and watershed masking for fusion |
| | | Li et al. [11] | 2021/China Sciencepaper | Adaptive weights constructed from pixel intensity and global gradients |
| | | Kinoshita et al. [12] | 2019/IEEE | Segmentation based on the luminance distribution |
| | | Wang et al. [13] | 2021/Visual Computer | Determining the region of enhancement (RoE) for each image |
| | Patch-based method | Goshtasby et al. [14] | 2005/Image and Vision Computing | First multi-exposure fusion using image blocks |
| | | Ma et al. [15] | 2015/IEEE | Decomposing image patches into signal strength, signal structure, and mean intensity |
| | | Huang et al. [16] | 2018/IEEE | Three weight measures built from signal structure and mean intensity |
| | | Li et al. [17] | 2020/Journal of Computer Applications | Local variance, local saliency features, and local visibility build the weight map |
| | | Li et al. [18] | 2020/IEEE | Non-normalized operations |
| | | Li et al. [19] | 2021/IEEE | Incorporating edge-preserving factors into the mean intensity |
| | | Wang et al. [20] | 2020/IEEE | Using a superpixel segmentation approach |
| | Optimization-based method | Shen et al. [21] | 2011/IEEE | A generalized random walk framework |
| | | Li et al. [22] | 2012/IEEE | Using a quadratic optimization method |
| | | Liu et al. [24] | 2019/IEEE | Using an optimal weighted multi-exposure fusion mechanism |
| | | Ma et al. [25] | 2018/IEEE | Describing a gradient ascent-based algorithm |
| Transform domain method | Multi-scale decomposition-based method | Burt et al. [26] | 1993/IEEE | First to apply a gradient pyramid model to multi-exposure fusion |
| | | Mertens et al. [27] | 2007/IEEE | Gaussian pyramid of weights applied to the Laplacian pyramid (see the sketch after this table) |
| | | Shen et al. [28] | 2014/IEEE | Building weight maps using local weights, global weights, and significance weights |
| | | Li et al. [29] | 2013/IEEE | Weighted guided filtering on Gaussian pyramids |
| | | Yan et al. [30] | 2019/Pattern Recognition Letters | A simulated exposure model for generating multiple images |
| | | Li et al. [31] | 2017/IEEE | Using guided filtering to decompose the image into a base layer and a detail layer |
| | | Singh et al. [32] | 2014/Scientific World Journal | Adding details to the Laplacian pyramid |
| | | Wang et al. [33] | 2020/IEEE | Laplacian pyramid in the YUV color space |
| | | Kou et al. [34] | 2018/Journal of Visual Communication and Image Representation | Edge-preserving smoothing pyramid |
| | | Yang et al. [35] | 2018/IEEE | Generating a moderately exposed simulated mapping image |
| | | Tang et al. [36] | 2022/Laser & Optoelectronics Progress | Proposing a high dynamic range imaging method based on fusion in the YCbCr space |
| | | Liu et al. [37] | 2022/Laser & Optoelectronics Progress | Proposing a multi-exposure image fusion method based on feature weights of the full image sequence |
| | | Wu et al. [38] | 2021/Laser & Optoelectronics Progress | Proposing a multi-exposure image fusion method based on improved exposure evaluation and a double pyramid |
| | Gradient-based method | Zhang et al. [39] | 2012/IEEE | Using a two-dimensional Gaussian filter to calculate gradient magnitude and direction |
| | | Paul et al. [40] | 2016/Journal of Circuits, Systems and Computers | Adding the chroma gradient to the luminance gradient |
| | | Liu et al. [41] | 2020/IET Image Processing | Computing luminance levels in the gradient domain |
| | | Gu et al. [42] | 2012/Journal of Visual Communication & Image Representation | Iteratively modifying the gradient field with two rounds of average filtering and nonlinear multi-scale compression |
| | Sparse representation-based method | Wang et al. [43] | 2014/Neurocomputing | A novel recognition framework based on discriminative sparse representation |
| | | Shao et al. [44] | 2018/Applied Sciences | A halo-free multi-exposure fusion method based on sparse representation of gradient features |
| | | Yang et al. [45] | 2020/IEEE | An exposure fusion method with sparse decomposition and a sparsity exposure dictionary |
| Deep learning method | CNN | Kalantari et al. [46] | 2017/ACM Transactions on Graphics | The first learning-based technique to produce an HDR image |
| | | Li et al. [47] | 2018/IEEE | Using a CNN to extract the features of each image |
| | | Liu et al. [48] | 2019/Information Fusion | Proposing FusionNet |
| | | Chen et al. [49] | 2019/Journal of Visual Communication and Image Representation | Constructing a dual-network cascade model |
| | | Cai et al. [50] | 2018/IEEE | A large-scale multi-exposure image dataset |
| | | Prabhakar et al. [51] | 2017/IEEE | The first unsupervised deep learning method (DeepFuse) |
| | | Qi et al. [52] | 2021/Information Fusion | An unsupervised deep learning approach based on quantitative evaluation |
| | | Han et al. [53] | 2022/Information Fusion | Proposing a depth-aware enhancement network |
| | | Zhang et al. [54] | 2020/Information Fusion | Proposing a general image fusion framework based on convolutional neural networks |
| | | Ma et al. [55] | 2019/IEEE | A multi-exposure fusion network based on deep guided learning |
| | | Xu et al. [56] | 2022/IEEE | An unsupervised end-to-end image fusion network (U2Fusion) |
| | | Gao et al. [57] | 2021/Electronics | Applied to traffic sign recognition |
| | GAN | Xu et al. [58] | 2020/IEEE | The first GAN-based multi-exposure image fusion network (MEF-GAN) |
| | | Yang et al. [59] | 2021/Neural Computing and Applications | A novel GAN-based multi-exposure image fusion method (GANFuse) |
| | | Le et al. [60] | 2022/Information Fusion | A generative adversarial network based on continual learning (UIFGAN) |
| | | Zhou et al. [61] | 2022/Information Fusion | Proposing a GAN-based image fusion method (GIDGAN) |
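
To make the multi-scale decomposition family more concrete, the sketch below illustrates pyramid-based exposure fusion in the spirit of Mertens et al. [27]: per-pixel quality weights are blended at every level of a Laplacian pyramid using a Gaussian pyramid of the weights. This is a minimal illustration built on OpenCV and NumPy, not the authors' implementation; the function names, the contrast/saturation/well-exposedness measures, and all parameter values (`sigma`, `levels`) are assumptions chosen for readability.

```python
# Minimal exposure-fusion sketch (Mertens-style pyramid blending); illustrative only.
import cv2
import numpy as np

def quality_weights(img, sigma=0.2):
    """Per-pixel weights from contrast, saturation, and well-exposedness (assumed measures)."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))          # local contrast
    saturation = img.astype(np.float32).std(axis=2)              # spread across channels
    well_exposed = np.prod(                                       # closeness to mid-gray
        np.exp(-((img.astype(np.float32) / 255.0 - 0.5) ** 2) / (2 * sigma ** 2)), axis=2)
    return contrast * saturation * well_exposed + 1e-12           # epsilon avoids zero weights

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels - 1):
        h, w = gp[i].shape[:2]
        lp.append(gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(w, h)))   # band-pass detail at each level
    lp.append(gp[-1])                                              # coarsest residual
    return lp

def exposure_fusion(images, levels=5):
    """Fuse an aligned bracketed exposure stack into a single well-exposed image."""
    imgs = [im.astype(np.float32) for im in images]
    weights = np.stack([quality_weights(im) for im in images], axis=0)
    weights /= weights.sum(axis=0, keepdims=True)                  # normalize across exposures
    fused_pyr = None
    for im, w in zip(imgs, weights):
        wp = gaussian_pyramid(w.astype(np.float32), levels)        # Gaussian pyramid of weights
        lp = laplacian_pyramid(im, levels)                         # Laplacian pyramid of image
        blended = [l * g[..., None] for l, g in zip(lp, wp)]
        fused_pyr = blended if fused_pyr is None else [f + b for f, b in zip(fused_pyr, blended)]
    out = fused_pyr[-1]                                            # collapse the fused pyramid
    for level in reversed(fused_pyr[:-1]):
        h, w = level.shape[:2]
        out = cv2.pyrUp(out, dstsize=(w, h)) + level
    return np.clip(out, 0, 255).astype(np.uint8)
```

Given a list of aligned bracketed shots, e.g. `exposure_fusion([cv2.imread(p) for p in paths])`, this collapses the exposure stack into one low dynamic range result; blending the weights at multiple scales is what suppresses the seams and halos that naive per-pixel weighted averaging tends to produce.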