1 Introduction
In the textile industry, the retrieval and classification of fabric images are still performed manually, which is highly subjective and costly. Moreover, manual search cannot efficiently and accurately retrieve identical or similar fabric images[1-2]. Therefore, this paper aims to develop a fast and accurate fabric image retrieval algorithm.
Many researchers have studied fabric image retrieval[3-5]. The content-based feature descriptors commonly used in fabric image retrieval include the Discrete Cosine Transform (DCT)[6-7], color moments[8], the gray-level co-occurrence matrix[9], and Local Binary Patterns (LBP)[10].
A single image feature cannot accurately describe all the information in a fabric image, which results in low retrieval accuracy. Jamil et al. combined region-growing and edge-detection segmentation methods to segment the geometric patterns in fabric images[11]. Fu et al.[12] used the K-means algorithm combined with color histograms to identify and extract clothing contours, which is simple and effective but sensitive to noise and outliers. Fabric images are complex, and this complexity, together with problems caused by improper photography, poses challenges for image retrieval. Zernike moments[13-14] are suitable for feature extraction in fabric image retrieval due to their rotation, scale and translation invariance and low noise sensitivity. However, the calculation of Zernike moments is complex, and the higher-order moments are more sensitive to noise. Therefore, we introduce the wavelet transform[15-16], which effectively redistributes the image energy without damaging the original image information. After one level of transformation, one low-frequency component and three high-frequency components are obtained. The low-frequency component contains most of the information, while the high-frequency components are largely redundant noise. Taking the low-frequency component as the sub-image, the sub-image is only $ 1/4$ the size of the original image, which reduces the computational complexity of Zernike moment feature extraction.
To ensure retrieval accuracy, extracting only the Zernike moments under a wavelet transform as the fabric image retrieval feature is not sufficient. Given the self-similarity of fabric images, fractal coding[17-18] is attractive for its high compression ratio, but its computational requirements are high and its encoding and decoding processes are time-consuming. The smoothness of the wavelet transform can effectively improve the quality of the reconstructed image. After an image is decomposed by a multi-level wavelet transform, the wavelet sub-images at different resolutions in the same direction show obvious similarity, which can be exploited in fractal image compression. This shortens the encoding and decoding time and improves the quality of the decoded images. Therefore, fractal coding is performed on the wavelet-transformed low-frequency sub-image to obtain retrieval features.
Based on the advantages and disadvantages of Zernike moments and fractal coding analyzed above[19-21], a fabric image retrieval algorithm based on fractal coding and Zernike moments under a wavelet transform is proposed. First, the low-frequency sub-images are obtained by the wavelet transform of the query image, and the coding parameters are obtained by fractal encoding. Then, the Zernike moments of the low-frequency sub-images are calculated, and the two are combined as the retrieval features. The experimental results show that the proposed algorithm has clear advantages in both retrieval accuracy and speed.
The rest of this paper is organized as follows. In Section 2, we introduce the extraction process of fractal coding and Zernike moments under a wavelet transform. The proposed method and its process are also presented in this section. The experimental results that verify the accuracy and speed of image retrieval are presented in Section 3. Finally, we conclude the paper in Section 4.
2 Methods
2.1 Extraction of fractal parameters under a wavelet transform
For $ f\left( t \right) \in {L^2}\left( R \right) $, the continuous wavelet function is defined as $ {\psi _{a,b}}\left( t \right) $, where $ \psi \left( t \right) $ is the mother wavelet, and the continuous wavelet transform $ {W_f}\left( {a,b} \right) $ is defined as:
$ \begin{split} {W_f}\left( {a,b} \right) =& \int_{{{ - }}\infty }^\infty f(t)\psi _{a,b}^*(t){\rm{d}}t =\\ &\frac{1}{{\sqrt a }} \int_{ - \infty }^\infty {f(t)} {\psi ^*}\left(\frac{{t - b}}{a}\right){\rm{d}}t\quad, \end{split} $ (1)
where $ \psi _{a,b}^*(t) $ is the complex conjugate of $ {\psi _{a,b}}(t) $, $ a > 0 $, and $ b $ and $ t $ are both continuous variables.
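As an illustration of Eq.(1), the sketch below evaluates $ {W_f}(a,b) $ for a single $(a,b)$ pair by direct numerical integration. The real-valued mother wavelet and the sampled grid are illustrative assumptions only; the paper does not specify a mother wavelet here.

```python
import numpy as np

def cwt_point(f, t, a, b):
    """Numerically evaluate Eq.(1) for one (a, b) pair.
    f: sampled signal values on the uniform grid t.
    A real mother wavelet is used, so its complex conjugate is itself."""
    psi = lambda u: np.exp(-u ** 2 / 2) * np.cos(5 * u)  # illustrative wavelet
    dt = t[1] - t[0]
    # (1/sqrt(a)) * integral of f(t) * psi((t - b)/a) dt, via a Riemann sum
    return (f * psi((t - b) / a)).sum() * dt / np.sqrt(a)

t = np.linspace(-10.0, 10.0, 2001)
f = np.sin(2 * np.pi * t)                                # test signal
print(cwt_point(f, t, a=1.0, b=0.0))
```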
We perform the 2D wavelet transform on the images by filtering twice along the two dimensions, and obtain four sets of coefficients $[a_{j + 1},D_{j + 1}^1,D_{j + 1}^2, D_{j + 1}^3]$, where $ a_{j + 1} $ is the low-frequency component, and $ D_{j + 1}^1 $, $ D_{j + 1}^2 $ and $ D_{j + 1}^3 $ represent the horizontal, vertical and diagonal components, respectively. The fast decomposition algorithm of the wavelet transform is as follows:
$ {a_{j + 1}}(m,n) = \sum\limits_{l \in {\text{Z}}} {\sum\limits_{k \in {\text{Z}}} {{h_{l - 2m}}{h_{k - 2n}}} } {a_j}(l,k) \quad,$ (2)
$ D_{j + 1}^1(m,n) = \sum\limits_{l \in {\text{Z}}} {\sum\limits_{k \in {\text{Z}}} {{h_{l - 2m}}} } {g_{k - 2n}}{a_j}(l,k)\quad, $ (3)
$ D_{j + 1}^2(m,n) = \sum\limits_{l \in {\text{Z}}} {\sum\limits_{k \in {\text{Z}}} {{g_{l - 2m}}} } {h_{k - 2n}}{a_j}(l,k) \quad,$ (4)
$ D_{j + 1}^3(m,n) = \sum\limits_{l \in {\text{Z}}} {\sum\limits_{k \in {\text{Z}}} {{g_{l - 2m}}{g_{k - 2n}}{a_j}(l,k)} }\quad, $ (5)
where $ j,m,n \in {\text{Z}} $, $ h $ is the scaling (low-pass) filter coefficient and $ g $ is the wavelet (high-pass) filter coefficient.
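The following sketch implements the two-level decomposition used in the rest of this section with PyWavelets, whose `dwt2` routine realizes Eqs.(2)-(5). The Haar basis is an assumption; the paper does not state which mother wavelet it uses.

```python
import numpy as np
import pywt

def two_level_lowpass(image):
    """Two-level 2D DWT: return the 2nd-level low-frequency sub-image
    and its detail components (D^1, D^2, D^3), cf. Eqs.(2)-(5)."""
    # First level: 512x512 -> four 256x256 coefficient arrays
    a1, (d1_h, d1_v, d1_d) = pywt.dwt2(image, 'haar')
    # Second level, applied to the low-frequency component only:
    # 256x256 -> four 128x128 coefficient arrays
    a2, (d2_h, d2_v, d2_d) = pywt.dwt2(a1, 'haar')
    return a2, (d2_h, d2_v, d2_d)

img = np.random.rand(512, 512)        # stand-in for a 512x512 fabric image
a2, details = two_level_lowpass(img)
print(a2.shape)                       # (128, 128): half the side length per level
```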
The experiment uses 3000 fabric images with a size of 512×512 from a clothing design company as the test images. The extraction steps of fractal coding under a wavelet transform are as follows:
(1) A two-layer wavelet transform is performed on the fabric images. First, four components with a size of 256×256 are obtained through the first-layer wavelet transform: the low-frequency component and the high-frequency components in the horizontal, vertical, and diagonal directions. Then, we perform the second-layer wavelet transform on the low-frequency component and likewise obtain one low-frequency component and three high-frequency components with a size of 128×128. Fractal encoding is performed on this low-frequency component to obtain the encoding parameters for image retrieval, which reduces the computational requirements of the retrieval process.
(2) The low-frequency sub-image is segmented into non-overlapping blocks with a size of 4×4 called range blocks ($ {R_i} $ blocks) and overlapping domain blocks with a size of 8×8 ($ {D_i} $ blocks).
(3) Isometric transforms $q(j)\; (j = 1,2,\cdots,8)$ are applied to the domain blocks, and the results are recorded as $ D_i^{q(j)} $. The transforms $q(j)$ are listed in Table 1.

Table 1. Isometric transforms
j | $q(j)$
1 | Identity transformation
2 | Symmetry about the X axis
3 | Symmetry about the Y axis
4 | Rotate 180 degrees
5 | Symmetry about $y = - x$
6 | Symmetry about $y = x$
7 | Rotate 90 degrees counterclockwise
8 | Rotate 270 degrees counterclockwise
After the j-th isometric transform, $ D_i^{q(j)} $ is denoted as $ \gamma _i^q(D_i^{q(j)}) $. The optimal affine transformation of $ D_i^{q(j)} $ is defined as:
$ {L_i}(D_i^{q(j)}) = {s_i}\gamma _i^q(D_i^{q(j)}) + {o_i}U\quad . $ (6)
The minimum mean square error between $ {L_i}(D_i^{q(j)}) $ and $ {R_i} $ can be expressed as:
$ \mathop {\min }\limits_{j,q} \left\{ \mathop {\min }\limits_{s,o \in R,\left| s \right| < 1} {\left\| {{R_i} - ({s_i}D_i^{q(j)} + {o_i}U)} \right\|^2}\right\} \quad, $ (7)
where $ U $ is a matrix whose elements are all ones and “$ \left\| \; \right\| $” is the 2-norm. For each $ {R_i} $ block, the contrast scaling parameter $ {s_i} $ and the brightness adjustment parameter $ {o_i} $ are calculated by minimizing Eq.(7).
(4) According to Eq.(7) and $ {R_i} \approx {s_i}D_i^{q(j)} + {o_i}U $, we differentiate with respect to $ {s_i} $ and $ {o_i} $ separately to obtain the following parameters:
$ {s_i} = \frac{{\left\langle {{R_i} - {{\bar R}_i}U,D_i^{q(j)} - \bar D_i^{q(j)}U} \right\rangle }}{{{{\left\| {D_i^{q(j)} - \bar D_i^{q(j)}U} \right\|}^2}}} \quad,$ (8)
$ {o_i} = {\bar R_i} - {s_i}\bar D_i^{q(j)} \quad,$ (9)
where $ {s_i} $ is the contrast scaling parameter, $ {\bar R_i} $ is the mean of range block $ {R_i} $, $ \bar D_i^{q(j)} $ is the mean of the transformed domain block $ D_i^{q(j)} $, and $ q(j) $ is the isometric transform. These are all fractal parameters.
During image decoding, each block is determined iteratively by Eq.(10):
$ R_i^k = {s_i} \cdot \gamma _i^q(D_i^{k - 1}) + {o_i} \cdot U,\;\;\;\;D_i^0 = D_i^{q(j)}\quad . $ (10)
The combination of Eq.(8) and Eq.(10) gives the following formula:
$ R_i^k = {s_i}(\gamma _i^q(D_i^{k - 1}) - \overline {\gamma _i^q(D_i^{q(j)})} \cdot U) + \overline {{R_i}} \cdot U,\;\;\;D_i^0 = D_i^{q(j)} . $ (11)
In Eq.(11), $\overline {\gamma _i^q(D_i^{q(j)})} $ is replaced by $\overline {\gamma _i^q(D_i^{k - 1})} $, where $D_i^{k - 1}$ denotes the $ {D_i} $ block at the (k-1)-th iteration and $D_i^{q(j)}$ denotes the isometrically transformed $ {D_i} $. Since $D_i^{k - 1}$ changes during the iterative process, this accelerates the convergence rate. Therefore, the new fractal parameters are $\left\{ {s_i},\overline {{R_i}} ,i,q(j)\right\}$.
(5) We calculate the histograms of the fractal parameters under the wavelet transform, which effectively capture the statistical characteristics of fabric images. A code sketch of steps (2)-(4) and of the decoding iteration is given below.
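The following is a minimal sketch of the encoding and decoding just described. Two simplifications are assumptions of ours, not the paper's scheme: the domain blocks are taken on a stride-8 grid rather than fully overlapping, and an exhaustive search is used; all helper names are our own.

```python
import numpy as np
from itertools import product

# The eight isometric transforms q(j) of Table 1 as numpy operations.
ISOMETRIES = [
    lambda b: b,                 # 1: identity
    lambda b: np.flipud(b),      # 2: symmetry about the X axis
    lambda b: np.fliplr(b),      # 3: symmetry about the Y axis
    lambda b: np.rot90(b, 2),    # 4: rotate 180 degrees
    lambda b: np.rot90(b, 2).T,  # 5: symmetry about y = -x
    lambda b: b.T,               # 6: symmetry about y = x
    lambda b: np.rot90(b, 1),    # 7: rotate 90 degrees counterclockwise
    lambda b: np.rot90(b, 3),    # 8: rotate 270 degrees counterclockwise
]

def contract(block):
    """gamma: shrink an 8x8 domain block to 4x4 by 2x2 averaging."""
    return block.reshape(4, 2, 4, 2).mean(axis=(1, 3))

def encode(ca2, r=4, d=8, stride=8):
    """Return one fractal record (s_i, R_bar_i, i, q(j)) per range block,
    with s_i from Eq.(8) and the error of Eq.(7) minimized by full search."""
    H, W = ca2.shape
    dom_pos = list(product(range(0, H - d + 1, stride),
                           range(0, W - d + 1, stride)))
    # Pre-compute all contracted, isometrically transformed domain blocks.
    candidates = [(i, j, ISOMETRIES[j - 1](contract(ca2[y:y + d, x:x + d])))
                  for i, (y, x) in enumerate(dom_pos)
                  for j in range(1, 9)]
    params = []
    for ry, rx in product(range(0, H, r), range(0, W, r)):
        R = ca2[ry:ry + r, rx:rx + r]
        R_bar = R.mean()
        best = None
        for i, j, D in candidates:
            Dc = D - D.mean()
            denom = (Dc ** 2).sum()
            s = 0.0 if denom == 0 else np.clip(((R - R_bar) * Dc).sum() / denom,
                                               -1.0, 1.0)   # enforce |s| < 1
            err = ((R - R_bar - s * Dc) ** 2).sum()          # error of Eq.(7)
            if best is None or err < best[0]:
                best = (err, s, R_bar, i, j)
        params.append(best[1:])           # (s_i, R_bar_i, i, q(j))
    return params, dom_pos

def decode(params, dom_pos, shape, r=4, d=8, iters=8):
    """Iterative decoding per Eq.(11), starting from a flat image."""
    img = np.zeros(shape)
    rng_pos = list(product(range(0, shape[0], r), range(0, shape[1], r)))
    for _ in range(iters):
        new = np.empty_like(img)
        for (ry, rx), (s, R_bar, i, j) in zip(rng_pos, params):
            dy, dx = dom_pos[i]
            D = ISOMETRIES[j - 1](contract(img[dy:dy + d, dx:dx + d]))
            new[ry:ry + r, rx:rx + r] = s * (D - D.mean()) + R_bar
        img = new
    return img
```

The parameter histograms of step (5) can then be computed with `np.histogram` over the collected $ {s_i} $, $ {\bar R_i} $ and $ q(j) $ values.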
Fractal parameters under a wavelet transform alone are not sufficient as retrieval features. Zernike moments offer rotation, scale and translation invariance and low noise sensitivity, so they can also be used in fabric image retrieval.
2.2 Extraction of Zernike moments under a wavelet transform
Zernike moments were proposed by Teague in 1980[22] and are rotationally invariant[23]. The Zernike moment of an image $ f(x,y) $ is defined as:
$ {Z_{nm}} = \frac{{n + 1}}{{\text{π}} }\int\limits_0^{2{\text{π}} } {\int\limits_0^1 {{R_{nm}}(\rho ){e^{ - jm\theta }}f(\rho ,\theta )\rho {\text{d}}} } \rho {\text{d}}\theta \quad,$ (12)
where j is the imaginary unit, n is a non-negative integer, and m is an integer such that $ n - \left| m \right| $ is even and $ n \geqslant \left| m \right| $. $ f(\rho ,\theta ) $ is $ f(x,y) $ expressed in polar coordinates, with $ \rho = \sqrt {{x^2} + {y^2}} $, $ \theta = \arctan (y/x) $, and $ -1 < x,y < 1 $.
According to Eq.(12), the Zernike moment $ {Z_{nm}} $ is a complex number whose real and imaginary parts correspond to $ {C_{nm}} $ and $ {S_{nm}} $:
$ {C_{nm}} = \frac{{2n + 2}}{{\text{π}} }\int\limits_0^{2{\text{π}} } {\int\limits_0^1 {{R_{nm}}(\rho )\cos (m\theta )f(\rho ,\theta )\rho {\text{d}}} } \rho {\text{d}}\theta \quad,$ (13)
$ {S _{nm}} = \frac{{2n + 2}}{{\text{π}} }\int\limits_0^{2{\text{π}} } {\int\limits_0^1 {{R_{nm}}(\rho )\sin (m\theta )f(\rho ,\theta )\rho {\text{d}}} } \rho {\text{d}}\theta \quad .$ (14)
For a digital image $ f(x,y) $ of size $ N \times N $, $ {C_{nm}} $ and $ {S_{nm}} $ are converted into polar coordinates within the unit circle and discretized as:
$ {C_{nm}} = \frac{{2n + 2}}{{{N^2}}}\sum\limits_{r = 1}^{N/2} {{R_{nm}}(2r/N)} \sum\limits_{\sigma = 1}^{8r} {\cos \frac{{{\text{π}} m\sigma }}{{4r}}} f(r,\sigma ) \quad,$ (15)
$ {S _{nm}} = - \frac{{2n + 2}}{{{N^2}}}\sum\limits_{r = 1}^{N/2} {{R_{nm}}(2r/N)} \sum\limits_{\sigma = 1}^{8r} {\sin \frac{{{\text{π}} m\sigma }}{{4r}}} f(r,\sigma ) \quad,$ (16)
where $ r = \max (\left| x \right|,\left| y \right|) $; when $ r = \left| y \right| $, $ \sigma = 2y - \dfrac{{xy}}{r} $, and when $ r = \left| x \right| $, $ \sigma = \dfrac{{2(r - x)y}}{{\left| y \right|}} + \dfrac{{xy}}{r} $; and $ \rho = 2r/N $, $ \theta = {\text{π}} \sigma /(4r) $.
Therefore, the calculation of Zernike moments under a wavelet transform is as follows:
(1) A two-layer wavelet transform is performed on the 512×512 fabric images, and the 128×128 low-frequency component is taken as the sub-image. Then, the Zernike moments are calculated.
(2) The ranges of $ r $ and $ \theta $ are calculated.
(3) $ {C_{nm}} $ and $ {S_{nm}} $ are calculated, followed by $ \left| {{Z_{nm}}} \right| $. A code sketch follows.
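The sketch below computes $ \left| {{Z_{nm}}} \right| $ for the 128×128 low-frequency sub-image. For simplicity it discretizes Eq.(12) directly on a Cartesian grid mapped into the unit disk, rather than using the per-ring sampling of Eqs.(15)-(16); the radial polynomial $ {R_{nm}} $ uses its standard closed form, which the paper does not restate.

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Standard Zernike radial polynomial R_nm(rho)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + m) // 2 - s)
              * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_magnitude(img, n, m):
    """|Z_nm| of a square image mapped onto the unit disk (cf. Eq.(12))."""
    N = img.shape[0]
    c = (np.arange(N) - (N - 1) / 2) / (N / 2)   # pixel centres in [-1, 1]
    x, y = np.meshgrid(c, c)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0                          # restrict to the unit disk
    kernel = radial_poly(n, m, rho) * np.exp(-1j * m * theta) * inside
    # (n+1)/pi * sum of f * R_nm * e^{-jm*theta} * pixel area in disk coords
    Z = (n + 1) / np.pi * (img * kernel).sum() * (2.0 / N) ** 2
    return abs(Z)

sub = np.random.rand(128, 128)                   # stand-in for the sub-image
print(zernike_magnitude(sub, n=4, m=2))
```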
2.3 Fabric image retrieval algorithm
We use 3000 fabric images with a size of 512×512 from a clothing design company as the test images; some of them are shown in Figure 1.

Figure 1. Part of the fabric images
We take Flower1 in Fig.1 as a sample for the experiment. Flower1 is decomposed by a two-layer wavelet transform. After the first-layer wavelet transform, four components are obtained: the low-frequency approximation coefficient ca1 and the high-frequency horizontal component chd1, vertical component cvd1 and diagonal component cdd1. Then, the low-frequency component ca1 is further decomposed into the approximation coefficient ca2 and the high-frequency horizontal component chd2, vertical component cvd2, and diagonal component cdd2. The experimental results are shown in Fig.2. After two-level wavelet decomposition, the approximation coefficient ca2 is similar to the original image, while the high-frequency components can be regarded as noise.

Figure 2. Results of the two-layer wavelet transform: (a) approximation coefficient ca2; (b) horizontal component chd2; (c) vertical component cvd2; (d) diagonal component cdd2
Fractal coding is performed on the wavelet-transformed low-frequency component ca2 to obtain retrieval features. The fractal coding parameters of all the $ {R_i} $ and $ {D_i} $ blocks are obtained as $ {s_i} $, $ {\bar R_i} $ and $q(j)$. We verify the image quality of fractal decoding under different numbers of iterations. Peak Signal-to-Noise Ratio (PSNR)[24] and Structural Similarity Index Measurement (SSIM)[25] are applied to measure the quality of the decoded images. PSNR is defined as:
$ PSNR = 10{\lg}\left(\frac{{{{255}^2}}}{{\dfrac{1}{{{N^2}}}\displaystyle\sum\limits_{i = 1}^N {\displaystyle\sum\limits_{j = 1}^N {{{({x_{ij}} - {y_{ij}})}^2}} } }}\right)\quad, $ (17)
where $ N $ is the side length of the low-frequency sub-images, and $ {x_{ij}} $ and $ {y_{ij}} $ represent the pixel values at coordinates $(i,j)$ of the low-frequency sub-image and the fractal-decoded image, respectively. SSIM evaluates image quality in terms of brightness, contrast and structure. We define:
$ \begin{gathered} l(x,y) = \frac{{2{\mu _x}{\mu _y}}}{{\mu _x^2 + \mu _y^2}} \\ c(x,y) = \frac{{2{\sigma _x}{\sigma _y}}}{{\sigma _x^2 + \sigma _y^2}} \\ s(x,y) = \frac{{{\sigma _{xy}}}}{{{\sigma _x}{\sigma _y}}} \quad, \\ \end{gathered} $ (18)
where $ x $ and $ y $ represent the low-frequency sub-image and the fractal-decoded image, respectively; $ {\mu _x} $, $ {\mu _y} $ and $ {\sigma _x} $, $ {\sigma _y} $ represent the luminance means and standard deviations of $x$ and $y$; $ {\sigma _{xy}} $ is their covariance; and $ l(x,y) $, $ c(x,y) $, and $ s(x,y) $ represent the brightness, contrast and structure comparison functions, respectively. SSIM is defined as:
$SSIM = l(x,y) \cdot c(x,y) \cdot s(x,y) = \frac{{4{\mu _x}{\mu _y}{\sigma _{xy}}}}{{(\mu _x^2 + \mu _y^2)(\sigma _x^2 + \sigma _y^2)}}.$ (19)
We can use PSNR and SSIM to objectively evaluate image quality. The greater the value, the better the image quality.
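The sketch below is a direct transcription of Eqs.(17)-(19). Note that, exactly as written above, this global single-window SSIM omits the stability constants used by common library implementations.

```python
import numpy as np

def psnr(x, y):
    """Eq.(17): PSNR in dB for 8-bit images of equal (square) size."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def ssim(x, y):
    """Eqs.(18)-(19): product of luminance, contrast and structure terms."""
    x, y = x.astype(float), y.astype(float)
    mu_x, mu_y = x.mean(), y.mean()
    sig_x, sig_y = x.std(), y.std()
    sig_xy = ((x - mu_x) * (y - mu_y)).mean()
    return (2 * mu_x * mu_y / (mu_x ** 2 + mu_y ** 2)
            * 2 * sig_x * sig_y / (sig_x ** 2 + sig_y ** 2)
            * sig_xy / (sig_x * sig_y))
```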
Due to the self-similarity of fabric images, fractal coding under a wavelet transform not only achieves a higher compression ratio but also greatly shortens the coding time. Therefore, we propose a fabric image retrieval algorithm based on fractal coding and Zernike moments under a wavelet transform. The fractal coding features under a wavelet transform of a query image and of the other images in the database are expressed as ${V_F} = \{ {F_{v1}},{F_{v2}},\cdots,{F_{vn}}\}$ and ${U_F} = \left\{ {{F_{u1}},{F_{u2}},\cdots,{F_{un}}} \right\}$, respectively, and the Zernike moments under a wavelet transform of a query image and of the other images in the database are expressed as $ {V_Z} = \left\{ {{Z_{v1}},{Z_{v2}},\cdots,{Z_{vn}}} \right\} $ and $ {U_Z} = \left\{ {{Z_{u1}},{Z_{u2}},\cdots,{Z_{un}}} \right\} $, respectively. In this paper, the Manhattan distance[26,27] is chosen to calculate similarity and is defined as:
$ d({V_F},{U_F}) = \sum\limits_{i = 1}^n {\left| {{F_{vi}} - {F_{ui}}} \right|}\quad, $ (20)
$ d({V_Z},{U_Z}) = \sum\limits_{i = 1}^n {\left| {{Z_{vi}} - {Z_{ui}}} \right|} \quad,$ (21)
where $ d({V_F},{U_F}) $ is the difference between the fractal parameters under a wavelet transform of the query image and those of the other images in the database, and $ d({V_Z},{U_Z}) $ is the corresponding difference of the Zernike moments under a wavelet transform. The total similarity distance is then defined as:
$ D = {\lambda _1}d({V_F},{U_F}) + {\lambda _2}d({V_Z},{U_Z}) \quad,$ (22)
where $ {\lambda _1} $ and $ {\lambda _2} $ are weights with $ 0 < {\lambda _1} < 1 $, $ 0 < {\lambda _2} < 1 $ and $ {\lambda _1} + {\lambda _2} = 1 $.
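A sketch of Eqs.(20)-(22) and of the ranking step follows. The equal weights $ {\lambda _1} = {\lambda _2} = 0.5 $ and the dictionary-style feature records are illustrative assumptions; the paper does not report the weight values it uses, and in practice the two distances would typically be normalized before fusion.

```python
import numpy as np

def manhattan(v, u):
    """Eqs.(20)-(21): L1 distance between two feature vectors."""
    return np.abs(np.asarray(v, float) - np.asarray(u, float)).sum()

def total_distance(q, db_item, lam1=0.5, lam2=0.5):
    """Eq.(22): weighted fusion of fractal and Zernike distances.
    q and db_item are dicts with keys 'F' (fractal) and 'Z' (Zernike)."""
    assert abs(lam1 + lam2 - 1.0) < 1e-9         # lambda_1 + lambda_2 = 1
    return (lam1 * manhattan(q['F'], db_item['F'])
            + lam2 * manhattan(q['Z'], db_item['Z']))

def rank_database(q, database, lam1=0.5, lam2=0.5):
    """Sort database indices by ascending total similarity distance D."""
    dists = [total_distance(q, item, lam1, lam2) for item in database]
    return np.argsort(dists)
```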
The obtained distances are sorted in ascending order. Precision and recall[28] are used to evaluate the retrieval performance, as defined in Eq.(23) and Eq.(24).
$ Precision = \frac{r}{{r + M}}\quad, $ (23)
$ Recall = \frac{r}{{r + P}}\quad, $ (24)
where $ r $ is the number of relevant retrieved images, and $ r + M $ and $ r + P $ are the total numbers of retrieved and relevant images, respectively (so $M$ is the number of irrelevant retrieved images and $P$ is the number of relevant images that were not retrieved). Precision is the ratio of relevant retrieved images to total retrieved images; recall is the ratio of relevant retrieved images to total relevant images. A precision-recall (P-R) curve is usually used to represent retrieval performance, as sketched below.
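The evaluation sketch below computes Eqs.(23)-(24) at a top-k cutoff and traces the P-R curve by varying k; the top-k convention is our assumption, since the paper does not state how the retrieval list is truncated.

```python
def precision_recall(ranked_ids, relevant_ids, k):
    """Eqs.(23)-(24) at cutoff k.
    ranked_ids: database ids sorted by ascending distance D.
    relevant_ids: ground-truth set of ids relevant to the query."""
    retrieved = set(ranked_ids[:k])
    r = len(retrieved & relevant_ids)    # relevant retrieved images
    precision = r / k                    # r / (r + M)
    recall = r / len(relevant_ids)       # r / (r + P)
    return precision, recall

def pr_curve(ranked_ids, relevant_ids):
    """P-R curve: sweep the cutoff k over the whole ranked list."""
    return [precision_recall(ranked_ids, relevant_ids, k)
            for k in range(1, len(ranked_ids) + 1)]
```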
3 Experiments and analysis
We use 3000 fabric images with a size of 512×512 from a clothing design company as the experimental images. All experiments are carried out in MATLAB R2020a on a 3.60 GHz computer with 16 GB of RAM. We compare the proposed algorithm (FZW) with Basic Fractal Image Compression (BFIC), the algorithm that combines orthogonal fractal parameters with improved Hu invariant moments and variable-bandwidth kernel density estimation of fractal parameters (HVKF)[29], and the Sparse Fractal Image Compression algorithm (SFIC)[30].
We compare the average PSNR of the 3000 images under the four methods. The experimental results in Table 2 show that the proposed algorithm (FZW) outperforms the others.

Table 2. Average PSNR of 3000 images with four different methods
Method | BFIC | HVKF | SFIC | FZW
PSNR/dB | 28.26 | 31.47 | 36.38 | 37.21
Fig.3 presents some decoded images under the different algorithms. It can be seen that the reconstructed images under the BFIC algorithm exhibit a blocking effect. Image quality is improved under the HVKF algorithm, while the reconstructed images under the SFIC algorithm and the proposed algorithm look almost the same as the original images.

Figure 3. Decoded images under different algorithms (from left to right: original image, BFIC, HVKF, SFIC and FZW results)
As seen in Table 3, the decoded image quality and encoding speed under the proposed algorithm are significantly improved compared with the BFIC and HVKF algorithms. Compared with the SFIC algorithm, the proposed algorithm achieves higher PSNR for all decoded images except Trellis and Flower2, where it is slightly lower. Furthermore, the encoding time is reduced by about half, and the SSIM between the original and reconstructed images is improved. These results are consistent with subjective visual evaluation.

Table 3. Comparison of decoding image quality and encoding time under different algorithms
Images | BFIC PSNR/dB | BFIC Time/s | BFIC SSIM | HVKF PSNR/dB | HVKF Time/s | HVKF SSIM | SFIC PSNR/dB | SFIC Time/s | SFIC SSIM | FZW PSNR/dB | FZW Time/s | FZW SSIM
Trellis | 28.44 | 727.81 | 0.805 | 32.72 | 165.37 | 0.852 | 37.86 | 65.76 | 0.938 | 35.89 | 43.67 | 0.921
Flower1 | 27.72 | 748.32 | 0.742 | 31.85 | 148.88 | 0.823 | 38.53 | 83.63 | 0.955 | 38.82 | 43.85 | 0.978
Cluster | 29.01 | 733.70 | 0.846 | 30.46 | 160.53 | 0.869 | 35.48 | 90.08 | 0.937 | 36.30 | 38.23 | 0.945
Stripes1 | 28.56 | 742.24 | 0.784 | 30.80 | 156.60 | 0.858 | 36.06 | 82.34 | 0.943 | 36.71 | 38.13 | 0.960
Leaves | 28.99 | 736.59 | 0.808 | 33.54 | 163.47 | 0.889 | 37.25 | 71.09 | 0.946 | 37.57 | 42.97 | 0.966
Stripes2 | 29.12 | 740.68 | 0.812 | 29.23 | 163.26 | 0.856 | 37.93 | 87.72 | 0.933 | 38.15 | 44.28 | 0.974
Flower2 | 28.70 | 728.76 | 0.774 | 30.53 | 155.04 | 0.842 | 38.22 | 73.51 | 0.986 | 37.64 | 38.11 | 0.982
Rhombus | 27.85 | 730.05 | 0.692 | 30.07 | 163.77 | 0.870 | 33.10 | 82.55 | 0.969 | 35.21 | 42.93 | 0.983
Flame | 28.13 | 757.90 | 0.802 | 31.07 | 169.90 | 0.858 | 36.37 | 80.61 | 0.944 | 37.29 | 47.57 | 0.975
Diamond | 27.32 | 724.81 | 0.769 | 30.32 | 166.23 | 0.810 | 34.75 | 78.85 | 0.926 | 38.43 | 51.52 | 0.979
Curve | 28.66 | 675.55 | 0.807 | 32.45 | 147.96 | 0.871 | 36.18 | 63.48 | 0.932 | 37.06 | 35.92 | 0.968
Dots | 29.57 | 701.53 | 0.821 | 32.76 | 158.03 | 0.864 | 36.92 | 78.80 | 0.939 | 38.28 | 37.69 | 0.986
Wave | 28.29 | 681.63 | 0.794 | 30.97 | 150.54 | 0.806 | 37.15 | 76.05 | 0.945 | 37.18 | 38.50 | 0.964
Scroll | 27.73 | 717.84 | 0.654 | 29.60 | 161.86 | 0.797 | 35.10 | 79.37 | 0.899 | 36.99 | 38.03 | 0.943
Twill1 | 29.54 | 727.54 | 0.811 | 31.55 | 163.43 | 0.853 | 37.22 | 75.91 | 0.938 | 37.61 | 39.44 | 0.955
Circle1 | 27.94 | 720.09 | 0.763 | 31.89 | 159.87 | 0.847 | 34.88 | 73.00 | 0.917 | 37.26 | 40.16 | 0.967
We can see in Fig.4 (color online) that the image decoding quality under the proposed algorithm is higher. The experimental results above demonstrate that the proposed algorithm obtains higher decoded image quality at a higher encoding speed, making it an effective algorithm for image retrieval.

Figure 4. Comparison of decoding image quality under different algorithms
To demonstrate the superiority of the proposed method, we select other fabric image retrieval algorithms for experimental comparison, including the BFIC algorithm, the HVKF algorithm and the algorithm in literature [31]. To ensure fair comparisons, all the algorithms are run on the same fabric images, and the Manhattan distance is used to calculate similarity. Each fabric image is used as a query image, and the precision and recall are calculated, so a total of $ 3000 \times 4 $ retrievals are carried out across the four algorithms. For each algorithm, the average precision and average recall are calculated. The P-R curves are shown in Figure 5.

Figure 5. Precision-recall (P-R) curves under different algorithms
As exhibited in Fig.5, the retrieval performance of the proposed fabric image retrieval algorithm based on fractal coding and Zernike moments under a wavelet transform is better than that of the other algorithms. The BFIC algorithm uses fractal parameters as its only retrieval feature, so its retrieval performance is poor, and its matching process between domain blocks and range blocks is time-consuming, so its retrieval efficiency is low. The retrieval performance of HVKF and of the algorithm in literature [31] is similar. Compared with Hu invariant moments, Zernike moments have the advantages of rotation, scale and translation invariance and low noise sensitivity, so they are more suitable for fabric image retrieval. However, literature [31] only uses texture features as the retrieval features, which results in low retrieval accuracy. The comparison results indicate that the proposed method is superior for fabric image retrieval.
4 Conclusion
Aiming at the problems of low accuracy and low efficiency in fabric image retrieval, an image retrieval algorithm based on fractal coding and Zernike moments under a wavelet transform is proposed in this paper. Experimental results show that the average precision and average recall are greatly improved, and that the proposed algorithm achieves both high retrieval accuracy and high retrieval efficiency, outperforming the other algorithms. This image retrieval algorithm can help factory workers retrieve identical or similar fabric images accurately and quickly, saving significant labor resources.