Chinese Optics Letters, Vol. 17, Issue 4, 041703 (2019)

Improved recognition of Echinococcus protoscoleces

Echinococcosis is a serious parasitic disease caused by the larvae of Echinococcus tapeworms.
Eosin exclusion is a common method for detecting the viability of Echinococcus protoscoleces: living protoscoleces exclude the eosin dye, whereas dead ones take it up and stain red.
Despite its advantages, eosin exclusion can be time consuming, and manual monitoring by human observers introduces errors into the measured survival rate of the larvae. Intelligent recognition could improve the accuracy of the overall eosin exclusion method; however, no studies combining intelligent recognition with eosin exclusion have been reported in the literature.
Although we found no direct application to eosin exclusion, a number of studies have applied intelligent recognition algorithms to conventional egg recognition tasks. For example, Rema and Nair applied such techniques to egg recognition[7].
This previous research has collectively advanced egg recognition technology in micromedicine. However, the proposed methods do not explicitly account for the range of image characteristics or their potential overlap.
In response to these methodological shortcomings, we developed a novel method based on computer vision and machine learning that detects Echinococcus protoscoleces treated by eosin exclusion. Using this method, areas containing suspected living protoscoleces are first located by visual saliency, and scale-invariant features extracted from these areas are then classified to identify the living parasites.
The results of this study may improve the accuracy of eosin exclusion when detecting living protoscoleces.
The human visual system possesses image understanding, recognition, and processing capabilities. In the image processing field, efforts are focused on simulating the human visual system using computers and establishing a visual attention model. Human visual observation is selective. Broadly speaking, images contain a variety of information that can be perceived by humans, such as color, texture, and brightness. However, not all information is of interest, and as such, not everything that we see is processed by the brain. To be effective, computer simulation of human visual observation must quickly locate salient regions and extract relevant images.
At present, four primary models of visual saliency exist that were developed by (1) Hou and Zhang, (2) Hu, Rajan, and Chia, (3) Stentiford, and (4) Itti, Koch, and Niebur. Among these four models, the Itti model was determined to be most appropriate for the egg recognition task in this study because it simulates human perception and extracts regions of interest based on differences between the target and background.
Visual saliency can be graphically depicted using maps. When developing visual saliency maps, the Itti model extracts primary characteristics, resolves multiple features and multidimensional visual space using center-surround methods, filters and obtains feature maps using pyramids with a depth of up to nine levels, and compounds maps using fusing and computing methods. The resultant visual saliency maps include brightness, color, and directional characteristics.
Four broadly tuned color channels can be created as follows:
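In Itti's original formulation (assumed here), for an input image with red, green, and blue components r, g, and b, the four broadly tuned channels are

$$R = r - \frac{g+b}{2},\qquad G = g - \frac{r+b}{2},\qquad B = b - \frac{r+g}{2},\qquad Y = \frac{r+g}{2} - \frac{|r-g|}{2} - b.$$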
For the multiscale feature maps, a center-surround difference operation is performed: the feature map at a fine (center) scale c is subtracted from the map at a coarser (surround) scale s after interpolation to a common resolution.
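In the standard Itti model, with ⊖ denoting this across-scale difference, the brightness (intensity) feature maps are

$$I(c,s) = \bigl|I(c) \ominus I(s)\bigr|,\qquad c \in \{2,3,4\},\quad s = c+\delta,\quad \delta \in \{3,4\}.$$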
Similarly, a color feature map can be generated as follows:
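Following the standard formulation, the red–green and blue–yellow feature maps model the color double-opponency of the human visual system:

$$RG(c,s) = \bigl|\bigl(R(c)-G(c)\bigr) \ominus \bigl(G(s)-R(s)\bigr)\bigr|,\qquad BY(c,s) = \bigl|\bigl(B(c)-Y(c)\bigr) \ominus \bigl(Y(s)-B(s)\bigr)\bigr|.$$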
Providing good directional selectivity, the Gabor filter is suitable for the extraction of directional features in an image. Local orientation information can be obtained by filtering the intensity image with oriented Gabor pyramids O(σ, θ), where θ ∈ {0°, 45°, 90°, 135°} is the preferred orientation.
A directional feature map can be subsequently generated as follows:
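In the standard model, the orientation feature maps are the center-surround differences of the Gabor responses at each orientation θ:

$$O(c,s,\theta) = \bigl|O(c,\theta) \ominus O(s,\theta)\bigr|.$$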
After the feature values of each feature map are rescaled using a nonlinear normalization operator N(·), the brightness, color, and directional feature maps are added separately across scales to form the corresponding brightness, color, and directional saliency maps, formulated respectively as follows:
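Following the canonical Itti model, with ⊕ denoting across-scale addition, these three conspicuity maps are

$$\bar{I} = \bigoplus_{c=2}^{4}\ \bigoplus_{s=c+3}^{c+4} N\bigl(I(c,s)\bigr),\qquad
\bar{C} = \bigoplus_{c=2}^{4}\ \bigoplus_{s=c+3}^{c+4} \Bigl[N\bigl(RG(c,s)\bigr) + N\bigl(BY(c,s)\bigr)\Bigr],\qquad
\bar{O} = \sum_{\theta\in\{0^\circ,45^\circ,90^\circ,135^\circ\}} N\!\left(\bigoplus_{c=2}^{4}\ \bigoplus_{s=c+3}^{c+4} N\bigl(O(c,s,\theta)\bigr)\right).$$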
These three individual feature saliency maps are linearly weighted and combined to form a total visual saliency map.
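In Itti's original model the three conspicuity maps are weighted equally:

$$S = \frac{1}{3}\Bigl[N(\bar{I}) + N(\bar{C}) + N(\bar{O})\Bigr].$$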
In the total visual saliency map, each target competes for attention and focus. A winner-take-all mechanism detects the point of highest salience in the map at any given time and draws the focus of attention towards this location. Figure 1 shows an example.
Figure 1. (a) Microscopic image of Echinococcus protoscoleces.
In this study, the Itti visual saliency model, which considers the brightness, color, and directional characteristics of an image, was originally selected to complement eosin exclusion methods, which enhance image color. However, the accuracy of the Itti model was low when extracting the image’s saliency region, and it was difficult to extract the entire area of interest.
In an effort to improve accuracy, we modified the Itti model to reflect the human eye’s different sensitivities to different saliency features. Considering that microscopic images of protoscoleces treated by eosin exclusion are dominated by color differences, the three feature saliency maps were combined with unequal weights.
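One possible way to express such a re-weighted combination (the weights w_I, w_C, and w_O are our own notation; the study’s actual values are not given here) is

$$S' = w_I\,N(\bar{I}) + w_C\,N(\bar{C}) + w_O\,N(\bar{O}),\qquad w_I + w_C + w_O = 1,$$

with a larger weight w_C placed on the color map to reflect the color contrast introduced by eosin staining.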
The method proposed in this study also includes the use of the SIFT algorithm to extract common features from the images. The features are highly distinctive and invariant to image scale and rotation.
To obtain stable and effective extreme points, we first build a scale pyramid using a scale-space kernel based on the Gaussian function as follows:
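The standard scale-space kernel is the two-dimensional Gaussian

$$G(x,y,\sigma) = \frac{1}{2\pi\sigma^{2}}\,e^{-(x^{2}+y^{2})/(2\sigma^{2})},$$

where σ is the scale parameter.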
The scale space of an image can then be defined as a function L(x, y, σ), produced by convolving the variable-scale Gaussian G(x, y, σ) with the input image I(x, y).
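That is, with * denoting convolution in x and y,

$$L(x,y,\sigma) = G(x,y,\sigma) * I(x,y).$$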
To efficiently detect stable keypoint locations in scale space, we use the scale-space extrema of the difference-of-Gaussian function convolved with the image, D(x, y, σ).
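In Lowe's standard SIFT formulation, the difference-of-Gaussian function is computed from two nearby scales separated by a constant multiplicative factor k:

$$D(x,y,\sigma) = \bigl(G(x,y,k\sigma) - G(x,y,\sigma)\bigr) * I(x,y) = L(x,y,k\sigma) - L(x,y,\sigma).$$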
Figure 2 provides a graphical depiction of the difference-of-Gaussian function.
Figure 2. Graphical depiction of the difference-of-Gaussian function.
A large number of extreme points are likely detected in scale space during the first step of the SIFT algorithm. These points need to be further filtered and localized to ensure fully reliable feature points. During this second step, the position and scale of the keypoints are accurately determined by fitting three-dimensional functions. Unstable edge points and keypoints with low contrast are removed.
After the keypoints are filtered and localized, each remaining keypoint is assigned an orientation based on local image gradient directions. For each image sample L(x, y) at the keypoint's scale, the gradient magnitude m(x, y) and orientation θ(x, y) are computed from pixel differences.
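In the standard SIFT formulation these are

$$m(x,y) = \sqrt{\bigl(L(x+1,y)-L(x-1,y)\bigr)^{2} + \bigl(L(x,y+1)-L(x,y-1)\bigr)^{2}},\qquad
\theta(x,y) = \tan^{-1}\!\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}.$$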
The first three steps of the SIFT algorithm produce a set of feature points, each described by a unique location, scale, and orientation. The final step is to compute a keypoint descriptor for the local image region around each keypoint; in the standard SIFT formulation, gradients in the neighborhood of the keypoint are accumulated into 4 × 4 orientation histograms with 8 bins each, yielding a 128-dimensional descriptor.
Figure 3. Keypoint descriptor determination process.
The method proposed in this study and applied to the recognition of Echinococcus protoscoleces combines the modified Itti visual saliency model, the SIFT algorithm, and a support vector machine (SVM) classifier, as illustrated in Fig. 4.
Figure 4. Proposed method for detecting living Echinococcus protoscoleces.
Using these combined methods, a clear egg image and the center points of any suspected living eggs can be obtained (all suspected living eggs are cut at their center points to produce sample slices). The SIFT algorithm can be used to extract the scale-invariant features of the known living eggs and produce a scale-invariant feature vector. The SVM classifier trained on these feature vectors then determines whether each suspected egg is alive.
To validate the efficacy of the method proposed in this study, we performed an experiment using MATLAB R2016a and microscopic images of parasites treated by the eosin exclusion method at the Xinjiang Medical University. The living parasite image samples included different targets and backgrounds. Subsets of 60 living tapeworm parasites and background images were used to develop an SVM classifier. The SVM classifier was subsequently used to detect living/nonliving parasites based on scale-invariant features.
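As an illustration of this classification stage (the study itself was implemented in MATLAB), a minimal sketch in Python using OpenCV's SIFT implementation and scikit-learn's SVC, with our own choices of descriptor pooling and kernel, might look like the following:

```python
# Minimal sketch of the SIFT + SVM classification stage (illustrative only).
import cv2
import numpy as np
from sklearn.svm import SVC

def sift_feature_vector(patch_bgr):
    """Return a fixed-length feature vector for one candidate egg patch."""
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:           # no keypoints found in the patch
        return np.zeros(128, dtype=np.float32)
    return descriptors.mean(axis=0)   # mean-pool the 128-D SIFT descriptors

def train_classifier(live_patches, background_patches):
    """Train an SVM on patches around suspected living eggs and on background patches."""
    X = np.asarray([sift_feature_vector(p) for p in live_patches + background_patches])
    y = [1] * len(live_patches) + [0] * len(background_patches)
    clf = SVC(kernel="rbf")           # kernel choice is an assumption
    clf.fit(X, y)
    return clf

def is_living_egg(clf, patch_bgr):
    """Predict whether a candidate patch contains a living egg."""
    return clf.predict(sift_feature_vector(patch_bgr)[None, :])[0] == 1
```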
As noted previously, the method proposed in this study and applied to the recognition of Echinococcus protoscoleces first locates suspected living eggs using the modified visual saliency model and then classifies them using scale-invariant features. Figure 5 shows one of the test images.
Figure 5. (a) Microscopic image of Echinococcus protoscoleces.
Figure 6 shows the egg recognition results obtained using the proposed method.
Figure 6. Egg recognition results using the proposed method.
Additional comparative results further demonstrated the efficacy of the proposed method. First, we compared the parasite recognition rates for methods using the conventional and the proposed modified Itti models for visual saliency; Table 1 summarizes this comparison.
Next, we considered the transferability of the proposed method by applying it to three different eosin exclusion test images containing living parasites and comparing the resultant parasite recognition rates; Table 2 presents these results.
These collective results demonstrated that the proposed method, based on visual saliency and scale-invariant features, offers a higher level of accuracy when detecting living Echinococcus protoscoleces.
In response to the need for a convenient and cost-effective method for early detection of echinococcosis, we developed a novel method based on computer vision and machine learning that detects living Echinococcus protoscoleces in microscopic images of samples treated by eosin exclusion.
Most notably, this proposed parasite recognition method limits analysis to suspected living parasite areas determined through visual saliency, which in turn reduces the feature extraction time of the SIFT algorithm. Use of the SVM classifier with the extracted scale-invariant features then distinguishes living from nonliving parasites.
The efficacy of this proposed method was validated experimentally. The results indicated that the proposed method, based on visual saliency and scale-invariant features, offers a higher level of accuracy (sufficiently high to meet hospital clinical test requirements) when detecting living protoscoleces.
Experiments are in progress to optimize the algorithm and improve the recognition efficiency.
[2] H. Li, T. Song, Y. Shao, W. Zhang, and H. Wen, Chin. J. Epidemiol. 36, 1002 (2015).
[6] H. Shi, H. Lv, Y. Lei, W. Qin, B. Wang, Z. Wang, Z. Xing, R. Yang, and Y. Jiang, J. Pathog. Biol. 11, 220 (2016).
[7] M. Rema and M. S. Nair, Biomed. Eng. Lett. 14, 3179 (2013).
[8] B. Chen, J. Xie, and B. Wang, Chin. J. Digital Med. 5, 35 (2010).
Citation: Zhuang Li, Guodong Lü, and Xiaoyi Lü, "Improved recognition of Echinococcus protoscoleces," Chinese Optics Letters 17, 041703 (2019).
Category: Medical optics and biotechnology
Received: Nov. 11, 2018
Accepted: Jan. 10, 2019
Published Online: Apr. 3, 2019
Author email: Xiaoyi Lü (xiaoz813@163.com)