Journal of Innovative Optical Health Sciences, Vol. 16, Issue 5, 2350004(2023)
Deep learning method for cell count from transmitted-light microscope
[1] Components of the complete blood count as risk predictors for coronary heart disease: In-depth review and update. Tex. Heart Inst. J., 40, 17-29(2013).
[2] A semiautomated approach using GUI for the detection of red blood cells. Proc. Int. Conf. Electr. Electron. Optim. Techn. (ICEEOT), 525-529(2016).
[3] Automated handheld instrument improves counting precision across multiple cell lines. BioTechniques, 48, 325-327(2010).
[4] Automatic red blood cell detection and counting system using Hough transform. Am. J. Pharm. Sci., 5, 7913-7920(2018).
[5] Identification and red blood cell automated counting from blood smear images using computer-aided system. Med. Biol. Eng. Comput., 56, 483-489(2018).
[6] Automated red blood cells counting in peripheral blood smear image using circular Hough transform. Int. Conf. Artif. Intell. Modelling Simul. (AIMS), 320-324(2013).
[7] Blood cell detection using thresholding estimation based watershed transformation with Sobel filter in frequency domain. Procedia Comput. Sci., 89, 651-657(2016).
[8] Bone marrow cells detection: A technique for the microscopic image analysis. J. Med. Syst., 43, 82(2019).
[9] Somatic cell count in buffalo milk using fuzzy clustering and image processing techniques. J. Dairy Res., 88, 69-72(2021).
[10] Improved detection performance in blood cell count by an attention-guided deep learning method. OSA Continuum, 4, 323-333(2021).
[11] Face detection in untrained deep neural networks. Nat. Commun., 12, 7328(2021).
[12] Transfer learning for pedestrian detection. Neurocomput., 100, 51-57(2013).
[13] You only look once: Unified, real-time object detection. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 779-788(2016).
[14] A survey on performance metrics for object-detection algorithms. Int. Conf. Syst. Signals Image Process. (IWSSIP), 237-242(2016).
[15] Single-image crowd counting: A comparative survey on deep learning-based approaches. Int. J. Multimed. Info. Retr., 9, 63-80(2020).
[16] U-Net: Deep learning for cell counting, detection, and morphometry. Nat. Methods, 16, 67-70(2019).
[17] Annotation-efficient cell counting. Proc. Int. Conf. Med. Image Comput. Comput.-Assisted Intervention (MICCAI), 405-414(2021).
[18] Simultaneous cell detection and classification with an asymmetric deep autoencoder in bone marrow histology images. Annu. Conf. Med. Image Underst. Anal. (MIUA), 829-838(2017).
[19] A survey on deep learning in medical image analysis. Med. Image. Anal., 42, 60-88(2017).
[20] SAU-Net: A unified network for cell counting in 2D and 3D microscopy images. IEEE/ACM Trans. Comput. Biol. Bioinform., 19, 1920-1932(2021).
[21] Deeply-supervised density regression for automatic cell counting in microscopy images. Med. Image Anal., 68, 101892(2021).
[22] Efficient and robust cell detection: A structured regression approach. Med. Image Anal., 44, 245-254(2018).
[23] White blood cell differential count of maturation stages in bone marrow smear using dual-stage convolutional neural networks. PLoS One, 12, e0189259(2017).
[24] Deep learning for label-free nuclei detection from implicit phase information of mesenchymal stem cells. Biomed. Opt. Express, 12, 1683-1706(2021).
[25] Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nat. Methods, 15, 917-925(2018).
[26] AI-powered transmitted light microscopy for functional analysis of live cells. Sci. Rep., 9, 1-9(2019).
[27] Image-to-image translation with conditional adversarial networks. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 5967-5976(2017).
[28] Fluo-Fluo translation based on deep learning. Chin. Opt. Lett., 20, 031701(2022).
[29] High-speed multimode fiber imaging system based on conditional generative adversarial network. Chin. Opt. Lett., 19, 081101(2021).
[30] Image super-resolution using dense skip connections. Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 4799-4807(2017).
[31] YOLO9000: Better, faster, stronger. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 21-26(2017).
[32] YOLOv3: An incremental improvement. arXiv:1804.02767(2018).
[33] Attention-YOLO: YOLO detection algorithm that introduces attention mechanism. Comput. Eng. Appl., 55, 13-23(2019).
[34] Deep residual learning for image recognition. Conf. Comput. Vis. Pattern Recognit. (CVPR), 770-778(2016).
[35] Multi-scale structural similarity for image quality assessment. Asilomar Conf. Signals Syst. Comput. (ACSSC), 1398-1402(2003).
[36] Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging, 3, 47-57(2017).
[37] Unpaired image-to-image translation using cycle-consistent adversarial networks. Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2242-2251(2017).
[38] The unreasonable effectiveness of deep features as a perceptual metric. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 586-595(2018).
[39] Machine learning approach of automatic identification and counting of blood cells. Healthc. Technol. Lett., 6, 103-108(2019).
[40] Practical fluorescence reconstruction microscopy for large samples and low-magnification imaging. PLoS Comput. Biol., 16, e1008443(2020).
[41] Phase imaging with an untrained neural network. Light Sci. Appl., 9, 77(2020).
[42] An unsupervised approach to solving inverse problems using generative adversarial networks(2018).
[43] Image restoration using total variation regularized deep image prior. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), 7715-7719(2019).
[44] NucleAIzer: A parameter-free deep learning framework for nucleus segmentation using image style transfer. Cell Syst., 10, 453-458(2020).
Mengyang Lu, Wei Shi, Zhengfen Jiang, Boyi Li, Dean Ta, Xin Liu. Deep learning method for cell count from transmitted-light microscope[J]. Journal of Innovative Optical Health Sciences, 2023, 16(5): 2350004
Category: Research Articles
Received: Sep. 2, 2022
Accepted: Dec. 31, 2022
Published Online: Sep. 26, 2023
Author emails: Mengyang Lu (xinliu.c@gmail.com), Wei Shi (xinliu.c@gmail.com), Xin Liu (xinliu.c@gmail.com)