Advanced Imaging, Volume 2, Issue 4, 041002 (2025)

Micron scale 3D imaging with a multi-camera array

Amey Chaware1, Kevin C. Zhou1, Vinayak Pathak1, Clare B. Cook1, Ramana Balla1, Kanghyun Kim1, Lucas Kreiss1, Bryan Hilley2, Julia McHugh2, Praneeth Chakravarthula3, and Roarke Horstmeyer1,*
Author Affiliations
  • 1Department of Biomedical Engineering, Duke University, Durham, USA
  • 2Nasher Museum of Art, Duke University, Durham, USA
  • 3Department of Computer Science, University of North Carolina, Chapel Hill, USA
    We present a multi-camera array system that rapidly captures large focal stacks of macroscopically curved objects at microscopic resolution over a wide field of view (FOV). A novel reconstruction algorithm then fuses the acquired imagery into large-area all-in-focus composites and recovers the associated depth maps, providing RGBD scans with uniquely high resolution. We demonstrate our method by obtaining high-resolution RGBD images of curved objects up to 10 cm in size.
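    The all-in-focus composites and depth maps are recovered per pixel from focal stacks. As a point of reference for the "depth from sharpest focus" term used in the algorithm caption below, the following is a minimal sketch of a classical depth-from-focus baseline, not the authors' learned reconstruction; the focus measure and smoothing scale are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): classical depth-from-focus
# on one camera's focal stack.
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def depth_from_focus(stack, z_positions, smooth_sigma=2.0):
    """stack: (Z, H, W) grayscale focal stack; z_positions: (Z,) stage heights."""
    # Focus measure per slice: locally smoothed squared Laplacian.
    sharpness = np.stack(
        [gaussian_filter(laplace(s.astype(np.float32)) ** 2, smooth_sigma) for s in stack]
    )
    best = np.argmax(sharpness, axis=0)                      # (H, W) sharpest slice index
    depth = np.asarray(z_positions)[best]                    # slice index -> physical height
    aif = np.take_along_axis(stack, best[None], axis=0)[0]   # all-in-focus composite
    return aif, depth
```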
    Overview of the data acquisition process using the multi-camera array imager. (a) Hardware schematic of the multi-camera array; more than 50% overlap between the FOVs of individual cameras allows imaging over a large continuous area. (b) The multi-camera array scans objects axially to create a focal stack for each camera, with a total of 6 × 9 = 54 camera images per z slice. (c) Actual setup: the sample is mounted on a stage and illuminated by four LED panels placed symmetrically around the sample stage.
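    For concreteness, the snippet below sketches how the raw acquisition described above could be organized into per-camera focal stacks. The file layout, naming scheme, and number of z slices are assumptions for illustration, not the authors' data format.

```python
# Hypothetical loading sketch: gather each camera's frames across z slices into
# a (N_CAMS, N_Z, H, W) array. File names such as "z005_cam12.tif" are assumed.
import numpy as np
import imageio.v3 as iio

N_CAMS, N_Z = 54, 30   # 6 x 9 camera grid; the z-slice count here is illustrative

def load_focal_stacks(root):
    stacks = []
    for cam in range(N_CAMS):
        frames = [iio.imread(f"{root}/z{z:03d}_cam{cam:02d}.tif") for z in range(N_Z)]
        stacks.append(np.stack(frames))   # (N_Z, H, W) focal stack for one camera
    return np.stack(stacks)               # (N_CAMS, N_Z, H, W)
```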
    Joint 3D reconstruction and stitching algorithm. Two sets of losses are used to optimize the CNN. (a) Multi-view consistency losses. We use a CNN to convert the captured focal stacks into camera-centric depth maps. These depth maps are used to create an all-in-focus (AiF) RGB image. Both the camera-centric depth map and the AiF RGB image are dewarped using calibrated camera parameters to create RGB and depth composites. The composites are then rewarped to create forward predictions of the camera-centric depth maps and AiF RGB images. The mean squared error (MSE) between the camera-centric predictions and the forward predictions is used to train the CNN. (b) Sharpness losses. The MSE between the CNN-predicted depth map and the depth from sharpest focus, and the negative sharpness of the composite created using the CNN-predicted depth map, are also used to optimize the CNN.
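    To make the loss structure in (a) and (b) concrete, the sketch below lays out one training step in PyTorch. The warping operators, sharpness measure, depth-from-sharpest-focus routine, soft AiF rendering, and loss weights are placeholders chosen for illustration; this is a sketch of the loss structure under those assumptions, not the authors' implementation.

```python
# Sketch of the two loss groups (multi-view consistency and sharpness) under
# assumed placeholder operators; not the authors' code.
import torch
import torch.nn.functional as F

def soft_all_in_focus(stack, depth, z_positions, tau=1.0):
    # Differentiable stand-in for AiF rendering: softly pick, per pixel, the
    # slice whose z position is closest to the predicted depth.
    w = torch.softmax(-(z_positions.view(-1, 1, 1) - depth) ** 2 / tau, dim=0)
    return (w * stack).sum(dim=0)

def training_loss(cnn, stack, z_positions, warp_to_composite, warp_to_camera,
                  sharpness_fn, depth_from_sharpest_focus, w=(1.0, 1.0, 0.1)):
    depth_cam = cnn(stack)                                   # camera-centric depth map
    aif_cam = soft_all_in_focus(stack, depth_cam, z_positions)

    # (a) Multi-view consistency: dewarp to the shared composite frame using the
    # calibrated camera parameters, rewarp back, and compare the forward
    # predictions against the camera-centric predictions.
    rgb_comp, depth_comp = warp_to_composite(aif_cam, depth_cam)
    aif_fwd, depth_fwd = warp_to_camera(rgb_comp, depth_comp)
    loss_consistency = F.mse_loss(aif_fwd, aif_cam) + F.mse_loss(depth_fwd, depth_cam)

    # (b) Sharpness losses: agreement with depth-from-sharpest-focus, plus a
    # negative-sharpness penalty on the composite rendered from the CNN depth.
    loss_dfs = F.mse_loss(depth_cam, depth_from_sharpest_focus(stack, z_positions))
    loss_sharp = -sharpness_fn(rgb_comp).mean()

    return w[0] * loss_consistency + w[1] * loss_dfs + w[2] * loss_sharp
```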
    Characterization of the imaging system. (a) 3D-printed pyramid rendering. (b) All-in-focus composite image and predicted depth map. (c) Line profile of the sample along the green line in (b). (d) Packaging foam piece rendering. (e) All-in-focus composite image and predicted depth map. (f) Line profile of the sample along the green line in (e).
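    The line profiles in (c) and (f) are one-dimensional height traces read directly from the predicted depth map. A minimal way to extract such a trace is sketched below; the endpoint coordinates are hypothetical.

```python
# Sketch: sample the predicted depth map along a line segment to obtain a
# height profile (endpoints are placeholder pixel coordinates).
import numpy as np
from scipy.ndimage import map_coordinates

def line_profile(depth, p0, p1, n=500):
    rows = np.linspace(p0[0], p1[0], n)
    cols = np.linspace(p0[1], p1[1], n)
    return map_coordinates(depth, [rows, cols], order=1)   # bilinear interpolation

# Example: profile = line_profile(depth_map, (120, 40), (120, 900))
```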
    High-resolution RGBD reconstructions of large objects. For each object, (a) shows the all-in-focus composite image and (b) shows the depth map. Insets (c) and (e) are enlarged images of the all-in-focus composites, and insets (d) and (f) are enlarged images of the depth maps. Our method identifies small surface features such as holes (A) and divots (C), effectively reconstructs objects with height variations much larger than the depth of field (DOF) (B), and is robust to variation in object material (D).
    3D rendering using reconstructed RGBD data. (a)–(c) show 3D renders of an object imaged by our system in different lighting conditions. (d)–(f) show enlarged renders of the marked region under the same lighting conditions. Due to the high spatial and depth resolution afforded by our method, small surface variations can be seen easily.
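    Renders like these can be produced by lifting the RGBD composite into a colored point cloud (or mesh) and relighting it in a standard 3D viewer. The sketch below shows the simple back-projection step under an assumed lateral pixel pitch; the actual rendering pipeline used for these figures is not specified here.

```python
# Sketch: convert an RGBD composite into a colored point cloud for rendering.
# The lateral pixel pitch is an assumed value for illustration.
import numpy as np

def rgbd_to_point_cloud(rgb, depth, pixel_pitch_um=10.0):
    """rgb: (H, W, 3) uint8 all-in-focus image; depth: (H, W) heights in micrometers."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32) * pixel_pitch_um
    points = np.stack([xs.ravel(), ys.ravel(), depth.ravel()], axis=1)   # (H*W, 3)
    colors = rgb.reshape(-1, 3).astype(np.float32) / 255.0
    return points, colors
```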
    Paper Information

    Category: Research Article

    Received: Mar. 19, 2025

    Accepted: Jun. 19, 2025

    Published Online: Jul. 21, 2025

    Corresponding author email: Roarke Horstmeyer (rwh4@duke.edu)

    DOI: 10.3788/AI.2025.10005
