Laser & Optoelectronics Progress, Vol. 58, Issue 18, 1811013 (2021)

Multi-Camera System: Imaging Enhancement and Application

Peiyao Guo, Zhiyuan Pu, and Zhan Ma*
Author Affiliations
  • School of Electronic Science and Engineering, Nanjing University, Nanjing, Jiangsu 210023, China
    Figures & Tables (25)
    Typical deep-learning-based computational imaging techniques[9]
    Illustration of various-dimensional and multi-scale data acquisition with multi-camera systems in a toy model[15]. (a) Viewpoints of the three cameras in the simulated scene; (b) uniform sampling of the multi-dimensional scene features, i.e., the same imaging control settings for all cameras; (c) non-uniform sampling of the multi-dimensional scene features, i.e., different imaging control settings across cameras. For example, camera #1 captures at a low spatial resolution and camera #2 does not record color information
    Mosaic results acquired with a single camera through scanning[24]. (a) Captured images with only geometric alignment; (b) result after radiometric alignment and tone mapping
    A wide-field high-resolution imaging array camera and its synthesized image[27]. (a) Architecture of the array camera; (b) captured image using the array camera; (c) zoomed-in view of (b)
    Flexible array camera and captured street scene image[12]
    Optical structure and image reconstruction of AWARE-2. (a)(b) Structure of AWARE-2's parallel array of micro-cameras[25]; (c) an image captured using AWARE-2; the insets are digitally magnified crops[31]
    The UnstructuredCam[26]. (a) Schematic of the UnstructuredCam system, consisting of multiple columns of subarrays and the UnstructuredCam module; the cameras and lenses are selected heterogeneously based on the nature of the targets; (b) gigapixel-level videography captured using the UnstructuredCam, where the red and blue frames represent the distributions of the global and local cameras
    Motion aliasing[34]. (a) A ball moving along a sinusoidal trajectory; (b) when an image sequence of the ball is captured at a low frame rate, the perceived motion is along a straight line; this false perception is referred to as “motion aliasing”; (c) the filled-in frames (indicated with blue dashed lines) obtained via ideal temporal interpolation cannot reproduce the correct motion
    Illustration of multiple cameras triggered at the same frame rate[34]. (a)--(d) Sequences captured from different cameras at different times; (e) reconstructed sequence with high frame rate
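The interleaving idea in the caption above can be sketched numerically: each of N cameras runs at the same base frame rate, but camera i is triggered with an offset of i/(N·base_fps), so the merged timestamps sample time N times more densely. A minimal sketch, assuming ideal synchronization (all names are illustrative, not from the cited system):

```python
def trigger_times(n_cameras, base_fps, n_frames):
    """Per-camera capture timestamps (seconds) with staggered trigger offsets."""
    period = 1.0 / base_fps
    offset = period / n_cameras  # stagger between adjacent cameras
    return [
        [i * offset + k * period for k in range(n_frames)]
        for i in range(n_cameras)
    ]

def merged_timeline(times):
    """Interleave all cameras' timestamps into one high-rate sequence."""
    return sorted(t for cam in times for t in cam)

# Four 30 frame/s cameras interleave into an effective 120 frame/s sampling grid
times = trigger_times(n_cameras=4, base_fps=30, n_frames=3)
merged = merged_timeline(times)
```

The same arithmetic underlies Wilburn et al.'s array: uniform temporal coverage only requires that the per-camera offsets divide the frame period evenly.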
    Illustration of Wilburn et al.’s multiple camera array[35]. (a) Detailed structure; (b) firing order for the camera array; (c) the final view consists of different rows (shown in gray) from each camera at the same time instant, which suppresses the distortion caused by the rolling shutter; (d) reconstructed high-speed video of a popping balloon
    System setup and input/output proposed by Wang et al.[36]. (a) Camera setup; (b) the inputs consist of a standard 30 frame/s video and a 3 frame/s light field sequence; (c) the generated 30 frame/s light field video
    Overall image processing architecture proposed by Wang et al.[36]. (a) Estimate the disparity at key frames and the temporal flow in the 2D video to generate the disparity at the target frame, then warp the high frame-rate video to the low frame-rate viewpoint; (b) fuse all reference images to output the final reconstruction
    Illustration of Cheng et al.’s proposal[37]. (a) Dual camera setup; (b) propagation of the high-resolution map along the temporal dimension; (c) detailed pipeline for fusing the high-resolution and low-resolution maps
    Visualized results of Cheng et al.’s proposal[37]. (a) Input frame (Iref) with high resolution and low frame rate; (b) input frame (ILSR↑) with low resolution and high frame rate; (c)(d) close-ups of patches in Iref and ILSR↑; (e) frame reconstructed with the proposed method
    Flow chart of camera imaging principle[38]
    High dynamic range imaging based on (a) a multi-exposure image sequence and (b) inverse tone mapping[39]
    Schematic and reconstructed frames of a high dynamic range camera based on a beam splitter[41]. (a) Capture with a 1∶16 beam splitter; (b) capture with a 50% semitransparent mirror and a camera mounted with a neutral density filter; (c) scene reconstructed using the high dynamic range camera
    Multi-sensor high dynamic range camera and a group of captured images[42]. (a) Optical architecture of the high dynamic range camera; the terms high, medium, and low exposure (HE, ME, LE) refer to the sensors according to the amount of light each receives; (b) final reconstructed image, where the inset photos show the low dynamic range images from the high-, medium-, and low-exposure sensors, respectively
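The multi-exposure fusion behind the HDR captions above can be illustrated with a weighted radiance merge: each LDR frame divided by its exposure time gives a radiance estimate, and a weight that vanishes at clipped values favors well-exposed pixels. A minimal sketch of the general technique (the triangular weight is an assumption, not the cited cameras' actual pipeline):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge normalized LDR frames (values in [0, 1]) into a relative
    radiance map using a triangular well-exposedness weight."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 0 at clipped values, 1 at mid-gray
        acc += w * (img / t)               # per-exposure radiance estimate
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

For an unclipped scene point, every exposure's estimate agrees, so the weighted average recovers the same radiance regardless of exposure time.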
    High dynamic range (HDR) reconstruction based on deformable convolution[50]. (a) Flow chart of HDR reconstruction network; (b) HDR reconstruction on multi-exposure multi-camera input
    Illustration of imaging with flash and without flash[55]. (a) Standard photography, flash photography, and the combined photography; (b)--(d) enlarged views of local details
    Illustration of spectral response curves for traditional RGB imaging and dark flash imaging, along with a luminance comparison between different channels[56]. (a)(c) Spectral response curves for RGB imaging and dark flash imaging, with j∈{1,2,3,4,5} denoting the red, green, blue, IR, and UV channels; (b) absolute irradiance at 1 m from the dark flash; (d)--(l) under low-light conditions, comparisons of luminance and its gradient between the red channel and the IR channel of corresponding pixels in long-exposure imaging, ambient-light imaging, and dark flash imaging
    Illustration of hybrid imaging setup with RGB and NIR-G-NUV cameras[57]. (a) An idealized prototype; (b) an actual camera system and spectral curves
    Visualization of Wang et al.’s method[57]. (a) RGB input (visualized here with a digital gain of 5×); (b) dark-flash input; (c) output
    Workflow of low-light and high-quality color imaging with hybrid monochrome and color cameras[59]
    Close-up performance of Guo et al.’s method on real scenes in the low-light condition[59]. (a) Low-resolution RGB input images (enlarged to the high-resolution size for comparison); (b) high-resolution monochrome images; (c) reconstructed high-resolution RGB images; (d) value-amplified versions of the low-resolution images in (a), showing noise and blur
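The hybrid monochrome/color idea can be sketched as luminance substitution: upsample the low-resolution RGB frame, move to a luma/chroma representation, and replace the luma with the high-resolution, low-noise monochrome image. A minimal numpy sketch; the nearest-neighbor upsampling and BT.601-style transform are simplifying assumptions (Guo et al.'s actual method is learning-based):

```python
import numpy as np

def fuse_mono_color(mono_hr, rgb_lr):
    """Transfer chroma from a low-res RGB frame onto a high-res mono frame."""
    sh = mono_hr.shape[0] // rgb_lr.shape[0]
    sw = mono_hr.shape[1] // rgb_lr.shape[1]
    rgb_up = np.repeat(np.repeat(rgb_lr, sh, axis=0), sw, axis=1)
    # BT.601 luma and simple chroma differences
    y = 0.299 * rgb_up[..., 0] + 0.587 * rgb_up[..., 1] + 0.114 * rgb_up[..., 2]
    u = rgb_up[..., 2] - y          # blue difference
    v = rgb_up[..., 0] - y          # red difference
    y_new = mono_hr                 # substitute the high-res luminance
    r = y_new + v
    b = y_new + u
    g = (y_new - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

Because human vision is less sensitive to chroma resolution than to luma resolution, even this crude substitution preserves most of the perceived detail of the monochrome capture.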

    Get Citation


    Peiyao Guo, Zhiyuan Pu, Zhan Ma. Multi-Camera System: Imaging Enhancement and Application[J]. Laser & Optoelectronics Progress, 2021, 58(18): 1811013

    Paper Information

    Category: Imaging Systems

    Received: May 31, 2021

    Accepted: Jul. 20, 2021

    Published Online: Sep. 3, 2021

    Author Email: Zhan Ma (mazhan@nju.edu.cn)

    DOI: 10.3788/LOP202158.1811013
