Chinese Optics Letters, Volume 17, Issue 3, 030604 (2019)

Visible light positioning: moving from 2D planes to 3D spaces [Invited]

E. W. Lam* and T. D. C. Little**
Author Affiliations
  • Electrical and Computer Engineering Department, Boston University, Boston, Massachusetts 02215, USA

    The global navigation satellite system (GNSS) is a well-established outdoor positioning system with industry-wide impact due to the multifaceted applications of navigation, tracking, and automation. Its indoor equivalent, however, remains at large. One class of solutions, visible light positioning (VLP), with its promise of centimeter-scale accuracy and widespread coverage indoors, has emerged as a viable, easy-to-configure, and inexpensive candidate. We investigate how state-of-the-art VLP systems fare against two hard barriers in indoor positioning: the need for high accuracy and the need to position in three dimensions (3D). We find that although most schemes claim centimeter-level accuracy for some proposed space or plane, those accuracies do not translate into a realistic 3D space due to diminishing field-of-view in 3D and assumptions made about the operating space. We do find two favorable solutions in ray–surface positioning and gain differentials. Both schemes show good positioning errors, low-cost potential, and single-luminaire positioning functionality.

    1. INTRODUCTION

    Ubiquitous positioning is an enabling technology. Its manifestation in the outdoor environment as global navigation satellite system (GNSS) positioning systems (GPS, GLONASS, Galileo, Beidou, etc.) propelled into existence countless navigation, tracking, and automation applications industry-wide. Nevertheless, these GNSS systems are ineffective indoors because satellite signals are severely and unreliably attenuated by buildings, walls, and other obstacles. But despite this restricted access to satellite signals indoors, easy access to many other technologies exists.

    The current candidate indoor positioning technologies fit into two broad branches: RF techniques, including WiFi[1], Bluetooth[2], and Zigbee[3], to name a few; and lighting techniques. Lighting techniques are extensively surveyed[4–7]. The RF solutions achieve meter-level accuracies according to a recent survey of real-world deployments[8]. Time-synchronized RF solutions can reach sub-meter accuracy according to another recent survey focused on theoretical approaches[9]. These accuracies match or better the best of satellite-based positioning, but there is still a need for improvement. Higher-resolution positioning, in the centimeter or even sub-centimeter range, is desirable indoors, as the operating space is smaller; thus, error margins also must decrease. The ideal positioning resolution is fine-grained, on the order of human error. Higher-resolution positioning would also accommodate future technologies that would benefit from finer resolution, such as location-based services (LBS) in smart buildings, autonomous warehouse robots, tracking of medical devices, and next-generation directional beam-formed wireless links. Compared with RF, light-based solutions offer this additional accuracy.

    Theoretically, indoor positioning works the same using any wavelength of light (which is typically absorbed by matter), whether that light is invisible or visible. Visible light, though, is harmless compared to ultraviolet light and, compared to safe invisible wavelengths such as near-IR, has the benefit of cost-sharing with the already existing indoor lighting infrastructure. In fact, a key attribute of visible light positioning (VLP) is the assumption that lighting luminaires are already installed at fixed, regular locations, providing line-of-sight (LOS) coverage wherever human activity occurs and, by virtue of that coverage, reliable 3D positioning. Without this premise, VLP is less enticing.

    The importance of 3D positioning is twofold. First is the requirement to position human-centric devices: humans are not floor robots; we move our devices freely through space and not on some predetermined plane. Second, an ideal positioning system accommodates both human-centric devices (smartphones, wearables, remotes, laptops) and human-independent devices [autonomous robots, personal computers, internet-of-things (IoT) devices, screens]. Figure 1 shows the various 3D coordinates these human-centric and human-independent devices occupy in a room; each of these devices benefits from knowing its position relative to the room, e.g., for communicating, tracking, or navigating, and even for efficiency and convenience. Additionally, each of these devices should not require its own separate positioning scheme.

    Figure 1. Positioning 3D coordinates for devices in an indoor space using visible light.

    As it stands, the current work in 3D VLP is limited. Some VLP techniques assume fixed two-dimensional (2D) planes for positioning, which, as mentioned above, are not ideal for tracking devices that change positions freely in 3D or for tracking a range of different devices at different planes; it is not worthwhile to define positioning planes for each device in a room. Other techniques only quantify performance in a single 2D plane and do not extend their benchmarking to 3D. Our work on active zones, i.e., 3D benchmarking spaces, shows that 3D errors can be drastically different from 2D errors[10]. Another dilemma, a fundamental one, is that some VLP solutions require LOS from multiple luminaires to position, which becomes nontrivial in 3D as field-of-view (FOV) coverage decreases with height. A third consideration is the cost and complexity of 3D positioning. Fortunately, there are solutions that position in 3D in the centimeter range, depend on only one source, and are low-cost. One such technique is ray–surface positioning[11], which has recently emerged as a new way to realize fine-grained positioning.

    This paper provides an overview of 3D VLP. We do this by first describing VLP in its various forms, including typical configurations and key models. Then, we concentrate on the 3D elements, such as FOV, that make some 2D VLP techniques fail and others hard to implement. Next, we review a sample of the best state-of-the-art solutions and discuss trends as they relate to 3D positioning. Finally, we posit on future applications of 3D positioning and which techniques are suitable for these applications.

    The remainder of the paper is organized as follows: Section 2 highlights the overarching concepts of VLP; Section 3 furthers the VLP discussion as it relates to 3D positioning and the limits of 2D positioning schemes; Section 4 highlights and discusses the state-of-the-art in 3D VLP; Section 5 describes future applications and novel emerging techniques; finally, Section 6 concludes the paper.

    2. VISIBLE LIGHT POSITIONING

    A typical VLP system herein will assume a wide-FOV LOS configuration of at least one luminaire, or transmitter (both terms are used interchangeably in this paper), at a fixed position following some layout optimized for modern lighting coverage. Light signals are then captured by a photosensitive detector or receiver, such as a photodiode (PD), some distance d away from the lights. The device itself is responsible for estimating its own location in relation to the reference coordinates of the luminaires; the anchor coordinates of the luminaires are known a priori or communicated to the device wirelessly via RF, visible light communication (VLC), IR, etc. If more than one luminaire is visible to the receiver, the light signals will interfere, and signal multiplexing and demultiplexing are required[12]. Figure 2 shows an example luminaire and device layout for a typical space serviced by multiple luminaires.

    Figure 2. Typical room layout for a VLP system. The geometric angles between the receiver and transmitter are also noted.

    Wide-FOV luminaires are modeled as Lambertian sources, with the Lambertian radiant intensity dependent on the angle of emission ϕ with respect to the transmitter’s normal axis and on the Lambertian order m of the luminaire. The Lambertian radiant intensity is defined as[13,14]

$$L(\phi, m) = \frac{m+1}{2\pi}\cos^{m}(\phi),$$

where the Lambertian order m is calculated from the semiangle at half-power Φ1/2 of the luminaire[13]:

$$m = \frac{-\ln 2}{\ln\left(\cos\Phi_{1/2}\right)}.$$

    In typical lighting luminaires, for example, lamps using Cree XLAMP LEDs, a semiangle at half-power of Φ1/2=60° corresponds to Lambertian order m=1.
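    As a quick check of the relation above, the following minimal Python sketch computes the Lambertian order from the semiangle at half-power and reproduces m = 1 for the 60° example (the function name and the use of Python are illustrative choices, not from the original paper).

```python
import math

def lambertian_order(half_power_semiangle_deg: float) -> float:
    """Lambertian order m = -ln(2) / ln(cos(Phi_1/2))."""
    phi_half = math.radians(half_power_semiangle_deg)
    return -math.log(2.0) / math.log(math.cos(phi_half))

# A 60 deg semiangle at half-power gives m = 1, as noted for typical lamps.
print(lambertian_order(60.0))  # -> 1.0
```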

    Factoring in the squared-distance dependency on d and the receiver characteristics, namely the PD area A, the effective responsivity of the PD Reff, and the angle of incidence ψ at the receiver with respect to the receiver’s normal axis, the full LOS flat-fading (DC) channel model for a single-luminaire, single-PD link at any point in space is given as

$$H_{\mathrm{LOS}}^{\mathrm{DC}} = \begin{cases} L(\phi, m)\,\dfrac{A}{d^{2}}\,R_{\mathrm{eff}}(\psi)\cos\psi, & 0 \le \psi \le \Psi_c \\ 0, & \psi > \Psi_c, \end{cases}$$

where Ψc is the FOV semiangle of the receiver concentrator, and Reff(ψ) is defined as the product of Ts(ψ), the signal transmission of the receiver filter, i.e., losses over wavelengths, and g(ψ), the receiver concentrator gain, which depends on the refractive index n of the concentrator:

$$g(\psi) = \begin{cases} \dfrac{n^{2}}{\sin^{2}\Psi_c}, & 0 \le \psi \le \Psi_c \\ 0, & \psi > \Psi_c. \end{cases}$$
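    To make the model concrete, the sketch below evaluates this LOS DC gain for one luminaire and one PD. The geometry and parameter values (PD area, filter transmission, refractive index, FOV semiangle) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lambertian(phi: float, m: float) -> float:
    """Lambertian radiant intensity L(phi, m) = (m + 1) / (2*pi) * cos(phi)**m."""
    return (m + 1.0) / (2.0 * np.pi) * np.cos(phi) ** m

def concentrator_gain(psi: float, psi_c: float, n: float = 1.5) -> float:
    """Concentrator gain g(psi): n^2 / sin^2(Psi_c) inside the FOV, 0 outside."""
    return n ** 2 / np.sin(psi_c) ** 2 if psi <= psi_c else 0.0

def h_los_dc(tx, rx, tx_normal, rx_normal, m=1.0, area=1e-4,
             psi_c=np.radians(60.0), t_s=1.0, n=1.5) -> float:
    """LOS DC channel gain between one luminaire (tx) and one PD (rx)."""
    tx, rx = np.asarray(tx, float), np.asarray(rx, float)
    n_tx = np.asarray(tx_normal, float) / np.linalg.norm(tx_normal)
    n_rx = np.asarray(rx_normal, float) / np.linalg.norm(rx_normal)
    v = rx - tx                                   # vector from luminaire to PD
    d = np.linalg.norm(v)
    phi = np.arccos(np.dot(v, n_tx) / d)          # emission angle at the luminaire
    psi = np.arccos(np.dot(-v, n_rx) / d)         # incidence angle at the PD
    if psi > psi_c:
        return 0.0                                # outside the receiver FOV
    return (lambertian(phi, m) * area / d ** 2
            * t_s * concentrator_gain(psi, psi_c, n) * np.cos(psi))

# Example: luminaire on a 3 m ceiling facing down, PD at 1 m facing up.
print(h_los_dc(tx=[3, 3, 3], rx=[2, 2, 1], tx_normal=[0, 0, -1], rx_normal=[0, 0, 1]))
```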

    The key takeaway from this channel model is that unique coordinates in a space correspond to unique sets of distances and angles. Based on these distance and angle dependencies, the channel model is exploited by numerous different physical modalities and mathematical techniques to estimate position. Figure 3 shows these physical modalities and mathematical techniques as well as extra peripheral sensors and hardware that aid VLP.

    Figure 3. Taxonomy of positioning algorithms showing the main physical modalities, mathematical techniques, and extra peripherals.

    Physical modalities are the raw information collected from the sensors, such as time-of-arrival (TOA)/time-difference-of-arrival (TDOA)[15,16], received signal strength (RSS)[17], phase-difference-of-arrival (PDOA)[18], and angle-of-arrival (AOA)[19,20], which themselves do not correspond to a position estimate. Mathematical techniques then manipulate and convert these raw values into a position estimate, often by converting the measured values to either a distance or angle measurement and using the channel model from above.

    Geometric-based algorithms, a popular and low-complexity cluster of mathematical techniques, transform the raw collected data into a distance or a directionality, i.e., an angle, via a distance or angle relationship, e.g., TOA measurements into distances using the speed of light, or RSS values into distances using signal attenuation. From the transformed distances or angles, trilateration or triangulation techniques can resolve position[17,19]. The shortcoming of trilateration and triangulation is their reliance on multiple luminaires.
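    As a minimal sketch of this geometric approach, the snippet below takes distances (however they were estimated, e.g., from calibrated RSS or TOA) and solves a linear least-squares lateration under an assumed, fixed receiver height; the luminaire layout and height are illustrative assumptions. Note that the fixed-height assumption is exactly the 2D restriction revisited in Section 3.

```python
import numpy as np

def lateration_fixed_height(anchors, distances, rx_height):
    """Linear least-squares lateration in the horizontal plane.

    anchors:   (N, 3) luminaire coordinates (all on the ceiling).
    distances: (N,) estimated 3D distances (e.g., from calibrated RSS or TOA).
    rx_height: assumed receiver height -- the 2D-plane assumption of Section 3.
    """
    anchors = np.asarray(anchors, float)
    distances = np.asarray(distances, float)
    dz = anchors[:, 2] - rx_height
    # Project the 3D ranges onto the horizontal plane of the receiver.
    horiz = np.sqrt(np.maximum(distances ** 2 - dz ** 2, 0.0))
    p0, h0 = anchors[0, :2], horiz[0]
    # Subtract the first range equation to linearize: A x = b.
    A = 2.0 * (anchors[1:, :2] - p0)
    b = (np.sum(anchors[1:, :2] ** 2, axis=1) - np.sum(p0 ** 2)
         - horiz[1:] ** 2 + h0 ** 2)
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([xy[0], xy[1], rx_height])

# Four ceiling luminaires (3 m) and a receiver at (2.0, 1.5, 1.0).
anchors = [(1, 1, 3), (1, 5, 3), (5, 1, 3), (5, 5, 3)]
rx = np.array([2.0, 1.5, 1.0])
d = [np.linalg.norm(rx - np.array(a)) for a in anchors]
print(lateration_fixed_height(anchors, d, rx_height=1.0))  # ~ [2.0, 1.5, 1.0]
```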

    Machine learning techniques are another subset. One example is fingerprinting, in which the raw physical values are mapped to a feature database for each coordinate point before position is estimated from these previously captured values[21,22]. Machine learning, however, requires prior training for each operating space. Image processing is yet another technique, but it requires more computation power and multi-pixel camera receivers[23,24]. The simplest, but also coarsest, mathematical technique is proximity beaconing. The mathematical technique chosen and the physical modality used are highly codependent and depend on the number of available luminaires and PDs.
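    A minimal fingerprinting sketch, not tied to any of the cited implementations: an offline survey builds an RSS-vector database over a grid, and the online phase estimates position as the average of the k nearest reference points in signal space. The function names, the k-nearest-neighbor matcher, and the synthetic RSS model are illustrative assumptions.

```python
import numpy as np

def build_fingerprint_db(grid_points, rss_measure):
    """Offline phase: record an RSS vector (one entry per luminaire) at each
    surveyed coordinate; `rss_measure` is whatever measurement routine is used."""
    points = np.asarray(grid_points, float)
    return np.array([rss_measure(p) for p in points]), points

def knn_position(rss_vector, db_rss, db_points, k=3):
    """Online phase: average the k reference coordinates whose stored RSS
    vectors are closest (Euclidean) to the live measurement."""
    dist = np.linalg.norm(db_rss - np.asarray(rss_vector, float), axis=1)
    nearest = np.argsort(dist)[:k]
    return db_points[nearest].mean(axis=0)

# Toy example with a synthetic, noiseless RSS model (4 luminaires on a 3 m ceiling).
lums = np.array([(1, 1, 3), (1, 5, 3), (5, 1, 3), (5, 5, 3)], float)
rss_model = lambda p: 1.0 / np.linalg.norm(lums - p, axis=1) ** 2  # stand-in channel
grid = [(x, y, 1.0) for x in range(6) for y in range(6)]
db_rss, db_pts = build_fingerprint_db(grid, rss_model)
print(knn_position(rss_model(np.array([2.2, 3.4, 1.0])), db_rss, db_pts))
```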

    In some scenarios, particularly those involving 3D positioning, signal blocking, and different device orientations, light by itself is not sufficient for positioning. In those cases, peripheral devices add further information to aid the light-based positioning system. Some of these added peripherals include inertial measurement units (IMUs)[25,26], lasers[11], RF[27], and more PDs and transmitters than conventional[28–30]. These peripherals add information on receiver orientation as well as additional raw physical data: angle, RSS, time, and phase. The value added by these peripheral devices should outweigh the additional cost and complexity.
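    As one example of the orientation information an IMU contributes, the sketch below rotates the nominal, upward-facing PD normal by reported roll and pitch; the resulting normal can be fed into the incidence-angle term of the channel model above. The rotation convention (x then y) is an illustrative assumption.

```python
import numpy as np

def receiver_normal(roll_deg: float, pitch_deg: float) -> np.ndarray:
    """Rotate the nominal 'straight up' PD normal by IMU roll (about x) and
    pitch (about y). Yaw does not change a vertical normal."""
    r, p = np.radians([roll_deg, pitch_deg])
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    return Ry @ Rx @ np.array([0.0, 0.0, 1.0])

# A 20 deg pitch tilts the PD normal away from vertical; feeding this normal
# into the channel model changes psi and therefore the received signal.
print(receiver_normal(0.0, 20.0))
```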

    Since most VLP techniques take advantage of multiple luminaires in a space, some form of multiplexing at the source and/or sink is required to be able to isolate the signal information from one luminaire to the next. The simplest multiplexing scheme is time-domain multiplexing (TDM), which requires minimal hardware and just allocates time slots for each source. Another technique is frequency-domain multiplexing (FDM), which requires fewer time slots to communicate information, but requires additional signal processing to take a signal to and from the time domain. There is also spatial multiplexing (SM) via imaging where the transmitters are separated out in space—this requires even more hardware, usually a multi-pixel camera, and processing power. Finally, for niche applications, wavelength-division multiplexing (WDM), which encodes each light on a separate color, can also be used. Figure 4 shows the different multiplexing schemes available to VLP.

    Figure 4. Example multiplexing schemes to prevent luminaire signal interference: (a) TDM, (b) FDM, (c) SM, and (d) WDM.
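    As a minimal sketch of the FDM scheme described above, each luminaire below is assigned its own carrier tone, and the receiver recovers a per-luminaire RSS value from the FFT magnitude at that tone. The sample rate, tone frequencies, and channel gains are illustrative assumptions.

```python
import numpy as np

fs = 10_000                            # ADC sample rate (Hz), illustrative
tones = [1_000, 1_500, 2_000, 2_500]   # one carrier frequency per luminaire
t = np.arange(0, 0.1, 1 / fs)          # 100 ms observation window

# Simulated photocurrent: each luminaire's contribution scaled by its channel gain.
gains = [0.8, 0.3, 0.1, 0.05]
signal = sum(g * np.sin(2 * np.pi * f * t) for g, f in zip(gains, tones))
signal += 0.01 * np.random.randn(t.size)   # receiver noise

# Demultiplex: read the FFT magnitude at each luminaire's tone to recover
# per-luminaire RSS values for the positioning algorithm.
spectrum = np.abs(np.fft.rfft(signal)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
rss = [spectrum[np.argmin(np.abs(freqs - f))] for f in tones]
print(rss)   # ~ [0.8, 0.3, 0.1, 0.05]
```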

    3. CHALLENGES IN ADAPTING 2D TO 3D

    One of the largest barriers in appreciating VLP is reconciling the extensive quantity of work: many works are incremental, some are repetitive, and a small number are innovative. The reasons behind this massive interest are the excitement of the field and the potential to use energy-efficient LED lighting as a substrate. However, benchmarking among different proposals is lacking and inconsistent, leaving room for nearly identical works that appear at first to be different. This also challenges review authors to sort out the fundamental contributions.

    For starters, Fig. 5 shows an example cumulative distribution function (CDF) of total mean square error (MSE) in a 2D plane (a separate confusing point is that some errors are reported as absolute errors in X, Y, and sometimes Z, while others are reported as total MSE). From this graphic, three common benchmarks are pointed out: (a) the best accuracy possible, (b) the accuracy for 95% of cases, and (c) the accuracy for 100% of cases. Lacking in these benchmarks, however, is where the errors occur, as positioning errors are not homogeneous. This is particularly the case for RSS-based positioning systems, such as RSS-distance systems[17,31] and RSS-angle systems[28], as errors are highly dependent on the signal-to-noise ratio (SNR), which is location dependent.

    Figure 5. Common benchmarks shown on a CDF: (a) best accuracy, (b) accuracy for 95% of cases, and (c) accuracy for 100% of cases.
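    Given a vector of positioning errors from a simulation or measurement campaign, the benchmarks in Fig. 5 reduce to simple order statistics, as in the sketch below (the synthetic error samples are placeholders, not reported results).

```python
import numpy as np

def cdf_benchmarks(errors_cm):
    """Return the (a) best-case, (b) 95th-percentile, and (c) worst-case
    positioning errors used as benchmarks in Fig. 5."""
    e = np.sort(np.asarray(errors_cm, float))
    return e[0], np.percentile(e, 95), e[-1]

errors = np.abs(np.random.normal(5.0, 2.0, size=1000))   # stand-in error samples (cm)
best, p95, worst = cdf_benchmarks(errors)
print(f"best {best:.1f} cm, 95% {p95:.1f} cm, 100% {worst:.1f} cm")
```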

    We demonstrated in prior work[10] that errors change depending on the distance of a 2D plane from the ceiling (Fig. 6, 95% and 100% cases). This discrepancy is problematic when comparing works due to the lack of consistent definitions in the VLP field for lighting parameters such as power, spacing, and receiver characteristics. RSS-based solutions are impacted the most by differing signal power and noise. We make an attempt at solving this issue by defining active zones, i.e., zones of interest, that other researchers can benchmark against to provide a common 3D reference[10]. In this way, techniques can be compared for the same space, power, and layout. In addition, active zones can be defined for different positioning spaces, e.g., an active zone for floor-based positioning or an active zone for near-ceiling tracking.

    Figure 6. Changing planes affects positioning accuracy, as seen in these CDF curves.

    While solutions that do not depend directly on signal power and noise power are less affected by the varying signal levels, they are still limited by FOV. The FOV restriction arises from committing to piggybacking the VLP infrastructure on the lighting infrastructure. Because the spacing of lamps is defined for lighting and not VLP, lights are not usually placed for multiple overlapping coverage in all dimensions. In lighting, flat lighting is more important, and LOS coverage is guaranteed for only one luminaire, not for more than one. This impacts 3D positioning regardless of physical modality, as it affects the maximum FOV attainable, i.e., the lateral coverage of a luminaire decreases the closer the device is to the ceiling. This is due to the cosine dependency of signal attenuation on both the transmitter and receiver angles. Additionally, when using a concentrator on the receiver, no signal is received when the angle of incidence at the receiver exceeds the FOV semiangle of the receiver concentrator, ψ>Ψc. Figure 7 shows how changes in height affect the number of transmitters seen by a receiver with a concentrator of the same FOV. In this example, depending on height, the receiver sees either two transmitters or one. This is due to the minimum 2 m separation typical of modern lighting. If lighting were not a concern, luminaires could be installed more densely to combat the varying FOV with height.

    Figure 7. Receivers at positions a and b have the same FOV. However, the receiver at position b sees one less transmitter than the receiver at position a.

    For schemes that make use of more than one luminaire, this FOV restriction is problematic, as multi-luminaire coverage may be fine 3 m away from the lights but not 1 m away. Figure 8 summarizes this effect by showing the total number of transmitters, each placed 2 m from its neighbors and from the walls of the room, visible to a receiver with Ψc=60° at planes 1 m away (Fig. 8(a)) and 2 m away (Fig. 8(b)). At 1 m away, only a small section in the middle has access to all four luminaires. At 2 m away, the section with access to four luminaires grows but still excludes the corners and edges. This explains why positioning schemes that make use of more than one luminaire falter at corners and edges.

    Figure 8. In a typical 6 m × 6 m space with four luminaires placed 2 m away from each other, the number of transmitters seen across the space changes depending on the plane: (a) 1 m away and (b) 2 m away.
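    The effect in Figs. 7 and 8 is easy to reproduce numerically: for an upward-facing PD, a luminaire is visible only if its direction falls within the receiver FOV semiangle. The sketch below counts visible luminaires over a coarse grid for planes 1 m and 2 m below the lights; the grid resolution and the upward-facing-receiver assumption are illustrative.

```python
import numpy as np

def visible_count(rx_xy, drop, luminaires_xy, psi_c_deg=60.0):
    """Number of luminaires within the receiver FOV for an upward-facing PD
    located `drop` meters below the ceiling plane."""
    psi_c = np.radians(psi_c_deg)
    rx_xy = np.asarray(rx_xy, float)
    count = 0
    for lum in np.asarray(luminaires_xy, float):
        lateral = np.linalg.norm(lum - rx_xy)
        psi = np.arctan2(lateral, drop)   # incidence angle at the PD
        count += psi <= psi_c
    return int(count)

# Four luminaires spaced 2 m apart in a 6 m x 6 m room (as in Fig. 8).
lums = [(2, 2), (2, 4), (4, 2), (4, 4)]
for drop in (1.0, 2.0):                   # planes 1 m and 2 m below the lights
    grid = [(x, y) for x in np.linspace(0, 6, 7) for y in np.linspace(0, 6, 7)]
    counts = [visible_count(p, drop, lums) for p in grid]
    print(drop, "m below:", np.bincount(counts, minlength=5))  # histogram of 0..4 visible
```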

    As mentioned, the angle between the transmitter and receiver also affects 3D coverage and SNR. Figure 9 shows examples of the receding signal strength as a device approaches the ceiling, separate from the receiver FOV restrictions discussed above. Signal strength directly underneath the luminaire increases as power is preserved. This concentrated increase in signal is not particularly useful when full-room positioning coverage is desired. With less signal strength, signals become more susceptible to noise, rendering RSS measurements useless in areas with low SNR. This is one of the primary reasons why 2D RSS-based solutions do not translate well to 3D.

    Figure 9. Signal strength fading with height. While the signal strength directly under the luminaire increases, the region of poor signal strength increases.

    The implications of FOV at both the receiver and transmitter are far-reaching. With less SNR, RSS-based systems become noise dominant. Also, with less SNR, the VLP system’s ability to communicate reliably via VLC to devices not within view, for instance to push real-time configurations, is limited. With fewer transmitters available to the mathematical techniques, positioning schemes that rely on more than one luminaire fail; this includes trilateration, triangulation, and imaging.

    In addition to FOV, another 2D assumption that breaks down in 3D positioning is the reliance on a known height. Positioning schemes that require a prior known height obviously will not position in 3D. But there is also the notion of characterizing only a single 2D lateral plane and assuming that the results translate to 3D. Actual benchmarking of 3D spaces is sparse, which is surprising given that very few devices operate within one plane. A further complaint is that many 2D techniques use arbitrary light placements that do not correspond to realistic lighting layouts whatsoever.

    4. STATE-OF-THE-ART

    In this section, we review the current state-of-the-art in VLP. Table 1 summarizes the state-of-the-art, focusing on key parameters: physical modality, mathematical technique, number of sinks and sources, any extra peripherals, the reported test space, positioning errors, and whether the solution is capable of 3D or not. The following text discusses the major trends.

    Most accurate. It is clear from the review of the state-of-the-art that time-based physical modalities, TOA/TDOA and PDOA, result in the best accuracy resolution[15,32], as both show millimeter-level accuracy with the potential for 3D positioning. However, time-based positioning schemes require the transmitters to all be synchronized with one another, which is expensive, requiring atomic clocks and/or connecting the transmitters together. This relegates TOA and PDOA solutions to niche applications where high accuracy is desired and cost is trivial. A newer work flips the transmitter and receiver paradigm so that there are multiple receivers and one transmitter[32]. Receivers are colocated and therefore easier to synchronize than transmitters. This provides good results but requires a special receiver for each tracked device (more to follow).

    Simplest solutions. The simplest solution[17] requires just a single PD and RSS measurements from at least three lights and provides good accuracy, around 6 cm, but is severely limited. This technique uses a linear least-squares estimator (LLSE) that assumes a fixed height in its estimation and thus does not position in 3D. It also suffers from FOV restrictions at varying heights, as Section 3 describes. Other solutions that use only RSS require more complicated mathematical techniques, like fingerprinting[27,22]. However, collecting fingerprint data in 3D is tedious and carries a complicated overhead.

    AOA enables 3D. An interesting but not unexpected observation is that AOA seems to be the enabling modality for 3D positioning. The schemes that do 3D positioning all take in some angular information. This is because angles provide more diversity than distances: strictly speaking, there exists an entire surface that corresponds to the same distance but only a line that corresponds to the same angle. Therefore, triangulation provides better results in 3D. There are several AOA techniques highlighted, many of which use triangulation[19,26,30,33]. The biggest downside to triangulation is that it requires access to more than one luminaire, so positioning would be hampered by FOV. Another reason to hesitate on AOA is the need for more complex receivers: AOA receivers come in the form of cameras and multiple PDs, and Ref. [26] only uses one PD but requires consecutive measurements at different angles provided by the accelerometer. Reference [11] also uses one PD, but the technique is not purely AOA. This brings us to our next point: peripherals.

    Dawn of peripherals. The need to position with additional sensor information, i.e., hybrid systems, is becoming more commonplace, with some of these peripherals already built into devices: IMUs[25], Bluetooth[27], and cameras[24]. IMUs are added to aid in coordinate transformations when dealing with device orientation. Bluetooth and cameras provide more diversity in the collected information. But some peripherals, such as a steerable laser[11], added PDs[28,32], and a rotating receiver[34], eliminate the need to position with more than one luminaire while still providing 3D positioning. The reliance on additional sensors increases complexity, but their added worth in 3D is valuable.

    Single luminaire. Positioning with only one luminaire is highly advantageous, as LOS coverage from one luminaire is guaranteed by indoor lighting, avoiding FOV restrictions in 3D. The holy-grail solution would position using one luminaire with low complexity. However, there is no technique that uses only one light source by itself and provides centimeter-level accuracy: beaconing uses one luminaire but is only as accurate as the spacing between the luminaires, so about 1 m. There are, however, techniques that use one luminaire with peripherals for aid[11,28,32,34].

    Combating poor SNR. When a signal becomes noise dominant, its usefulness in estimating position is limited. This occurs mainly for two reasons: physical occlusions and signal attenuation over distance and angle, both of which are more drastic in 3D. Redundancy mitigates the effects of occlusions, reaffirming the need to position with as few transmitters as possible; access to additional transmitters should provide redundancy instead of bare-minimum performance. Attenuation over distance and angle is unavoidable; thus, techniques that rely primarily on precise RSS measurements[17,25] are non-starters in low-SNR regions such as corners. Furthermore, relying on signal measurements from a distant transmitter is yet another drawback of multi-luminaire positioning.

    Device complexity. Device complexity plays a role in implementation. A positioning scheme that requires a specific receiver is less favorable, as every device positioned in the space would require that specific receiver. Single-PD receivers work best, as PDs are low-cost and easy to implement. Cameras are not out of the question, as their presence is pervasive, but the associated signal processing is cumbersome. Special tilted[28] and aperture receivers[30] seem promising, but more evaluation of practical solutions is required. A rotating receiver[34], however, is out of the question, as moving parts at the receiver will be a source of mechanical failure.

    Novel techniques. New mathematical techniques have emerged in recent years. These new techniques are neither trilateration nor triangulation and make use of RSS and AOA information collectively. One new technique is differential modeling[28,34]. In these models, differential gains are matched to a model determined by both RSS and AOA for tilted receivers: RSS determines an operating height plane, while AOA resolves the lateral position on that plane using differentials between the PDs (a sketch follows below). Another new technique is ray–surface positioning, which combines RSS and AOA similarly: the AOA of a steerable laser provides a vector line (ray), and RSS provides a surface, with the position estimate being the intersection of the ray and surface[11]. These three works are the only ones to look explicitly at 3D positioning and to characterize errors in different 3D planes.
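    As a minimal illustration of the differential idea (not the exact formulation of Refs. [28,34]), the sketch below models three PDs tilted 30° from vertical at azimuths 120° apart. Because each PD's received power scales with the cosine of its own incidence angle while all other channel factors are shared, the normalized gain vector depends only on the arrival direction, which a coarse grid search then recovers. The tilt angle, PD count, and grid-search matcher are illustrative assumptions.

```python
import numpy as np

TILT = np.radians(30.0)                       # assumed fixed tilt of each PD from vertical
PD_AZIMUTHS = np.radians([0.0, 120.0, 240.0])
PD_NORMALS = np.array([[np.sin(TILT) * np.cos(a),
                        np.sin(TILT) * np.sin(a),
                        np.cos(TILT)] for a in PD_AZIMUTHS])

def pd_gains(direction):
    """Relative received power on each tilted PD for light arriving from the
    unit vector `direction` (receiver -> luminaire); common factors cancel."""
    return np.maximum(PD_NORMALS @ direction, 0.0)

def estimate_aoa(measured_gains, grid=121):
    """Grid-search the arrival direction whose normalized gain pattern best
    matches the measured (normalized) gains -- the 'differential' step."""
    meas = measured_gains / np.linalg.norm(measured_gains)
    best, best_err = None, np.inf
    for theta in np.radians(np.linspace(0, 80, grid)):        # polar angle
        for az in np.radians(np.linspace(0, 360, 2 * grid)):  # azimuth
            d = np.array([np.sin(theta) * np.cos(az),
                          np.sin(theta) * np.sin(az),
                          np.cos(theta)])
            g = pd_gains(d)
            g = g / np.linalg.norm(g)
            err = np.linalg.norm(g - meas)
            if err < best_err:
                best, best_err = (np.degrees(theta), np.degrees(az)), err
    return best

# Light arriving from 25 deg off vertical at 40 deg azimuth.
true_dir = np.array([np.sin(np.radians(25)) * np.cos(np.radians(40)),
                     np.sin(np.radians(25)) * np.sin(np.radians(40)),
                     np.cos(np.radians(25))])
print(estimate_aoa(pd_gains(true_dir)))   # ~ (25, 40)
```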

      Table 1. Representative Sampling of State-of-the-art in Visible Light Positioning

      Reference | Physical | Mathematical | Sources/Sinks | Peripherals | Reported Volume or Plane | Accuracy (cm) | 3D
      [17] | RSS | Multilateration | 4 TXs / 1 PD | None | Pl: [6 m × 6 m] @ 3 m | 5.9 | No
      [25] | RSS | Trilateration | 16 TXs / 1 PD | IMU | Pl: [20 m × 20 m] @ 3 m | 40 | No
      [27] | RSS | Fingerprinting | 4 TXs / 1 PD | 6 Bluetooth APs | Pl: [5 m × 5 m] @ 3 m | 6 | No
      [22] | RSS | Fingerprinting | 4 TXs / 1 PD | Camera | Pl: [5 m × 5 m] @ 3 m | 10 | No
      [18] | PDOA | Trilateration | 3 TXs / 1 PD | Time sync. | Pl: [1 m × 1.2 m] @ 3 m | 1.8 | No
      [32] | TDOA | Multilateration | 1 TX / 5 PDs | Time sync. | Pl: [5 m × 5 m] @ 3 m | 0.01 | No
      [15] | TOA/PDOA | Multilateration | 5 TXs / 1 PD | Time sync. | Pl: [5 m × 5 m] @ 3 m | 0.01 | Yes
      [19] | AOA | Triangulation | 5 TXs / Camera | None | Pl: [0.71 m × 0.73 m] @ 2.46 m | 10 | Yes
      [33] | AOA/ADOA | Triangulation | 4 TXs / Camera | None | Pl: [8 m × 8 m] @ 3 m | 3.2 | Yes
      [26] | AOA | Triangulation | 3 TXs / 1 PD | Accelerometer | Pl: [5 m × 3 m] @ 3 m | 25 | Yes
      [30] | AOA | Triangulation | 4 TXs / 8 PDs | 8 apertures | Pl: [5 m × 5 m] @ 2 m | 10 | Yes
      [28] | AOA/RSS | Differential | 1 TX / 3 PDs | Tilted RXs | Vo: [2 m × 2 m × 2.5 m] | 6 | Yes
      [34] | AOA/RSS | Differential | 1 TX / 1 TX | Rotating RX | Pl: [6 m × 6 m × 11.25 m] | 4 | Yes
      [11] | AOA/RSS | Ray–surface | 1 TX / 1 PD | Steerable laser | Pl: [6 m × 6 m] @ 3 m | 13 | Yes
      [24] | RSS | Imaging | 4 TXs / Camera | None | Pl: [1.2 m × 1.2 m] @ 1.2 m | 6 | Yes

    Figure 10 illustrates the ray–surface intersection method. A narrow-beam laser source is steered and aligned quickly to a receiver using micro-electro-mechanical systems (MEMS) to provide precise angular information; a modulated laser would ensure active angle communication. A Lambertian source with a wide FOV provides radial distance information between the receiver and itself. The laser source is thus used to pinpoint where on this radial surface the receiver is located: i.e., there is only a limited set of points on this isointensity surface that the receiver can occupy for the given angles and received signal. The two sources compensate for each other's weaknesses. Therefore, given the RSS and the laser angles, we can solve for position. We show example results in Fig. 11 comparing ray–surface positioning to multilateration. Given the same SNR and luminaire layout, at locations 3 m away from the lights, ray–surface positioning improves accuracy both in 3D and in regions of low SNR. Ray–surface positioning does not suffer from FOV restrictions, as it requires only one transmitter to position, which lets it advantageously use the strongest of the four signals at all times, resulting in significant gains.

    Figure 10. Concept of ray–surface positioning showing angles of the steerable laser and Lambertian profile.

    Figure 11. MSE comparing ray–surface positioning to multilateration. Ray–surface provides 3D positioning and is significantly better than multilateration.
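    A minimal numerical sketch of the ray–surface idea follows (not the implementation of Ref. [11]): the steered laser defines a ray from a known ceiling position, and the point along that ray whose modeled Lambertian RSS matches the measurement is taken as the position estimate. Colocating the laser with the luminaire, the upward-facing PD, and the search by marching along the ray are illustrative assumptions.

```python
import numpy as np

def los_gain(rx, lum=(3.0, 3.0, 3.0), m=1.0, area=1e-4, r_eff=1.0):
    """LOS DC gain of the wide-FOV Lambertian luminaire for an upward-facing PD
    (phi == psi when both normals are vertical); same model as Section 2."""
    rx = np.asarray(rx, float)
    v = np.asarray(lum) - rx
    d = np.linalg.norm(v)
    cos_ang = v[2] / d                      # cos(phi) = cos(psi) = height gap / d
    return (m + 1) / (2 * np.pi) * cos_ang ** m * area / d ** 2 * r_eff * cos_ang

def ray_surface_position(rss, laser_pos, laser_dir, t_max=6.0, steps=6000):
    """Ray-surface positioning: march along the steered laser ray and return the
    point whose modeled RSS best matches the measured RSS (the isointensity
    surface of the Lambertian source)."""
    laser_dir = np.asarray(laser_dir, float)
    laser_dir = laser_dir / np.linalg.norm(laser_dir)
    ts = np.linspace(0.1, t_max, steps)
    points = np.asarray(laser_pos, float) + ts[:, None] * laser_dir
    model = np.array([los_gain(p) for p in points])
    return points[np.argmin(np.abs(model - rss))]

# Device at (2.0, 1.5, 1.0); the ceiling laser at (3, 3, 3) is steered onto it,
# so the laser angles define the ray; the Lambertian RSS pins the range.
device = np.array([2.0, 1.5, 1.0])
laser_pos = np.array([3.0, 3.0, 3.0])
laser_dir = device - laser_pos
print(ray_surface_position(los_gain(device), laser_pos, laser_dir))  # ~ device
```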

    5. FUTURE APPLICATIONS AND EMERGING TECHNIQUES

    Commercial realization of first-generation indoor positioning systems is becoming prevalent. This class of positioning encompasses lower accuracies and easy deployment; an example of this level of positioning accuracy is knowing whether or not a device is in a room. However, future applications may warrant the additional cost of peripheral sensors and higher accuracies. As discussed in Section 4, 3D VLP using simple light positioning schemes is not feasible. Some interesting new applications that would benefit from high-precision 3D positioning include the following.

    Telemedicine. Telemedicine encompasses the realm of remote medicine. The optimistic goal would be scalpel tracking in 3D, but that requires finer resolution than current VLP research provides. Even simpler tasks, like remotely operating a stethoscope, would need 3D centimeter accuracy.

    Performance analytics. Performance analytics outdoors with GPS has allowed athletes to track and optimize their training. This could translate to analytics in an indoor arena.

    Location-based services. With improved accuracies, LBSs can move from simple advertising and prompts to secure services. Services like payments and military communication could be restricted to users at specific locations.

    Autonomous navigation. Fully autonomous robots would benefit from being able to navigate through a building. Robots, both floor-level and free-space, e.g., drones, could automate warehouse delivery, packing, and inventorying.

    Wireless communications. Future-generation wireless communications would benefit from dense beam-formed deployment. Steerable RF and VLC signals could service augmented reality/virtual reality (AR/VR) headsets moving through space. With enough communication speed, AR/VR headsets could offload processing to the network.

    These are all very interesting future applications, and there is no doubt potential for applications beyond these. From this review, we see two emerging 3D VLP techniques that could tackle these new applications. The two technologies we see as having the largest impact are the ray–surface work[11] and the differential gain work[28]. Interestingly, these two technologies both employ unique techniques combining RSS and AOA into one positioning scheme. More importantly, both techniques are able to position using only one luminaire. This makes these solutions highly adaptable to any space: from small spaces serviced by one luminaire to areas closer to the ceiling where fewer transmitters are within LOS. Both technologies are also relatively low-cost and provide centimeter-scale accuracy.

    There are shortcomings, though. The tilted receiver work relies on three receivers, and, if one is blocked, positioning is sacrificed. Ray–surface positioning suffers from a similar scenario: if the laser source is blocked, the position cannot be resolved. However, redundancy can be implemented for both with extra lasers and extra PDs. Unlike extra diodes at the receiver, though, extra lasers in the room could service multiple devices. This becomes a cost and complexity tradeoff: adding complexity in the ceiling is less expensive than at the receiver because a laser can be reused across multiple devices. Thus, even though both techniques seem poised for 3D positioning, ray–surface edges out differential gain due to laser reuse across multiple devices.

    6. CONCLUSION

    Indoor positioning is an exciting goal for which to apply innovation. Much of the excitement lies in there being no clear winner, as the respective technologies all exhibit some shortcoming, but also in the potential to kickstart and impact many new applications. The biggest challenge for light-based solutions is a result of an inherent characteristic of light: LOS is required. Pertaining to 3D positioning, schemes that require access to more than one luminaire to estimate position are less desirable. This is because, realistically in 3D, given the current lighting configuration and the FOV of luminaires and receivers, LOS from a device to more than one luminaire is unlikely. Thus, the ideal VLP solution will use only one light, which any lit room provides, with more lights providing better results rather than being required. Using one light, however, does require additional peripherals to augment the light signal. Ray–surface positioning is one such technique that can bridge this gap.

    [10] E. W. Lam, T. D. C. Little. Proceedings of the 4th ACM MobiHoc Workshop on Experiences with the Design and Implementation of Smart Objects (2018).

    [11] E. W. Lam, T. D. C. Little. Proceedings of the 2018 ICC Workshop—OWC (2018).

    [19] Y. S. Kuo, P. Pannuto, K. J. Hsiao, P. Dutta. Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, MobiCom '14, 447 (2014).

    [20] Y. S. Eroglu, I. Guvenc, N. Pala, M. Yuksel. Proceedings of the IEEE 16th Annual Wireless and Microwave Technology Conference (WAMICON), 15 (2015).

    [22] Z. Vatansever, M. Brandt-Pearce, C. L. Brown. Proceedings of the 51st Asilomar Conference on Signals, Systems, and Computers, 903 (2017).

    [23] B. Lin, X. Tang, Y. Li, M. Zhang, C. Lin, Z. Ghassemlooy, Y. Wei, Y. Wu, H. Li. Proceedings of the 16th International Conference on Optical Communications and Networks (ICOCN), 1 (2017).

    [24] R. Zhang, W. D. Zhong, K. Qian, D. Wu. IEEE Access, 5, 6087 (2017).

    [25] L. Li, P. Hu, C. Peng, G. Shen, F. Zhao. Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14), 331 (2014).

    [32] A. Naz, N. U. Hassan, M. A. Pasha, H. Asif, T. M. Jadoon, C. Yuen. Proceedings of the IEEE 4th World Forum on Internet of Things (WF-IoT), 682 (2018).

    [33] B. Zhu, J. Cheng, Y. Wang, J. Yan, J. Wang. IEEE J. Sel. Areas Commun., 36, 822 (2018).

    Paper Information

    Category: Fiber optics and optical communications

    Received: Oct. 2, 2018

    Accepted: Dec. 27, 2018

    Published Online: Mar. 8, 2019

    The Author Email: E. W. Lam (emilylam@bu.edu), T. D. C. Little (tdcl@bu.edu)

    DOI:10.3788/COL201917.030604
