What are the roles of artificial intelligence and machine learning in GNSS positioning?

For decades, artificial intelligence and machine learning have advanced at a rapid pace. Today, artificial intelligence and machine learning work behind the scenes in many parts of everyday life: social media, shopping recommendations, email spam detection, speech recognition, self-driving cars, UAVs, and so on.

Artificial intelligence is the simulation of human intelligence: systems programmed to reason like humans and mimic our actions to achieve a specific goal. In our own field, machine learning has changed how we solve navigation problems and will take on a significant role in advancing PNT technologies in the future.

LI-TA HSU, HONG KONG POLYTECHNIC UNIVERSITY

Q: Can machine learning replace conventional GNSS positioning techniques?

Actually, it makes no sense to use machine learning (ML) when the exact physical and mathematical models of GNSS positioning are known, and when applying ML over any appreciable area would require an impractically large effort to collect data and train a network to estimate receiver locations. We, human beings, designed the satellite navigation systems based on the laws of physics we discovered. For example, we use Kepler's laws to model the position of satellites in orbit. We use the spread-spectrum technique to design the satellite signal, allowing us to acquire very weak signals transmitted from medium Earth orbit. We understand the Doppler effect and design tracking loops to track the signal and decode the navigation message. We finally use trilateration to model the positioning and least squares to estimate the location of the receiver. Through the efforts of GNSS scientists and engineers over the past several decades, GNSS can now achieve centimeter-level positioning. The problem is: if everything is so perfect, why don't we have perfect GNSS positioning?

The answer, for me as an ML specialist, is that the assumptions made are not always valid in all contexts and applications! In trilateration, we assume the satellite signal is always received via a direct line-of-sight (LOS) path. However, different layers of the atmosphere refract and delay the signal. Luckily, remote-sensing scientists have studied the troposphere and ionosphere and come up with sophisticated models to mitigate the ranging error caused by transmission delay. But the multipath effects and non-line-of-sight (NLOS) receptions caused by buildings and obstacles on the ground are much harder to deal with because of their high nonlinearity and complexity.
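To make the trilateration-plus-least-squares step above concrete, here is a minimal sketch of an iterative least-squares position fix from pseudoranges. It assumes idealized, error-free measurements and synthetic satellite coordinates; it illustrates the estimation step rather than a production solver.

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=10):
    """Estimate receiver ECEF position and clock bias by iterative least squares.

    sat_pos      : (N, 3) satellite ECEF positions [m]
    pseudoranges : (N,)   measured pseudoranges [m]
    Returns [x, y, z, clock_bias] with the clock bias expressed in meters.
    """
    x = np.zeros(4)  # start from the Earth's center with zero clock bias
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)      # geometric ranges
        residuals = pseudoranges - (ranges + x[3])
        # Design matrix: negative unit line-of-sight vectors plus a clock column
        H = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-4:
            break
    return x

# Synthetic check: four satellites above the horizon, a known position and clock bias
truth = np.array([-2.4e6, 5.4e6, 2.4e6, 30.0])
sats = np.array([[-10.0e6, 22.4e6, 10.0e6],
                 [  5.0e6, 25.0e6,  6.0e6],
                 [-18.0e6, 15.0e6, 10.0e6],
                 [ -8.0e6, 10.0e6, 23.0e6]])
pr = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
print(solve_position(sats, pr))   # recovers the true position and clock bias
```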

Q: What are the challenges of GNSS and how can machine learning help with them?

GNSS performs very differently in different contexts. Context means both what and where: for example, a pedestrian walking in an urban canyon, or a pedestrian sitting in a car driving on a highway. The notorious multipath and NLOS effects play the major roles in degrading GNSS receiver performance across these contexts. If we follow the same logic as the ionospheric research to deal with the multipath effect, we need to study 3D building models, since buildings are the main cause of the reflections. Drawing on our previous research, the right side of Figure 1 was simulated using an LOD1 building model and a single-reflection ray-tracing algorithm. It reveals that the positioning error caused by multipath and NLOS is highly site-dependent. In other words, the nonlinearity and complexity of multipath and NLOS are very high.
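As a much-simplified illustration of how building geometry determines whether a signal arrives LOS or NLOS, the sketch below flags a satellite as likely blocked when its elevation falls below the angle subtended by a single building. The heights and threshold logic are illustrative assumptions; the simulation behind Figure 1 uses full 3D city models and ray tracing, not this crude skymask test.

```python
import numpy as np

def is_likely_nlos(sat_elevation_deg, building_height_m, distance_to_building_m,
                   receiver_height_m=1.5):
    """Crude skymask test: the direct ray is blocked when the satellite's
    elevation is below the elevation of the building's rooftop edge."""
    blocking_elevation = np.degrees(
        np.arctan2(building_height_m - receiver_height_m, distance_to_building_m))
    return sat_elevation_deg < blocking_elevation

# A 40 m building 15 m away blocks satellites below roughly 69 deg elevation
print(is_likely_nlos(sat_elevation_deg=30.0,
                     building_height_m=40.0,
                     distance_to_building_m=15.0))   # True -> likely NLOS
```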

Generally speaking, ML derives a model based on data. What exactly does ML do best?

Phenomena we simply do not know how to model by explicit laws of physics/math, for example, contexts and semantics.

Phenomena with high complexity, time variance and nonlinearity.

Looking at the challenges of GNSS multipath and the potential of ML, it becomes straightforward to apply artificial intelligence to mitigate multipath and NLOS. One mainstream idea is to use ML to train models that classify LOS, multipath, and NLOS measurements. This idea is illustrated in Figure 2. Three steps are required: data labeling, classifier training, and classifier evaluation. In fact, each step has its own challenges.
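The sketch below shows the shape of that three-step pipeline, assuming the measurements have already been labeled (for example, by ray tracing against a 3D city model). The placeholder features, the random data, and the random-forest classifier are illustrative choices, not the exact features or models used in the cited studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Step 1: labeled data. Columns stand in for features such as C/N0,
# elevation angle, and pseudorange residual; labels are 0 = LOS,
# 1 = multipath, 2 = NLOS. Random placeholders are used here.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
y = rng.integers(0, 3, size=5000)

# Step 2: train the classifier on one part of the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Step 3: evaluate on measurements the classifier has never seen
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["LOS", "multipath", "NLOS"]))
```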

Are we confident in our labeling?

In our work, we use 3D city models and ray-tracing simulation to label the measurements received from the GNSS receiver. The labels may not be 100% correct, since the 3D models cannot fully represent the real world: trees and dynamic objects (vehicles and pedestrians) are not included. In addition, multiply reflected signals are very hard to trace, and the 3D models themselves may contain errors.

What are the classes and features?

For the classes, popular selections are the presence (binary) of multipath or NLOS and their associated pseudorange errors. The features are selected from the variables affected by multipath, including carrier-to-noise ratio, pseudorange residual, DOP, etc. If we can access a level deeper, at the correlator, the shapes of the code and carrier correlator outputs are also excellent features. Our study compares different levels of features (correlator, RINEX, and NMEA) for the GNSS classifier and reveals that the rawer the feature, the better the classification accuracy. Finally, exploratory data analysis methods, such as principal component analysis, can help select the features that are most representative of the classes.
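Below is a minimal sketch of using principal component analysis for that kind of exploratory feature screening: standardize the features, then see how much variance each component explains and which feature dominates the leading components. The feature names and random data are illustrative placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder feature matrix; the column names are illustrative only
features = ["cn0", "pseudorange_residual", "elevation", "correlator_shape"]
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, len(features)))

# Standardize, then inspect explained variance and dominant loadings
pca = PCA().fit(StandardScaler().fit_transform(X))
print("explained variance ratio:", pca.explained_variance_ratio_)
for i, component in enumerate(pca.components_[:2]):
    dominant = features[int(np.argmax(np.abs(component)))]
    print(f"PC{i + 1} is dominated by '{dominant}'")
```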

Are we confident that the data we used to train the classifier are representative enough for the general application cases?

Overfitting has always been a challenge for ML. Multipath and NLOS effects differ greatly from city to city. For example, the architecture in Europe and Asia is very different, producing different multipath effects. Classifiers trained using data in Hong Kong do not necessarily perform well in London. How to categorize cities or urban areas in terms of their effects on GNSS multipath and NLOS is still an open question.
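One way to expose this kind of city-to-city overfitting is simply to train on data from one city and evaluate on another. The sketch below does this with synthetic data whose distribution shifts between the two "cities"; the shift, features, labels, and classifier are all illustrative assumptions, not real Hong Kong or London measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def synthetic_city(shift, n=4000):
    """Placeholder for one city's measurements; both the feature distribution
    and the decision boundary shift with the local architecture."""
    X = rng.normal(loc=shift, size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.5 * shift).astype(int)   # 1 = NLOS, 0 = LOS
    return X, y

X_hk, y_hk = synthetic_city(shift=0.0)     # stands in for Hong Kong data
X_ldn, y_ldn = synthetic_city(shift=1.5)   # stands in for London data

X_tr, X_te, y_tr, y_te = train_test_split(X_hk, y_hk, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("same-city accuracy :", accuracy_score(y_te, clf.predict(X_te)))
print("cross-city accuracy:", accuracy_score(y_ldn, clf.predict(X_ldn)))
```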

Q: What are the challenges of integrated navigation systems and how can machine learning help with them?

Seamless positioning has always been the ultimate goal. However, each sensor performs differently in different areas; Table 1 gives a rough picture. Inertial sensors seem to perform stably in most areas, but a MEMS INS suffers from drift and is strongly affected by random noise caused by temperature variations. Naturally, integrated navigation is a solution. Sensor integration, in fact, should be considered over both the long term and the short term.

Long-term Sensor Selection

In the long term, more sensors are generally available for positioning than are needed. The question is how to determine the best subset of sensors to integrate. Consider an example of seamless positioning for a city dweller travelling from home to the office:

Walking on a street to the subway station (GNSS+IMU)

Walking in a subway station (Wi-Fi/BLE+IMU)

Traveling on a subway (IMU)

Walking in an urban area to the office (VPS+GNSS+Wi-Fi/BLE+IMU)

This example clearly shows that seamless positioning must integrate different sensors. The selection of the sensors can be done heuristically or by maximizing the observability of the sensors. If the sensors are selected heuristically, we must be able to recognize the context in which the system is operating. This is one of the best entry points for ML, because classifying scenarios or contexts is exactly what ML does best. A recently published journal paper demonstrates how to detect different contexts using smartphone sensors for context-adaptive navigation (Gao and Groves 2020). Smartphone sensor data are fed to models trained by supervised ML to determine not only the environment but also the behavior (such as the transportation mode: static, walking, riding in a car or on a subway, etc.).

According to their results, the state-of-the-art detection algorithm achieves over 95% accuracy for pedestrians in indoor, intermediate, and outdoor scenarios. This finding encourages the use of ML to intelligently select the right navigation systems for an integrated navigation system in different areas. The same methodology can be extended to vehicular applications with suitable modifications to the features, classes, and machine learning algorithms.
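As a deliberately simplified contrast to that supervised approach, the toy detector below maps two smartphone-observable quantities (mean GNSS C/N0 and the number of tracked satellites) to an environment class with hand-picked thresholds. The thresholds and feature choice are illustrative assumptions only; Gao and Groves learn this mapping from labeled data over a much richer feature set.

```python
def detect_environment(mean_cn0_dbhz, num_sats_tracked):
    """Toy rule-based environment detector (illustrative thresholds only)."""
    if num_sats_tracked >= 8 and mean_cn0_dbhz > 35:
        return "outdoor"
    if num_sats_tracked >= 4 and mean_cn0_dbhz > 25:
        return "intermediate"   # urban canyon, under foliage, near windows
    return "indoor"

print(detect_environment(mean_cn0_dbhz=38, num_sats_tracked=10))  # outdoor
print(detect_environment(mean_cn0_dbhz=22, num_sats_tracked=3))   # indoor
```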

Short-term Sensor Weighting

Technically speaking, an optimal integrated solution can be obtained if the uncertainty of each sensor can be described optimally. Presumably, a sensor's uncertainty remains unchanged within a given environment. As a result, most sensors' uncertainties are carefully calibrated before use in an integrated system.

However, the problem is that the environment can change rapidly within a short period of time, for example, a car driving through an urban area with several viaducts, or through otherwise open sky under a canopy of foliage. These scenarios greatly affect GNSS performance, yet the affected periods are too short to justify excluding GNSS from the subset of sensors used. The best defense against these unexpected, transient effects is to de-weight the affected sensors in the system.

Because of the complexity of these effects, adaptive tuning of the uncertainty based on ML is becoming popular. Our team demonstrated this potential with an experiment on loosely coupled GNSS/INS integration, conducted in an urban canyon with a commercial GNSS receiver and a MEMS INS. Different ML algorithms were used to classify the GNSS positioning errors into four classes: healthy, slightly shifted, inaccurate, and dangerous, represented as 1 to 4 at the bottom of Figure 4. The top and bottom of the figure show the error of the commercial GNSS solution and the classes predicted by the different ML algorithms. It clearly shows that ML can do a very good job of predicting the class of the GNSS solution, enabling the integration to allocate the proper weighting to GNSS. Table 2 shows the improvement made by the ML-aided integration system.

This is just one example that preliminarily shows the potential of ML for estimating and predicting sensor uncertainty. The methodology can also be applied to other sensor integrations, such as Wi-Fi/BLE/IMU integration. The challenge is that the trained classifier may be too specific to a certain area because of overfitting to the data. This remains an open research question in the field.
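To show how such a predicted quality class could feed an integration filter, the sketch below maps the four classes to a GNSS measurement noise level and performs one Kalman measurement update of a position-only state. The sigma values and the simple position-only model are assumptions for illustration, not the weighting scheme used in the experiment above.

```python
import numpy as np

# Illustrative mapping from the ML-predicted GNSS quality class to the
# measurement noise standard deviation used by a loosely coupled filter.
CLASS_SIGMA_M = {1: 1.0,     # healthy
                 2: 5.0,     # slightly shifted
                 3: 20.0,    # inaccurate
                 4: 1000.0}  # dangerous: effectively de-weighted

def gnss_update(x, P, z_pos, predicted_class):
    """One Kalman measurement update of a 3-state position with adaptive R."""
    sigma = CLASS_SIGMA_M[predicted_class]
    H = np.eye(3)                          # GNSS observes position directly
    R = (sigma ** 2) * np.eye(3)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z_pos - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.zeros(3), 25.0 * np.eye(3)
# A "dangerous" fix barely moves the state; a "healthy" one is trusted heavily
x, P = gnss_update(x, P, z_pos=np.array([2.0, -1.0, 0.5]), predicted_class=4)
print(x)
x, P = gnss_update(x, P, z_pos=np.array([2.0, -1.0, 0.5]), predicted_class=1)
print(x)
```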

Q: Machine Learning or Deep Learning for Navigation Systems?

Based on research in object recognition in computer science, deep learning (DL) is currently the mainstream method because it generally outperforms ML when two conditions are fulfilled: abundant data and sufficient computation. A DL model is completely data-driven, whereas ML trains models to fit assumed (known) mathematical models. A rule of thumb for choosing between ML and DL is the availability of data. If extensive and comprehensive data are available, DL achieves excellent performance thanks to its superiority in data fitting; in other words, DL can automatically discover the features that affect the classes. However, a model trained by ML is much more interpretable than one trained by DL, which behaves like a black box. In addition, the nodes and convolutional layers in DL are used to extract features, and the number of layers and nodes is still very hard to determine, so trial-and-error approaches are widely adopted. These are the major challenges in DL.

If a DL-trained neural network is to be designed properly for an integrated navigation system, it should address both the long-term and short-term challenges. Figure 5 shows this idea: some hidden layers would be designed to predict the environments (or contexts), while others would predict the sensor uncertainties. The idea is straightforward, but the challenges remain (a minimal sketch of such a two-headed network follows the list below):

Are we confident that the data we used to train the classifier are representative enough for the general application cases?

What are the classes?

What are the features?

How many layers and how many nodes should be used?
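Here is a minimal sketch of the Figure 5 idea under assumed sizes: a shared trunk feeding one head that classifies the context and another that regresses a positive uncertainty for each sensor. The input dimension, layer widths, class count, and sensor count are arbitrary placeholders; they illustrate, rather than answer, the question of how many layers and nodes to use.

```python
import torch
import torch.nn as nn

class ContextUncertaintyNet(nn.Module):
    """Shared trunk with two heads: context class logits and per-sensor sigmas.

    Sizes are assumptions for illustration: 32 input features, 4 context
    classes, and 3 sensors whose measurement uncertainties are predicted.
    """
    def __init__(self, n_features=32, n_contexts=4, n_sensors=3):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                   nn.Linear(64, 64), nn.ReLU())
        self.context_head = nn.Linear(64, n_contexts)          # classification logits
        self.sigma_head = nn.Sequential(nn.Linear(64, n_sensors),
                                        nn.Softplus())         # positive uncertainties

    def forward(self, x):
        h = self.trunk(x)
        return self.context_head(h), self.sigma_head(h)

net = ContextUncertaintyNet()
logits, sigmas = net(torch.randn(8, 32))       # a batch of 8 feature vectors
print(logits.shape, sigmas.shape)              # (8, 4) and (8, 3)
```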

Q: How does machine learning affect the field of navigation?

ML will accelerate the development of seamless positioning. With ML present in the navigation field, a perfect INS is no longer the only solution. These AI technologies facilitate the selection of the appropriate sensors or raw measurements (with appropriate trust) against complex navigation challenges. Such on-the-fly selection of sensors (well known as plug-and-play) affects the integration algorithm. Integration R&D engineers in navigation have long worked with the Kalman filter and its variants; however, the rigid structure of the Kalman filter makes it hard to accommodate plug-and-play sensors. Graph optimization, which is widely used in the robotics field, could be a very strong candidate for integrating sensors for navigation purposes.
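The toy example below hints at why graph optimization suits plug-and-play: the navigation problem is written as a set of residual "factors," and factors from a sensor are simply added to (or left out of) the problem at the epochs where that sensor is available. It is a hand-rolled 1-D sketch using scipy, with made-up odometry and absolute (GNSS-like) measurements, not one of the factor-graph frameworks cited in the references.

```python
import numpy as np
from scipy.optimize import least_squares

# States x0..x3 along a 1-D track. Odometry factors link consecutive states;
# absolute (GNSS-like) factors are "plugged in" only where GNSS is trusted.
odometry = [(0, 1, 1.0, 0.1), (1, 2, 1.1, 0.1), (2, 3, 0.9, 0.1)]  # (i, j, dz, sigma)
absolute = [(0, 0.0, 0.5), (3, 3.2, 0.5)]                          # (i, z, sigma)

def residuals(x):
    r = [(x[j] - x[i] - dz) / s for i, j, dz, s in odometry]
    r += [(x[i] - z) / s for i, z, s in absolute]
    return np.array(r)

sol = least_squares(residuals, x0=np.zeros(4))
print(sol.x)   # smoothed trajectory using whichever factors are present
```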

Beyond GNSS and the integrated navigation systems mentioned above, the recently developed visual positioning system (VPS) by Google could replace visual corner-point detection with semantic information detected by ML. Looking at how we navigated before GNSS, we compared visual landmarks with our memory (a database) to infer where we are and where we are heading. ML can segment and classify images taken by a camera into different classes, including building, foliage, road, curb, etc., and compare the distribution of this semantic information with the distributions stored in a database on a cloud server. If they match, the position and orientation tag associated with the database entry can be taken as the user's location.
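A toy version of that matching step is sketched below: each database entry stores a pose tag and the fraction of image pixels per semantic class, and the query image's class distribution is matched against them with a simple squared-difference score. The pose tags, class set, and matching metric are all illustrative assumptions, not how Google's VPS actually works.

```python
import numpy as np

# Fraction of image pixels per semantic class: [building, foliage, road, sky],
# keyed by an illustrative (latitude, longitude, heading_deg) pose tag
database = {
    (22.3041, 114.1796,  90.0): np.array([0.55, 0.05, 0.25, 0.15]),
    (22.3050, 114.1802, 180.0): np.array([0.30, 0.30, 0.20, 0.20]),
    (22.3062, 114.1788,   0.0): np.array([0.10, 0.60, 0.15, 0.15]),
}

def locate(query_hist):
    """Return the pose tag whose semantic distribution best matches the query."""
    return min(database, key=lambda tag: np.sum((database[tag] - query_hist) ** 2))

query = np.array([0.50, 0.08, 0.27, 0.15])   # segmentation output of the camera image
print(locate(query))                          # -> the first entry's pose tag
```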

AI technologies are coming, and they will influence navigation research and development. In my opinion, the best we can do is to mobilize AI to tackle the challenges for which we currently lack solutions. It is highly probable that future advances in navigation technology will depend greatly on ML's development and achievements in the field.

References

(1) Groves PD, Challenges of Integrated Navigation, ION GNSS+ 2018, Miami, Florida, pp. 3237-3264.

(2) Gao H., Groves P.D. (2020) Improving environment detection by behavior association for context-adaptive navigation. NAVIGATION, 67:43-60. https://doi.org/10.1002/navi.349

(3) Sun R., Hsu L.T., Xue D., Zhang G., Ochieng W.Y. (2019) GPS Signal Reception Classification Using Adaptive Neuro-Fuzzy Inference System, Journal of Navigation, 72(3):685-701.

(4) Hsu L.T. GNSS Multipath Detection Using a Machine Learning Approach, IEEE ITSC 2017, Yokohama, Japan.

(5) Yozevitch R., Ben-Moshe B. (2015) A robust shadow matching algorithm for GNSS positioning. NAVIGATION, 62(2):95-109.

(6) Chen P.Y., Chen H., Tsai M.H., Kuo H.K., Tsai Y.M., Chiou T.Y., Jau P.H. Performance of Machine Learning Models in Determining the GNSS Position Usage for a Loosely Coupled GNSS/IMU System, ION GNSS+ 2020, virtually, September 21-25, 2020.

(7) Suzuki T., Nakano Y., Amano Y. NLOS Multipath Detection by Using Machine Learning in Urban Environments, ION GNSS+ 2017, Portland, Oregon, pp. 3958-3967.

(8) Xu B., Jia Q., Luo Y., Hsu L.T. (2019) Intelligent GPS L1 LOS/Multipath/NLOS Classifiers Based on Correlator-, RINEX- and NMEA-Level Measurements, Remote Sensing 11(16):1851.

(9) Chiu H.P., Zhou X., Carlone L., Dellaert F., Samarasekera S., and Kumar R., Constrained Optimal Selection for Multi-Sensor Robot Navigation Using Plug-and-Play Factor Graphs, IEEE ICRA 2014, Hong Kong, China.

(10) Zhang G., Hsu L.T. (2018) Intelligent GNSS/INS Integrated Navigation System for a Commercial UAV Flight Control System, Aerospace Science and Technology, 80:368-380.

(11) Kumar R., Samarasekera S., Chiu H.P., Trinh N., Dellaert F., Williams S., Kaess M., Leonard J., Plug-and-Play Navigation Algorithms Using Factor Graphs, Joint Navigation Conference (JNC), 2012.
