International Conference on Indoor Positioning and Indoor Navigation


28-31 October 2013, France

Table of Contents

Position Estimation Using a Low-cost Inertial Measurement Unit with Help of Kalman Filtering and Fastening-Pattern Recognition, T. Chobtrong [et al.] .... 1
Reference Navigation System Based on Wi-Fi Hotspots for Integration with Low-Cost Inertial Navigation System, M. Kamil [et al.] .... 3
Survey of accuracy improvement approaches for tightly coupled ToA/IMU personal indoor navigation system, V. Maximov [et al.] .... 7
Enhancement of the automatic 3D Calibration for a Multi-Sensor System, E. Koeppe [et al.] .... 11
A gait recognition algorithm for pedestrian navigation system using inertial sensors, W. Liu [et al.] .... 14
An UWB based indoor compass for accurate heading estimation in buildings, A. Norrdine [et al.] .... 18
Accuracy of an indoor IR positioning system with least squares and maximum likelihood approaches, F. Domingo-Perez [et al.] .... 22
An indoor navigation approach for low-cost devices, A. Masiero [et al.] .... 24
ARIANNA: a two-stage autonomous localisation and tracking system, E. De Marinis [et al.] .... 26
Source localisation by sensor array processing using a sparse signal representation, J. Lardies [et al.] .... 30
OFDM Pulse Design with Low PAPR for Ultrasonic Location and Positioning Systems, D. Albuquerque [et al.] .... 34
Dynamic Collection based Smoothed Radiomap Generation System, J. Kim [et al.] .... 36
Pedestrian Activity Classification to Improve Human Tracking and Localization, M. Bocksch [et al.] .... 39
Accurate Smartphone Indoor Positioning Using Non-Invasive Audio, S. Lopes [et al.] .... 44
Locally Optimal Confidence Ball for a Gaussian Mixture Random Variable, P. Sendorek [et al.] .... 48
Evaluating robustness and accuracy of the Ultra-wideband Technology-based Localization Platform under NLOS conditions, P. Karbownik [et al.] .... 53
Robust Step Occurrence and Length Estimation Algorithm for Smartphone-Based Pedestrian Dead Reckoning, W. Kang [et al.] .... 55
Context Aware Adaptive Indoor Localization using Particle Filter, Y. Zhao [et al.] .... 60
Verification of ESPAR Antennas Performance in the Simple and Calibration Free Localization System, M. Rzymowski [et al.] .... 64
Optimal RFID Beacons Configuration for Accurate Location Techniques within a Corridor Environment, E. Colin [et al.] .... 68
A Cooperative NLoS Identification and Positioning Approach in Wireless Networks, Z. Xiong [et al.] .... 73
Visual Landmark Based Positioning, H. Chao [et al.] .... 79
RFID System with Tags Positioning based on Phase Measurements, I. Shirokov .... 84
Broadcasting Alert Messages Inside the Building: Challenges & Opportunities, F. Spies [et al.] .... 91
For a Better Characterization of Wi-Fi-based Indoor Positioning Systems, F. Lassabe [et al.] .... 95
Locating and classifying of objects with a compact ultrasonic 3D sensor, W. Christian [et al.] .... 99
Location Estimation Algorithm for the High Accuracy LPS LOSNUS, M. Syafrudin [et al.] .... 103
Infrastructure-less TDOF/AOA-based Indoor Positioning with Radio Waves, C. Aydogdu [et al.] .... 105
Sound Based Indoor Localization - Practical Implementation Considerations, J. Moutinho [et al.] .... 109
Proposed Methodology for Labeling Topological Maps to Represent Rich Semantic Information for Vision Impaired Navigation, A. Jayakody .... 113
Improvements and Evaluation of the Indoor Laser Localization System GaLocate, J. Kokert [et al.] .... 115
Observability Properties of Mirror-Based IMU-Camera Calibration, G. Panahandeh [et al.] .... 117
Processing speed test of Stereoscopic vSLAM in an Indoors environment, J. Delgado Vargas [et al.] .... 119
Enhanced View-based Navigation for Human Navigation by Mobile Robots Using Front and Rear Vision Sensors, M. Tanaka [et al.] .... 123
Generation of reference data for indoor navigation by INS and laser scanner, F. Keller [et al.] .... 127
Implementation of OGC WFS floor plan data for enhancing accuracy and reliability of Wi-Fi fingerprinting positioning methods, D. Zinkiewicz [et al.] .... 129
On-board navigation system for smartphones, M. Togneri [et al.] .... 133
A Gyroscope Based Accurate Pedometer Algorithm, S. Jayalath [et al.] .... 138
Bluetooth Embedded Inertial Measurement Unit for Real-Time Data Collection, R. Chandrasiri [et al.] .... 142
WiFi localisation of non-cooperative devices, C. Beder [et al.] .... 146
Creation of Image Database with Synchronized IMU Data for the Purpose of Way Finding for Vision Impaired People, C. Rathnayake [et al.] .... 150
Relevance and Interpretation of the Cramer-Rao Lower Bound for Indoor Localisation Algorithms, M. Kyas [et al.] .... 152
Efficient and adaptive Generic object detection method for indoor navigation, N. Rajakaruna [et al.] .... 156
Hidden Markov Based Hand Gesture Classification and Recognition Using an Adaptive Threshold Model, J. Mechanicus [et al.] .... 160
Pedestrian Detection and Positioning System by a New Multi-Beam Passive Infrared Sensor, R. Canals [et al.] .... 169
Study of rotary-laser transmitter shafting vibration for workspace measurement positioning system, Z. Liu [et al.] .... 174
Efficient Architecture for Ultrasonic Array Processing based on Encoding Techniques, R. García [et al.] .... 178
Using Double-peak Gaussian Model to Generate Wi-Fi Fingerprinting Database for Indoor Positioning, L. Chen [et al.] .... 182
Indoor Positioning using Ultrasonic Waves with CSS and FSK Modulation for Narrow Band Channel, A. Ens [et al.] .... 188
Improving Heading Accuracy in Smartphone-based PDR Systems using Multi-Pedestrian Sensor Fusion, M. Jalal Abadi .... 190
A New Indoor Robot Navigation System Using RFID Technology, M. Fujimoto [et al.] .... 194
Accurate positioning in underground tunnels using Software-Defined-Radio, F. Pereira [et al.] .... 196
Positioning in GPS Challenged Locations: The NextNav's Metropolitan Beacon System, S. Meiyappan [et al.] .... 202
Indoor Positioning using Wi-Fi -- How Well Is the Problem Understood?, M. Kjærgaard [et al.] .... 207
The workspace Measuring and Positioning System (wMPS): an alternative to iGPS, B. Xue [et al.] .... 211
Key Requirements for Successful Deployment of Positioning Applications in Industrial Automation, L. Thrybom [et al.] .... 213
Texture-Based Algorithm to Separate UWB-Radar Echoes from People in Arbitrary Motion, T. Sakamoto [et al.] .... 217
Experimental Evaluation of UWB Real Time Positioning for Obstructed and NLOS Scenarios, K. Alqahtani [et al.] .... 221
Device-Free 3-Dimensional User Recognition utilizing passive RFID walls, B. Wagner [et al.] .... 225
First Theoretical Aspects of a Cm-accuracy GNSS-based Indoor Positioning System, Y. Lu [et al.] .... 229
2D-indoor localisation with GALILEO-like pseudolite signals, A. Monsaingeon [et al.] .... 234
Performance Comparison between Frequency-Division and Code-Division access methods in an ultrasonic LPS, F. Álvarez [et al.] .... 239
Fusion methods for IMU using neural networks for precision positioning, L. Tejmlova [et al.] .... 241
Stance Phase Detection using Hidden Markov Model in Various Motions, H. Ju [et al.] .... 243
Standing still with inertial navigation, J. Nilsson [et al.] .... 247
Single-channel versus multi-channel scanning in device-free indoor radio localization, P. Cassarà [et al.] .... 249
Indoor Positioning using Time of Flight Fingerprinting of Ultrasonic Signals, A. Dvir [et al.] .... 253
The Construction of an Indoor Floor Plan Using a Smartphone for Future Usage of Blind Indoor Navigation, A. Jayakody .... 257
Study aimed at advanced use of the indoor positioning infrastructure IMES, Y. Yutaka .... 261
Health Monitoring of WLAN Localization Infrastructure using Smartphone Inertial Sensors, R. Haider [et al.] .... 265
GPS Line-Of-Sight Fingerprinting for Enhancing Location Accuracy in Urban Areas, A. Uchiyama [et al.] .... 269
Utilizing cyber-physical systems to rapidly access and guard patient records, T. Czauski [et al.] .... 273
Evaluation of Indoor Pedestrian Navigation System on Smartphones using View-based Navigation, M. Nozawa [et al.] .... 275


- chapter 1 -

Signal Processing & Analysis

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

Position Estimation Using a Low-cost Inertial Measurement Unit with Help of Kalman Filtering and Fastening-Pattern Recognition

T. Chobtrong, M. Haid, M. Kamil and E. Günes
Competence Center of Applied Sensor System
Darmstadt University of Applied Sciences
Darmstadt, Germany
[email protected]

Abstract—To improve quality control in the automotive industry, an intelligent screwdriver is being developed in order to track the position of a bolt being fastened. In many situations, such as inside the car or in the engine compartment, it is not possible to track the bolt positions with a vision-based tracking system. To solve this problem, the screwdriver is integrated with an inertial measurement unit instead of a vision-based tracking system. As with common inertial navigation systems, the challenge of this tracking system is its inaccuracy caused by sensor drift. This paper presents a position tracking algorithm using a low-cost inertial measurement unit for this intelligent screwdriver. The algorithm is based on a Kalman filter supported by a fastening-pattern recognition algorithm based on a Hidden Markov Model.

Keywords—IMU; Indoor Navigation; Inertial Navigation; Kalman Filter; Hidden Markov

I. INTRODUCTION

To prevent problems like bolts being missed or left unfastened by a worker, and to improve the quality of automotive manufacturing, a system that can track the position of a bolt being fastened is required. Because the shape of a vehicle is complex and its main material is metal, vision-based and radio-based tracking systems are not suitable for tracking a tool-tip under these conditions. A vision-based tracking system needs a clear view of the tool-tip to track its position. In some cases, such as the interior of the car or the engine compartment, parts of the car or the body itself obstruct the camera, and the vision-based tracking system then loses the position of the tool-tip in that area. Radio-based tracking systems, in turn, suffer from inaccuracy and loss-of-contact problems caused by magnetic-field distortion, because there are many metallic objects in and around an automotive manufacturing line [1]. Another disadvantage of vision-based and radio-based tracking systems is that they require the installation of equipment and supporting infrastructure. They therefore demand a large investment and are difficult to adapt to new and changing conditions and processes, such as a new car model.

To develop a tool-tip tracking system for the INSCHRAV project, a low-cost inertial navigation system (INS) is applied to support this application, because of its contactless and referenceless properties as well as its low cost, low weight and compact design [2]. The intelligent screwdriver is integrated with a low-cost inertial measurement unit (IMU). However, the challenge of this tracking system is the accumulated error caused by measurement signals corrupted by stochastic noise [3].

Figure 1. Demonstration of the tool-tip tracking system using an inertial tracking system
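The fastening-pattern recognition named in the abstract can be illustrated with a small Viterbi decoder. The three-bolt model and every probability below are invented purely for illustration; the actual recognition module uses 11 hidden states and 10 observation states.

```python
# Minimal Viterbi decoding sketch for HMM-based fastening-pattern
# recognition. All probabilities are illustrative assumptions.

STATES = ["bolt1", "bolt2", "bolt3"]
START_P = {"bolt1": 0.8, "bolt2": 0.1, "bolt3": 0.1}
# A strict fastening order makes the "next bolt" transition dominant.
TRANS_P = {
    "bolt1": {"bolt1": 0.1, "bolt2": 0.8, "bolt3": 0.1},
    "bolt2": {"bolt1": 0.1, "bolt2": 0.1, "bolt3": 0.8},
    "bolt3": {"bolt1": 0.8, "bolt2": 0.1, "bolt3": 0.1},
}
# Observation symbols o1..o3 stand in for a (hypothetical) movement classifier.
EMIT_P = {
    "bolt1": {"o1": 0.7, "o2": 0.2, "o3": 0.1},
    "bolt2": {"o1": 0.2, "o2": 0.6, "o3": 0.2},
    "bolt3": {"o1": 0.1, "o2": 0.2, "o3": 0.7},
}

def viterbi(obs, states=STATES, start_p=START_P, trans_p=TRANS_P, emit_p=EMIT_P):
    """Return the most likely bolt sequence for a list of observations."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(V[-1], key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

print(viterbi(["o1", "o2", "o3"]))  # -> ['bolt1', 'bolt2', 'bolt3']
```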

This paper presents an overview of the INSCHRAV algorithm, which has been developed to improve the accuracy of the tool-tip tracking system using an INS. The algorithm estimates the position of a tool-tip with a complementary Kalman filter (CKF) supported by fastening-pattern recognition based on a Hidden Markov Model (HMM). In brief, the optimal position estimated by the CKF is refined using the observed position determined by the fastening-pattern recognition.

II. OVERVIEW OF THE INSCHRAV ALGORITHM

The INSCHRAV algorithm estimates the position of a tracked tool-tip using the measurement signals from the IMU500, a low-cost IMU designed and developed by the Competence Center of Applied Sensor System (CCASS). There are four main steps in the INSCHRAV algorithm: attitude estimation, gravity compensation, position estimation and fastening-position recognition. The attitude estimation is based on an extended Kalman filter (EKF), following the study by Madgwick et al. [4]. In brief, the current attitude of the screwdriver is estimated from the gyroscope signals and then corrected, in order, with tilt and heading estimates derived from the accelerometer and magnetometer signals. The measured acceleration signals, which contain the gravity vector, are then compensated using the attitude information to estimate the dynamic acceleration of the screwdriver.

Finally, the position estimation algorithm estimates the position of the tool-tip as in [5]. However, this estimate is additionally corrected by the fastening-pattern recognition. Because of strict manufacturing procedures, the pattern in which the bolts are fastened is stringently defined. The bolt position being fastened can therefore be recognized with a sequence recognition algorithm based on an HMM. This recognition module supports 11 hidden states (10 positions and a reset point) with 10 observation states, which are determined by a movement classification module.

III. OVERVIEW OF THE IMU500

The IMU500 is a low-cost inertial measurement unit (IMU) designed in-house especially for the INSCHRAV project. It is composed of two main sensors: a tri-axial accelerometer and magnetometer (STMicro LSM303DLHC) and a tri-axial gyroscope (STMicro L3GD20). The microcontroller of the IMU500 is an ARM Cortex-M4 from ST, which supports floating-point arithmetic.

Figure 2. IMU500, a low-cost inertial measurement unit

IV. EXPERIMENTS

The INSCHRAV algorithm's performance was tested through laboratory simulations at CCASS. A model of the intelligent screwdriver with the IMU500 attached was moved to fasten 10 bolts 10 times (the pitch between the bolts is 10 cm) on a model of the cylinder head of a 4-cylinder engine. The INSCHRAV algorithm was compiled and deployed to the IMU500. In these experiments, a reference signal provided by an infrared tracking system, a Lukotronic AS200, was used to evaluate the performance of the inertial tracking system.

V. RESULTS

As shown in Figure 3, the errors of the position estimation without the INSCHRAV algorithm (dashed line) increase continuously over time. The errors of the position estimation with the INSCHRAV algorithm, in contrast, are controllable and stable after the first recognized position (at time 1490 ms). Significantly, the INSCHRAV position estimation based on the CKF re-calculates its process covariance matrix and Kalman gain to improve the estimate with the observation signals from the fastening-position recognition. The position errors of the INSCHRAV algorithm therefore decrease rapidly after the bolt position has been recognized.

Figure 3. Absolute position error of the estimation (a) in the x-axis, (b) in the y-axis and (c) in the z-axis, with the INSCHRAV algorithm (solid line) and without it (dashed line)

VI. CONCLUSION AND FURTHER DEVELOPMENT

This position estimation using a low-cost IMU with the help of Kalman filtering and fastening-pattern recognition (the INSCHRAV algorithm) is able to track the position of a bolt being fastened with an accuracy of ±50 mm on each axis. Further work on this project is to optimize the initial parameters of the INSCHRAV algorithm in order to improve the tracking system's accuracy. Moreover, this tracking system will be integrated with the intelligent screwdriver, and the overall performance of the system will be tested.

REFERENCES

[1] D. Vissiere, A. Martin, and N. Petit, "Using distributed magnetometers to increase IMU-based velocity estimation into perturbed area," in Proc. 46th IEEE Conference on Decision and Control, pp. 4924-4931, 2007.
[2] M. Haid, "Improvement of the referenceless inertial object tracking for low-cost indoor navigation by Kalman filtering (Verbesserung der referenzlosen inertialen Objektverfolgung zur Low-Cost-Indoornavigation durch Anwendung der Kalman-Filterung)," Ph.D. dissertation, Universität Siegen, 2004.
[3] D. H. Titterton and J. L. Weston, "Strapdown Inertial Navigation Technology," The Institution of Electrical Engineers and The American Institute of Aeronautics and Astronautics, 2004.
[4] S. Madgwick, A. Harrison, and R. Vaidyanathan, "Estimation of IMU and MARG orientation using a gradient descent algorithm," in Proc. IEEE International Conference on Rehabilitation Robotics (ICORR), 2011.
[5] M. Haid, T. Chobtrong, E. Günes, M. Kamil, and M. Münter, "Improvement of inertial object tracking for low-cost indoor navigation with advanced algorithms," 16. GMA/ITG-Fachtagung Sensoren und Messsysteme 2012, Nuremberg, 2012.


Reference Navigation System Based on Wi-Fi Hotspots for Integration with Low-Cost Inertial Navigation System

Mustafa Kamil, Pierre Devaux, Markus Haid, Thitipun Chobtrong, Ersan Günes
Competence Center for Applied Sensor Systems
University of Applied Sciences Darmstadt
Darmstadt, Germany
[email protected]

Abstract—In recent years, low-cost inertial navigation has become a well-known solution for indoor object tracking in densely built-up areas or hybrid indoor-outdoor environments. As the hardware required for inertial navigation is tiny, lightweight and widely available on the market, MEMS motion sensors promise to enable various industrial, medical and entertainment applications at very low manufacturing cost. Nevertheless, the error characteristics these sensors have shown so far limit their applicability to simple tasks in smartphones and tablet PCs. To overcome this problem, recent research at CCASS has successfully developed a sensor fusion technique that supports an inertial navigation system (INS) with GPS as a source of reference navigation. With this concept, short reference-signal outages can be bridged reliably by the INS and the INS errors are kept to minimum values. Nevertheless, longer stays in indoor environments (production halls, storage areas, indoor parking, etc.) still result in unlimited growth of the navigation errors, as is commonly known for low-cost INS performance. In order to enable fields of application that require both outdoor and indoor navigation, an alternative reference navigation system must be found. For the best possible market penetration, this system has to be low-cost, able to penetrate walls, and require neither additional installation work nor changes to the building. As a technology that meets these requirements, the research aims at developing a localization method based on regular Wi-Fi hotspots. The present short paper covers the reference system development, from the raw hotspot information to the complete reference localization system, and the overall system concept including the INS integration. The presented system does not require knowledge of the hotspot positions, is technically based on Wi-Fi fingerprinting, and aims at applications in tracking and tracing in distribution logistics and industrial production.

Keywords: Inertial Navigation, Wi-Fi Fingerprinting, Sensor Fusion, Indoor Navigation, MEMS

I. INTRODUCTION

Extending position determination techniques for vehicles, objects and personnel towards applicability in roofed, inaccessible or densely built-up environments is a key technology for process optimization across a variety of industries. As an example, in the distribution logistics of automotive plants, the transport of finished vehicles to the various loading stations (e.g., train, ship, truck) or to the customer is of great importance for the manufacturer. The transportation of products has to be processed quickly and efficiently in order to avoid negative feedback on production and to guarantee the fastest possible delivery to the customer. In the current state of the art, it is not possible for the planners to get feedback on the work carried out in the distribution process. It is also unknown whether any sudden disturbances have occurred or what the requirements for each of the available resources are.

The key to filling this information gap would be accurate ID-related location data provided in real time for every vehicle on the ground. In combination with data on the tasks to be accomplished and the resources available, tracking the vehicles can add control possibilities to the planning and also provides a powerful tool for verifying the planner's success. For example, it can be determined at any time whether an object is transported correctly to the defined target, and additionally whether this was accomplished in the designated time. Losses and interchanges of vehicles can be indicated immediately, so quick and precise reactions can be initiated. Using location and job information in one integrated environment allows, for the first time, the creation of a synchronized information base for the planning and implementation of logistics, which can subsequently be used for optimizing the delivery procedure and for quick and accurate reactions to any unforeseen changes. The results are less loss of time, more efficient and lock-free use of resources (e.g. personnel, facilities, transport routes) and consequently higher productivity (cf. [1]).

II. INERTIAL NAVIGATION

The principle of inertial navigation is based on measuring object movements with the help of mass inertia under acceleration. For this, an orthogonal constellation of three acceleration and three angular rate sensors is needed. This assembly allows the determination of all accelerations and angular rates applied to any object moving in space, without requiring signals from the surrounding environment. The acceleration sensors capture translational movements in the three spatial directions; the angular rate sensors (gyroscopes) capture the rotational speeds about the three spatial axes. Using low-cost sensors allows a substantial reduction of system cost on the one hand, but on the other hand it is also connected with a loss of accuracy that accumulates over time (cf. [2]). The reason for this is mainly a random bias error in the signals of those sensors. When translational acceleration or angular rate signals are integrated, this error is amplified, resulting in a time-dependent growth of inaccuracy. Inertial object tracking based purely on integrated sensor signals is therefore not possible at present and requires further techniques to reduce this unwanted side effect (cf. [3]). In practice, an INS is often coupled with other localization systems, for example with a Global Positioning System (GPS) receiver that periodically provides absolute position data while the INS is used to interpolate the intermediate values. Furthermore, advanced signal processing algorithms can help to reduce the position and orientation errors over time. For that, it is usual to use estimation filters like the Kalman filter (cf. [4-5]) and to eliminate additional errors evoked by parasitic effects like g-acceleration and the Coriolis effect. In addition, extra sensors on the inertial platform can provide helpful information, such as measurements of the earth's magnetic field.
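The time-dependent error growth described above can be made concrete in a few lines: a constant accelerometer bias b, integrated twice, yields a position error of roughly 0.5 · b · t². The sample rate and bias value below are assumptions chosen only to make the growth visible, not figures from the paper.

```python
# Why pure double integration drifts: a small constant accelerometer
# bias grows quadratically into a large position error.

DT = 0.01      # 100 Hz sample rate (assumed)
BIAS = 0.05    # constant accelerometer bias in m/s^2 (assumed)

def drift_after(seconds, bias=BIAS, dt=DT):
    """Double-integrate a constant bias; returns the position error in metres."""
    vel = pos = 0.0
    for _ in range(int(seconds / dt)):
        vel += bias * dt   # first integration: velocity error
        pos += vel * dt    # second integration: position error
    return pos

print(round(drift_after(10), 2))   # ~2.5 m after only 10 s
print(round(drift_after(60), 1))   # ~90 m after one minute
```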

III. SENSOR FUSION, GPS-INS-INTEGRATION

As known from the literature as well as from our own experiments, low-cost GPS navigation does not offer a guaranteed accuracy better than about 12 metres of position deviation, and it also suffers additional errors caused by signal reflection. Furthermore, it is not possible to acquire GPS signals in roofed areas or shaded environments. An INS, as described above, has the problem of sensor drift and, as a result, loses accuracy over time. Hence, the authors' earlier work concentrated on the integration of an INS with low-cost GPS, aiming to exploit both systems' advantages while compensating for their limitations. One of the most common algorithms for implementing this sensor fusion is a Kalman filter with indirect formulation. Indirect formulation means that the estimates provided by the filter do not describe the system's motion values themselves, but rather the errors made by the INS and the inertial sensors. The algorithm processes the inertial sensor values in the so-called propagation step and the GPS information in the so-called measurement update step. In the earlier approach, both the GPS position and velocity measurements were used by the Kalman filter. After each filter iteration, the obtained error state vector was fed back into the INS mechanization block and then reset to a zero vector, which makes the filter work in feedback configuration. The feedback configuration allows the system states to be corrected immediately after the measurements have been processed by the filter, which keeps the error states small and the algorithm stable. The system overview is shown in figure 1.

Figure 1: GPS-INS sensor fusion system model based on Kalman filtering in feedback configuration

The main purpose of the present project was to find and implement a low-cost positioning technique for supporting an INS during longer periods of indoor operation, using a technology that requires neither additional investments nor changes to the building infrastructure. Wireless hotspots are widely available inside most private and industrial buildings, as Internet service is often distributed by Wi-Fi hotspots. Hence, the first goal was to realize a simple localization method based on this technology in order to provide the basis for an uninterrupted low-cost navigation system for both indoor and outdoor operation by sensor fusion of INS, GPS and Wi-Fi positioning.
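The indirect formulation and feedback configuration described above can be sketched in one dimension. This is a deliberately simplified toy, not the authors' implementation: the error state is reduced to a single position error, the "INS" is a double integrator fed by a biased accelerometer while the true object stands still, and all noise figures are invented.

```python
import random

# 1-D sketch of an indirect (error-state) Kalman filter in feedback
# configuration: the filter estimates the INS position error from GPS
# fixes, feeds it back into the mechanization, and resets the error state.

random.seed(0)
DT = 0.01                   # inertial sample period, assumed 100 Hz
ACCEL_BIAS = 0.2            # m/s^2, unknown to the filter
GPS_SIGMA = 1.0             # m, GPS position noise (assumed)
Q, R = 0.5, GPS_SIGMA ** 2  # error-state process / measurement noise

def run(seconds, feedback=True):
    pos = vel = 0.0         # INS mechanization output (truth: standing at 0)
    dx, P = 0.0, 1.0        # position error state and its covariance
    for step in range(int(seconds / DT)):
        vel += ACCEL_BIAS * DT      # propagation: integrate the biased sensor
        pos += vel * DT
        P += Q * DT                 # error-state prediction (dx itself unchanged)
        if step % 100 == 99:        # measurement update at each 1 Hz GPS fix
            z = random.gauss(0.0, GPS_SIGMA)
            y = (pos - z) - dx      # innovation: observed INS error minus prediction
            K = P / (P + R)
            dx += K * y
            P *= 1.0 - K
            if feedback:            # feed the estimated error back, then zero it
                pos -= dx
                vel = 0.0           # crude velocity reset, enough for this sketch
                dx = 0.0
    return pos

# With feedback the position error stays on the GPS noise level; without
# any correction the INS alone drifts away by hundreds of metres.
```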

IV. WI-FI-INS-INTEGRATION

A. Main Concept

The overall concept for the indoor navigation part is composed of three correlated steps of processing Wi-Fi signals, spread by randomly positioned hotspots inside a specific building and acquired by a mobile low-cost receiver attached to the multi-sensor system. The first step comprises the creation of a signal pattern database in the form of a reference file. This file represents a virtual assignment of selected positions inside the building to the specific pattern of signal levels received from a number of hotspots exactly at that position, together with a record of the MAC addresses of the selected hotspot devices (cf. figures 2 and 3). The database file is created as a preparation step before the navigation actually starts. The navigation is then realized by comparing the signals from continuous acquisition to the previously created database. The position estimation is implemented using the smallest-Euclidean-distance method.

B. Detailed Concept

The indoor navigation comprises two successive operation modes:




• Database creation: definition of a number of reference positions and hotspots, acquisition of signal levels in each of those positions, and storage inside a tabular database file

• Database matching: continuous acquisition, signal level matching, estimation of the position, forwarding positive matches to the sensor fusion filter

The database matching mode implements the actual navigation part, which comprises the continuous Wi-Fi signal acquisition for the same hotspots selected before and the comparison of the continuously read signal levels to those recorded in the database file. A common algorithm for realizing this is the Nearest-Neighbor method (cf. [7]).


For the Nearest-Neighbor method there are mainly two possible implementations: the so-called Manhattan distance on the one side and the so-called Euclidean distance on the other. In order to maximize the reliability of the navigational signal processing, the present approach applies the Euclidean distance variant of the Nearest-Neighbor method despite its slightly more complex calculation (cf. [7]).
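The two metrics can be illustrated with a short sketch, compared on a made-up five-hotspot signal pattern (the dBm values below are hypothetical, not taken from the paper's measurements):

```python
import math

def manhattan(p, q):
    """L1 distance: sum of absolute per-hotspot level differences."""
    return sum(abs(a - b) for a, b in zip(p, q))

def euclidean(p, q):
    """L2 distance: square root of the summed squared differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

measured  = [-48.0, -63.0, -71.0, -55.0, -80.0]   # current acquisition (dBm)
reference = [-50.0, -60.0, -70.0, -58.0, -77.0]   # stored pattern (dBm)

print(manhattan(measured, reference))   # 12.0
print(euclidean(measured, reference))   # ~5.66
```

The Euclidean variant costs one extra multiplication per hotspot and a square root per pattern, which is the "roughly more complex calculation" the text refers to.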


Figures 2 and 3: Signal strength patterns for positions No. 1 (top) and No. 2 (bottom) out of six defined reference positions; the MAC addresses were replaced due to data protection requirements

Firstly, the database creation requires the selection of the reference positions and of the hotspots for which the signal levels are to be observed. The reference positions shall be selected with a distinct relative displacement according to the checkpoint concept described below. For the hotspot selection, a compromise has to be found between observing many signals, for lowest possible position ambiguity, and fast signal processing. Most common buildings, however, have a limited number of hotspot devices, so the first optimization criterion is often constrained. Therefore, the present approach utilized five hotspot devices, an average value between private and industrial Wi-Fi configurations. After these fundamental decisions have been made, the signal levels for the selected hotspots at each of the defined reference positions must be acquired, and the hardware identifiers as well as the acquired signal patterns must be stored in the database file. The latter is designed as a two-dimensional array inside a text file for maximizing reading and writing speed while minimizing file size and memory allocation (cf. [6]).

As a next step, for each continuous acquisition sample, the signal levels are compared with each row of the array inside the database file, as those rows correspond to the desired reference position coordinates. The comparison routine results in a one-dimensional array containing the Euclidean distances for all known positions: the first cell for the first reference position, the second cell for the second reference position, and so on. The final solution is reached by taking the minimum value inside the array, comparing this value to a tolerance range and returning the corresponding position coordinates. If the minimum value stays inside the bounds of the tolerance range for a minimum number of samples, the receiver device must be sufficiently close to the reported reference position. Otherwise, no reference position is detected, and the navigation is continued by the stand-alone INS mechanization. C. Checkpoint Concept The checkpoint concept referred to above is directly connected to the INS integration procedure. Low-cost INS can provide an accurate source of navigational information for limited periods of time, but they lose accuracy during long-term operation. Hence, when INS are frequently provided with navigational reference information, they can be re-calibrated to reset the navigation errors to the accuracy given by the reference system. Combining INS and Wi-Fi-based positioning can help to compensate for the limited INS short-term stability while reducing the requirements for the density of reference positions used by the Wi-Fi system. Instead of dividing indoor environments into grids of uninterrupted reference positions, only "checkpoints" have to be defined, at a density corresponding to the performance shown by the inertial navigation. The checkpoint concept results in less

5/278

effort required for the database creation, higher positioning reliability and better compensation of signal fluctuation effects.
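The database loading and matching described above can be sketched as follows. This is an illustration only: the whitespace-separated file layout ("x y rss1 ... rss5" per row), the tolerance value and all names are assumptions, not taken from the paper.

```python
import math

# Hypothetical tolerance for a positive match (dBm of pattern distance).
TOLERANCE_DBM = 10.0

def load_database(path):
    """Read the tabular text database into positions and signal patterns."""
    positions, patterns = [], []
    with open(path) as f:
        for line in f:
            vals = [float(v) for v in line.split()]
            positions.append((vals[0], vals[1]))   # reference coordinates
            patterns.append(vals[2:])              # stored signal levels (dBm)
    return positions, patterns

def match(sample, positions, patterns):
    """Smallest-Euclidean-distance matching with a tolerance check."""
    dists = [math.dist(sample, row) for row in patterns]
    best = dists.index(min(dists))
    if dists[best] <= TOLERANCE_DBM:
        return positions[best]   # positive match -> forward to the fusion filter
    return None                  # no match -> continue stand-alone INS mechanization
```

The paper additionally requires the minimum to remain inside the tolerance for a minimum number of samples before a checkpoint is reported; that debouncing step is omitted here for brevity.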

Figure 4: Acquired signals for the selected hotspots with corresponding position markings

V. RESULTS

As shown in figure 4 for a continuous round walkthrough inside a building corridor, signal pattern comparison can successfully lead to the recognition of previously defined reference positions. Furthermore, the application of the checkpoint concept proves useful, as the given displacement significantly simplifies differentiating between the single positions. The figure also shows that the signal levels can fluctuate during dynamic motion as well as during stationary periods, which represents one of the major challenges for stand-alone Wi-Fi-based indoor navigation.

VI. CONCLUSION

The presented development has shown the results of a proof-of-concept study for the extension of a previously developed GPS-INS system towards both outdoor and long-term indoor operation. It was possible to realize a low-cost Wi-Fi positioning prototype by using a previously installed hotspot infrastructure and simple signal processing, despite the hotspot positions being unknown. The present system concept is suitable for application in aided inertial navigation systems as a source of indoor reference positions. Just as GPS position measurements are applied for the integration of INS and GPS in outdoor environments, both indoor and outdoor reference systems could thus be made available for future implementations aiming at commercial or industrial application.

VII. FURTHER DEVELOPMENT

Because Wi-Fi signals usually suffer from unforeseeable fluctuations induced by multipath propagation, environment dynamics or transmission power variations, further strategies must be found for reducing their effects on the navigational signal processing. Using an INS inside the system enables an effective reduction of fluctuation effects, as it can provide information during dynamic motion. A second strategy becomes available when the INS and the Wi-Fi reference system are fused by a Kalman filter algorithm: the Euclidean distance information can provide a useful input to the filter and hence affect the level of trust given to the Wi-Fi reference. Finally, the application of motion monitoring algorithms provides a powerful tool for filtering illogical results, i.e. motion that is too fast or physically impossible, such as through a wall or outside the building.

ACKNOWLEDGMENT

We would like to express our very special thanks to Professor Dr. Markus Haid for supervising this work at the CCASS in Darmstadt. His efforts are much appreciated.

REFERENCES

[1] M. Haid, M. Kamil, T. Chobtrong, E. Günes, M. Münter, H. Tutsch, "IN-DIVER - Integrated Distribution Planning using an inertial-based tracking system," International Conference on Flexible Automation and Intelligent Manufacturing, Tampere, Finland, 2012.
[2] N. Yazdi, F. Ayazi, and K. Najafi, "Micromachined inertial sensors," Proceedings of the IEEE, vol. 86, no. 8, 1998.
[3] M. Haid, "Verbesserung der referenzlosen inertialen Objektverfolgung zur Low-cost Indoor-Navigation durch Anwendung der Kalman-Filterung," Dissertation, Universität Siegen, Siegen, Germany, 2005.
[4] M. Haid, J. Breitenbach, "Low cost inertial object tracking as a result of Kalman filter," Applied Mathematics and Computation, vol. 153, no. 2, Elsevier, 2004.
[5] O. Loffeld, Estimationstheorie Bd. I/II, Oldenbourg Verlag, 1990.
[6] M. Paciga, H. Lutfiyya, "Herecast: An Open Infrastructure for Location-based Services using Wi-Fi," IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, Montreal, Canada, 2005.
[7] I. Nikolaou, S. Denazis, "Positioning in Wi-Fi Networks," University of Patras, Patras, Greece.

2013 International Conference on Indoor Positioning and Indoor Navigation, 28th-31st October 2013

Survey of Accuracy Improvement Approaches for Tightly Coupled ToA/IMU Personal Indoor Navigation System

Vladimir Maximov

Oleg Tabarovsky

LLC RTLS Moscow, Russia Email: [email protected]

LLC RTLS Moscow, Russia Email: [email protected]

Abstract—In this work we present a personal indoor navigation system based on range measurements with the Time of Arrival (ToA) principle and an Inertial Measurement Unit (IMU). A survey of accuracy improvement approaches, including monocular camera Simultaneous Localization and Mapping (SLAM) and WiFi SLAM, is provided. The presented experimental results show that integrating data from navigation systems with different physical principles can increase the accuracy and robustness of the overall solution. Index Terms—PDR; IMU; ToA; Monocular SLAM; Vector Field SLAM;

I. INTRODUCTION

Fig. 1. System frames

Indoor real-time locating systems (RTLS) are widely spread nowadays and use various physical layers, from RF to acoustic and infrared. The RTLS system developed in our company employs RF ToA range measurements between mobile receivers (tags) and stationary base stations (anchors). It can provide a steady solution with 1 meter accuracy 80% of the time. But it is known that indoor RF range-measuring systems suffer from NLOS measurements; another limitation comes from the measurement update rate, which is about 1 Hz. In order to provide smoother and more robust updates, other sources of navigation information should be used. For this work we chose only those sources that are relatively autonomous (inertial, visual, field strengths), with a more autonomous and ubiquitous navigator in mind. This paper has the following structure: the first part describes the tightly coupled PDR navigator, the second part is devoted to inertially augmented monocular SLAM, and the third part gives some results on WiFi SLAM. II. TIGHTLY COUPLED TOA/IMU NAVIGATOR A. Pedestrian Dead Reckoning There are many reports on pedestrian navigation systems that use inertial sensors, from foot-mounted systems with a full 3D strapdown INS [1] to a 2D strapdown INS attached to the pedestrian body [2]. In our system we use a practical solution, where the pedestrian dead reckoning (PDR) navigator uses a velocity V_b that is determined by estimating the step frequency from accelerometer signals. The angle Ψ defines the rotation of the body frame with respect to the navigation frame. Fig. 1 shows the frames

Fig. 2. Personal navigator functional diagram

used for the navigation system, where: X, Y - navigation frame (n-frame) axes; X_p, Y_p - pedestrian frame (p-frame) axes; X_b, Y_b - body frame (b-frame) axes; Ψ - heading angle; δΨ - heading angle misalignment. Fig. 2 shows the functional diagram of the tightly coupled ToA/IMU navigator, where: a_b - acceleration vector in the b-frame; m_b - magnetic field vector in the b-frame; Ψ_AHRS - AHRS heading angle; V_p - velocity in the p-frame; δV̂_p - estimated p-frame velocity error; δΨ̂ - estimated heading error; R_n - n-frame coordinates; δR̂ - estimated n-frame coordinate error; R̂_n - corrected coordinates in the n-frame; R_CSS - range measurements from the RF chirp spread spectrum (CSS) ToA system. The velocity absolute value is calculated by multiplying the inverse of the counter value by an experimentally estimated scale-factor:



V_p = S_v / S_cnt    (1)

where S_cnt is the counter value and S_v the step scale-factor. The heading angle is estimated by an AHRS (attitude and heading reference system) that fuses data from the inertial sensors and a vector magnetometer in a 15-state Kalman filter.
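A minimal sketch of how (1) and the AHRS heading drive the dead reckoning (function and variable names are ours, not the authors'):

```python
import math

def step_velocity(counter_value, scale_factor):
    """Eq. (1): V_p = S_v / S_cnt - velocity from the inter-step counter."""
    return scale_factor / counter_value

def pdr_step(x, y, psi, counter_value, scale_factor, dt):
    """Advance the n-frame position with the PDR velocity and heading psi."""
    v = step_velocity(counter_value, scale_factor)
    return x + v * math.cos(psi) * dt, y + v * math.sin(psi) * dt
```

Both the heading psi and the scale factor carry errors, which is exactly what the error-state filter of the next subsection estimates.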


B. PDR Errors Correction

Pedestrian velocity and heading angle can be used to calculate coordinates in the n-frame, but drift will inevitably occur. Drift is caused by magnetic field disturbances, which are highly probable indoors; the velocity derived from step frequency is also prone to various errors, ranging from false step detection to the person-varying scale factor in (1). The tightly coupled ToA/IMU navigator uses an indirect Extended Kalman Filter (EKF) that fuses PDR and ToA ranges in order to estimate and compensate the PDR error model with the following 4th-order state vector:

x = [δR̃ᵀ  δΨ  δS_v]ᵀ    (2)

Fig. 3. Misalignment angle estimation (initial misalignments ∆Ψ₀ = 0, 1.2 and −2.5 rad)


where δS_v is the pedometer scale-factor error.

1) System model: It can easily be shown that the linearized PDR error dynamics can be written as:

d(δR̃)/dt = [−V_y^n;  V_x^n] δΨ + [cos Ψ;  sin Ψ] V_p δS_v    (3)

where V_x^n, V_y^n are the velocity components in the n-frame. This gives the following system matrix F for the discrete EKF:

F = [ 1   0   −V_y^n ∆t   cos Ψ V_p ∆t ;
      0   1    V_x^n ∆t   sin Ψ V_p ∆t ;
      0   0    1           0 ;
      0   0    0           1 ]    (4)

Fig. 4. Step scale factor multiplier estimation (step scale-factors S_v = 7, 11 and 17)

III. USING MONOCULAR SLAM FOR PDR ACCURACY IMPROVEMENT

where ∆t is the sampling period.

2) Measurement model: The range measurement delivered by the CSS ToA system can be written as a function of the current position x, y and the known base station coordinates X_i, Y_i:

r_i = h(x) = √((X_i − x)² + (Y_i − y)²)    (5)

The measurement vector z is formed as the difference between predicted and measured ranges:

z = [r_1^PDR − r_1^CSS  ...  r_n^PDR − r_n^CSS]ᵀ    (6)

C. Filtering and experimental results

As the system and measurement models are defined, the standard set of discrete Joseph-form EKF equations was applied to obtain the estimated error values. The experimental setup included a CSS tag paired with a custom-built inertial module. Data was acquired in a typical office environment. Inertial sensor data was sampled at a 20 Hz rate together with 1 Hz CSS ToA ranges. It can be seen from Fig. 3 and 4 that the tightly coupled system can effectively cope with various misalignments and step scale-factors, continuously adapting to the pedestrian. The variations of the misalignment angle plots in Fig. 3 are due to external magnetic distortions.
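The propagation matrix (4) and the range innovation (5)-(6) can be sketched as follows, under the state ordering of (2); this is an illustration only, and all names are ours:

```python
import math

def system_matrix(vx, vy, psi, vp, dt):
    """Discrete system matrix (4) for the error state [dx, dy, dPsi, dSv]."""
    return [[1.0, 0.0, -vy * dt, math.cos(psi) * vp * dt],
            [0.0, 1.0,  vx * dt, math.sin(psi) * vp * dt],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def innovation(x, y, measured, anchors):
    """Eq. (6): predicted minus measured CSS ToA ranges, with eq. (5) per anchor."""
    return [math.hypot(ax - x, ay - y) - r
            for (ax, ay), r in zip(anchors, measured)]
```

The full filter would also need a measurement Jacobian of (5) and the Joseph-form covariance update, which are standard EKF ingredients and omitted here.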

The tightly integrated scheme described above can effectively fuse the data, compensate PDR errors and smooth the CSS ToA range measurements. But it also has several drawbacks, and the reliance on magnetic heading is one of them. While the AHRS combines gyroscope and magnetometer data and can filter out short-time magnetic disturbances, long-time disturbances still pose a problem. It is known that a monocular camera SLAM algorithm can provide information on camera attitude, velocity and coordinates [3], [4]. Typically a 30 frames per second (fps) camera rate is used to make smooth tracking possible. In order to make the algorithm more suitable for mobile platforms, data from the gyroscopes was used in the prediction step and accelerometer data in the correction step, so it was possible to reduce the camera frame rate to 10 fps. Monocular SLAM augmented with inertial data is based on an EKF with a dynamically changing state vector. The state vector x̄ includes the camera state x̄_c and a number of feature states x̄_f. Two different modes of monocular SLAM were tested: a compass mode, where only the attitude of the camera is estimated, and a full 6-D mode, where the attitude is estimated alongside the coordinates and velocities of the camera in the starter frame:


x̄ = [x̄_c  x̄_f¹  ...  x̄_fⁱ]ᵀ    (7)

State vector for the 6-D mode:

x̄_c = [R̄_c  q̄  δω̄  ῡ]ᵀ,   x̄_f = [R̄_f  θ  φ  ρ]ᵀ    (8)


where q̄ - camera attitude quaternion; R̄_c - camera coordinate vector in the starter frame; ῡ - camera velocity vector in the starter frame; δω̄ - gyroscope bias vector; R̄_f, θ, φ, ρ - standard inverse depth parametrization of visual features [4]. For the compass mode, x̄_c contains only the q̄ and δω̄ terms, and x̄_f only θ and φ.


A. System model

The system state model for the 6-D case can be written as:

d/dt [R̄_c;  q̄;  δω̄;  ῡ] = f(x̄, ω̄) = [ῡ;  ½ Ω q̄;  ν̄_ω;  ν̄_υ]    (9)

Fig. 5. Heading angles: gyro-only Ψ_ω and monocular-SLAM Ψ_SLAM


where Ω is the following skew-symmetric matrix:

Ω = [ 0            −(ωx − δωx)   −(ωy − δωy)   −(ωz − δωz) ;
      ωx − δωx     0              ωz − δωz     −(ωy − δωy) ;
      ωy − δωy    −(ωz − δωz)     0             ωx − δωx ;
      ωz − δωz     ωy − δωy      −(ωx − δωx)    0 ]


Gyroscope biases and camera velocity are modeled as random walk processes with corresponding noises ν̄_ω and ν̄_υ. For the compass case, only the q̄ and δω̄ components of the model are used [5].
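A sketch of one Euler propagation step of the quaternion part of (9), with the gyro rates corrected by the estimated biases (an illustration, not the authors' implementation):

```python
def omega_matrix(wx, wy, wz):
    """Skew-symmetric quaternion rate matrix for bias-corrected rates."""
    return [[0.0, -wx, -wy, -wz],
            [wx,  0.0,  wz, -wy],
            [wy, -wz,  0.0,  wx],
            [wz,  wy, -wx,  0.0]]

def propagate_quaternion(q, gyro, bias, dt):
    """One Euler step of q_dot = 1/2 * Omega(w - dw) * q, then re-normalize."""
    w = [g - b for g, b in zip(gyro, bias)]
    om = omega_matrix(*w)
    q = [qi + 0.5 * dt * sum(om[i][j] * q[j] for j in range(4))
         for i, qi in enumerate(q)]
    n = sum(c * c for c in q) ** 0.5
    return [c / n for c in q]
```

The re-normalization keeps the quaternion on the unit sphere, which a plain Euler step does not preserve.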


Fig. 6. Attitude angles for 6-D SLAM experiment

B. Measurement model

The monocular SLAM measurement model with a pin-hole camera was augmented with accelerometer data, which gives the system the ability to keep the local vertical. The normalized acceleration vector measured in the body frame is transformed to the navigation frame, where it is compared with the normalized local gravity vector:

â_n = C_b^n(q̄) ā_b    (10)


where C_b^n is the direction cosine matrix, ā_b the normalized acceleration vector in the b-frame, and â_n the estimated acceleration vector in the n-frame. The acceleration measurement is gated by the measured acceleration absolute value: only undisturbed acceleration vectors are used. Ranges from the CSS ToA system were also added to the measurement model for the 6-D case.
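The magnitude gating of (10) can be sketched as follows; the 5% gate width is our assumption, not a value from the paper:

```python
G_MAG = 9.81     # assumed local gravity magnitude, m/s^2
GATE = 0.05      # hypothetical gate: accept only samples within 5% of 1 g

def gravity_measurement(accel_b, c_bn):
    """Normalize the b-frame acceleration, rotate it to the n-frame with the
    direction cosine matrix, and gate by magnitude (eq. (10) plus gating)."""
    mag = sum(a * a for a in accel_b) ** 0.5
    if abs(mag - G_MAG) / G_MAG > GATE:
        return None                       # dynamic disturbance - reject sample
    unit = [a / mag for a in accel_b]
    return [sum(c_bn[i][j] * unit[j] for j in range(3)) for i in range(3)]
```

Rejected samples simply contribute no attitude correction in that filter cycle, so only near-static accelerations constrain the local vertical.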


Fig. 7. Pedestrian trajectory for the 6-D SLAM experiment

C. Experimental results

The experimental setup included a BeagleBone board with a 320x240 web-camera, a custom-built AHRS module and a CSS ToA tag. Data from the camera was sampled at a 10 fps rate, inertial data at a 100 Hz rate, and CSS ToA data at a 1 Hz rate.

1) Compass mode: The pedestrian heading angle can be evaluated with the range-only system while the pedestrian is moving, but it is ambiguous in the stationary case. Magnetic field disturbances, especially in industrial areas, will eventually distort the AHRS readings as well. To address this problem, a ceiling-looking camera was used as a source of heading angle information. The two plots of Fig. 5 show the heading angle behavior in gyro-only mode (magnetic correction was switched off) - Ψ_ω

and the heading angle Ψ_SLAM provided by monocular SLAM augmented with inertial data. It is clear that monocular SLAM helps to eliminate heading drift considerably.

2) 6-D mode: For this mode the pedestrian was equipped with a forward-looking hand-held camera, the AHRS and a CSS ToA tag. A standard office environment proved to be a very difficult place for pedestrian monocular SLAM, as the landmark set changes fast and the feature base can quickly grow to more than 100 features; nevertheless, for short periods of time, while the pedestrian stays in the same room, monocular SLAM can provide good support to the PDR navigator. Fig. 7 shows the estimated trajectory.



Fig. 8. Results of WiFi VFSLAM simulation

Fig. 9. Calculated pedestrian paths (PDR trajectory and WiFi VFSLAM trajectory)

IV. WIFI SLAM

Received signal strength information (RSSI) of WiFi signals is another ubiquitous source of navigation information. An additional reason to take this information into account is that WiFi, as well as inertial sensors and a monocular camera, is at the core of any modern mobile platform such as a smartphone. RSSI has long been used for navigation purposes, with the fingerprinting approach as a basic one. One approach that enables collecting RSSI data without the need for fingerprinting is called Vector Field SLAM (VFSLAM) [6]. Being a standard EKF-SLAM with a dynamically changing state vector, consisting of a vehicle part and a number of feature parts, it is original in the way it represents the RSSI surface: the features represent the RSSI levels at the corners of a regular square grid, and the RSSI value inside the current grid cell is calculated by bilinear interpolation.

A. WiFi SLAM simulation and test results

A Matlab simulation of VFSLAM was performed to evaluate its effectiveness in a well-controlled environment. Gaussian processes [7] were used to approximate random surfaces simulating the signal strengths of three base stations. The grid size was chosen to be 1 m. Fig. 8 shows the result of the simulation. The green, blue and red surfaces represent the reference fields, while markers of the same color correspond to the field values estimated by the VFSLAM algorithm. It can be seen that the estimated values are quite close to the reference values, so the next step is to evaluate the algorithm with real-life data.

To evaluate the performance of VFSLAM in a real-life situation, data from PDR and WiFi RSS collected in an office environment was fused by VFSLAM with the following state vector:

x̄ = [x  y  Ψ  m_1  ...  m_n]ᵀ    (11)

where m_1 ... m_n are the RSSI levels at the regular grid corners. Pedestrian velocity and the angular rate around the vertical axis (derived from the AHRS and corrected with the gyroscope bias estimate) are used to propagate the state forward, while WiFi RSS measurements, taken at a 0.5 Hz rate, served as the only external information. The grid size was chosen to be 5 meters. Fig. 9 shows the two calculated pedestrian paths; it can easily be seen that, even with a rough 5-meter grid, WiFi VFSLAM delivers much more robust results than PDR alone.
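The bilinear interpolation used by VFSLAM to evaluate the RSSI surface inside one grid cell can be sketched as follows (corner ordering and names are our assumptions):

```python
def bilinear_rssi(m00, m10, m01, m11, u, v):
    """Interpolate the RSSI inside a grid cell from its four corner features;
    u, v in [0, 1] are the normalized coordinates within the cell."""
    return (m00 * (1 - u) * (1 - v) + m10 * u * (1 - v)
            + m01 * (1 - u) * v + m11 * u * v)
```

Because the interpolated value is linear in the four corner features, the EKF measurement Jacobian with respect to the feature states is simply the four interpolation weights.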

V. CONCLUSION

In this paper the authors propose a practical approach to indoor pedestrian navigation which combines several different sources of navigation information. A survey of accuracy improvement approaches is provided, with emphasis on the usefulness of the system on a mobile, smartphone-like platform. Many open questions remain, from both practical and theoretical standpoints: how to optimally fuse the different frameworks (WiFi, visual and inertial), how to switch from one mode of navigation to another, and so on. In our ongoing research we use the BeagleBone as the prototype platform, and the next step will be the integration of all surveyed approaches onto this compact platform to obtain a fully functioning prototype.

REFERENCES

[1] E. Foxlin, "Pedestrian tracking with shoe-mounted inertial sensors," IEEE Computer Graphics and Applications, vol. 25, no. 6, pp. 38-46, Nov. 2005.
[2] M. M. Atia, A. Noureldin, J. Georgy, M. Korenberg, "Bayesian Filtering Based WiFi/INS Integrated Navigation Solution for GPS-Denied Environments," NAVIGATION, Journal of The Institute of Navigation, vol. 58, no. 2, pp. 111-125, Summer 2011.
[3] J. Rydell, E. Emilsson, "CHAMELEON: Visual-inertial indoor navigation," IEEE/ION Position Location and Navigation Symposium (PLANS), pp. 541-546, 23-26 April 2012.
[4] J. Civera, A. J. Davison, J. Montiel, "Inverse Depth Parametrization for Monocular SLAM," IEEE Transactions on Robotics, vol. 24, no. 5, pp. 932-945, Oct. 2008.
[5] J. Montiel, A. J. Davison, "A visual compass based on SLAM," IEEE International Conference on Robotics and Automation, 2006.
[6] J.-S. Gutmann, E. Eade, P. Fong, M. E. Munich, "Vector Field SLAM - Localization by Learning the Spatial Variation of Continuous Signals," IEEE Transactions on Robotics, vol. 28, no. 3, pp. 650-667, June 2012.
[7] S. Reece, S. Roberts, "An introduction to Gaussian processes for the Kalman filter expert," 13th Conference on Information Fusion (FUSION), pp. 1-9, 26-29 July 2010.



Enhancement of the automatic 3D Calibration for a Multi-Sensor System

The improved 3D calibration method of a radio-based Multi-Sensor System with 9 Degrees of Freedom (DoF) for Indoor Localisation and Motion Detection

Enrico Köppe

Daniel Augustin, Achim Liers, Jochen Schiller

Division 8.1 Sensors, Measurement and Testing Methods BAM, Federal Institute for Materials Research and Testing, Berlin, Germany [email protected]

Computer Systems & Telematics FU-Berlin Berlin, Germany daniel.augustin, achim.liers, [email protected]

Abstract—The calibration of the integrated sensors in a multisensor system has gained in interest over the last years. In this paper we introduce an enhanced calibration process, which is based on the preceding study described in [1]. The enhancement consists of the integration of a gyroscope. So far only the accelerometer and the magnetic field sensor were taken into account for the calibration process. Due to this improvement we reach a better approximation of the accelerometer and the magnetic field sensor. Additionally, we minimize the standard deviation of the single sensors and improve the accuracy of the positioning of a moving person.

Keywords-sensor calibration and validation; person tracking; inertial navigation system; inertial measurement unit; embedded systems; multi-sensor system

I. INTRODUCTION

The fast development of mobile sensor technologies, for instance GPS tracking or MEMS, and new hard- and software solutions for smart phones result in growing interest as well as innovative solutions for outdoor and indoor localization. For indoor localization based on inertial sensors it is necessary to calibrate the sensor system with an initial calibration method. The aim of this work is to enhance the accuracy of indoor positioning and tracking by body motion sensing through an improved calibration of the inertial sensors.

II. CALIBRATION PROCEDURE

For the processing of motion sequences for localization it is necessary to use sensors with high sensitivity and time-stable measurement behavior. This can be ensured by a continuous recalibration of the sensors. The procedure presented in this paper uses a recalibration method independent of external equipment. The basis for the procedure is the natural movement of the person wearing the sensor. From the performed motion sequences of the person, the necessary data for the sensor correction is recorded and analyzed. At first the acceleration and the magnetic field sensors are calibrated. Over a long time period their measurement values describe the surface of an ellipsoid, whose parameters are approximated. By using the corrected magnetic field and acceleration data, the current sensor orientation is calculated and utilized as a comparison value for the calibration of the gyroscope. Figure 1 shows the three steps of the calibration procedure, with a continuous recalibration of all three inertial sensors.

Figure 1. Schematic diagram of the calibration procedure

A. First step: Data acquisition and normalization of the sensor data

With the data of the three used sensors (9 Degrees of Freedom (DoF)) measured in free movement, it is possible to generate two ellipsoids (one from the acceleration sensor and one from the magnetic field sensor, 3 + 3 Degrees of Freedom) and a straight line (gyroscope). In the next step we need to normalize the data. For that reason we calculate the rest position and the local earth normal, which depends on the gravitation field and the magnetic field of the earth. Then we filter the data using a standardized finite impulse response


(FIR) filter with a Gauß window function (elimination of outliers caused by fast movements).

B. Second step: Calculation of the 6 ellipsoid parameters (acceleration, ACC, and magnetic field sensor, MAG)

For a good estimation of the two ellipsoids we use the least-median-of-squares approximation and the bisection method. Some remaining errors can be eliminated by optimizing the ellipsoid, especially by optimizing the parameters x, y, z, rx, ry, rz. Detailed information on the calculation steps is given in [1]. Other perturbations are estimated with a validation of the measurement values in two steps of the realized FIR filter (critical frequency).

C. Third step (new): Data transfer from the gyroscope to the sensor data ACC and MAG

In this new step we combine the data of the three sensors. For that we transfer the data of the accelerometer and the magnetic field sensor into an angular velocity (°/s). This is done using the first derivative of the data of both sensors and calculating the simultaneous change in the angle between two consecutive calibrated data points. From the change of the orientation of the sensors we obtain the angular velocity. In the next step it is important to time-synchronize the gyroscope data, the accelerometer data and the magnetic field sensor data; a time shift is caused by the filtering of the accelerometer and magnetic field sensor data. After that, the Gauß-Newton method is applied to determine the scaling and the resting point of the gyroscope based on the angular velocities calculated from the accelerometer and the magnetic field sensor.
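The Gauß-window FIR smoothing of the first step can be sketched like this (the tap count and sigma are arbitrary illustrative choices, not the paper's values):

```python
import math

def gauss_window(n_taps=9, sigma=1.5):
    """Normalized Gaussian FIR coefficients."""
    mid = (n_taps - 1) / 2.0
    w = [math.exp(-0.5 * ((i - mid) / sigma) ** 2) for i in range(n_taps)]
    s = sum(w)
    return [c / s for c in w]

def smooth(data, taps):
    """FIR filtering with edge replication - suppresses outliers caused by
    fast movements before the ellipsoid approximation."""
    half = len(taps) // 2
    padded = data[:1] * half + list(data) + data[-1:] * half
    return [sum(t * padded[i + k] for k, t in enumerate(taps))
            for i in range(len(data))]
```

Because the coefficients are normalized to sum to one, a constant signal passes through unchanged, so the filter removes spikes without biasing the rest-position estimate.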

The maximum standard deviation for each sensor is shown in Table 2. Furthermore, all standard deviations of the calibration values are below the noise level of each single sensor. TABLE II.

Sensor

Accelerometer

Magnetic field

Gyroscope

STANDARD DEVIATION OF THE THREE DIFFERENT SENSORS IN EACH AXIS axis ax (rest position) ax (1g) ay (rest position) ay (1g) az (rest position) az (1g) mx (rest position) mx (local) my (rest position) my (local) mz (rest position) mz (local) gx (rest position) gx (digit in °/s) gy (rest position) gy (digit in °/s) gz (rest position) gz (digit in °/s)

Standard deviation calibration 2.3 2.5 2.4 5.2 2.4 2.4 0.47 0.11 0.09 0.23 0.22 0.07 0.14 0 0.1 0 0.1 0

noise 3.6 4.5 4.8 4.3 4.0 4.1 1.07 0.2 0.1

In Figure 2 we see good conformity for the calibration results of the gyroscope (abbr. G) with the calibrated and calculated data of the accelerometer and magnetic field sensor (abbr. MA). The minimal difference between the angle velocity ϕ(G) of each axis and ϕ(MA) of each axis is caused by the calibration.

The described calibration process is performed, continuously. One result of this continuous calibration process is the minimization and the elimination of internal and external error sources as well as temperature influence, drift behaviour and different offsets as hard iron offset and soft iron offset. III.

RESULTS

For evaluation of the calibration procedure, 109 experiments were carried out. The resulting calibration values of the three different sensors are shown in Table I.

TABLE I. CALIBRATION AND OFFSET VALUES FOR THE SPECIFIC SENSORS

Sensor                  Axis   Offset of the rest position   Deformation
Accelerometer           ax     -15.95 mg                     0.969
                        ay     -33.90 mg                     1.042
                        az     41.34 mg                      0.961
Magnetic field sensor   mx     -81.15 mGauss                 0.969
                        my     116.12 mGauss                 1.042
                        mz     60.27 mGauss                  0.961
Gyroscope               gx     1.67 °/s                      0.98
                        gy     0.51 °/s                      1.01
                        gz     0.41 °/s                      0.97

Figure 2. Calculated and measured calibration data of the gyroscope
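To illustrate how per-axis offset and deformation values such as those in Table I could be applied to raw measurements, a subtract-offset-then-scale correction is sketched below. The exact correction model used by the authors may differ; the function name and the example sample are assumptions.

```python
import numpy as np

# Illustrative accelerometer values in the spirit of Table I:
# per-axis offset of the rest position (mg) and a multiplicative
# "deformation" (scale) factor per axis.
ACC_OFFSET = np.array([-15.95, -33.90, 41.34])   # mg
ACC_SCALE = np.array([0.969, 1.042, 0.961])      # deformation

def apply_calibration(raw, offset, scale):
    """Remove the rest-position offset, then rescale each axis."""
    return (raw - offset) * scale

raw_sample = np.array([0.0, 0.0, 1000.0])        # roughly 1 g on the z axis, in mg
cal = apply_calibration(raw_sample, ACC_OFFSET, ACC_SCALE)
```

With a continuous recalibration, the offset and scale arrays would simply be refreshed each time the procedure of Section II completes.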

IV. CONCLUSION

The procedure of continuous recalibration shown in this paper can be used to improve indoor positioning and to ensure the long-term stability of commercial sensors. With this procedure, a higher accuracy in position determination is possible. Additionally, the influences of internal sensor drift, temperature dependence, and external disturbances, for example local magnetic fields, are reduced. Moreover, the promising results of the described calibration procedure enable applications on the consumer market as well as in the mobile phone and smartphone sector.

978-1-4673-1954-6/12/$31.00 ©2012 IEEE

REFERENCES
[1] E. Köppe, D. Augustin, A. Liers and J. Schiller, "Automatic 3D Calibration for a Multi-Sensor System," IPIN 2012.


A gait recognition algorithm for pedestrian navigation system using inertial sensors

Wen Liu
Navigation College, Dalian Maritime University, Dalian, China
[email protected]

Yingjun Zhang
Navigation College, Dalian Maritime University, Dalian, China
[email protected]

Abstract—In this paper, we investigate a gait recognition algorithm that can be applied in a foot-mounted pedestrian navigation system, where the gait types are classified using the dynamic time warping algorithm. This algorithm copes with walking samples of different lengths, which result from the randomness of walking motion. Further, in order to obtain the walking samples, we propose a step cycle detection algorithm based on the 3D gyro magnitude and a sliding window method. Subsequently, by combining pedestrian walking characteristics, a gait sample set is established from actual measured data; the gait samples include a continuous horizontal walking sample, an intermittent horizontal walking sample, an upstairs walking sample, and a downstairs walking sample, which are common in daily life. Taking advantage of the dynamic time warping algorithm, the gait can be recognized and the optimal warping path computed. Finally, employing an artificial marking method, we evaluate the performance of the gait recognition algorithm on actual measured data. The test results show that the recognition accuracy is reliable: 95.86% for continuous horizontal walking, 90.73% for intermittent horizontal walking, 93.48% for upstairs walking, and 98.85% for downstairs walking.

Keywords: gait recognition; pedestrian navigation; inertial sensors; dynamic time warping.

I. INTRODUCTION

GPS is an important component of positioning systems and plays a key role in outdoor positioning. However, GPS continues to struggle indoors due to the failure of satellite signals to penetrate buildings [1]. Furthermore, recent developments in the field of smart mobile terminals have led to an increased interest in indoor positioning and navigation. In recent studies, indoor positioning and navigation has been approached in two different ways: one is the Local Positioning System (LPS), and the other is Pedestrian Dead Reckoning (PDR) [2]. Compared with LPS, the PDR approach has a number of attractive features: autonomy, cost-effectiveness, and no need to install markers or instrumentation in advance. Specifically, PDR is divided into stride and heading systems (SHS) and inertial navigation systems (INS) [3]. Pedestrian inertial navigation technology based on MEMS inertial sensors has gradually become an indoor navigation solution thanks to its independence, portability, low cost, and other characteristics.

Pedestrian inertial navigation systems widely adopt the system framework characterized by extended Kalman filtering and MEMS inertial sensors strapped on the instep, which was proposed by Foxlin [4]; the main problem of such systems is error accumulation caused by inertial sensor drift. To solve this problem, Kalman filtering is used to track the system error, and the error is corrected by the Zero Velocity Updates (ZVUPs) algorithm [4, 5]. In order to obtain higher navigation accuracy without introducing other sensors, we analyze the gait motion to find error correction information. The aim of this paper is to analyze the gait recognition algorithm. Recent developments in the gait recognition field adopt the solution of strapping many Inertial Measurement Units (IMUs) to different positions on the body and achieve gait recognition by analyzing the patterns of acceleration and angular velocity change [6, 7]. To simplify this method, only one IMU is used in our solution. In addition, the dynamic time warping algorithm is used to cope with walking samples of different lengths, which result from the randomness of walking motion. Employing an artificial marking method, we evaluate the performance of the gait recognition algorithm on actual measured data. This paper is divided into three parts: the first part deals with the walking cycle detection algorithm, the second part analyses the gait recognition algorithm, and the algorithm is evaluated in the last part.

II. WALKING CYCLE DETECTION ALGORITHM

A walking cycle is the process from the start to the end of a walking motion. Specifically, walking motion is divided into single-step and complex-step; because only one IMU is strapped on the instep, the walking cycle here refers to the complex-step. This part analyses the walking cycle detection algorithm, whose aim is to divide successive raw data into walking cycle sections, which are the objects of the gait recognition algorithm. A reliable walking cycle detection algorithm is a prerequisite of gait recognition. For accurate detection, a method based on the 3-axis angular velocity and a sliding window is proposed. The idea of the algorithm is as follows: first, the raw data matrix is constructed; then a threshold on the norm of the 3-axis angular velocity is set, the static state is determined by the sliding window method, and the static state matrix is constructed; finally, the walking cycle matrix is constructed based on the static state matrix. The specific steps are as follows:


A. Construct the raw data matrix
The data measured by the MEMS inertial sensors are saved in an N×10 matrix. Each row represents one set of sensor data: in order, the data index, 3-axis acceleration, 3-axis angular velocity, and 3-axis magnetic field strength. Our algorithm requires only the index and the 3-axis angular velocity, so an N×4 raw data matrix is constructed, as shown in Table I.

TABLE I. RAW DATA MATRIX SAMPLE

                Angular velocity (rad/s)
Index    X axis        Y axis        Z axis
1        0.002411      0.982755      -0.000122
…        …             …             …
3689     1.537429      7.157153      -0.480536
3690     1.256393      7.626344      -0.999916
…        …             …             …
end      0.023122      0.786572      -0.0021342

B. Construct static state matrix
• Set two identifying signs: the sign Start identifies the start of each static state, and the sign End identifies its end. The sliding window is formed by the signs Start and End. The detection process runs from the first row to the end of the raw data matrix. To begin with, Start and End point to the first row, so the width of the sliding window is zero. The detection rule is that the norm of the 3-axis angular velocity at index i is less than 0.5 rad/s; if the rule is met, Start and End are set to index i and the window width is zero.
• Slide the sign End to the index j at which the norm of the 3-axis angular velocity becomes greater than or equal to 0.5 rad/s; then set End to index j-1, so that the width of the sliding window is j-i. Note that the "cross zero" state also meets the detection rule. In order to prevent false detections, sliding windows narrower than 24 samples are ignored; in other words, a sliding window is valid only when its width exceeds 24. This threshold is determined by the sampling frequency and the static-state duration.
• The Start and End of each detection are saved into an N×3 matrix called the static state matrix. Each row represents one static state: the static-state index is saved in the first column, and Start and End are saved in the second and third columns, as shown in Table II.
• After each walking cycle is detected, slide the sign Start to End, so that the width of the sliding window is zero again; then slide Start and End to the next index at which the detection rule is met.
• Repeat the above steps until the whole raw data matrix has been processed. The resulting static state matrix is shown in Table II.

TABLE II. STATIC STATE MATRIX SAMPLE

Index   Start   End
1       3855    3935
…       …       …
10      5391    5438
…       …       …
end     5706    5751

C. Construct walking cycle matrix
To construct the walking cycle matrix, which is the basis of gait recognition, we take the end of the static state in row i as the start of the walking motion in row i+1; the walking cycle matrix is then constructed as shown in Table III. Compared with the static state matrix, the walking cycle matrix has one more column, which gives the start of each walking cycle; therefore, the raw data of each walking cycle can be obtained. Using the above algorithms, the raw data are divided into walking cycle sections, which are saved in the rows of the walking cycle matrix.
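The static-state detection and walking-cycle construction described above can be sketched as follows. The 0.5 rad/s threshold and the minimum window width of 24 samples are taken from the text; the function names and the run-length formulation are illustrative rather than the authors' implementation.

```python
import numpy as np

def detect_static_states(gyro, thresh=0.5, min_width=24):
    """Find static states as runs of samples whose 3-axis angular-velocity
    norm stays below `thresh` (rad/s) for more than `min_width` samples.

    gyro: (N, 3) array of angular velocities
    Returns a list of (Start, End) index pairs (inclusive), i.e. the
    rows of the static state matrix of Section II-B.
    """
    quiet = np.linalg.norm(gyro, axis=1) < thresh
    states, start = [], None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i                          # sign Start: window opens
        elif not q and start is not None:
            if i - start > min_width:          # ignore windows that are too narrow
                states.append((start, i - 1))  # sign End: last quiet sample
            start = None
    if start is not None and len(quiet) - start > min_width:
        states.append((start, len(quiet) - 1))
    return states

def walking_cycles(states):
    """Walking cycle i starts where static state i-1 ends (Section II-C).
    Returns (cycle start, static start, static end) triples as in Table III."""
    return [(states[i - 1][1], states[i][0], states[i][1])
            for i in range(1, len(states))]

# Example: 30 still samples, 20 moving samples, 30 still samples
gyro = np.zeros((80, 3))
gyro[30:50] = 1.0
states = detect_static_states(gyro)
```

On the example above the detector returns two static states and one walking cycle spanning the moving segment between them.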

TABLE III. WALKING CYCLE MATRIX SAMPLE

Index   Start(a)   Start(b)   End
1       3720       3855       3935
…       …          …          …
11      5438       5542       5595
…       …          …          …
end     5901       6021       6075

a. Start stands for the beginning of each walking cycle; b. Start stands for the beginning of the static state in a walking cycle.

III. GAIT RECOGNITION ALGORITHM

The gait recognition algorithm is the identification process for gait samples; specifically, the distance between a gait sample and the sample set is computed by the recognition algorithm. The sample set and the recognition algorithm are therefore the core factors.

A. Construct sample set
In this paper, we study three kinds of gait that are common in indoor environments: horizontal walking, upstairs walking, and downstairs walking. From the measured data we found that another kind of horizontal walking exists, occurring in the transition between different gaits. Therefore, we divide horizontal walking into two kinds: continuous horizontal walking and intermittent horizontal walking. In addition, since the data showed that the Y-axis angular velocity varies most significantly, we adopt the Y-axis angular velocity as the variable to describe the samples. Four kinds of walking sample were obtained by trial and error, the aim being to find relatively standard samples that improve the recognition accuracy. Finally, the sample set is constructed, as shown in Fig. 1.

B. Dynamic time warping algorithm
A delay may occur between two arrays having the same variation tendency, due to variation of the walking speed. In addition, the lengths of two walking arrays may differ. For this problem, the dynamic time warping algorithm is adopted.
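As a sketch, the dynamic time warping computation described in this section — a Euclidean (squared-difference) distance matrix d and an accumulated-cost adjacency matrix D built from the left, up, and left-up neighbours — might be implemented as follows. Function names are illustrative.

```python
import numpy as np

def dtw_distance(t, r):
    """Dynamic time warping distance between two 1-D sequences of possibly
    different lengths, using squared differences as the local cost."""
    t = np.asarray(t, dtype=float)
    r = np.asarray(r, dtype=float)
    n, m = len(t), len(r)
    # Euclidean distance matrix d: d[i, j] = (t[i] - r[j])^2
    d = (t[:, None] - r[None, :]) ** 2
    # Adjacency matrix D: first row and column are cumulative costs
    D = np.zeros((n, m))
    D[0, :] = np.cumsum(d[0, :])
    D[:, 0] = np.cumsum(d[:, 0])
    for i in range(1, n):
        for j in range(1, m):
            # local cost plus the cheapest of the left, up, left-up neighbours
            D[i, j] = d[i, j] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]  # distance at the lower right corner

# The paper's example: t = [1 2 10 3], r = [1 1 10 2 3]
dist = dtw_distance([1, 2, 10, 3], [1, 1, 10, 2, 3])
```

Applied to the paper's example vectors, the function returns the distance 2 found at the lower right corner of D.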


Figure 1. Sample set (a. Continuous horizontal walking, 139 elements; b. Intermittent horizontal walking, 163 elements; c. Upstairs walking, 155 elements; d. Downstairs walking, 136 elements)

The algorithm takes advantage of the dynamic programming idea [8]: the time axis is warped unequally to align samples of different lengths. The detailed process is as follows. Suppose there are two vectors, t (sample vector) and r (vector to be identified), with t = [1 2 10 3] and r = [1 1 10 2 3]; t has 4 dimensions and r has 5 dimensions.

1) Construct Euclidean distance matrix d
The Euclidean distance matrix d is 4×5 dimensional. dij represents the square of the distance between the i-th element of t and the j-th element of r:

    d = [  0    0   81    1    4
           1    1   64    0    1
          81   81    0   64   49
           4    4   49    1    0 ]

2) Construct adjacency matrix D
The adjacency matrix D and the Euclidean distance matrix d have the same dimensions. The matrix D is initialized to zero, and D(1,1) is assigned the value d(1,1). Then the first row of the adjacency matrix is assigned: the element D(1,i) of the i-th column (i > 1) is assigned the sum of d(1,i) and D(1,i-1). The first column is assigned in the same way, giving:

    D = [  0    0   81   82   86
           1    0    0    0    0
          82    0    0    0    0
          86    0    0    0    0 ]

Then the all-zero sub-block (3×4) located at the lower right corner is assigned as follows: the assignment proceeds row by row, from left to right, and each element is formed from two parts — the element at the same position in the Euclidean distance matrix, and the minimum of three elements, namely the left, up, and left-up neighbours. The resulting D matrix is:

    D = [  0    0   81   82   86
           1    1   64   64   65
          82   82    1   65  113
          86   86   50    2    2 ]

The distance between the two vectors is found at the lower right corner of the adjacency matrix D. Therefore, the distance between t and r is 2, and the distance between samples of different lengths can be computed with this algorithm.

C. Gait Recognition Case
To illustrate the application of the dynamic time warping algorithm to gait recognition, we use a case study. A sample of 173 elements is selected; its relationship to the sample set is shown in Fig. 2. The distances computed using the dynamic time warping algorithm are shown in Table IV. The distance between the sample and the continuous horizontal walking sample is minimal, so the sample belongs to continuous horizontal walking.

Figure 2. Correlation between test sample and sample set

TABLE IV. DISTANCES COMPUTED USING DYNAMIC TIME WARPING ALGORITHM

Sample set item                   Distance
Continuous horizontal walking     13.2673
Intermittent horizontal walking   284.7651
Upstairs walking                  301.8935
Downstairs walking                61.3711

IV. EXPERIMENTAL VERIFICATION

For the purpose of verifying the proposed gait recognition algorithm, experimental verification was conducted. The MEMS inertial sensor MTx [9] (28A58G25) is used, and the experimental data are raw data measured with the MTx at a sampling frequency of 120 Hz. The algorithm accuracy is computed using the gait recognition algorithm and the artificial marking method as follows:


A. Artificial Marking Method
The true gait information during a walking motion is necessary to compute the accuracy of the gait recognition algorithm. Therefore, we mark each walking cycle manually during the experiment; this is called the artificial marking method.

B. Experimental Analysis
We conducted 9 experiments in stadiums, shopping malls, and laboratories; in total, 733 walking cycles were performed by the same person. As mentioned above, the recognition accuracy is computed against the artificial marking. The statistical result is as follows: 95.86% for continuous horizontal walking, 90.73% for intermittent horizontal walking, 93.48% for upstairs walking, and 98.85% for downstairs walking. The detailed results are shown in Table V.

TABLE V. EXPERIMENT RESULTS

      Artificial Marking      Gait Recognition        Recognition Accuracy (%)
NO.   Ca   Ib   Uc   Dd       Ca   Ib   Uc   Dd       Ca     Ib     Uc     Dd
1     69   2    0    0        69   2    1    0        98.5   100    ---e   ---e
2     187  2    0    0        185  3    1    0        98.9   100    ---e   ---e
3     132  4    12   0        131  7    10   0        99.2   100    83.3   ---e
4     45   2    0    0        43   3    1    0        95.5   100    ---e   ---e
5     23   6    7    8        24   5    7    8        100    83.3   100    100
6     46   2    0    0        46   1    1    0        100    50     ---e   ---e
7     47   2    0    0        45   3    1    0        95.7   100    ---e   ---e
8     28   6    7    8        28   5    7    8        100    83.3   100    100
9     24   10   32   29       18   12   29   28       75     100    90.6   96.5

a. C stands for continuous horizontal walking; b. I stands for intermittent horizontal walking; c. U stands for upstairs walking; d. D stands for downstairs walking; e. --- stands for none.
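The per-class accuracy comparison against the artificial marking can be sketched as below. The label encoding, function name, and the four-cycle example are hypothetical illustrations, not the authors' evaluation code.

```python
def recognition_accuracy(marked, recognized, classes=("C", "I", "U", "D")):
    """Per-class recognition accuracy from artificially marked labels and
    the labels produced by the recognition algorithm (one label per
    walking cycle). Returns class -> accuracy in percent, or None when a
    class does not occur in the marked data (the '---' entries of Table V).
    """
    acc = {}
    for c in classes:
        total = sum(1 for m in marked if m == c)
        correct = sum(1 for m, g in zip(marked, recognized) if m == c and g == c)
        acc[c] = round(100.0 * correct / total, 1) if total else None
    return acc

# Hypothetical illustration: 4 cycles, with the upstairs cycle misrecognized
marked = ["C", "C", "U", "D"]
recognized = ["C", "C", "C", "D"]
acc = recognition_accuracy(marked, recognized)
```

Summing the correct counts over all experiments and dividing by the marked totals per class would reproduce the overall figures quoted in the text.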

V. CONCLUSION AND PROSPECT

The main problem of pedestrian navigation systems based on MEMS inertial sensors is error accumulation caused by the drift of the inertial sensors. In recent years there has been increasing interest in correcting this error within the existing framework. For this purpose, we analyzed the gait recognition algorithm in this paper and tried to take advantage of gait information to correct the error. Specifically, a walking cycle detection algorithm based on the 3-axis angular velocity and a sliding window is proposed to divide successive raw data into walking cycle sections, which are the objects of the gait recognition algorithm. In addition, a gait recognition algorithm based on dynamic time warping, which copes with walking samples of different lengths resulting from the randomness of walking motion, is proposed as well. Finally, in order to evaluate the performance of the gait recognition algorithm, 9 experiments were conducted. The test results show that the recognition accuracy is reliable: 95.86% for continuous horizontal walking, 90.73% for intermittent horizontal walking, 93.48% for upstairs walking, and 98.85% for downstairs walking.

This research has thrown up many questions in need of further investigation. As a next step, we will focus on error correction algorithms based on gait information.

ACKNOWLEDGMENT
The research is supported by Projects 61073134 and 51179020 of the National Natural Science Foundation of China, National 863 Project No. 2011AA110201, and the Applied Fundamental Research Project of the Ministry of Transport of China (No. 2013329225290).

REFERENCES
[1] Dedes G, Dempster A G. Indoor GPS positioning - challenges and opportunities [C]. IEEE 62nd Vehicular Technology Conference, Texas, USA, 2005: 412-415.
[2] Jiménez A R, Seco F, Zampella F, et al. PDR with a foot-mounted IMU and ramp detection [J]. Sensors, 2011, 11(10): 9393-9410.
[3] Harle R. A survey of indoor inertial positioning systems for pedestrians [J]. IEEE Communications Surveys & Tutorials, 2013, PP(99): 1-13.
[4] Foxlin E. Pedestrian tracking with shoe-mounted inertial sensors [J]. IEEE Computer Graphics and Applications, 2005, 25(6): 38-46.
[5] Jimenez A R, Seco F, Prieto J C, et al. Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU [C]. The 7th Workshop on Positioning, Navigation and Communication (WPNC), HTW Dresden, Germany, 2010: 135-143.
[6] M. J. V-N. Recognition of human motion related activities from sensors [D]. Malaga, Spain: University of Malaga, 2010.
[7] Frank K, Nadales M J V, Robertson P, et al. Reliable real-time recognition of motion related human activities using MEMS inertial sensors [C]. The 23rd International Technical Meeting of the Satellite Division of the Institute of Navigation, Portland, OR, United States, 2010: 2919-2932.
[8] Myers C, Rabiner L, Rosenberg A E. Performance tradeoffs in dynamic time warping algorithms for isolated word recognition [J]. IEEE Transactions on Acoustics, Speech and Signal Processing, 1980, 28(6): 623-635.
[9] Xsens. http://www.xsens.com/en/general/mtx.



An UWB Based Indoor Compass for Accurate Heading Estimation in Buildings

Abdelmoumen Norrdine (1), David Grimm (2), Joerg Blankenbach (1) and Andreas Wieser (3)
(1) RWTH Aachen University, Geodetic Institute, Aachen
(2) Leica Geosystems, Heerbrugg
(3) ETH Zurich, Institute of Geodesy and Photogrammetry, Zurich
{norrdine ; blankenbach}@gia.rwth-aachen.de; [email protected]; [email protected]

Abstract—The demand for positioning systems locating people and/or objects automatically inside buildings or in other GPS-denied environments has rapidly increased during recent years, and several systems have been developed. Most of them are designated for position estimation only. However, in addition to the position, the user's orientation is also useful or even mandatory for certain applications. In this contribution the determination of the azimuth (heading) of mobile devices is presented using an indoor positioning system based on time-of-flight measurements with Ultra Wide Band (UWB) pulses. The system enables the determination of 3D positions with accuracies in the cm range even when multipath propagation is present. The main focus of this contribution is the determination of the azimuth (heading) of mobile users without using antenna arrays, a magnetic compass, or inertial sensors (and therefore without requiring any prior knowledge of an initial orientation). The proposed method for azimuth determination is based on selective shadowing of UWB signals using a rotating attenuation shield. The time-varying attenuation of the received UWB waves allows estimating the direction of arrival of the respective signals at the receiving antenna by means of signal processing methods. If the emitting transceiver's position is known, the antenna orientation can be derived therefrom. Using a prototype, first experiments have been carried out. The results prove the feasibility and indicate an accuracy of less than 1 degree under good conditions in an indoor environment.

Index Terms—Heading, azimuth, orientation, indoor localization, trilateration, Ultra Wide Band (UWB)

I. INTRODUCTION

Recently, the need for automated systems locating people and objects inside buildings (indoors) has rapidly increased. A main reason for this is the general availability of positioning and navigation outdoors and the demand for seamless extension of the related applications to indoor environments. Some examples are pedestrian navigation in public buildings (such as railway stations or airports), locating firefighters in emergency situations, tracking and finding assets, or automated robot control. Global Navigation Satellite Systems (GNSS) are only available outdoors, except under very special conditions with very low accuracy requirements (e.g. accepting deviations of 100 m or more). The satellite signals are heavily attenuated by walls, ceilings and objects, and therefore cannot be used indoors. Worldwide intensive research in indoor positioning is a result.

In addition to the pure position information (typically 2D or 3D coordinates in a local reference system), the spatial orientation of the user or mobile device may also be useful or even mandatory for certain applications. Examples are augmented reality applications, where a view of the real world is augmented by virtual objects; both the 3D position and orientation of the user, i.e. all six degrees of freedom (6 DOF), have to be known with high accuracy in such cases. A further example is the use of the moving direction (heading or azimuth) of the user for dead reckoning in pedestrian navigation. The most widespread method for heading estimation in pedestrian navigation systems is the utilization of a magnetic compass. However, magnetic anomalies occurring in indoor environments due to electrical wiring, metal furniture or building materials (reinforced concrete) may cause heading estimation errors exceeding 30° [1, 2], rendering the magnetic azimuth virtually useless. A method requiring no additional sensors, but applicable during motion only, is based on heading determination by means of the baseline between subsequent position estimates. Often, inertial measurement units (IMUs) are used to indicate orientation [3, 4]; however, this approach suffers from accumulated errors because of the required multiple integration of the sensor output. Another approach to orientation estimation is the use of an antenna array, i.e. multiple antennas rigidly attached to the mobile device. The accuracy is proportional to the distance between the antennas, so this approach is useful for GPS-based orientation estimation of outdoor platforms but hardly applicable to pedestrians or small devices in indoor environments because of signal obstructions and fading effects [5]. A method for attitude estimation in indoor applications based on a vision system is presented in [6]; however, it is limited to environments where straight lines can be detected (e.g. doors and corridor borders). In the following, an alternative approach for azimuth determination based on a highly accurate Indoor Local Positioning System (ILPS) using Ultra Wide Band (UWB) signals is introduced. This paper is organized as follows: first, the UWB-ILPS is presented. Next, the proposed system and method for
However,it is limited to environments where straight lines can be detected (e.g. doors and corridor borders). In the following an alternative approach for azimuth determination based on a highly accurate Indoor Local Positioning System (ILPS) usingUltra Wide Band (UWB) signals is introduced. This paper is organized as follows: first, the UWB-ILPSis presented. Next, the proposed system and method for

18/278

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013 measuring the orientation areoutlined. This is followed by thepresentation and discussion of a real-world experiment,and by a conclusion. II.

UWB-INDOOR LOCAL POSITIONING SYSTEM

UWB systems have advantageousproperties forpositioningin indoor environments.They yielda high spatial resolution, arerobust with respect to multipath, and their signalspenetrate various materials. The UWB-ILPSused for this contribution was already developed in previous research and successfully implemented in a prototype [7-9]. The UWBILPS consists of several TimeDomainTM P210 UWB transceivers operating in the 3.2- 6.3 GHz frequency range. It has a positioning update rate of more than 3s and a radiated power of less than 50 µW. Therefore it can be only deployed for static operation and in indoor environments with temperate obstacles. The slope distances between all the transceivers can be derivedeven in non line of sight scenarios by measuring the Time Of Flight (TOF) of the UWB pulses. In order to avoid the need for synchronization between the transceivers, TOF is implemented as two way ranging [7]. Thus, ranging to multiple transceivers is accomplished successively.Using the distances between the Reference Stations (RSi), whose coordinates (Xi,Yi,Zi) in the building reference system are known, and each Mobile Station(MS) with unknown coordinates (XMS,YMS,ZMS) (s. Fig.1), positioning results with an accuracy up to 2 cm have beenachieved [8]. The heading determination could be achieved using the baselines between subsequent positions of the mobile station or using the simultaneously estimated positions of at least two antennas mounted rigidly at the MS. Thelatter method has been successfully implemented for yaw determination for a digital camera (Fig.2) [7-9]. However, the first approach requires rather fast motion and is only applicable if a constant relation between MS orientation and direction of motion is maintained; the second approach is of limited applicability to small objects or pedestrians. III.

Figure 2: Camera orientation by using two UWB antennas mounted on a rigid baseline

not force the antenna to remain leveled the missing rotation angles (roll and pitch) can be determined using an additional sensor, e.g. an inclinometer.This is not further investigated in this paper. The proposed method originates from [3].It is based on the idea of selective shadowing of UWB signals received from reference stations RSi. For signal shadowing, an Attenuation Shield (AS) (15cm x 7 cm x 4 mm, PVC material)is utilized rotating around the receiver antenna(Fig. 3). For the experimental evaluation of the concept a rotating device called NORDIS hardwarewasutilized, which was originally used for research in GPS orientation determination outdoors [3]. A BroadSpecTMomnidirectional UWBantenna was mounted on the NORDIS hardware.AS isrotated about the antennabore sight with constant velocity (Fig. 4). The angle of arrival of the RS signals are indicated with regard to the zero direction of the NORDIS hardware which is, however, not the required heading of MS. Though, the heading of the NORDIS

SINGLE ANTENNA AZIMUTH DETERMINATION

The main focus of this contribution is the determination of the azimuthof a leveled UWB antenna without using an antenna array or the (past) trajectory. If the application does Figure 3: Rotating attenuation shield

Figure 4: UWB antenna mounted on the NORDIS hardware.

Figure 1: System architecture of UWB-ILPS.

19/278

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Figure 5: Measurement setup

hardware can be derived from the azimuth from MS to RS calculated using the coordinates of MS and RS - and the determined angle of arrival. For the evaluation of the proposed approach test measurements have been accomplished in the geodetic measuring lab of ETH. Therefore,themobile station equipped with NORDIS hardware as well as the reference stations were set up in a horizontal planeon fixed reference points (=survey pillars)whose relative positions are exactly known (Fig. 5). The signal strength associated with these two reference stations has been continuouslycaptured during four revolutions of AS using the antenna equipped with the NORDIS hardware. To get the azimuth values from the raw data, several signal processing steps are performed in post-processing: A. Leading edge detection and signal strength calculation During the shield rotation the signal strength was calculated continuously by integrating the received UWB signal. The rotation is slow w.r.t. the signal integration such that the rotation angle of the shield is assumed constant during integration.The respective integration interval starts at the leading edge of the pulse and is 2 ns wide. Fig.6 shows an exemplary UWB signal and the estimated leading edge. Figure7 shows a signal strength curve (in black) after integrating several UWB signals during four revolutions of AS. B. Signal smoothing

Figure 6: Received UWB signal

Figure 7: Example of signal strength curve

Figure 8: Signal strength with transceiver at RS1and RS2during one revolution of the attenuation shield.

To remove signal outliers and reduce signal fluctuations, the measured signal has to be first smoothed. Due to the quasisinusoidal nature of the signal,the Fast Fourier Transformation (FFT)has been used in combination with the Inverse FFT (IFFT) in order to resynthesize the signal based on itsspectral analysis. In the spectral analysisstage,frequency components lower than a preset threshold are set to zero. Figure 7 shows a smoothed signal strength curve. C. Time delay estimation The TimeDelay (TD)is a measure of the angle between the two RS. The calculation is based on TD estimation between the measured waveformsassociated with the two transceivers atRS1 and RS2 respectively (Fig. 5). These two waveforms x(RS1) and r(RS2) are depicted in Figure8. In this early stage of research two methodshave been used for TD estimation:  Local maximum difference: The TD is estimated by detectingthe maxima within the two waveformsx and r. The respective maximum occurs when MS, RS and the rotation shield are exactly lined up (Fig. 8). One maximum occurs when the MS liesbetween the AS and RS. In this case the received power consists of line of sight signal and reflected signal from AS. The second maximum occurs when AS liesbetween MS and RS. The occurrence of the second maximum results from the power gain caused by signal diffraction around AS.

20/278

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Figure 9: Cross-correlation results.

The difference between the locations of the local maxima of the waveforms x and r corresponds to the rotation angle.

• Cross-correlation: The common method to determine the TD is cross-correlating the two signals from the two reference transceivers [3]. In this work a similar method is used: the TD is calculated independently by finding a specific template curve t within the signals x and r. To find the template curve location, the cross-correlation has been used. The template curve t is similar to a bell curve (see Fig. 7). The peak and the width of the template curve depend on the shape of the rotating shield and can be calculated beforehand by a suitable calibration. The cross-correlation value of the measured signal x and the template curve t at delay δ is defined as

R_xt[δ] = Σ_{i=1}^{N} (x[i] − m_x)(t[i − δ] − m_t) / sqrt( Σ_{i=1}^{N} (x[i] − m_x)² · Σ_{i=1}^{N} (t[i − δ] − m_t)² ),   (1)

where m_x and m_t are the means of x and t, respectively. The TD is indicated by the delay of the maximal correlation value, as shown in equation (2):

δ_max = arg max_i [R_xt(i)].   (2)
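The template-matching TD estimation just described can be sketched as follows, assuming the signal is periodic over one shield revolution so that delays are circular; the template shape and one-sample-per-degree resolution are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def norm_xcorr(x, t):
    """Normalised cross-correlation R_xt[delta] between a measured signal x
    and a template t, evaluated over all circular delays (the strength
    curve is periodic over one shield revolution)."""
    x = x - x.mean()
    n = len(x)
    r = np.empty(n)
    for delta in range(n):
        ts = np.roll(t, delta) - t.mean()
        r[delta] = (x * ts).sum() / np.sqrt((x ** 2).sum() * (ts ** 2).sum())
    return r

def estimate_delay(x, r_signal, template):
    """Locate the template in each waveform independently; the difference
    of the two delays measures the angle between the reference stations."""
    d1 = int(np.argmax(norm_xcorr(x, template)))
    d2 = int(np.argmax(norm_xcorr(r_signal, template)))
    return (d2 - d1) % len(x)

# Synthetic bell-shaped strength curves, 1 sample per degree, 90 deg apart.
angle = np.arange(360)
template = np.exp(-0.5 * ((angle - 180) / 20.0) ** 2)
x = np.roll(template, -30)     # RS1 curve peaking at 150 deg
r_sig = np.roll(template, 60)  # RS2 curve peaking at 240 deg
td_deg = estimate_delay(x, r_sig, template)  # 90 deg, as constructed
```

In the noiseless case the argmax recovers the constructed 90 deg separation exactly; with measured data the smoothing step above is what keeps the correlation peak well defined.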

By applying both methods to the signals from the previous example, the averaged TD between the two curves corresponds to 91.15 deg (Fig. 8) using the local maximum difference method and 90.27 deg (Fig. 9) using the cross-correlation method. The true angle is a right angle (90°) (Fig. 5). Since the reference points in the measuring lab are geodetically determined with high accuracy, the realization of the true angle can be assumed to be error-free. IV.

CONCLUSION

Besides position estimation for indoor applications, the orientation determination of objects and persons is also a challenging task. The proposed method of selective shadowing

of radio signals may be used to determine the angle of arrival of signals and thus contribute to the determination of the spatial orientation of a mobile user in a building. The results of a first experiment using a modified UWB transceiver as mobile station are promising. Further research is required to quantify the attainable accuracy depending on the geometric configuration, environmental conditions, signal propagation through materials (walls), and MS kinematics. This also includes UWB wave propagation simulation using ray-tracing methods and wave diffraction equations in order to determine the optimal attenuation shield dimension and position. The accuracy of the TD estimation is a function of the signal-to-noise ratio (SNR) and the angular sampling rate of the signal strength. By increasing the signal acquisition update rate and the UWB signal quality using the new-generation P410 UWB transceiver [10], the TD estimation error might be reduced significantly. Due to their lower power consumption, higher update rate and reduced size, the P410 radios could also be used for pedestrian navigation. Moreover, the presented time delay estimation method and additional methods, such as signal zero crossing or adaptive signal processing based methods, have to be examined and compared to each other [11, 12].

REFERENCES
[1] M. H. Afzal, V. Renaudin, and G. Lachapelle, "Assessment of Indoor Magnetic Field Anomalies using Multiple Magnetometers," Proc. ION GNSS, 2010.
[2] V. Y. Skvortzov, L. Hyoung-Ki, B. SeokWon, and L. YongBeom, "Application of Electronic Compass for Mobile Robot in an Indoor Environment," IEEE International Conference on Robotics and Automation, 2007, pp. 2963-2970.
[3] D. E. Grimm, "GNSS Antenna Orientation Based on Modification of Received Signal Strengths," Dissertation (Dr. sc. ETH), ETH Zurich, 2012.
[4] J. A. Hesch and S. I. Roumeliotis, "An indoor localization aid for the visually impaired," IEEE International Conference on Robotics and Automation, pp. 3545-3551, 2007.
[5] L. V. Kuylen, P. Nemry, F. Boon, A. Simsky, and J. F. M. Lorga, "Comparison of Attitude Performance for Multi-Antenna Receivers," European Journal of Navigation, vol. 4, no. 2, pp. 1-9, 2006.
[6] C. Kessler, N. Ascher, M. Frietsch, M. Weinmann, and G. Trommer, "Vision-based Attitude Estimation for Indoor Navigation using Vanishing Points and Lines," Proceedings of IEEE/ION PLANS, pp. 310-318, 2010.
[7] A. Norrdine, "Präzise Positionierung und Orientierung innerhalb von Gebäuden," Dissertation, Schriftenreihe Fachrichtung Geodäsie der TU Darmstadt, Heft 29, 2009.
[8] J. Blankenbach and A. Norrdine, "Mobile Building Information Systems based on precise Indoor Positioning," Journal of Location Based Services, vol. 5, no. 1, pp. 22-37, Taylor & Francis, 2011.
[9] C. Pflug, "Ein Bildinformationssystem zur Unterstützung der Bauprozesssteuerung," Dissertation, Schriftenreihe des Instituts für Baubetrieb der TU Darmstadt, D50, 2009.
[10] Time Domain Corporation, http://www.timedomain.com/
[11] C. Zhou, C. Qiao, S. Zhao, W. Dai, and L. Li, "A zero crossing algorithm for time delay estimation," IEEE 11th International Conference on Signal Processing (ICSP), pp. 65-69, 2012.
[12] S. Park and Y. T. Kim, "Adaptive signal processing algorithms for time delay estimates and tracking," Proceedings of the 20th Southeastern Symposium on System Theory, pp. 433-437, 1988.



Accuracy of an indoor IR positioning system with least squares and maximum likelihood approaches

F. Domingo-Perez, J. L. Lázaro-Galilea, E. Martín-Gorostiza, D. Salido-Monzú
Department of Electronics
University of Alcalá, Alcalá de Henares, Spain
[email protected]

A. Wieser
Institute of Geodesy and Photogrammetry
ETH Zürich, Zürich, Switzerland
[email protected]

Abstract—This paper focuses on the predicted accuracy of indoor positioning of mobile clients emitting modulated infrared (IR) signals. The related positioning system makes use of several anchor nodes receiving the IR signal and measuring phase differences of arrival, which are converted into range difference values. These range difference values are used with hyperbolic trilateration to estimate the position of the respective emitter. This work deals with two issues of localization techniques: the selection of the best sensor subset and the selection of the optimum algorithm in terms of localization accuracy and computational effort. We compare nonlinear least squares (NLS) and maximum likelihood estimation using the Cramer-Rao lower bound as a benchmark to select the appropriate sensor subset and the optimum algorithm in each case. Results show that accurate results can be obtained by neglecting the correlations and using NLS with the best sensor subset according to the sensor-target geometries.

Keywords: least squares approximation; maximum likelihood estimation; phase difference of arrival; source location

I. INTRODUCTION

Sensor resource management (SRM) [1] is related to localization techniques in the sense that SRM deals with sensor placement for optimal localization (e.g. the highest accuracy over a whole area) and with the selection of the best sensor configuration among the available options (sensor subset selection). This paper shows the effect of the latter in a Phase Difference of Arrival (PDOA) infrared (IR) localization system. The effect of selecting a sensor subset that provides good geometry conditions is analyzed by simulation for maximum likelihood estimation (MLE) and nonlinear least squares (NLS). The paper is organized as follows. Section II gives an overview of PDOA localization. The IR system is briefly described in Section III. Section IV shows the simulation scenario and results. Finally, Section V provides the conclusions of the study.

II. POSITION ESTIMATION WITH PDOA

N anchor nodes measure the phase of arrival of a sinusoidally modulated IR signal transmitted from a board that acts as the target. Pairing a reference node with the remaining N-1 nodes and differencing their phase measurements gives a set of N-1 PDOA values. These values can be converted into range differences, resulting in a set of N-1 hyperbolae that intersect in a single point in the absence of noise, whereas in real conditions the point of intersection must be estimated.

The N-1 range differences can be expressed as a function of the parameters to be estimated (θ = [x y z]^T):

Δd = f(θ) + ε,   (1)

where Δd is the (N-1)-dimensional measurement vector, f(θ) is the noiseless range difference function vector in terms of θ, and ε represents the deviation. Solving (1) by NLS or MLE means minimizing a Euclidean distance (2) or a Mahalanobis distance (3), respectively:

(Δd − f(θ))^T (Δd − f(θ)),   (2)

(Δd − f(θ))^T Σ^{-1} (Δd − f(θ)).   (3)

We apply the iterative Gauss-Newton algorithm to solve (2) and (3) due to the nonlinearity of f(θ):

θ^{k+1} = θ^k + (J^T W J)^{-1} J^T W (Δd − f(θ^k)),   (4)

where the superscripts k and T denote the iteration index and the transpose operator, respectively. J is the Jacobian matrix of f(θ) evaluated at θ^k and W is a weight matrix: in the case of MLE, W = Σ^{-1}, whereas W is the identity matrix for NLS.

III. SYSTEM DESCRIPTION

The system contextualizing this work achieves range difference estimation by measuring the phase differences of a modulated IR signal continuously reaching different receivers. The IR emitter, mounted on the robot to be positioned, generates an 8 MHz intensity-modulated signal driving a wide-angle IR LED at 940 nm. The receivers, placed at fixed and known positions on the ceiling of the area, consist of a low-level conditioning stage adapting the photocurrent generated by a wide-angle silicon PIN photodiode. The outputs of the receivers are simultaneously digitized, and range difference measurements are estimated from the resulting sequences [2].
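The Gauss-Newton iteration described above can be sketched as follows for a hypothetical 2D cell (the anchor layout and starting point are illustrative, not the paper's deployment); W = identity reproduces NLS, and passing W = inv(Σ) would give the MLE weighting.

```python
import numpy as np

def range_diffs(theta, anchors):
    """f(theta): range differences w.r.t. the reference anchor (index 0)."""
    d = np.linalg.norm(anchors - theta, axis=1)
    return d[1:] - d[0]

def jacobian(theta, anchors):
    """Jacobian of f(theta): differences of unit vectors from anchors to target."""
    diff = theta - anchors
    d = np.linalg.norm(diff, axis=1)
    u = diff / d[:, None]
    return u[1:] - u[0]

def gauss_newton(dd, anchors, theta0, W=None, iters=20):
    """theta_{k+1} = theta_k + (J^T W J)^{-1} J^T W (dd - f(theta_k)).
    W = identity gives NLS; W = inv(Sigma) gives the MLE weighting."""
    theta = np.asarray(theta0, dtype=float)
    if W is None:
        W = np.eye(len(dd))
    for _ in range(iters):
        r = dd - range_diffs(theta, anchors)
        J = jacobian(theta, anchors)
        theta = theta + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return theta

# Hypothetical 2D cell: reference anchor at the origin plus three others.
anchors = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
true_pos = np.array([1.2, 2.1])
dd = range_diffs(true_pos, anchors)                # noiseless measurements
est = gauss_newton(dd, anchors, theta0=[1.5, 1.5])
```

With a good geometry and a reasonable initial guess the iteration converges in a handful of steps, which is why the weight matrix (and hence the correlation structure) is the only cost difference between the NLS and MLE variants.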

978-1-4673-1954-6/12/$31.00 ©2013 IEEE



The phase measurement noise is modeled as a zero-mean normal distribution whose variance is inversely proportional to the output signal-to-noise ratio of the receiver. The phase error is converted into a distance error using the modulation frequency and the propagation speed in vacuum (c = 3·10^8 m/s). Five sensors are deployed to cover a regular square area of 9 m²; they are placed in the corners and in the center according to [3] (the height of the sensors is 2.80 m). The height of the emitter is constant and known (0.65 m). IV.

RESULTS

Results have been obtained with 5000 Monte Carlo runs. Fig. 1 shows the positioning cell under test. 21 test points have been evaluated with three, four and five sensors using MLE and NLS. The RMSE is compared with the square root of the trace of the Cramer-Rao Lower Bound (CRLB) and plotted in Figs. 2, 3 and 4; the figure captions indicate the sensors in use. When just three sensors are used, we neglect the measurements of the farther sensors (II and III, see Fig. 1), which are added in the subsets containing four and five sensors; sensor V is always the reference. Figs. 2, 3 and 4 show that we can neglect the correlations of the measurements using NLS and still reach the CRLB by selecting the appropriate subset, which is depicted in Fig. 1 for each sensor configuration.

Figure 1. Optimum sensor subset selection (the original figure shows sensors I-V and the 21 numbered test points over the square cell, axes in m).
Figure 2. MLE vs. NLS, sensors I, IV and V.
Figure 3. MLE vs. NLS, sensors I, II, IV and V.
Figure 4. MLE vs. NLS, sensors I, II, III, IV and V.

V. CONCLUSIONS

This paper has shown how important it is to take SRM into consideration in an IR positioning system. Selecting the best subset in relation to the geometry of the cell and the point of interest allows the application of lower-complexity algorithms and avoids the computation of the correlations. Further research will include a study of the computation time that can be saved and the derivation of an indicator to select the best subset.

ACKNOWLEDGMENT

This research was supported by the Spanish Research Program through the project ESPIRA (ref. DPI2009-10143). F. Domingo-Perez thanks the FPU program (Ministerio de Educación, Cultura y Deporte, Spanish Government, 2012).

REFERENCES
[1] C. Yang, L. Kaplan, and E. Blasch, "Performance measures of covariance and information matrices in resource management for target state estimation," IEEE Trans. Aerosp. Electron. Syst., vol. 48, no. 3, pp. 2594-2613, Jul. 2012.
[2] E. M. Gorostiza, J. L. Lázaro Galilea, F. J. Meca Meca, D. Salido Monzú, F. Espinosa Zapata, and L. Pallarés Puerto, "Infrared sensor system for mobile-robot positioning in intelligent spaces," Sensors, vol. 11, no. 5, pp. 5416-5438, May 2011.
[3] Y. Chen, J.-A. Francisco, W. Trappe, and R. P. Martin, "A practical approach to landmark deployment for indoor localization," in 3rd Annu. IEEE Commun. Soc. Conf. on Sensor and Ad Hoc Communications and Networks (SECON'06), Reston, VA, 2006, pp. 365-373.


An indoor navigation approach for low-cost devices

Andrea Masiero, Alberto Guarnieri, Antonio Vettore and Francesco Pirotti
Interdepartmental Research Center of Geomatics (CIRGEO)
University of Padova, Padova, Italy
[email protected]

Abstract—The increasing diffusion of low-cost devices is motivating the development of an ever-growing number of mobile applications. On the other hand, indoor navigation has recently become a topic of wide interest, especially thanks to its possible use in socially relevant applications (e.g. indoor localization during emergencies). Motivated by these considerations, this paper proposes a Bayesian probabilistic approach to the problem of indoor navigation with low-cost mobile devices (e.g. smartphones). The proposed approach deals with the unavailability of the GPS signal by integrating geometric information on the environment with measurements provided by the inertial navigation system and the radio signal strength of a standard wireless network. The proposed system takes advantage of the sensor measurements to build a statistical model of the characteristics of the environment. The estimated model of the environment is used to improve the localization ability of the navigation device.

Keywords: indoor navigation; sensor fusion; nonlinear filtering

I. INTRODUCTION

The capillary diffusion of smartphones and tablets and the unreliability of the GPS signal [9, 10] in certain operating conditions (e.g. indoor environments) are motivating an increasing interest in the development of alternative navigation systems for low-cost mobile devices. Several approaches have been considered in the literature [1, 2, 4, 7, 13] to deal with the lack of the GPS signal, most of them based on the use of Inertial Navigation System (INS) measurements and on the Radio Signal Strength (RSS) of wireless networks. However, the results that can be obtained by using INS or RSS separately are often unsatisfactory. On the one hand, because of the low reliability of INS measurements, position estimation systems based on such updates quickly drift from the real track. On the other hand, WiFi signal instability and changes in the environment do not allow a sufficiently small position estimation error for systems based exclusively on the RSS. Hence, it is nowadays commonly accepted that indoor navigation systems have to integrate different sensors (e.g. INS and RSS measurements) to deal with the unreliability (or lack) of the GPS signal. In this direction, several approaches have recently been proposed in the literature [5, 8, 12]. This paper considers a Bayesian approach to tackle the indoor pedestrian navigation problem, where information from INS and RSS measurements is integrated with a priori geometrical and physical information on the environment.

Information integration is formulated as a nonlinear optimization problem, and effective tracking is obtained by means of a multiple hypothesis approach. Furthermore, sensor measurements can also be used to improve the model of the environment and to detect regions with specific characteristics (e.g. landmarks [11]): such characteristics can be used to improve the subsequent performance of the navigation system. The results obtained in our simulations in a university building suggest that the proposed approach allows good navigation accuracy to be obtained using low-cost devices (provided with a minimum number of navigation sensors). II.

SYSTEM DESCRIPTION

A. Characterization of the Navigation System In this work it is assumed that the device used for navigation satisfies the following requirements: it is provided with a 3-axis accelerometer and a 3-axis magnetometer. Furthermore, since in terrestrial applications the height with respect to the floor is usually of minor interest, navigation is considered as a planar tracking problem. However, certain exceptions to the last assumption are admitted, e.g. to deal with stairs and lifts. Since this work is mainly motivated by the interest in developing a navigation system for (standard) low-cost mobile devices, the simulations have been performed using standard smartphones, thus without any external sensor. Let (u_t, v_t) be the position of the smartphone, expressed with respect to the North and East directions, at time t, and let (u_s, v_s, w_s) be the smartphone (local) coordinate system. Then, conventionally, the heading direction is assumed to approximately correspond to one of the axes of the coordinate system (the system is designed to estimate and correct heading direction discrepancies, with absolute value lower than 15 degrees, with respect to such direction). The rationale is that of using a dead reckoning-like approach: a proper analysis of the accelerometer measurements allows the human steps to be detected, while the magnetometer allows the movement direction with respect to North to be estimated. In order to make the estimation method more robust, such information is integrated with that provided by RSS measurements and the geometrical characteristics of the building. RSS measurements of the standard WiFi networks are provided by the corresponding sensor in the smartphone.
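The dead-reckoning core of this approach — position propagated from detected step lengths and magnetometer headings — can be sketched as follows; the step values are illustrative, not measured data.

```python
import math

def dead_reckon(u0, v0, steps):
    """Propagate a (North, East) position from detected steps.
    Each step is (length_m, heading_rad), heading measured from North."""
    u, v = u0, v0
    track = [(u, v)]
    for length, heading in steps:
        u += length * math.cos(heading)   # North component
        v += length * math.sin(heading)   # East component
        track.append((u, v))
    return track

# Four 0.7 m steps heading due East (pi/2 from North).
track = dead_reckon(0.0, 0.0, [(0.7, math.pi / 2)] * 4)
```

Each step adds its heading-projected length to the running position; without the RSS and map constraints described next, heading errors in this update are exactly what makes the raw track drift.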


B. Dynamic Model of the System

The measurements available to the navigation procedure are the lengths {s_t} of the steps [6], the angles {α_t} (on the horizontal plane) with respect to the North direction, and the RSS measurements. Here the index t in s_t indicates the progressive step number; analogously, α_t denotes the angle associated with the t-th step. Exploiting an RSS channel model [4], the RSS measurements are converted into distance measurements: d_{j,t} is the distance measurement at the t-th step with respect to the j-th Access Point (AP) (d_{j,t} may be empty), and d_t is the vector formed by the set of distances {d_{j,t}}, for all j. Assuming that the starting position (u_0, v_0) is known, the information provided by {u_t, v_t} and {α_t, s_t} is equivalent:

[u_{t+1}; v_{t+1}] = [u_t; v_t] + s_t [cos α_t; sin α_t].   (1)

The system dynamics and the measurements are modeled as follows:

[α_{t+1}; s_{t+1}] = [α_t; s_t] + w_t,   (2)

y_t = C_t ([α_t; s_t; d_t] + [b_{α,t}; 0; 0]) + ε_t,   (3)

where w_t and ε_t are assumed to be independent zero-mean Gaussian white noises, C_t is a matrix formed by ones and zeros that selects the measurements available at step t, and b_{α,t} is a bias in the direction measurement. Measurements of α_t, s_t and d_t are assumed to be independent, and the measurement errors are assumed to be Gaussian (the measurement errors of s_t and d_t are assumed to be zero-mean, whereas, in accordance with the bias assumption stated above, the error of α_t is assumed to have mean b_{α,t}, which can be estimated from the data).

III. LOCALIZATION

A. Multiple Hypothesis Tracking

Following a Bayesian approach, the tracking algorithm estimates the position of the device by integrating geometric information on the environment with that provided by the sensor measurements. Specifically, the spatial domain of interest (in our case the three floors of the building) is partitioned into a set of L disjoint regions. Then the discrete variable λ_t is defined to be equal to i if the position of the device is inside region i at time t. Thus λ_t is a discrete state variable, and the vector Λ_t, formed by the values of λ_t collected from time 0 to t, represents a rough description of the temporal track of the smartphone from time 0 to t. The problem of estimating the positions X_t of the device can then be formulated as

(X̂_t, Λ̂_t) = arg max_{X, Λ} { −Σ_t (α_t(u_t, v_t) − α_t)²/σ_α² − Σ_t (s_t(u_t, v_t) − s_t)²/σ_s² − Σ_{j,t} (d_{j,t}(u_t, v_t) − d_{j,t})²/σ_d² + log p(X_t | Λ_t, G) + log p(Λ_t | G) }.   (4)

This problem is solved by using interior point methods and by properly setting the initial guess of the solution [3].

B. Estimation of System Characteristics

Measurements provided by the sensors are used to estimate specific characteristics of the system and of the environment. In particular, the influence of sensor errors on the orientation measurements (provided by the magnetometer) is estimated and (partially) corrected online.

IV. RESULTS

A low-cost smartphone, a Huawei Sonic U8650, has been used to validate the proposed navigation system in a building of the University of Padova. Considering tracks of approximately 600 steps, the mean estimation error of the current position is 2.5 m, whereas with fixed time-delayed estimation the mean error is 2.3 m.

REFERENCES
[1] M. Barbarella et al., "Improvement of an MMS trajectory, in presence of GPS outage, using virtual positions," ION GNSS 2011.
[2] M. Barbarella, S. Gandolfi, A. Meffe, and A. Burchi, "A test field for Mobile Mapping System: design, set up and first test results," MMT 2011.
[3] B. M. Bell, J. V. Burke, and G. Pillonetto, "An inequality constrained nonlinear Kalman-Bucy smoother by interior point likelihood maximization," Automatica, vol. 45, no. 1, pp. 25-33, January 2009.
[4] A. Cenedese, G. Ortolan, and M. Bertinato, "Low-density wireless sensor networks for localization and tracking in critical environments," IEEE Trans. on Vehicular Technology, vol. 59, no. 6, pp. 2951-2962, July 2010.
[5] N. El-Sheimy, K.-W. Chiang, and A. Noureldin, "The utilization of artificial neural networks for multisensor system integration in navigation and positioning instruments," IEEE Trans. on Instrumentation and Measurement, vol. 55, no. 5, pp. 1606-1615, October 2006.
[6] J. Jahn, U. Batzer, J. Seitz, L. Patino-Studencka, and J. Gutiérrez Boronat, "Comparison and evaluation of acceleration based step length estimators for handheld devices," IPIN 2010, pp. 1-6, 2010.
[7] A. R. Jimenez Ruiz, F. S. Granja, J. C. Prieto Honorato, and J. I. G. Rosas, "Accurate pedestrian indoor navigation by tightly coupling foot-mounted IMU and RFID measurements," IEEE Trans. on Instrumentation and Measurement, vol. 61, no. 1, pp. 178-189, Jan. 2012.
[8] C. Lukianto and H. Sternberg, "Stepping - Smartphone-based portable pedestrian indoor navigation," Archives of Photogrammetry, Cartography and Remote Sensing, vol. 22, pp. 311-323, 2011.
[9] M. Piras and A. Cina, "Indoor positioning using low cost GPS receivers: Tests and statistical analyses," IPIN 2010.
[10] M. Piras, G. Marucco, and K. Charqane, "Statistical analysis of different low cost GPS receivers for indoor and outdoor positioning," PLANS 2010.
[11] H. Wang, S. Sen, A. Elgohary, M. Farid, M. Youssef, and R. R. Choudhury, "No need to war-drive: Unsupervised indoor localization," MobiSys 2012.
[12] Widyawan, G. Pirkl, D. Munaretto, C. Fischer, C. Ane, et al., "Virtual lifeline: Multimodal sensor data fusion for robust navigation in unknown environments," Pervasive and Mobile Computing, vol. 8, pp. 388-401, 2012.
[13] M. Youssef and A. Agrawala, "The Horus WLAN location determination system," MobiSys '05, pp. 205-218, 2005.


ARIANNA: a Two-stage Autonomous Localisation and Tracking System

Enrico de Marinis, Fabio Andreucci, Otello Gasparini, Michele Uliana, Fabrizio Pucci, Guido Rosi, Francesca Fogliuzzi
R&D and Automation Dept., DUNE s.r.l., Rome, Italy
[email protected]

Abstract—ARIANNA is a small-size system, wearable by an operator for his/her localisation and tracking. Its design stems from the following assumptions: no need of infrastructure for localisation; low cost; no need of warm-up time (e.g. training phases); seamless switching between GPS-denied and GPS-available conditions; computational requirements relaxed enough to be hosted on a commercial smartphone. ARIANNA meets these objectives by adopting a novel two-stage approach: the former stage is a conventional tracking process based on an Extended Kalman Filter and step detection; the latter is a post-processing stage in which the errors due to the sensor drifts are estimated and compensated. The system has been extensively tested with various sensors and different operators, in clear and magnetically polluted environments, with good and poor/intermittent GPS, on paths ranging from 300 m to 3 km, each walked at mixed speeds. The results systematically show good and repeatable performance.

Keywords: IMU, PDR, compass, GPS, tracking, calibration, localisation, pedestrian, indoor positioning, multi-sensor navigation, human motion models

I. INTRODUCTION

Substantial efforts and resources have been steered in the past decade toward INSs (Inertial Navigation Systems) for human tracking and localisation based on IMUs (Inertial Measurement Units) built on MEMS (Micro Electro-Mechanical Systems) technology [1], [2]. The major attraction is that these devices might provide low-cost, low-power, miniaturized, lightweight and infrastructure-less solutions for accurate navigation in GPS-denied scenarios. However, they suffer from significant bias, noise, scale factors, temperature drifts and a limited dynamic range, resulting in position deviation and magnification of the angular Abbe error. These drawbacks de facto prevent the use of MEMS IMUs for long-range localisation. As a consequence, it is not surprising that most of the efforts in recent years address a widespread ensemble of techniques to improve the localisation capabilities of MEMS-based INSs for pedestrians. Most of the techniques rely on PDR (Pedestrian Dead Reckoning) [1], where the walking behavior is exploited to reset the INS errors by adopting an ECKF (Extended Complementary Kalman Filter). Other approaches achieve better performance by exploiting ancillary sensors, such as a compass [3], or by visual-inertial odometry [4]. Also independent, pre-existing sources of

information are exploited, such as RFID tags [5] or "map matching" techniques [6]. Recent trends jointly exploit multiple sensor readings (e.g. compass, barometer, RFID tags) in a UKF (Unscented Kalman Filter) structure [7]. However, scrutinizing the current state of the art, it can be highlighted that a common factor shared by all the approaches is the adoption of a unique, powerful, sophisticated processing stage, fusing multiple input data coming from heterogeneous sensors, usually sampled at different rates and with different relative delays, trying to provide the best possible output. This pushes up the HW complexity and poses a constraint on the battery drain of a wearable system, as well as on its cost, weight and size. In addition, some sensors need a mandatory calibration phase before operations: gyroscope biases and scale factors drift with temperature, and magnetometers need Hard-Iron Calibration (HIC) and Soft-Iron Calibration (SIC). The lack of gyro calibration introduces an amplification of the Abbe error, and uncalibrated magnetometers can significantly magnify the position errors when they are exploited to reduce the inertial angular drifts. Despite the plethora of calibration methods for gyros and magnetometers [8], [9], some MEMS-based IMUs and compasses also suffer a long-term obsolescence of the calibration (e.g. a few months for gyros and even 1-2 weeks for magnetometer HIC). This would imply re-calibration on a regular basis: an unacceptable task from the end-user perspective. In this paper we describe ARIANNA, a novel comprehensive system for the tracking of pedestrian operators. The key assumptions and requirements of ARIANNA stem from a long phase of analysis performed in collaboration with end-users (e.g. firefighters, army, speleologists).

26/278

• Low cost, small-size and lightweight system, smoothly wearable by an operator, with at least 4 hours of battery life with no recharge.

• Unavailability of any ancillary infrastructure for localisation, either pre-existing or to be deployed during the operations.

• Zero-touch interaction with the operator, no need of warm-up times, training phases or constraints on the initial path to be walked.

• No calibration tasks to be performed by the end-users.

• Performance independent of the number of operators.

• Computational requirements relaxed enough to be hosted (as an option) in a commercial smartphone.

These objectives are met by adopting a novel two-stage approach: the former is a conventional PDR based on ECKF; the latter is a post-processing in which sensor drifts are estimated and compensated. The data coming from the GPS (when available) and from the compass (when reliable) can be exploited in both stages. This paper is organized as follows: the proposed ARIANNA system and its post-processing are illustrated in Section II, whereas its performance is assessed in Section III. Finally, in Section IV some conclusions are provided. II.

ARIANNA SYSTEM

ARIANNA is a light, smoothly wearable and highly customizable localisation and tracking system for the remote tracking of pedestrians, seamlessly managing the presence/absence of the GPS signal. Its basic components are:

• a miniaturized IMU+compass shoe-fastened unit, small enough to also be sealed into the heel;

• a wearable computing and transmission unit, also equipped with GPS, where the PDR processing is performed (it can range from a smartphone to dedicated pocket-size HW, depending on the end-user's needs);

• a remote receiving and visualisation unit (e.g. a commercial, mid-level PC) where the ARIANNA proprietary post-processing is performed.

As illustrated in Fig. 1, the raw sensor data from the shoe-mounted unit can be linked to the processing unit by a wireless (e.g. BT) link or by a waterproof cable (e.g. when the operators walk in partially flooded environments). In the wireless version, the sensor unit comes with a battery ensuring 4 hours of continuous operation, and recharging can be done with a proprietary RF device (working at 150 kHz), avoiding the need for accessible plugs (e.g. when the sensor is sealed inside the heel). The position data are computed by the processing unit (power consumption 1.2 W); these data are transmitted to the remote command and control center (C2), where the ARIANNA post-processing for the drift compensation is performed and the tracking data are displayed in 3D. The bandwidth needed for each operator on the user-C2 link is so small (50 bps) that a commercial digital radio modem (260-485 MHz band, 38-57 kbps) can in principle accommodate hundreds of simultaneous transmissions. So far, 3G/4G cellular links and commercial radio modems have been employed over virtually unlimited and 2-3 km ranges, respectively. A schematic block diagram of the whole processing chain of ARIANNA is depicted in Fig. 2. A purely inertial tracking is computed in the wearable processing unit; this PDR is performed at the sensor sample rate (e.g. 400 Hz) and is expected to be affected by significant drifts, as no information coming from the ancillary sensors (compass, GPS) is exploited at this stage. The uncompensated tracking data (along with the raw compass and GPS data, if available), transmitted to the C2 at a much lower rate (e.g. 1-2 Hz), are subsequently employed in a joint scheme to estimate the HIC of the compass. The normalized GPS data (if and when reliable) and the compensated compass data are subsequently employed to estimate the positioning drift parameters, so as to compensate for them in the last processing step. It should be highlighted that the compass data, even if corrupted by local polarization and interference, are always available, whereas GPS data can appear and disappear in an unpredictable way: the ARIANNA post-processing automatically handles this without any special logic, thus ensuring seamless indoor/outdoor operations (e.g. a continuous walk inside and outside buildings).

Figure 1. Basic elements of ARIANNA system.

Figure 2. Functional block diagram of ARIANNA processing chain: high-rate local processing (gyroscope and accelerometer feeding step detection and PDR with ECKF) and low-rate remote post-processing (compass HIC estimation, drift factors estimation, compensation and position adjustment), aided by compass and GPS inputs with associated reliability indicators.

2013 International Conference on Indoor Positioning and Indoor Navigation, 28th-31st October 2013

Beyond the performance improvement expected from the joint exploitation of the independent information coming from the GPS and compass, ARIANNA offers some additional advantages at system level. The processing performed at the higher data rate is a PDR based on an ECKF in a minimal-complexity configuration, as no further correction/compensation is attempted at this stage: this minimises the hosting hardware complexity and cost, and the associated battery drain. In addition, the uncompensated position data are transmitted at rates as low as 1-2 Hz (enough to ensure effective post-processing), and this slow transmission rate further shrinks the power needed for data delivery and the bandwidth to be allocated. On the post-processing side, the low data rate and the absence of complex algorithms are the key factors that let the proprietary estimation/compensation algorithms run on any commercial mid-level PC. From the operational point of view, gyro biases are usually estimated by requiring the operator to stand still for a few tens of seconds before moving, and the HIC and SIC parameters can be roughly estimated by requiring the operator to walk a circle or an 8-shaped path. ARIANNA has no such requirements: the operator's interaction with the system is basically zero-touch, letting him/her focus on the mission; constraints such as still periods and/or constrained paths are unacceptable to some classes of end-users (e.g. soldiers, firefighters). As a last consideration, looking at the ARIANNA system as a whole, the mitigated requirements on calibration, power, bandwidth and hardware leave significant room for customisation.

In the following, the results of three experiments are provided. In Fig. 3 no GPS is employed and ARIANNA relies solely on the uncalibrated compass to compensate drifts. The experiment is a 2.53 km path walked back and forth on a long straight road, then entering a large building and finally returning to the starting point. The PE metric in this case is 0.51%. In the vertical plane (not reported here) the PDR is affected by a constant drift, leading to a final vertical position error of 45 m, whereas ARIANNA never exceeds 1.5 m of vertical position error along the whole experiment, with an error at the end point of 1 cm.

TABLE I. MEAN AND S.D. OF THE PERFORMANCE METRICS

            |        No GPS          |  GPS (urban/suburban)
            |  PE %      SFI (0-10)  |  PE %      SFI (0-10)
  PDR       |  9.6±12.4   3.8±2.8    |    -          -
  PDR+MAG   |  7.0±6.0    4.1±3.0    |    -          -
  ARIANNA   |  1.75±2.3   8.4±1.4    |  0.8±1.1    9.1±0.5

III. EXPERIMENTAL RESULTS

ARIANNA has been widely and extensively tested with various sensors and different operators, in clean and heavily magnetically polluted environments, with straight and random paths ranging from 300 m to 3 km, each walked at mixed speeds ranging from 0 km/h (long still periods) up to 8 km/h. Usually the performance is measured by walking closed paths and adopting the metric PE = ||r0 − re||/L, i.e. the distance between the starting and final positions (r0 and re, respectively) as a percentage of the walked distance L. However, this metric can be somewhat misleading, as it does not account for the departure of the estimated path from the ground truth: e.g. two distinct angular errors might compensate each other, leading to a small PE score despite the poor similarity of the path to the ground truth. In the absence of a calibrated testbed enabling point-by-point differential measures, we also introduce the (subjective) SFI (Shaping Fidelity Index), roughly ranking the similarity between the estimated path and what we know to be the ground truth (0 = no similarity, 10 = excellent match). Table I summarises the mean values and the SD of the PE and SFI metrics, estimated over 36 heterogeneous experiments. From the table, the significant boost of ARIANNA w.r.t. PDR and PDR+MAG (i.e. PDR with magnetic drift reduction) is apparent, both for the PE metric and the SFI index. It should also be considered that PDR additionally benefits from a calibrated compass and an initial still period (gyro bias estimation), whereas ARIANNA does not.
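The PE metric above is a one-line computation; a minimal sketch (the endpoint coordinates below are made up purely for illustration):

```python
import math

def position_error_pct(start, end, walked_distance_m):
    """PE = ||r0 - re|| / L, expressed as a percentage of the walked distance L."""
    drift = math.dist(start, end)          # metres between start and end points
    return 100.0 * drift / walked_distance_m

# Hypothetical closed-path walk: start and end points 13 m apart after 2530 m walked.
pe = position_error_pct((0.0, 0.0), (5.0, 12.0), 2530.0)
print(f"PE = {pe:.2f}%")
```

A 13 m start-to-end gap over a 2.53 km walk gives PE ≈ 0.51%, the same order as the first experiment reported above.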

Figure 3. Estimated paths by PDR (black) and ARIANNA exploiting only compass data (red); walked distance: 2.53 km.

Figure 4. GPS (green) and ARIANNA exploiting both GPS and compass data (black) and only compass (red); walked distance: 1.4 km.


The experiment in Fig. 4 was performed in a typical dense urban environment. Both GPS and compass are employed in ARIANNA. The path length is 1.40 km, with a long section walked in an underground metro station, where the GPS, although still available, is definitely unreliable. The underground path estimated by ARIANNA is reported in red in Fig. 4, and a detail is provided in Fig. 5. The PE metric for this experiment is 0.71% (for GPS it is 0.4%). In Fig. 4, large fluctuations can be noticed for GPS: they are mainly due to the typical multipath effects of urban environments; by contrast, ARIANNA preserves a better resemblance to the ground truth. Also in this case (not reported in the figures) the vertical drift of the PDR leads to a final vertical position error of 24 m, whereas the ARIANNA vertical error at the final point is 0.2 m (the corresponding GPS error is 5 m, with fluctuations as large as 20 m along the whole experiment).

The experiment in Fig. 6 consists of 6 rounds of a soccer pitch (plus some random walking at the beginning and at the 5th round), for a total walked length of 2.32 km. In this case the uncalibrated compass data have been employed to correct the PDR estimation, an operation that yields an effective improvement when the compass is properly calibrated; here, however, the lack of calibration results in a dramatic loss of performance for PDR+MAG. By contrast ARIANNA, although exploiting the same uncalibrated compass, performs well, giving a final PE = 0.31%. In addition, the final vertical error is 3 m for PDR and only 1 cm for ARIANNA.

Figure 5. Detail of Fig. 4, relevant to the underground metro station.

Figure 6. PDR with compass aiding (black) and ARIANNA (red), both exploiting the same uncalibrated compass data; walked distance: 2.32 km.

IV. CONCLUSIONS

In this paper ARIANNA, a novel, customizable pedestrian positioning and tracking system specifically designed for low-cost MEMS-based IMUs, has been presented. It splits the path estimation and the drift compensation into two separate processing structures: the former, hosted on the wearable computing unit of the operator, operates at a higher rate; the latter, hosted on the remote receiver side, operates at a much slower rate. An extensive validation campaign, performed under a wide range of experimental conditions, has systematically demonstrated the superior performance of ARIANNA w.r.t. PDR and, more importantly, a good repeatability. Current work on further improvement focuses on configurations with IMUs mounted on both shoes and on the management of lifts and elevators. In conclusion, ARIANNA is a mature system in which electronics, logistics, recharging, processing and visualization have been designed not just for demonstration, but for use in real operations.


Source Localization by Sensor Array Processing using a Sparse Signal Representation

Joseph LARDIES and Marc BERTHILLIER
Institute FEMTO-ST DMA, Besançon, France
[email protected], [email protected]

Abstract—The objective of this communication is the localization of emitting sources by an array of sensors. The sources are narrowband or wideband and can be correlated or uncorrelated. The method is based on a sparse representation of sensor measurements with an overcomplete basis composed of time samples from the sensor array manifold. A new adaptation approach of sparse signal representation, based on a compromise between the residual error and the sparsity of the representation, is proposed. An important part of our source localization technique is the choice of the regularization parameter, which balances the fit of the solution to the data against the sparsity prior. An appropriate regularization parameter, which handles a reasonable tradeoff between finding a sparse solution and restricting the recovery error, is obtained using three approaches. Simulations and experimental results demonstrate that the proposed method of selecting the regularization parameter can effectively suppress spurious peaks in the spatial spectrum, so that only the correct sources are localized.

Keywords: array processing; source localization; sparse representation; high-resolution

I. INTRODUCTION

Source localization using sensor array processing is an active research area, playing a fundamental role in many applications such as electromagnetic, acoustic and seismic sensing. The receiving sensors may be any transducers that convert the received energy to electrical signals, and an important goal for source localization methods is to be able to locate closely spaced sources in the presence of noise. There are many high-resolution algorithms for source localization, such as MUSIC and ESPRIT [1,2], which require the computation of the sensor output covariance matrix and knowledge of the signal subspace. We propose a different approach for source localization, based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. The method uses the l1-norm penalty for sparsity and the Frobenius-norm penalty for the noise or residual error. However, this method needs a regularization parameter [3-6] which handles a reasonable tradeoff between finding a sparse solution and restricting the recovery error. If the regularization parameter is too low there are many spurious sources; inversely, if it is too high some sources are dismissed. In [7] Zheng et al. presented a sparse spatial spectrum, but the regularization parameter was chosen arbitrarily. In this communication, the selection of the regularization parameter is analyzed using three methods: the L-curve method, the analysis of the Chi-square distribution of the recovery error, and the analysis of the Rayleigh distribution of the recovery error. Numerical and experimental results are presented.

II. SOURCE LOCALIZATION FRAMEWORK

A. Source Localization Problem
The goal of sensor array source localization is to find the locations of sources that impinge on an array of sensors. To simplify the exposition we only consider the far-field scenario and confine the array to a plane. The available information is the geometry of the array, the parameters of the medium where sources propagate, and the measurements on the sensors. Consider an antenna array of N elements and assume P signals impinge on the array from unknown directions θ1, θ2, …, θP. The array output can be described as [6-8]:

  y(t) = A(θ) s(t) + b(t) = Σ_{i=1}^{P} a(θi) si(t) + b(t)    (1)

where y(t) is the array output, s(t) is the complex amplitude of the signal field, b(t) is additive Gaussian noise, and A(θ) = [a(θ1), a(θ2), …, a(θP)] is the (N×P) array manifold matrix. The goal is to find the unknown directions {θ1, …, θP} of the sources from the observation y(t), when the number of samples is small (fewer than 50), using a sparse signal representation.

B. Source Localization by Sparse Representation
We formulate the source localization problem given in (1) from a sparse signal reconstruction perspective. For this formulation one defines an overcomplete matrix A containing all possible source locations. Let {θ̃1, θ̃2, …, θ̃L} be a sampling grid of L possible source locations. In the far field this grid contains the directions of arrival, and in the near field it contains bearing and range information. We assume that {θ1, …, θP} ⊂ {θ̃1, θ̃2, …, θ̃L}, and model (1) can be reformulated as:

  y(t) = A(θ̃) x(t) + b(t)    (2)
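Model (1)-(2) is straightforward to simulate. The sketch below (the array geometry, grid and source angles are illustrative choices, not necessarily the paper's exact setup) builds the N×L overcomplete steering matrix for a half-wavelength uniform linear array and generates noisy multi-snapshot data:

```python
import numpy as np

rng = np.random.default_rng(0)

N, T = 6, 50                                # sensors and snapshots, as in the examples
grid = np.deg2rad(np.arange(-90, 91))       # sampling grid of L candidate directions
L = grid.size

def steering(theta):
    """Half-wavelength ULA steering vector a(theta), length N."""
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(theta))

A_tilde = np.column_stack([steering(th) for th in grid])   # N x L overcomplete matrix

# Two true sources at 0 and 10 degrees -> x(t) is nonzero on two rows only
src_idx = [90, 100]                          # grid indices of 0 deg and 10 deg
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
X = np.zeros((L, T), complex)
X[src_idx] = S

sigma = 0.1
B = sigma * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)
Y = A_tilde @ X + B                          # multi-snapshot model: Y = A(grid) X + B

print(Y.shape)                               # (6, 50)
```

Recovering the two nonzero rows of X from Y is exactly the joint sparse problem formulated next.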

where x(t) is the (L×1) vector whose ith entry is equal to the jth entry of s(t) if θ̃i = θj and zero otherwise. Therefore, the location information is converted into the positions of the non-zero entries of x(t). The important point is that A(θ̃) is known and does not depend on the unknown source locations θj, as A(θ) did, and the problem becomes the estimation of the spatial spectrum of x(t), which has to exhibit sharp peaks at the correct source locations. With multiple snapshots, model (2) can be written as:

  Y = A(θ̃) X + B    (3)

where Y = [y(1) y(2) … y(T)] is the (N×T) matrix containing T time samples, and X (L×T) and B (N×T) are defined similarly. The matrix X has a two-dimensional structure: spatial, with index i = 1, …, L, and temporal, with index j = 1, …, T. But sparsity only has to be enforced in space. This can be done by computing the l2-norm of the corresponding rows of X, xi^(l2) = ||[X]i,·||2, and penalizing the l1-norm of the resulting (L×1) vector x^(l2) = [x1^(l2), x2^(l2), …, xL^(l2)]^T. The sparsity of x^(l2) corresponds to the sparsity of the spatial spectrum. We can find the spatial spectrum by solving the joint sparse optimization problem [9]:

  min ||x^(l2)||1   subject to   ||Y − A(θ̃)X||F² ≤ β²    (4)

The method uses the l1-norm penalty for the sparsity of the representation and the Frobenius-norm penalty for the noise or residual error; it forces the residual to be small. For this problem, the task of choosing the regularization parameter β² properly is very important, and it is discussed in the next section.

III. REGULARIZATION PARAMETER ESTIMATION

A. Regularization parameter estimation by the L-curve
The regularization parameter controls the tradeoff between the sparsity of the spectrum and the residual norm; it balances the fit of the solution to the data against the sparsity prior. If the regularization parameter is too low there are many spurious sources in the spectrum, and if it is too high some sources disappear. A very popular method for choosing the regularization parameter is the L-curve method [10]. Having noted the important roles played by the norms of the solution and of the residual, it is quite natural to plot these two quantities against each other. The L-curve plots the log of the l1-norm of sparsity against the Frobenius norm of the recovery error for a range of values of the regularization parameter:

  log(||x^(l2)||1) = f(||Y − A(θ̃)X||F²)    (5)

This curve typically has an L shape, and the regularization parameter value corresponding to the corner is the one that balances the tradeoff optimally. However, in several cases the corner is not clearly visible and the L-curve gives a bad regularization parameter. An alternative approach, based on the cumulative distribution function of the noise, is presented next.

B. Regularization parameter estimation by the Chi-square distribution of noise
We present an approach to select the regularization parameter automatically for the case where some statistics of the noise can be estimated or are known. Let X(β) be the time-spatial matrix obtained using β as the regularization parameter. Malioutov et al. [6] and Zheng et al. [11] propose to select the parameter β so as to match the residuals of the solution to some statistics of the noise. If the distribution of the noise B is known or can be modeled, then the regularization parameter is obtained such that the Frobenius norm of the residual error approaches the Frobenius norm of the noise. Let bmn be the (m,n) element of the matrix B. Malioutov et al. assume that the noise is Gaussian, independent and identically distributed with zero mean and variance equal to σ². We have:

  ||B||F² = Σ_{m=1}^{N} Σ_{n=1}^{T} |bmn|²    (6)

which, upon normalization by the noise variance, has approximately a Chi-square distribution: ||B||F²/(σ²/2) ~ χ²_{2NT} (each complex entry contributes two real Gaussian components). The cumulative distribution function of the Chi-square distribution with NT degrees of freedom is

  p = F(z, NT) = ∫₀^z [t^{(NT−2)/2} e^{−t/2}] / [2^{NT/2} Γ(NT/2)] dt    (7)

where Γ is the Gamma function, and the inverse of the Chi-square cumulative distribution function for a given probability p (or a given confidence interval) and NT degrees of freedom is

  z = F⁻¹(p, NT) = {z : F(z, NT) ≥ p}    (8)

From (4) we must choose β² high enough so that the probability that ||B||F² ≥ β² is small, and we use the Chi-square distribution with a very high degree of confidence to ensure the suppression of spurious sources. Unfortunately, when the number of time samples is small (fewer than 50), even with a degree of confidence up to 0.999 we cannot obtain a regularization parameter that effectively suppresses such spurious sources (see the applications). To explain this phenomenon we expand the residual in (4):

  ||Y − A(θ̃)X||F² = ||A(θ)S − A(θ̃)X||F² + trace[B(A(θ)S − A(θ̃)X)^H] + trace[(A(θ)S − A(θ̃)X)B^H] + ||B||F²    (9)

Under the l1-norm minimization, if we only exploit β² = ||B||F² to obtain the regularization parameter, the spurious noise cannot be removed and inevitable spurious peaks appear in the spatial spectrum: the value of the regularization parameter is too small. We now present a method to obtain a regularization parameter with a larger dynamic range.
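The Chi-square threshold behind (7)-(8) can be checked numerically without special functions. The sketch below (noise dimensions and confidence level are illustrative) estimates the 0.99 quantile of ||B||_F² by Monte Carlo and compares it with the mean noise energy:

```python
import numpy as np

rng = np.random.default_rng(1)

N, T, sigma = 6, 50, 1.0
trials = 20000

# Complex iid Gaussian noise: real and imaginary parts ~ N(0, sigma^2/2),
# so E[||B||_F^2] = N*T*sigma^2 and ||B||_F^2/(sigma^2/2) is chi-square with 2NT dof.
B = rng.standard_normal((trials, N, T)) + 1j * rng.standard_normal((trials, N, T))
B *= sigma / np.sqrt(2)
fro2 = np.sum(np.abs(B) ** 2, axis=(1, 2))      # ||B||_F^2 for each trial

beta2 = np.quantile(fro2, 0.99)                  # Monte Carlo estimate of the threshold

print(f"mean ||B||_F^2 = {fro2.mean():.1f} (theory: {N * T * sigma**2})")
print(f"0.99-quantile beta^2 = {beta2:.1f}")
```

For N = 6, T = 50 the 0.99 quantile sits only about 13% above the mean energy NTσ², which illustrates the small dynamic range of the Chi-square rule that the text criticises.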

C. Regularization parameter estimation by the Rayleigh distribution of noise
We assume that the noise is complex, Gaussian, independent and identically distributed. The element (m,n) of the matrix B is complex, bmn = cmn + j dmn, with m = 1, …, N and n = 1, …, T, so that

  ||B||F² = Σ_{m=1}^{N} Σ_{n=1}^{T} (cmn² + dmn²)    (10)

If cmn ~ N(0, σ²/2) and dmn ~ N(0, σ²/2), then the absolute value of the complex number bmn, |bmn| = (cmn² + dmn²)^{1/2}, is Rayleigh-distributed. The cumulative distribution function of the Rayleigh distribution is

  pmn = G(|bmn|, σ²/2) = ∫₀^{|bmn|} (2t/σ²) e^{−t²/σ²} dt    (11)

and the inverse of the Rayleigh cumulative distribution function for a given probability pmn (or a given confidence interval) and a scale parameter σ²/2 is

  |bmn| = G⁻¹(pmn, σ²/2) = {|bmn| : G(|bmn|, σ²/2) ≥ pmn}    (12)

Then we have

  ||B||F² = Σ_{m=1}^{N} Σ_{n=1}^{T} |bmn|² = Σ_{m=1}^{N} Σ_{n=1}^{T} [G⁻¹(pmn, σ²/2)]² ≤ NT [G⁻¹(pmax, σ²/2)]² = β²_Rayl    (13)

where pmax = max{pmn}. The regularization parameter is obtained from the Rayleigh inverse cumulative distribution function using (13).

IV. NUMERICAL AND EXPERIMENTAL RESULTS

We consider a uniform linear array of N = 6 sensors separated by half a wavelength of the narrowband source signals. Two uncorrelated sources at 0° and 10° are present in the field. The number of snapshots is T = 50 and SNR = 20 dB. Figure 1(a) and (b) present the variations of the regularization parameter using the three methods. The L-curve presents different corners and it is very difficult to obtain the regularization parameter from this plot. Figure 1(b) presents the variations of the regularization parameter versus the degree of confidence for the Chi-square and the Rayleigh distributions; a large dynamic range of the regularization parameter is obtained with the Rayleigh distribution.

Figure 1. (a) Regularization parameter by the L-curve; (b) regularization parameter versus the degree of confidence by the Chi-square and the Rayleigh distributions.

Figure 2(a) shows 20 spatial spectra for two uncorrelated sources with a regularization parameter obtained by the Chi-square distribution using a 0.99 degree of confidence. The spurious sources are too prominent and it is impossible to localize the two true sources. Even with a 0.999 degree of confidence we cannot localize the sources (Figure 2(b)). If we use a 0.99 degree of confidence and the Rayleigh distribution, we can then localize the two true uncorrelated sources, as shown in Figure 2(c).

Figure 2. Spatial spectra for two uncorrelated sources by the sparse representation using the Chi-square distribution (a) with p = 0.99; (b) with p = 0.999; (c) using the Rayleigh distribution with p = 0.99.

Consider now the case of two correlated sources. The number of snapshots is T = 50 and SNR = 20 dB. Figure 3(a) shows the variations of the regularization parameter and Figure 3(b) presents 20 spatial spectra with a regularization parameter obtained by the Rayleigh distribution with pmax = 0.99. The two correlated sources can be localized.

Figure 3. (a) Regularization parameter variations; (b) spatial spectra for two correlated sources using the Rayleigh distribution with p = 0.99.
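The Rayleigh CDF in (11) has the closed form G(b) = 1 − exp(−b²/σ²), so the threshold (13) needs no numerical inversion. A minimal sketch (σ² and p_max are illustrative values):

```python
import math

def beta2_rayleigh(n_sensors, n_snapshots, sigma2, p_max):
    """Upper bound (13) on ||B||_F^2 from the inverse Rayleigh CDF.

    With G(b) = 1 - exp(-b^2 / sigma^2), the inverse is
    b = sqrt(-sigma^2 * ln(1 - p)), hence beta^2 = N * T * b_max^2.
    """
    b_max_sq = -sigma2 * math.log(1.0 - p_max)
    return n_sensors * n_snapshots * b_max_sq

N, T, sigma2 = 6, 50, 1.0
b2_chi_mean = N * T * sigma2                     # mean noise energy, E[||B||_F^2]
b2_rayl = beta2_rayleigh(N, T, sigma2, 0.99)     # Rayleigh-based threshold (13)

print(b2_rayl / b2_chi_mean)                     # ~4.6
```

At p_max = 0.99 the Rayleigh bound is about 4.6 times the mean noise energy, which illustrates the much larger dynamic range reported for this rule compared to the Chi-square quantile.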

Figure 4(a) shows the experimental test in an anechoic room. The two sources to localize are two loudspeakers generating sinusoidal waves. The distance between

microphones is d = λ/2 and T = 50. Figure 4(b) shows the variations of the regularization parameter using the Chi-square and the Rayleigh distributions. The Rayleigh distribution has a large dynamic range and we use it to obtain the regularization parameter with p = 0.99. The spatial spectrum obtained by the sparse method is plotted in Figure 4(c); the two acoustical sources are easily localized.

Figure 4. (a) Experimental procedure; (b) regularization parameter variations; (c) experimental spatial spectra by the sparse method using the Rayleigh distribution and p = 0.99.

The method is now used to localize wideband sources. In Figure 5 we consider four wideband signals consisting of one or two harmonics each. At θ1 = 60° there are two harmonics with frequencies 320 Hz and 480 Hz; at θ2 = 68° there is a single harmonic with frequency 320 Hz; at θ3 = 100° there are again two harmonics with frequencies 400 Hz and 480 Hz; and at θ4 = 108° there is a single harmonic with frequency 400 Hz. As shown in Figure 5, the sparse method resolves all sources and does not show any distortion due to noise, contrary to the MUSIC method.

Figure 5. Wideband source localization.

In Figure 6 we present three chirps located at θ1 = 60°, θ2 = 78° and θ3 = 100°, with a frequency span from 250 Hz to 500 Hz (d/λ ∈ [0.25, 0.5]). Using the conventional beamforming method we cannot localize the three wideband sources: the spatial-frequency spectra of the chirps are merged and cannot be separated, as shown in Figure 6(a), especially in the lower frequency ranges. The methodology presented in this communication localizes the three wideband sources, as shown in Figure 6(b).

Figure 6. (a) Conventional beamforming; (b) sparse method for the localization of three wideband sources.

V. CONCLUSION

The regularization parameter plays an important role in the source localization problem by sparse reconstruction. This parameter handles a reasonable tradeoff between finding a sparse solution and restricting the amplitude of the recovery error. Simulations and experimental results have shown the effectiveness of the method in the suppression of spurious sources when the number of time samples is small. The sparse signal reconstruction method presented can also be used for the localization of wideband sources.

REFERENCES
[1] S.U. Pillai, Array Signal Processing, Springer-Verlag, 1989.
[2] S. Marcos, Les méthodes à haute résolution, Edition Hermès, Paris, 1998.
[3] J.J. Fuchs, "More on sparse representations in arbitrary bases", IEEE Trans. on Information Theory, vol. 50, pp. 1341-1344, 2004.
[4] D.L. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition", IEEE Trans. on Information Theory, vol. 47, pp. 2845-2862, 2001.
[5] S. Bourguignon, H. Carfantan and T. Böhm, "SparSpec: a new method for fitting multiple sinusoids with irregularly sampled data", Astronomy & Astrophysics, vol. 462, pp. 379-387, 2007.
[6] D.M. Malioutov, M. Cetin and A.S. Willsky, "A sparse signal reconstruction perspective for source localization with sensor arrays", IEEE Trans. Signal Processing, vol. 53, pp. 3010-3022, 2005.
[7] J. Zheng, M. Kaveh and H. Tsuji, "Sparse spectral fitting for direction of arrival and power estimation", Proc. IEEE/SP 15th Workshop on Statistical Signal Processing, 2009.
[8] J. Lardiès, H. Ma and M. Berthillier, "Localisation de sources de bruit par représentation parcimonieuse des signaux issus d'une antenne acoustique", GRETSI 2011, Bordeaux.
[9] J.S. Sturm, "Using SeDuMi 1.02, a Matlab toolbox for optimization over symmetric cones", Optimization Methods and Software, vol. 11, pp. 625-653, 1999.
[10] P.C. Hansen, "The L-curve and its use in the numerical treatment of inverse problems", Advances in Computational Bioengineering, Ed. P. Johnston, 2000.
[11] C. Zheng and G. Li, "Subspace weighted l2,1 minimization for sparse signal recovery", EURASIP Journal on Advances in Signal Processing, 2012.



OFDM Pulse Design with Low PAPR for Ultrasonic Location and Positioning Systems

Daniel F. Albuquerque, José M. N. Vieira, Sérgio I. Lopes, Carlos A. C. Bastos, Paulo J. S. G. Ferreira
Signal Processing Lab - IEETA/DETI - University of Aveiro, 3810-193 Aveiro, Portugal
{dfa, jnvieira, sil, cbastos, pjf}@ua.pt

Abstract—In this paper we propose an iterative algorithm to design ultrasonic orthogonal frequency division multiplexing (OFDM) pulses with low peak-to-average power ratio (PAPR), increasing not only the probability of pulse detection but also the system power efficiency. The algorithm is based on the Papoulis-Gerchberg method: in each iteration the PAPR of the resultant pulse is reduced while keeping the spectrum flat and band-limited. In each iteration the amplitudes of the OFDM carriers are kept constant and only the phases of the carriers are optimized. The experimental results show that for ultrasonic OFDM pulses with a large number of carriers it is possible to design pulses with a PAPR of 1.666. The designed pulse is ideal for time-of-flight (TOF) measurement purposes.

Keywords—Ultrasounds, Ultrasonic Pulse, Pulse Design, Time of Flight, Pulse Detection, OFDM, PAPR, Papoulis-Gerchberg.

I. INTRODUCTION

OFDM is a method of data transmission that uses multiple carriers at a very low rate [1]. The main advantage of using OFDM is its robustness to some adverse indoor ultrasonic (US) channel conditions, such as strong multipath and frequency-dependent equalization [1]. Due to these advantages, the authors have proposed an ultrasonic pulse that uses OFDM to measure the TOF and transmit data simultaneously [2]. However, one of the major drawbacks of using OFDM pulses to measure the TOF is their high PAPR when compared to other types of pulses, such as chirps. The PAPR is defined as the ratio between the peak power and the mean power of the OFDM pulse. On the one hand, the probability of pulse detection increases with the signal energy [3]. On the other hand, if the transmission system uses a power amplifier it is important to increase the signal energy and reduce the signal amplitude peak in order to increase the power amplifier efficiency [1]. Therefore, the pulse used for TOF measurement should present a PAPR as low as possible [1]. The literature usually covers the PAPR problem for communication purposes [4], [5]; for TOF measurement, where the quest for the best pulse is the goal, the typical solutions address not the PAPR itself but a similar quantity, the peak-to-mean envelope power ratio (PMEPR) [3]. Instead of measuring the ratio between the peak power and the mean power of the real transmitted signal, the PMEPR computes the ratio using the signal envelope. For narrow-bandwidth signals¹ the PMEPR provides a good approximation of the PAPR value (the typical radar case) [3], [5]. However, for typical US signals (up to 100 kHz) the narrow-bandwidth model is not well suited.

¹ Narrow-bandwidth signals are signals whose carrier frequency is much greater than the signal bandwidth.

II. PROPOSED ALGORITHM

The algorithm to optimize the PAPR of OFDM pulses is presented in Fig. 1; it is adapted from the algorithm proposed in [6], which is based on the Papoulis-Gerchberg algorithm [7], [8]. The algorithm starts by computing the carrier phases, θ(k) = (k − 1)² π/Nc, using the Newman method, where Nc is the number of carriers. Then the frequency-domain carrier information, S(k) = e^{jθ(k)}, is computed with unit amplitude, which results in an OFDM pulse with a PAPR of around 3.5. The resultant signal is converted to the time domain and the double of its real part is computed; the doubling is only important to keep the carrier amplitudes equal to one. From the resultant signal, x(n), the peaks are removed by clipping the maximum and the minimum of the signal. Note that the clipping level must be between 75% and 95% of the maximum amplitude of the signal to ensure that the algorithm converges and that the PAPR is reduced as fast as possible [6]. After passing the clipped signal to the frequency domain, the new carrier phases are obtained and the first iteration is complete. In each iteration, the carrier phases from the previous iteration are kept.

Fig. 1: Proposed iterative algorithm to decrease the PAPR (compute θ(k) by the Newman method; set S(k) = e^{jθ(k)} and S(k) = 0 for k ≠ k0, …, k_{Nc−1}; IFFT; compute the real part x(n) = 2·Re{s(n)}; clip the signal peaks; FFT; compute θ(k) from X(k); iterate).

A LGORITHM R ESULTS

This section presents the results of the proposed algorithm for two types of OFDM pulses: a short pulse, with a duration of 100 ms, and a long pulse, with a duration of 20 s. The performance of both pulses is compared with that of a chirp signal² with the same characteristics.

²The term chirp is sometimes used interchangeably with sweep signal and linear frequency modulation signal.
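The iterative phase-optimization loop described in Section II can be sketched in a few lines of NumPy. This is an illustrative sketch only: the FFT size, carrier placement, iteration count and fixed clipping level below are assumptions for demonstration, not the exact parameters used in the paper.

```python
import numpy as np

def newman_phases(nc):
    """Newman phases: theta(k) = pi * (k - 1)^2 / Nc."""
    k = np.arange(1, nc + 1)
    return np.pi * (k - 1) ** 2 / nc

def reduce_papr(nc=64, n_fft=4096, k0=32, iters=500, clip=0.8):
    """Papoulis-Gerchberg-style PAPR reduction: alternate between clipping
    the peaks in the time domain and restoring unit carrier amplitudes
    (keeping only the new phases) in the frequency domain."""
    theta = newman_phases(nc)

    def synthesize(phases):
        S = np.zeros(n_fft, dtype=complex)
        S[k0:k0 + nc] = np.exp(1j * phases)    # unit-amplitude carriers
        return 2.0 * np.real(np.fft.ifft(S))   # factor 2 keeps amplitudes = 1

    x = synthesize(theta)
    for _ in range(iters):
        limit = clip * np.max(np.abs(x))
        x_clipped = np.clip(x, -limit, limit)                # clip the peaks
        theta = np.angle(np.fft.fft(x_clipped)[k0:k0 + nc])  # keep new phases
        x = synthesize(theta)                                # flat spectrum again
    papr = np.max(x ** 2) / np.mean(x ** 2)
    return x, papr
```

Starting from the Newman phases (PAPR around 3.5 according to the paper), a few hundred iterations with a clipping level inside the 75%-95% range already bring the PAPR noticeably down, consistent with the convergence behaviour reported for the short pulse.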

978-1-4673-1954-6/12/$31.00 ©2012 IEEE


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

A. Short Pulse

For the short pulse, a 100 ms OFDM pulse with 1000 carriers from 20 kHz to 30 kHz was used. The algorithm was run one million times and the clipping process started at 0.8 of the maximum signal value. If the PAPR is not reduced during an iteration, the clipping value for the next iteration changes to 80% of the previous clipping value plus 0.2. For example, if a clipping of 0.8 does not reduce the PAPR, the clipping changes to 0.84, then to 0.872, and so on. The result of this test is presented in Fig. 2. As can be seen, the PAPR drops to 2 in just 3980 iterations. The difficulty lies in reducing the PAPR below 2: after one million iterations the PAPR is only 1.945, and it decreases by only 5.6 × 10⁻¹¹ per iteration.

Fig. 4: Instantaneous power distributions, shown as histograms of occurrences (%), for a 20 s OFDM pulse with 2 million carriers and a chirp pulse with a bandwidth of 100 kHz: (a) OFDM instantaneous power distribution; (b) chirp instantaneous power distribution. The instantaneous power was normalized to the peak power of the chirp.

Fig. 2: Algorithm results (PAPR versus iteration) after 1 million iterations for a 100 ms OFDM pulse with 1000 carriers.

The resultant OFDM pulse is compared with a chirp pulse with the same main characteristics: amplitude, duration and bandwidth. The probability of detection as a function of the signal amplitude for the last pulse sample, using a matched filter and a threshold that produces a probability of false alarm of 10⁻⁶, is depicted in Fig. 3. One can observe that the OFDM pulse detection is slightly better than the chirp pulse detection for the same amplitude.

Fig. 3: Probability of detection for an OFDM and a chirp pulse as a function of the signal amplitude, (Max. Amplitude)/(Noise std.).

B. Long Pulse

For the long pulse, a 20 s OFDM pulse with 2 million carriers from 0 Hz to 100 kHz was used. The algorithm was run 10 million times and the clipping level was manually tuned between 80% and 99.999%. Fig. 4 presents the instantaneous power for the resultant OFDM pulse and for a chirp with similar characteristics: energy, duration and bandwidth. The PAPR-reduction technique shows its value: the OFDM pulse presents a PAPR of 1.666 against 2 for the chirp. As a result, the OFDM pulse has a considerably better efficiency.

IV. CONCLUSION

Using the proposed algorithm it is possible to design OFDM pulses with a low PAPR. The results show that only a few thousand iterations are needed to obtain an OFDM pulse with a PAPR of 2; however, to go below this value the algorithm must be iterated millions of times. Additionally, the results show that it is easier to obtain an OFDM pulse with a low PAPR for long pulses than for short pulses. It was possible to design an OFDM pulse with a duration of 20 s that presents a flat spectrum between 0 and 100 kHz and a PAPR of 1.666. This result represents a 16.7% energy gain when compared with a chirp pulse of the same amplitude, length and bandwidth.

REFERENCES

[1] H. Schulze and C. Luders, Theory and Applications of OFDM and CDMA. John Wiley & Sons, first edition, 2005.
[2] D. F. Albuquerque, J. M. N. Vieira, C. A. C. Bastos, and P. J. S. G. Ferreira, “Ultrasonic OFDM Pulse Detection for Time of Flight Measurement Over White Gaussian Noise Channel,” in 1st International Conference on Pervasive and Embedded Computing and Communication Systems, Vilamoura, Portugal, 2011.
[3] N. Levanon and E. Mozeson, Radar Signals. John Wiley & Sons, 2004.
[4] S. H. Han and J. H. Lee, “An overview of peak-to-average power ratio reduction techniques for multicarrier transmission,” IEEE Wireless Communications, vol. 12, pp. 56–65, 2005.
[5] T. Jiang and Y. Wu, “An Overview: Peak-to-Average Power Ratio Reduction Techniques for OFDM Signals,” IEEE Transactions on Broadcasting, vol. 54, no. 2, pp. 257–268, 2008.
[6] E. Van der Ouderaa, J. Schoukens, and J. Renneboog, “Peak factor minimization using a time-frequency domain swapping algorithm,” IEEE Transactions on Instrumentation and Measurement, vol. 37, no. 1, pp. 145–147, 1988.
[7] A. Papoulis, “A new algorithm in spectral analysis and band-limited extrapolation,” IEEE Transactions on Circuits and Systems, vol. 22, no. 9, 1975.
[8] R. W. Gerchberg, “Super-resolution through Error Energy Reduction,” Optica Acta: International Journal of Optics, vol. 21, no. 9, 1974.


Dynamic Collection Based Smoothed Radiomap Generation System Jooyoung Kim, Myungin Ji, Youngsu Cho, Yangkoo Lee, Sangjoon Park Positioning / Navigation Technology Research Team, Robot / Cognitive System Research Department ETRI (Electronics and Telecommunications Research Institute) Daejeon, Korea [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract—The fingerprinting method has been considered a promising technology for indoor localization. However, it is difficult to employ in applications targeting broad service areas, due to the high cost of constructing the DB. The conventional DB generation system uses a static collection process, in which every collector gathers signal characteristics at every point of a grid dividing the service area. To obtain a sufficient data set for calculating the characteristics, a collector usually stands at each point for a while, up to a couple of minutes. Therefore, it takes too much time and cost to construct a fingerprint DB for a whole service area. To deal with this problem, a dynamic collection based fingerprint DB generation system is proposed. In the proposed system, collectors walk along predesigned routes and gather signal measurements in motion using a smartphone, rather than staying at a point. The proposed system therefore remarkably reduces the cost of constructing the DB by improving the time-consuming collection process. However, collecting signals in motion decreases the reliability of the DB because of the signal variation, which is significant in indoor areas. To mitigate the variation, we apply a moving average filter to smooth the noisy measurements. As a result, the proposed system improves the efficiency of fingerprint DB generation with reasonable positioning performance. Experimental results prove the validity of the proposed dynamic collection based smoothed radiomap: the average positioning error using the proposed smoothed radiomap is about 7.41 m, and the standard deviation of the error is 6.17 m.

Keywords-fingerprinting, radiomap, dynamic collection, smartphone

I. INTRODUCTION

Recently, location based services have been considered key applications, especially after the emergence of smart-phones. As smart devices, including smart-phones, penetrate daily life more deeply, the demand for a location system that is available ubiquitously is increasing rapidly. Therefore, an accurate locating system regardless of the operation site is identified as an important component of such applications. The Global Positioning System (GPS) meets this demand in outdoor areas, but no remarkable solution has yet been proposed for indoor areas [1, 2]. To provide location information in indoor areas, several approaches have been proposed in the literature, and most methods are classified into four categories: Time Of Arrival (TOA), Time Difference Of Arrival (TDOA), Angle Of Arrival (AOA), and fingerprint methods. The TOA, TDOA, and AOA methods have drawbacks when applied to smart-phones because of additional requirements such as extra devices, aiding information, and timing synchronization. In addition, these measurements are vulnerable in the complex signal propagation environments of indoor areas. Thus, fingerprint based location is considered a promising technology for indoor localization. A fingerprint based location system generally operates in two steps. The first step is an off-line phase, or training phase, which constructs a radiomap through site-surveying to collect RSSI measurements of a service area [3]. The vector derived from statistics of the collected RSSI measurements is called a fingerprint, and the set of fingerprints is combined to build a radiomap, which represents the characteristics of the signal pattern in a certain area. Then, in the second, on-line phase, or positioning phase, positions of users are estimated by comparing the RSSI measured by users with the fingerprints. Despite the advantages mentioned above, the main obstacle to adopting a fingerprint based location system in a large-scale field is the laborious and time-consuming site surveying, or collecting process, needed to build radiomaps. Therefore, an automated collecting process, called dynamic collection, is adopted. Since the measurements are coarsely collected by the dynamic collection, a smoothed radiomap generation system is proposed to guarantee the reliability of the radiomaps.

II. SYSTEM MODEL

A. Dynamic collection process using smart-phones

As explained above, the most significant drawback of a fingerprinting based location system is the cumbersome and time consuming process of building a radiomap. Conventionally, reference RSSI measurements for building a radiomap are collected statically: collectors stand at known positions, pin-point their location manually on a ready-made map, then wait for a while to gather enough tuples of measurements. Hence, it is prohibitively expensive to build radiomaps across broad areas with the conventional static collecting process. To solve this problem, a dynamic collection process is proposed. In the proposed process, collectors determine a path,


not a point, and walk along the path with a smart-phone that gathers the reference measurements and calculates the ground-truth positions of the collectors automatically. In the conventional process, a significant part of the collecting labor is consumed while a collector confirms his or her location for pin-pointing and waits to gather the measurements. This effort is remarkably decreased in the proposed system because it is automated by using a smart-phone. For this purpose, a smart-phone application, called the collecting app, was developed. The collecting app has the ready-made map of a service area and provides accessible paths to collectors. After selecting one of the paths, the collectors start the collecting process. While the collectors follow the path, the application calculates their positions based on pedestrian dead-reckoning using the sensors of the smart-phone, and gathers Wi-Fi signals. Then, the application combines the Wi-Fi signal patterns with the calculated reference positions at which the patterns were gathered. As a result, a simple and cost-effective collecting process is achieved by using the application, making it feasible to build radiomaps of broad areas.

B. Smoothed radiomap generation

Though the proposed collecting process remarkably reduces the labor and cost of collecting measurements, the simplified process may make it hard to build a reliable radiomap fingerprint. The RSSI measurement is inherently unstable and, unfortunately, much more so in indoor environments. In the conventional process, therefore, collectors wait for a while to gather a sufficient number of measurements, not only one measurement, to avoid sudden variations of the signals. Then, the fingerprint is generated from the statistical characteristics derived from the set of measurements.
However, in the case of the dynamic collection process, it is hard to gather a sufficient number of measurements because collectors continuously move to follow the path, and the frequency at which a smart-phone can gather Wi-Fi beacon signals is limited. To overcome this limitation, a smoothing algorithm that exploits neighbor measurements is utilized. Note that the main reason to use statistical characteristics of signals in conventional radiomap generation is to reduce the effect of sudden variance of the signals. Thus, mitigating the variation can also be achieved with a smoothing process for the dynamic collection based radiomap. The proposed smoothed radiomap generating system consists of two procedures. First, the collected area is divided into several cells and a fingerprint for each cell is calculated by averaging the measurements gathered within the cell. Then, the averaged fingerprints are smoothed with the fingerprints of neighbor cells. In the experiments, a moving average filter is adopted for the smoothing. The positioning results using the smoothed radiomap are shown in the next section.
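The two procedures above (cell averaging followed by moving-average smoothing over neighbor cells), together with the k-nearest-neighbor matching used later for positioning, can be sketched as follows. This is an illustrative 1-D sketch under assumed parameters (cell size, window length, helper names); it is not the authors' implementation.

```python
import numpy as np

def build_smoothed_radiomap(positions, rssi, cell_size=2.0, window=3):
    """positions: (N,) 1-D collector positions along a path;
    rssi: (N, n_ap) RSSI samples gathered in motion.
    Step 1: average the samples falling into each cell.
    Step 2: smooth each cell fingerprint with a moving average
    over neighboring cells."""
    cells = (positions // cell_size).astype(int)
    n_cells = cells.max() + 1
    fp = np.full((n_cells, rssi.shape[1]), np.nan)
    for c in range(n_cells):
        in_cell = cells == c
        if in_cell.any():
            fp[c] = rssi[in_cell].mean(axis=0)       # cell average
    half = window // 2
    smoothed = fp.copy()
    for c in range(n_cells):
        lo, hi = max(0, c - half), min(n_cells, c + half + 1)
        smoothed[c] = np.nanmean(fp[lo:hi], axis=0)  # neighbor smoothing
    return smoothed

def knn_position(radiomap, cell_size, measured, k=3):
    """Estimate the position as the mean center of the K cells whose
    fingerprints are closest (Euclidean distance) to the measured
    RSSI vector."""
    d = np.linalg.norm(radiomap - measured, axis=1)
    nearest = np.argsort(d)[:k]
    centers = (nearest + 0.5) * cell_size
    return centers.mean()
```

With synthetic distance-dependent RSSI along a 20 m path, the smoothed radiomap keeps the spatial RSSI gradient while damping sample-to-sample variation, and the K = 3 match lands within a cell or two of the true position.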

III. EXPERIMENTAL RESULTS

To validate the proposed smoothed radiomap generating system based on dynamic collection, the positioning performance exploiting the smoothed radiomap is evaluated experimentally.

The test-bed, COEX, Seoul, Korea, is shown in Fig. 1. The size of COEX is 36,364 m², and it takes about four hours to collect the RSSI measurements of one floor. The positioning error is calculated while moving along a designated path, and the true and estimated positions are illustrated in Fig. 1 as blue squares and orange circles. In the experiment, the dynamic collection based smoothed radiomap is exploited and the positions are estimated by the k-nearest neighbor algorithm with K = 3. The average positioning error is 7.42 m, and the standard deviation of the error is 6.17 m.

Figure 1. Shape of the test-bed (COEX), and true (blue squares) and estimated (orange circles) positions.

In Fig. 2, the cumulative distribution function (CDF) of the positioning error is shown. 90% of the errors are bounded within 8 m, and 70% of the errors are bounded within 3 m.

Figure 2. CDF curve of the positioning errors.

IV. CONCLUSIONS

In this paper, a smoothed radiomap generation system is proposed for dynamic collection. In dynamic collection, the reference measurements are collected automatically by a smart-phone application, so the efficiency of the collecting process is much improved. Since the collected data are not sufficient to obtain the statistical characteristics usually exploited in conventional fingerprint generating systems, the smoothed radiomap generation system is proposed. The experimental results show reasonable positioning accuracy, about 7.41 m on average, despite the coarsely gathered measurements yielded by the dynamic collection process.

ACKNOWLEDGMENT

This research was funded by the MSIP (Ministry of Science, ICT & Future Planning), Korea, in the ICT R&D Program 2013.

REFERENCES

[1] Y. S. Cho, M. Ji, Y. Lee, and S. Park, “WiFi AP position estimation using contribution from heterogeneous mobile devices,” Proc. IEEE Position Location and Navigation Symposium (PLANS), pp. 562–567, Apr. 2012.
[2] G. M. Djuknic and R. E. Richton, “Geolocation and Assisted GPS,” IEEE Computer, vol. 2, pp. 123–125, Feb. 2001.
[3] P. Bahl and V. Padmanabhan, “RADAR: An in-building RF-based user location and tracking system,” Proc. IEEE INFOCOM 2000, Tel-Aviv, Israel, vol. 2, pp. 775–784, Mar. 2000.


Accurate Smartphone Indoor Positioning Using Non-Invasive Audio Sérgio I. Lopes, José M. N. Vieira, João Reis, Daniel Albuquerque and Nuno B. Carvalho Department of Electronics, Telecomunications and Informatics, University of Aveiro, 3810 Aveiro, Portugal. Email: {sil,jnvieira,jreis,dfa,nbcarvalho}@ua.pt

Abstract—In this paper we propose a reliable acoustic indoor positioning system fully compatible with the hardware of a conventional smartphone. The proposed system takes advantage of the smartphone audio I/O and processing capabilities to perform acoustic ranging in the audio band using non-invasive audio signals, and it has been developed with applications that require high accuracy in mind, such as augmented/virtual reality, gaming or audio guide applications. The system works in a distributed operation mode, i.e. each smartphone is able to obtain its own position information using a GPS-like topology. In order to support the positioning system, a wireless sensor network (WSN) of synchronized acoustic anchor motes was designed. To keep the infrastructure in sync we developed an Automatic Time Synchronization and Syntonization Protocol that resulted in a sync offset error below 5 µs. Using Time Difference of Arrival (TDoA) measurements we were able to obtain position estimates with an absolute mean error of 7.3 cm and a corresponding absolute standard deviation of 3.1 cm for a position refresh rate of 350 ms, which is acceptable for the type of application we are focused on.

Keywords—LPS, IPS, acoustic positioning, smartphone localization, location-aware.

I. INTRODUCTION

The Global Positioning System (GPS) is the most widely used method for outdoor localization and provides global coordinates with an accuracy within 10 meters [1]. However, GPS signals are too weak to penetrate buildings, which makes them useless for indoor positioning. High accuracy indoor positioning systems normally use Radio-Frequency (RF) signals, e.g. Ultra-Wideband (UWB), or acoustic signals [2]. UWB positioning systems use narrow pulses of very short duration (sub-nanosecond), resulting in widely spread radio signals in the frequency domain [3] and in high accuracy ToA measurements when compared with other RF methods [4]. A major drawback of UWB systems is the synchronization task, which typically results in increased hardware complexity and cost due to the high precision needed in ToA estimation. On the other hand, by using acoustic signals, a time resolution in the order of µs can easily be achieved using only off-the-shelf components.

II. SYSTEM ARCHITECTURE

The proposed system was developed with increased accuracy (in the decimeter order) applications in mind, i.e. augmented/virtual reality, gaming or audio guide applications. To achieve these requirements we focused on the following criteria when designing the system: indoor operation, sub-decimeter accuracy, smartphone compatibility, scalability and low-cost infrastructure. Indoor operation limits the use of GPS systems, due to the attenuation, multi-path and interference that RF signals suffer when used indoors. To obtain increased accuracy, a range-based positioning system with an infrastructure of anchors at known positions was used, in order to circumvent the lack of accuracy of mutual positioning systems. Smartphone compatibility restricts the selection of the sampling frequency of the acoustic signal due to smartphone hardware constraints: commercially available smartphones allow a maximum sampling rate of 44.1 kHz, therefore limiting the useful band to 22.05 kHz. Figure 1 presents the overall architecture of the proposed positioning system. A modular infrastructure approach takes advantage of a low-cost Wireless Sensor Network (WSN), thus also ensuring the scalability criterion. This way, multiple rooms with unique IDs can be added depending on the needs.

Fig. 1: Overall system architecture: rooms with anchor motes (AM) and mobile devices (MD), access point anchor motes and a gateway mote forming the WSN, a Wi-Fi router, and a system backend with a positioning server, DB and remote configuration via the WWW.

III. POSITIONING APPROACH

The positioning process can be split into three main stages [5]: synchronization, measurement and position estimation.

A. Synchronization

Time division multiple access (TDMA) is used, based on a centralized architecture with all anchors in sync. To keep the WSN infrastructure in sync, a reliable automatic time synchronization and syntonization protocol is used. The proposed method is based on a simplified version of the IEEE 1588 standard and allowed us to obtain a clock offset sync error of less than 5 µs [6]. This resulted in range measurements with an error standard deviation of less than 1 cm. Figure 2 presents the time slot structure used in the coordination process. For each anchor mote a specific time slot is reserved for signal transmission. Each time slot can be split into three distinct periods: a signal transmission period (sig k), a listening period (list k) and a guard time period (guard time). The signal transmission period is the time that the transmitter needs to send the acoustic pulse. The listening period is the time slot used by the mobile device to estimate the range measurement, and the guard time period was added to reduce the impact of the room reverberation¹.

¹The room reverberation time was measured using the ISO 3382 standard, which resulted in a T60 reverberation time of 25 ms [7].

Fig. 2: TDMA structure: time slots 0..K, each composed of a signal transmission period (sig k), a listening period (list k) and a guard time period.

B. Measurement

The measurement stage is based on ToA estimation by the mobile device. Anchor motes were programmed to periodically transmit acoustic chirp pulses. The usage of chirp pulses overcomes most of the problems of pure sine tones, such as poor resolution, low environmental noise immunity, short range and low robustness to the Doppler effect. The probability of detection of a transmitted chirp is directly related to the signal-to-noise ratio (SNR) rather than the exact waveform of the received signal [8]. The transducers used to equip the anchor motes were the same presented in [9], i.e. a piezo-tweeter speaker and a Panasonic WM61-A electret microphone.

1) Signal design: Signals with time and frequency diversity, e.g. linear frequency modulated signals or chirps, are well known in RADAR and represent a case where time and frequency are both used to increase the probability of detection. In RADAR, chirps with a large time-bandwidth product (TBP) are used to obtain narrow compressed peaks with SNR maximization, resulting in signals with increased probability of detection, and are also used when Doppler tolerance is needed. Chirps can tolerate Doppler shifts up to ±B/10, improving the detection probability for large Doppler shifts [10]. By increasing the TBP and using adequate weighting in the signal design it is possible to increase the SNR, the pulse compression (better time resolution) and the Doppler tolerance, which highly improves the probability of detection in static and dynamic positioning scenarios [8]. To increase the transmitted power while keeping the chirp pulse non-invasive, i.e. inaudible to humans, a combined window that uses the right half of a rectangular window combined with the left half of a Hanning window was used. Table I compiles the most important figures of merit for the rectangular, Hanning and combined windows.

Fig. 3: Chirp pulse design for a chirp with frequency content from 18 kHz to 22 kHz. The first line shows the weighted pulses in the time domain, the second line their frequency responses, and the third line the autocorrelation functions in time around the central peak, for the a) rectangular, b) Hanning and c) combined windows.

TABLE I: Figures of merit of the chirp pulses presented in Figure 3. CR is the compression ratio, PSL represents the Peak Sidelobe Level and PL represents the Peak Level.

Chirp Pulse | Weighting Window | B (kHz) | TBP | CR (ms) | PSL (dB) | PL (dB)
a)          | Rectangular      | 4       | 120 | 0.30    | 13.5     |  0.0
b)          | Hanning          | 4       | 120 | 0.60    | 46.9     | -8.5
c)          | Combined         | 4       | 120 | 0.44    | 15.8     | -3.3

2) ToA Estimation: To measure the ToA, an approach based on selective time filtering using prior knowledge of the system TDMA settings was used; see Figure 4 for more details. This selective time search heavily reduces the probability of false peak detection (i.e. interference) due to the implicit information present in the periodicity of the transmitted pulses. After correlation, the L2-norm of the signal at the output of the correlator, xc, is computed:

    x_{L2ne}[m] = \sqrt{ \sum_{n=mD}^{(m+1)D-1} |x_c[n]|^2 }, with m = 0, ..., Nc    (1)

where D is the size of the L2-norm estimator and Nc is the number of chunks of the correlated signal to process. This way it is possible to obtain a decimated energy estimator, which considerably reduces the number of instructions needed to detect a peak. Moreover, an adaptive threshold method is used to increase the algorithm performance. The method uses a FIFO buffer, WFIFO, that contains Nb samples of the decimated energy estimator xL2ne. Due to the signal periodicity, we selected a value of Nb that allows the inclusion of all the data needed to compute a position estimate using the TDMA structure presented in Figure 2. This way, by looking at the maximum and minimum values of WFIFO, we are able to compute a time-variant signal-to-noise ratio (SNR), see equation 2. The dynamic threshold value th was defined using the conservative rule of 50% of the SNR, i.e. th = SNR/2.

    SNR = 20 \log_{10}( \max(|W_{FIFO}|) - \min(|W_{FIFO}|) )    (2)
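Equations (1) and (2) translate almost directly into code. The following is an illustrative NumPy sketch; the buffer lengths and function names are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def l2_norm_estimator(xc, D):
    """Eq. (1): decimated energy estimator - one L2-norm value per chunk
    of D samples of the correlator output xc."""
    Nc = len(xc) // D
    chunks = np.abs(xc[:Nc * D]).reshape(Nc, D)
    return np.sqrt((chunks ** 2).sum(axis=1))

def adaptive_threshold(w_fifo):
    """Eq. (2) plus the 50% rule: compute the time-variant SNR from the
    FIFO of estimator samples, and return the dynamic threshold
    th = SNR / 2."""
    snr = 20.0 * np.log10(np.max(np.abs(w_fifo)) - np.min(np.abs(w_fifo)))
    return snr / 2.0
```

Because the estimator works on chunks of D samples, the peak search runs over len(xc)/D values instead of len(xc), which is the instruction-count reduction mentioned above.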

Furthermore, Non-Line-of-Sight (NLoS) mitigation was included in the ToA estimation in order to improve the peak detection performance in real situations where multi-path and NLoS occur. The approach used consists of searching for earlier peaks (i.e. peaks that appear in the left neighborhood of the main peak) with lower energy, but above th.

Fig. 4: ToA Estimation Routine (TER). The audio input buffer x is correlated with s[-n]; the L2-norm estimation produces xL2ne; adaptive thresholding over a FIFO window WFIFO yields th = (max - min)/2; during the listening time, samples with xL2ne > th trigger NLoS mitigation (search for the first peak in the left neighborhood); the resulting ToA t1p is pushed into the toa vector, which launches the PER.

C. Position Estimation

Using three anchor nodes it is possible to obtain two TDoA estimates, which makes it possible to compute 2D position estimates. Post-validation of each group of ToA measurements is needed in order to generate a valid TDoA vector for the localization algorithm; see Figure 5 for more details. To solve the localization problem, and since TDoA measurements are always noisy (e.g. thermal noise, external acoustic noise, sound velocity changes, etc.), the position estimation can be seen as an optimization problem. We opted to find the position that minimizes the squared error of the intersection point of all the hyperbolas defined for each intervening anchor node. A detailed description of the method used can be found in [1].

Fig. 5: Position Estimation Routine (PER) with TDoA pre-validation. The first finite difference vector dtoa is computed from the toa vector; each difference dtoa(end-k) is validated against the slot duration Tslot for k = 0, ..., Nan-1 anchors; the tdoa vector is then generated and fed to the position estimation algorithm, which outputs (x, y, z).

IV. SYSTEM PROTOTYPE

The system prototype consists of two distinct devices: the acoustic mote and a smartphone acting as the mobile device. A WSN of acoustic motes is used to build an infrastructure of anchors at known positions. These motes can be used as building blocks that can easily be added to an existing infrastructure in order to meet the scalability criterion. The mobile station uses an iPhone app running in real-time with Wi-Fi connectivity.

Fig. 6: System devices. a) Acoustic anchor motes (communications module, acoustic module, batteries, speaker and microphone) and gateway. b) Mobile device (iPhone app with microphone and headphones) running the positioning application.
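The least-squares hyperbolic positioning described in Section III-C can be illustrated numerically. The sketch below uses a hypothetical 2-D anchor layout and a brute-force grid search in place of the solver of [1]; the speed of sound, grid extent and function names are assumptions.

```python
import numpy as np

def tdoa_residuals(p, anchors, tdoa, c=343.0):
    """Range differences to the reference anchor (anchors[0]) minus the
    measured TDoA values converted to distances."""
    d = np.linalg.norm(anchors - p, axis=1)
    return (d[1:] - d[0]) - c * tdoa

def locate(anchors, tdoa, c=343.0, span=8.0, steps=161):
    """Brute-force least squares: return the grid point minimizing the
    sum of squared hyperbola residuals."""
    xs = np.linspace(0.0, span, steps)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            p = np.array([x, y])
            err = np.sum(tdoa_residuals(p, anchors, tdoa, c) ** 2)
            if err < best_err:
                best, best_err = p, err
    return best
```

With three anchors, the two TDoA values define two hyperbolas whose least-squares intersection is the position estimate; in practice a gradient-based solver replaces the grid search.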

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013 V.

9

E XPERIMENTAL VALIDATION

8

Two experiments were performed in order to evaluate the proposed system. The first experiment was held to obtain a quantitative evaluation of the overall system by measuring the estimated position error on a grid of fixed points in the room. A second experiment was carried out to obtain a qualitative evaluation of the positioning system when a person equipped with a mobile device follows a moving trajectory. A grid of 6 × 5 m with a 1 m step was used to evaluate the positioning system. All the obtained results are plotted overlapped in the same xy plane, see Figure 7. Note that no outlier measurements are present. This can be justified by the fact that all measurements were taken in the laboratory under a controlled acoustic environment, i.e. acoustic noise below 40 dBSPL. A smartphone running the positioning app was placed at each position marked with a black cross, see Figure 7, at a constant height of 1.70 m, and one hundred position estimates were then obtained, see results in Figure 8. An absolute mean error of 7.3 cm and an absolute standard deviation of 3.1 cm were obtained.

Fig. 7: Experiment 1): Real positions - black cross; Estimated positions - red points; Anchor nodes - circles A1, A2 and A3.

Fig. 8: Absolute positioning error and corresponding standard deviation for X-axis (black) and Y-axis (red).

To obtain a qualitative evaluation of the positioning system, a second experiment was performed. In this case, a moving person with the receiver on top of the head was used to evaluate the positioning system on a moving trajectory, see Figure 9. In this experiment, only a qualitative evaluation can be performed, because errors introduced by the human movement cannot be extracted due to the difficulty of ground-truth validation. The audibility of the proposed signals was perceptible only to young people, i.e. people below 25 years of age. Among the people able to detect the presence of these signals, all agreed that a classification of non-invasive audio was acceptable.

Fig. 9: Experiment 2): Real trajectory - solid black line; Estimated positions - red points; Anchor nodes - circles A1, A2 and A3.

VI. CONCLUSIONS

In this paper we propose an effective indoor acoustic positioning system, compatible with conventional smartphones, that uses non-invasive audio signals and TDoA measurements for low-cost ranging, thus enabling effective indoor positioning for conventional smartphones. The system is supported by a WSN of acoustic anchors running with a sync offset error below 5 µs. Experimental tests were performed using an iPhone 4S in order to evaluate the proposed system. Results showed stable and accurate position estimates with an absolute standard deviation of less than 3.1 cm for a position refresh rate of 350 ms, which is acceptable for the type of application we target.

REFERENCES
[1] A. H. Sayed, A. Tarighat, and N. Khajehnouri, "Network-based wireless location," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 24-40, 2005.
[2] R. Mautz, Indoor Positioning Technologies. Institute of Geodesy and Photogrammetry, Department of Civil, Environmental and Geomatic Engineering, ETH Zurich, 2012.
[3] N. Patwari, J. N. Ash, S. Kyperountas, A. O. Hero, R. L. Moses, and N. S. Correal, "Locating the nodes: cooperative localization in wireless sensor networks," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 54-69, 2005.
[4] S. Gezici, Z. Tian, G. B. Giannakis, H. Kobayashi, A. F. Molisch, H. V. Poor, and Z. Sahinoglu, "Localization via ultra-wideband radios," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 70-84, July 2005.
[5] I. Amundson and X. D. Koutsoukos, A Survey on Localization for Mobile Wireless Sensor Networks. Springer-Verlag Berlin Heidelberg, 2009.
[6] J. Reis and N. Carvalho, "Synchronization and syntonization of wireless sensor networks," in Wireless Sensors and Sensor Networks (WiSNet), 2013 IEEE Topical Conference on, 2013, pp. 151-153.
[7] J. S. Bradley, "Using ISO 3382 measures, and their extensions, to evaluate acoustical conditions in concert halls," Acoustical Science and Technology, vol. 26, no. 2, pp. 170-178, 2005.
[8] N. Levanon and E. Mozeson, Radar Signals. Hoboken, New Jersey: John Wiley & Sons, Inc., 2004.
[9] S. I. Lopes, J. M. N. Vieira, and D. Albuquerque, "High accuracy 3D indoor positioning using broadband ultrasonic signals," in Trust, Security and Privacy in Computing and Communications (TrustCom), 2012 IEEE 11th International Conference on, June 2012, pp. 2008-2014.
[10] M. I. Skolnik, Ed., Radar Handbook, 2nd ed. McGraw-Hill, 1990.


Locally Optimal Confidence Hypersphere for a Gaussian Mixture Random Variable

Pierre Sendorek, Maurice Charbit*, Karim Abed-Meraim**, Sébastien Legoll***
* Télécom ParisTech, Paris, France — [email protected]
** Polytech Orléans, Orléans, France
*** Thales Avionics, Valence, France

Abstract—We address the problem of finding an estimator such that its associated confidence ball is the smallest possible, in the case where the probability density function of the true position (or of the parameter to estimate) is a d-dimensional Gaussian mixture. As a solution, we propose a steepest descent algorithm which optimizes the position of the center of a ball such that its radius decreases at each step, while still ensuring that the ball centered on the optimized position contains the given probability. After convergence, the obtained solution is thus locally optimal. However, our benchmarks suggest that the obtained solution is globally optimal.

Keywords — Confidence domain; Gaussian Mixture Model; Optimization; Monte-Carlo; Robust Estimation; Accuracy

I. INTRODUCTION

In navigation, it is often of practical interest to express the accuracy of a position estimator by the dimensions of its confidence domain [1]. One may ask which estimator achieves the optimal accuracy with respect to this criterion. In a Bayesian setting, when the probability density function (pdf) of the true position given the measurement is Gaussian, it is well known that the smallest ball containing the true position with a given probability is centered on the mean. In this case the best estimator is the mean. But the problem has been studied less when the probability density has fewer symmetries. However, this situation naturally appears in navigation. When several sources are used to form the measurement vector, taking into account the probability of failure of each source results in a pdf of the position expressed as a Gaussian Mixture (GM) [2,9]. In this case it is interesting to have a position estimator such that its associated confidence ball is the smallest possible. In this paper, we address the problem of finding the smallest confidence ball containing the position with a given probability. As a solution, we propose a "multiple" steepest descent algorithm which optimizes the position of the center of a ball such that its radius decreases at each step, while still ensuring that the ball centered on the optimized position contains the given probability. After convergence of the steepest descent, the obtained solution is thus locally optimal. This steepest descent is run as many times as there are Gaussians in the GM. Finally, the position estimate yielded by the algorithm is, among all the locally optimal solutions, the center of the ball with the smallest reached radius.

Our algorithm's solution is compared to the globally optimal solution (computed by an exhaustive search) in the 1-dimensional case. It is shown that the globally optimal solution empirically matches our algorithm's. In particular, when the probability density function is a single Gaussian, the obtained solution matches the optimal solution and is the mean.

II. POSITION OF THE PROBLEM

A. Probability of being outside a ball

Suppose that the pdf of our $d$-dimensional parameter of interest, say $X$, is described by a GM. Let $N_g$ be the number of Gaussians composing the mixture, and for each component $j$ from 1 to $N_g$, let $\alpha_j$ be the weight of the Gaussian in the mixture, $\mu_j$ its mean and $C_j$ its covariance. The pdf of $X$ thus writes

$$p_X(x) = \sum_{j=1}^{N_g} \alpha_j f_j(x) \qquad (1)$$

where $f_j(x) = \mathcal{N}(x; \mu_j, C_j)$ is the evaluation at $x$ of the pdf of a Gaussian with mean $\mu_j$ and covariance $C_j$. We call $A$ the probability of $X$ being outside a ball of center $c$ and radius $r$. The definition of $A$ is given by

$$A(c, r) = \Pr\big(X \notin B(c, r)\big) = \int_{x \notin B(c, r)} p_X(x)\, dx. \qquad (2)$$
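As an illustrative aside (not part of the paper), the quantity $A(c, r)$ of equation (2) can be estimated by straightforward Monte-Carlo sampling of the mixture; in one dimension it can also be computed exactly from the Gaussian CDF, which gives a convenient cross-check. All mixture parameters below are arbitrary assumptions.

```python
# Illustrative sketch: estimating A(c, r) = Pr(X outside B(c, r)) for a 1-D
# Gaussian mixture by Monte-Carlo sampling, checked against the exact value
# from the Gaussian CDF. All parameter values are arbitrary assumptions.
import math
import random

WEIGHTS = [0.3, 0.7]   # alpha_j, must sum to 1
MEANS = [-2.0, 4.0]    # mu_j
SIGMAS = [1.0, 0.5]    # sqrt of C_j (scalars in 1-D)

def sample_gm(rng):
    """Draw one sample from the Gaussian mixture."""
    j = 0 if rng.random() < WEIGHTS[0] else 1
    return rng.gauss(MEANS[j], SIGMAS[j])

def a_monte_carlo(c, r, n=50000, seed=0):
    """Monte-Carlo estimate of the probability of falling outside B(c, r)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if abs(sample_gm(rng) - c) > r)
    return hits / n

def a_exact(c, r):
    """Exact 1-D value: 1 - sum_j alpha_j * (Phi(b_j) - Phi(a_j))."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    inside = sum(w * (phi((c + r - m) / s) - phi((c - r - m) / s))
                 for w, m, s in zip(WEIGHTS, MEANS, SIGMAS))
    return 1.0 - inside

print(a_monte_carlo(4.0, 1.5), a_exact(4.0, 1.5))
```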

B. The Problem

The problem is to find a center $c$ such that the radius $r$ is the smallest under the constraint that $X$ has to be in this ball with an expected probability of $1 - \gamma$. This problem is equivalent to the following: find $c$ such that $r$ is the smallest possible under the constraint $A(c, r) = \gamma$. We will see in the appendix how the value of $A(c, r)$ can be computed using numerical functions related to the (generalized) chi-square law. Also, as the reader may have noticed, we chose to deal with the complement of the probability of being inside a ball. This is because the user may want to use standard libraries like [6] which implement the chi-squared complementary cumulative function (also called the survival function), known to be more precise for the tails of the distribution — which is our case, since $\gamma$ is usually closer to 0 than to 1 in practical applications.

III. THE OPTIMIZATION ALGORITHM

A. Overview

This section details the principle of our algorithm, which is the original contribution of this work. The derivation of the equations will be explained in the following sections, in decreasing order of abstraction. Our algorithm proceeds to $N_g$ optimizations, each one taking the form of a steepest descent which stops once it reaches a local optimum. Each of these steepest descents is initialized with $c = \mu_j$. The initialization step requires a value of $r$ which satisfies $A(c, r) = \gamma$. This value of $r$ can be found either by interval halving, or by more sophisticated methods, namely the secant method or Newton's method, because $r \mapsto A(c, r)$ is a decreasing function (as a complementary cumulative function). When both of these values are found, the optimization step starts by finding which (small) variations $(\delta_c, \delta_r)$ of the couple $(c, r)$ do not change the probability of $X$ being outside the ball. This gives a set of possible directions $(\delta_c, \delta_r)$ with the following property:

$$A(c, r) = A(c + \delta_c, r + \delta_r) = \gamma. \qquad (3)$$

Among all those possible directions for $\delta_c$, we choose the one which leads to the greatest improvement in terms of radius: hence, the chosen direction is the steepest. The center $c$ is optimized by being replaced by

$$c' = c + \delta_c \qquad (4)$$

and, to finish the optimization step, instead of taking $r + \delta_r$ as the radius for the following step, the algorithm solves the equality $A(c', r) = \gamma$ in the variable $r$ (e.g. by one of the already mentioned methods); this is preferred in order to avoid the accumulation of linearization errors during the successive steps of the optimization. Once this optimization step is finished, another begins. The process is repeated as long as there is a non-negligible improvement of the radius. Finally, among the $N_g$ obtained local optima, the one which is chosen is the one for which the radius is the smallest.

B. Steepest descent direction

To find the steepest descent direction, we want to find the (small) variations $(\delta_c, \delta_r)$ such that (3) is satisfied. This implies that we want

$$A(c + \delta_c, r + \delta_r) - A(c, r) = 0,$$

and because $\delta_r$ and $\delta_c$ are supposed to be small, we replace the left term by Taylor's first-order expansion $A(c + \delta_c, r + \delta_r) \approx A(c, r) + \nabla_c A(c, r) \cdot \delta_c + \partial_r A(c, r)\, \delta_r$, where $\nabla_c A(c, r)$ is the gradient, i.e. the vector of partial derivatives with respect to the components of $c$, and $\partial_r A(c, r)$ is the partial derivative of $A$ with respect to $r$. Hence we get the equation

$$\nabla_c A(c, r) \cdot \delta_c + \partial_r A(c, r)\, \delta_r = 0 \qquad (5)$$

or equivalently, since $\partial_r A(c, r)$ is nonzero,

$$\delta_r = -\frac{\nabla_c A(c, r) \cdot \delta_c}{\partial_r A(c, r)}. \qquad (6)$$

Since several directions are possible, the problem is now to find the steepest descent direction for $\delta_c$. To do this, we take, among all the vectors $\delta_c$ which have the same (small) norm $|\delta_c| = \varepsilon$, the one which minimizes $\delta_r$. Mathematically, it is equivalent to say that we search for

$$\arg\min_{\delta_c :\, |\delta_c| = \varepsilon} \; -\frac{\nabla_c A(c, r) \cdot \delta_c}{\partial_r A(c, r)}.$$

Finally, since the Cauchy-Schwarz inequality ensures

$$-\varepsilon\, |\nabla_c A(c, r)| \;\le\; \nabla_c A(c, r) \cdot \delta_c \;\le\; \varepsilon\, |\nabla_c A(c, r)|, \qquad (7)$$

and since $\partial_r A(c, r)$ is negative and we want the value of $\delta_r$ to be negative, the quantity $-\nabla_c A(c, r) \cdot \delta_c / \partial_r A(c, r)$ is minimized when $\delta_c = -\varepsilon\, \nabla_c A(c, r) / |\nabla_c A(c, r)|$, which saturates the left inequality in (7).

The value $\varepsilon$ is the size of the step, which was supposed to be small during the calculations. In practice, however, we take $\varepsilon$ so as to halve the dimensions of the actual radius, and the algorithm works. Also, to avoid oscillations around local optima, when a variation of the center leads to an increase of the radius (whereas the linearization always "predicts" a decrease), the step is halved. Halving can be repeated at most $N_h$ times (supplied by the user), after which the algorithm considers that the potential improvement is negligible. This translates into the constraint $\delta_r = -r/Q$, which results in choosing as a step size

$$\varepsilon = -\frac{r\, \partial_r A(c, r)}{Q\, |\nabla_c A(c, r)|}$$

with an initial value $Q = 2$. Finally, the algorithm to find the optimal $(c, r)$ sums up to


$r_{MC} \leftarrow \infty$
for ($j = 1 : N_g$) {
    $c \leftarrow \mu_j$ ; $Q \leftarrow 2$
    find $r$ such that $A(c, r) = \gamma$
    do {
        $\varepsilon \leftarrow -\, r\, \partial_r A(c, r) \,/\, \big(Q\, |\nabla_c A(c, r)|\big)$
        $\delta_c \leftarrow -\, \varepsilon\, \nabla_c A(c, r) \,/\, |\nabla_c A(c, r)|$
        $(c_{old}, r_{old}) \leftarrow (c, r)$
        $c \leftarrow c + \delta_c$
        find $r$ such that $A(c, r) = \gamma$
        if ($r_{old} < r$) { $(c, r) \leftarrow (c_{old}, r_{old})$ ; $Q \leftarrow 2Q$ }
    } while ($Q \le 2^{N_h}$)
    if ($r < r_{MC}$) { $(c_{MC}, r_{MC}) \leftarrow (c, r)$ }
}

At the end of the algorithm, $(c_{MC}, r_{MC})$ describes the locally optimal ball containing $X$ with a probability $1 - \gamma$.

C. Computation of the probability to be in a ball

The computation of the value of $A(c, r)$ implies the use of the generalized chi-square cumulative function, which can be efficiently computed e.g. using the algorithms in [3,4]. Indeed

$$A(c, r) = \sum_j \alpha_j \int_{x \notin B(c, r)} f_j(x)\, dx, \qquad (8)$$

where $\int_{x \notin B(c, r)} f_j(x)\, dx = \Pr(\chi_j \ge r)$ for a variable $\chi_j$ which follows a generalized non-central chi-square law [4,7] with appropriate parameters (see appendix). In the following sections, this value will be computed from the numerical routines associated to this law.

D. Derivative according to the center

The expression of the gradient $\nabla_c A(c, r)$ has similarities with the expression of $A(c, r)$, which makes its computation by the Monte-Carlo method comfortable. We derive the expression of the gradient by remarking that

$$A(c, r) = \int_{x' \notin B(c, r)} p_X(x')\, dx' = \int_{x \notin B(0, r)} p_X(x + c)\, dx, \qquad (9)$$

which allows differentiating under the integral sign:

$$\nabla_c A(c, r) \cdot \delta_c = \int_{x \notin B(0, r)} \nabla_c\, p_X(x + c) \cdot \delta_c\, dx = -\sum_{j=1}^{N_g} \int_{x' \notin B(c, r)} \alpha_j f_j(x')\, (x' - \mu_j)^T C_j^{-1}\, \delta_c\, dx'.$$

E. Derivative according to the radius

The following derivations refer to a mapping of the Euclidean coordinates into the generalized $d$-dimensional polar coordinates. However, we won't need to make the integrals explicit, since the expression in polar coordinates will only be used to obtain the formula of the derivative with respect to the radius. The obtained integral has a pleasant expression when mapped back into Euclidean coordinates, in which we will be able to evaluate the integral numerically. Using (9) as a starting point, the substitution $\sigma = x/|x|$ and $\rho = |x| - r$ leads to the generalized polar coordinates

$$A(c, r) = \int_{\sigma} \int_{\rho \ge 0} (\rho + r)^{d-1}\, p_X\big((\rho + r)\sigma + c\big)\, d\rho\; \sigma(d\sigma),$$

where $\sigma(\cdot)$ is the Lebesgue measure on the unit sphere [5], which we won't need to make more explicit for calculating the derivative:

$$\partial_r A(c, r) = (d-1) \int_{\sigma} \int_{\rho \ge 0} (\rho + r)^{d-2}\, p_X\big((\rho + r)\sigma + c\big)\, d\rho\; \sigma(d\sigma) - \sum_{j=1}^{N_g} \int_{\sigma} \int_{\rho \ge 0} (\rho + r)^{d-1}\, \sigma^T C_j^{-1}\big((\rho + r)\sigma + c - \mu_j\big)\, \alpha_j f_j\big((\rho + r)\sigma + c\big)\, d\rho\; \sigma(d\sigma)$$
$$= \int_{x' \notin B(c, r)} \sum_{j=1}^{N_g} \left[ \frac{d-1}{|x' - c|} - \frac{(x' - c)^T}{|x' - c|}\, C_j^{-1} (x' - \mu_j) \right] \alpha_j f_j(x')\, dx'.$$

IV. MONTE CARLO COMPUTATIONS

Our choice is oriented towards numerical integration, since the analytical formulae of the derivatives are unknown to the authors in the general case. Among numerical methods, we chose Monte-Carlo, which is known to be insensitive to the increase of dimensionality and is an efficient way to sample the integration space at points where the integrand has significant values (far from zero). Indeed, the derivatives $\nabla_c A(c, r)$ and $\partial_r A(c, r)$ can both be expressed, modulo the adequate choice of the functions $g_j$, as

$$\sum_{j=1}^{N_g} \int_{\mathbb{R}^d} f_j(x)\, g_j(x)\, dx,$$

where each $j$-th term of the sum can be computed thanks to the Monte-Carlo method with Importance Sampling. Thus the integral

$$\int_{\mathbb{R}^d} f_j(x)\, g_j(x)\, dx \qquad (10)$$

is numerically computed by sampling the iid variables $(X_{j,t})_{t = 1 \ldots N_d}$, each one according to the law $\mathcal{N}(\mu_j, m C_j)$, where $m$ is a value greater than 1. Such a choice of $m$ favors realizations of $X_{j,t}$ outside $B(c, r)$ (inside the ball, the functions $g_j$ are null). Thus the pdf of each random variable $X_{j,t}$ is $x \mapsto \mathcal{N}(x; \mu_j, m C_j)$. A Monte-Carlo integration approximates (10) by

$$\frac{\displaystyle\sum_{t=1}^{N_d} g_j(X_{j,t})\, \frac{f_j(X_{j,t})}{\mathcal{N}(X_{j,t}; \mu_j, m C_j)}}{\displaystyle\sum_{t=1}^{N_d} \frac{f_j(X_{j,t})}{\mathcal{N}(X_{j,t}; \mu_j, m C_j)}},$$

which tends to the ratio of the expectations

$$\frac{\displaystyle\int g_j(x)\, \frac{f_j(x)}{\mathcal{N}(x; \mu_j, m C_j)}\, \mathcal{N}(x; \mu_j, m C_j)\, dx}{\displaystyle\int \frac{f_j(x')}{\mathcal{N}(x'; \mu_j, m C_j)}\, \mathcal{N}(x'; \mu_j, m C_j)\, dx'},$$

which is indeed the desired value (10) when $N_d$ tends to infinity. However, the strength of importance sampling in this case is that only a small number $N_d$ of drawings suffices to make the algorithm work, because the possible errors in the computation of $\delta_c$ and $\varepsilon$ at each step are approximately corrected at the next step, thanks to the computation of the new values of $\delta_c$ and $\varepsilon$, which only take the current value of $(c, r)$ into consideration as a starting point for the descent.

V. BENCHMARKS

In a mono-dimensional setting, we compare the solution obtained by our algorithm to the globally optimal solution obtained by a greedy search on the discretized space. Our algorithm is assessed on 100 Gaussian Mixtures with randomly drawn parameters: $N_g$ is drawn as $G + 2$ where $G$ follows a geometric law of mean 4 (to avoid the trivial case $N_g = 1$), $\mu_j$ is drawn according to a centered Gaussian with a standard deviation equal to 10, and $C_j = K_j / 3$ (which are scalars in the 1D case) where $K_j$ is drawn according to a chi-square law with 3 degrees of freedom. The means are thus spaced 10 times the order of magnitude of the standard deviations of the Gaussians, to avoid too much overlapping — in which case the global maximum could be unique and thus each local maximum could trivially be the global optimum. Finally, the weights $\alpha_j$ are drawn as $\alpha_j = U_j / \sum_i U_i$, where the variables $U_j$ are iid chi-squared variables with 1 degree of freedom.

The random samples $X_{j,t}$ are drawn only once for a given set of parameters describing a GM. For each set of these parameters, we compare the obtained radius $r_{MC}$ with the globally optimal radius $r_G$ by computing their ratio. The evaluation is made on several sets of parameters $(m, N_d, \gamma)$, and for each one, a Box and Whisker plot of the ratio $r_{MC}/r_G$ is made in figure (1). Simulations show that the smaller the value of $\gamma$, the greater the value of $N_d$ must be to converge to the global optimum, at the expense of the computational load. Nevertheless, a good choice of the importance sampling parameter $m$ can spare the choice of a too large value for $N_d$. For well-chosen values of $m$ and $N_d$, the radius obtained by our algorithm is more than 75% of the time the global optimum or close to it (the ratio $r_{MC}/r_G$ is less than or equal to 1.2). The use of importance sampling enables convergence to the right result with few ($N_d = 10^2$) particles, even when $\gamma = 10^{-7}$.

VI. CONCLUSION

This paper has proposed a position estimator in the form of an algorithm which minimizes its associated confidence ball in the case when the position's probability density function is expressed as a Gaussian mixture in multiple dimensions. The algorithm has been assessed in one dimension, where a comparison against a greedy algorithm is possible. Numerical computations showed that the obtained confidence ball is most of the time the globally optimal one, or is close to the optimum.

Figure 1. Box and Whisker plots of the ratio of the obtained radius to the globally optimal radius.
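To make the procedure concrete, here is a hedged one-dimensional sketch of the multiple-descent idea: for each mixture mean, solve $A(c, r) = \gamma$ for $r$ by bisection (valid since $A(c, \cdot)$ is decreasing), move the center downhill on the required radius, halve the step when the radius grows, and keep the smallest ball. Mixture parameters, the finite-difference slope and all tolerances are illustrative assumptions rather than the paper's settings.

```python
# Hedged 1-D sketch of the "multiple steepest descent" idea: bisection on r,
# descent on c with step halving. Parameters are illustrative assumptions.
import math

WEIGHTS = [0.5, 0.5]
MEANS = [0.0, 6.0]
SIGMAS = [1.0, 1.0]
GAMMA = 0.05  # target probability of being OUTSIDE the ball

def prob_outside(c, r):
    """Exact A(c, r) for a 1-D Gaussian mixture via the Gaussian CDF."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    inside = sum(w * (phi((c + r - m) / s) - phi((c - r - m) / s))
                 for w, m, s in zip(WEIGHTS, MEANS, SIGMAS))
    return 1.0 - inside

def radius_for(c, lo=0.0, hi=100.0, iters=60):
    """Bisection on r: A(c, .) is decreasing, so solve A(c, r) = GAMMA."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if prob_outside(c, mid) > GAMMA:
            lo = mid
        else:
            hi = mid
    return hi

def local_descent(c):
    r = radius_for(c)
    step = 0.5 * r
    while step > 1e-6:
        # finite-difference slope of the required radius w.r.t. the center
        slope = (radius_for(c + 1e-4) - radius_for(c - 1e-4)) / 2e-4
        c_new = c - step * (1.0 if slope > 0 else -1.0)
        r_new = radius_for(c_new)
        if r_new < r:
            c, r = c_new, r_new
        else:
            step *= 0.5  # halve the step when the radius did not improve
    return c, r

# run one descent per mixture component and keep the smallest ball
best = min((local_descent(m) for m in MEANS), key=lambda cr: cr[1])
print(best)
```

For this symmetric two-component mixture the descent ends near the midpoint of the two means, which is where the smallest 95% ball is centered.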


VII. APPENDIX

The integral of a multivariate Gaussian pdf over a ball has no closed analytical formula, but it can be efficiently computed thanks to the numerical routines associated to the non-central chi-square and the generalized non-central chi-square cumulative functions. Consider a random variable $X \sim \mathcal{N}(\mu_j, C_j)$; then we have

$$\Pr\big(X \in B(0, r)\big) = \int_{B(0, r)} \mathcal{N}(x; \mu_j, C_j)\, dx = \int_{B(0, r)} \frac{1}{\sqrt{(2\pi)^d\, |C_j|}} \exp\!\left(-\tfrac{1}{2}(x - \mu_j)^T C_j^{-1} (x - \mu_j)\right) dx.$$

Because the covariance matrix can be diagonalized, $C_j = R^T \Lambda R$, we have

$$\Pr\big(X \in B(0, r)\big) = \int_{B(0, r)} \frac{1}{\sqrt{(2\pi)^d\, |\Lambda|}} \exp\!\left(-\tfrac{1}{2}(x - \mu_j)^T R^T \Lambda^{-1} R\, (x - \mu_j)\right) dx = \int_{B(0, r)} \frac{1}{\sqrt{(2\pi)^d\, |\Lambda|}} \exp\!\left(-\tfrac{1}{2}(y - m)^T \Lambda^{-1} (y - m)\right) dy,$$

where we use the substitution $y = Rx$ and $m = R\mu_j$. Hence

$$\Pr\big(X \in B(0, r)\big) = \Pr\big(|Y|^2 \le r^2\big) = \Pr\!\left(\sum_{i=1}^{d} \lambda_i Z_i^2 \le r^2\right), \qquad (11)$$

where $Y = RX \sim \mathcal{N}(m, \Lambda)$ and where the variables $Z_i = Y_i / \sqrt{\lambda_i} \sim \mathcal{N}(m_i / \sqrt{\lambda_i}, 1)$ are independent (because their decorrelation implies their independence in the Gaussian case). We recognize the cumulative function of the generalized non-central chi-square law [7] in equation (11). Numerical routines to efficiently compute its value can be found in [3,4].

In the particular case when $C_j = \sigma_j^2 I$, we have

$$\Pr\big(X \in B(0, r)\big) = \Pr\!\left(\sum_i Z_i^2 \le \frac{r^2}{\sigma_j^2}\right),$$

which can be evaluated thanks to the cumulative function of the non-central chi-square law [6,8], evaluated at $r^2 / \sigma_j^2$, with a non-centrality parameter $|\mu_j|^2 / \sigma_j^2$ and $d$ degrees of freedom. Its complement to one can be evaluated with the so-called survival function, known to be more precise in our case [6].

REFERENCES
[1] RTCA, Minimum Operational Performance Standards for Global Positioning System/Wide Area Augmentation System Airborne Equipment. 1828 L Street, NW Suite 805, Washington, D.C. 20036 USA.
[2] B. S. Pervan, S. P. Pullen, and J. R. Christie, "A multiple hypothesis approach to satellite navigation integrity," Navigation, vol. 45, no. 1, pp. 61-84, Spring 1998.
[3] R. B. Davies, "Numerical inversion of a characteristic function," Biometrika, vol. 60, pp. 415-417, 1973.
[4] R. B. Davies, "Algorithm AS 155: The distribution of a linear combination of chi-squared random variables," Journal of the Royal Statistical Society, Series C (Applied Statistics), January 1980.
[5] Wikipedia, "Spherical measure," http://en.wikipedia.org/wiki/Spherical_measure
[6] SciPy, http://www.scipy.org/
[7] Wikipedia, "Generalized chi-squared distribution," http://en.wikipedia.org/wiki/Generalized_chi-squared_distribution
[8] MATLAB, http://www.mathworks.com
[9] H. Pesonen, "A Framework for Bayesian Receiver Autonomous Integrity Monitoring in Urban Navigation," NAVIGATION, Journal of The Institute of Navigation, vol. 58, no. 3, pp. 229-240, Fall 2011.
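As a numerical cross-check of the appendix identity above in the isotropic case $C_j = \sigma^2 I$ with $d = 2$ (our own illustrative sketch, not from the paper): the probability of the ball equals the non-central chi-square CDF at $r^2/\sigma^2$ with non-centrality $|\mu_j|^2/\sigma^2$, which can be evaluated as a Poisson mixture of central chi-square CDFs and compared against plain Monte-Carlo.

```python
# Illustrative check of the isotropic-case identity (d = 2): Pr(X in B(0, r))
# equals the non-central chi-square CDF at r^2/sigma^2 with non-centrality
# |mu|^2/sigma^2. Series truncation and parameters are our own assumptions.
import math
import random

def chi2_cdf_even(x, dof):
    """Central chi-square CDF for even dof: 1 - exp(-x/2) * sum (x/2)^i / i!."""
    m = dof // 2
    term, s = 1.0, 1.0
    for i in range(1, m):
        term *= (x / 2.0) / i
        s += term
    return 1.0 - math.exp(-x / 2.0) * s

def ncx2_cdf(x, dof, nc, terms=80):
    """Non-central chi-square CDF as a Poisson mixture of central laws."""
    return sum(math.exp(-nc / 2.0) * (nc / 2.0) ** k / math.factorial(k)
               * chi2_cdf_even(x, dof + 2 * k) for k in range(terms))

def ball_prob(mu, sigma, r):
    """Pr(X in B(0, r)) for X ~ N(mu, sigma^2 I) in two dimensions."""
    nc = (mu[0] ** 2 + mu[1] ** 2) / sigma ** 2
    return ncx2_cdf((r / sigma) ** 2, 2, nc)

# Monte-Carlo comparison with arbitrary parameters.
mu, sigma, r = (1.0, 2.0), 0.8, 2.5
rng = random.Random(1)
n = 50000
mc = sum(1 for _ in range(n)
         if math.hypot(rng.gauss(mu[0], sigma),
                       rng.gauss(mu[1], sigma)) <= r) / n
print(ball_prob(mu, sigma, r), mc)
```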


Evaluating robustness and accuracy of the Ultra-wideband Technology-based Localization Platform under NLOS conditions Piotr Karbownik, Grzegorz Krukar, Andreas Eidloth, Norbert Franke, and Thomas von der Gr¨un Locating and Communication Systems Department Fraunhofer Institute for Integrated Circuits Nuremberg, Germany Email: [email protected]

Abstract—In this paper, we present measurement results of the experimental validation of the UWB localization platform under Line-Of-Sight (LOS) and Non-Line-Of-Sight (NLOS) conditions. The platform is based on the Time Difference of Arrival (TDoA) technique and an energy detection receiver. TDoA-based localization systems require a minimum of four anchors to provide 3D position data. In order to deal with NLOS scenarios, our platform uses eight receiving anchors. Additionally, we have developed positioning algorithms customized to improve the robustness and accuracy of the system.

I. INTRODUCTION
Indoor environments pose challenges for localization systems, mainly because of multipath effects and the presence of stationary and moving objects shadowing the Line-Of-Sight (LOS) between anchors and a tracked item. Ultra-wideband (UWB) technology might be a possible solution for indoor environments due to its specific properties [1]. In this paper, we present the architecture of the enhanced UWB localization platform as well as the results of experimental validation under LOS and Non-Line-Of-Sight (NLOS) conditions. Compared to the work presented in our last paper from the 2012 edition of the IPIN conference [2], where a scenario with LOS conditions and four receiver anchors was considered, the current version of the platform is evaluated under both LOS and NLOS conditions in 2D and 3D. Through the addition of four anchors we obtained an enhanced system with redundancy that can deal with static and moving obstacles. However, due to the low update rate of the platform, the focus was laid on static obstacles. Moreover, in order to increase the overall system robustness and accuracy, a new algorithm for position calculation was used. Instead of a basic algebraic solution (AS) [3], an algorithm based on Bayesian filtering [4] techniques was implemented. Due to the limited number of input channels of a LeCroy SDA 816Zi real-time oscilloscope, the UWB platform consists of two synchronized oscilloscopes. Together they provide eight inputs with an analog bandwidth of up to 16 GHz.

II. LOCALIZATION PLATFORM ARCHITECTURE
A Picosecond 3500D impulse generator, providing a pulse with a full width at half maximum of 65 ps and an amplitude of 8 V, was used as a transmitter. The UWB omnidirectional

antenna operating in the 2-11 GHz band played the role of the transmit antenna. Compared to the previous IPIN 2012 conference paper [2], the receiver anchor architecture remains unchanged. In order to obtain an eight-channel receiver, two LeCroy SDA 816Zi and SDA 820Zi real-time oscilloscopes (16 GHz and 20 GHz analog bandwidth, respectively) were connected and synchronized. The LeCroy SDA 816Zi oscilloscope, operating as a master, allowed real-time processing of all eight input signals. The graphical user interface, developed in Visual Basic and running on the master oscilloscope, executed the energy detection receiver algorithms with a sampling rate of 2 GS/s, as well as the position calculation functions [6]. In order to enable localization without synchronization between a transmitter and receivers, an algorithm based on the Time Difference of Arrival method was implemented [3]. With time of arrival values obtained from multiple anchors and processed by an Extended Kalman Filter (EKF), positioning accuracy in the centimeter range was achieved. The anchors were placed around the measurement site at different heights. To assess the impact of the anchors' spatial distribution on the localization accuracy, a geometric dilution of precision (GDOP) analysis was performed [7]. Height and GDOP can be read from Fig. 1.

III. MEASUREMENTS AND RESULTS
As a measurement site, a test room at the Fraunhofer Institute IIS in Nuremberg was chosen. Measurements were taken under LOS and NLOS conditions within a 4 m x 9 m area. The true position of the localized UWB transmitter (defined as the phase center of the transmit antenna) was determined with the use of the iGPS laser-based positioning and tracking system, with a typical accuracy of 200 µm [5]. For each measurement position, 20 samples were taken. The integration window size and the number of acquisitions were equal to 0.5 ns and 20, respectively [6], [8]. Fig. 1 shows the obtained results at a height of 106 cm together with the calculated GDOP. For the considered anchor spatial distribution, GDOP influences the localization accuracy only to a small extent, taking values of a degrading factor of up to 2.5 for all the measurement positions.
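A GDOP analysis of the kind mentioned above can be sketched as follows (a hypothetical illustration, not the authors' code): each row of the geometry matrix H holds the unit vector from the tag to one anchor plus a clock-bias column, and GDOP = sqrt(trace((H^T H)^-1)). The anchor coordinates below are invented and do not reproduce the room geometry of the paper.

```python
# Hedged sketch: GDOP for a ToA/clock-bias anchor layout, computed as
# sqrt(trace((H^T H)^-1)). Anchor coordinates are illustrative assumptions.
import math

def mat_inv(a):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def gdop(tag, anchors):
    """GDOP from the geometry matrix of unit vectors plus a clock column."""
    h = []
    for ax, ay, az in anchors:
        d = math.sqrt((tag[0]-ax)**2 + (tag[1]-ay)**2 + (tag[2]-az)**2)
        h.append([(tag[0]-ax)/d, (tag[1]-ay)/d, (tag[2]-az)/d, 1.0])
    hth = [[sum(row[i] * row[j] for row in h) for j in range(4)]
           for i in range(4)]
    return math.sqrt(sum(mat_inv(hth)[i][i] for i in range(4)))

# eight invented anchors around a 4 m x 9 m room, tag at 1.06 m height
anchors = [(0, 0, 0.5), (4, 0, 2.5), (0, 4.5, 2.5), (4, 4.5, 0.5),
           (0, 9, 0.5), (4, 9, 2.5), (2, 0, 2.5), (2, 9, 0.5)]
print(gdop((2.0, 4.5, 1.06), anchors))
```

Anchors mounted at a single height would make the vertical column of H nearly degenerate, which is one way the 3D error can degrade under shadowing.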


Fig. 1. The x-y position results with a GDOP visualization.

TABLE I
MEAN ABSOLUTE LOCALIZATION ERRORS FOR THE "LOS" SCENARIO.

| Position X [cm] | Position Y [cm] | AS ε2D [cm] | AS ε3D [cm] | EKF ε2D [cm] | EKF ε3D [cm] |
|---|---|---|---|---|---|
| 100.0 | 100.0 | 13.6 | 29.7 | 6.18 | 16.4 |
| 105.0 | 203.0 | 9.7 | 17.9 | 2.7 | 6.8 |
| 100.0 | 298.0 | 4.3 | 10.1 | 3.6 | 5.9 |
| 198.0 | 101.0 | 5.6 | 11.6 | 2.2 | 7.1 |
| 197.0 | 202.0 | 28.5 | 62.6 | 12.7 | 34.3 |
| 200.0 | 302.0 | 3.9 | 6.6 | 4.9 | 8.3 |
| 298.0 | 103.0 | 1.3 | 2.5 | 0.9 | 3.5 |
| 300.0 | 195.0 | 7.5 | 19.5 | 3.0 | 12.0 |
| 300.0 | 300.0 | 2.4 | 5.7 | 2.7 | 5.4 |
| 396.0 | 104.0 | 1.3 | 1.5 | 1.8 | 2.3 |
| 399.0 | 202.0 | 1.4 | 1.7 | 1.8 | 3.0 |
| 402.0 | 300.0 | 1.8 | 4.8 | 3.4 | 6.6 |
| 499.0 | 102.0 | 0.5 | 1.6 | 0.7 | 1.4 |
| 495.0 | 203.0 | 0.7 | 1.0 | 1.6 | 2.6 |
| 500.0 | 299.0 | 3.1 | 5.7 | 1.2 | 2.7 |
| 601.0 | 103.0 | 0.7 | 1.9 | 0.4 | 2.2 |
| 599.0 | 202.0 | 0.9 | 2.0 | 0.7 | 2.3 |
| 602.0 | 303.0 | 3.6 | 8.3 | 1.9 | 5.4 |
| 698.0 | 101.0 | 10.3 | 19.2 | 5.5 | 23.1 |
| 699.0 | 201.0 | 1.0 | 2.4 | 0.4 | 2.2 |
| 700.0 | 298.0 | 4.3 | 7.6 | 1.8 | 5.2 |
| 797.0 | 102.0 | 0.6 | 6.7 | 1.5 | 1.5 |
| 796.0 | 201.0 | 17.7 | 37.5 | 6.6 | 8.1 |
| 800.0 | 303.0 | 9.1 | 12.0 | 6.1 | 7.2 |
| MEAN: | | 5.6 | 11.7 | 3.1 | 7.3 |

A. Localization in a LOS Environment
The 24 static measurement positions were distributed over the measurement site approximately on a 1 m grid at a height of 106 cm, see Fig. 1 and Table I. During the whole measurement procedure, LOS conditions between the transmit antenna and the receive antennas were maintained. Table I shows the accuracy of the localization platform given by the mean absolute error in 2D, ε2D, and 3D, ε3D, for the AS and the EKF. The average ε3D is lower than 12 cm and 8 cm for the AS and the EKF, respectively.

B. Localization in a NLOS Environment
In this scenario, one measurement position in the center of the room (X=495.0 cm, Y=203.0 cm) at a height of 106 cm was chosen - see Table II.

TABLE II
MEAN ABSOLUTE LOCALIZATION ERRORS FOR THE "NLOS" SCENARIO (POSITION X=495.0 cm, Y=203.0 cm).

| Shadowed Anchor | εX [cm] | εY [cm] | εZ [cm] | ε2D [cm] | ε3D [cm] |
|---|---|---|---|---|---|
| a | 4.0 | 1.5 | 4.2 | 4.3 | 6.0 |
| b | 2.2 | 0.9 | 9.2 | 2.4 | 9.5 |
| c | 0.2 | 0.2 | 0.7 | 0.3 | 0.7 |
| d | 2.0 | 3.8 | 24.3 | 4.3 | 24.7 |
| e | 1.1 | 0.1 | 3.2 | 1.1 | 3.3 |
| f | 0.4 | 2.7 | 11.3 | 2.7 | 11.6 |
| g | 2.4 | 0.1 | 6.2 | 2.4 | 6.6 |
| h | 0.2 | 0.9 | 2.5 | 1.0 | 2.7 |

The NLOS conditions were obtained by placing a person between the transmit and receive antennas. For each shadowed receiver anchor, the position of the transmitter was captured. Table II shows the accuracy of the localization platform given by the mean absolute error for the EKF under NLOS conditions. As expected, the ε2D for the NLOS scenario is higher but still comparable to the error for the LOS scenario. However, the ε3D for the shadowed receiver anchors b, d and f is more than three times higher than the ε3D obtained under LOS conditions. This phenomenon might be related to the deteriorated vertical dilution of precision under NLOS conditions to those specific receiver anchors.

IV. CONCLUSIONS
In this paper, we presented the performance of the UWB localization platform. The obtained results show that it is possible to achieve a localization accuracy, averaged over all measurement positions, better than 8 cm and 25 cm in 3D under LOS and NLOS conditions, respectively. The usage of the EKF allows for improvement of positioning accuracy as well as for dealing with NLOS conditions between a transmitter and one of the receiver anchors. Further work includes analysis of a NLOS scenario with a higher number of shadowed receiver anchors, and a hardware implementation of the receiver.

REFERENCES
[1] Z. Irahhauten, H. Nikookar, and M. Klepper, "2D UWB Localization in Indoor Multipath Environment Using a Joint ToA/DoA Technique," Wireless Comm. and Networking Conf. WCNC 2012, pp. 2253-2257, April 2012.
[2] P. Karbownik, G. Krukar, M.M. Pietrzyk, N. Franke, and T. v.d. Gruen, "Experimental Validation of the Ultra-wideband Technology-based Localization Platform," Int. Conf. on Indoor Positioning and Indoor Navigation IPIN 2012, 3 pgs., Nov. 2012.
[3] R. Bucher and D. Misra, "A Synthesizable VHDL Model of the Exact Solution for Three-dimensional Hyperbolic Positioning System," VLSI Design, vol. 15 (2), pp. 507-520, 2002.
[4] J. Wendel, "Das Kalman-Filter," in Integrierte Navigationssysteme: Sensordatenfusion, GPS und Integrierte Navigation. Oldenbourg Wissenschaftsverlag GmbH, Munich, 2007, ch. 6, pp. 129-147.
[5] Nikon website, http://www.nikonmetrology.com, last accessed June 2013.
[6] M.M. Pietrzyk and T. v.d. Gruen, "Ultra-wideband Technology-based Ranging Platform with Real-time Signal Processing," Int. Conf. on Signal Processing and Comm. Systems ICSPCS 2010, 5 pgs., Dec. 2010.
[7] R.B. Langley, "Dilution of Precision," GPS World, vol. 10 (5), pp. 52-59, May 1999.
[8] P. Karbownik, G. Krukar, A. Eidloth, M.M. Pietrzyk, N. Franke, and T. v.d. Gruen, "Ultra-wideband Technology-based Localization Platform with Real-Time Signal Processing," Int. Conf. on Indoor Positioning and Indoor Navigation IPIN 2011, 2 pgs., Sept. 2011.


- chapter 2 -

Positioning Algorithm

Robust Step Occurrence and Length Estimation Algorithm for Smartphone-Based Pedestrian Dead Reckoning

Wonho Kang†, Seongho Nam‡, Youngnam Han†, and Sookjin Lee§

† Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
‡ Agency for Defense Development (ADD), Daejeon, Korea
§ Electronics and Telecommunications Research Institute (ETRI), Daejeon, Korea
e-mail: † {wonhoz, ynhan}@kaist.ac.kr, ‡ [email protected], § [email protected]

Abstract—Personal positioning systems are needed to build many location-based services. Pedestrian dead reckoning, a pedestrian positioning technique that uses the accelerometer to recognize step patterns, is an alternative method whose advantage is independence from infrastructure. However, the variation in walking pattern between individuals makes it difficult for such a system to detect displacement, which motivated the authors to develop a sensor-based positioning system that applies generally to all individuals. The experiments begin with a feasibility test of the accelerometer. In this work, a smartphone with an average sampling rate of 20 Hz is used to record the acceleration. The acceleration data are then analyzed to detect step occurrences with peak step occurrence detection and to estimate the step length using two dynamic step length estimation methods, a root-based and a log-based scheme. The experimental results show an average error of 2% in step occurrence detection, and standard deviations of 0.0498 m and 0.0320 m for the root-based and log-based step length estimation, respectively.

Keywords—Personal positioning systems, sensor-based positioning systems, pedestrian dead reckoning, smartphone

I. INTRODUCTION

Positioning is a technique used to determine an object's position in a frame of reference. Generally, positioning relies on infrastructure such as Global Positioning System (GPS) satellites or the Base Transceiver Stations (BTS) of a cell-phone service provider. However, indoor positioning systems still face limitations; e.g., the dependence on GPS satellite signals means that technique cannot be used inside buildings. Positioning with BTS cells can be used indoors seamlessly, but its accuracy is very coarse, ranging from about 100 m up to 35 km [1]. These limitations make such systems impractical for indoor positioning. Indoor positioning becomes important when users need to know their position in a building, for example firefighters who need to know their position during a rescue effort. An alternative for indoor positioning is pedestrian dead reckoning (PDR). The PDR technique determines the latest position of a pedestrian by adding the estimated displacement to a known starting position. The displacement is represented by the number of steps, where each step has its own step length. Detection of step occurrences and estimation of step length can be done using the accelerometer. Recent smartphones with integrated accelerometers give new impetus to using PDR as a pedestrian indoor positioning system: smartphones have a small physical form and low weight, making them easy to carry anywhere, and using the sensor integrated in a smartphone is less expensive than purchasing specialty hardware and more convenient for pedestrians to set up. In this work, experimental data are collected with a Samsung Galaxy Note running a simple Android program that records the acceleration. The structure of this paper is as follows: Section II describes the principle of pedestrian dead reckoning. This is followed by our experimental scenario in Section III and our experimental results in Section IV. Finally, we conclude our work in Section V.

II. PEDESTRIAN DEAD RECKONING

Pedestrian Dead Reckoning (PDR) is a pedestrian positioning solution that adds the distance traveled to a known starting position. The distance traveled can be determined by using the accelerometer to detect step occurrences and estimate the displacement. The accelerometer must be attached to the body to record the acceleration. Related research in previous studies has used special sensor modules attached to a helmet [2] or to the foot [3], [4], or a low-cost sensor integrated in a smartphone placed in a trouser pocket [5]-[7]. Basically, the implementation of the PDR technique includes several operations: orientation projection, gravity and noise filtering, step occurrence detection, and step length estimation [5], [6]. However, this work is a subsystem of a complete PDR system and does not include the heading orientation estimation process.

A. Orientation Projection

The accelerometer indicates 3-axis acceleration relative to the smartphone body frame itself. Therefore it can be projected from the x-y-z local coordinate system to the



personal coordinate system, yielding acceleration values in the front-side-up frame through the pitch, roll, and yaw angles of the smartphone. This projection resolves the arbitrary placement of the smartphone. The rotation matrices for the pitch (θ), roll (ϕ), and yaw (ψ) angles are

\[ R_\theta = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -\cos\theta & \sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}, \tag{1} \]

\[ R_\phi = \begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix}, \tag{2} \]

and

\[ R_\psi = \begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{3} \]

respectively. The rotation matrix that converts the local coordinate system to the personal coordinate system is obtained by multiplying the three rotation matrices:

\[ R = R_\psi R_\theta R_\phi = \begin{bmatrix} c\psi c\phi - s\psi s\theta s\phi & -s\psi c\theta & c\psi s\phi + s\psi s\theta c\phi \\ -s\psi c\phi - c\psi s\theta s\phi & -c\psi c\theta & -s\psi s\phi + c\psi s\theta c\phi \\ -c\theta s\phi & s\theta & c\theta c\phi \end{bmatrix} \tag{4} \]

where c and s stand for the cos and sin functions, respectively. The acceleration in the personal coordinate system is then obtained as

\[ a^{\mathrm{person}} = R\, a^{\mathrm{local}}. \tag{5} \]

B. Gravity and Noise Filtering

The acceleration signal must be filtered to obtain the desired output: a gravity-free, noise-free signal. Gravity is a low-frequency component that shifts the vertical axis by an offset of about 9.8 m/s². To eliminate its influence, the signal is filtered with a high-pass filter similar to [6], implemented as

\[ g = \alpha g + (1 - \alpha)\, a^{\mathrm{person}}_z \tag{6} \]

\[ a^{\mathrm{step}} = a^{\mathrm{person}}_z - g \tag{7} \]

The low-frequency component, represented by the mean of the waveform, is subtracted to remove gravity. The output of the high-pass filter is then processed by a low-pass filter to smooth the signal and reduce random noise. The low-pass filter is a moving average:

\[ a_{\mathrm{out}}(u) = \frac{1}{W} \sum_{v=-\frac{W-1}{2}}^{\frac{W-1}{2}} a_{\mathrm{in}}(u+v) \tag{8} \]

where a_out and a_in are the average-filtered output and unfiltered input acceleration signals, and W is the moving window, i.e., the number of points used in the moving average. The result of this filtering is a signal free of gravity with minimal random noise, as shown in Fig. 1 for different window sizes; the unfiltered raw signal is shown as the green line.

Fig. 1. Low-pass filtered acceleration with various window sizes (raw signal vs. W = 3, 5, 7, 9).
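As a concrete illustration, the high-pass/low-pass pipeline of equations (6)-(8) can be sketched as follows. This is a minimal sketch: the smoothing factor α = 0.9 and the synthetic 20 Hz walking signal are illustrative assumptions, not values from the paper, while W = 5 matches the paper's choice.

```python
import numpy as np

def remove_gravity(a_z, alpha=0.9):
    """High-pass filter (eqs. 6-7): track gravity with an exponential
    moving average g, then subtract it from the vertical acceleration."""
    g = a_z[0]
    out = np.empty_like(a_z)
    for i, a in enumerate(a_z):
        g = alpha * g + (1 - alpha) * a   # eq. (6)
        out[i] = a - g                    # eq. (7)
    return out

def moving_average(a, W=5):
    """Low-pass filter (eq. 8): centered moving average with odd window W."""
    half = (W - 1) // 2
    padded = np.pad(a, half, mode="edge")
    kernel = np.ones(W) / W
    return np.convolve(padded, kernel, mode="valid")

# Synthetic 20 Hz samples: gravity offset plus a 1.5 Hz walking-like
# oscillation and random noise (assumed values for illustration only).
np.random.seed(0)
t = np.arange(0, 5, 0.05)
a_z = 9.8 + 2.0 * np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(len(t))
a_step = moving_average(remove_gravity(a_z), W=5)
```

The filtered `a_step` oscillates around zero, with the 9.8 m/s² gravity offset removed and high-frequency noise smoothed by the W = 5 window.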

The magnitude of the filtered signals decreases as the window size increases. In this paper, the value of W is taken as 5, which was obtained empirically through signal analysis. The output of the filtering process can then be processed further to obtain information about step occurrences.

C. Step Occurrence Detection

The pedestrian's distance traveled is represented by the number of steps, so steps must be detected accurately to obtain a good estimate. There are two common step occurrence detection methods for analyzing the acceleration signal: peak step occurrence detection [4]-[6] and zero-crossing step occurrence detection [2], [7], [8]. The zero-crossing method counts signal crossings of the zero level to determine step occurrences. Researchers have usually used a time-interval threshold to reject false detections. This method is not appropriate for detecting user steps in a general setting, because it requires a specific time-interval threshold to decide whether a zero crossing represents a valid step occurrence. The problem is that the time interval between footfalls varies between subjects, so it is quite difficult to detect step events accurately with the zero-crossing method without a calibration process. The other method is to detect the peaks of the acceleration. According to [4], the peaks of the magnitude of the acceleration correspond to step occurrences because the magnitude remains the same whether or not the smartphone is tilted. In this paper, we also use the peak step occurrence detection method, but we use the vertical acceleration instead of the magnitude of the acceleration to resolve the problem of tilting, since the vertical acceleration is generated by the vertical impact when the foot hits the ground. To detect step occurrences, we employ a scheme with four kinds of thresholds. This scheme detects a step occurrence when the acceleration meets the peak, frontside, and backside thresholds,

and shows increasing and decreasing trends on the frontside and backside, respectively, within a certain interval. The frontside threshold is derived from the difference between the current peak and the previous valley, and the backside threshold from the difference between the current peak and the next valley. In this paper, the thresholds are constant values determined experimentally for all test subjects. Fig. 2 illustrates three valid steps taken from the walking pattern of a test subject. Red dots mark valid peaks, i.e., peak accelerations exceeding the peak threshold. The peak acceleration is shown as the blue dashed line, and the valley acceleration as the red dashed line with valley points as red dots. The differences between peak and valley accelerations are used as the frontside and backside thresholds. A step occurrence is detected when a valid peak meets the above thresholds and shows increasing and decreasing trends on the frontside and backside, in sequence, within a certain interval.

Fig. 2. Peak step occurrence detection on low-pass filtered gravity-free vertical acceleration.

D. Step Length Estimation

The total traveled distance is calculated by estimating the step length at every valid detected step occurrence. There are generally two methods for estimating step length: the static method and the dynamic method. The static method assumes that every valid step has the same length:

\[ l_k = l, \quad \forall k \tag{9} \]

where the constant l is normally in the range of 0.6 to 0.85 m. In contrast, the dynamic method assumes that each valid step has its own length, which can be estimated using the approach proposed in [9]. That approach assumes that the vertical bounce, which occurs as an impact of the walking activity, is proportional to the step length. The vertical bounce is calculated from the peak-to-peak difference at each step occurrence:

\[ l_k = \beta \sqrt[4]{a^{\mathrm{step}}_{\max} - a^{\mathrm{step}}_{\min}} \tag{10} \]

However, this approach was derived for waist-mounted pedestrian dead reckoning. In this paper the smartphone is held in the hand, so equation (10) cannot be applied directly. Considering the position of the smartphone, equation (10) is modified to

\[ l_k = \beta \sqrt[4]{a^{\mathrm{step}}_{\max} - a^{\mathrm{step}}_{\min}} + \gamma \tag{11} \]

where γ is an offset. Moreover, log-based step length estimation is slightly more accurate than the root-based version, since the range of the log function is much wider than that of the root function. For this reason, the equation

\[ l_k = \beta \log\!\left[ a^{\mathrm{step}}_{\max} - a^{\mathrm{step}}_{\min} \right] + \gamma \tag{12} \]

is used for step length estimation in this paper.

Fig. 3. Root-based step length estimation on 0.4 to 1 m predefined distance intervals.

Fig. 4. Log-based step length estimation on 0.4 to 1 m predefined distance intervals.
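The root-based and log-based estimators of equations (11) and (12) can be sketched as below. The coefficients β and γ are illustrative placeholders, since the paper determines them experimentally for hand-held smartphones.

```python
import math

def step_length_root(a_max, a_min, beta=0.5, gamma=0.1):
    """Eq. (11): root-based dynamic step length from the per-step
    peak-to-peak vertical acceleration (a_max - a_min)."""
    return beta * (a_max - a_min) ** 0.25 + gamma

def step_length_log(a_max, a_min, beta=0.3, gamma=0.4):
    """Eq. (12): log-based dynamic step length, reported in the paper
    as slightly more accurate than the root-based variant."""
    return beta * math.log(a_max - a_min) + gamma
```

In both variants a larger vertical bounce (a stronger heel strike) maps to a longer estimated step; the two differ only in how strongly they compress large bounce values.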


Fig. 5. Flow chart of the overall system (a) and of step occurrence detection (b). The overall system proceeds through orientation projection, high-pass filtering (gravity elimination), low-pass filtering (noise elimination), step occurrence detection, and step length estimation. A candidate step is accepted as a step occurrence only if it passes the peak point threshold, frontside threshold, backside threshold, and trend threshold checks in sequence.
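The four-threshold acceptance logic of the step occurrence detector can be sketched as follows. The threshold values and the trend window below are illustrative assumptions, since the paper only states that its constants were determined experimentally.

```python
def detect_steps(a, peak_th=2.0, front_th=3.0, back_th=3.0, trend=3):
    """Four-threshold peak detection on the filtered vertical acceleration.
    A candidate peak must (1) exceed the peak point threshold, (2) rise
    above the previous valley by front_th, (3) fall below the next valley
    by back_th, and (4) show monotonic increase/decrease over `trend`
    samples on the frontside/backside."""
    steps = []
    n = len(a)
    for i in range(trend, n - trend):
        if a[i] <= peak_th:                                  # (1) peak point threshold
            continue
        if not all(a[i - k - 1] <= a[i - k] for k in range(trend)):
            continue                                         # (4) increasing frontside trend
        if not all(a[i + k] >= a[i + k + 1] for k in range(trend)):
            continue                                         # (4) decreasing backside trend
        prev_valley = min(a[i - trend:i])
        next_valley = min(a[i + 1:i + 1 + trend])
        if a[i] - prev_valley >= front_th and a[i] - next_valley >= back_th:
            steps.append(i)                                  # (2), (3) frontside/backside thresholds
    return steps

# Two synthetic step impacts in a toy vertical-acceleration trace.
trace = [0, 1, 2, 5, 2, 1, 0, 0, 1, 2, 5, 2, 1, 0]
detected = detect_steps(trace)
```

Run on the toy trace, the detector reports the two peak indices and rejects the flat segment between them, mirroring the candidate-step filtering in Fig. 5(b).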

Fig. 3 depicts the root-based step length estimation and Fig. 4 the log-based step length estimation, where the black dots represent the estimated step length on the y-axis against the reference step length on the x-axis. These results are from an experiment in which the user walks predefined distance intervals of 0.4 to 1 m. The results of the two methods look almost the same in the figures, but the log-based step length estimation is slightly more accurate than the root-based one, as shown in Section IV.

Fig. 6. Experimental scenario: the smartphone is held in the hand, as in normal phone use, while the user walks along a straight path.

III. EXPERIMENTAL SCENARIO

In order to evaluate the reliability of the system in detecting displacement under varying walking patterns without calibration, actual walking tests were performed. The experiments were conducted in the 7th-floor hallway of the Information Technology Convergence building, Korea Advanced Institute of Science and Technology, Korea. We used the accelerometer integrated in a Samsung Galaxy Note running the Android Ice Cream Sandwich operating system. The acceleration values were then processed in Matlab following the procedure shown in Fig. 5, where Fig. 5(a) depicts the flowchart of the overall system and Fig. 5(b) that of step occurrence detection. In the experiments, the smartphone was held in the hand, as shown in Fig. 6, and it was assumed that there were no obstacles in front of the subject.

IV. EXPERIMENTAL RESULTS

A. Eligibility of the Smartphone Sensor

The eligibility of a sensor is judged by its sampling frequency, which indicates how fast the sensor samples data. From several tests, our accelerometer has a sampling rate of 20 Hz. This is sufficient to detect step occurrences, since the normal walking frequency of about 1.5 Hz is much lower than the sampling frequency.

B. Step Occurrence Detection

In order to detect valid step occurrences, we implement the four thresholds explained in Section II-C. This scheme is applied to all test subjects without an individual calibration process to fit their walking patterns. To compare the error across different distances, we use the percentage error, calculated from the difference between the actual number of steps and the number of detected steps. Fig. 7 shows the detected step occurrences when, for instance, a user was asked to walk 16 steps. The step occurrence detection error over the experimental results of 20 test subjects was about 2%, which shows that the method is quite reliable in detecting steps without an individual calibration process.

C. Step Length Estimation

As explained in Section II-D, the step length can be calculated by root-based and log-based step length estimation


processes at every valid detected step occurrence. A comparison of the step lengths estimated by the two methods is shown in Fig. 8 and Fig. 9. Fig. 8 depicts the experimental result where the user walks predefined distance intervals of 0.4 to 1 m, and Fig. 9 the case where a user was asked to walk 0.7 m predefined distance intervals. The log-based scheme estimates step length better than the root-based one, as indicated by its smaller standard deviation: 0.0320 m for the log-based method versus 0.0498 m for the root-based one.

Fig. 7. Peak step occurrence detection on low-pass filtered gravity-free acceleration.

Fig. 8. Comparison of root-based and log-based step length estimation on 0.4 to 1 m predefined distance intervals.

Fig. 9. Comparison of root-based and log-based step length estimation on 0.7 m predefined distance intervals.

V. CONCLUSION

This paper presents a positioning system that can be used generally, without an individual calibration process. The system focuses on displacement estimation using the accelerometer integrated in a smartphone held in the hand. Step occurrence detection over various walking patterns without calibration yields an average error of about 2%, showing that the four-threshold peak step occurrence detection is quite reliable for detecting steps in a general setting. When a step event is detected, the step length must be determined to estimate the displacement. In this work, step length estimation is performed using the root-based and log-based dynamic methods. The log-based dynamic method gives better estimates than the root-based one, as confirmed by the smaller standard deviation of the estimated step length in the experiments.

REFERENCES

[1] N. Deblauwe, GSM-based Positioning: Techniques and Applications. Asp/Vubpress/Upa, 2008.
[2] S. Beauregard and H. Haas, "Pedestrian dead reckoning: A basis for personal positioning," Proc. 3rd Workshop on Positioning, Navigation and Communication (WPNC'06), 2006, pp. 27-35.
[3] A. Jimenez, F. Seco, C. Prieto, and J. Guevara, "A comparison of pedestrian dead-reckoning algorithms using a low-cost MEMS IMU," IEEE Int. Symp. on Intelligent Signal Processing (WISP 2009), 2009, pp. 37-42.
[4] J. W. Kim, H. J. Jang, D.-H. Hwang, and C. Park, "A step, stride and heading determination for the pedestrian navigation system," Journal of Global Positioning Systems, vol. 3, no. 1-2, pp. 273-279, 2004.
[5] Y. Jin, H.-S. Toh, W.-S. Soh, and W.-C. Wong, "A robust dead-reckoning pedestrian tracking system with low cost sensors," IEEE Int. Conf. on Pervasive Computing and Communications (PerCom 2011), 2011, pp. 222-230.
[6] I. Bylemans, M. Weyn, and M. Klepal, "Mobile phone-based displacement estimation for opportunistic localisation systems," Third Int. Conf. on Mobile Ubiquitous Computing, Systems, Services and Technologies (UBICOMM'09), 2009, pp. 113-118.
[7] S. Ayub, X. Zhou, S. Honary, A. Bahraminasab, and B. Honary, "Indoor pedestrian displacement estimation using smart phone inertial sensors," International Journal of Innovative Computing and Applications, vol. 4, no. 1, pp. 35-42, 2012.
[8] S. Shin, C. Park, J. Kim, H. Hong, and J. Lee, "Adaptive step length estimation algorithm using low-cost MEMS inertial sensors," IEEE Sensors Applications Symposium (SAS'07), 2007, pp. 1-5.
[9] H. Weinberg, "Using the ADXL202 in pedometer and personal navigation applications," Analog Devices AN-602 application note, 2002.

Context Aware Adaptive Indoor Localization using Particle Filter

Yubin Zhao, Yuan Yang, Marcel Kyas
Computer Systems and Telematics, Institute of Computer Science, Freie Universität Berlin
Email: [email protected], [email protected], [email protected]

Abstract—Range-based wireless positioning systems suffer from high noise in indoor environments, and positioning algorithms that use building maps and non-line-of-sight (NLOS) information to obtain the position are complicated. We propose a low-complexity context-aware adaptive particle filtering scheme to improve the tracking performance of indoor positioning systems. It combines three methods: mobility behavior prediction, constraint sampling, and weight adaptation. (1) Mobility behavior prediction: we divide the building layout into several regions and predict which region the target will occupy in the next interval by applying a linear transition prediction function. (2) Constraint sampling: to obtain effective particle samples, we introduce a constraint sampling method. The constraint conditions are constructed from the measurement constraints and the layout region obtained in step (1). The measurement constraints are set up through the min-max algorithm, which is robust to ranging noise. The particles are then uniformly sampled within the constraint conditions. (3) Weight adaptation: to obtain an accurate estimate, a low-complexity weight adaptation method is designed to reduce the impact of measurement noise. Experimental results demonstrate that our context-aware adaptation scheme achieves accurate estimation with low computational complexity.

Index Terms—indoor localization, particle filter, weight adaptation, context aware.

I. INTRODUCTION

Recently, there has been growing interest in indoor localization techniques that rely on in-building communication infrastructure. Wireless systems determine the location of a mobile target from measurements taken on the transmitted signals, e.g., received signal strength (RSS), time of arrival (TOA), or angle of arrival (AOA), by the nodes in the wireless network. A major challenge for indoor location algorithms is robustness to the highly dynamic and unpredictable in-building wireless environment. The particle filter is one effective solution that is feasible and adaptable for implementation in non-linear and non-Gaussian environments [1], [2]. It can achieve highly accurate estimation with unreliable measurements. Integrated with building information, such as map matching, it can avoid implausible motion estimates such as walking through a wall or jumping out of the building [3], and it also reduces the estimation error [4]. However, using building information still has drawbacks in a real system. First, methods that use building information, such as map matching, require a large database, and the model is quite complicated. Second, some constraint methods assume prior knowledge of the target movement, which is not feasible in real scenarios. For instance, particle elimination methods know that the target moves along the hallway, so they eliminate particles that are not in the hallway. In the real world, however, targets can move anywhere they want; if the prior information is wrong, the tracking path is wrongly limited to the constraint region. Finally, even if the prior information is correct, measurement noise still influences the estimation significantly. We propose a low-complexity particle filter scheme that integrates the target's motion context and building information. First, we predict the mobile target's motion behavior and divide the indoor building into several possible regions. The region the target belongs to is predicted with the linear prediction equation of the particle filter. With the detected region, joint constraint conditions are constructed from the region information and the measurement information, and the particle samples are generated within these constraint conditions. Finally, the estimate is obtained using our weight adaptation method. Our method is robust to measurement noise and to inaccurate layout constraints, and it achieves highly accurate estimation. Since less building information is required, the computational complexity is low.

II. SYSTEM MODEL

In a range-based wireless positioning system, the mobile device with unknown position, such as a mobile sensor node, smartphone, or robot, is called the target. The wireless devices with known positions that measure the ranges (or distances) to the target are called anchors. In our system, the range measurement is based on time-of-flight (TOF) ranging [5]. The measurement for each anchor is formulated as

\[ z^j_t = \sqrt{(X_t - p^j_x)^2 + (Y_t - p^j_y)^2} + n^j_t \tag{1} \]

where z^j_t denotes the measurement for the jth anchor, x_t = [X_t, Y_t]^T is the target's coordinate vector, [p^j_x, p^j_y]^T denotes the anchor's position, and n^j_t is the measurement noise, n^j_t ∼ N(μ^j_t, R^j_t).

III. MOTION DETECTION USING BUILDING LAYOUT

The building consists of rooms and hallways, and the target behaves differently in rooms and in hallways. Thus we divide the building layout into several regions according to the motion behavior within the building, and record the coordinates of each region as a constraint condition. If the target is predicted to move in a region, one constraint condition is the coordinates of that region. No additional information



Fig. 1. Region partition: Type I: room or corridor without cross; Type II: corridor on the cross.

is recorded in our system, such as non-line-of-sight (NLOS) conditions, so the complexity is quite low. It is easy to define a room as a single region. However, the movement of the target in a hallway can differ, so we divide hallways into two types of region, as shown in Fig. 1. The first type is the region at a cross: here the target can turn right or left and can also move forward or backward, so the constraint condition is less reliable and should not restrict the target estimation. The second type is a corridor without corners or crossings: the target can only move forward or backward, so the constraint conditions are reliable in this region, which helps us adapt the particle weights. We use a linear prediction function to predict the target movement and estimate the region according to that movement:

\[ x_t = F_t x_{t-1} + q_t \tag{2} \]

where x_t = [X_t, Y_t]^T is the target's movement state, F_t is the linear transition matrix, x_{t−1} is the previous state, and q_t is the prediction noise, q_t ∼ N(0, Q_t). The region is chosen based on

\[ X^k_{\min} \le X_t \le X^k_{\max}, \qquad Y^k_{\min} \le Y_t \le Y^k_{\max} \tag{3} \]

where [X^k_{\min}, X^k_{\max}, Y^k_{\min}, Y^k_{\max}]^T denotes the coordinates of region k. The constraint conditions for particle sampling are also based on (3).
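A minimal sketch of the prediction step in equations (2)-(3): the paper does not specify F_t, so a constant-velocity model over [X, Y, V_x, V_y] and a hypothetical list of rectangular regions are assumed here for illustration.

```python
import numpy as np

# Constant-velocity transition over state [X, Y, Vx, Vy]; dt is the interval.
# This is one common instantiation of the paper's linear transition matrix.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])

# Hypothetical region list: (Xmin, Xmax, Ymin, Ymax) rectangles, e.g. two
# corridors (Type I) meeting at a cross (Type II).
regions = [(0, 10, 0, 3), (10, 13, 0, 10), (0, 10, 3, 10)]

def predict_region(x_prev):
    """Eqs. (2)-(3): propagate the previous state through F (noise-free)
    and return the index of the region containing the predicted position."""
    x_pred = F @ x_prev                                   # eq. (2)
    X, Y = x_pred[0], x_pred[1]
    for k, (xmin, xmax, ymin, ymax) in enumerate(regions):
        if xmin <= X <= xmax and ymin <= Y <= ymax:       # eq. (3)
            return k, x_pred
    return None, x_pred

# A target at (9.5, 1.0) moving right at 1 m/s is predicted into region 1.
k, x_pred = predict_region(np.array([9.5, 1.0, 1.0, 0.0]))
```

The returned region index selects which rectangle's coordinates become the first constraint condition for the sampling step.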

Fig. 2. The constraint conditions drawn by the min-max algorithm.

IV. CONSTRAINT SAMPLING

The region constraint is the first constraint condition for particle sampling. However, if the motion prediction is not accurate, the region constraint will lead to a wrong estimation. Thus, we propose a second constraint condition: the measurement constraint. The min-max algorithm is robust and simple: it draws a rectangular, box-like area according to the range measurements, as shown in Fig. 2, and its estimation error does not increase when the measurement error is high. We use it to draw a second constraint region:

\[ s^{\min}_{X,t} = \max_j \{ p^j_X - z^j_t \}_{j=1}^{N}, \quad s^{\max}_{X,t} = \min_j \{ p^j_X + z^j_t \}_{j=1}^{N}, \quad s^{\min}_{Y,t} = \max_j \{ p^j_Y - z^j_t \}_{j=1}^{N}, \quad s^{\max}_{Y,t} = \min_j \{ p^j_Y + z^j_t \}_{j=1}^{N} \tag{4} \]

where (p^j_X, p^j_Y)^T denotes the jth anchor's position and z^j_t is the range measurement for the jth anchor. We then combine the two conditions into an integrated constraint, again based on the min-max algorithm:

\[ \max(X^k_{\min}, s^{\min}_{X,t}) \le X_t \le \min(X^k_{\max}, s^{\max}_{X,t}), \qquad \max(Y^k_{\min}, s^{\min}_{Y,t}) \le Y_t \le \min(Y^k_{\max}, s^{\max}_{Y,t}) \tag{5} \]

According to the maximum-entropy principle, the particles are uniformly sampled within (5).

V. WEIGHT ADAPTATION

A. Predicted Measurement

To further reduce the measurement noise, we propose a weight adaptation method. First, we make a measurement prediction. Let \(\hat{x}_t\) denote the prediction of x_t according to (2):

\[ \hat{x}_t = F_t x_{t-1} \tag{6} \]

where x_{t−1} is the estimate at the previous time t − 1. Accounting for the prediction noise q_t, we can write \(\hat{x}_t\) as

\[ \hat{x}_t = x_t + q_t \tag{7} \]

where q_t is assumed to be additive noise following the normal distribution q_t ∼ N(0, Q_t), with Q_t the covariance at time t. We then obtain a predicted measurement for the sensors:

\[ \hat{z}_t = h_t(\hat{x}_t) = h_t(x_t + q_t) \tag{8} \]
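Equations (4)-(5) can be sketched as follows. The anchor positions, the true position used only to synthesize noisy ranges, and the single rectangular layout region are all hypothetical values for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def minmax_box(anchors, ranges):
    """Eq. (4): intersect the axis-aligned boxes implied by each range."""
    a = np.asarray(anchors, dtype=float)  # shape (N, 2)
    z = np.asarray(ranges, dtype=float)   # shape (N,)
    x_min = np.max(a[:, 0] - z); x_max = np.min(a[:, 0] + z)
    y_min = np.max(a[:, 1] - z); y_max = np.min(a[:, 1] + z)
    return x_min, x_max, y_min, y_max

def sample_particles(region, meas_box, n=100):
    """Eq. (5): intersect the region and measurement boxes, then sample
    particles uniformly inside the intersection."""
    rx0, rx1, ry0, ry1 = region
    mx0, mx1, my0, my1 = meas_box
    x0, x1 = max(rx0, mx0), min(rx1, mx1)
    y0, y1 = max(ry0, my0), min(ry1, my1)
    xs = rng.uniform(x0, x1, n)
    ys = rng.uniform(y0, y1, n)
    return np.column_stack([xs, ys])

# Hypothetical setup: three anchors, a true target at (4, 3) used only to
# synthesize noisy TOF ranges, and a 10 m x 10 m layout region.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([4.0, 3.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) + rng.normal(0, 0.5)
          for a in anchors]
box = minmax_box(anchors, ranges)
particles = sample_particles((0, 10, 0, 10), box, n=200)
```

Every sampled particle lies inside the intersection of the layout rectangle and the min-max measurement box, so no proposal mass is wasted on positions the ranges already exclude.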

B. Belief Factor θ

The belief factor θ is a tuning parameter for the predicted measurement; it adapts the measurement z_t toward the actual measurement h_t(x_t). The adaptive likelihood function is constructed as

\[ p_{AL}(z_t \mid x^i_t) = \pi_v\!\left( \theta \hat{z}_t + (1-\theta) z_t - z^i_t \right) \tag{9} \]

where p_{AL}(·) denotes the adaptive likelihood.

C. Optimal θ

We formulate the adaptation as a convex optimization problem that minimizes the distance between our adapted measurement and the actual measurement. The objective function is

\[ \theta = \arg\min \left\| h_t(x_t) - \left[ \theta \hat{z}_t + (1-\theta) z_t \right] \right\|^2 \tag{10} \]

which is a least-squares approximation problem. Since \(\hat{z}_t\) is a non-linear function of the prediction noise q_t according to (8), it is difficult to obtain an analytical optimum. Thus, we linearize (8) with a first-order Taylor series expansion at x_t:

\[ \hat{z}_t \approx h_t(x_t) + \frac{\partial h_t(x_t)}{\partial x_t} q_t \tag{11} \]

where ∂h_t(x_t)/∂x_t is the partial derivative of h_t(x_t) with respect to x_t. Substituting (11) and z_t = h_t(x_t) + v_t into (10), we obtain

\[ \left\| h_t(x_t) - \left[ \theta \hat{z}_t + (1-\theta) z_t \right] \right\|^2 \approx \left\| \theta \frac{\partial h_t(x_t)}{\partial x_t} q_t + (1-\theta) v_t \right\|^2 \tag{12} \]

Therefore, the problem is converted into a linear optimization problem, solvable analytically by expressing the objective as the convex quadratic function

\[ F_t(\theta) = \theta \frac{\partial h_t(x_t)}{\partial x_t} Q_t \left[ \frac{\partial h_t(x_t)}{\partial x_t} \right]^T \theta^T + (1-\theta) R_t (1-\theta)^T \tag{13} \]

where Q_t and R_t are the covariances of q_t and v_t. The optimal θ is obtained if and only if

\[ \frac{\partial F_t(\theta)}{\partial \theta} = 2\theta \frac{\partial h_t(x_t)}{\partial x_t} Q_t \left[ \frac{\partial h_t(x_t)}{\partial x_t} \right]^T - 2 R_t + 2\theta R_t = 0 \tag{14} \]

which yields the unique θ:

\[ \theta = \frac{R_t}{\dfrac{\partial h_t(x_t)}{\partial x_t} Q_t \left[ \dfrac{\partial h_t(x_t)}{\partial x_t} \right]^T + R_t} \tag{15} \]
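A minimal numerical sketch of equation (15) for a single scalar range measurement; the Jacobian H, the prediction covariance Q_t, and the measurement variance R_t below are illustrative assumptions.

```python
import numpy as np

def optimal_theta(H, Q, R):
    """Eq. (15): belief factor balancing the projected prediction noise
    H Q H^T against the measurement noise variance R, for one scalar
    range measurement. H is the 1x2 Jacobian dh/dx evaluated at x_t."""
    hqht = float(H @ Q @ H.T)
    return R / (hqht + R)

# Illustrative numbers: unit direction from anchor to target as the range
# Jacobian, isotropic prediction noise, and a measurement variance R.
H = np.array([[0.6, 0.8]])   # row Jacobian of the range measurement
Q = np.eye(2) * 0.1          # prediction noise covariance Q_t
R = 0.4                      # measurement noise variance R_t
theta = optimal_theta(H, Q, R)
```

With these numbers H Q H^T = 0.1, so θ = 0.4 / 0.5 = 0.8: when the measurement variance R_t dominates the projected prediction noise, θ approaches 1 and the adapted measurement leans on the prediction; when R_t is small, θ approaches 0 and the raw measurement is trusted.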

Fig. 3. Building layout for indoor localization experiment and the robot trajectory. The triangles mark the positions of sensor nodes which are placed either in the offices or along the corridor.

MHz transceiver as radio transceiver for communication. The data collected from sensor nodes are also range measurement values which are based on TOA. At each measurement interval, the target carried by a robot is measured by sensor nodes, meanwhile, the robot recorded its actual coordinates in the building. Fig. 3 depicts the map of our experimental building. The triangles, which are randomly deployed, mark the sensor nodes’ positions. According to the statistical errors of measurements, it is hard to model the error to a typical distribution. In general, the expectation of measurement error is 1 m and the standard deviation is about 5 m. We implement three particle filter schemes in this experiment. The first one is a generic particle filter without any constraints, named as PF. The second one is the particle filter with map matching method, which consider the NLOS effect and building layout, named as M-PF. The last particle filter is our proposed method, named as CA-PF. The trajectories are shown in Fig. 4. The solid line indicates the ground truth trajectories. The triangles mark the anchor positions just as Fig. 3. The dash curves depict the estimation trajectories. The estimation accuracy comparisons are listed in Table I and II. As shown in Table I and II, if the range measurement is unreliable, particle filter with map matching can not achieve a high accurate estimation. The estimation error is even higher than the generic particle filter. Our context-aware based particle filter is highly robust and can achieve a very accurate estimation.


VI. EXPERIMENT AND RESULTS

We employ a reference system for indoor localization testbeds to examine our proposed algorithm. In this system, we deployed 17 wireless sensor nodes either along the corridor or in the offices of our research building. A robot carrying a sensor node as the target moved along the corridor of the building at constant speed while recording its own positions [5]. The error of the recorded positions is less than 15 cm, so they can be taken as the actual positions. All sensors integrate a nanoPAN 5375 RF module with a 2.4 GHz transceiver and 1 Mbps data rate for range measurement, an LPC 2387 micro-controller, and a CC1101 900

TABLE I
TRAJECTORY I: PERFORMANCE COMPARISON

Algorithm   MAE (m)   RMSE (m)   min error (m)   max error (m)
PF          0.2061    2.1439     0.0466          5.8189
M-PF        0.3216    2.3176     0.0362          17.1701
CA-PF       0.2501    1.5653     0.0393          6.6470

We vary the number of particles for each particle filter scheme and examine the estimation performance. The results are plotted in Fig. 5, which indicates that without a constraint condition the particle filter cannot achieve highly accurate estimates with few particles; e.g., the generic particle filter has a very high RMSE with 10 particles. Map matching can

62/278

Fig. 4. Estimation trajectories using the reference system: (a) Trajectory I; (b) Trajectory II. The solid line is the actual trajectory, the dashed curve the trajectory of CA-PF, and the triangles the anchors.

Fig. 5. Root Mean Square Error (RMSE) comparison for the different algorithms (PF, M-PF, CA-PF) with different numbers of particles.

TABLE II
TRAJECTORY II: PERFORMANCE COMPARISON

Algorithm   MAE (m)   RMSE (m)   min error (m)   max error (m)
PF          0.5438    2.2635     0.0404          7.3092
M-PF        0.4246    2.3973     0.0733          12.8521
CA-PF       0.4419    1.5467     0.0210          7.5943

provide an accurate estimate, but our method is even better; only 30 particles are sufficient to achieve a low RMSE, enabling fast processing. Fig. 6 illustrates the average processing delay for the three particle filter schemes. All three algorithms are highly optimized and tested in MATLAB. The processing delay clearly increases linearly with the number of particles. The gap between PF and M-PF results from the region detection method, whereas the gap between M-PF and our method is quite small. Our method has the highest delay, but it remains very short and thus does not affect the performance of the whole system.

VII. CONCLUSION

We propose a context-aware particle filter tracking algorithm that fuses layout information and measurement information to obtain the position of a mobile target. Our method

Fig. 6. Processing delay with different numbers of particles.

is adaptable to dynamic environments and robust to high wireless noise. The experimental results demonstrate that our method achieves highly accurate estimates with low processing delay. Future work will focus on hybrid indoor and outdoor tracking with geographic information.

REFERENCES
[1] X. Hu, T. Schon, and L. Ljung, "A Basic Convergence Result for Particle Filtering," IEEE Transactions on Signal Processing, vol. 56, no. 4, pp. 1337-1348, 2008.
[2] J. Prieto, S. Mazuelas, A. Bahillo, P. Fernandez, R. M. Lorenzo, and E. J. Abril, "Adaptive Data Fusion for Wireless Localization in Harsh Environments," IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1585-1596, 2012.
[3] G. Mao, B. Fidan, and B. Anderson, "Wireless Sensor Network Localization Techniques," Computer Networks, vol. 51, no. 10, pp. 2529-2553, 2007.
[4] Y. Qi, H. Kobayashi, and H. Suda, "Analysis of Wireless Geolocation in a Non-Line-of-Sight Environment," IEEE Transactions on Wireless Communications, vol. 5, no. 3, pp. 672-681, 2006.
[5] S. Schmitt, H. Will, B. Aschenbrenner, T. Hillebrandt, and M. Kyas, "A Reference System for Indoor Localization Testbeds," in International Conference on Indoor Positioning and Indoor Navigation (IPIN 2012), 2012, pp. 1-4.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Verification of ESPAR Antennas Performance in the Simple and Calibration Free Localization System Mateusz Rzymowski#1, Przemysław Woźnica#2, Łukasz Kulas#3 Department of Microwave and Antenna Engineering, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology Gdańsk, Poland 1 [email protected], [email protected], [email protected]

Abstract—This paper presents the results of simulations and measurements of an indoor localization system that uses Electronically Steerable Parasitic Array Radiator (ESPAR) antennas with a switched directional beam. The proposed antennas are dedicated to low-cost 2.4 GHz ISM applications where determination of the incoming signal direction is required. The antennas' performance is analyzed and verified in relation to positioning methods based on the simplest direction-of-arrival (DoA) algorithm. The object position is estimated from the incoming signal direction indicated by a pair of antennas. Existing analyses of ESPAR antenna performance with regard to DoA estimation methods are usually based on experimental measurements in which the negative influence of the environment is limited, or only certain operational angles are discussed. In this paper, all measurements were done in an office and warehouse environment and compared with corresponding ray-tracing simulations.

Keywords: switched-beam, ESPAR antenna, WSN, positioning, localization

I. INTRODUCTION

Determining an object's position indoors using RF signal properties is an important subject that has proved useful in application areas such as healthcare, asset management and safety systems [1]. Among Indoor Positioning Systems (IPS) based on radio wave properties we can distinguish systems that rely on [2]: RSS (Received Signal Strength), based on the received signal power level [3]; ToA (Time of Arrival), based on the radio signal propagation time [4]; TDoA (Time Difference of Arrival), based on differences in radio signal arrival times [5]; and DoA (Direction of Arrival), which uses antennas with a reconfigurable radiation pattern [6]. Reconfigurable antennas are beneficial for wireless network functionality [6]. The variability of the radiation patterns of such antennas can improve link quality, increase system range or reduce energy consumption. It is also the key issue for low-cost systems where determination of the incoming signal direction is required. Examples of reconfigurable antennas for such applications are ESPAR arrays [6-11]. They have a simple construction, with one active monopole surrounded by a defined number of passive elements. The main beam direction can be changed in angular steps that depend on the number of passive elements. Beam steering in ESPAR arrays is performed by electronic switches

that have to provide the required load for the parasitic elements, close to an open or short circuit. The switching circuits can be simplified by applying SPST (Single Pole Single Throw) keys (ON/OFF) instead of multiway switches. With an adequate RF switch configuration it is possible to obtain a directional beam. There are several methods of estimating the direction of the incoming signal that can be implemented on ESPAR antennas. The simplest and most popular approach is main beam switching [10-11]: the beam is swept in discrete steps in order to detect the strongest signal, which indicates the incoming signal direction. However, a wide main beam or a high backward radiation level can negatively influence the accuracy of the estimation, so the main goal is to obtain as narrow a beam as possible. Results reported so far show that DoA localization based on an ESPAR antenna can be significantly improved using advanced algorithms like MUSIC or ESPRIT [14-15], but in most cases the localization verification is conducted in an anechoic chamber and the algorithms employ simplified theoretical models of 2D environments. Such algorithms are difficult to implement, and results obtained in reflection-free environments are hard to reproduce in real test-beds, especially when many obstacles are present in the propagation path, as in an office or warehouse. In this paper a simple localization system using two ESPAR antennas simultaneously, proposed in [11], was simulated and verified. The system is based on the simplest DoA algorithm, which finds the direction in which the maximum signal value was received. In comparison to [11], the measured characteristics of the manufactured ESPAR antenna were used within a detailed model of the real environment to achieve more reliable simulation results. The simulations were also compared with real-environment measurements. Section II describes the antenna construction and measurements.
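The main-beam-switching idea described above can be sketched as follows; the 12-beam layout matches the twelve-element antenna used in this paper, but the helper functions and the bearing-intersection geometry are illustrative assumptions, not the paper's code:

```python
import math

def doa_from_rssi(rssi_per_beam):
    """Simplest switched-beam DoA: sweep all beams and report the direction
    of the beam with the strongest RSSI (beam centers assumed every
    360/len(rssi_per_beam) degrees, i.e. 30 degrees for 12 beams)."""
    best = max(range(len(rssi_per_beam)), key=lambda k: rssi_per_beam[k])
    return best * (360.0 / len(rssi_per_beam))

def intersect_bearings(p1, theta1, p2, theta2):
    """Locate a node from two reference positions and their estimated DoA
    bearings (degrees from the x-axis). Returns None for parallel bearings."""
    d1 = (math.cos(math.radians(theta1)), math.sin(math.radians(theta1)))
    d2 = (math.cos(math.radians(theta2)), math.sin(math.radians(theta2)))
    # Solve p1 + t*d1 = p2 + s*d2 by Cramer's rule.
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    if abs(det) < 1e-9:
        return None
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) + d2[0] * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

With two reference antennas, the localized node is estimated at the intersection of the two strongest-beam bearings, as in the system verified later in this paper.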
The next section presents the simulation results of the proposed system configuration. In section IV the real-environment measurements are discussed.

II. ANTENNA DESIGN AND MEASUREMENTS

The proposed antenna is presented in Fig. 1. It is a twelve-element ESPAR array with one active monopole in the center of the ground plane, realized as the top layer of the PCB base. The monopole is fed by an SMA connector, while the parasitic

978-1-4673-1954-6/12/$31.00 ©2012 IEEE


elements can be shorted or opened to the ground by the SPST switches connected to them on the bottom layer of the antenna. The opened elements act as directors, as the electromagnetic wave passes through them, while the shorted parasitic elements are referred to as reflectors because they

III. NUMERICAL SIMULATIONS

A. Scene setup
As the environment for the simulation, part of the floor of the Department of Microwave and Antenna Engineering was chosen (see Fig. 4 and Fig. 5). The setup consists of five rooms and two parts of a corridor. The materials and their electrical properties used in the environment model were provided by the software producer [12], [13]. Optical ray tracing was chosen as the simulation engine, using empirical coefficients to model the interactions between radio signals and the environment.

Fig. 1. Realized ESPAR antenna – top view.

reflect energy. The antenna was designed to operate in the 2.4 GHz frequency band and was realized on a 1.55 mm thick FR4 substrate with top-layer metallization. It is fed by a female SMA connector. The parasitic elements are silver-plated wires of 1.2 mm thickness. They are shorted or opened to the ground with NEC μPG20112TB switches placed on the bottom layer of the antenna, as illustrated in Fig. 2. This model was chosen because of its low insertion losses (about 0.3 dB) and quite good isolation (typically 25 dB). 56 pF DC-blocking capacitors were implemented at the input and outputs of the switching circuits. The RF switches are controlled and powered by an external driver based on an STM32 microcontroller. The driver provides a 3 V power supply and uses a special communication protocol to control the switching process, so that it can work autonomously or be steered from other devices (e.g. a PC or an RF module). The radiation pattern of the described antenna was measured in an anechoic chamber and is presented in Fig. 3.

Fig. 3. Measured 3D antenna radiation pattern.

The proposed setup is presented in Fig. 4, where the colors of the lines represent the materials used in the simulation and the highlighted area represents the simulation scene, consisting of three rooms and two parts of a corridor (see Fig. 5). Three antennas with the switched directional radiation pattern presented in Fig. 3 were used in the simulation. The antennas' positions are marked in Fig. 5 as blue dots and labeled consecutively a1, a2 and a3, while the simulated areas are labeled r1, r2, r3 (the three rooms) and c1, c2 (the two parts of the corridor). All the antennas were placed at the same height of 2.8 m, while the height of the predicted signal area is 1.5 m.

Fig. 4. The overall view of the simulation setup (see text for explanations).

Fig. 2. Realized ESPAR antenna – bottom view.


B. Ray-tracing results
For the ray-tracing simulation the resolution was set to 10 cm and the following limitations were established for each ray: at most four transmissions, four reflections and two diffractions. The results of the simulations for all three antennas and all possible main beam configurations are presented in Fig. 6.

Fig. 5. Simulation scene together with antennas positions (see text for explanations).

C. Localization accuracy
To determine the localized node's (LN) position, the DoA algorithm using only two reference nodes equipped with ESPAR antennas [11] was implemented. The algorithm determines the direction of the localized node's signal based on the highest received signal value associated with an ESPAR antenna configuration. If the determined directions are divergent, the convergent pair is estimated as the one with the smallest angular difference with respect to the original directions. Localization was performed with all the simulated points used as testing points. Two pairs of antennas were used: the first pair was antenna 1 and antenna 3, the second antenna 2 and antenna 3. The third pair of antennas was not taken into account because of its placement, inadequate for the algorithm's performance. The results, in the form of the Cumulative Distribution Function (CDF) of the localization errors for the whole scene, are presented in Fig. 7. The functions were calculated as normalized cumulative histograms of the localization estimation errors. The faster a function reaches the value one, the better the results (for example, the function argument at a CDF value of 0.5 indicates that 50% of the measurements have an error smaller than or equal to that argument value).
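The normalized cumulative histogram described above is a plain empirical CDF; a generic sketch (not the authors' code):

```python
import numpy as np

def error_cdf(errors):
    """Empirical CDF of localization errors, i.e. the normalized cumulative
    histogram: returns sorted errors and the cumulative fraction of samples."""
    e = np.sort(np.asarray(errors, dtype=float))
    frac = np.arange(1, len(e) + 1) / len(e)
    return e, frac

def error_at_fraction(errors, p):
    """Error value below which a fraction p of the measurements fall
    (p = 0.5 gives the median localization error)."""
    e, frac = error_cdf(errors)
    return e[np.searchsorted(frac, p)]
```

Reading off `error_at_fraction(errors, 0.5)` gives exactly the "argument at CDF equal 0.5" quantity mentioned in the text.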

Fig. 7. CDF for the whole scene (see text for explanations).

The mean and median error values are presented in Table I.

TABLE I. LOCALIZATION ACCURACY (IN METERS) - SIMULATION

antennas   mean     median
a1 – a3    2.3454   2.0236
a2 – a3    2.2515   2.0070

Fig. 6. Received power distribution for all antennas and all main beam directions.

IV. MEASUREMENTS

All measurements were done in the environment whose simplified model was simulated in the previous section. The environment can be described as a fusion of an office and a storehouse. The selected area was divided into a 4x8 grid covering 5.5 x 11.5 m. The locations of the measurement points are presented in Fig. 8 as red squares. The measurement set-up consists of three ESPAR antennas placed in different rooms and connected to RF transceivers. A transceiver with an ESPAR antenna acted as a reference node and measured the incoming signal strength from the localized node. It has to be mentioned that EM wave propagation in the real environment is much more complicated than in the modeled simulation scene because of the repository character of the rooms. Another issue related to the measurements is the fact that the modules used provide a low output power, which influences the measured


data quality, especially when the distance between the reference and localized nodes is large.

considered. The measurements should be repeated on a denser grid, and the output power of the modules has to be increased with regard to existing standards.

ACKNOWLEDGMENT
This work has been supported by the Polish National Centre for Research and Development under agreement LIDER/23/147/L-1/09/NCBiR/2010.

REFERENCES

Fig. 8. Measurement grid (see text for explanations).


In this case the DoA algorithm was also used to estimate the position of the localized node and, as before, the same pairs of antennas were used for the calculations. The results, in the form of mean and median error values, are presented in Table II.

TABLE II. LOCALIZATION ACCURACY (IN METERS) - MEASUREMENTS

antennas   mean    median
a1 – a3    3.65    2.68
a2 – a3    3.58    3.39


A significant difference between the simulated and measured results occurred: the measurement error is larger than in the simulation by more than one meter. As mentioned, this is because the model was highly simplified and did not include a number of obstacles that were present in the analyzed testbed, such as metal cupboards, moving people, and wood and metal furniture. Another reason is the low output power of the modules, which did not allow the position of the LN to be distinguished when it was placed in a different room than the reference node. This means that even if the signal source and one of the reference nodes were in the same room and the estimated location was right, the other reference node introduced a higher estimation error into the system because of the very low level of the received signal from all directions.


V. CONCLUSION

This paper presents the results of simulations and real-environment measurements of a localization system based on two ESPAR antennas with a switched directional beam. The testbed was modeled and simulated, and then the measurements in the real environment were carried out. The localization errors were calculated for both cases described in the previous sections. The comparison of the simulations and measurements confirms that the operation of localization systems is hard to simulate with simple models, as is done in many publications, because of the complex wave propagation within indoor environments. The results show that a simple algorithm that uses only a switched directional beam can be considered sufficient for robust localization. More sophisticated algorithms and additional localization methods are required to increase the accuracy. The creation of a more detailed model, especially with regard to metal obstacles, should be



[1] D. M. Taub, S. B. Leeb, E. C. Lupton, R. T. Hinman, J. Zeisel, and S. Blackler, "The Escort System: A Safety Monitor for People Living with Alzheimer's Disease," IEEE Pervasive Computing, vol. 10, no. 2, pp. 68-77, April-June 2011.
[2] A. Bensky, Wireless Positioning Technologies and Applications. GNSS Technology and Applications Series, Artech House, 2008.
[3] Chin-tseng Huang, Cheng-hsuan Wu, Yao-nan Lee, and Jiunn-tsair Chen, "A novel indoor RSS-based position location algorithm using factor graphs," IEEE Transactions on Wireless Communications, vol. 8, no. 6, pp. 3050-3058, June 2009.
[4] N. Patwari, A. O. Hero III, M. Perkins, N. S. Correal, and R. J. O'Dea, "Relative location estimation in wireless sensor networks," IEEE Transactions on Signal Processing, vol. 51, no. 8, pp. 2137-2148, Aug. 2003.
[5] Bin Xu, Ran Yu, Guodong Sun, and Zheng Yang, "Whistle: Synchronization-Free TDOA for Localization," in Distributed Computing Systems (ICDCS), 2011 31st International Conference on, pp. 760-769, 20-24 June 2011.
[6] L. Brás, N. Borges Carvalho, P. Pinho, L. Kulas, and K. Nyka, "A Review of Antennas for Indoor Positioning Systems," International Journal of Antennas and Propagation, vol. 2012, Article ID 953269, 14 pages, 2012. doi:10.1155/2012/953269
[7] J. R. Schlub, Junwei Lu, and T. Ohira, "Seven Element Ground Skirt Monopole ESPAR Antenna Design using a Genetic Algorithm and the Finite Element Method," IEEE Transactions on Antennas and Propagation, vol. 51, no. 11, pp. 3033-3039, Nov. 2003.
[8] R. Schlub and D. V. Thiel, "Switched Parasitic Antenna on a Finite Ground Plane With Conductive Sleeve," IEEE Transactions on Antennas and Propagation, May 2004.
[9] H. Kawakami and T. Ohira, "Electrically steerable passive array radiator (ESPAR) antennas," IEEE Antennas and Propagation Magazine, vol. 47, no. 2, pp. 43-49, 2005.
[10] E. Taillefer, A. Hirata, and T. Ohira, "Direction-of-arrival estimation using radiation power pattern with an ESPAR antenna," IEEE Transactions on Antennas and Propagation, vol. 53, no. 2, pp. 678-684, Feb. 2005.
[11] M. Sulkowska, K. Nyka, and L. Kulas, "Localization in Wireless Sensor Networks Using Switched Parasitic Antennas," in Proceedings of the 18th International Conference on Microwaves, Radar and Wireless Communications (MIKON '10), pp. 1-4, June 2010.
[12] (2012) AWE Communications website. [Online]. Available: http://www.awe-communications.com/Manuals/
[13] (2012) AWE Communications website. [Online]. Available: http://www.awe-communications.com/Download/DemoData/Databases_Material.zip
[14] C. Plapous, Jun Cheng, E. Taillefer, A. Hirata, and T. Ohira, "Reactance domain MUSIC algorithm for electronically steerable parasitic array radiator," IEEE Transactions on Antennas and Propagation, vol. 52, no. 12, pp. 3257-3264, Dec. 2004. doi: 10.1109/TAP.2004.83643
[15] An-min Huang, Qun Wan, Xin-Xin Chen, and Wan-Lin Yang, "Enhanced Reactance-Domain ESPRIT Method for ESPAR Antenna," in TENCON 2006, 2006 IEEE Region 10 Conference, pp. 1-3, 14-17 Nov. 2006. doi: 10.1109/TENCON.2006.344036
2, pp.43 -49 2005 Taillefer, E., Hirata, A., Ohira, T., "Direction-of-arrival estimation using radiation power pattern with an ESPAR antenna", Antennas and Propagation, IEEE Transactions on, On page(s): 678 - 684 Volume: 53, Issue: 2, Feb. 2005 M. Sulkowska, K. Nyka, L. Kulas, “Localization in Wireless Sensor Networks Using Switched Parasitic Antennas”, in Proceedings of the 18th International Conference on Microwaves, Radar and Wireless Communications (MIKON '10), pp. 1–4, June 2010. (2012) AWE Communications website. [Online]. Available: http://www.awe-communications.com/Manuals/ (2012) AWE Communications website. [Online]. Available: http://www.awecommunications.com/Download/DemoData/Databases_Material.zip Plapous, C.; Jun Cheng; Taillefer, E.; Hirata, Akifumi; Ohira, T., "Reactance domain MUSIC algorithm for electronically steerable parasitic array radiator," Antennas and Propagation, IEEE Transactions on , vol.52, no.12, pp.3257,3264, Dec. 2004 doi: 10.1109/TAP.2004.83643 An-min Huang; Qun Wan; Xin-Xin Chen; Wan-Lin Yang, "Enhanced Reactance-Domain ESPRIT Method for ESPAR Antenna," TENCON 2006. 2006 IEEE Region 10 Conference , vol., no., pp.1,3, 14-17 Nov. 2006doi: 10.1109/TENCON.2006.344036


Optimal RFID Beacons Configuration for Accurate Location Techniques within a Corridor Environment

Alain Moretto, Elizabeth Colin
Allianstic ESIGETEL, Villejuif, France
Alain.moretto / Elizabeth.colin @esigetel.fr

Marc Hayoz
Telecommunication Dpt., EIA-FR, Fribourg, Switzerland
[email protected]

Abstract—When using fingerprinting or tri/multilateration techniques, emitters must be deployed in the environment. A critical issue is where the emitters should be placed, yet too few studies on this topic have been carried out. This paper focuses on the placement of the emitting sources in order to increase the accuracy of the position estimation.

In this work, two different corridors are meshed with 433 MHz RFID beacons. A two-antenna RFID reader acquires the Received Signal Strength Information (RSSI) from all the beacons. A propagation model is deduced and a 3D trilateration algorithm is subsequently implemented to determine the position of a robot carrying the reader.

This work gives guidelines on the placement of the emitting sources in the context of a trilateration-based location architecture within a hallway, in order to increase both precision and accuracy.

We considered a corridor, a typical indoor environment dominated by multipath phenomena. Measurements were made in two corridors of similar length. A grid of twenty-four 433 MHz active RFID beacons (tags) was deployed in both cases. An RFID reader acquires the Received Signal Strength Information (RSSI).

Keywords: tri/multilateration; beacon placement; active RFID; RSSI

I. INTRODUCTION

Tri- and multilateration techniques are now very commonly used for location purposes [1-4]. The performance degradation of such techniques in indoor environments has been widely pointed out and studied [5-6]. Multipath phenomena, time-varying fading and dead spots particularly affect the distance estimation between emitters and receivers and, as a consequence, the position estimation accuracy. Some solutions have been suggested to reduce the impact of distance estimation errors. The first is to find an accurate propagation model which takes into account the geometry of the room, indoor environment specificities and material dielectric permittivity [7]. Another is to merge different localization techniques such as infrared or ultrasonic sensors, optical beacon recognition, odometry, gyroscopes or, more recently, light intensity measurements [8]; this is known as the multi-modal approach. Another (not exclusive) approach is to use statistical tests to identify and eliminate incorrect distance measurements, and to use Kalman or particle filters to increase the likelihood of finding an object at a given position given many other pieces of information. When it comes to wireless sensor networks (WSN), many researchers have proposed solutions based on specific node placement, studying spatial distribution and node density for both static and dynamic architecture configurations [9-12]. Yet, when beacons are non-communicating RF objects, too few studies can be found.

We focused on the mean and standard deviation of the localization errors as criteria of accuracy and precision. In order to optimize the accuracy of the position estimation of a robot along a corridor, we put under test all the possible quadruplets of beacons and found the 5 best ones. The paper is organized as follows. In section 2, we describe our positioning system: the initial beacon placement within the two environments is described and we elaborate on the choice of our propagation model. Section 3 describes the positioning technique we used as well as the measurement scenario itself. Results are presented and commented on in section 4.

II. THE ENVIRONMENT AND ITS MODELING

A. Environment Features
Our robot positioning system is based on UHF RFID technology: one 433 MHz reader with two dipole antennas is embedded on the robot, and active RFID tags are used as beacons to obtain the robot location. The considered environment consists of two corridors. The first one, 22.8 m x 2 m x 2.5 m, is located at the ESIGETEL engineering school (corridor E); the second one, 18.5 m x 1.5 m x 5 m, is located at the IBISC robotics laboratory (corridor I). Fig. 1 shows that the two corridor geometries are quite different: corridor E is wide with a classical ceiling height, whereas corridor I is narrow with a high ceiling. In both corridors, tags are placed on the walls at 1.30 m and 2.10 m height, and the distance between two tags is 1.5 m. Those heights respectively correspond to a doorknob and a standard door height. The layout is shown in Fig. 2.



Figure 1. Corridors I and E, on the left-hand side and the right-hand side respectively.

Figure 3. Received power measurement and propagation model of tag 2.

Obviously, this average behavior does not capture the multipath effects responsible for the dispersion of the received power measurements. This dispersion can bring about wrong distance estimates. In order to limit deep fading we implement a classical antenna diversity technique: we compare the received power on each antenna and record the highest level. Fig. 4 shows the distribution of the distance estimation error modulus for each tag model. 45% of the distance estimation errors due to the propagation model are below 2 m. In this specific configuration, tag number 17 leads to an estimation error below 1 m for 40% of all the distances estimated in the corridor.

Figure 2. Layout of the tags on the corridor

Corridor E has 30 tags and corridor I has 24 tags. Note that no tag has been placed on the ceiling because of the specific radiation pattern of the reader antenna: a null lies along the dipole axis, that is to say towards the ceiling and the floor. Our tags can be detected from as far as 40 meters in an indoor environment.

B. Environment Model
As the reader acquires RSSI, we need to choose a propagation model of the environment in order to estimate the reader-to-tag distance from the received power. The One-Slope model gives the average trend of the wave propagation behavior. Moreover, it is a simple and bijective relation between power and distance:

P_received = K / d^n    (1)

K and n are two constants to be determined. Measurements of RSSI power are made along each corridor to define the slope features (K, n), so that each tag has its own channel model (see Fig. 3). In this figure the measurements are in blue and the slope in red.
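Fitting (K, n) per tag and inverting the bijective model of Eq. (1) can be sketched as follows (a generic least-squares fit in log-log space, assuming NumPy; not the authors' implementation):

```python
import numpy as np

def fit_one_slope(distances, powers):
    """Fit the One-Slope model P = K / d^n (Eq. 1) per tag by linear
    regression in log-log space: log P = log K - n * log d."""
    slope, intercept = np.polyfit(np.log(distances), np.log(powers), 1)
    return np.exp(intercept), -slope  # (K, n)

def distance_from_power(p, K, n):
    """Invert the bijective model to estimate the reader-to-tag distance."""
    return (K / p) ** (1.0 / n)

# Synthetic check: data generated with K = 100, n = 2 is recovered.
d = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
K, n = fit_one_slope(d, 100.0 / d ** 2)
```

In practice the `powers` array would hold, at each point, the maximum RSSI over the two reader antennas, per the diversity scheme described earlier.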

Figure 4. Distance estimation error modulus distribution due to deviation from the One-Slope ideal model in corridor E.

III. POSITIONING

The One-Slope model is bijective. As a consequence, once the reader embedded on our robot receives the RSSI from each tag, the distance d_i to the i-th tag B_i can be estimated. The position of the robot can then be calculated using the trilateration method.



A. Trilateration
Let us assume that the electromagnetic power expands in an isotropic way (see Fig. 5). Under this condition, an iso-energy curve is a sphere. Let B_i(x_i, y_i, z_i) be the exact location of the i-th beacon in Cartesian coordinates. The equation of each sphere is:

(x − x_i)^2 + (y − y_i)^2 + (z − z_i)^2 = d_i^2    (2)

as Least Squares Estimation (LSE). In this work, we deliberately wanted to explore the performance of trilateration (and not multilateration) before any additional signal processing.

B. Measurement campaigns
At this step, the corridors are empty (no furniture and no people walking along) to avoid additional fading or extra scattering sources. Doors remain closed. At 433 MHz, the wavelength is about 70 cm. The Shannon spatial sampling theorem says that a measurement should be taken at most every half wavelength. We then have 61 acquisitions in corridor I and 75 acquisitions in corridor E. The received power from all the RFID beacons is measured by the robot every 30 cm and recorded. The maximum received power at each antenna is recorded, as explained previously.

C. Localization step

Every position is estimated through all C(n, 4) possible quadruplets of tags (n = 30 for corridor E, i.e. 27405 quadruplets, and n = 24 for corridor I, i.e. 10626 quadruplets). Positions estimated outside the corridor are not taken into account when computing the mean position and standard deviation.
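The exhaustive quadruplet search can be sketched as below; `eval_fn` is a hypothetical callback standing in for the per-quadruplet trilateration run over all robot positions:

```python
from itertools import combinations
from math import comb

def best_quadruplets(tag_ids, eval_fn, top=5):
    """Evaluate every 4-tag quadruplet and keep the `top` ones with the
    lowest mean localization error. `eval_fn(quad)` is a hypothetical
    callback that runs the trilateration over all robot positions and
    returns (mean_error, std_error), ignoring out-of-corridor estimates."""
    scored = [(eval_fn(q), q) for q in combinations(tag_ids, 4)]
    scored.sort(key=lambda s: s[0][0])  # sort by mean error
    return scored[:top]

# The quadruplet counts quoted in the text:
print(comb(30, 4), comb(24, 4))  # 27405 10626
```

Sorting on the standard deviation instead of the mean, or on a combination of both, gives the precision and accuracy-plus-precision rankings used in the Results section.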

Figure 5. Robot Positioning using Trilateration.

We need four quadratic equations, that is to say four distance estimations, to find the position (x, y, z) of the robot:

(x − x_1)^2 + (y − y_1)^2 + (z − z_1)^2 = d_1^2
(x − x_2)^2 + (y − y_2)^2 + (z − z_2)^2 = d_2^2
(x − x_3)^2 + (y − y_3)^2 + (z − z_3)^2 = d_3^2
(x − x_4)^2 + (y − y_4)^2 + (z − z_4)^2 = d_4^2    (3)

Expanding and regrouping terms in Eq. (3), we obtain:

A [x, y, z]^T = b    (4)

with

A = 2 [ x_2 − x_1   y_2 − y_1   z_2 − z_1 ;
        x_3 − x_1   y_3 − y_1   z_3 − z_1 ;
        x_4 − x_1   y_4 − y_1   z_4 − z_1 ]    (5)

and

b = [ d_1^2 − d_2^2 − (x_1^2 − x_2^2) − (y_1^2 − y_2^2) − (z_1^2 − z_2^2) ;
      d_1^2 − d_3^2 − (x_1^2 − x_3^2) − (y_1^2 − y_3^2) − (z_1^2 − z_3^2) ;
      d_1^2 − d_4^2 − (x_1^2 − x_4^2) − (y_1^2 − y_4^2) − (z_1^2 − z_4^2) ]    (6)

IV. RESULTS

A. Results analysis
Fig. 6 shows that 60% of the estimated positions fall outside the corridor. We focus on the remaining, meaningful estimations. Thanks to the antenna diversity technique, and without any filtering, 18% of the overall estimated mean errors are less than 2 m.
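The linearized system of Eqs. (4)-(6) above can be solved directly when A is invertible; a minimal sketch assuming NumPy (the beacon coordinates in the usage example are hypothetical):

```python
import numpy as np

def trilaterate(beacons, dists):
    """Solve the linearized system A p = b of Eqs. (4)-(6) for four beacons.
    beacons: (4, 3) coordinates B_i; dists: (4,) estimated distances d_i.
    Returns None when A is singular (e.g. coplanar beacons)."""
    B = np.asarray(beacons, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (B[1:] - B[0])                      # Eq. (5)
    b = (d[0] ** 2 - d[1:] ** 2
         - B[0, 0] ** 2 + B[1:, 0] ** 2
         - B[0, 1] ** 2 + B[1:, 1] ** 2
         - B[0, 2] ** 2 + B[1:, 2] ** 2)          # Eq. (6)
    try:
        return np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        return None

# Hypothetical beacon layout; exact distances recover the position (3, 4, 5).
B = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]]
d = np.linalg.norm(np.array(B) - np.array([3.0, 4.0, 5.0]), axis=1)
p = trilaterate(B, d)
```

With noisy distances a least-squares fit over the sphere equations, as discussed in the text, would replace the exact solve.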

Matrix A (Eq. 5) must be invertible to find a solution. This is clearly not the case if the four beacons are coplanar. A noisy estimation of the distances may also lead to a non-invertible system: with imperfect information the spheres may not intersect at a single point; in fact, the spheres may not intersect at all. That is why an estimate of the position is generally found by looking for the point that simultaneously minimizes the distance to all spheres, using mathematical techniques such

Figure 6. Mean error distribution in corridor E

We tried to find the "best" beacon placement using three criteria, over the whole corridor. The first criterion is accuracy; this performance criterion is given by the minimum value of the mean error (Table I). Another criterion is precision, which is directly linked to the error standard deviation (Table II). Finally,


we try to find the best quadruplets that offer good accuracy and good precision at the same time (Table III). If the accuracy of the position estimation is our main goal, Table I shows that we should not expect a mean error below 2.3 m. Yet, the choice of tags 1, 2, 3 and 23 leads to an accuracy equivalent to or better than that given by commercial solutions (around 3 m).

TABLE I.

WE LOOK FOR THE QUADRUPLETS THAT GIVE THE LOWEST MEAN ERROR

Corridor I                          Corridor E
Tag Id          Mean (m)  Std (m)   Tag Id          Mean (m)  Std (m)
1, 2, 3, 23       2.29      2.26    6, 24, 26, 27     3.15      2.15
7, 17, 18, 19     2.39      1.95    8, 11, 23, 24     3.27      2.45
9, 10, 16, 19     2.57      2.04    8, 23, 24, 25     3.28      2.85
7, 9, 18, 19      2.65      2.19    8, 9, 23, 24      3.29      2.67
9, 10, 16, 17     2.72      2.51    14, 16, 17, 18    3.29      2.02

If the precision of the position estimation is our main goal, Table II shows that reducing the dispersion of the position estimates in a corridor has a price: we should not expect an overall accuracy better than 3.7 m. This bound can be overcome with the help of positioning-technique fusion and/or likelihood-maximization algorithms. TABLE II.

WE LOOK FOR THE QUADRUPLETS THAT GIVE THE LOWEST STANDARD DEVIATION ERROR

Corridor I                          Corridor E
Tag Id          Mean (m)  Std (m)   Tag Id          Mean (m)  Std (m)
10, 11, 14, 20    3.71      1.32    7, 9, 10, 26      4.73      1.43
2, 8, 19, 20      4.25      1.35    11, 15, 17, 18    3.94      1.61
1, 3, 4, 24       4.98      1.42    3, 4, 22, 29      5.32      1.66
8, 9, 19, 21      6.43      1.57    3, 4, 25, 29      4.25      1.73
4, 12, 15, 16     5.45      1.62    6, 7, 9, 22       5.72      1.74

TABLE III. WE LOOK FOR THE QUADRUPLETS THAT GIVE THE LOWEST MEAN ERROR AND LOWEST STANDARD DEVIATION ERROR

Corridor I                          Corridor E
Tag Id          Mean (m)  Std (m)   Tag Id          Mean (m)  Std (m)
7, 17, 18, 19     2.39      1.95    13, 16, 17, 18    3.40      1.82
9, 11, 15, 16     2.75      1.63    6, 24, 26, 27     3.15      2.15
1, 2, 3, 23       2.29      2.26    14, 16, 17, 18    3.29      2.01
9, 10, 16, 19     2.57      2.04    8, 24, 26, 27     3.51      2.03
6, 7, 19, 23      3.06      1.76    11, 15, 17, 18    1.61      2.02

Finally, Table III gives the five best candidates when we have requirements on both precision and accuracy. A glance at the first three candidates allows us to choose a balance between both criteria: for instance, quadruplet (9, 11, 15, 16) offers a slightly less accurate positioning over corridor I than the first quadruplet (7, 17, 18, 19), but improves precision.

B. Beacon placement
To improve the visual representation of the previous results, we plotted the first three quadruplets, according to each of the three chosen criteria, in corridor E (see Fig. 7) and in corridor I (see Fig. 8). The first observation is that beacons should preferably be centered and grouped in order to increase accuracy. Most of the polygons meeting our requirements have three neighboring tags. Slightly spacing neighboring tags can increase accuracy; however, spacing tags as much as possible in order to cover the corridor is definitely not a good idea. Finally, the polygons (quadruplets) that offer a compromise between precision and accuracy should be placed at a corridor end.
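The paper does not state a closed-form rule for combining the two criteria; purely as an illustration, ranking the corridor-I quadruplets of Table III by the sum of mean error and standard deviation (an assumed score, not the authors') reproduces the table's ordering:

```python
# Corridor I rows of Table III: (tag quadruplet, mean error [m], std [m])
corridor_i = [
    ((7, 17, 18, 19), 2.39, 1.95),
    ((9, 11, 15, 16), 2.75, 1.63),
    ((1, 2, 3, 23),   2.29, 2.26),
    ((9, 10, 16, 19), 2.57, 2.04),
    ((6, 7, 19, 23),  3.06, 1.76),
]

def rank(rows):
    """Sort quadruplets by an assumed combined score: mean error + std."""
    return sorted(rows, key=lambda r: r[1] + r[2])

best = rank(corridor_i)[0][0]  # -> (7, 17, 18, 19) under this score
```

Any other monotone combination (weighted sum, Pareto front) could of course be substituted.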


Figure 7. Three best quadruplet placements in corridor E according to the accuracy criterion (upper figure), precision criterion (middle figure) or accuracy + precision criterion.

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

REFERENCES
[1] Sumana Das, Thiago Teixeira and Syed Faraz Hasan, "Research Issues related to Trilateration and Fingerprinting Methods: An Experimental Overview of Wi-Fi Positioning Systems", International Journal of Research in Wireless Systems (IJRWS), vol. 1, no. 1, November 2012.
[2] Federico Thomas and Lluís Ros, "Revisiting Trilateration for Robot Localization", IEEE Transactions on Robotics, vol. 21, no. 1, February 2005.
[3] Jun Wang, Paulo Urriza, Yuxing Han, and Danijela, "Weighted centroid localization algorithm: theoretical analysis and distributed implementation", IEEE Transactions on Wireless Communications, vol. 10, no. 10, October 2011.
[4] Frédéric Lassabe, "Géolocalisation et prédiction dans les réseaux Wi-Fi en intérieur", M.Sc. thesis, Université de Franche-Comté, April 2009.
[5] Mihail L. Sichitiu and Vaidyanathan Ramadurai, "Localization of Wireless Sensor Networks with a Mobile Beacon", IEEE Convention of Electrical and Electronics Engineers in Israel (IEEEI), 2004.
[6] Shashank Tadakamadla, "Indoor Local Positioning System for ZigBee, Based on RSSI", M.Sc. thesis, Department of Information Technology and Media, Mid Sweden University, 2006.
[7] A. Moretto, E. Colin, "New Indoor Propagation Channel Model for Location Purposes", Progress In Electromagnetics Research Symposium Proceedings, Taipei, March 25-28, 2013.
[8] Youngsuk Kim, Junho Hwan, Jisoo Lee and Myungsik Yoo, "Position estimation algorithm based on tracking of received light intensity for indoor visible light communication systems", Third International Conference on Ubiquitous and Future Networks (ICUFN), 2011.
[9] Randolph L. Moses, Dushyanth Krishnamurthy, and Robert Patterson, "A Self-Localization Method for Wireless Sensor Networks", EURASIP Journal on Applied Signal Processing, vol. 2003, no. 4, pp. 348-358, March 2003.
[10] Nirupama Bulusu, John Heidemann, Deborah Estrin, "Density Adaptive Algorithms for Beacon Placement in Wireless Sensor Networks", in Proceedings of IEEE ICDCS'01.
[11] Javier O. Roa, Antonio Ramón Jiménez, Fernando Seco Granja, José Carlos Prieto, Joao L. Ealo, "Optimal Placement of Sensors for Trilateration: Regular Lattices vs Meta-heuristic Solutions", in Computer Aided Systems Theory - EUROCAST 2007, 11th International Conference on Computer Aided Systems Theory, 2007.
[12] Guangjie Han, Deokjai Choi and Wontaek Lim, "Reference node placement and selection algorithm based on trilateration for indoor sensor networks", Wireless Communications and Mobile Computing, 2009.

Figure 8. Three best quadruplet placements in corridor I according to the accuracy criterion (upper figure), precision criterion (middle figure) or accuracy + precision criterion.

V. CONCLUSION

Beacon placement strongly affects the quality of 3D spatial localization in a trilateration context, especially in an indoor environment. Multipath phenomena make it difficult to estimate proper beacon-to-reader distances, which strongly affects the performance of this type of positioning system. In this work, we focused on finding a preconfigured beacon placement in order to increase the performance of our positioning system according to three criteria: accuracy (overall mean error), precision (error standard deviation) and the fusion of these two criteria. A beacon placement design is suggested, and the expected performance is given without any signal post-processing or further performance improvement.

Acknowledgment
We deeply thank Maxime Jubert for endless hours of measurements and for his collaboration during the validation step of our project at the IBISC robotics lab.



A Cooperative NLoS Identification and Positioning Approach in Wireless Networks

Zhoubing Xiong, Roberto Garello
Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
{zhoubing.xiong, garello}@polito.it

Abstract—Non-line-of-sight (NLoS) propagation of radio frequency (RF) signals has proven challenging for the localization of unknown nodes in wireless networks. In particular, NLoS range measurements can greatly affect the accuracy of a mobile node's position and in turn may cause the position estimation error to diverge. This paper analyzes the Cramér-Rao lower bound of cooperative localization in the presence of NLoS measurements and proposes a cooperative NLoS identification scheme as well as a cooperative positioning algorithm based on belief propagation. The proposed algorithm is fully distributed and does not require the prior NLoS state of range measurements. Simulation results show that the proposed algorithm is able to detect the state of each range measurement (NLoS or LoS) and to improve positioning accuracy in several NLoS conditions.

I. INTRODUCTION
Nowadays, localization-based applications, such as asset tracking, intruder detection, healthcare monitoring and so forth, are revolutionizing our life [1]. These applications often require very accurate position estimation, even in challenging environments (e.g., indoor and industrial environments). One aspect that affects the accuracy of radio-based localization systems is non-line-of-sight (NLoS) propagation, which makes range observations positively biased. In the literature, many approaches have been proposed to mitigate the large errors caused by NLoS links [2]-[8]. In [2]-[4], algorithms have been adopted to identify whether a range measurement is in NLoS or LoS status based on channel statistics. In [5] and [6] the authors proposed NLoS mitigation algorithms for vehicular applications, but they did not take into account cooperation among unknown mobile nodes. In [7] and [8], cooperation among unknown nodes is exploited, but the exact status of NLoS links is assumed to be known, which might be unrealistic. In cooperative positioning, apart from range measurements with respect to anchors (i.e., nodes whose positions are perfectly known), unknown nodes also perform range measurements among themselves and exchange aiding data, such as the estimated position and the estimated probability density function or the corresponding estimated uncertainty. The cooperation among mobile nodes is beneficial for network localization [1]: both positioning accuracy and availability are improved. One important aspect of cooperative localization is how to appropriately take into account the uncertainty of unknown nodes'

Francesco Sottile, Maurizio A. Spirito
Pervasive Technologies, Istituto Superiore Mario Boella, Turin, Italy
{sottile, spirito}@ismb.it

positions. This task has already been investigated, mostly in line-of-sight (LoS) conditions [1], where ranging errors are relatively small and the corresponding uncertainty can be well modeled. However, in NLoS conditions, ranging errors are much larger and more irregular; thus, cooperative localization processes may diverge if the NLoS states associated to range measurements are not identified. This paper focuses on cooperative localization in NLoS scenarios and adopts a cooperative approach based on a modified version of the belief propagation (BP) algorithm [1], [7]. The proposed algorithm estimates mobile positions and the status of range measurements in parallel. Moreover, it analyzes the positioning bounds in NLoS environments and uses them to check the results of position estimation and NLoS identification. The rest of this paper is organized as follows. Sec. II introduces the measurement models and derives the cooperative Cramér-Rao lower bound (CRLB) of the positioning error in NLoS scenarios. Sec. III describes the proposed cooperative NLoS detection and positioning algorithm based on the belief propagation (BP) approach [1], [7]. Finally, Sec. IV presents simulation results and Sec. V draws conclusions.

II. MEASUREMENTS MODELING AND CRLB
A. Measurements Models
Concerning range measurement models, in this work the models presented in [7] have been adopted, as they are extracted from experimental measurements performed with UWB modules [4].
1) LoS Model: Range measurements in LoS condition are assumed to be Gaussian distributed:

r̃ = d + n_los,   (1)

where d is the exact distance between the two nodes involved in the measurement, and n_los is Gaussian noise, n_los ~ N(0, σ²), with zero mean and standard deviation σ = 0.25 m.
2) NLoS Model: Range measurements in NLoS condition are modeled as:


r̃ = d + n_nlos,   (2)

where n_nlos is the measurement noise, assumed to be exponentially distributed, p_n_nlos(x) = λ exp(−λx) for x ≥ 0, with rate parameter λ = 0.38 m⁻¹.
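The two ranging models (1)-(2), with the stated parameters σ = 0.25 m and λ = 0.38 m⁻¹, can be simulated and evaluated as follows (a sketch; the function names are ours, and `likelihood` anticipates the state-conditioned form used later in Eq. (4)):

```python
import math
import random

SIGMA = 0.25    # LoS noise standard deviation [m]
LAMBDA = 0.38   # NLoS exponential rate parameter [1/m]

def simulate_range(d, nlos, rng=random):
    """Draw one range measurement r~ from model (1) or (2)."""
    if nlos:
        return d + rng.expovariate(LAMBDA)  # NLoS bias is always positive
    return d + rng.gauss(0.0, SIGMA)

def likelihood(r, d, s):
    """State-conditioned likelihood p(r~ | d, s): Gaussian (s=0) or exponential (s=1)."""
    if s == 1:
        return LAMBDA * math.exp(-LAMBDA * (r - d)) if r >= d else 0.0
    return math.exp(-(r - d) ** 2 / (2 * SIGMA**2)) / (math.sqrt(2 * math.pi) * SIGMA)
```

Note the asymmetry that drives the whole identification problem: the NLoS model can only lengthen the measured range, never shorten it.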

B. State Definition
Let s_n→m be the state associated to the range measurement r̃_n→m from neighbor n to mobile m. The state s_n→m takes value 0 if the corresponding range measurement is performed in LoS condition and 1 in NLoS condition; consequently, P(s_n→m = 0) + P(s_n→m = 1) = 1. Based on the above definitions, and assuming that the states associated to range measurements are not known a priori, the likelihood function of a range measurement can be expressed as the weighted sum over the state:

p(r̃_n→m | x_m, x_n) = Σ_{i=0}^{1} P(s_n→m = i) p(r̃_n→m | x_m, x_n, s_n→m = i),   (3)

where x_m = [x_m, y_m] is the position of mobile m and x_n = [x_n, y_n] the position of neighbor n. Note that the state-conditioned likelihood is either Gaussian or exponential, depending on the link condition:

p(r̃_n→m | x_m, x_n, s_n→m) =
  (1/√(2πσ²)) exp(−(r̃_n→m − ‖x_n − x_m‖)² / (2σ²)),   s_n→m = 0
  λ exp(−λ(r̃_n→m − ‖x_n − x_m‖)),                     s_n→m = 1
(4)

where ‖·‖ denotes the Euclidean distance. Some NLoS identification techniques presented in the literature are based on the processing of the received signal [3], [4], but they are too complex to be implemented on cheap devices. Since range measurements are correlated with the position of mobile m, it is efficient to carry out the mobile position estimation and the NLoS identification of all the involved range measurements in parallel, as presented in Sec. III.

C. Cramér-Rao Lower Bound
The Cramér-Rao lower bound (CRLB) expresses a lower bound on the variance of any unbiased estimator. In localization, it indicates the maximum achievable positioning accuracy in a given scenario. It can also be used during the on-line estimation process to select the closest set of neighbors able to meet the required positioning accuracy while minimizing the energy spent for ranging [9]; following this approach, the transmission power is adaptively adjusted to reach the selected neighbors. In cooperative localization [10], the available set of range measurements can be written as:

Z = { {r̃_a→m}_{a∈A_m}, {r̃_n→m}_{n∈M_m} }_{m∈M}.   (5)

Let A and M denote the full sets of anchors and mobiles, respectively, in the network. In (5), A_m ⊆ A and M_m ⊂ M are the sets of anchors and mobiles, respectively, connected to m. The corresponding log-likelihood function is given by:

log p(Z|X) = Σ_{m∈M} Σ_{a∈A_m} log p(r̃_a→m | x_m, x_a) + Σ_{m∈M} Σ_{n∈M_m} log p(r̃_n→m | x_m, x_n),   (6)

where X is the set of mobiles' positions, i.e. X = [x_1, x_2, ..., x_M], with M the cardinality of M. The CRLB is obtained by inverting the Fisher information matrix (FIM), given by the negative expectation of the second-order derivatives of the log-likelihood function:

F = −E[ ∂² log p(Z|X) / ∂X² ].   (7)

From (6) and (7), the global FIM can be decomposed as the sum of two matrices: the first takes into account links between mobiles and anchors, while the second considers links among mobiles (see [10] for more details):

F = F^anch + F^mob.   (8)

In particular, F^anch is a block-diagonal matrix whose values depend on the anchor measurements, (9). On the contrary, F^mob is not block diagonal, as it depends on the partial derivatives among mobiles, (10):

F^anch = diag( F_1^anch, F_2^anch, ..., F_M^anch ),   (9)

F^mob = [ F_1^mob   K_12      ...   K_1M
          K_21      F_2^mob   ...   K_2M
          ...       ...       ...   ...
          K_M1      K_M2      ...   F_M^mob ],   (10)

where K_mn is a zero matrix if there is no measurement between n and m. Considering that a generic range measurement from mobile m to an anchor a can be performed either in LoS or NLoS condition, the set of anchors connected to mobile m, A_m, can be subdivided into two subsets: the LoS subset, denoted A_m^los, and the NLoS subset, denoted A_m^nlos. Therefore, the matrix F_m^anch can be expressed as the sum of two matrices associated to these subsets:

F_m^anch = F_m^anch,los + F_m^anch,nlos,   (11)

where F_m^anch,los and F_m^anch,nlos are given by:

F_m^anch,los = Σ_{a∈A_m^los} (1/(σ² d_am²)) [ Δx_am²         Δx_am Δy_am
                                              Δy_am Δx_am    Δy_am²      ],   (12)

F_m^anch,nlos = Σ_{a∈A_m^nlos} (λ/d_am³) [ −Δy_am²        Δx_am Δy_am
                                            Δy_am Δx_am    −Δx_am²     ],   (13)
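The anchor part of the FIM in Eqs. (11)-(13), together with the bound of Eq. (18), can be sketched numerically as follows (illustrative only: the cooperative blocks of Eqs. (15)-(17) are omitted and the function names are ours):

```python
import numpy as np

SIGMA, LAM = 0.25, 0.38  # ranging model parameters from Sec. II-A

def anchor_fim(xm, anchors, states):
    """2x2 anchor FIM for one mobile: F = F_los + F_nlos, Eqs. (11)-(13)."""
    F = np.zeros((2, 2))
    for xa, s in zip(np.atleast_2d(np.asarray(anchors, float)), states):
        dx, dy = xa[0] - xm[0], xa[1] - xm[1]
        d = np.hypot(dx, dy)
        if s == 0:  # LoS term, Eq. (12)
            F += np.array([[dx * dx, dx * dy],
                           [dy * dx, dy * dy]]) / (SIGMA**2 * d**2)
        else:       # NLoS term, Eq. (13): note the negative diagonal
            F += (LAM / d**3) * np.array([[-dy * dy, dx * dy],
                                          [dy * dx, -dx * dx]])
    return F

def crlb(F):
    """Position error bound of Eq. (18): sqrt of the trace of the inverse FIM."""
    J = np.linalg.inv(F)
    return float(np.sqrt(J[0, 0] + J[1, 1]))
```

Adding an NLoS anchor lowers the Fisher information, so the bound grows.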

where Δx_am and Δy_am are the differences of the x and y components, respectively, between nodes a and m, i.e. Δx_am = x_a − x_m, Δy_am = y_a − y_m, while d_am is the Euclidean distance:

d_am = ‖x_a − x_m‖ = √(Δx_am² + Δy_am²).   (14)

Note that (12) is obtained by second-order differentiation of the Gaussian distribution, where σ is the noise standard deviation, while (13) is the second-order derivative of the exponential distribution, where λ is the rate parameter. In (13) the diagonal elements carry a negative sign, which means that NLoS measurements decrease the Fisher information and have a negative effect on the positioning performance. Following the same approach, the matrix F_m^mob that takes into account the connections among mobiles is given by:

F_m^mob = F_m^mob,los + F_m^mob,nlos,   (15)

F_m^mob,los = Σ_{n∈M_m^los} (1/(σ² d_nm²)) [ Δx_nm²         Δx_nm Δy_nm
                                             Δy_nm Δx_nm    Δy_nm²      ],   (16)

F_m^mob,nlos = Σ_{n∈M_m^nlos} (λ/d_nm³) [ −Δy_nm²        Δx_nm Δy_nm
                                           Δy_nm Δx_nm    −Δx_nm²     ].   (17)

Concerning the correlation block K_mn, if there is a measurement between nodes n and m, it can be calculated as:

K_mn = −(1/(σ² d_nm²)) [ Δx_nm²         Δx_nm Δy_nm
                         Δy_nm Δx_nm    Δy_nm²      ],   s_n→m = 0,

K_mn = −(λ/d_nm³) [ −Δy_nm²        Δx_nm Δy_nm
                     Δy_nm Δx_nm    −Δx_nm²     ],   s_n→m = 1.

Let J be the inverse of the FIM and J_m the 2×2 block related to mobile m; then the CRLB for mobile m can be calculated as:

Ω_m ≜ √( J_m(1,1) + J_m(2,2) ).   (18)

As can be observed from (13) and (17), the presence of NLoS measurements decreases the Fisher information; as a consequence, the variance of the position error increases. In fact, the more severe the NLoS condition, the larger the localization error. This effect will be shown in the simulation results.

III. MESSAGE PASSING ALGORITHM
Since there is no prior information about the state of each range measurement, the basic idea would be to use the range measurements to infer first the mobile's position, then the state of the range measurements. Alternatively, in order to improve positioning accuracy, both the mobiles' positions and the links' states can be estimated in parallel through some iterations of the BP algorithm. However, this approach has some drawbacks. One is the network traffic generated by the cooperation packets (note that the size of the messages depends on the number of particles used to approximate the distributions). Another is the computational effort required to calculate the integral of the neighbor's belief. The proposed algorithm assumes that the belief of the mobile's position is Gaussian distributed; thus the mobile just needs to send its neighbors the estimated position and the corresponding uncertainty. This approach, known as expectation propagation (EP) [11], is an approximation of the BP algorithm. Based on this assumption, we propose a cooperative NLoS identification and positioning algorithm (CIDP). In the following sections, the message passing for a generic mobile m is introduced.

A. Incoming Messages
The localization approach is realized by the factor graph of Fig. 1. In particular, the joint posterior distribution can be factorized by messages from anchor nodes and mobile neighbors as follows.

Fig. 1. Factor graph for cooperative positioning.

1) Message from Anchor: The incoming message from an anchor a ∈ A_m is proportional to the integral of the product of the likelihood function and the belief of the anchor, which is a Dirac delta function centered on x_a, i.e. b(x_a) = δ(x − x_a):

μ_a→m ∝ ∫ p(r̃_a→m | x_m, x_a) b(x_a) dx_a = p(r̃_a→m | x_m, x_a).   (19)

When referring to more than one state, the likelihood function can be calculated by using (3); thus p(r̃_a→m | x_m, x_a) becomes:

p(r̃_a→m | x_m, x_a) = Σ_{i=0}^{1} P(s_a→m = i) p(r̃_a→m | x_m, x_a, s_a→m = i).   (20)

2) Message from Mobile Neighbor: Similarly, the incoming message from a mobile neighbor can be expressed as:

μ_n→m ∝ ∫ p(r̃_n→m | x_m, x_n) b(x_n) dx_n.   (21)

Since the mobile neighbor's position x_n has a certain uncertainty, the belief b(x_n) is not a Dirac delta function. In principle, it can be represented by the distribution of the samples, but the calculation in (21) is then too complex to be performed. To simplify it, some approaches presented in [7] assume that b(x_n) is Gaussian. In this paper, to further reduce the complexity, the belief of the mobile neighbor n is approximated as a Dirac delta function


2013 International Conference on Indoor Positioning and Indoor Navigation, 28 − 31st October 2013 (i.e. as if it is an anchor, b(xn ) ≈ δ(x−ˆ xn )). To compensate this important approximation, the position uncertainty associated to neighbor n is considered as an additional noise for the range measurement r˜n→m . More specifically, the variance associated to ranging (given by σ 2 for LoS measurements and 1/λ2 for NLoS measurements) are increased by the position uncertainty of the mobile’s neighbor. For simplicity, this uncertainty is calculated as the trace of the estimated covariance matrix [12], i.e., trace(Pn ). As a consequence, the new parameters σnm and λnm to be used in the likelihood function are given by (22) and (23), respectively. p (22) σnm = σ 2 + trace(Pn ), λ λnm = p . (23) 1 + λ2 trace(Pn ) In conclusion, by using the above approximation, the incoming message is given by: µn→m ∝ p(˜ rn→m |xm , x ˆn ),

(24)

where p(˜ rn→m |xm , x ˆn ) is the likelihood function evaluated by using the new modified parameters σnm and λnm that take into account the uncertainty of mobile neighbor n.
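The parameter inflation of Eqs. (22)-(23) is a one-liner each; a minimal sketch with the model parameters of Sec. II (the function name is ours):

```python
import math

SIGMA, LAM = 0.25, 0.38  # ranging model parameters from Sec. II

def inflated_params(trace_Pn):
    """Eqs. (22)-(23): absorb a neighbor's position uncertainty
    (the trace of its covariance estimate P_n) into the ranging noise."""
    sigma_nm = math.sqrt(SIGMA**2 + trace_Pn)
    lam_nm = LAM / math.sqrt(1.0 + LAM**2 * trace_Pn)
    return sigma_nm, lam_nm
```

A perfectly known neighbor (trace 0) leaves the parameters unchanged; growing uncertainty widens the Gaussian and flattens the exponential.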

B. Position Estimate
When all the messages from the anchors and the mobile's neighbors are available, the mobile node can calculate its belief b(x_m), which is proportional to the product of all the incoming messages and the a priori pdf p(x_m):

b(x_m) ∝ p(x_m) Π_{a∈A_m} μ_a→m(x_m) × Π_{n∈M_m} μ_n→m(x_m),   (25)

where μ_a→m(x_m) and μ_n→m(x_m) are calculated by using (19) and (21), respectively. After that, the estimated position is calculated as the average value of the belief distribution, while the estimated covariance matrix P_m is calculated from the set of particles as reported in [12]. Therefore, the belief is approximated with a Gaussian distribution, and the related parameters, i.e. the mean value and the trace of P_m, are broadcast to its neighbors.

C. Outgoing Messages
The outgoing message is simply proportional to the belief divided by the incoming message from a specific factor node.
1) Messages to Anchor: The message from mobile to anchor node is:

μ_m→a(x_m) ∝ b(x_m) / μ_a→m(x_m).   (26)

The state probability is defined as the integral of the product of the likelihood and the message from the mobile:

P(s_a→m) = ∫ p(r̃_a→m | x_m, x_a, s_a→m) μ_m→a(x_m) dx_m.   (27)

By applying the assumption that b(x_m) is a delta function, the previous equation can be simplified as:

P(s_a→m) ≈ p(r̃_a→m | x̂_m, x_a, s_a→m) / μ_a→m(x̂_m).   (28)

Since the state probabilities of one range measurement must be normalized, the LoS or NLoS probability can be further simplified as:

P(s_a→m) = p(r̃_a→m | x̂_m, x_a, s_a→m) / Σ_{i=0}^{1} p(r̃_a→m | x̂_m, x_a, s_a→m = i).   (29)

Based on the previous assumption, the message coming from the mobile is not necessary to decide the range measurement state: only the estimated position and the corresponding trace are needed to compute the probability of NLoS.
2) Messages to Mobile: The outgoing message to a mobile, μ_m→n, is similar to the one to an anchor, but it cancels out when calculating the NLoS state; therefore it is not computed in the implementation of the algorithm. Similarly, the LoS or NLoS probability is given by:

P(s_n→m) = p(r̃_n→m | x̂_m, x̂_n, s_n→m) / Σ_{i=0}^{1} p(r̃_n→m | x̂_m, x̂_n, s_n→m = i).   (30)
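Once the beliefs are collapsed to point estimates, the normalized posteriors (29)-(30) reduce to a simple likelihood ratio; a minimal sketch (function names and sample values are ours):

```python
import math

SIGMA, LAM = 0.25, 0.38  # ranging model parameters from Sec. II

def lik(r, d, s):
    """State-conditioned likelihood p(r~ | d, s) of Eq. (4)."""
    if s == 1:
        return LAM * math.exp(-LAM * (r - d)) if r >= d else 0.0
    return math.exp(-(r - d) ** 2 / (2 * SIGMA**2)) / (math.sqrt(2 * math.pi) * SIGMA)

def nlos_posterior(r, d_hat):
    """Normalized NLoS probability, Eqs. (29)-(30), with the belief
    collapsed to the point-estimate distance d_hat."""
    l0, l1 = lik(r, d_hat, 0), lik(r, d_hat, 1)
    return l1 / (l0 + l1)
```

Under these models a measurement 1.2 m beyond the estimated distance is almost surely NLoS, while one matching the estimate is more likely LoS.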

Finally, a hard decision is made when the algorithm converges: for a given range measurement, if P(s_n→m) is larger than 0.5, it is assumed to be in NLoS state; otherwise it is in LoS state. Approximating the belief of the mobile's position with a Dirac delta function may result in less accurate position estimates in NLoS conditions; however, the computational complexity and the network traffic are greatly reduced, making the proposed algorithm suitable for distributed localization and feasible to implement on mobile devices with low computational capability.

IV. SIMULATION RESULTS
The performance of the proposed CIDP algorithm is tested by Monte Carlo (MC) simulations. The simulated scenario is a typical office environment of size 20×20 meters, and the wireless network is a small-scale network composed of 15 nodes. Five of them are anchors, deployed at the four corners and in the center of the environment in order to provide a good geometry for localization (see Fig. 2). The remaining ten nodes are static unknown nodes whose positions are randomly selected in each run of the simulation. The radio connectivity range is 20 meters. Since the NLoS condition is generated by obstacles, symmetric links are considered between unknown nodes: e.g., if r̃_n→m is in NLoS state then r̃_m→n is also in NLoS state, although the two range measurements differ due to the measurement noise.

Fig. 2. Simulation environment. Blue squares are anchor nodes and are fixed. Red dots are unknown nodes and differ for each MC run.

Three positioning algorithms have been tested and compared. The first is the sum-product algorithm over a wireless network (SPAWN) proposed in [1], a generic belief propagation algorithm for localization; it is supposed to have no knowledge of the NLoS states and is denoted SPAWN-NLoS-U. The second is the SPAWN proposed in [7], which is supposed to perfectly know the NLoS states, denoted SPAWN-NLoS-K. The last is the proposed CIDP algorithm. 1000 MC runs have been performed for each chosen NLoS probability, and the root mean square of the positioning errors (RMSE) has been calculated for performance comparison. Fig. 3 shows the positioning performance of the above-mentioned algorithms and the corresponding CRLB. As can be observed, the presence of NLoS conditions greatly increases the positioning errors; if this is not accounted for, the standard belief propagation algorithm can diverge. The proposed CIDP algorithm is about 0.5 meter worse than SPAWN-NLoS-K, but it does not require knowing whether a range measurement is in LoS or NLoS condition. Furthermore, the estimated CRLB, which uses the estimated positions and the estimated NLoS statuses, bounds the positioning errors well. Hence this bound can provide insight into the achievable positioning accuracy and can be used in an energy-efficient positioning algorithm as in [9].

Fig. 3. Positioning performance.

The performance of NLoS identification is presented in Fig. 4 and Fig. 5. In particular, Fig. 4 shows the detection error rate for each range measurement and Fig. 5 shows the estimated NLoS probability of all the measurements. The detection performance for measurements from anchors and from mobile neighbors shows similar behavior. The error rate is highest when the NLoS probability is around 0.6, which indicates that the proposed algorithm has a high miss-detection rate when NLoS and LoS links are almost equally distributed. At low NLoS probability, the detection performance for mobile measurements is slightly better than for anchor measurements, due to the simulation condition of symmetric links. At high NLoS probability, the detection performance for anchor measurements is better, because of the increased uncertainty of the neighbors' positions caused by bad NLoS range measurements.

Fig. 4. State detection error rate.

Fig. 5. NLoS probability estimate.

As can be observed from Fig. 5, the estimated NLoS probability is close to the real probability. When the NLoS probability is smaller than 0.6, the proposed algorithm overestimates it; when it is larger than 0.6, the algorithm underestimates it. This is because the detection is based on position estimates: if there are enough LoS range measurements, the range measurements with large errors will be identified as NLoS; but if there are not enough


LoS range measurements, the range measurements with small errors will be identified as LoS. When the NLoS probability goes up to 0.8, the detection errors become larger, which means the LoS range measurements are no longer sufficient to localize the nodes. In full NLoS conditions, the error on the probability detection is around 0.17. The reason is that the range error may not be large even in NLoS condition: e.g., the probability of an NLoS ranging error smaller than 0.5 meter is about 0.17, which coincides with the estimated probability error in full NLoS conditions.

V. CONCLUSIONS AND FUTURE WORK
This paper analyzed the CRLB of cooperative localization in the presence of NLoS range measurements and proposed a cooperative NLoS identification and positioning algorithm. The proposed algorithm is fully distributed, with low complexity and low network traffic, and does not require prior information on the NLoS states. Simulation results showed that the proposed algorithm is able to detect NLoS range measurements and to improve positioning accuracy in NLoS conditions. However, there is a large gap between the existing NLoS positioning algorithms and the CRLB; future work will address narrowing this gap. Moreover, an energy-efficient positioning algorithm for NLoS environments can be developed based on the proposed CRLB formulae, as in [9].

REFERENCES
[1] H. Wymeersch, J. Lien, and M. Z. Win, "Cooperative Localization in Wireless Networks," Proceedings of the IEEE, vol. 97, no. 2, pp. 427-450, Feb. 2009.
[2] S. Gezici, H. Kobayashi, and H. V. Poor, "Nonparametric non-line-of-sight identification," in Vehicular Technology Conference, VTC 2003-Fall, vol. 4, Oct. 2003, pp. 2544-2548.
[3] I. Guvenc, C.-C. Chong, F. Watanabe, and H. Inamura, "NLOS Identification and Weighted Least-Squares Localization for UWB Systems Using Multipath Channel Statistics," EURASIP Journal on Advances in Signal Processing, no. 1, 2008.
[4] H. Wymeersch, S. Marano, W. M. Gifford, and M. Z. Win, "A Machine Learning Approach to Ranging Error Mitigation for UWB Localization," IEEE Transactions on Communications, vol. 60, no. 6, pp. 1719-1728, June 2012.
[5] K. Yu and Y. J. Guo, "Improved Positioning Algorithms for Nonline-of-Sight Environments," IEEE Transactions on Vehicular Technology, vol. 57, no. 4, pp. 2342-2353, July 2008.
[6] H. Liu, F. Chan, and H. C. So, "Non-Line-of-Sight Mobile Positioning Using Factor Graphs," IEEE Transactions on Vehicular Technology, vol. 58, no. 9, pp. 5279-5283, Nov. 2009.
[7] S. Van de Velde, H. Wymeersch, and H. Steendam, "Comparison of message passing algorithms for cooperative localization under NLoS conditions," in 9th Workshop on Positioning, Navigation and Communication (WPNC), Mar. 2012, pp. 1-6.
[8] R. M. Vaghefi and R. M. Buehrer, "Cooperative sensor localization with NLoS mitigation using semidefinite programming," in 9th Workshop on Positioning, Navigation and Communication (WPNC), Mar. 2012, pp. 13-18.
[9] M. Dai, F. Sottile, M. A. Spirito, and R. Garello, "An energy efficient tracking algorithm in UWB-based sensor networks," in IEEE 8th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Oct. 2012, pp. 173-178.
[10] F. Penna, M. A. Caceres, and H. Wymeersch, "Cramér-Rao Bound for Hybrid GNSS-Terrestrial Cooperative Positioning," IEEE Communications Letters, vol. 14, no. 11, pp. 1005-1007, Nov. 2010.
[11] T. Minka, "Expectation propagation for approximate Bayesian inference," in 17th Conference in Uncertainty in Artificial Intelligence, Aug. 2001, pp. 362-369.
[12] F. Sottile, H. Wymeersch, M. A. Caceres, and M. A. Spirito, "Hybrid GNSS-terrestrial cooperative positioning based on particle filter," in IEEE Global Telecommunications Conference (GLOBECOM), Dec. 2011, pp. 1-5.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28th-31st October 2013

Visual Landmark Based Positioning

Hui Chao, Saumitra Das, Eric Holm, Raghuraman Krishnamoorthi, Ayman Naguib
Qualcomm Research, CA, USA
(huichao, saumitra, eholm, raghuram, anaguib)@qti.qualcomm.com

Abstract—In this paper, we discuss a system and algorithms for using storefront logo images as landmark targets for indoor localization. The system searches for known storefront logo imagery as a user pans a smartphone camera or names visible storefronts. As one or more targets are recognized, the location of the user may be estimated by combining image matching results with the visibility information for each storefront on the map. We discuss algorithmic approaches to deal with some of the unique characteristics of storefront logo matching. We discuss algorithms that define visibility information for landmarks in an indoor environment. Finally, we present positioning experiment results in a real shopping mall with our end-to-end positioning system on a phone. Experiments with the system have demonstrated the viability of this approach for a good indoor positioning experience.

Keywords: image based positioning; computer vision; visual landmarks

I. INTRODUCTION

Positioning in a large indoor shopping mall still poses challenges for existing technologies to deliver precise, reliable, and cost-effective location information. In an indoor environment, a mobile device may be unable to reliably receive GPS signals for position estimation. Various techniques have been proposed to obtain a position fix using ultrasonic, infrared, magnetic, or radio sensors [1]. However, all of these technologies may be limited in their utility due to the lack of infrastructure that can provide consistent, reliable, and robust signal transmitters in some venues. Given the readily available optical sensors on mobile devices, image based positioning, which requires no change of indoor infrastructure, could be an alternative or complementary approach to these indoor positioning techniques. Image based positioning has become a popular area of research in recent years due to the advent of smartphones with good connectivity, computing, and imaging capabilities. In this approach, environmental components are analyzed and the results are matched against pre-captured and stored data. Image or vision based positioning methods may be categorized into two groups. In the first approach, a user's location may be recognized by simply taking a photo of the nearest street corner or storefront and finding the most similar image in a database with known locations [2-6]. This approach recognizes a location and assumes the camera or the user is located in close vicinity of the structure that was captured in the image. A second approach is similar to Simultaneous Localization and Mapping (SLAM) used in robotic vision. As a robot moves in an unknown environment, it builds up a map containing image

features and their precise 3D location [7-8]. This 3D feature map is then used to determine the location with accurate pose estimation by matching previously recorded image features in the database with ones in the current view. Both approaches require a database of images, or of image features obtained from previously captured images, with registered locations on a 2D or 3D map. The map is typically created by traversal of the environment. Although previous approaches have proven to be quite effective, attempting to directly match the current scene with previously captured images in a shopping mall may pose challenges of deployment and maintenance. This is due to the dynamic nature of shopping venues, where the decoration of the environment may often change with seasons or events. Implementing such a solution would require frequent updates of the reference images to ensure data relevance. During our investigation, we observed that even in these dynamic and noisy environments, a storefront logo that represents the brand signature of a retailer exhibits some key properties that allow for easier deployment, and could provide a more robust solution for visual landmark based positioning. First, a logo is typically visually consistent across different venues, and is unique and stable over time. Second, publicly available database samples make the harvesting of reference images easier and more effective. Third, the information about a store, such as its name, location, and entrances, is often provided on the venue map. A storefront logo image is often placed at the entrance, oriented parallel or perpendicular to the wall of the storefront, as shown in figure 1. Therefore, the location of a storefront image can easily be registered on a 2D map. Although brand images are robust landmarks for positioning in a shopping mall, they also pose some challenges.
First, without a detailed survey of the environment, the exact dimensions of a storefront logo image may not be known. Second, when the user pans a camera to capture a storefront scene, a logo may occupy only a small region of the image; i.e., relevant feature points may be concentrated in a relatively small area instead of being evenly distributed over the whole image. Furthermore, logo images often consist of high-contrast edges without much texture variety or detail. Logo images often have repetitive patterns and large variations in illumination due to special lighting placed behind and around the logo. All of these make detection and pose estimation very difficult. To overcome some of these challenges, a method may be used that combines landmark information with trilateration for location estimation. On the map, regions from which a storefront is visible are derived from the topological 2D layout of the architectural structure. This region is called the visibility map. It provides the baseline information on the possible locations of the user on the map, given that a landmark is visible. As more landmarks are recognized, the possible locations of the user are refined, and the position estimate becomes more accurate.

Figure 1. Example of a typical indoor shopping mall where brand logo images are placed on the storefronts.

In the following sections, we first discuss possible algorithmic approaches to deal with some of the unique characteristics of storefront logo matching. We then describe an algorithm that computes the visibility map for a storefront on a 2D map. Finally, we present positioning experiment results in a real shopping mall with our end-to-end positioning system on a phone.

II. LANDMARK RECOGNITION

A storefront logo provides a unique and stable landmark for positioning in a shopping mall. It comes from a small set of brand images, which may be available from various sources including the website of the retailer. Stores of the same retailer in different geographic locations often have a similar appearance for the purpose of consistent branding. These factors make the harvesting of reference landmarks easier and more effective. A typical landmark database may consist of image features extracted from logos that represent 20 to 100 stores in a typical indoor shopping mall. Brand image recognition has many useful applications. Various methods have been proposed using local and global feature matching [9-12]. However, it continues to be a challenging area; a combination of different approaches may be needed to develop a robust detector.

A. Image matching with SIFT-like features
In our study, an object detector suitable for wide-baseline matching was used [13, 14]. This detector looks for scale-invariant key points and performs a sliding-window search to determine the presence and pose of a storefront logo from a database of reference logo images. Figure 2 shows two matching examples. In our experiment, with 24 reference logo images and 400 test images of varying quality, the average recognition rate was 84%. However, as the database size increases, the false positive rate increases: with 53 reference logo images and 1000 test images, the recognition rate degraded to 62%. The experiment was performed on a PC with exhaustive search for matching. A real-time mobile solution for storefront logo recognition will require a faster matching algorithm and further improvements in detection rate, achieved by creating more robust feature points from training data and selecting more discriminative feature points when computing and memory resources are limited. Brand logo detection can be regarded as a special case of object detection. Unlike natural scene images, which are rich in texture details, brand images often lack texture variation and therefore provide fewer key feature points for matching. In addition, in this use case, reference and test images may be acquired with different resolution, size, quality, and illumination conditions. These factors combine to make logo detection more challenging.

Figure 2. Image matching results using local invariant feature matching. Corresponding key points are connected with green lines. In the bottom example, matching key points were found even with occlusion.
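As an illustration of the key pruning step used in SIFT-like descriptor matching, the sketch below implements Lowe's nearest-neighbor ratio test. This is not the paper's actual detector; the descriptors are synthetic stand-ins (noisy copies of reference descriptors plus random distractors) purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
# 50 hypothetical 128-dim reference logo descriptors.
ref = rng.normal(size=(50, 128)).astype(np.float32)
# Query image: noisy views of 10 reference descriptors + 40 distractors.
query = np.vstack([ref[:10] + 0.05 * rng.normal(size=(10, 128)),
                   rng.normal(size=(40, 128))]).astype(np.float32)

def ratio_test_matches(query_desc, ref_desc, ratio=0.8):
    """Keep query->reference matches whose nearest distance is clearly
    better than the second nearest (Lowe's ratio criterion)."""
    matches = []
    for qi, q in enumerate(query_desc):
        d = np.linalg.norm(ref_desc - q, axis=1)
        i1, i2 = np.argsort(d)[:2]
        if d[i1] < ratio * d[i2]:
            matches.append((qi, int(i1)))
    return matches

m = ratio_test_matches(query, ref)
print(m[:10])  # the 10 planted correspondences survive the ratio test
```

In a real pipeline the same test would be applied to SIFT (or FastCV) descriptors extracted from the camera frame and the reference logo database; ambiguous matches on repetitive logo patterns are exactly what the ratio test discards.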

B. OCR
Some brand logos contain the names of retailers, which correspond to the store names on the map. Thus, OCR is a natural choice for detecting some of these landmarks, typically using a relatively small vocabulary containing all of the store names in the venue. An initial experiment [15] suggested that OCR is potentially useful for detecting brand logos. In our preliminary study, although image matching outperformed OCR in most cases, OCR performed well for simple text logos such as "Aldo" that do not exhibit much texture variety and are written in a common font style.

C. User explicit naming
There may be situations where landmark recognition with computer vision fails, in which case explicit naming of the visible storefronts by the user may be used to estimate the user's location.


III. VISIBILITY MAP OF A LANDMARK

In some of the proposed methods [7, 8], pose estimation provided accurate positioning information. However, pose estimation is often not robust, or is sometimes not possible, when the exact dimensions of a storefront logo image are not known, or when feature points are concentrated in a relatively small part of the overall image, as shown in figure 2. These factors make pose estimation prone to errors and generally unreliable. Also, pose information is typically unknown when performing logo recognition with OCR or with a user's explicit naming. To overcome these challenges, trilateration using a landmark visibility map can be utilized for location estimation. We consider the case where a user is in an open or hallway area in a shopping mall (not inside a particular store). To compute the visibility map, the hallway area is first identified on the venue map; then, for each storefront, its visibility is inferred based on map analysis.

A. Identify the Hallway Region
A 2D map with store information for a shopping mall was obtained from a commercially available map provider's web site, in combination with the venue's visitor map. The 2D venue map is first converted to a black-and-white binary image: walls and doors are depicted as black pixels and open areas as white pixels, as in figure 3a. Then, an indoor boundary is identified as the largest enclosed area after a morphological "close" and "fill" operation [16]. Within this indoor area, connected white components that represent the walkable area on the map are identified and ranked based on the sizes of their bounding boxes. The top connected component will typically be identified as a hallway region. An example is shown in figure 3b.

Figure 3. (a) A shopping mall map. (b) Open and hallway areas are identified and highlighted in blue. (c) The regions from which the storefront of Store_A can be visible are highlighted in orange.

B. Infer the Visible Region of a Storefront
The visibility map of a storefront refers to the area from which this particular storefront may be visible to the user. Similar to computing a field-of-view from the location of the storefront, this region is approximated on the 2D venue map using ray tracing. All hallway region points that are in the line-of-sight of the storefront are identified as visible points. Figure 3c shows the visibility map of Store_A, which has two storefronts.

IV. LOCATION ESTIMATION

A user's location may be estimated using the visibility maps of the identified storefronts or landmarks. As one or more landmarks are detected at a user's location, the location of the user or mobile device may be estimated as a function of the overlapping visibility regions. The estimate can simply be derived from the center of mass of the overlapping regions. This estimate may be improved by taking into account the likelihood of where the user might stand. This likelihood can be estimated based on (1) the pose estimate and its associated confidence when using computer vision, or (2) the best viewing angle and distance from which a landmark can be seen. An example is shown in figure 4. For the two identified storefronts in figure 2, their visibility regions are highlighted in figure 4a. The user's location can be estimated from the overlapping area, as shown in figure 4b.

Figure 4. Location estimation based on two detected landmarks. (a) Nordstrom has two entrances and two visible regions, which are highlighted in blue. The J.Jill store's visibility region is highlighted in green. (b) The user's location, highlighted as the red dot, is estimated based on the overlapping area of the visibility regions.
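The pipeline described above (hallway extraction, ray-traced visibility per storefront, centroid of the visibility overlap) can be sketched on a toy occupancy grid. Everything here is illustrative: the grid, the storefront cells, and the simplified sampled-ray line-of-sight test are assumptions, not the paper's implementation.

```python
from collections import deque

# Toy occupancy grid: '1' = wall pixel, '0' = open space.
GRID = [
    "111111111111",
    "100010001001",
    "100010001001",
    "100000000001",
    "100010001001",
    "111111111111",
]
H, W = len(GRID), len(GRID[0])
wall = [[c == "1" for c in row] for row in GRID]

def largest_open_component():
    """Largest 4-connected open region: a stand-in for the hallway."""
    seen, best = set(), set()
    for sy in range(H):
        for sx in range(W):
            if wall[sy][sx] or (sy, sx) in seen:
                continue
            comp, q = set(), deque([(sy, sx)])
            seen.add((sy, sx))
            while q:
                y, x = q.popleft()
                comp.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and not wall[ny][nx] \
                            and (ny, nx) not in seen:
                        seen.add((ny, nx))
                        q.append((ny, nx))
            if len(comp) > len(best):
                best = comp
    return best

def line_of_sight(a, b):
    """Sampled ray between cell centers; blocked if it crosses a wall."""
    (y0, x0), (y1, x1) = a, b
    steps = max(abs(y1 - y0), abs(x1 - x0)) * 4 or 1
    for i in range(steps + 1):
        t = i / steps
        y, x = round(y0 + (y1 - y0) * t), round(x0 + (x1 - x0) * t)
        if wall[y][x] and (y, x) not in (a, b):
            return False
    return True

hallway = largest_open_component()

def visibility_map(storefront_cell):
    """All hallway cells with line-of-sight to the storefront cell."""
    return {c for c in hallway if line_of_sight(storefront_cell, c)}

# Two hypothetical storefront entrance cells on the toy map.
vis_a = visibility_map((1, 3))
vis_b = visibility_map((4, 7))

overlap = vis_a & vis_b
cy = sum(y for y, _ in overlap) / len(overlap)
cx = sum(x for _, x in overlap) / len(overlap)
print((round(cy, 2), round(cx, 2)))  # centroid = crude user-location estimate
```

The real system ranks candidate hallway components by bounding-box size and runs the ray tracing on the venue bitmap; the centroid step corresponds to the "center of mass of the overlapping regions" estimate in section IV.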

V. POSITIONING EXPERIMENT AND RESULTS

A mobile location system was developed that takes one or more identified landmarks and outputs the user's location. A database containing the names, locations, and orientations of all the storefronts on a 2D map was created offline and stored in memory on the mobile device. The visibility



region for each storefront was then computed in real time on the mobile device. As one or more landmarks were identified, the location of the user was estimated and refined. An experiment was performed with this system. Data was collected in a real shopping mall of 70,000 m² in area, with about 100 shops on the floor. Twenty-six ground truth location points were marked on the venue map. These points were selected so that they could be easily located at the actual venue. A user was asked to visit each of the 26 pre-marked locations and, at each of these locations, to name two to four of the most visible storefronts. Since the identified landmarks were obtained by the user explicitly naming the visible stores, pose information was not available. The identified storefronts were collected and used to estimate the user's location. The following assumptions were made in the visibility map inference: (1) a storefront is only visible within +85° to −85° of the normal of the storefront on the map; (2) a storefront is visible within 80 meters of the storefront.
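The two visibility assumptions above (±85° of the storefront normal, 80 m range) reduce to a simple geometric predicate. The sketch below expresses it with illustrative coordinates and function names of our own choosing:

```python
import math

def storefront_visible(user, pos, normal_deg, max_angle=85.0, max_range=80.0):
    """True if `user` lies within `max_angle` degrees of the storefront's
    outward normal and within `max_range` meters of its position `pos`."""
    dx, dy = user[0] - pos[0], user[1] - pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Signed angular offset from the normal, wrapped to (-180, 180].
    off = (bearing - normal_deg + 180.0) % 360.0 - 180.0
    return abs(off) <= max_angle

print(storefront_visible((10, 10), (0, 0), 45.0))   # in front, ~14 m away
print(storefront_visible((-10, 10), (0, 0), 45.0))  # 90 deg off the normal
print(storefront_visible((60, 60), (0, 0), 45.0))   # in front but ~85 m away
```

Applying this predicate over the hallway cells would carve the ray-traced visibility region down to the experiment's assumed viewing cone and range.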

The experimental results are shown in figure 5, where ground truth locations are plotted against the estimated locations. The color and length of each line indicate the amount of error. The overall performance can be seen in the cumulative distribution function (CDF) plotted in figure 6. The positioning error rates with 2 and 3 visual landmarks are compared with the ones obtained from a commercially available indoor location application. At 50% of the ground truth locations, the error is about 12 meters given 3 identified landmarks. The consistency and accuracy of the experimental results demonstrated the viability of this approach for a good indoor positioning experience.

Figure 5. Location estimation results compared against ground truth. The length and color of each line indicate the amount of error: the longer the line and the darker the red, the larger the error.

Figure 6. CDF of the experiment results with 2 or 3 identified storefronts as visual landmarks for location estimation, in comparison with location results obtained using a commercially available indoor positioning application. The horizontal axis (x) is the positioning error; the vertical axis (y) is the probability that the error is less than or equal to x.

REFERENCES
[1] R. Mautz, "Indoor Positioning Technologies," Habilitation Thesis, ETH Zurich, Feb. 2012.
[2] G. Schindler, M. Brown, and R. Szeliski, "City-scale location recognition," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2007.
[3] K. Ni, A. Kannan, A. Criminisi, and J. Winn, "Epitomic location recognition," in CVPR, 2008.
[4] H. Aoki, B. Schiele, and A. Pentland, "Realtime Personal Positioning System for Wearable Computers," in Proc. 3rd IEEE International Symposium on Wearable Computers, Oct. 1999.
[5] H. Kawaji, K. Hatada, T. Yamasaki, and K. Aizawa, "Image-Based Indoor Positioning System: Fast Image Matching Using Omnidirectional Panoramic Images," in 1st ACM International Workshop on Multimodal Pervasive Video Analysis, 2010.
[6] M. Werner, M. Kessel, and C. Marouane, "Indoor positioning using smartphone camera," in International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2011.
[7] S. Se, D. Lowe, and J. Little, "Vision-based Mobile Robot Localization and Mapping using Scale-Invariant Features," in ICRA, 2001.
[8] X. Li, J. Wang, A. Olesk, N. Knight, and W. Ding, "Indoor Positioning within a Single Camera and 3D Maps," in Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS), 2010.
[9] C. Constantinopoulos, E. Meinhardt-Llopis, Y. Liu, and V. Caselles, "A robust pipeline for logo detection," in ICME, 2011.
[10] L. Ballan, M. Bertini, A. Del Bimbo, and A. Jain, "Automatic trademark detection and recognition in sport videos," in Proc. ICME, 2008.
[11] J. Schietse, J. P. Eakins, and R. C. Veltkamp, "Practice and challenges in trademark image retrieval," in Proc. Int. Conf. on Image and Video Retrieval, 2007.
[12] F. Pelisson, D. Hall, O. Riff, and J. L. Crowley, "Brand identification using gaussian derivative histograms," in Proc. Int. Conf. on Computer Vision Systems, 2003.
[13] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, 2004.
[14] https://developer.qualcomm.com/mobile-development/mobiletechnologies/computer-vision-fastcv
[15] https://developer.vuforia.com/resources/sample-apps/text-recognition
[16] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., 2002, ISBN-10: 0201180758.



- chapter 3 -

Fields, Waves & Electromagnetics


RFID System with Tags Positioning based on Phase Measurements

Igor Shirokov
Dept. of Radio Engineering, Sevastopol National Technical University, Sevastopol, Ukraine
[email protected]

Abstract—The problem of radio frequency identification is highly relevant in modern life. The state-of-the-art approach to this problem involves not only identifying tags but also positioning them. The author has recently been developing a homodyne method of useful-signal detection that allows phase measurements in the microwave band to be carried out in a very simple manner. In turn, the phase progression of a propagating microwave contains information about the link length, which opens a good opportunity for object positioning with high accuracy. An RFID system with tag identification and positioning is considered in the paper. Homodyne detection of a low-frequency signal provides the tag identification, and a phase method of distance measurement forms the basis of tag positioning. Interrogators are placed at fixed points in a room and radiate low-power microwave signals. Transponders move and must be identified and located. Each transponder shifts the frequency of the microwave signal by its own offset (which identifies the transponder) and reradiates the frequency-transformed microwave signal back in the direction of the interrogators. Each interrogator selects the low-frequency difference signals and measures the phase differences between these signals and a reference one; from these measurements, the distances to the transponders are calculated. Some aspects of transponder and interrogator design are discussed in the paper. The use of a one-port resonant transistor amplifier in the transponder improves the technical characteristics of the entire system. The use of separate antennas for transmitting and receiving improves the decoupling of these channels and also improves the sensitivity of the entire system. The algorithm of distance determination based on phase measurements is discussed in the paper. Serially changing the frequency of the microwave signal from 1292.5 MHz to 1302.5 MHz allows unambiguous determination of a distance of up to 30 m (60 m of two-path propagation) with high resolution.

Keywords: RFID; homodyne detection; microwave phase measurements; one-port transistor amplifier; microwave phase shifter; patch antennas

I. INTRODUCTION

Besides tag identification in RFID systems, microwave propagation offers a good opportunity for tag positioning. The pulse radar method of measuring distances and angles is quite unsuitable for indoor applications: its resolution is too low, and the minimum distance requirement of a pulse radar measurement is usually larger than the room size. The resolution of the phase method of distance measurement is determined by the microwave wavelength. Depending on the wavelength, an accuracy of 10 mm and better can be reached [1]. In this paper a new method of tag identification and positioning is presented. Position is calculated from distance measurements between the beacons and the transponders (tags). Microwave phase-progression measurements are used for this purpose. Of course, the phase method suffers from ambiguity, because phase measurements can only take values in the interval between 0 and 2π. In this paper a way of bypassing this problem is discussed. Simultaneous positioning of several tags is often required; in this case the problem of differentiating the tags appears. Furthermore, the electromagnetic compatibility (EMC) of several simultaneously operating radio units must be taken into account: their simultaneous operation must not degrade tag differentiation and positioning. A way of solving this problem is discussed in the paper. Further, tag tracking assumes the radiation of electromagnetic waves. Obviously, the radiated power of the system must be as small as possible, and radiation of electromagnetic energy from the tags should preferably be excluded. This issue is discussed in the paper as well. Besides the technical and EMC aspects mentioned above, the system must also be efficient from an economic point of view. All system units must have the simplest possible design, the hardware installation must not require essential manpower, and the system power consumption must be as small as possible. In other words, the system must satisfy the demands of the state-of-the-art tendency toward so-called "green communication". These aspects are discussed in the paper.

II. APPROACH TO A PROBLEM

A system implementation that is free from the mentioned problems assumes the use of the homodyne method of microwave phase measurements, which is well developed in the author's previous works [2], [3]; this approach is developed further in the present paper. To realize the homodyne method of microwave phase measurements and, consequently, distance determination within a room, we propose to place radio beacons (at least two) along an extended wall at a certain distance b from one another. This distance forms the system base. The number of beacons can be greater than two, and they can be


placed along different room walls. The positioning of the tags is characterized by the distances d_ij from the tags to each beacon. All of this ensures the elimination of ambiguity in distance determination and, additionally, ensures system operation over the entire room area, at arbitrary distances from beacon(s) to tag(s). These aspects are not discussed in the paper, however, and the ambiguity elimination is solved organizationally.
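The abstract's serial frequency stepping (1292.5 MHz to 1302.5 MHz) suggests how the 0 to 2π phase ambiguity can be resolved: the phase difference between two tones gives a coarse, unambiguous distance, which then selects the integer cycle count for the fine single-tone phase. The sketch below assumes a two-tone spacing of 5 MHz (an illustrative choice, giving an unambiguous two-way range of c/Δf = 60 m, i.e. about 30 m of distance); it is not the paper's exact algorithm.

```python
import math

C = 299_792_458.0
F1, F2 = 1.2975e9, 1.3025e9        # assumed pair of stepped frequencies
DF = F2 - F1                       # 5 MHz tone spacing

def two_way_phase(f, d):
    """Measured phase of the 2*k*d round-trip progression, wrapped to 2*pi."""
    return (4.0 * math.pi * f * d / C) % (2.0 * math.pi)

def resolve_distance(phi1, phi2):
    # Coarse, unambiguous estimate from the two-tone phase difference.
    dphi = (phi2 - phi1) % (2.0 * math.pi)
    d_coarse = dphi * C / (4.0 * math.pi * DF)
    # Refine with the fine single-tone phase: pick the integer cycle count.
    n = round((4.0 * math.pi * F1 * d_coarse / C - phi1) / (2.0 * math.pi))
    return (phi1 + 2.0 * math.pi * n) * C / (4.0 * math.pi * F1)

d_true = 12.345                    # meters; must stay below ~30 m
d_hat = resolve_distance(two_way_phase(F1, d_true), two_way_phase(F2, d_true))
print(round(d_hat, 6))
```

With noiseless phases the fine estimate recovers the distance to floating-point precision; with realistic phase noise, the coarse estimate only needs to be accurate enough to pick the correct cycle count n.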

The transponders (tags) are placed on the objects to be located. The number of objects can be arbitrary, within certain restrictions that will be discussed later. In this paper we discuss the simultaneous operation of two transponders, which does not change the approach to the problem in general. Taking into account the system base b and all of the distances d_ij, we can readily determine the tag positions in a Cartesian coordinate system with respect to the system base and the beacons.

Certainly, the system operates within a single room only. Usually the wall material is not transparent to microwaves at all (we do not consider wooden walls), or the signals are strongly attenuated. In this case additional beacons must be installed in the neighboring room. We do not "lose" a tag when it "enters" the next room: it will be "visible" to the beacons of both rooms while it is in the door aperture.

Certainly, the object positioning will be carried out in a plane only. The heights at which the beacon antennas and the transponder antennas are placed must all be equal; violating this rule introduces distance determination errors. However, this problem can be solved easily by placing an additional (third) beacon on the plane of the wall at a certain distance from the system base b. The heights of the beacons and transponders can then be arbitrary; the calculation routine will resolve this.

The block diagrams of a transponder and a beacon are shown in Fig. 1. Each transponder, which is placed on the object, consists of a microwave antenna, a controlled transmission phase shifter (CTPS), a one-port microwave transistor amplifier (OPTA), and the low-frequency oscillator of the transponder (LFOT). Each beacon consists of a microwave oscillator (MWO), a microwave directional coupler (MDC), a microwave transmitting antenna, a microwave receiving antenna, a microwave mixer (MMIX), a low-frequency mixer (LMIX), a low-frequency heterodyne (LHET), a selective amplifier-limiter (SALIM), the low-frequency oscillator of the beacon (LFOB), and a phase detector (PD). The line "Microwave Frequencies" controls the microwave oscillator frequencies; the frequency changing is needed for unambiguous distance determination. The frequencies of different beacons must be different but closely spaced. The choice of frequencies will be discussed later.

Figure 1. The block diagrams of a beacon and a transponder.
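The CTPS-based frequency shifting at the heart of the transponder can be checked numerically: sweeping a phase shifter linearly through 2π per period T_j is mathematically identical to offsetting the carrier by Ω_j = 2π/T_j (serrodyne modulation), since a phase ramp taken modulo 2π leaves the complex exponential unchanged. The baseband frequencies below are illustrative stand-ins, not the system's 1.3 GHz carrier.

```python
import numpy as np

FS = 1_000_000       # sample rate, Hz
N = 10_000           # 10 ms observation window
F_CARRIER = 50_000   # stand-in carrier, Hz
F_SHIFT = 10_000     # transponder's identifying frequency offset, Hz

t = np.arange(N) / FS
# Sawtooth phase sweep 0..2*pi at rate F_SHIFT (the CTPS driven by the LFOT).
ramp = (2 * np.pi * F_SHIFT * t) % (2 * np.pi)
s = np.exp(1j * (2 * np.pi * F_CARRIER * t + ramp))

# The spectrum of the phase-swept carrier peaks at F_CARRIER + F_SHIFT.
spectrum = np.abs(np.fft.fft(s))
peak_hz = np.argmax(spectrum) * FS / N
print(peak_hz)
```

This is why the paper can treat the phase-shifter action as a clean frequency translation ("equivalent to Doppler frequency shifting"): an ideal 2π ramp produces a single shifted tone with no residual carrier.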

The phase differences of the low-frequency signals are obtained on the line "Phase Differences". These phase differences contain the information representing the phase progression of the microwave signals. The line "Transponder Selection" controls the frequency of the low-frequency heterodyne. The figure represents serial processing of the transponder signals; obviously, the use of parallel chains after the microwave mixer would allow parallel signal processing. The processing time would be lower, but the hardware cost would be higher in this case.

III. BASE EQUATIONS

Each i-th beacon radiates a microwave signal that can be described as

u_i1(t) = U_i0 · sin(ω_i0·t + φ_i0),

where U_i0 is the amplitude, ω_i0 is the frequency, and φ_i0 is the initial phase. These oscillations are radiated toward the inner part of the room, where the j-th tag is placed. The microwave, propagated along the distance d_ij, obtains the attenuation A_ij and the phase progression k_i0·d_ij:

u_ij2(t) = A_ij·U_i0 · sin(ω_i0·t − k_i0·d_ij + φ_i0),

where k_i0 = 2π/λ_i0 is the propagation constant and λ_i0 is the wavelength.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013 The j th transponder receives this microwave signal with its microwave antenna. Then the controlled transmission phase shifter implements the monotonous change of microwavesignal phase over the period T j of the low-frequency oscillations on the value π . The low-frequency oscillator generates these oscillations with certain frequency stability. The value of this stability will be discussed later. The shown block diagram assumes passing of microwave signal thru the phase shifter twice. So, the microwave-signal phase will be changed on the value 2π over the period T j of the low-frequency oscillations, as it is shown in Fig. 2a or in Fig.2b. The change of microwave-signal phase over the period T j of the low-frequency oscillations on the value 2π is tantamount to the frequency shift [4] of microwave signal on the frequency  j  2π / Tj . In a certain assumption, this technical solution is equivalent to Doppler’s frequency shifting.

well. Thus, we obtain the microwave signal amplifying in essential value with the excellent noise factor [7]. Further, the amplified microwave signal passes thru the phase shifter again and obtains the frequency and phase shift. The frequency/phase transformed microwave signal will be 

uij 3 (t )  Aij Ui 0 sin  ωi 0   j  t  ki 0 dij  φi 0  φ jL  



where Aij takes into account the transponder gain. The transponder gain determines the operating distance of the system only and it does not affect the accuracy of object positioning. So, we will assume the gain of transponder is equal to 1 ( Aij  Aij ). Transponder reradiates this frequency/phase transformed microwave signal back in the beacon direction. In the beacon the secondary received microwave signal will be

uij 4 (t )  Aij2Ui 0 sin  ωi 0   j  t  ki 0 di  ki0 dij  i 0  φ jL  , where ki0 takes into account the frequency shift ωi 0   j . The frequency shift  j is much lower than the initial frequency (e.g. and ωi 0 fi 0  ωi 0 2π  1.5 GHz F  (10K 100) kHz), then ki0  ki 0 . This secondary received signal is mixed with the original microwave signal and at the mixer output the low-frequency signal of difference is selected. This low-frequency signal will be 

uij 5 (t )  Aij2U i 0 sin  j t  2ki 0 dij  φ jL  



Figure 2. The law of microwave-signal phase changing

The amount of the frequency shift is chosen to be small: F_j (F_j = Ω_j/2π) is equal to tens of kilohertz or thereabouts and in any case does not exceed a hundred kilohertz. One more feature is observed in this case: the initial phase of the controlling low-frequency oscillation, φ_jL, is transferred into the microwave-signal phase directly, without any changes. This feature formed the basis of the author's previous investigations [2], [3]. After the controlled phase shifter the microwave signal is amplified by a one-port microwave transistor amplifier [2]. This amplifier has the simplest possible design, very low power consumption, and excellent noise characteristics. It operates in a narrow frequency band, but this is not a serious limitation in our case. Furthermore, perfect antenna matching can be implemented in a narrow frequency band only.

As we can see from (1), the initial frequency ω_i0 and the initial phase φ_i0 of the original microwave signal are both subtracted in the mixer. Only the double phase progression 2k_i0 d_ij of the microwave signal is of interest for the distance determination. A low-frequency signal from each j-th transponder is obtained at the output of each mixer of each i-th beacon, but the phase shift is unique for each beacon-transponder pair and is determined by the corresponding distance d_ij. The frequency shift Ω_j of each transponder serves as its identifier. As the signal frequencies Ω_j of different transponders are quite different, it is inconvenient to measure the phase differences between these signals and the reference one. To avoid this problem, heterodyning of the received signal is proposed. The heterodyne frequency Ω_i in the i-th beacon is chosen so that the difference Ω_i − Ω_j remains constant, equal to 10 kHz, for example. The signal at this frequency is amplified to limiting and is described as


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

u_ij6(t) = U_0 sin(ω_ij t + 2k_i0 d_ij + φ_jL − φ_iH),

where ij  i   j , φiH is the initial phase of heterodyne signal. The phase of this signals is compared with the phase of low-frequency reference signal with the same frequency   ij . So, the phase detector output data  ij will be proportional to value ij : 2ki 0 dij  t  φ 

where  is the reduced mutual frequency instability of all of low-frequency oscillators, φ is the sum of all of initial phases of all of low-frequency oscillators. Thus, analyzing the data  ij , we can determine each of distances d ij . IV.

ERRORS AND PROCESSING ALGORITHM

The term t is the dynamic error of phase measurements, φ is the static one. However, what value of the error we are talking about? For signal frequency in 10 kHz the absolute frequency instability of crystal oscillator not exceeds 0.1 Hz. For the signal processing time in 10 ms the dynamic error will be 0.36°, what corresponds to distance determination error in 0.2 mm (twice value) for the frequency of microwave signal in 1.5 GHz. Certainly, we can neglect the dynamic error t . The static error φ is constant for all time of measuring process (ever since all of oscillators are started up). We can exclude this error by the calibration procedure, but it will be excluded automatically in a result of processing-algorithm implementation. Thus, the only thing we must ensure is the high frequency stability of each low-frequency oscillator. In other words, the phase mismatch between any two oscillators can not exceed the phase measurements resolution during the whole time of measuring procedure implementation. If the algorithm of coordinates’ determination is not time-consuming, and the number of iterations is not high, the use of ordinary crystal oscillators will be the best solution for technical implementation. A little bit different approach we must use to the determination of microwave-oscillator frequency instability. Here the measured distance plays an important role. Let assume the maximal operating distance d ij in 50 m and maximal error in distance determination dij in 1 mm (the phase measurements error in 1.2°), then for frequency in 1.5 GHz the maximal frequency instability  f0 f0 will be 3 ppm. Such value of frequency instability is realized by temperature stabilizing of reference crystal oscillator. Generally, it is possible to measure a phase difference between 0 and 2π. The phase progression ki 0 dij will be

represented as 2πn  ki 0 d , where n is integer. In order to avoid this problem we serially change the operating frequency of microwave oscillator of each beacon [5] and we measure the phase differences between the reference low-frequency oscillator signal and low-frequency mixer output signal. At first time we fix the frequency f1 and fix this phase difference as φ1 . After that we change the frequency of microwave oscillator in a certain value f 2 and fix new value of phase difference φ2 and calculate the distance as

di 

(φ1  φ2 ) c. 2π( f1  f 2 )

The frequency difference f1  f 2 was chosen in 5 MHz. Such difference corresponds to undoubt phase measurements in 30 m (taking into account two-way propagation) range. The increasing of this difference increases the system accuracy but decreases the system operation range and vice versa. Certainly, these calculations yield the rough results of distance determination. These calculations let us obtain the number of phase cycles n and the possibility to determine the distance in terms of integer numbers of wavelengths. The exact value of distance d ij can be obtained by measuring the phase difference ki 0 dij . Taking into consideration the accuracy of phase measurements in 1.4° (8 digits) and possible wavelength in 0.2 m, the resolution in distance determination will be about 1 mm. We should understand that the measured distance will be conditional distance, taking into account antennas phase centers and all feeder lengths. V.

V. SOME FEATURES OF TRANSPONDER DESIGN

A. One-Port Transistor Amplifier

Reflection amplifiers are well suited for the transponder design according to Fig. 1. Reflection amplifiers for X-band operation were developed in the late 1970s using a circulator and a Gunn-diode oscillator, providing a gain of 10 dB with a noise figure of 15 dB. Another reflection amplifier, for the 20 GHz band, was developed using a FET with a package resonance as positive feedback [6]. Experimental results for this amplifier at 23 GHz showed a noise figure of 6 dB with a gain of 8 dB. A similar approach was used in [7], where a reflection-amplifier circuit using a transistor with positive series feedback was suggested. The described amplifier showed higher stability and efficiency than a reflection amplifier with parallel feedback. A low-noise one-port transistor amplifier was developed in the 1.4-1.6 GHz band to study the capabilities of this kind of amplifier, resulting in lower cost and simplified research [7]. In the mentioned amplifier the chosen active element is a GaAs heterojunction FET NE33284AA with a gate length L_g = 0.3 μm and a gate width W_g = 280 μm. This device has a

minimum noise factor N_f = 0.3 dB and an associated gain

G  19 dB at 1.5 GHz when it is biased at Vds  2 V and I d  10 mA [7]. Theoretical calculations, conclusions, and work of the described amplifier, are based on the use of parasitic capacitances and inductances of the FET, which in a combination of using of external constructive and internal parasitic reactive parts form the serial positive feedback providing amplification of microwave signal. It is obvious the parasitic reactive parts of proper FET and reactive parts of its installation have an impact exclusively in the microwave band. In radio-frequency band the influence of these parasitic elements is reduced and signal amplification in this case is problematic enough. Besides, the adjustment of similar amplifier is a man power consuming process, as it is practically impossible to consider and to calculate the influence of parasitic reactive elements on amplifier parameters in advance.

Finally, the circuit contains, in its various variants, up to four constructive inductances and up to five discrete bypass, coupling, and frequency-forming capacitances, which complicates the design and, in addition, the amplifier adjustment process. At the same time, it is essential that the various system units have a stable, simple, and reliable one-port amplifier capable of amplifying signals in the microwave band. Furthermore, such an amplifier should not demand special adjustment, which would allow its use in mass-produced RFID systems. In [8] another approach to one-port transistor amplifier design is proposed. The schematic diagram of the one-port resonant transistor amplifier (OPTA) is shown in Fig. 3.

Figure 3. Schematic diagram of OPTA

The input signal from the beacon, u_2(t), arrives at the first tap of the inductance coil (hereinafter, the coil), one end of which is connected to the common wire while the other end is connected to the gate of the FET. Thus a signal in phase with the input signal and increased in amplitude is induced on this gate. This voltage causes an in-phase current through the channel of the FET, which flows through part of the coil owing to the direct connection of the FET source to the second tap of the coil. The current through the coil is therefore in phase with the input signal. In other words, positive feedback is realized and signal amplification takes place. The gain reaches its maximum at the resonance of the tank formed by the coil and the input capacitance of the FET.

B. Amplifier Simulation

The Advanced Design System environment was used for simulation, with the corresponding SPICE model of the active element loaded. Strip-line segments were used as the tapped inductance coil, and standard FR-4 dielectric of 0.8 mm thickness was used as the substrate. The simulation result is shown in Fig. 4.

Figure 4. Result of simulation of OPTA's operation

Using the super-low-noise FET Avago ATF-38143, with a 0.4 dB noise figure, 16 dB associated gain, and 230 mmho transconductance, a stable resonant amplification of the one-port transistor amplifier of up to 45 dB was obtained.

C. Experimental Investigations

The experimental investigations were carried out with a standard standing-wave and attenuation indicator. A scalar bridge was used to separate the incident and reflected waves. A photo of the measurement setup is shown in Fig. 5. For the gain measurements the following approach was used. First, a certain level of the incident wave was set. The bridge output was shorted, so the level of the reflected wave was the same, and this level was recorded. Then the short was removed and a 30 dB attenuator was installed, so that if the attenuator output were shorted, the reflected wave would be attenuated by 60 dB, taking into account the double pass through the attenuator. The one-port transistor amplifier was then connected to the output of the attenuator (see Fig. 5a). In this case the indicated level of the reflected wave, relative to the initially recorded one, plus 60 dB, gives the real gain of the signal. We obtained an amplifier gain of 42 dB with a power consumption of about 18 mW (1.8 V supply and 10 mA quiescent current). Thus, the simulation and measurement results are in good agreement.
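The gain bookkeeping in this reflection measurement reduces to simple arithmetic; the indicator readings below are hypothetical, chosen to reproduce the 42 dB result:

```python
def one_port_gain_db(reference_db, indicated_db, attenuator_db=30.0):
    """Gain of the one-port amplifier from reflected-wave readings.
    The reflected wave passes the attenuator twice (in and out), so the
    true gain is the indicated change plus twice the attenuation."""
    return (indicated_db - reference_db) + 2.0 * attenuator_db

# Hypothetical readings: the indicator shows 18 dB below the
# shorted-bridge reference after inserting the 30 dB attenuator
# and connecting the amplifier.
gain = one_port_gain_db(reference_db=0.0, indicated_db=-18.0)
```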


Figure 6. Transmitting and receiving patch antennas

Figure 5. OPTA measurements

The dynamic range of the network analyzer was limited and did not exceed 30 dB, so the signal amplification cannot be seen at full scale: the low-level readings are buried in the noise floor (see Fig. 5b). However, the maximal value of the amplifier gain was captured adequately (see the marker). Although the amplifier operates in a quite narrow frequency band (the -3 dB bandwidth was about 5 MHz), this is not a serious drawback in our case. Furthermore, perfect antenna matching can be implemented in a narrow band only.

VI. SOME FEATURES OF BEACON DESIGN

The operation of the discussed RFID system runs into the same difficulties as the operation of a conventional radar system: the system energy budget is weak. The main problem in this case is the suppression of the transmitted signal in the receiving channel. A standard Y-circulator can suppress the unwanted signal by only 20-25 dB, which is not enough for system operation. We therefore suggest using two separate conventional patch antennas, one for the transmitting channel and one for the receiving channel. The antennas are shown in Fig. 6. The patch dimensions were 55 × 55 mm, and the distance between the patch edges was 70 mm. Standard FR-4 dielectric of 1.5 mm thickness was used as the substrate. The simulation of each antenna and of the antennas' mutual coupling was carried out in the Microwave Office environment. The results of the simulation are shown in Fig. 7.

Figure 7. Antenna(s) simulations

As a single unit, the antenna is well suited for the beacon design. The antenna VSWR was 1.13 at the central frequency of 1.3 GHz (see Fig. 7a) and did not exceed 1.3 within the working frequency band. Moreover, the isolation between the antennas exceeded 50 dB at the central frequency; no Y-circulator can ensure such decoupling. This feature of the antenna unit suits the system demands perfectly.

VII. RESTRICTIONS

Certainly, a complex indoor environment implies multipath microwave propagation. The first patch of the beacon emits the microwave signal into the entire room space, and scattered microwave signals are received by the other patch of the beacon. But these scattered signals do not interfere with the


useful signal because the latter has the frequency shift. Only the scattered signal received by the transponder acquires the same frequency shift, but this signal has a much lower amplitude than the direct one. Naturally, the presence of bulk metal in a room will disturb the normal system operation, as it would the operation of any other radio-engineering system. The number of transponders operating simultaneously in a room can be large, as discussed above, but a certain restriction appears due to signal mixing: combinatorial components can interfere with the useful signal. Careful choice of the ID frequencies eliminates this problem; in any case, it only becomes relevant for a large number of objects in a room.

VIII. CONCLUSION

Thus, the functioning of equipment for precision object positioning was discussed. The considered equipment possesses the simplest design and the lowest cost, while its metrological characteristics are high. The calculation routines are quite realizable, and the equipment installation does not demand extensive manpower.

The transponder, which is set on an object, does not generate any radio signals: it only receives and retransmits the microwave signal from the beacon(s). So the intensity of the electromagnetic field in the object's nearby environment is very low.

The theoretical investigations of the system accuracy give optimistic results, which are confirmed by the author's previous experimental investigations in this field. The final conclusions will be made after testing of the real equipment.

The system is at the proposal stage, and some of its modules are being improved now. As the system follows the radar approach, its energy budget is very weak, so accurate adjustment of the transmitter, transponder, and receiver has to be carried out. The main goal of this adjustment is to ensure the declared system operating range; increasing the transmitter output power is the worst way to solve this problem. The system must ensure so-called "green communication." Currently the output microwave power of the transmitter does not exceed 15 dBm.

REFERENCES

[1] V. B. Pestrjakov, Phase Radio Engineering Systems, Moscow, Soviet Radio, 1968, 468 p. (in Russian).
[2] I. B. Shirokov, "The Multitag Microwave RFID System with Extended Operation Range," in Chipless and Conventional Radio Frequency Identification: Systems for Ubiquitous Tagging, IGI Global, 2012, pp. 197-217.
[3] I. B. Shirokov, "Precision Indoor Objects Positioning Based on Phase Measurements of Microwave Signals," in Evaluating AAL Systems Through Competitive Benchmarking, Indoor Localization and Tracking, S. Chessa and S. Knauth (Eds.), Communications in Computer and Information Science, vol. 309, Springer-Verlag, Berlin, Heidelberg, 2012, pp. 80-91.
[4] J. S. Jaffe and R. C. Mackey, "Microwave frequency translator," IEEE Transactions on Microwave Theory and Techniques, vol. 13, pp. 371-378, 1965.
[5] I. B. Shirokov, "The Method of Distance Measurement from Measuring Station to Transponder" (in Ukrainian), Pat. Ukraine #93645, publ. in Bull. #4, Feb. 25, MPC G01S 13/32, 7 p., 2011.
[6] H. Tohyama and H. Mizuno, "23-GHz band GaAs MESFET reflection-type amplifier," IEEE Transactions on Microwave Theory and Techniques, vol. MTT-27, no. 5, pp. 408-411, May 1979.
[7] A. P. Venguer, J. L. Medina, R. A. Chávez, and A. Velázquez, "Low noise one-port microwave transistor amplifier," Microwave and Optical Technology Letters, vol. 33, no. 2, pp. 100-104, Apr. 2002.
[8] I. B. Shirokov, "Shirokov's one-port resonant transistor amplifier," Pat. Appl. Ukraine #a201114351, filed 05 December 2011, MPC H03F 21/00 (in press).

- chapter 4 -

Signal Strength or Fingerprinting


Broadcasting Alert Messages Inside the Building: Challenges & Opportunities

Osama ABU OUN, Christelle BLOCH, François SPIES

Wahabou ABDOU Laboratory of Electronics, Computer Science and Image. University of Burgundy, France Email: [email protected]

FEMTO-ST Lab (CNRS) University of Franche-Comte 1 Cours Leprince-Ringuet 25200 Montbéliard, France

Abstract—Emergency evacuation from buildings during catastrophic events needs to be quick, efficient, and distributed. Indoor positioning and wireless communications can be used to optimize this process: they make it possible to determine the current location of the people present in a building and to transmit location-based information. This allows giving all of these people the best directions to find their way out of the building, and/or helping the rescue teams find them by providing approximate locations. But this involves many challenges, linked both to the indoor environment and to the urgency of the situation. The goal is to improve evacuation with regard to various antagonistic criteria, particularly simplicity of use, speed, and reliability. The issues involved are described, namely:

• the repetition of GNSS information by access points (APs), using beacons;
• the way each mobile device calculates or receives its own exact location, using the wireless network, which provides both RSSIs and coordinates.

The paper then focuses more specifically on the scientific obstacles encountered in this context, and finally gives some experimental results gathered in a feasibility study to validate some of the basic concepts of this approach.

Keywords: indoor positioning, optimization, emergency evacuation, GNSS information.

I. INTRODUCTION

Even with the successive developments in GNSS technologies and the possibility of determining the receiver location to within several meters, the service is not reliable indoors and is most of the time unavailable there, especially in big buildings. Using wireless networks to broadcast alert messages and customized evacuation directions inside the building organizes and speeds up the evacuation process. In addition, it helps distribute people to the various exits according to their current locations. Broadcasting can be done either by (I) simple broadcast, in which the access points broadcast the data directly to the mobile phones, or (II) broadcast trees, in which the mobile phones rebroadcast the data within their coverage areas in order to extend the broadcast range. The broadcast-tree root could be a mobile phone connected to the Internet through a 3G/4G connection. This paper discusses different scenarios in which this solution could be applied, considering the aforementioned parameters.
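The broadcast-tree option (II) can be sketched as a breadth-first search over a reachability graph; the topology and node names below are illustrative:

```python
from collections import deque

def broadcast_tree(reachable, root):
    """Build a rebroadcast tree by breadth-first search: the root sends
    first, and each phone that receives the alert rebroadcasts it once
    to the neighbours inside its coverage area."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nbr in reachable.get(node, ()):
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return parent          # maps each reached phone to its relay

# Illustrative topology: phone "A" has a 3G/4G connection and is the root.
links = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
tree = broadcast_tree(links, "A")
```

Each phone appears in the tree exactly once, so the alert propagates without duplicate rebroadcasts.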

II. Context

A mobile phone inside a building is typically located in an area where it can receive broadcast messages from several Wi-Fi access points. Selection criteria and an optimization process should be applied to choose the most appropriate directions according to the current position of each mobile phone. The broadcasting method is an essential part of this solution. In large buildings, especially public ones, most people do not connect to the local Wi-Fi access points: either the access points are not public and access is limited to a certain group, or some persons simply do not need to use the network or do not want to drain their phone batteries. Using Wi-Fi beacons to broadcast the alerts and evacuation messages can handle such scenarios without the need to deploy new public Wi-Fi networks inside the buildings. In some cases, people could be trapped inside the building because of obstacles, injuries, or a disability; rescue teams then need their exact locations inside the building in order to evacuate them. Using the wireless access points to broadcast GNSS coordinates inside the building could therefore be one of the best and most economical solutions to this problem.
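As a crude stand-in for such position estimation (not the hybrid algorithm referenced later in the paper), a mobile could combine the broadcast AP coordinates into an RSSI-weighted centroid; the readings below are made up:

```python
def weighted_centroid(readings):
    """Estimate a phone's position as the centroid of the AP positions,
    weighted by received power in linear scale (RSSI given in dBm).
    Each reading is a tuple (x, y, rssi_dbm)."""
    weights = [10 ** (rssi / 10.0) for _, _, rssi in readings]
    total = sum(weights)
    x = sum(w * r[0] for w, r in zip(weights, readings)) / total
    y = sum(w * r[1] for w, r in zip(weights, readings)) / total
    return x, y

# Two APs heard equally strongly: the estimate lands halfway between them.
est = weighted_centroid([(0.0, 0.0, -50.0), (10.0, 10.0, -50.0)])
```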


III. Broadcasting Messages Inside the Building

A. Sending GNSS Information in Beacons from an Access Point

Research has been conducted on technologies that could enhance the delivery of GNSS information inside buildings; using Wi-Fi beacons is one of them [1]. In fact, it is possible for Wi-Fi access points to overload their beacons with certain data, e.g., pre-configured GNSS coordinates, to be delivered to mobile phones and other Wi-Fi clients without the need for an association with them. Beacons have not been used in a wide range of applications, although this approach has many advantages over other GNSS-related solutions such as GNSS repeaters and IMES (Indoor Messaging System) [2], which require new hardware to be installed in the buildings. On the contrary, this solution is software-based and could be deployed by adding an extension to the IEEE 802.11 standard, without the need to install new access points or to use special Wi-Fi cards in the clients. In addition, it saves the resources of the network and of the phones by eliminating the need to establish full connections.

B. Using the Received Beacons to Determine the Most Accurate Position

In the proposed solution, every access point has its own exact GNSS coordinates. The access points broadcast these coordinates in their beacons, so each mobile phone in a given place can receive the coordinates from all the access points that cover this place. Furthermore, estimating the distance to these access points can provide the mobile phone with an accurate position. In this model we propose using a hybrid algorithm that combines a signal-strength cartography with a calibrated propagation model. This algorithm was developed by LIFC (Laboratoire d'Informatique de Franche-Comté) / FEMTO-ST to express distances in a heterogeneous environment [3].
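A sketch of the beacon-overloading idea, packing coordinates into a vendor-specific information element (element ID 221); the OUI and payload layout are assumptions, since the paper does not define a concrete frame format:

```python
import struct

OUI = b"\x00\x11\x22"   # placeholder organisation identifier

def make_location_ie(lat_deg, lon_deg, alt_m):
    """Pack GNSS coordinates into a vendor-specific IE (ID 221) that an
    access point could append to its beacon frames."""
    payload = OUI + struct.pack("!ddf", lat_deg, lon_deg, alt_m)
    return struct.pack("!BB", 221, len(payload)) + payload

def parse_location_ie(ie):
    """Recover the coordinates on the client side, no association needed."""
    eid, length = struct.unpack("!BB", ie[:2])
    if eid != 221 or ie[2:5] != OUI:
        raise ValueError("not our location IE")
    return struct.unpack("!ddf", ie[5:2 + length])

ie = make_location_ie(47.510, 6.798, 325.0)
lat, lon, alt = parse_location_ie(ie)
```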
C. Using Wi-Fi to Broadcast Evacuation Directions

Broadcasting the evacuation directions over the Wi-Fi network can solve major problems of the traditional methods. Some of these problems are related to the people in the building, some of whom could have disabilities preventing them from receiving the directions; others are related to the state of the building during the evacuation, for instance blocked exits or corridors made dangerous by fire. Three different levels are suggested for broadcasting the evacuation directions in a building using Wi-Fi:

• All the Wi-Fi access points in the building broadcast the emergency exits, together with their exact positions, to all the mobiles. In this case the evacuation management system (if there is any) has no information about the persons present in the building or their approximate locations; each mobile decides which exit is the closest according to the approximate distance between itself and the exit.

• Each Wi-Fi access point broadcasts only the emergency exits located within its own range. As in the first level, no information about the mobiles and the persons in the building is available, and each mobile decides which exit is the closest.

• Customized directions are broadcast to each mobile according to its position and the building situation. The directions are generated by the building's evacuation management system. This solution involves three entities: the mobile phone, the access points, and the management server. The protocol relies on broadcast and Wi-Fi management frames to exchange data between the mobile phone and the access points without any association between them. This allows the mobile phone to stay connected to another network while using the positioning and evacuation services of the building's internal network.
The access points relay the positions of the mobile phones to the server in order to keep an updated snapshot of the mobile phones inside the building. Thus, at evacuation time, the server sends the best evacuation plan to each mobile phone through the access points.
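The exit choice made locally by each mobile in the first two levels can be sketched in a few lines; the exit names and coordinates are illustrative:

```python
import math

def choose_exit(position, exits):
    """Level-1/2 policy: the mobile independently picks the exit with
    the smallest straight-line distance to its estimated position."""
    return min(exits, key=lambda e: math.dist(position, e["pos"]))

exits = [{"name": "north", "pos": (0.0, 20.0)},
         {"name": "south", "pos": (10.0, 0.0)}]
chosen = choose_exit((9.0, 2.0), exits)
```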

IV. Experiments and Evaluation

A. Experiment Design

In this study, multiple scenarios were simulated in order to measure the time needed to evacuate a building when following the evacuation directions sent by Wi-Fi broadcast. The simulation was done using NS-2 equipped with the "Shadowing Patterns" model. Many variables are taken into consideration; they can be grouped into three main categories:

• Building structure: the building dimensions, positions of emergency exits, capacity of emergency exits.
• Network structure: the Wi-Fi access points and their positions.
• Population: the number of persons in the building, their initial positions, their initial target coordinates, and their speed.

B. Experiment Policies

The policies used during the simulation are as follows:

• Person movement policy: a person moves from its initial position toward its target in a straight line until it receives the evacuation broadcast with the available evacuation plans; it then stops moving and evaluates all the plans according to the distance between its position and the emergency exit of each plan. Then it starts moving in a straight line toward the closest exit. At the exit it joins the waiting queue, which regulates passage through the exit to the outside according to the exit capacity.
• Initial person position and initial person target: random functions that can cover the whole area of the building or only a certain part of it.
• Person speed and evacuation speed: random functions give different values for each person.
• Exit positions: all the emergency exits are located in the external walls of the building.
• Access points: distributed so that they cover the whole area.
• Exit performance: statistics containing the times at which the first and last persons arrived and the times at which the first and last persons were evacuated.
• Evacuation performance: statistics containing the summary of all the exits and the evaluation of the evacuation process.

C. Experiment Scenarios

Using the aforementioned policies, we test and analyze three different scenarios; each scenario was run ten times according to the following criteria:

TABLE I. Experiment Criteria

Scenario | Area        | APs | Exits | Persons | Distributions
1        | 20 m × 20 m | 3   | 2     | 25      | 100%, 75%, 50%
2        | 40 m × 40 m | 3   | 2     | 50      | 100%, 75%, 50%
3        | 60 m × 60 m | 3   | 3     | 150     | 100%, 75%, 50%

D. Experiment Results

In all scenarios, when the persons were distributed over the whole building (distribution over 100%), all the exits performed almost the same: monitoring the evacuation time of the last person, we obtained similar times. This is not the case when we tested the same scenarios with the same parameters except for the distribution, using a distribution over 75% of the area: the results changed completely. In the three-exit scenarios, 50% of the persons were evacuated through one exit and the other two exits evacuated the rest, whereas in the two-exit scenarios 75% of the persons were evacuated through one exit. As a result, the total evacuation time increased by about 15% compared to the same scenarios with the 100% distribution. The worst evacuation time was observed when we applied the same scenarios with the persons distributed over 50% of the area: the total evacuation time increased by 150% compared to the first test, and in the three-exit scenarios more than 90% of the persons were evacuated through a single exit. Consider that the reason for the evacuation is an earthquake and only a few minutes are available to evacuate the building: if the time needed to evacuate the building equals the evacuation time obtained with the ideal distribution (distribution over 100%), then only about 33% of the persons in the building would be able to leave it in time when they are distributed over 50% of its area. Since we are comparing the same building across scenarios, we can say that the cause is the bad load balancing that results from people choosing their evacuation plan using only the distance between their current position and each exit, while ignoring the current situation of the building, even though they had already calculated their exact position and received the right evacuation plans.
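The load-imbalance effect can be reproduced with a minimal nearest-exit queueing sketch; the speed and capacity values are assumptions, not the NS-2 setup:

```python
import math

def evacuation_time(people, exits, speed=1.2, capacity=1.0):
    """Each person walks straight to the nearest exit, then queues
    there; `capacity` is persons passed per second per exit. Returns
    the time at which the last person has left the building."""
    queues = {i: [] for i in range(len(exits))}
    for p in people:
        i = min(range(len(exits)), key=lambda k: math.dist(p, exits[k]))
        queues[i].append(math.dist(p, exits[i]) / speed)   # arrival time
    worst = 0.0
    for arrivals in queues.values():
        t = 0.0
        for a in sorted(arrivals):
            t = max(t, a) + 1.0 / capacity                 # serve one person
        worst = max(worst, t)
    return worst

exits = [(0.0, 0.0), (20.0, 20.0)]
clustered = [(1.0, 1.0)] * 10                  # everyone near one exit
spread = [(1.0, 1.0)] * 5 + [(19.0, 19.0)] * 5 # evenly split
```

With everyone clustered near one corner, a single exit serves all ten people and total evacuation time roughly doubles compared to the even split.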
By analyzing the results, we found that about 25-30% of the persons who could not leave in time could have been evacuated had they used one of the other exits, especially the persons located near the center of the building, between all the exits.

V. Conclusion

These experiments showed that broadcasting alert messages and multiple evacuation directions inside the building, with each person choosing the best plan according to the distance to the exit, is useful only in the ideal situation where the persons' positions cover the whole building and there is no diversity in density, which is not the case most of the time. In most cases, the persons' positions will be concentrated in certain places inside the building; therefore, during an evacuation, they will line up in front of the closest exit waiting for their turn to leave, while the other exits remain empty. Note that although the other exits are farther away, the persons would leave faster using them. In these experiments we assumed that each mobile phone represents just one person; in real situations, in most cases, a group of two, three, or more persons will follow the evacuation directions of a single mobile phone. Furthermore, some persons ignore the directions and follow the crowds; therefore, even more people end up waiting at the crowded exits. The solution is to build a protocol that can monitor the situation of the building and of the persons inside it in order to generate suitable evacuation plans for each mobile phone as soon as the evacuation process is triggered. Evacuation plans should be generated depending on the current situation of the building, and new plans could be generated after any update. The system should rely on the IEEE 802.11 management frames, so that the persons can communicate with the system without needing to connect to a particular Wi-Fi network.
It also offers two-way communication: server to clients and clients to server. The system defines three different entities: the management server, the Wi-Fi access point, and the mobile phone. The access points relay all the data to the server for processing and encapsulate the data coming from the server into the appropriate IEEE 802.11 management frames.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013




For a Better Characterization of Wi-Fi-based Indoor Positioning Systems

Frédéric Lassabe
Research Institute on Transports, Energy and Society, Belfort-Montbéliard University of Technology, Rue Ernest Thierry-Mieg, Belfort, France
Email: [email protected]

Matteo Cypriani
Laboratoire de Recherche Télébec en Communication Souterraine, UQAT, 675, 1e avenue, Val d'Or, QC, Canada
Email: [email protected]

Philippe Canalda and François Spies
Département d'Informatique et Systèmes Complexes, FEMTO-ST, Université de Franche-Comté, Montbéliard, France
Email: [email protected]
Email: [email protected]

Abstract—Many Wi-Fi indoor positioning systems exist, all published in scientific articles with performance estimations, often in terms of accuracy. However, the deployment of such systems depends not only on their accuracy but also on various criteria which may be unclear or implicit. In this article, we present a detailed taxonomy of Wi-Fi indoor positioning systems. This work aims to provide a set of criteria, identified through tests, a study of the state of the art, and experience in developing such systems, that can be extended to various types of indoor positioning systems. The criteria cover the modelling of RSSI data as well as the hardware and software architectures needed to meet the requirements of a Wi-Fi IPS.

I. INTRODUCTION

Considering the state of the art of indoor positioning systems, it has become hard to choose which hardware, physical measurements, architecture, and algorithms to use or to study when dealing with such systems. This article aims to provide an overall view of the criteria involved in the design and development of indoor positioning systems. In the remainder of the document, we first present a taxonomy of various properties of Wi-Fi Indoor Positioning Systems (IPS). Second, we apply this taxonomy to a set of related work that we studied or developed. The taxonomy thus provides a guideline for the development and deployment of a system of this kind, given its goals as well as hardware and/or software constraints. From the taxonomy and its application to various systems, we draw conclusions about design and model choices according to the deployment context.

II. ARCHITECTURE

In this section, we cover the physical elements of Wi-Fi-based positioning systems and their architecture. Then, we present the impact of the infrastructure on the centralization of the positioning algorithms, and finally, we define implicit and explicit positioning as well as their impact on privacy.

A. Wi-Fi architectures

As Wi-Fi is a wireless communication medium, its first use is to transmit data between devices. However, its signals can also be used to locate mobile devices within a Wi-Fi network's range. Most indoor Wi-Fi networks follow one of these topologies:
• infrastructure mode,
• ad-hoc mode,
• mesh networks.
Infrastructure and mesh modes rely on access points: fixed wired-and-wireless bridges to which user devices, also called client stations, connect to obtain network access. In ad-hoc mode, every mobile device in the topology acts as both a client station and a router. In this subsection, we focus on Wi-Fi positioning systems based on infrastructure mode. The physical measurements used to locate a mobile device can be gathered either by the mobile device itself or by the network infrastructure. We describe here the advantages and drawbacks of these choices in terms of measurements.

1) Measurements performed by the mobile device: mobile devices range from laptop computers with a Wi-Fi network interface card to smartphones whose SoC usually embeds Wi-Fi access. These devices run many operating systems, from desktop OSes such as GNU/Linux, MS Windows, and Mac OS to OSes dedicated to light devices such as MS Windows CE and RT, Apple iOS, Google Android, and many others. An advantage of using the mobile device to provide measurements is that almost any combination of hardware and OS provides Wi-Fi measurement capabilities (a notable exception is Apple iOS, which provides the feature but contractually refuses positioning applications based on it in its App Store). However, three drawbacks can be identified. First, the number of hardware and software configurations makes it very difficult to build an application running on every combination of device and OS. This problem is mitigated in circumstances


where you have control over the mobile devices used by the system. On the contrary, when designing a system open to all customers with their own devices, such a choice becomes very problematic. Second, while the OS API might expose Wi-Fi RSSI values through the kernel, there is no guarantee that the feature is actually implemented in the Wi-Fi chipset firmware, which can lead to inconsistent measurements and bad location estimates. Moreover, different chip models provide varying measurement quality and accuracy, which impairs the resulting position estimate.

2) Measurements performed by the infrastructure: performing the measurements on the infrastructure devices is particularly interesting because, once set up for a network, virtually any mobile device's signal within range can be measured, therefore allowing the mobile device to be located. This requires being able, and authorized, to add software to the infrastructure access points. Although most high-end access points (e.g. Cisco) do not provide these features, a solution is to add dedicated, low-cost devices that provide an API for adding software. For instance, the OpenWrt Linux-based OS for access points [1] is supported by hundreds of models across various brands and provides packet sniffing as well as Wi-Fi signal measurements. Measuring Wi-Fi signals from the infrastructure is interesting for two main reasons: any Wi-Fi device is measurable, and the measurements remain consistent. However, it is not always possible to know the mobile device's characteristics, such as antenna gain and output power, which affect the measured RSSI. Whichever solution is chosen, the infrastructure can still operate normally for data transfers while transparently providing Wi-Fi measurements to the positioning system.

B. Centralized or distributed

The position computation can be centralized or distributed. Centralizing the location computation is interesting when determining the location relies on a lot of data that has to be maintained in a consistent state, as in fingerprinting systems. A centralized architecture performs well with infrastructure-side measurements because the access points are usually wired on the same network and can communicate with the server at wire speed without impacting the Wi-Fi bandwidth. A distributed system can perform position computation on many unsynchronized and remote devices. Such an architecture is implemented either by having the mobile devices compute their own locations (ad-hoc positioning) or by providing many lightweight positioning servers across the network; the lightweight servers can be co-located with the access points or with other existing devices on the network. Designing and maintaining a distributed positioning system is more complicated than a centralized one, because all the required data must be synchronized on every device that computes locations. However, it scales very well with an increasing number of mobile devices.
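As an aside, the kind of data a centralized fingerprinting server maintains can be illustrated with a toy nearest-neighbor matcher (the fingerprint map, the number of access points, and all RSSI values below are invented):

```python
import math

# Invented fingerprint database: calibration position -> RSSI vector (dBm)
# as seen from three access points.
FINGERPRINTS = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-55, -50, -75],
    (0.0, 5.0): [-60, -72, -45],
    (5.0, 5.0): [-70, -52, -50],
}

def locate(rssi):
    """Return the calibration position whose fingerprint is closest
    (Euclidean distance in signal space) to the measured RSSI vector."""
    return min(FINGERPRINTS,
               key=lambda pos: math.dist(FINGERPRINTS[pos], rssi))

print(locate([-54, -51, -74]))  # (5.0, 0.0)
```

A real deployment would interpolate between the k nearest fingerprints rather than snap to one, but the data-consistency argument is the same: every device that computes locations needs the whole map.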

C. Privacy, implicit and explicit positioning

Privacy is a great concern in today's life, where all individual deeds can be processed by computers in real time. It is a sensitive topic that has to be dealt with when providing a positioning system. Before discussing positioning systems and privacy, we have to define implicit and explicit positioning. Explicit positioning requires that the user actively requests his location. Implicit positioning can be performed without the user requesting it, and even without him knowing it; it can be based on any data provided by the user's device during its regular operation (network transmissions, etc.).

1) Privacy: On one hand, with mobile-centered systems, the device gathers and exploits its own measurements to determine its location. It does not need to give any information to other devices, so it can only be located at the user's will. On the other hand, in an infrastructure-centered system, implicit positioning can be used to watch and monitor the mobiles in a centralized way. As an example, [2] describes the capability to identify mobile terminals as a separate criterion, named recognition.

III. TRANSMISSION MEDIUM

In a positioning system, the transmission medium is the medium used either to transmit location information or to be measured in order to obtain information for locating a mobile device. Common transmission media used for positioning include:
• radio networks, based or not on a standard like Wi-Fi (IEEE 802.11), Bluetooth (IEEE 802.15.1), ZigBee (based on IEEE 802.15.4), etc.,
• infrared light,
• ultrasound,
• mechanical devices like accelerometers, gyroscopes, or in-floor sensors,
• optical devices (video cameras) [3],
• geodesy instruments, like laser telemeters and theodolites.
Some positioning systems rely on a combination of two or more transmission media (see Active Bat [4] for instance). Positioning algorithms, best expected accuracy, scalability, system architecture, and energy consumption all depend to some extent on the medium used. For example, ultrasound may not be usable in a noisy environment. In the next subsections, we describe only radio networks, since Wi-Fi is a radio-based network². Radio networks can transmit through obstacles such as building walls, although their signals are strongly attenuated. There is no general rule for using radio networks to locate devices, since there are many standards with various properties.

A. Short-range radio

In short-range radio networks, e.g. Bluetooth, the devices' maximum range reaches only a few meters in a realistic environment. Therefore, many devices are required to locate a mobile device.


² We ignore the IR implementation of IEEE 802.11.

RFID (Radio Frequency IDentification) is another short-range radio medium, best suited for asset tracking in logistics or industry. It relies on passive tags that transmit a signal induced by an RFID reader; the estimated location is the RFID reader's location, which can be recorded.

B. Medium-range radio

Wi-Fi and ZigBee are medium-range radio networks. Although they have a potential range of hundreds of meters outdoors, their indoor range usually only reaches several dozen meters. This allows using fewer devices than in short-range networks, but obtaining an accurate location requires more complex algorithms to process measurements of the signal carrier wave. There exist many indoor positioning algorithms based on Wi-Fi, which are discussed later in this document.

C. Long-range radio

These radio media are usually used for outdoor positioning. They include GSM, UMTS, and LTE networks. To some extent, we may also include GNSS in this category.

IV. PERFORMANCE METRICS

In this section, we propose several performance metrics used to evaluate positioning systems; each is addressed briefly in the following subsections.

A. Symbology

Symbology is the representation of the locations resolved by the positioning system. Common cases are:
• Cartesian coordinates in a local coordinate system,
• global spherical coordinates (latitude and longitude),
• discrete locations such as presence in rooms [5].
Coordinates can be mapped onto discrete locations, while the opposite may not be possible.

B. Spatial Scale

This criterion defines the size of the positioning system's coverage. The spatial scale is often linked with the transmission medium used for positioning. In [6], only three scales are considered: building, campus, and city.

C. Calibration

The calibration of a positioning system is an offline step performed when setting up the system. Calibration is required by many systems in order to gather the data necessary for their operation. Since positioning algorithms base their output on the data built during calibration, calibration is often a critical step for the positioning to work properly, especially for systems based on fingerprinting of the environment. By contrast, some systems, like those purely based on signal propagation models, do not require any calibration, besides a calibration of the mobile terminals themselves that is sometimes needed. Some systems are able to calibrate on their own, in an automated way; we call this process self-calibration.

D. Stability

Stability is the ability of the system to remain accurate under perturbation. Perturbations range from device failures (hardware failure, power cut, empty batteries, etc.) to environment changes, such as modifications of the building topology or furniture reorganization. The latter case is the most critical, since nothing prevents the system from operating, yet the overall performance may be strongly affected. Stability is closely related to fault tolerance and fault detection; some work is conducted with this particular property in mind [7].

E. Accuracy

Accuracy is the criterion that immediately comes to mind when evaluating the performance of positioning systems. However, as shown in this article, it is far from being the only one, nor is it always the most important; furthermore, it can be evaluated in several ways, depending on the positioning symbology. In coordinate systems, accuracy can be defined as the positioning error (the distance between the location estimate and the real location), while in room-presence systems, it is evaluated through the percentage of correct room detections. The best way to compare several positioning systems is to run them in the same testbed and compare the results obtained from the series of tests. However, even such a comparison may be biased, especially if one of the positioning systems was developed inside the considered testbed.

F. Cost

The cost of a system includes:
• the initial hardware cost,
• the deployment cost,
• the maintenance cost,
• the energy consumption.
Note that the energy consumption of the equipment is also related to autonomy, particularly for the mobile terminals (cf. subsection IV-J).

G. Positioning Rate

The positioning rate is the frequency at which the position of the mobiles is computed. It can be expressed in Hertz (number of positions per second) or as a time unit (delay between two successive positions). The importance of this criterion grows with the mobiles' speed and, when positioning is centralized on a server, with the number of mobiles to locate.

H. Positioning Delay

The positioning delay is the delay between a positioning request and its resolution. It is not necessarily related to the positioning rate. For instance, a high delay may result from a positioning algorithm that needs to gather data for several seconds before being able to compute the device's location. A high delay coupled with a low positioning rate may be a symptom of a positioning system whose computation is too slow to determine a position.
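The two accuracy notions of subsection IV-E can be made concrete in a few lines: mean positioning error for coordinate symbology, and the percentage of correct room detections for room-presence symbology (all numbers below are invented):

```python
import math

# Coordinate-based accuracy: mean distance between estimates and ground truth.
estimates = [(1.0, 2.0), (3.5, 0.5), (2.0, 2.0)]
truths    = [(1.5, 2.0), (3.0, 1.0), (2.0, 3.0)]
errors = [math.dist(e, t) for e, t in zip(estimates, truths)]
mean_error = sum(errors) / len(errors)

# Room-presence accuracy: percentage of correct room detections.
detected = ["A", "B", "B", "C"]
actual   = ["A", "B", "C", "C"]
room_accuracy = 100.0 * sum(d == a for d, a in zip(detected, actual)) / len(actual)

print(round(mean_error, 3), room_accuracy)  # 0.736 75.0
```

Reporting the full error distribution (e.g. its CDF) rather than only the mean avoids hiding outliers, which is one reason single-number accuracy claims are hard to compare across papers.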


I. Scalability

In Wi-Fi positioning systems, scalability is bounded by the evolution of the size of the serviced area and by the number of devices to locate.

V. OVERVIEW OF STATE OF THE ART WI-FI POSITIONING SYSTEMS

System         | II-C1 | IV-A | IV-B | IV-D | IV-E   | IV-F | IV-G | IV-H | IV-I | IV-J | IV-K
RADAR [8]      | no    | c    | b    | u    | 3 m    | 1    | 2    | 2    | 1    | 4    | p
Ekahau [9]     | yes   | c    | b    | u    | 1-3 m  | 1    | 2    | 4    | 1    | 3    | p
Horus [10]     | ?     | c    | c    | u    | 4 m    | 1    | 3    | 4    | 2    | 3    | s
OwlPS [11]     | no    | c    | b    | u    | 4.5 m  | 3    | 2    | 4    | 3    | 4    | s
Nibble [5]     | ?     | r    | b    | u    | 95%    | 2    | 3    | 4    | 1    | ?    | s
Aeroscout [12] | yes   | c    | c    | u    | 3-10 m | 3    | 3    | 4    | 2    | 4    | p
Point2map [13] | no    | c    | b    | u    | 4-5 m  | 2    | 2    | 3    | 1    | 4    | p

TABLE I: Taxonomy applied to state-of-the-art systems. Criteria are numbered by their entry in this document.

Table I shows the systems' properties. Privacy is denoted according to the ability to locate someone without his authorization. Symbology is denoted c for coordinates and r for room presence. Spatial scale is denoted b, c, and w for building, campus, and wide-area systems, respectively. Publication is denoted p and s for patented and published in scientific papers. Stability is denoted u and s for unstable and stable. Accuracy is given for the Wi-Fi-based results only (since Aeroscout also relies on other components); it is the average accuracy, as this is the only metric provided by all the papers, and it is based on the claims of the systems' authors. The other criteria range from 1 (poor) to 4 (excellent). Concerning architecture, centralization, and calibration, all systems are in infrastructure mode, centralized, and calibrated.

VI. CONCLUSION AND FUTURE TRENDS

In this article, we presented a set of criteria to qualify positioning systems not only through their raw accuracy, but also through various useful properties of their software architecture and models as well as their hardware architecture. Indeed, studying positioning systems and their algorithms only through accuracy does not reveal the necessary trade-offs between system cost, available hardware, maintenance staff, and so on. We compared some major state-of-the-art systems through our criteria. This comparison shows that most published systems are centralized and require calibration to be operational; mostly, they use fingerprinting algorithms. Next steps in the field of positioning system comparison methods shall include new criteria, such as system interoperability and ease of use (such as provided by a RESTful protocol).

REFERENCES
[1] OpenWrt official website, http://openwrt.org/.
[2] J. Hightower and G. Borriello, "A survey and taxonomy of location systems for ubiquitous computing," tech. rep., IEEE Computer, 2001.
[3] F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua, "Multi-camera people tracking with a probabilistic occupancy map," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007.
[4] A. Ward, A. Jones, and A. Hopper, "A new location technique for the active office," IEEE Personal Communications, vol. 5, pp. 42-47, October 1997.
[5] P. Castro, P. Chiu, T. Kremenek, and R. R. Muntz, "A probabilistic room location service for wireless networked environments," in UbiComp '01: Proceedings of the 3rd International Conference on Ubiquitous Computing, (London, UK), pp. 18-34, Springer-Verlag, 2001.
[6] M. Kjaergaard, "A taxonomy for radio location fingerprinting," in Location- and Context-Awareness (J. Hightower, B. Schiele, and T. Strang, eds.), vol. 4718 of Lecture Notes in Computer Science, pp. 139-156, Springer Berlin/Heidelberg, 2007.
[7] C. Laoudias, M. Michaelides, and C. Panayiotou, "Fault tolerant positioning using WLAN signal strength fingerprints," in Indoor Positioning and Indoor Navigation (IPIN), 2010 International Conference on, pp. 1-8, Sept. 2010.
[8] P. Bahl and V. N. Padmanabhan, "RADAR: An in-building RF-based user location and tracking system," in INFOCOM (2), pp. 775-784, 2000.
[9] T. Roos, P. Myllymäki, H. Tirri, P. Misikangas, and J. Sievänen, "A probabilistic approach to WLAN user location estimation," International Journal of Wireless Information Networks, vol. 9, pp. 155-164, July 2002.
[10] M. A. Youssef, A. Agrawala, A. U. Shankar, and S. H. Noh, "A probabilistic clustering-based indoor location determination system," Tech. Report CS-TR-4350, University of Maryland, Mar. 2002.
[11] OwlPS project's official web page, http://owlps.pu-pm.univ-fcomte.fr/.
[12] A. E. V. Solutions, "Aeroscout system: Bridging the gap between Wi-Fi, active RFID and GPS."
[13] J. Cardona, F. Lassabe, and A. Herrera, "System and method for determining location of a Wi-Fi device with the assistance of fixed receivers," US Patent App. 13/069,219, May 14, 2013.


- chapter 5 -

Time of Flight, TOF, TOA, TDOA


Locating and classifying of objects with a compact ultrasonic 3D sensor

Christian Walter and Herbert Schweinzer
Institute of Electrodynamics, Microwave and Circuit Engineering
Vienna University of Technology, Vienna, Austria
Email: [email protected], [email protected]

Abstract—Various applications exist for scene analysis based on ultrasonic sensors, including robotics, automation, map building, and obstacle avoidance. We present a compact sensor for 3D scene analysis with a wide field of view extending beyond the main lobe of the transducer, inherent 3D location awareness, and low cost. The sensor employs a centered electrostatic transducer and four microphones in a small spatial configuration. Accurate time-of-flight measurements are performed using pulse compression techniques. Low cost is achieved by using binary correlation techniques, allowing the use of single-bit A/D converters. Precise angle-of-arrival measurements are performed using multichannel cross-correlation between microphones. This information, together with multiple measurements at different positions, is used to obtain a cloud of 3D reflection points. These points are further processed, segmented into groups, and, where possible, identified with physical objects. Measurements are then compared with a simulation of the scene, showing the suitability of our sensor for scene analysis.

Keywords: ultrasonic, localization, indoor, 3D scene analysis, map building

I. INTRODUCTION

Scene analysis is an important area for applications such as robotics, automation, supervision, and map building. Key objectives of such a system are the determination of position, orientation, and type of objects in an a priori unknown environment. Using ultrasound for this task is beneficial due to its low cost, its low propagation speed allowing accurate time-of-arrival (ToA) measurement, its insensitivity to dust or foggy atmospheres, and its inherent data reduction [1]. Data reduction is an important aspect of scene analysis, since one of its main difficulties is the segmentation and model fitting of data [2]: model fitting is not possible before the data is segmented, yet segmentation already requires some idea of the geometric objects, leaving us with a chicken-and-egg dilemma. Compared to optical systems, ultrasound has an advantage here because the specular reflection properties of objects provide inherent data reduction. On the other hand, systems using ultrasound require some form of motion, as the information obtained from a single position is limited. Different approaches to sensor design have evolved over the last decades; they can be distinguished by their measurement principle, whether they work in 2D or 3D, their geometric configuration, and the required number of transducers. Early systems used simple sensor sweeping techniques, as in [3]. In systems employing more than one sensor, the most common configuration is the binaural one for 2D localization, which was soon extended to 3D by various researchers [4, 5]. Our proposed sensor configuration can be used for 3D localization, is low in cost and size, and provides high accuracy. The sensor consists of four microphones with a centered transmitter, where time-of-flight (ToF) measurements can be used to localize passive reflection points in the room. The main distance information is contained in the ToF, whereas the direction information is contained in time-difference-of-arrival (TDoA) measurements. At a given sensor position, the type of object determines whether a reflection point exists. Two constraints have to be met: first, the object must have an acoustically hard boundary so that acoustic waves are reflected, and second, the law of reflection must be fulfilled. Since only limited information is obtained at a single position, sensor motion is required to gather enough information for scene analysis. Scene reconstruction is performed using the reflection points measured within the sensor coordinate system and the position of the sensor in the room; therefore, both measurements must have high accuracy for satisfying results. In mobile robot applications, a combination of this sensor with an indoor positioning system (IPS) is possible. LOSNUS, an IPS developed by our group, which enables highly accurate position measurements with uncertainties smaller than 1 cm [6, 7], can be used for locating the sensor.

II. SENSOR DESIGN

A. Sensor

A sensor design similar to that in [5] is used, shown in Fig. 1. It consists of a cross of four microphones M1-M4 with a central ultrasonic transducer T. The microphones are equally spaced: opposite microphones are a distance 2d apart. As a general rule, sensor construction is a trade-off between spatial resolution [8] and compactness, which reduces both sensor size and the object dependence of measurements. Assuming only convex or planar objects, the time differences between the channels are bounded by 0 ≤ |ToF_i − ToF_j| ≤ 2d/c for the pairs M1/M2 and M3/M4. For the other pairs, the time difference is bounded by 1.41·d/c.
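These geometric bounds later serve as the grouping constraint in the signal processing; a minimal sketch of the check (the numeric values of d and c are invented, and the pair table lists only representative pairs, assuming M1/M2 and M3/M4 are the opposite pairs):

```python
# Geometric TDoA bounds for the cross-shaped sensor (illustrative numbers).
c = 343.0  # speed of sound [m/s]
d = 0.02   # half-spacing: opposite microphones are 2*d apart [m]

# Maximum |ToF_i - ToF_j| per pair type, from the sensor geometry.
BOUND = {
    ("M1", "M2"): 2 * d / c,      # opposite pair
    ("M3", "M4"): 2 * d / c,      # opposite pair
    ("M1", "M3"): 1.41 * d / c,   # adjacent pair (spacing d*sqrt(2))
}

def pair_ok(pair, tof_a, tof_b):
    """True if the ToF difference is geometrically possible for this pair."""
    return abs(tof_a - tof_b) <= BOUND[pair]

# A 100 us difference fits the opposite-pair bound (2d/c ~ 117 us) but
# violates the adjacent-pair bound (1.41*d/c ~ 82 us).
print(pair_ok(("M1", "M2"), 4.000e-3, 4.100e-3))  # True
print(pair_ok(("M1", "M3"), 4.000e-3, 4.100e-3))  # False
```

Peaks that violate the bound for their pair cannot stem from the same reflection point and are discarded when echoes are grouped.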


Figure 1. Sensor configuration

B. Signal processing

High-resolution ToF measurements are usually performed using pulse compression techniques or phase measurements. Our system uses pulse compression with a linear frequency modulated chirp (LFM), where the obtainable resolution is a function of the time-bandwidth product. Pulse compression requires a known signal, called the template, whose presence is detected in the received signal. Difficulties arise when a single template signal is applied outside of the main lobe of a broadband ultrasonic transducer: echoes arriving from directions outside the main lobe are heavily changed in phase and amplitude, and correlation can drop by more than 50% [8, 9]. Furthermore, the ToF estimates are no longer correct due to imperfect matching with the template. A possible solution is to use multiple learned template signals, which can also be used for direction estimation as in [10]. Other possibilities are spatial prediction of the sensor response following Huygens' principle, modeling the sensor as a set of small point sources, as well as numerical integration over the transducer surface, where each point contributes to the pressure field in the far field [11]. A comparison of the different methods, point synthesis (point), no prediction (none), and manually extracted templates (measured), is shown in Fig. 2, where the sensor was placed in front of a flat wall and rotated from 0° to 60° on its x-axis. As the movement is continuous, so should be the time differences. The peak sensitivity of the sensor is about 4 microseconds per degree [8], making clear that these errors cannot be neglected if accurate positions of reflection points are needed.

Figure 2. Errors in time difference estimations for different methods (point = Huygens model, measured = manually extracted templates, none = single template, tdoa = channel cross-correlation)

The proposed solution is to use pulse compression with a small set of templates for echo detection and then to apply cross-correlation between channels to identify the individual time delays. The plot in Fig. 2 shows exactly this algorithm (tdoa), and it can be seen that there are zero outliers from −8° to +60°. The complete algorithm can be described as follows: for an input signal r_i, 1 ≤ i ≤ 4, where i is the microphone index, binary cross-correlation is performed with all templates t_j, 1 ≤ j ≤ T, of length M. (1)

For the T cross-correlation waveforms, peaks are identified in all four channels. To suppress side lobes and to avoid multiple interpretations of the same echo, only the best peaks within a given time window are selected. In the next step, peaks are combined into groups; a set of peaks belongs to the same group if it obeys the time-difference constraints presented in the sensor section. Afterwards, more accurate time differences are obtained by performing channel cross-correlation between the microphone pairs. The final ToF results are calculated as follows: the time delays are fitted to the ToF data, giving a single time offset, and the reported ToF is this offset plus the time delays.

C. Calculation of reflection points

Based on the four ToF measurements, distance estimates are obtained by multiplication with the speed of sound. With four distance estimates, the position of a reflection point in the sensor coordinate system can be calculated according to [6], where d is the microphone spacing shown in Fig. 1 and d_i are the distances. (2)

III. SCENE ANALYSIS

A. Tracing of Objects

For further analysis, and if multiple measurements are available, it is important that reflection points belonging to the same physical object are grouped together. The proposed algorithm is designed for a mobile robot, where R_i denotes the position of the robot at time instant i. Let d(R_i, R_j) be the Euclidean distance between the two robot positions R_i and R_j. A measured position P_i^m at time instant i, where m is the measurement index, belongs to a group G if there exists a P_j^n ∈ G satisfying (3), where S is the snap size and B is the backtracking length. The first condition expresses that the distance between two points cannot be larger than the distance the robot has moved plus a snap length, which accounts for measurement uncertainty. The second condition ensures that positions on different objects are not placed in the same group.

B. Geometrical acoustics

Geometrical acoustics, or ray acoustics, describes sound propagation in terms of rays. The well-known equation (4) is the vector formulation of the law of reflection, as shown in Fig. 3(a). All vectors are assumed to be unit vectors.
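A minimal sketch of the pulse-compression ToF estimation described in Section II-B: the received signal is cross-correlated with the LFM chirp template, and the correlation peak gives the ToF (synthetic single-echo signal, no noise; the sample rate and echo placement are invented, only the chirp parameters follow the experimental setup):

```python
import numpy as np

fs = 500_000                       # sample rate [Hz], illustrative
t = np.arange(0, 750e-6, 1 / fs)   # 750 us chirp, as in the experiment
# LFM chirp: 51 kHz center, 28 kHz bandwidth; phase = 2*pi*(f0*t + k/2*t^2)
f0 = 51e3 - 14e3                   # start frequency (center - bw/2)
k = 28e3 / t[-1]                   # sweep rate [Hz/s]
template = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# Synthetic received signal: the chirp delayed by 2 ms (one echo).
delay_samples = int(2e-3 * fs)
rx = np.zeros(4096)
rx[delay_samples:delay_samples + template.size] = template

# Pulse compression: cross-correlate and take the peak lag as the ToF.
corr = np.correlate(rx, template, mode="valid")
tof = np.argmax(corr) / fs
print(tof)  # 0.002
```

The real sensor uses binary (single-bit) correlation and multiple learned templates for off-axis echoes; the peak-picking and grouping steps then operate on these correlation waveforms.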


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Figure 3. (a) Law of reflection, (b) Surface diffraction
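The displayed equation (4) is an image in the original; in vector form, the law of reflection sketched in Fig. 3(a) is commonly written r = d − 2(d·n)n for a unit incident direction d and unit surface normal n, which can be checked numerically:

```python
# Law of reflection in vector form (a standard identity; equation (4) itself
# is not reproduced in this extraction): r = d - 2 (d . n) n,
# with d the unit incident direction and n the unit surface normal.
def reflect(d, n):
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

# 45-degree incidence on a floor with normal (0, 1): x is kept, y is flipped
print(reflect((0.7071, -0.7071), (0.0, 1.0)))
```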

While transmission, reflection and absorption can easily be treated by geometrical acoustics, diffraction effects can be accounted for by an extension of the theory, e.g. the geometrical theory of diffraction [12]. In the case of a homogeneous medium, a surface-diffracted ray around a cylinder can be found by imagining a string from a point P to a point Q pulled taut, as shown in Fig. 3(b).

IV. EXPERIMENTAL SETUP

Measurements have been performed in a standard office room with a size of 500 x 300 x 330 cm, shown in Fig. 4(a). The sensor was mounted on a linear belt (Schunk PowerCube™ PLB 070) assumed to be parallel to the sensor x-axis. Movement was performed in steps of 1 mm from x = 0 mm to x = 300 mm, yielding 301 measurements. In addition to the objects already present in the room, we added a small cylinder with a diameter of 84 mm. The electrostatic transmitter used is a Senscomp series 600 environmental-grade transducer. The driving signal was a linear frequency-modulated chirp with a duration of 750 μs, a bandwidth of 28 kHz and a center frequency of 51 kHz. The microphones used are four SPM0404UD5 from Knowles.

V. RESULTS

A. Localization result
The post-processed position data for the X/Y plane are shown in Fig. 4(b); the Z/Y plane is not shown due to limited space. In total, 1283 reflection points have been obtained from the ToF data. Three different physical objects responsible for the echoes could be identified: the cylinder placed into the room as an additional object, the wall (front side of the cabinet) and the floor. The floor is not located at x = 0 cm because the sensor was not perfectly parallel to the floor, and the sensitivity for echoes at angles of 90° is lowest. One object next to the floor was not further identified. Additional detected groups exist which are due to multiple reflections. For example, the group labeled "wall-sensor-wall" consists of positions calculated from the acoustic wave propagating from the sensor to the wall, being reflected back to the sensor, then reflected back to the wall and finally reflected back to the sensor. The echo "wall-cylinder-wall" is even more complex: the wave is reflected from the wall to the backside of the cylinder, then back to the wall and then back to the sensor. Despite the simplicity of the scene, the capabilities of our sensor construction can be seen. A simple range-based system with an opening angle of 20° would track the cylinder most of the time and would see the wall only at the extents of the belt; information about direction and about the floor would be completely lost.

B. Cylinder
The individual ToF curves for the microphone pairs M1/M2 and M3/M4 are shown in Fig. 5.

Figure 5. ToF distances for cylinder, (a) M1/M2, (b) M3/M4

As the cylinder axis is parallel to the z-axis there is nearly no time difference for the microphone pairs M3 and M4. Looking at Fig. 5(a) showing our coordinate system and microphone placement we can see that at the position x=0cm microphone M1 is closer. At the position x=9.1cm the ToF reaches the smallest value for M1. This is the case when the cylinder is centered between the transducer and the microphone M1. At the distance x=11cm the transducer is directly in front of the cylinder giving the same ToF for both microphones. At x=13.3cm the first case is repeated for microphone M2.
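The geometry behind these minima can be checked numerically: for a transmitter/microphone pair translating along x, the bistatic path transmitter → cylinder → microphone is shortest when the cylinder lies midway between the two. The coordinates and the microphone offset below are illustrative assumptions, not the measured values.

```python
import math

# Hedged sketch of the Fig. 5 geometry. CYL and MIC_OFF are assumed values;
# the point is the midpoint property, not the exact numbers.
CYL = (0.11, 1.0)     # cylinder position in the x/y plane (assumed)
MIC_OFF = -0.038      # microphone 3.8 cm to the left of the transmitter (assumed)

def path(x):
    """Bistatic path length for rig position x: transmitter at x, mic at x + MIC_OFF."""
    tx, mic = (x, 0.0), (x + MIC_OFF, 0.0)
    return math.dist(tx, CYL) + math.dist(mic, CYL)

xs = [i / 1000 for i in range(0, 301)]   # 1 mm steps, as in Sec. IV
x_min = min(xs, key=path)
print(x_min)   # minimum where the transmitter/mic pair straddles the cylinder
```

With the assumed 3.8 cm offset, the minimum occurs at x = 0.129 m, i.e. where the midpoint of transmitter and microphone sits directly over the cylinder; this is the same effect that places the M1 and M2 minima on either side of x = 11 cm in Fig. 5(a).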


Figure 4. (a) Photo of scene, (b) Post-processed traces for the X/Y plane, (c) Cylinder in the X/Y plane at an enlarged scale of 5 cm in X and 7 mm in Y


shown in Fig. 7. Similar to Fig. 4(b), when the sensor is in front of the belt, reflection points lie on either the left or the right side, where they form groups. Fig. 8(a) shows the simulation of the cylinder. Diffraction effects have been modeled as well, and the simulation results for the wall are shown in Fig. 8(b). These results agree well with the measurements in Fig. 5(a) and Fig. 6(a).


C. ToF and energy for wall
As simple as the wall seems as an object for scene analysis, it becomes more complicated when objects are in front of it. This can be seen in Fig. 6(a), where the ToF distances for the microphone pair M1/M2 are shown. For a microphone in the vicinity of the cylinder, the ToF becomes larger. As this happens at different positions for each microphone, the echoes have a perceived wrong direction. This is the reason why in Fig. 4(b) the positions for the wall are not drawn behind the cylinder; instead they are all directed either to the left or to the right. Fig. 6(b) shows the effect of diffraction on the energy of the echo.


Figure 8. (a) ToF distances for cylinder, (b) ToF distances for wall


VII. CONCLUSION


Figure 6. (a) ToF distances for wall with diffraction effects, (b) Echo energies with break-in in case of diffraction

D. Spurious echoes
Two spurious echoes have been identified. One group of positions comes from exactly the same direction as the echoes from the wall and has exactly twice their distance from the sensor. This was verified with the data from Fig. 4(b), where the distance to the wall is 130.6 cm; the distance to the group labeled wall-sensor-wall is approximately twice as large. The other spurious object identified was an echo reflected from the wall to the backside of the cylinder, then back to the wall and finally back to the sensor. The distance of the cylinder at x = −11 cm is 71.68 cm, the distance of the wall is 130.6 cm, and the diameter of the cylinder is 8.4 cm. Therefore the total acoustic path taken by the echo is 130.6 cm + 2(130.6 cm − 71.68 cm − 8.4 cm) + 130.6 cm ≈ 362.2 cm. Half of this total path, approximately 181 cm, closely matches the distance shown in Fig. 4(b).
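The path arithmetic above can be verified directly from the quoted values (all in cm, taken from the text):

```python
# Arithmetic check of the spurious "wall-cylinder-wall" path.
wall, cyl, diam = 130.6, 71.68, 8.4   # values from the text [cm]
total = wall + 2 * (wall - cyl - diam) + wall
print(total)        # total acoustic path
print(total / 2)    # one-way distance, as plotted in Fig. 4(b)
```

The halving reflects that positions in Fig. 4(b) are computed from one-way distances, so a round-trip multiple-reflection echo appears at half its total acoustic path.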

A compact sensor, together with algorithms for 3D scene analysis, has been presented. Practical measurements and comparison with a simulation of the example scene outline the benefits over simpler solutions such as range-based sensors, which neglect important details of the environment. Different objects have been identified, and due to the high resolution of the proposed system, identification of the object type is possible. Furthermore, practical problems have been identified, including diffraction effects around obstacles and fake objects caused by multiple reflections. Work at our group is ongoing with a new robot system with enhanced movement capabilities. The main focus of our future research is automated object classification and the identification of fake objects caused by multiple reflections and/or diffraction effects. REFERENCES

VI. SIMULATION

To verify correct operation of the sensor in a diffraction situation, the scene has been simulated in MATLAB using a plane wall as a reflector and a cylinder. The wall and the cylinder have been assumed to extend infinitely along the z-axis.
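The paper does not spell out how the reflection paths are computed; a common construction for the direct wall reflection is the image-source method, sketched here under the assumption of a plane wall at y = W. The wall distance is taken from Sec. V.D; the coordinates are illustrative.

```python
import math

# Hedged sketch of a direct wall reflection via the image-source construction
# (an assumption about the simulation's internals, not a quote from the paper):
# the specular path from transmitter T to microphone M off a plane wall at
# y = W equals the straight-line distance from T to M mirrored across the wall.
W = 1.306   # wall distance [m], as measured in Sec. V.D

def wall_path(tx, mic):
    mic_img = (mic[0], 2 * W - mic[1])   # mirror the microphone across the wall
    return math.dist(tx, mic_img)

# Colocated transmitter and microphone: the path is simply twice the wall distance
print(wall_path((0.0, 0.0), (0.0, 0.0)))
```

The surface-diffracted rays of Fig. 3(b) would need the taut-string construction in addition; this sketch covers only the specular part.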


Figure 7. Simulation of scene with dots showing reflection points


For each sensor position on the belt, all possible reflection paths have been calculated, including direct reflections from the cylinder, direct reflections from the wall and diffraction around the cylinder. The result of the simulation in the X/Y plane is



[1] L. Kleeman, "Fast and accurate sonar trackers using double pulse coding", IEEE Intelligent Robots and Systems, vol. 2, 1999.
[2] S. I. Kim, S. J. Ahn, "Extraction of Geometric Primitives from Point Cloud Data", ICCAS, June 2005.
[3] J. Borenstein, Y. Koren, "Obstacle avoidance with ultrasonic sensors", IEEE Journal of Robotics and Automation, vol. 4, April 1988.
[4] H. Akbarally, L. Kleeman, "A sonar sensor for accurate 3D target localisation and classification", IEEE ICRA, vol. 3, May 1995.
[5] G. Kaniak, H. Schweinzer, "A 3D Airborne Ultrasound Sensor for High-Precision Location Data Estimation and Conjunction", IMTC, May 2008.
[6] C. Walter, M. Syafrudin, H. Schweinzer, "A Self-contained and Self-checking LPS with High Accuracy", ISPRS IJGI, 2013 (unpublished).
[7] H. Schweinzer, M. Syafrudin, "LOSNUS: An ultrasonic system enabling high accuracy and secure TDoA locating of numerous devices", IEEE IPIN, Sept 2010.
[8] C. Walter, H. Schweinzer, "An accurate compact ultrasonic 3D sensor using broadband impulses requiring no initial calibration", IMTC, 2012.
[9] H. Elmer, H. Schweinzer, "Dependency of correlative ultrasonic measurement upon transducer's orientation", IEEE Sensors, vol. 1, 2003.
[10] G. Kaniak, H. Schweinzer, "Advanced ultrasound object detection in air by intensive use of side lobes of transducer radiation pattern", IEEE Sensors, Oct 2008.
[11] M. Zollner, "Schallfeld der kreisförmigen Kolbenmembran", in Elektroakustik, 3rd ed., Springer, pp. 96-102, ISBN 3540646655.
[12] J. Keller, "Geometrical Theory of Diffraction", Journal of the Optical Society of America, vol. 52, 1962.
[13] A. Pierce, "Diffraction of sound around corners and over wide barriers", Journal of the Acoustical Society of America, vol. 55, 1974.


Location Estimation Algorithms for the High Accuracy LPS LOSNUS

Mohammad Syafrudin

Christian Walter and Herbert Schweinzer

Institute of Electrodynamics, Microwave and Circuit Engineering Vienna University of Technology Vienna, Austria [email protected]

Institute of Electrodynamics, Microwave and Circuit Engineering Vienna University of Technology Vienna, Austria [email protected] [email protected]

Abstract—Local Positioning Systems (LPSs) based on ultrasound are mostly aimed at tracking mobile devices or persons. LPS LOSNUS, however, is mainly designed for locating numerous static devices with high accuracy, especially in a wireless sensor network (WSN). Applications in WSNs could be improved significantly, including network integration based on node locations, supervising locations with respect to accidental disarrangement, and detecting faked node locations. This article presents a localization algorithm for LPS LOSNUS in a six-transmitter configuration which can tolerate a single failure in a ToA measurement resulting from arbitrary failure modes. The localization algorithm uses hyperbolic multilateration in combination with proximity-based grouping and final determination of the position by averaging, by selection of the smallest GDOP, or by applying a non-linear least-squares algorithm to the correct ToAs. The article includes a short description of the system and the algorithms, and a performance comparison to other localization algorithms based on real-world measurements.

In this article we present a localization algorithm for LPS LOSNUS in a six-Tx configuration which can tolerate a single failure in a ToA measurement resulting from arbitrary failure modes, delivers high locating accuracy and reduces biasing of the final location result. The localization algorithm uses TDoA multilateration in combination with proximity-based grouping and final determination of the position by NLS, by averaging, or by selecting the position with the smallest GDOP [3]. Grouping in combination with averaging or GDOP selection yields fast deterministic algorithms with low computational complexity compared to iterative minimization algorithms.

Keywords—3D localization; ToA; TDoA; LPS; GDOP

(Fig. 1: each transmitted frame consists of a chirp-coded distance measurement of 256 µs and a transmitter coding field of 384 µs.)

I. INTRODUCTION

The localization algorithm's performance mainly depends on the accuracy of the distance estimation and on the geometrical constellation. Inaccuracies in the distance estimates can be due to signal interference by multi-path propagation, or to obstacles in the direction of wave propagation resulting in diffraction, damping or blocking. Bad positioning of the transmitters (Txs) can result in a large geometric dilution of precision (GDOP), depending on the receiver position. Different localization algorithms exist for estimating the position of static or mobile devices. The classical non-linear least squares (NLS) estimates the coordinates by minimizing the sum of squared residuals; it is highly sensitive to erroneous measured distances, and even a single erroneous measurement will affect the estimated position [1]. Different methods which do not require a-priori information about non-line-of-sight (NLOS) conditions have been proposed to mitigate the NLOS problem: robust multilateration (RMult) [1] estimates the position by minimizing the sum of absolute values of the residuals, yielding better results than the classical NLS; least median of squares (LMedS) [2] estimates the position by selecting the solution with the smallest median of the squared residuals and also delivers better performance than the classical NLS.

Figure 1. (a) Sequence of signal transmission using well defined delays to ensure non-overlapping reception of frames. (b) Transmitted frames consisting of a constant linear freq. modulated chirp and a transmitter coding time slot.

II. BASIC PRINCIPLE OF LOSNUS

LOSNUS is a local positioning system (LPS) based on ultrasonic range measurements. It is mainly designed for locating numerous static devices with high accuracy [4], although moving objects can be located with reduced accuracy. The LOSNUS operating phase is based on Tx positions obtained by a calibration process described in [5]; the calibration remains valid as long as the Txs are not moved. Calibration uses ToF measurements and requires at least four receivers, six Txs and a known reference distance. The reference distance is used for scaling the output of the calibration algorithm and allows the calibration algorithm to work only with ToF ratios instead of absolute distances. In the operating phase of LOSNUS, ToA measurements are performed, enabling only TDoA algorithms for locating. The transmitters are fired sequentially with a given protocol (Fig. 1a). ToA measurements are obtained using binary cross-correlation with a known reference signal realized as a linear frequency-modulated chirp (Fig. 1b). Txs are identified by a fixed-frequency coding.

III. LOCALIZATION ALGORITHM


The basic TDoA equation is given in (1), where ti and tj are the ToA measurements for transmitters Txi and Txj, Rx is the unknown receiver position and c is the speed of sound. The value of c is best estimated by using a known distance from a permanently installed fixed receiver used during calibration. (1) Three such equations can be used for an analytical solution of the receiver position. In case minimization algorithms are used, the time-difference residuals are calculated as shown in (2). (2) The classical NLS minimizes the sum of the squares of all residuals, RMult minimizes the sum of the absolute values of the residuals, and LMedS estimates the position by selecting the solution with the smallest median of the squared residuals.
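The displayed equations (1) and (2) are images in this extraction. A sketch of the standard TDoA relation consistent with the surrounding text is: c(ti − tj) should equal the range difference ‖Txi − Rx‖ − ‖Txj − Rx‖, and the residual is their mismatch. The transmitter and receiver coordinates below are invented for the example.

```python
import math

# Hedged sketch of the TDoA relation and its residual (the paper's equations
# (1)-(2) are not reproduced here; this is the standard form).
C = 343.0   # nominal speed of sound [m/s]; LOSNUS estimates c at run time

def residual(rx, tx_i, tx_j, ti, tj):
    """Mismatch between the measured time difference and the range difference."""
    return C * (ti - tj) - (math.dist(tx_i, rx) - math.dist(tx_j, rx))

tx1, tx2 = (0.0, 0.0, 2.5), (4.0, 0.0, 2.5)
rx = (1.0, 1.0, 0.0)
t1 = math.dist(tx1, rx) / C   # ideal ToAs; any common clock offset cancels
t2 = math.dist(tx2, rx) / C
print(residual(rx, tx1, tx2, t1, t2))   # ~0 at the true position
```

NLS, RMult and LMedS differ only in how such residuals over all transmitter pairs are aggregated before minimization.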

TABLE I. COMPARISON OF THE ALGORITHMS
(Errors are statistics over 230 measurements; standard deviations are per position; all values in mm unless noted.)

No | Algorithm     | High computational complexity | A-priori information | Err Max | Err Avg | Std Max | Std Min | Std Avg
1  | Classical NLS | Yes                           | No                   | > 1 m   | 102.4   | 837.2   | 0.61    | 166.4
2  | RMult         | Yes                           | No                   | > 1 m   | 58.85   | 911.7   | 0.76    | 93.20
3  | LMedS         | Yes                           | No                   | 30.65   | 10.23   | 9.33    | 2.0     | 4.86
4  | Group(NLS)    | Yes                           | Yes                  | 13.08   | 7.81    | 1.76    | 0.61    | 1.18
5  | Group(Mean)   | No                            | Yes                  | 14.01   | 8.26    | 2.81    | 0.63    | 1.49
6  | Group(DOP)    | No                            | Yes                  | 14.88   | 7.97    | 2.45    | 0.65    | 1.31

IV. RESULT

A. Definition of algorithm error
Due to the calibration process, known reference positions are available which can be used for defining the error at each position. Let Pri, 1 ≤ i ≤ 23, be the reference positions along the reference belt. The error is calculated as (4)

B. Comparison with other methods
Fig. 2 shows the errors from repeated measurements at the positions of the belt. It can be seen that the LMedS and grouping algorithms are resilient to outliers. RMult is resilient for some outliers, but NLS is not; in the case of outliers, RMult performs better than NLS most of the time. Grouping performs best regarding both maximum and average errors; the average error is smallest for Group(NLS). If simpler algorithms such as the mean or selection on the GDOP are used, performance is still acceptable. Within the grouping variants, the standard deviation is highest for Group(Mean). Grouping always outperforms the NLS, RMult and LMedS algorithms. The comparison of the algorithms is summarized in Table 1.
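Equation (4) is not reproduced in this extraction; a minimal sketch, assuming the error at each reference position is the Euclidean distance between the estimate and the known reference point, is:

```python
import math

# Hedged sketch of the error definition (4): per-position Euclidean distance
# between estimate and reference (an assumption; the displayed equation is an
# image in the original). The positions below are synthetic.
def errors(estimates, references):
    return [math.dist(p, q) for p, q in zip(estimates, references)]

refs = [(0.1 * i, 0.0, 0.0) for i in range(5)]              # belt positions
ests = [(0.1 * i + 0.003, 0.004, 0.0) for i in range(5)]    # 5 mm off each
e = errors(ests, refs)
print(max(e), sum(e) / len(e))   # max and average error [m]
```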


A. Grouping Algorithms
In our test case, grouping first computes all 15 = C(6,4) possible solutions, where six is the number of Txs and four is the number of ToAs needed for the analytical solution. In the case of no outliers, all 15 positions will be close together, where the spread radius R equals 3 times the expected GDOP; the a-priori information required is the ToA standard uncertainty. In the case of a single error in the ToA measurements, only 5 = C(5,4) positions are close together, while the others are spread. Simulations and practical verifications have shown that in the TDoA case the other positions do not form groups with a cardinality larger than that of the correct group. The algorithm can be described as follows. Let {Loc1, ..., LocM}, M = C(6,4), be the set of all positions calculated by the TDoA algorithm. We define the respective groups as (3) From the set of groups, the largest group with a cardinality of at least 5 is selected. If such a group cannot be found, more than one ToA must have been incorrect. Group(Mean) calculates the position by taking the mean of the x/y/z coordinates of the positions within the best group. As each position in the group belongs to a specific set of ToAs, which in turn belong to specific Txs, the estimated GDOP can be calculated for each position; the element with the smallest GDOP can then be selected, which we call Group(DOP). The last method uses the elements in the best group to identify the correct ToAs, with which the NLS can be executed. In the case of outliers this significantly outperforms the classical NLS, as outliers are not used as input data for the algorithm.
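The grouping step above can be sketched as follows. Equation (3) is not reproduced in this extraction, so the grouping criterion (all candidates within radius R of a seed) and the value of R are assumptions; the candidate positions are synthetic, standing in for the 15 analytical TDoA solutions.

```python
import math

# Hedged sketch of Group(Mean): gather candidate positions into proximity
# groups, keep the largest group of cardinality >= 5, and average it.
# The grouping rule and R are assumptions reconstructed from the prose.
R = 0.05   # spread radius, 3x the expected GDOP-scaled uncertainty (assumed)

def group_and_average(candidates):
    best = []
    for seed in candidates:
        g = [p for p in candidates if math.dist(p, seed) <= R]
        if len(g) > len(best):
            best = g
    if len(best) < 5:
        return None            # more than one ToA must have been incorrect
    n = len(best)
    return tuple(sum(p[k] for p in best) / n for k in range(3))

good = [(1.0 + 0.001 * i, 2.0, 0.5) for i in range(5)]   # C(5,4) close solutions
bad = [(3.0 + i, float(i), 0.0) for i in range(10)]      # spread solutions
print(group_and_average(good + bad))
```

Group(DOP) would replace the final averaging by selecting the group member with the smallest estimated GDOP, and Group(NLS) would rerun NLS on the ToAs identified by the group.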

Figure 2. Error comparison of different methods for selected positions on the belt.

V. CONCLUSION

This article presented a localization algorithm for LPS LOSNUS in a six-transmitter configuration which is designed for high accuracy. The localization algorithm uses hyperbolic multilateration in combination with proximity-based grouping and provides the best solution, being able to tolerate a single failure in a ToA measurement resulting from different failure modes. REFERENCES



[1] Nawaz and N. Trigoni, "Robust localization in cluttered environments with NLOS propagation", in IEEE 7th MASS'10, 2010, pp. 166-175.
[2] R. Casas, et al., "Hidden Issues in Deploying an Indoor Location System", in IEEE Pervasive Computing, 2007, pp. 62-69.
[3] J. D. Bard and F. M. Ham, "Time difference of arrival dilution of precision and applications", IEEE Trans. on Signal Processing, vol. 47, 1999.
[4] H. Schweinzer and M. Syafrudin, "LOSNUS: An ultrasonic system enabling high accuracy and secure TDoA locating of numerous devices", in IPIN, 2010, pp. 1-8.
[5] C. Walter, et al., "A self-contained and self-checking LPS with high accuracy", ISPRS Int. J. Geo-Inf., 2013, vol. 2, pp. 908-934.


Infrastructure-less TDOF/AOA-based Indoor Positioning with Radio Waves

Canan Aydogdu

Kadir Atilla Toker

Ilkay Kozak

Electrical and Electronics Engineering Department Izmir Institute of Technology Izmir, Turkey [email protected]

Software Engineering Department Izmir University Izmir, Turkey [email protected]

Politeknik Ltd. Izmir, Turkey [email protected]

Abstract—Infrastructure-less indoor positioning is a necessity for mobile ad-hoc networks (MANETs), which are required to work in any indoor environment. A MANET formed by emergency first-responders in a damaged building where no infrastructure exists or the existing infrastructure is useless, a group of soldiers in indoor enemy territory, a group of adventurers in a cave, robots underground, distant reconnaissance vehicles on planets, or cubesats in space are example scenarios where infrastructure-less indoor positioning is inevitable. Although ultra-wideband (UWB), ultrasound and infrared indoor positioning techniques have been proposed, the communication range of the higher-precision techniques is low. In this study, we propose an infrastructure-less indoor positioning technique which is expected to work over an indoor range of about 150 m and is based on distance and direction-angle measurement. An experimental study is carried out with a pair of wireless devices equipped with field programmable gate arrays (FPGAs) and transparent ISM-band transceivers. The direction of the object to be located is determined by a rotational antenna. The technique developed is expected to achieve a positioning error of at most ±1.5 meters for distance measurement, while determining the angle of direction within acceptable values and achieving a 150-meter indoor range. The developed technique provides localization with high enough precision for most of the above-mentioned application scenarios and can be extended to a larger number of users/devices in a MANET to find a localization map of the network.

Keywords—indoor localization; infrastructure-less; time difference of flight (TDOF); mobile ad-hoc networks (MANET); field programmable gate arrays (FPGA)

I. INTRODUCTION

Mobile ad-hoc networks (MANETs) are a collection of mobile users/objects which form a wireless network in an ad-hoc manner, without the need for an infrastructure. The decentralized operation and self-healing property, together with mobility support, allow MANETs to play a significant role in future emergency/rescue operations, disaster-relief scenarios and military networks, where the position information

of mobile users/objects is critical. Infrastructure-less positioning is necessary for MANETs in indoor environments, and for MANETs in outdoor environments where the global positioning system (GPS) is useless or inefficient due to the precision provided. Owing to the life-critical missions undertaken by emergency, disaster-relief and military operations, the accuracy of the infrastructure-less positioning employed in MANETs is important; since no infrastructure is used, the accuracy of the distances measured among the mobile users of a MANET becomes critical. Time of arrival (ToA) and time difference of flight (TDOF) methods have proven to achieve better accuracy in distance measurement than received signal strength (RSS) measurements, which fluctuate inconsistently due to multipath effects. Hence, in this study, we use TDOF techniques to measure the distances among users, eliminating the need for time synchronization among mobile users. Moreover, emergency, disaster-relief and military operations generally take place in environments where obstacles among users are inherent. For example, firemen inside a building will have to locate each other in a non-line-of-sight environment where the walls and floors form various obstacles; similarly, soldiers inside a cave or rescue members inside a damaged building during an earthquake will have no line of sight. Hence, the right localization technique should use signals able to penetrate through obstacles as much as possible. Acoustic, infrared, optical or high-frequency radio waves are not appropriate for these applications; in this study, we use low-frequency radio waves. We propose an infrastructure-less indoor positioning technique which works by TDOF distance measurements with low-frequency radio waves and angle-of-arrival (AOA) measurements by a rotational directional antenna.
An experimental study is carried out with a pair of wireless devices equipped with FPGAs and transparent ISM-band transceivers for an indoor range of about 150 m. The advantage of asynchronous digital logic such as an FPGA is that it allows the control-signal computation to propagate without holding intermediate memory; the elimination of this computational delay minimizes the reference-to-output delay significantly. FPGAs are used for time difference of flight (TDOF) measurements in order to determine the distance between two units. The direction of the object to be located is determined by a rotational antenna. The technique developed is expected to achieve a positioning error of at most ±1.5 meters for distance measurement at the operating frequency, while determining the angle of direction within acceptable values and achieving a 150-meter indoor range. The developed technique provides localization with high enough precision for most of the above-mentioned application scenarios and can be extended to a larger number of users/devices in a MANET to find a localization map of the network.

II. LITERATURE REVIEW

The global positioning system (GPS) has penetrated many aspects of our daily lives and is widely used by applications for tracking vehicles, people and goods, as well as for navigational search. However, GPS is useless if at least four GPS satellites are not in line of sight, such as inside buildings, caves and underground; in outdoor environments such as urban cities with high buildings where the canyon effect is apparent; in underwater tunnels; or for cubesats in space. The various positioning systems proposed for indoor environments can be grouped under four categories: 1) positioning techniques based on deployment of WiFi/Bluetooth/UWB/Infrared/GSM, etc. [1-3, 7-10]; 2) positioning techniques making use of sensors such as accelerometers, gyroscopes, etc., in addition to the infrastructure-based positioning systems above [11, 12]; 3) infrastructure-less positioning systems which make use of the mentioned sensors only [13]; and 4) infrastructure-less radio-frequency-based positioning systems [14-18]. Positioning systems in the first and second groups depend on a specific infrastructure; a summary of various techniques and their range versus accuracy is shown in Figure 1. Infrastructure-based positioning systems are specific to the deployed place and are not applicable to many situations, such as emergency first-responders in a damaged building where no infrastructure exists or the existing infrastructure is useless. A group of soldiers in indoor enemy territory, a group of adventurers in a cave, robots underground, distant reconnaissance vehicles on planets, cubesats in space, etc., are other example scenarios where infrastructure-less indoor positioning is inevitable. Infrastructure-less positioning with sensors such as accelerometers and gyroscopes (the third group) has been experimentally shown to exhibit increasing positioning error with increasing distance travelled; for example, a 2-4 m positioning error occurs over a 49 m travelled distance in [13].

Figure 1. Range versus accuracy of infrastructure based positioning systems used today [3]

The fourth category, infrastructure-less positioning by radio waves, is the focus of this study [14-19]. Infrastructure-less indoor positioning, also referred to as ad-hoc mobile positioning, is illustrated in Figure 2. It is a necessity for MANETs, which are required to work in any indoor environment: emergency first-responders in a damaged building, soldiers in indoor enemy territory, adventurers in a cave, robots underground, reconnaissance vehicles on planets and cubesats in space are example scenarios. Research on infrastructure-less positioning with radio waves has focused on either theory or simulations [14-17], dealing with cooperative localization or efficient position-calculation methods at the medium-access-control and higher layers. An ultra-wideband (UWB) based system developed by Decawave provides 10-15 cm positioning accuracy [18] for tracking goods in a factory or tracking possessions of people with high accuracy. The main problems with UWB tracking are the high cost of the systems and the low communication range: providing a 1 GHz bandwidth requires UWB to be used at high carrier frequencies, which limits the range of communication to about 20 m and makes UWB inefficient for many applications, including emergency situations. An infrastructure-free positioning device by Lambda:4 [19] is introduced in [20]: a handheld device with a weight of 1.2 kg is capable of locating cigarette-pack-sized transmitters with an accuracy of 15 m over a range of 2-5 km. This device is developed and patented for emergency first-responders [21]. The main problem with this device is its use of the 2.4 GHz ISM band, where interference from Wi-Fi, Bluetooth and ZigBee may become a problem; moreover, the range and penetration of 2.4 GHz radio waves through several walls and floors is low compared to a lower-frequency radio wave.


The first target is achieved by TDOF distance measurements, whereas the second and third targets are achieved by using a low radio frequency, selected to be 868 MHz for the current experiments. The fourth target requires further future study considering settled regulations for emergency applications.

IV. METHOD

The scope of this study is the development of an infrastructure-free positioning technique as in Figure 2b, which achieves high enough precision, range and penetration through obstacles for emergency, disaster-relief and military applications. Each mobile user i in the MANET determines the distance dij to each of its neighbors j ∈ S, i ≠ j, where S is the set of users in the MANET. Each node determines the distance to its neighbors by TDOF measurements, obtained by sending a broadcast packet and receiving answers from its neighbors. Figure 3 illustrates the time measurements at different time instants. Due to the mobility of nodes in a MANET, distance measurements are repeated at different time instants s, obtaining dij(s).

Figure 2. Indoor positioning (inside a building, tunnel, cave or underground), outdoor positioning where GPS signals are jammed on purpose, and positioning in space require some positioning system other than GPS. Such a system may have two different structures: a) a deployed infrastructure-based positioning system; b) an infrastructure-free positioning system. The focus of this study is the development and experimentation of an infrastructure-free positioning system.

III.

METHOD

TARGETS FOR INFRASTRUCTURE-LESS POSITIONING

Despite the variety of positioning systems proposed so far, a positioning technology for mobile ad-hoc networks such as emergency first responders, military, cubesats, etc. still does not exist. The major targets to be achieved by an infrastructureless positioning system to be used for emergency, disaster relief and military applications are as follows: 1.

High enough precision: A localization accuracy of at least a few meters is required in order to decide on where a mobile user inside a building is.

2.

High range: Although range is dependent on density of mobile users in a MANET, a range of at least a few hundred meters is required for keeping the connectivity.

3.

High penetration through walls, floors and concrete: Mobile users should be localized despite obstacles among themselves.

4.

Low interference: A dedicated channel or interference mitigating techniques should be used in these life-critical applications.

A. Initialization All nodes of the MANET are switched on and exchange identification and address information. A pseudo-random timing sequence ∆tij is calculated at nodes i and j by a function, which is the same for all nodes and has the addresses of node i and node j as inputs. This pseudo-random timing sequence ∆tij has a finite size and repeats itself upon completion. It is used for adding a fixed delay to the TDOF measurements in order to mitigate the inconsistent delays introduced by transceiver hardware while switching between transmit and receive states. B. Distance measurement Each mobile user i, sends a broadcast packet including a preamble during which synchronization among two mobile units is achieved. The transmitting node i starts a counter at the end of the last bit sent. The correlation among the expected bit sequence and received bit sequence provides the exact timing of the end of the last bit at the receiving unit j. After delaying for ∆tij(m), at the mth reception, node j sends back to node i. Node i, records the time of the first bit received from node j and checks the identity and address of the packet received. If the this packet is the one that is sent, the time difference between sending and receiving the packet to node j, ∆Tij(m), is obtained. The distance between node i and j at the mth time instant, dij(m), is obtained by dij(m)= c{∆Tij(m) - ∆tij(m)}/2

(1)
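As an illustration, the round-trip ranging of equation (1), together with the address-seeded pseudo-random delay sequence of subsection A, can be sketched as follows. This is a hypothetical sketch, not the authors' implementation; the seeding function (SHA-256 of the address pair) and the delay range are our assumptions.

```python
import hashlib
import random

C = 299_792_458.0  # speed of light in m/s

def delay_sequence(addr_i: int, addr_j: int, length: int = 16) -> list:
    """Finite pseudo-random timing sequence Delta_t_ij, reproducible at both
    nodes because it is seeded only by the pair of node addresses."""
    seed = hashlib.sha256(f"{min(addr_i, addr_j)}-{max(addr_i, addr_j)}".encode()).digest()
    rng = random.Random(seed)
    return [rng.uniform(1e-6, 10e-6) for _ in range(length)]  # delays in seconds

def distance(delta_T: float, delta_t: float) -> float:
    """Equation (1): d_ij(m) = c * (Delta_T_ij(m) - Delta_t_ij(m)) / 2."""
    return C * (delta_T - delta_t) / 2.0

# A 100 m separation adds a round-trip time of 2*100/c on top of the agreed
# pseudo-random delay; both nodes regenerate the same finite sequence,
# cycling through it as m grows (hypothetical addresses below).
dt = delay_sequence(0xA1, 0xB2)[0]
assert delay_sequence(0xA1, 0xB2) == delay_sequence(0xB2, 0xA1)
measured = 2 * 100.0 / C + dt
print(round(distance(measured, dt), 3))  # ~100.0 m
```

Because the sequence is derived only from the two addresses, node j can apply the mth delay and node i can subtract the very same value, as equation (1) requires.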

In equation (1), c is the speed of light. Since multipath components are received later than the direct signal, dij(1) is taken as the actual distance between nodes i and j at time instant m.

C. Angle measurement technique

A directional antenna is rotated at each time instant m, and dij(m) measurements are taken from each 45° angle interval. In this way, the angular position of its neighbors is obtained for each node. Angular positions together with the distances to

978-1-4673-1954-6/12/$31.00 ©2013 IEEE


each neighbor provide a localization map for the MANET without using an infrastructure.

Figure 3. Method for distance measurement: mobile user i measures the distances to users j and k.

V. FUTURE WORK

Experiments are being carried out to mitigate interference at the radio frequency module in order to detect the exact timing of signal reception. The experiments, currently carried out with a couple of nodes, will be extended to a group of nodes in a MANET in the future.

REFERENCES

[1] Hui Liu, Houshang Darabi, Pat Banerjee, Jing Liu, "Survey of Wireless Indoor Positioning Techniques and Systems", IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol. 37, no. 6, November 2007.
[2] Y. Gu, A. Lo, I. G. Niemegeers, "A Survey of Indoor Positioning Systems for Wireless Personal Networks", IEEE Communications Surveys and Tutorials, vol. 11, no. 1, first quarter, 2009.
[3] R. Mautz, "Overview of current indoor positioning systems," Geodesy and Cartography, vol. 35, no. 1, pp. 18-22, DOI: 10.3846/1392-1541.2009.35.18-22.
[4] Alessandro Magnani, Kin K. Leung, "Self-Organized, Scalable GPS-Free Localization of Wireless Sensors", in Proceedings of WCNC 2007, pp. 3801-3806.
[5] Nan Yu, James M. Kohel, Larry Romans, and Lute Maleki, "Quantum Gravity Gradiometer Sensor for Earth Science Applications", Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.
[6] CXM543 Datasheet, Willow Technologies Ltd, http://www.willow.co.uk/CXM543_Datasheet.pdf
[7] S. Gezici, Z. Tian, G. Giannakis, H. Kobayashi, F. Molisch, H. Poor, and Z. Sahinoglu, "Localization via ultra-wideband radios: A look at positioning aspects for future sensor networks", IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 70-84, 2005.
[8] S. Holm, "Hybrid ultrasound-RFID indoor positioning: Combining the best of both worlds", in Proceedings of the IEEE Int. RFID Conf., 2009, pp. 155-162.
[9] D. Skournetou and E. Lohan, "Pulse shaping investigation for the applicability of future GNSS signals in indoor environments," in Proceedings of the 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN), September 2010.
[10] Ubisense, http://www.ubisense.net/en
[11] Anshul Rai, Krishna Kant Chintalapudi, Venkata N. Padmanabhan, Rijurekha Sen, "Zee: Zero-Effort Crowdsourcing for Indoor Localization", in Proceedings of MobiCom'12, August 22-26, 2012, Istanbul, Turkey.
[12] He Wang, Souvik Sen, Ahmed Elgohary, Moustafa Farid, Moustafa Youssef, Romit Roy Choudhury, "Unsupervised Indoor Localization", in Proceedings of MobiSys'12, June 25-29, 2012, Low Wood Bay, Lake District, UK.
[13] Guillaume Trehard, Sylvie Lamy-Perbal and Mehdi Boukallel, "Indoor Infrastructure-less Solution based on Sensor-Augmented Smartphone for Pedestrian Localisation," in Proceedings of the 2012 International Conference on Ubiquitous Positioning, Indoor Navigation and Location Based Service, 4-5 October 2012.
[14] Dmitri D. Perkins, Ramesh Tumati, Hongyi Wu, Ikhlas Ajbar, "Localization in Wireless Ad Hoc Networks", in Resource Management in Wireless Networking, Network Theory and Applications, vol. 16, 2005, pp. 507-542, ISBN: 978-0387-23807-4.
[15] Z. Merhi, M. Elgamel, R. Ayoubi, M. Bayoumi, "TALS: Trigonometry-Based Ad-Hoc Localization System for Wireless Sensor Networks", in Proceedings of the 7th International Wireless Communications and Mobile Computing Conference (IWCMC), pp. 59-64, 4-8 July 2011.
[16] Tolga Eren, "Cooperative localization in wireless ad hoc and sensor networks using hybrid distance and bearing (angle of arrival) measurements", EURASIP Journal on Wireless Communications and Networking, 2011:72, 2011.
[17] Davide Dardari, Chia-Chin Chong, Damien B. Jourdan, Lorenzo Mucchi, "Cooperative Localization in Wireless Ad Hoc and Sensor Networks", EURASIP Journal on Advances in Signal Processing, 2008:353289, 2008.
[18] Decawave, http://www.decawave.com/
[19] Lambda:4, http://www.lambda4.com/EN/
[20] Rönne Reimann, "Locating and distance measurement by high frequency radio waves", in Proceedings of the 2011 Indoor Positioning and Indoor Navigation Conference (IPIN 2011), short papers, posters and demos, Moreira, Adriano J. C.; Meneses, Filipe M. L. (eds.), Guimarães, Portugal.
[21] Rönne Reimann, "Method to determine the location of a receiver", Patent WO 2012/155990, Nov. 22, 2012.


Sound Based Indoor Localization – Practical Implementation Considerations

João Moutinho

Diamantino Freitas

Rui Esteves Araújo

INESC TEC (formerly INESC Porto), Faculty of Engineering, University of Porto Rua Dr. Roberto Frias 4200-465 Porto, Portugal

Faculty of Engineering, University of Porto Rua Dr. Roberto Frias 4200-465 Porto, Portugal

INESC TEC (formerly INESC Porto), Faculty of Engineering, University of Porto Rua Dr. Roberto Frias 4200-465 Porto, Portugal

Abstract—Among the several signal types used for state-of-the-art indoor personal localization, ultrasound, electromagnetic and light-supported signals stand out as the most popular. However, when considering a balance of characteristics, audible sound based localization stands out as an interesting possibility, since it allows the use of off-the-shelf, inexpensive components. It must be considered that no solution will turn into a real application if it is too difficult or costly to implement. On the fixed indoor-environment side, one can reasonably assume that many spaces already provide a public address sound system. On the moving person's side, one may expect that a sound receiver with a wireless transmitter with enough indoor signal coverage may be carried, possibly by means of a cell phone. Working with these two premises together to build a localization system is the objective of the current work. There are inherent problems in not using dedicated proprietary tools in this process. The issues at hand include: adapting a public address sound system to allow the simultaneous, separate excitation of loudspeakers so that ToF (time of flight), and therefore distance, may be estimated; simultaneous access by multiple users; hiding data in sound so that only a reasonably small disturbance of the acoustic environment occurs; and using a simple audio channel, like that of a common cell phone, as a localizable acoustic receiver. This paper focuses on theoretical and practical aspects of a possible real implementation of an audible sound based indoor localization scheme using a standard audio channel receiver. Experimental results using TDMA, FDMA and CDMA access schemes in a sound communication system show that it is possible to obtain an interesting localization accuracy of a few centimeters in non-ideal conditions (a reverberant room). Issues concerning the moving person's device (latency and limited directivity/frequency response) are addressed and possible solutions are proposed.

Keywords—TDMA; FDMA; CDMA; TDE; ToF; Sound-based; Indoor Localization

I. INTRODUCTION

One of the most popular research areas in ubiquitous or pervasive computing is the development of location-aware systems [1]. These are systems in which electronic devices provide users with information or services depending on their location. The fundamental component of a location-aware system is the location-sensing mechanism. In order to develop an inexpensive, easily deployable, widely compatible localization system, one must adapt the problem constraints to everyday technologies and deal with the consequences of not having dedicated equipment to perform the measurements and, consequently, achieve indoor localization. It is therefore this paper's objective to discuss some of the problems at hand while providing possible solutions. The presented results are also used to demonstrate the importance of issues such as the choice of the (audible) excitation signal and its directivity, the multiple-access technique for multi-user operation, and the importance of the time-of-flight (ToF) measurement in position determination.

II. RELATED WORK

Many technologies with different types of signal have already been studied to provide reliable, precise and accurate localization of persons or devices. The existing approaches have explored almost every type of signal: infrared, radio frequency, artificial vision, inertial sensors, ultrasound and, finally, audible sound. Even though each type has its own pros and cons, the audible sound approach is an emerging one with much still to be studied. It has been somewhat left behind due to its initial premise: it is audible, and it is therefore assumed that it will disturb the acoustic environment in an undesirable way. But using off-the-shelf, inexpensive or pre-existing components is tempting, and therefore some audible sound based techniques can be found in the literature. Most of them use sound as a natural consequence of their operation, just like airplanes that produce noise that can be used to track them [2]. A 3-D indoor positioning system (IPS) named Beep [3] was designed as a cheap positioning solution using audible sound technology. Beep uses a standard 3-D multilateration algorithm based on TOA measured by the Beep system sensors as a PDA or another device emits sound signals. Other possibilities rely on microphone arrays [4] to track a sound source position by angle-of-arrival (AOA) techniques. Another possible approach is a technique named "Acoustic Background Spectrum", where sound fingerprinting is employed to uniquely identify rooms or spaces in a passive way (no excitation signal), just from the noise "fingerprint" of that space [5]. Very recently, an acoustic indoor localization system employing CDMA [6] was developed. It uses off-the-shelf components and localizes a microphone within an indoor

space by using sound cues provided by loudspeakers. In this work, time-of-arrival measurements of acoustic signals, which are binary-phase-shift-keying modulated Gold code sequences using a direct-sequence (DS) spread-spectrum (SS) technique, are performed. Other approaches also use off-the-shelf devices, achieving sub-meter accuracy. They use tablets, smartphones and laptops to provide wireless data connection and interface [7][8]. Using DS code division multiple access (CDMA) with different coding techniques is common in several approaches and allows simultaneous, accurate distance measurements to be performed while providing some immunity to noise and interference.

III. DETERMINING THE POSITION

Localization is assured by measuring the distance vector between the anchors and the mobile device(s). One can assume that sounds are played from all loudspeakers starting at time t0, and that the sound from speaker i reaches the microphone at time ti. If c is the speed of sound, (x, y) the position of the mobile device in a two-dimensional version of the problem, and (Xi, Yi) the position of anchor i (loudspeaker i), the propagation delays (also called ToF) ti - t0 and the distances di between the anchors and the mobile device are described by

di = c(ti - t0) = sqrt((x - Xi)^2 + (y - Yi)^2)        (2)
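As a sketch of the nonlinear least-squares localization discussed in this section, a minimal Gauss-Newton solver for the model in equation (2) might look like the following. This is our illustration, not the authors' code; the anchor layout and the starting point are assumed.

```python
import math

def gauss_newton_2d(anchors, dists, x0=0.0, y0=0.0, iters=10):
    """Solve for (x, y) minimizing sum_i (||p - a_i|| - d_i)^2 by Gauss-Newton.
    anchors: list of (X_i, Y_i); dists: measured d_i in the same order."""
    x, y = x0, y0
    for _ in range(iters):
        JTJ = [[0.0, 0.0], [0.0, 0.0]]
        JTr = [0.0, 0.0]
        for (X, Y), d in zip(anchors, dists):
            r_i = math.hypot(x - X, y - Y) or 1e-12
            jx, jy = (x - X) / r_i, (y - Y) / r_i   # Jacobian row of ||p - a_i||
            r = r_i - d                             # residual
            JTJ[0][0] += jx * jx; JTJ[0][1] += jx * jy
            JTJ[1][0] += jy * jx; JTJ[1][1] += jy * jy
            JTr[0] += jx * r;     JTr[1] += jy * r
        # Solve the 2x2 normal equations JTJ * delta = JTr by Cramer's rule
        det = JTJ[0][0] * JTJ[1][1] - JTJ[0][1] * JTJ[1][0]
        if abs(det) < 1e-15:
            break
        dx = (JTJ[1][1] * JTr[0] - JTJ[0][1] * JTr[1]) / det
        dy = (JTJ[0][0] * JTr[1] - JTJ[1][0] * JTr[0]) / det
        x, y = x - dx, y - dy
    return x, y

# Four wall-mounted anchors and noise-free distances to the point (2.5, 3.0)
anchors = [(0.0, 0.0), (6.0, 0.0), (6.0, 7.0), (0.0, 7.0)]
dists = [math.hypot(2.5 - X, 3.0 - Y) for X, Y in anchors]
x, y = gauss_newton_2d(anchors, dists, x0=3.0, y0=3.5)
print(round(x, 3), round(y, 3))
```

With noisy distances the same iteration converges to the least-squares position rather than an exact intersection, which is the behavior the text relies on.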

The arrival times ti of the signals may be estimated using correlation methods, as explained in the following. The time instant t0 can be determined using a technique described ahead as "Circle Shrinking". The anchor positions (Xi, Yi) are considered to be known. Due to the presence of noise in the di estimates, the desired and unknown mobile device position (x, y) cannot be obtained simply by solving the system of equations. The location needs to be determined by a source localization algorithm that follows an error minimization approach. Using nonlinear least-squares estimation methods like Gauss-Newton, Newton-Raphson and steepest descent has provided sufficiently accurate results while maintaining low computational complexity. Their similar performance leads us to believe that each of these methods is suitable for this purpose, converging to the solution at almost the same iteration. However, a small advantage was found in the Gauss-Newton method due to its simplicity and faster processing.

IV. EXPERIMENTS AND RESULTS

The experiments were performed in a research laboratory room environment of size 7 m × 9 m × 3 m. Of this total area, only 6 m × 7 m was used, as depicted in figure 1. The room is occupied by a set of furniture, computers and persons, with reverberant plaster walls, two of which are outer walls with four large windows. The room was not adapted in any way for this experiment. Twenty-three "ground truth" points were marked on the floor as landmarks to allow error estimation. Four ordinary satellite computer loudspeakers, angularly distributed, were wall-mounted at ear level and used as anchors. The mobile device is represented by an omnidirectional condenser measurement microphone with a "flat" frequency response. It was used in these experiments as an ideal receiver so as to validate the other aspects independently.

Figure 1. Experimental setup. The corner square red dots represent the anchors (speakers), while the 23 ground truth points are the small yellow circles.

The sound emission and capture was performed at a 44.1 kHz sampling rate using an EASERA Gateway sound board from PreSonus, a low-latency/low-noise IEEE 1394 interface sound board with ASIO drivers. All processing was performed on a 1.6 GHz dual-core laptop PC with 2 GB RAM running Windows 8 with Matlab 2012b. In all experiments the air temperature was measured to correct the value of the speed of sound, which is necessary to estimate distance according to

c = 331.45 sqrt(1 + T/273.15)        (3)
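Equation (3), with T the air temperature in degrees Celsius, is straightforward to apply; a minimal sketch:

```python
import math

def speed_of_sound(temp_c: float) -> float:
    """Speed of sound in air (m/s) from temperature in Celsius, per eq. (3)."""
    return 331.45 * math.sqrt(1.0 + temp_c / 273.15)

# At 0 C the formula returns the reference value 331.45 m/s;
# at a typical room temperature of 20 C it gives roughly 343 m/s.
print(round(speed_of_sound(20.0), 1))  # ~343.4
```

A few degrees of temperature error translate into a fraction of a percent in every distance estimate, which is why the temperature was measured in all runs.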

The effects of humidity and wind on the speed of sound were considered negligible. Three experiments (A, B and C) were conducted to estimate the importance of several design decisions concerning methods and algorithms, so that the obtainable accuracy, precision and general performance are maximized: A) latency analysis of the EASERA Gateway sound board with different API tools; B) comparison of three correlation techniques to perform Time Delay Estimation (TDE): cross-correlation, generalized cross-correlation phase transform, and maximum likelihood; C) evaluation of the position estimation error and reliability with a sufficient SNR considering TDMA (with unit pulses), FDMA (with chirps) and CDMA (with coded PN sequences).

V. RESULTS AND DISCUSSION

The most compelling results of the conducted experiments are presented here to better illustrate some of the practical issues involved in an implementation of an IPS. A) Latency analysis was performed with the same sound board considering two scenarios:


- Using standard WDM drivers and Matlab’s DAQ; - Using ASIO drivers and PortAudio mex files.

As can be observed in figure 3, GCC-PHAT provided the best (sharpest) results, even in a rather low SNR scenario, thanks to its ability to avoid spreading of the peak of the correlation function. This was also verified with several levels of additive white noise: GCC-PHAT provided the best TDE results with no significant increase in computational complexity, especially compared with CC, which is easier to compute but performs worse at low SNR.

Figure 2. Latency analysis using two different sound board interfaces. On top, WDM drivers and Matlab’s DAQ. On the bottom, ASIO drivers and a mex file using PortAudio multichannel interface.

As can be seen in figure 2, there remains no doubt that the bottom ASIO interface has a fixed latency. Its value is higher (around 51 ms) due to the use of an external .mex file in Matlab and also due to the sound driver configuration, in which latency can be selected as a function of processor load. However, it is preferred over a variable, lower latency because its stable value may be subtracted, leaving no latency noise. Having a fixed latency is very useful, because one may subtract a fixed value from the delay and thus obtain ToF more easily and precisely. B) Time delay estimation is one of the key operations for correctly estimating distance from ToF. The "comparison" between the sent signal and the received one allows the delay, and therefore the distance, to be estimated. Among many possibilities, three correlation methods were tested due to their computational simplicity [9]: cross-correlation (CC), generalized cross-correlation phase transform (GCC-PHAT) and maximum likelihood (ML).
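A minimal GCC-PHAT delay estimator can be sketched with FFT-based correlation. This is a generic illustration, not the code used in these experiments; the signal lengths, delay and noise level are assumptions.

```python
import numpy as np

def gcc_phat(sig, ref):
    """Estimate the delay (in samples) of `sig` relative to `ref` using the
    generalized cross-correlation with phase transform (GCC-PHAT)."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-15               # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return int(np.argmax(np.abs(cc))) - max_shift

rng = np.random.default_rng(0)
ref = rng.standard_normal(4096)                      # wideband excitation
sig = np.concatenate((np.zeros(1000), ref))[:4096]   # 1000-sample delay
sig = sig + 0.2 * rng.standard_normal(4096)          # additive white noise
print(gcc_phat(sig, ref))                            # expected: 1000
```

Whitening the cross-spectrum is what sharpens the correlation peak that figure 3 illustrates; at 44.1 kHz, one sample of delay corresponds to under a centimeter of acoustic path.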

C) This experiment evaluates the use of three different methods to convey the excitation audible sound signal to a receiver so that TDE can be as accurate, precise and reliable as possible in real conditions (the noisy, reverberant space). Table I summarizes the results, comparing average error, reliability and minimum SNR for these three methods, considering a minimum SNR such that the reliability of the distance vectors does not fall below 50% at the worst measurement position. The results demonstrate that the CDMA method performed slightly better than the other two. Achieving a 1.3 cm average error at the center points may be considered in the range of the best state-of-the-art results. The chirp (bird-like) FDMA approach had difficulties in estimating some positions due to its relatively small bandwidth, forcing us to adapt the experiment so that the loudspeakers were redirected at some measurement points, especially close to walls and corners. In this situation the directivity factor is greater than one and the reverberation is interpreted as the direct signal. Significant overestimation of the distance vector was noticed when no direction adjustment was applied to the loudspeakers. The other wideband approaches, TDMA and CDMA, were not affected in the same way, but also showed significantly better results at center points. The directivity of the speakers or of the mobile microphone must be considered together with the frequency response of all the parts, as it affects the ability to perform TDE. Channel equalization may have to be considered to avoid TDE errors. The TDMA method has shown itself not to be robust in a noisy environment. Even though its reliability, by the experiment's criteria, is one of the greatest, the required minimum SNR is considerably larger than for the others. The pulse detection technique used was based on maximum detection, and therefore a simple impulsive masking noise is enough to make the TDE fail and, consequently, everything else. One of the most meaningful observations relates to the minimum SNR that each method requires. As previously mentioned, the TDMA pulse method is very demanding, only performing well (with a reliability criterion of 30 cm error in the distance vectors) above 24.7 dB SNR. On the other hand, CDMA performed very well with its 7.2 dB minimum SNR, a value at which the sound used was found to be almost undetectable, complying with the objective of operating without being acoustically annoying.

Figure 3. Comparison of correlation methods at 12.83 dB SNR for a 1000-sample delay. Cross-correlation on top, generalized cross-correlation (PHAT) in the middle and maximum likelihood on the bottom.


TABLE I. COMPARISON BETWEEN TDMA, FDMA AND CDMA
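For illustration, the three kinds of excitation signals compared in Table I (unit pulse for TDMA, chirp for FDMA, coded PN for CDMA) can be generated as follows. The band limits, lengths and the random binary code are our assumptions; the cited CDMA work [6] uses BPSK-modulated Gold codes rather than this illustrative random sequence.

```python
import numpy as np

fs = 44100          # sampling rate (Hz), matching the experiments
n = 4096            # excitation length in samples
t = np.arange(n) / fs

# TDMA: a single unit pulse per speaker time slot
pulse = np.zeros(n)
pulse[0] = 1.0

# FDMA: a linear ("bird-like") chirp confined to an assumed 2-6 kHz band
f0, f1 = 2000.0, 6000.0
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t**2))

# CDMA: a BPSK-style pseudo-noise sequence of +/-1 chips (illustrative only)
rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=n)

print(float(pulse.sum()), len(chirp), sorted(set(np.unique(pn))))
```

The pulse is wideband but fragile to impulsive noise, the chirp occupies only its sweep band, and the PN code spreads energy across the band, which is consistent with the minimum-SNR ordering reported above.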

VI. PRACTICAL IMPLEMENTATION CONSIDERATIONS

The latency problem (finding t0 in equation 2) becomes critical when one wants to measure ToF. Whatever the architecture, it is expected to take some time to emit, receive and process the excitation signal. Since ToF is used to calculate distance with TDE, latency will cause an overestimation if it is not subtracted from the total time. Using a low-latency sound board is not a sufficient condition for a reliable ToF measurement: even if the latency is low, a variable latency is far more harmful to a distance measurement, as it cannot be subtracted as a previously known fixed amount. Some previous work, for instance [6], used a dedicated microphone in a known position to calculate the delay at every iteration. It is a simple possible solution, but it requires additional hardware. The conducted experiment has shown a fixed-latency option (within the same run) that avoids the use of a calibration microphone. It is nevertheless prudent to take into account a technique we call Circle Shrinking, which prevents latency from affecting ToF measurements in TDE. Considering latency to be constant in the small time window of a run (most of the time a viable assumption), and noting that latency overestimates distance, one can think of the TDE-derived distances as circles, centered at the anchor positions, that need to be shrunk by the latency amount so as to minimize the intersection area between the circles, as shown in figure 4. Latency can therefore be eliminated even if it varies between runs. However, it can be computationally demanding to calculate this intersection area and minimize it. One must take the application's precision and accuracy requirements into account to evaluate what is reasonable. Sometimes a small estimation error in the distance vectors may be acceptable, and the source localization algorithm may deal with it very well. For example, a one-sample error in TDE at 44.1 kHz represents less than a centimeter of error in a distance vector from an anchor, and less in the final position. Time delay estimation determines ti in equation 2 and is another key aspect of determining the distance vectors. The correlation method used, and its performance in terms of delay detection and computational complexity, may determine the success of using TDE to estimate distance. A poor delay measurement will result in an even poorer distance estimate, depending on the sampling frequency and the other parameters.
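The paper removes the common latency offset by shrinking the circles until their intersection area is minimized. An equivalent, computationally cheaper view (our sketch, not the authors' algorithm) treats the latency-induced overestimation as a common range bias b and estimates (x, y, b) jointly by nonlinear least squares; the anchor layout and bias value below are assumptions.

```python
import math

def gauss_solve(M, v):
    """Tiny Gauss-Jordan elimination for the 3x3 normal equations."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * q for a, q in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_with_bias(anchors, ranges, iters=25):
    """Jointly estimate position (x, y) and a common range bias b from
    latency-inflated ranges r_i = ||p - a_i|| + b (Gauss-Newton sketch)."""
    x = sum(a[0] for a in anchors) / len(anchors)   # start at the centroid
    y = sum(a[1] for a in anchors) / len(anchors)
    b = 0.0
    for _ in range(iters):
        A, rhs = [], []
        for (X, Y), r in zip(anchors, ranges):
            d = math.hypot(x - X, y - Y) or 1e-12
            A.append([(x - X) / d, (y - Y) / d, 1.0])  # Jacobian of d + b
            rhs.append(d + b - r)                       # residual
        M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(3)]
             for i in range(3)]
        v = [sum(A[k][i] * rhs[k] for k in range(len(A))) for i in range(3)]
        delta = gauss_solve(M, v)
        x, y, b = x - delta[0], y - delta[1], b - delta[2]
    return x, y, b

anchors = [(0.0, 0.0), (6.0, 0.0), (6.0, 7.0), (0.0, 7.0)]
true_p, bias = (2.0, 4.0), 0.6   # 0.6 m of common latency inflation
ranges = [math.hypot(true_p[0] - X, true_p[1] - Y) + bias for X, Y in anchors]
x, y, b = solve_with_bias(anchors, ranges)
print(round(x, 3), round(y, 3), round(b, 3))
```

With four or more anchors the common bias is observable (much like the receiver clock bias in GNSS), so the latency need not be known in advance within a run.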

Figure 4. Illustration of 25% circle shrinking. The overestimated distance vectors from each anchor are iteratively reduced to minimize the solution space.

A sharper peak provides a better TDE estimate. The ML technique provides a sharper peak by having its weighting function attenuate the signals in the spectral region where the SNR is lowest. However, the GCC-PHAT method has proven to provide better delay detection in white-noise-like environments, confirming results in the literature [8][10].

VII. CONCLUSIONS AND FUTURE WORK

It has been shown that audible sound is a viable signal for estimating position indoors. The results of the performed experiments demonstrate the best performance when using CDMA, achieving accurate and precise positioning with the lowest SNR. Using CDMA also fulfills the objective of minimizing any disturbance caused in the acoustic environment. Among the three correlation techniques used for TDE, GCC-PHAT has proven to be the most effective in real noise situations. In the near future, the work will focus on the moving person's device and its limitations in reception and transmission. The Doppler effect will also be evaluated when considering a moving device. Efforts will also be directed at improving perceptual masking to minimize even further any sound disturbance in the acoustic environment.

ACKNOWLEDGMENT

This work was financed by FCT (Fundação para a Ciência e Tecnologia) through the associated PhD grant SFRH/BD/79048/2011, by FEDER through "Programa Operacional Factores de Competitividade – COMPETE", and by national funding through FCT in project FCOMP-01-0124-FEDER-13852.

REFERENCES

[1]

Ferraro, Richard, and Murat Aktihanoglu, "Location-Aware Applications", 2011.
[2] Blumrich, Reinhard, and Altmann, Jürgen, "Medium-range localization of aircraft via triangulation", Applied Acoustics, vol. 61, no. 1, 2000, pp. 65-82.
[3] A. Mandal, C. V. Lopes, T. Givargis, A. Haghighat, R. Jurdak, and P. Baldi, "Beep: 3D Indoor Positioning Using Audible Sound", in Proceedings of IEEE CCNC, Las Vegas, 2005.
[4] Atmoko, H., Tan, D. C., Tian, G. Y., and Fazenda, B., "Accurate sound source localization in a reverberant environment using multiple acoustic sensors", Measurement Science and Technology, vol. 19, 024003 (10 pp), 2008.
[5] Stephen P. Tarzia, Peter A. Dinda, Robert P. Dick, and Gokhan Memik, "Indoor localization without infrastructure using the acoustic background spectrum", in Proceedings of MobiSys '11, ACM, New York, USA, 2011, pp. 155-168.
[6] Sertatıl, Cem, Mustafa A. Altınkaya, and Kosai Raoof, "A novel acoustic indoor localization system employing CDMA", Digital Signal Processing, vol. 22, 2012, pp. 506-517.
[7] C. V. Lopes, A. Haghighat, A. Mandal, T. Givargis and P. Baldi, "Localization of Off-the-Shelf Mobile Devices Using Audible Sound: Architectures, Protocols and Performance Assessment", ACM SIGMOBILE Mobile Computing and Communications Review, vol. 10, no. 2, 2006.
[8] Rishabh, Ish, Don Kimber, and John Adcock, "Indoor localization using controlled ambient sounds", in Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), IEEE, 2012.
[9] Zekavat, Reza, and R. Michael Buehrer, "Handbook of Position Location: Theory, Practice and Advances", vol. 27, Wiley, 2011.
[10] Khaddour, Hasan, "A comparison of algorithms of sound source localization based on time delay estimation", Elektrorevue, vol. 2, no. 1, April 2011.


- chapter 6 -

Mapping, Simultaneous Localization And Mapping (SLAM)


Proposed Methodology for Labeling Topological Maps to Represent Rich Semantic Information for Vision Impaired Navigation

J.A.D.C. Anuradha Jayakody
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Iain Murray
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Abstract—Navigation in indoor environments is highly challenging for the strictly vision impaired, particularly in unknown environments visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve an important deployment effort or use objects that are not natural for vision impaired individuals. It is very helpful if a map contains semantic information about the location. This paper presents a methodology for adding meaningful tags/labels to a typical indoor topological map. The authors pay special attention to the semantic labels of the different types of indoor places and propose a simple way to include the tags when building the topological map.

Keywords—Assistive Technology; Vision Impairment; Indoor Place Classification; Semantic Labeling; Indoor Map

I. INTRODUCTION

Blindness affects approximately 45 million people worldwide. Because of rapid population growth, this number is expected to double by the year 2020 [1]. As with the sighted population, blind and vision impaired people want to be informed about persons and objects in their environment, and object features may be of importance when navigating a path to a given destination. They would wish for exact information about appropriate paths, dangers, distances and critical situations. Visitors to certain buildings, like supermarkets and shopping complexes, usually navigate through the building using a floor plan obtained at the entrance, or by following the signs on the walls. In other words, it is a rather primitive way of navigating. When a building gets more complex, this type of navigation tends to fail, because it is hard for the visitor to find his way. In the case of vision impaired people, it is an almost impossible task.

II. RELATED WORK

Topological maps have been quite popular in the robotics field [2]. They are believed to be cognitively more adequate, since they can be stored more compactly than geometric maps and can also be communicated more easily to the users of a mobile robot. Many researchers have considered the issues of building topological maps of the environment from data gathered with a mobile robot [2, 4]. However, few techniques exist that permit semantic information to be added to these maps [3].

III. TOPOLOGICAL MAP WITH SEMANTIC LABELING

This section provides a simple classification to identify basic classifications of indoor classes that can be used for identify or discover the topology of the indoor environment. The main two classes are as mentioned below. 

Places



Transition

The “Places” are the nodes of the model and “transitions” correspond to the edges between nodes. The class, “Places” includes subclass like corridor, Room 1, Room2…, Room n, Office environment details and etc. The class “Transitions” incudes subclasses like Door, Stairs, Elevators, Escalators, etc. Both sub classes provide the model with augmented semantic information, but of particular interest it is the analytical fact that to be considered the type of transition. It is important to label these specific subclasses in a topological map to assist vision impaired individuals within an indoor environment. IV.

IV. PROPOSED SEMANTIC LABELING FRAMEWORK

The proposed labeling framework for the constructed semantic map is composed of five main modules, as shown in Fig. 1.

V. AUTOMATED LABELING ALGORITHM

An algorithm reads the sensor input coming through image processing and applies a set of predefined rules to identify the specific labels in the incoming image of the spatial environment. The newly proposed algorithm (Fig. 2) works with two types of labels, namely specific transition and place labels, and generic transition and place labels. Generic transition and place labels are ordinary labels such as staircase, lift, washroom, office area, doors, walls, etc. In the novel architecture these generic transition and place labels are kept in the local database (DB) of the smart phone.
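A minimal sketch of this lookup step (hypothetical names and database content; the paper does not give an implementation): recognized text or object classes coming from the image-processing stage are first matched against the generic labels held in the phone's local DB, and anything that does not match is treated as a candidate specific label for the given environment:

```python
# Generic transition/place labels kept in the smart phone's local DB
# (hypothetical content, following the examples in the text).
GENERIC_LABELS = {
    "staircase": "transition",
    "lift": "transition",
    "door": "transition",
    "washroom": "place",
    "office area": "place",
    "wall": "place",
}

def classify_detections(detections, generic_db=GENERIC_LABELS):
    """Split incoming detections into generic labels and environment-specific candidates."""
    generic, specific = [], []
    for text in detections:
        key = text.strip().lower()
        if key in generic_db:
            generic.append((text, generic_db[key]))  # known class: place or transition
        else:
            specific.append(text)                    # e.g. "Manager's room"
    return generic, specific

generic, specific = classify_detections(["Lift", "Manager's room", "Door"])
```

The `specific` list would then be resolved against the environment's own label set, matching the split between generic and specific labels described above.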

978-1-4673-1954-6/12/$31.00 ©2012 IEEE


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

Specific transition and place labels are unique to the given spatial environment, for example an organization's Manager's room label, Assistant Manager's room label and other official labels. After identifying the significant labels, the generated map is updated according to the results. (Fig. 2 inputs: image and gait analysis data from the smart phone sensors, fed in through image processing. Note: gait is the pattern of movement of humans.)




The framework modules (Fig. 1) are the following.

Raw & Spatial Data Acquisition: segments the incoming sensor data into two categories, namely indoor-environment-specific data and landmark data based on data semantics.

Store, Information Detection & Extraction: the main component in the process of navigation is the map. This layer puts the accent on the digital form of the map information and on the principal producers and users of the map database.

Topological Map Building: the topological map divides the set of nodes in the navigational graph into different areas. An area consists of a set of interconnected nodes with the same place classification. The nodes represent recognizable indoor-specific locations and landmarks; the edges represent clear paths from one node to another, usually doors and corridors.

Semantic Labeling: tracks places of interest which are important to integrate into the created map, e.g. doors, office names, elevators with their reachable doors, and staircases with the corresponding number of steps.

Figure 1. Semantic Labeling Framework

Figure 2. Algorithm for Semantic Labeling

VI. CONCLUSION

This work presents a novel approach to the automatic insertion of semantic labels into the map constructed for an indoor environment, which can assist vision impaired individuals by providing rich information. In future work, the authors will focus on the implementation of models using the proposed architecture and test them in real-world environments.

ACKNOWLEDGMENT

This work has been supported by Curtin University, Perth, Western Australia and the Sri Lanka Institute of Information Technology, Malabe, Sri Lanka.

REFERENCES
[1] J.A.D.C.A. Jayakody, N. Abhayasinghe, I. Murray, "AccessBIM Model for Environmental Characteristics for Vision Impaired Indoor Navigation and Way Finding," in International Conference on Indoor Positioning and Indoor Navigation, November 2012. [Online]. Available: http://www.surveying.unsw.edu.au/ipin2012/proceedings/submissions/98_Paper.pdf [Mar. 5, 2013].
[2] S. Thrun, A. Bücken, "Integrating grid-based and topological maps for mobile robot navigation," in Proc. of the National Conference on Artificial Intelligence, 1996, pp. 944-950.
[3] J. Santos-Victor, R. Vassallo, H. Schneebeli, "Topological maps for visual navigation," in International Conference on Computer Vision Systems, 1999, pp. 21-36.
[4] A. Nüchter, J. Hertzberg, "Towards semantic maps for mobile robots," Robotics and Autonomous Systems 56 (11) (2008) 915-926.
[5] A. Tapus, R. Siegwart, "Incremental robot mapping with fingerprints of places," in Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2005, pp. 2429-2434.
[6] S. Vasuvedan, S. Gächter, V. Nguyen, R. Siegwart, "Cognitive maps for mobile robots - an object based approach," Robotics and Autonomous Systems 55 (5) (2007) 359-371.

- chapter 7 -

Robotics & Control Systems


Improvements and Evaluation of the Indoor Laser Localization System GaLocate

Jan Kokert, Florian Wolling, Fabian Höflinger and Leonhard M. Reindl
Department of Microsystems Engineering - IMTEK, University of Freiburg, Germany
E-mail: {kokert, wollingf, hoeflinger, reindl}@imtek.uni-freiburg.de

Abstract—GaLocate (localization based on galvanometer laser scanning), previously reported at IPIN 2012, is a promising solution for intersection observation in production sites. Vehicles are equipped with a small retro-reflective tag which is detectable by a laser scanner mounted on the ceiling. The location is based on the two angles of laser beam deflection with respect to the scanner. In this paper we present the latest hardware and software improvements. New experimental results including the overall scanning performance and the repetition accuracy are discussed.

Keywords—Internal logistics, multi-target localization, laser scanning, pattern recognition, embedded systems.

I. INTRODUCTION

Automated guided vehicles (AGVs) are a part of many modern production and assembly lines [1]. To allow autonomous navigation, a reliable localization of the vehicles is mandatory [2]. The traditional way to guide AGVs is to use inductive wires buried in the floor, but this solution is very inflexible. State-of-the-art transport robots are equipped with laser line scanners (LIDAR) due to safety issues. These scanners can also be used to navigate by means of SLAM algorithms (simultaneous localization and mapping). This approach may fail in highly dynamic areas like intersections, where staff or other transport vehicles may cross [3]. Intended to observe the traffic in these dynamic areas, our system GaLocate provides absolute position data. The system has an inherent line-of-sight condition, which can be addressed by sensor fusion using data from odometry or gyroscopes.

II. WORKING PRINCIPLE

Our localization system GaLocate consists of a laser scanner mounted on the ceiling and several receivers (mobile tags) which are mounted on the AGVs, shown in Fig. 1. The mobile tags are detectable by the laser scanner due to a retro-reflector. In the beginning the scanner performs a coarse scan pattern to search for mobile tags. If the scanner receives a reflection from a tag, a fine scan is done within this area.

The laser beam is deflected successively by two mirrors which are tilted by galvanometer actuators [4]. If all mobile tags are in the x-y plane, their position (x_m, y_m, 0) is determined by the two angles φ and θ with respect to the scanner position (x_s, y_s, z_s) according to

    x_m = x_s + z_s · tan φ · (1 / cos θ)    (1)
    y_m = y_s + z_s · tan θ .                (2)
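Equations (1) and (2) can be sketched in a few lines (a numeric illustration under the paper's stated geometry, not the authors' firmware):

```python
import math

def tag_position(phi, theta, scanner=(0.0, 0.0, 0.88)):
    """Position (x_m, y_m) of a tag in the x-y plane from the two deflection
    angles phi and theta (radians) and the scanner position (x_s, y_s, z_s),
    following eqs. (1) and (2). z_s = 0.88 m matches the setup in Sec. IV."""
    xs, ys, zs = scanner
    xm = xs + zs * math.tan(phi) / math.cos(theta)  # eq. (1)
    ym = ys + zs * math.tan(theta)                  # eq. (2)
    return xm, ym

# A beam deflected 45 degrees in phi only lands z_s away from the scanner in x.
x, y = tag_position(math.radians(45), 0.0)
```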

The data transfer of the investigated positions from the scanner to the mobile tags is done by an omnidirectional infrared communication. The data assignment is realized by a scan-detecting photodiode in the center of the reflector [5].

III. IMPROVEMENTS OF GALOCATE

The scanner components are mounted on an aluminum plate, with the optics facing down as shown in Fig. 2. A new system control comprising an FPGA and a PC was realized. The FPGA controls the time-critical hardware components like the galvanometer movements; hardware parameters are addressable via a UART command parser. The PC runs a control software performing the scan-cycle and pattern-recognition algorithms and visualizes the measurement data in a GUI. The software is

Figure 1: Angle definition in our realized concept. The position of the mobile tag (green) is determined by the angles φ and θ with respect to the position of the scanner (yellow).

Figure 2: The GaLocate laser scanner hardware: galvanometer optics, PID control, power supplies, FPGA board, UART cable and IR transmitter (IR-Tx).


Figure 3a: Scanning performance f_tot (measured for n_lines = 20, 40, 60, together with the model f_model) and repetition accuracies σ_x, σ_y with respect to the communication baud rate f_baud (19.2-256.0 kBd).

programmed in C++ (Qt) and uses the QextSerialPort library [6]. The data transfer is organized in events (3 bytes) like "reflection begin/end detected" or "scanning new line now" and commands (4 bytes) like "start fine scan at x, y with a size of w × h now". With the new software it is for the first time possible to perform a tracked scanning: a coarse scan followed by successive fine scans. The center of each next fine scan is calculated from the data of the last one using two different algorithms: averaging and circle fitting. The averaging algorithm takes the arithmetic average of the x and y values separately, including all outliers. The circle fitting algorithm calculates a circle for every combination of three measured points and weights all circle center points afterwards [7]. The intended IR-trigger signal from [5] to achieve a resolution in the sub-millimeter region was abandoned: the delays of scanning sequentially in the x and y directions are too high, and the scanning resolution is still sufficient.
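The core of the circle-fitting step, the circumcenter of a point triple, can be sketched as follows (an illustrative solve via perpendicular bisectors; the weighting across all triples per [7] is omitted):

```python
def circle_center(p1, p2, p3):
    """Center of the circle through three 2D points (perpendicular-bisector solve)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linear system: 2(p2-p1)·c = |p2|^2-|p1|^2 ; 2(p3-p1)·c = |p3|^2-|p1|^2
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = x2**2 - x1**2 + y2**2 - y1**2
    b2 = x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the three points are collinear
    cx = (b1 * a22 - b2 * a12) / det
    cy = (a11 * b2 - a21 * b1) / det
    return cx, cy

# Three points on the unit circle around (5, -2):
cx, cy = circle_center((6, -2), (5, -1), (4, -2))
```

Running this over every combination of three edge points of the reflector and combining the resulting centers gives the fine-scan center estimate described above.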

Figure 3b: Performance and repetition accuracy with respect to the number of lines n_lines in one fine scan, at f_baud = 256 kBd.

IV. EXPERIMENTAL RESULTS

In [5] we calculated the theoretical scanner performance. Besides the galvanometer scanning time t_scan, the model is now extended to cover communication and calculation delays (t_com and t_calc). The overall performance f_tot (successive fine scans per time) for m mobile tags can then be calculated by:

    f_tot  = 1 / (m · (t_scan + t_com + t_calc))                 (3)
    t_com  = (4 byte · n_cmd + 3 byte · n_evt) / (f_baud / 10)   (4)
    t_scan = (n_div + 1) / f_fpga · (n_lines + 2) · w            (5)

In (4), f_baud is the baud rate, n_cmd and n_evt are the numbers of commands and events to be sent, and the factor 10 accounts for 8 data bits plus start and stop bit. In (5), f_fpga is 100 MHz and n_div is the clock divider value. The scan covers a square area with an edge length of w in digits and follows a meander pattern which consists of n_lines horizontal lines.

The edge length w is dynamically adjusted to be n_edge = 3 times the reflector diameter a_ref = 21 mm. With an angular resolution of φ_step = 15.2 µrad and a distance to the scanner of z_s = 880 mm, the edge length can be calculated by (6). The number of received events n_evt can be estimated by (7), where the first term counts the "new row" events and the second the rise and fall of the reflection. In the experiments we choose m = 1, n_div = 7 and assume that n_cmd = 3 and t_calc = 5 ms.

    w = n_edge · a_ref / (z_s · φ_step) = 4710                   (6)
    n_evt = n_lines + (n_lines / n_edge) · 2                     (7)

In the experiments we tracked a non-moving retro-reflector using the circle fitting algorithm. Figure 3a shows the performance and accuracy with respect to the communication speed f_baud for three different values n_lines = {20, 40, 60}. The total speed f_tot increases significantly with increasing baud rate, whereas the accuracies σ_x and σ_y appear to be independent of it. Figure 3b shows the performance and accuracy with respect to the lines per fine scan n_lines. With increasing n_lines, both the total speed f_tot and the standard deviations σ_x and σ_y decrease.

V. CONCLUSION AND FURTHER RESEARCH

Since [5], all scanner components were assembled into a working prototype. A new communication software was written to control the scanner and to visualize the data. Furthermore, 2D tracked scanning was realized with high performance and accuracy, confirmed by experiments.

It was shown that the UART communication between PC and scanner is a dominant delay. To improve this, the software control should be embedded in the scanner. The calculation of successive fine scan positions can be improved by adding a kinematic model and filters (Kalman), and thus increase the maximum object velocity during tracking. To tolerate line-of-sight interruptions, an IMU (inertial measurement unit) can be integrated in the mobile tag to extrapolate the positions.

ACKNOWLEDGMENT

This work has been supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) within the Research Training Group 1103 (Embedded Microsystems).

REFERENCES
[1] T. Albrecht, "Cellular intralogistics: ATV swarms replace traditional conveyor technology in the internet of things," Fraunhofer IML Annual Report, pp. 51-52, 2010.
[2] R. Askin and J. Goldberg, Design and Analysis of Lean Production Systems. John Wiley & Sons, Inc., 2002.
[3] J. Levinson and S. Thrun, "Robust vehicle localization in urban environments using probabilistic maps," in Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE, 2010, pp. 4372-4378.
[4] Cambridge Technology, 6215H Optical Scanner Mechanical and Electrical Specifications, March 2007.
[5] J. Kokert, F. Höflinger, and L. M. Reindl, "Indoor localization system based on galvanometer-laser-scanning for numerous mobile tags (GaLocate)," in Indoor Positioning and Indoor Navigation (IPIN), 2012 International Conference on. IEEE, 2012, pp. 1-7.
[6] B. Fosdick. (2013) A cross-platform serial port class. [Online]. Available: http://sourceforge.net/projects/qextserialport/
[7] L. Maisonobe, "Finding the circle that best fits a set of points," white paper, October 2007.
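Plugging the stated experimental values into the performance model of eqs. (3)-(7) reproduces the order of magnitude seen in Fig. 3 (a numeric sketch; the 10-bits-per-byte UART framing follows eq. (4)):

```python
# Parameters as stated in Sec. IV: m = 1, n_div = 7, n_cmd = 3, t_calc = 5 ms,
# f_fpga = 100 MHz, w = 4710 digits, n_edge = 3, and here f_baud = 256 kBd.
m, n_div, n_cmd, t_calc = 1, 7, 3, 5e-3
f_fpga, w, n_edge, f_baud = 100e6, 4710, 3, 256_000
n_lines = 40

n_evt = n_lines + n_lines / n_edge * 2                 # eq. (7): new-row + rise/fall events
t_com = (4 * n_cmd + 3 * n_evt) / (f_baud / 10)        # eq. (4): 10 bits per byte on the UART
t_scan = (n_div + 1) / f_fpga * (n_lines + 2) * w      # eq. (5)
f_tot = 1 / (m * (t_scan + t_com + t_calc))            # eq. (3), in Hz
```

With these numbers t_com ≈ 8.3 ms and f_tot lands in the tens of hertz; at 19.2 kBd the same formula gives t_com ≈ 0.11 s, consistent with the observation that the UART communication dominates at low baud rates.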


Observability Properties of Mirror-Based IMU-Camera Calibration

Ghazaleh Panahandeh, Peter Händel, and Magnus Jansson
KTH Royal Institute of Technology, ACCESS Linnaeus Center, Stockholm, Sweden
Email: {ghpa, ph, janssonm}@kth.se

Abstract—In this paper, we study the observability properties of the visual inertial calibration parameters for the system proposed in [1]. In this system, calibration is performed using measurements collected from a visual inertial rig in front of a planar mirror. To construct the visual observations, a set of key features attached to the visual inertial rig is selected, where the 3D positions of the key features are unknown. During calibration, the system navigates in front of the planar mirror while the vision sensor observes the reflections of the key features in the mirror, and the inertial sensor measures the system's linear accelerations and rotational velocities over time. The observability properties of this time-varying nonlinear system are derived based on the Lie derivative rank condition test. We show that the calibration parameters and the 3D positions of the key features are observable for the proposed model. Hence, our proposed method can conveniently be used in low-cost consumer products such as visual inertial based applications in smartphones, including localization, 3D reconstruction, and surveillance applications.

Index Terms—IMU-Camera calibration, motion estimation, VINS, computer vision.

I. INTRODUCTION

Recently, there has been a growing interest in the development of visual inertial navigation systems. Of particular interest is the use of lightweight and cheap motion capture sensors, such as an inertial measurement unit (IMU), together with an optical sensor such as a monocular camera. However, accurate information fusion between the sensors requires sensor-to-sensor calibration, that is, estimating the 6-DoF transformation (the relative rotation and translation) between the visual and inertial coordinate frames; disregarding such a transformation will introduce un-modeled biases in the system that may grow over time. The current IMU-camera calibration techniques are typically implemented for in-lab use, since they either require a calibration target or are computationally very demanding (e.g., methods which are based on building a map of an environment with unknown landmarks). Hence, these methods are not convenient to use in smart-phones with limited power consumption and without access to a calibration target. In [1], we proposed a practical visual inertial calibration method which is based on visual observations in front of a planar mirror. In particular, the visual inertial system navigates in front of the planar mirror, where the camera observes a set of features' reflections (known as key features) in the mirror. The key features are considered to be static with respect to the camera and such that their reflections can always be tracked over images. For this nonlinear system, we derive the state-space model, and estimate the calibration parameters

Fig. 1. IMU-camera rig, the planar mirror, and the corresponding coordinate frames {I}, {C} and {G}. The relative IMU-camera rotation and translation are depicted as C(^C s_I) and ^I p_C, respectively. Feature f is rigidly attached to the IMU-camera rig, and its reflection in the mirror is in the camera's field of view.

along with other system state variables using the unscented Kalman filter. In this paper, we show that for this time-varying nonlinear system the IMU-camera calibration parameters, as well as the 3D positions of the key features with respect to the camera, are observable.

II. SYSTEM DESCRIPTION

The hardware of our visual inertial system consists of a monocular camera (as a vision sensor) that is rigidly mounted on an IMU (as an inertial sensor). For estimating the 6-DoF rigid body transformation between the camera and the IMU, we propose an approach based on an IMU-camera egomotion estimation method [1]. During calibration, we assume that the IMU-camera rig is navigating in front of a planar mirror, which is horizontally or vertically aligned. We formulate the problem in a state-space setting and use the unscented Kalman filter for state estimation. The IMU measurements (linear acceleration and rotational velocity), arriving at a higher rate, are used for state propagation, and the camera measurements, arriving at a lower rate, are used for state correction. The visual corrections are obtained from the positions of the key features in the 2D image plane, which are tracked between image frames. The key features are located arbitrarily (without any prior assumption on their 3D positions with respect to the camera) on the camera body, such that their reflections in the mirror are in the camera's field of view (see Fig. 1). Hereafter, we briefly describe the system process and measurement model used for the observability analysis.
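The geometric core of the measurement model, reflecting a point in a planar mirror with unit normal e_r, can be sketched numerically (a hypothetical sanity check, independent of the authors' state-space code):

```python
# Reflect a 3D point p across a plane with unit normal n passing through point q:
# p' = p - 2 n n^T (p - q). Reflecting twice must return the original point.
def reflect(p, n, q):
    d = sum(ni * (pi - qi) for pi, ni, qi in zip(p, n, q))  # signed distance to the plane
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, n))

p = (1.0, 2.0, 3.0)
n = (0.0, 0.0, 1.0)   # horizontally aligned mirror ...
q = (0.0, 0.0, 0.0)   # ... passing through the origin
p_reflected = reflect(p, n, q)
p_back = reflect(p_reflected, n, q)
```

The `2 n n^T` structure of this reflection is exactly the `2 e_r e_r^T` term that appears in the virtual-feature expression of the measurement model below.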


We consider the system state variables: 1) the motion parameters of the sensors (rotation, velocity, position) in the global reference frame, 2) the IMU-camera calibration parameters, and 3) the 3D positions of the key features with respect to the camera. The total system state vector is

    x = [ ^I s_G^T  ^G v_I^T  ^G p_I^T  ^C s_I^T  ^I p_C^T  ^C p_f1^T  ...  ^C p_fM^T ]^T,    (1)

where ^I s_G represents the orientation of the global frame {G} in the IMU's frame of reference {I} (Cayley-Gibbs-Rodrigues parameterization), ^G v_I and ^G p_I denote the velocity and position of {I} in {G}, respectively; ^C s_I represents the rotation of the IMU in the camera frame, ^I p_C is the position of {C} in {I}, and ^C p_fk for k in {1, ..., M} is the position of the k-th key feature in the camera reference frame. For the observability analysis, we write the system process model (eq. (4), [1]) in the input-linear form

    d(^I s_G)/dt = (1/2) D ω,     d(^G v_I)/dt = g + C(^I s_G)^T a,
    d(^G p_I)/dt = ^G v_I,        d(^C s_I)/dt = 0,
    d(^I p_C)/dt = 0,             d(^C p_fk)/dt = 0,  k = 1, ..., M,    (2)

where (1/2) D := ∂(^I s_G)/∂(^I θ_G), C(s) is the rotation matrix corresponding to s, and ω and a are the rotational velocities and linear accelerations, respectively, measured by the IMU.

Assuming a calibrated pinhole camera, the camera measurements from the virtual features (the reflections of the key features in the mirror) in normalized pixel coordinates can be expressed as

    z_k = h_k = [ u_k  v_k  1 ]^T = (1 / (e_3^T ^C p'_fk)) ^C p'_fk,    (3)

where ^C p'_fk represents the 3D position of the k-th virtual feature with respect to the camera. It is a nonlinear function of the state variables:

    ^C p'_fk = ^C p_fk - 2 C(^C s_I) C(^I s_G) e_r e_r^T
               · ( C(^I s_G)^T C(^C s_I)^T ^C p_fk + ^G p_I + C(^I s_G)^T ^I p_C ),    (4)

where e_r is the normal of the mirror with respect to the global frame, depending on the alignment of the mirror.

III. NONLINEAR OBSERVABILITY ANALYSIS

We study the observability properties of our nonlinear system by analyzing the rank condition of its observability matrix, which is constructed from the spans of the system's Lie derivatives [2]. The observability matrix O is defined as

    O := [ ∇L^0 h ;  ∇L^1_fi h ;  ... ;  ∇L^n_{fi fj ... fd} h ].    (5)

To prove that a system is observable, it is sufficient to show that O is of full column rank. For an unobservable system, the null vectors of O span the system's unobservable subspace. Hence, to find the unobservable subspace, we have to find the null space of the matrix O, where O may have infinitely many rows; this can be very challenging, especially for high-dimensional systems. We study the observability of our IMU-camera system based on the algebraic test, following the analysis given in [3]; details of the analyses and derivations can be found in [4]. We prove that the null space of the observability matrix O in (5), using only two key features, is spanned by five directions corresponding to the columns of

    N = [ 0_{3x2}     0_{3x2}     (∂(^I s_G)/∂(^I θ_G)) C e_r
          [e_j e_d]   0_{3x2}     0_{3x1}
          0_{3x2}     [e_j e_d]   0_{3x1}
          0_{3x2}     0_{3x2}     0_{3x1}
          0_{3x2}     0_{3x2}     0_{3x1}
          ...         ...         ...
          0_{3x2}     0_{3x2}     0_{3x1} ],    (6)

which implies that the IMU-camera calibration parameters and the 3D positions of the key features with respect to the camera are all observable. The unobservable directions correspond to the system's planar translation and velocity orthogonal to e_r (first and second block columns of N) and rotation around e_r (third block column of N).

IV. CONCLUSION

We have studied the observability properties of the IMU-camera calibration parameters for the system proposed in [1]. We show that the calibration parameters and the 3D positions of the key features with respect to the camera are observable when only two key features are used. Hence, our proposed system can conveniently be used in smart-phones with limited power consumption and without access to a calibration target. Finally, we have verified the findings of our analysis both with simulations and real experiments.

REFERENCES
[1] G. Panahandeh and M. Jansson, "IMU-camera self-calibration using planar mirror reflection," in Proc. IEEE Int. Conf. on Indoor Positioning and Indoor Navigation (IPIN), Guimarães, Portugal, pp. 1-7, Sep. 21-23, 2011.
[2] R. Hermann and A. Krener, "Nonlinear controllability and observability," IEEE Trans. on Automatic Control, vol. 22, no. 4, pp. 728-740, 1977.
[3] G. Panahandeh, C. X. Guo, M. Jansson, and S. I. Roumeliotis, "Observability analysis of a vision-aided inertial navigation system using planar features on the ground," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2013.
[4] G. Panahandeh, "Observability analysis of mirror-based IMU-camera self-calibration: Supplemental material," http://kth.diva-portal.org/smash/record.jsf?pid=diva2:656197, 2013.



Processing Speed Test of Stereoscopic vSLAM in an Indoor Environment Using OpenCV GPU-SURF

Delgado, J.V.

Kurka, P.R.G.

Ferreira, L.O.S.

Faculdade de Engenharia Mecânica Universidade Estadual de Campinas Campinas, São Paulo, Brazil [email protected]

Faculdade de Engenharia Mecânica Universidade Estadual de Campinas Campinas, São Paulo, Brazil [email protected]

Faculdade de Engenharia Mecânica Universidade Estadual de Campinas Campinas, São Paulo, Brazil [email protected]

Abstract—This paper presents a speed test of a vSLAM (visual simultaneous localization and mapping) application. In that framework, we process stereoscopic images in order to find invariant interest points (keypoints) using the SURF (speeded-up robust features) algorithm. Such an algorithm is computationally expensive due to the frequent processing of large amounts of data. The SURF algorithm is implemented on three graphics cards using CUDA through the OpenCV library, in order to evaluate the requirements of processing speed and efficiency. The vSLAM process begins with the calibration of a stereoscopic camera, followed by 3-D reconstruction of the keypoints' positions. The visual odometry is recovered by estimating the successive movements of the cameras with respect to the identified spatial locations of the keypoints. The vSLAM is applied to a real indoor navigation experiment.

Keywords—vSLAM, SURF, GPU processing, OpenCV.

I. INTRODUCTION

The accurate use of vSLAM in autonomous navigation applications requires a heavy computational effort. Literature works suggest the use of graphics processing units (GPUs) in order to achieve online and real-time processing speed. A stereoscopic omnidirectional vSLAM application with the use of a GPU is found in the work of Lui [1]. A real-time visual mapping application is presented by Konolige [2]. A commercial depth sensor (Kinect) together with a GPU is used in a vSLAM algorithm by Newcombe [3]. The work of Clipp [4] presents a vSLAM implementation using CPUs and a GPU to process stereoscopic images, achieving a performance of 61 frames per second (fps). Open source software, such as the Open Computer Vision (OpenCV) library, is also a useful tool for the development of image processing applications [6]. Nagendra [7] has developed a method to extract and classify vehicle data using an OpenCV filtering module. Katzourakis [8] uses OpenCV to process images from a web-cam, providing a roadmap on how to perform experiments with cheap sensors on real vehicles. The 2009 version of OpenCV includes some modules of the compute unified device architecture (CUDA [5]), to be used in real-time applications of GPU processing.

The SLAM problem is divided in three parts [10]. The first one, Scene Flow, identifies keypoints in the environment. The second part, Visual Odometry, calculates the motion between identified keypoints. The last one, Global-SLAM, builds the map and re-localizes the cameras attached to the manipulator. In this paper, the Scene Flow task is implemented on a graphics processing unit (GPU); Visual Odometry and Global-SLAM are implemented only on an ordinary computer processor. The vSLAM algorithm is shown in Fig. 1; the outlined block represents the processing on the GPU. Stereoscopic videos are recorded and later processed on mobile and desktop GPU units.

The present paper proposes a vSLAM application test using stereoscopic images. Such an algorithm is computationally expensive due to the frequent processing of large amounts of data. The keypoint identification algorithm is implemented on a graphics card, and its compilations are tested on three different graphics units: mobile and desktop GPUs are compared in order to evaluate their performance. A discussion on how to build a real-time mobile vSLAM navigation device is presented in the conclusions.

II. SOFTWARE ARCHITECTURE

A. Scene Flow

The exploration begins with the acquisition of two stereoscopic images taken at successive path positions at times (t-1) and (t). The identification of keypoints is done by correlating three images using the OpenCV features2D module, designed to run on a GPU. Finally, the spatial positions of the points correlated in the three images are reconstructed through stereoscopic triangulation. The speeded-up robust features (SURF) algorithm [9] is used in order to find interest points (keypoints) that are invariant to orientation, scale and illumination changes. Every point is associated to a descriptor vector, which contains image point coordinates and a neighborhood stability indicator, among

The Brazilian research funding agencies CNPq, CAPES and Fapesp sponsor the present work.


other parameters. The interest points can be correlated through descriptor matching. A correlation algorithm searches for the best fits between keypoint sets. The algorithm also performs a filtering of false positive correlations.
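The correlation step, nearest-neighbour descriptor matching with a ratio test to reject false positives, can be sketched in pure Python (an illustrative stand-in; the paper itself uses the OpenCV features2D matcher on the GPU):

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches whose best distance is clearly smaller than the
    second best (Lowe-style ratio test, which filters false positives)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))  # (index in desc_a, index in desc_b)
    return matches

a = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
b = [(0.1, 0.0), (1.0, 0.9), (0.6, 0.6)]  # no clear match for a[2]
matches = match_descriptors(a, b)
```

Real SURF descriptors are 64- or 128-dimensional rather than 2D, but the matching and filtering logic is the same.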

Figure 1. Architecture of vSLAM (read stereo video; read calibration parameters; load stereo image (t); keypoint identification and correlations on GPU-OpenCV; triangulation; path recovery). The gray block presents a hybrid implementation between CPU and GPU.

Final correlated points are reconstructed in Euclidean space using the stereoscopic camera calibration parameters, that is, the rotation matrix, translation vector, focal length and image center.

B. Visual Odometry

The trajectory path is the connection of movements recovered from stereoscopic images taken at two successive times (t-1) and (t). The image is represented by point clouds in Euclidean coordinates. The movement is described in terms of estimated planar rotation matrices and translation vectors obtained via a least squares method.

C. Global-SLAM

The environment reconstruction is the transformation and storage of the point clouds recovered at each movement, referenced to a global coordinate system. The environment and 3D path visualization uses the OpenGL library, which also runs on a GPU.

III. MATERIALS

The vSLAM algorithm was implemented in real and virtual environments. A pair of stereoscopic cameras was used to capture real-world images. A 3D modeling program was used to simulate a virtual environment and stereoscopic cameras. The processing test required the use of different CPUs and GPUs.

A. Stereoscopic Camera

Images from the real and virtual parallel stereoscopic systems are used as inputs to the vSLAM algorithms. The real stereoscopic system has two Guppy PRO (Allied Vision) cameras with 9 mm lenses and a baseline of 100 mm, shown in Fig. 2. Images of 1280x960 pixels are taken at 7.5 fps. The virtual camera, on the other hand, was modeled with a baseline of 135 mm and an image size of 640x480 pixels.

Figure 2. Stereoscopic camera fixed on a helmet. The Allied cameras were synchronized by resetting the buffer after every capture.

B. Virtual environment

The algorithms were tested in a controlled environment before their implementation in the real world. This work uses Autodesk 3ds Max 3D modeling to test the vSLAM algorithm in an indoor environment. Virtual cameras capture rendered images of a virtual office environment. The object textures were created from photographs of real materials, in order to test the SURF algorithm, as shown in Fig. 3.

Figure 3. Virtual environment and stereoscopic camera fixed on a virtual robot.

C. GPU specifications

The characteristics of the three graphics processing units are displayed in Table I. The Quadro 4000M and the GTX 680M are mobile devices, designed for low power consumption and assembly in mobile computers. The GTX 560Ti is a desktop graphics unit.
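The planar rotation and translation between two successive point clouds can be recovered in closed form from matched point pairs. A sketch of such a least squares fit (the paper does not specify its exact solver, so names and the 2D closed form are illustrative):

```python
import math

def fit_planar_motion(src, dst):
    """Least-squares planar rotation angle and translation mapping src -> dst
    for paired 2D points: minimizes sum |R a_i + t - b_i|^2."""
    n = len(src)
    cax = sum(p[0] for p in src) / n; cay = sum(p[1] for p in src) / n
    cbx = sum(p[0] for p in dst) / n; cby = sum(p[1] for p in dst) / n
    s_cos = s_sin = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay, bx, by = ax - cax, ay - cay, bx - cbx, by - cby  # center the clouds
        s_cos += ax * bx + ay * by   # sum of dot products
        s_sin += ax * by - ay * bx   # sum of cross products
    theta = math.atan2(s_sin, s_cos)
    tx = cbx - (math.cos(theta) * cax - math.sin(theta) * cay)
    ty = cby - (math.sin(theta) * cax + math.cos(theta) * cay)
    return theta, (tx, ty)

# Recover a known 30-degree rotation plus translation (2, -1) from exact pairs:
th = math.radians(30)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
dst = [(math.cos(th) * x - math.sin(th) * y + 2.0,
        math.sin(th) * x + math.cos(th) * y - 1.0) for x, y in src]
theta, (tx, ty) = fit_planar_motion(src, dst)
```

Chaining these per-frame motions yields the trajectory described in Sec. II.B; with noisy triangulated keypoints the same fit simply returns the least squares estimate instead of the exact motion.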


TABLE I. GPU CHARACTERISTICS (READ FROM THE NVIDIA PAGES [12])

    Characteristic                         GTX 560Ti   Quadro 4000M   GTX 680M
    Processing power (GFLOPS)              1263.4      638.4          1935.4
    NVIDIA CUDA parallel processor cores   384         336            1344
    Graphics clock (MHz)                   822         475            720
    Max power consumption (W)              170         100            100
    Memory bandwidth (GB/s)                128         128            115.2
    Memory (GB)                            1           2              4

The performance of mobile GPUs changes when the power is supplied by batteries. In the next section the visual odometry performance is evaluated, running the algorithm on different devices and power supplies.

Figure 4. Speeds of the visual odometry algorithm on different devices. The video captured by the real stereoscopic camera (image size 1280x980) was tested on GPU and CPU devices. The mobile GPUs were tested with and without battery.

IV. RESULTS

The vSLAM algorithm was run on different processing units and environments. Stereoscopic navigation videos in virtual and real environments were processed on different GPUs and CPUs. The recovered navigation path is presented in a video [13]. The processing speed for online visualization of the video was 30% of the average speeds obtained in the tests.

In the second test, rendered images of size 640x480 pixels are processed on the GPUs and a CPU (Fig. 5). The Quadro 4000M operating on battery achieves a processing speed similar to that of a CPU. This illustrates the limits of using the GPU on a battery-operated mobile device.

A. Speed test
Two versions (release compilations) of the visual odometry algorithm were tested. The first one runs on GPU and CPU; it was tested on two mobile devices and on a desktop platform. The second program runs purely on CPUs. Processing speed results are presented in Figs. 4 and 5. Some variations in processing time reflect the dissimilarity of environment textures. Speeds in Fig. 4 represent the processing behavior with a stereoscopic video (image size 1280x980). The fastest result was obtained with the GTX 560Ti GPU, at an average speed of 6 fps. The mobile devices Quadro 4000M and GTX 680M experience a 5-fold speed drop when using the battery source. The processing speed test on pure CPUs shows small variations between different processors: the Intel Core i7 950 at 3.07 GHz and the Intel Core i7 3610QM at 2.3 GHz display a 0.045 fps speed difference. A 20-fold speed increase is observed between the fastest GPU and the CPUs, giving evidence of the superior performance of the GPU.

Figure 5. Speeds of the visual odometry algorithm in a virtual environment. The video rendered at 640x480 on the GTX 560Ti shows real-time processing at 30 fps.

B. vSLAM experiment
Navigation over 37.5 meters in an indoor environment was recorded as a stereoscopic video and later processed with the odometry algorithm. Typical stereoscopic images taken at successive time instants are shown in Fig. 6.
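The path in Fig. 7 is obtained by composing the relative pose estimated at each step and mapping the reconstructed features into the global frame. A minimal numpy sketch of that composition step (the function names are ours, not from the paper's implementation):

```python
import numpy as np

def chain_pose(R_g, t_g, R_rel, t_rel):
    """Compose the accumulated global pose with the relative motion
    estimated between two consecutive stereo frames."""
    return R_g @ R_rel, R_g @ t_rel + t_g

def to_global(points, R_g, t_g):
    """Map an Nx3 point cloud from the current camera frame into the
    global coordinate system for storage and visualization."""
    return points @ R_g.T + t_g
```

Each odometry step supplies a relative (R_rel, t_rel); the accumulated (R_g, t_g) then places that step's reconstructed points in the global map.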



Left(t)

Right (t)

ACKNOWLEDGMENT

The authors wish to thank the Brazilian research funding agencies CNPq, CAPES and Fapesp, sponsors of the present work, and Industrial Automation (DCA) at the School of Electrical and Computer Engineering (FEEC), University of Campinas (Unicamp).

Left(t-1)

Right (t-1)

REFERENCES


Figure 6. Two pairs of stereoscopic images taken in an indoor environment. The top two images are at the current position t and the bottom two at the previous step t-1.

The path recovered in the indoor environment is presented in Fig. 7. Accuracy measurement of the recovered path is not addressed in the present work, but can be found in refs. [11, 14].


V. CONCLUSION

The visual odometry algorithm requires a parallelization technique due to the large amount of image information to process. The results show a performance increase of about 10 times between the GPU and the CPU. The power supply is also a restriction on the performance of mobile devices: in the virtual test, the Quadro 4000M had almost the same performance as a CPU. The test in a real indoor environment suggests the use of GPUs for large images and the possibility of reaching real-time processing speed (30 fps) with an image size of 640x480 pixels. Similar comparative works can be found in the literature: Konolige and Agrawal [2] achieve a processing speed of 15 fps with an image size of 512x384 pixels on a CPU, and Clipp et al. [4] achieve 15 fps with an image size of 1224x1024 pixels using a GPU.


Figure 7. The path recovery is presented as the connection of local coordinate systems. The red points are the reconstructed features used to recover the pose at each step.


[1] W. L. D. Lui and R. Jarvis, "A pure vision-based approach to topological SLAM," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2010.
[2] K. Konolige and M. Agrawal, "FrameSLAM: From bundle adjustment to real-time visual mapping," IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1066-1077, 2008.
[3] R. A. Newcombe, A. Davison et al., "KinectFusion: Real-time dense surface mapping and tracking," in Proc. 10th IEEE Int. Symp. on Mixed and Augmented Reality (ISMAR), pp. 127-136, 2011.
[4] B. Clipp, J. Lim, J.-M. Frahm, and M. Pollefeys, "Parallel, real-time visual SLAM," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 3961-3968, 2010.
[5] J. Sanders and E. Kandrot, CUDA by Example: An Introduction to General-Purpose GPU Programming. Addison-Wesley Professional, 2010.
[6] G. Bradski, "OpenCV 2.4.2," Dr. Dobb's Journal of Software Tools, 2000.
[7] P. Nagendra, "Performance characterization of automotive computer vision systems using Graphics Processing Units (GPUs)," in Proc. Int. Conf. on Image Information Processing (ICIIP), pp. 1-4, 2011.
[8] D. I. Katzourakis, E. Velenis, D. A. Abbink, R. Happee, and E. Holweg, "Race-Car Instrumentation for Driving Behavior Studies," IEEE Trans. on Instrumentation and Measurement, vol. 61, pp. 462-474, 2012.
[9] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346-359, 2008.
[10] B. Clipp, J. Lim, J.-M. Frahm, and M. Pollefeys, "Parallel, real-time visual SLAM," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 3961-3968, 2010.
[11] V. J. Delgado, "Localization and navigation of an autonomous mobile robot through odometry and stereoscopic vision," Master's Dissertation, UNICAMP, Brazil, February 2012.
[12] Nvidia Developer Zone. (2013, Oct. 10) [Online]. Available: https://developer.nvidia.com/cuda-gpus
[13] V. J. Delgado. (2013, Oct. 20). "Processing speed test of Stereoscopic vSLAM in an Indoors environment GPU vs CPU" [YouTube video file]. Retrieved from: http://www.youtube.com/watch?v=pUVL17ub9M&feature=youtu.be
[14] V. J. Delgado, P. R. Kurka, and E. Cardozo, "Visual odometry in mobile robots," in Robotics Symposium, 2011 IEEE IX Latin American and IEEE Colombian Conference on Automatic Control and Industry Applications (LARC), IEEE, 2011.


Enhanced View-based Navigation for Human Navigation by Mobile Robots Using Front and Rear Vision Sensors Masaaki Tanaka, Yoshiaki Mizuchi, Akimasa Suzuki and Hiroki Imamura

Yoshinobu Hagiwara National Institute of Informatics 2-1-2 Hitotsubashi, Chiyoda, Tokyo, 101-8430, Japan [email protected]

Graduate School of Engineering Soka University 1-236 Tangi-machi, Hachioji, Tokyo, 192-8577, Japan [email protected]

Abstract—In this paper, we propose an enhanced view-based navigation which is robust against featureless scenes by using front and rear vision sensors, and evaluate the proposed method. The position and rotation of a mobile robot can be estimated by image matching and ego-motion estimation using one or two suitable vision sensors. In conventional view-based navigation, it is difficult to estimate the position and rotation of a mobile robot when the robot heads for a featureless scene (e.g. a wall surface). By using the proposed method, a mobile robot is expected to enable human navigation in an actual environment. To evaluate the proposed method, we conducted experiments at a corner and in a path heading for a lateral wall. From the experimental results, we confirmed the feasibility of the position and rotation estimation for human navigation by a mobile robot.

Keywords—view-based navigation; human navigation; front and rear cameras; robot; obstacle avoidance


which is robust against featureless scenes by using front and rear vision sensors installed on a mobile robot. Installing vision sensors on the front and rear of a mobile robot is expected to have the advantages of navigating back and forth with a single recording and of utilizing the rear vision sensor for detecting a following human. The image matching and the ego-motion estimation are performed with one or two suitable vision sensors. With our proposed method, it is expected that view-based navigation is extended to featureless scenes and enables human navigation in an actual environment.

II. PROPOSED METHOD

Fig. 1 shows the overview of our proposed method. Images at the top of Fig. 1 show recorded images obtained along the recording path presented by the dotted horizontal line. Upper images are images from the front vision sensor, and lower images are images from the rear vision sensor. Images in the

I. INTRODUCTION

Recently, human navigation by mobile robots has attracted interest. In extensive facilities, navigation to places indicated by visitors using mobile robots is useful. To realize human navigation, it is necessary to estimate the position and rotation of the mobile robot. View-based navigation [1] has been proposed as one approach to position and rotation estimation. View-based navigation estimates the position and rotation of a mobile robot using image matching between a current image and recorded images. This estimation can be performed without accumulation of positional errors, even over a long path. Applying this view-based navigation, we have investigated a robot navigation system [2] which enables avoidance of static obstacles using ego-motion. The ego-motion is calculated from corresponding SURF (Speeded Up Robust Features) [3] feature points in a current image and the most closely matched image among the recorded images. However, if a mobile robot heads for a featureless scene, which appears at a corner or in a path heading for a lateral wall during dynamic obstacle avoidance, view-based navigation with ego-motion has difficulty estimating the position and rotation of the mobile robot. Therefore, we realize an enhanced view-based navigation


Figure 1. Overview of our proposed method


III. EXPERIMENTS

A. Position and rotation estimation at a corner To evaluate the proposed method for the robustness against a featureless scene at a corner, we conducted an experiment at a corner in a corridor of our laboratory. Fig. 3 shows a measurement system for experiments of the position and rotation estimation. The measurement system has two laser pointers to adjust its position and rotation, two Kinects as front

Figure 2. Ego-motion estimation

middle of Fig. 1 show current images. The upper image is an image from the front vision sensor, and the lower image is an image from the rear vision sensor. In the image matching process of Fig. 1 (I), most similar front and rear images are determined by comparing current front and rear images to each of front and rear images recorded at respective points on the recording path using the SURF method. In the ego-motion estimation process of Fig. 1 (II), the ego-motion is calculated from 3D-positions of corresponding SURF feature points between the current images and the most similar front and rear recorded images. In the position and rotation estimation process of Fig. 1 (III), position and rotation are estimated from the position of the most similar recorded images and the estimated ego-motion. Fig. 2 shows the conceptual diagram of the ego-motion estimation with the front vision sensor. The ego-motion estimation with the rear vision sensor is performed in the same way. In the coordinate system of Fig. 2, the origin is the position of most similar recorded images. The z-axis is the line on the recording path. The x-axis is the line perpendicular to the z-axis. The y-axis is the line perpendicular to the xz plane. Rh stands for the height of vision sensors attached on the mobile robot. The point R (0, Rh, 0) represents the position of most similar recorded images. The point C (Cx, Rh, Cz) represents the current position of the mobile robot. Besides, circles show the positions of sampled feature points viewed from the point R. Triangles show the positions of corresponding feature points viewed from the point C. Squares show corresponding feature points rotated. First, the triangles are rotated to be matched rotationally with the circles using the singular value decomposition [4]. The estimated rotation from the triangles to the squares is equal to the estimated rotation θy of the mobile robot. Next, the squares are translated to be matched with the circles. 
The estimated translation from the squares to the circles is equivalent to the estimated translation of the mobile robot from the point R to the point C. The ego-motion is obtained as the estimated rotation θy and the estimated translation (Cx, 0, Cz) of the mobile robot. The ego-motion estimation is performed with both the front and rear vision sensors, and the positions of feature points viewed from the current position are rotated and translated using each ego-motion. Finally, the ego-motion with the largest number of matching feature points is selected as the estimated ego-motion. By estimating the ego-motion using each of the front and rear vision sensors, the mobile robot is expected to estimate its position and rotation more robustly against featureless scenes.
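The rotation and translation steps above amount to the least-squares fitting of two 3-D point sets by singular value decomposition [4]. A compact numpy sketch of that alignment (the function name is ours):

```python
import numpy as np

def fit_rigid(P_ref, P_cur):
    """Least-squares rigid motion aligning current-view points P_cur (Nx3)
    onto reference-view points P_ref (Nx3), following Arun et al. [4]."""
    mu_ref, mu_cur = P_ref.mean(axis=0), P_cur.mean(axis=0)
    H = (P_cur - mu_cur).T @ (P_ref - mu_ref)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_ref - R @ mu_cur                     # translation after rotation
    return R, t
```

Restricted to ground-plane motion, the recovered rotation corresponds to θy and the translation to (Cx, 0, Cz).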


Figure 3. Measurement system for experiments

Figure 4. Experimental path

(a) Front image (b) Rear image Figure 5. Captured images at the origin in Fig. 4

Figure 6. Estimated positions

Fig. 8, circles and lozenges represent rotational errors by the proposed method and those by the conventional method, respectively. With the conventional method, positional errors over 10cm occurred at 4 positions and rotational errors over 1deg. occurred at 5 positions. The maximum errors of position and rotation were 38.1cm and 8.5deg., respectively. On the other hand, with the proposed method, positional errors and rotational errors were under 5cm and 1deg. at all positions. From these results, it is confirmed that the proposed method is able to estimate the position and rotation of a mobile robot accurately at a corner which can present featureless scenes.

B. Position and rotation estimation in dynamic obstacle avoidance
To evaluate the proposed method for robustness against a featureless scene when heading for a lateral wall during dynamic obstacle avoidance, we conducted an experiment in a corridor of our laboratory. On the assumption of a sudden obstacle in

Figure 7. Positional errors in the experiment at the corner

Figure 8. Rotational errors in the experiment at the corner

and rear vision sensors, and two notebook PCs connected with each Kinect. The estimation accuracy of the proposed method is evaluated by comparing estimated position and rotation to precise position and rotation. Fig. 4 shows an experimental path. The experimental path ends at the point 91.5cm from a forward wall. 91.5cm is half the width of the corridor and a mobile robot is assumed to turn at this point. In Fig. 4, a white circle and a square in dotted outline represent a start position of the recording path and that of the experimental path, respectively. Black circles and squares in solid outline represent capture positions on the recording path and those on the experimental path, respectively. In the recording path of 400cm, recorded images were captured at 100cm intervals. In the experimental path of 400cm, images were captured at 20cm intervals. Fig. 5 (a)(b) show front and rear captured images at the origin in Fig. 4. Fig. 6 shows the experimental result by the conventional method [2] and the proposed method. In Fig. 6, squares are capture positions on the experimental path. Lozenges and circles represent estimated positions by the conventional method and those by the proposed method, respectively. From the experimental result, it is confirmed that the proposed method is able to estimate the position of the mobile robot robustly against a featureless scene of a wall surface at the corner. On the other hand, the conventional method had difficulty in estimating the position of the mobile robot at some positions. Fig. 7 and Fig. 8 show positional errors and rotational errors in the experiment. In Fig. 7, circles and lozenges represent positional errors by the proposed method and those by the conventional method, respectively. In

Figure 9. Experimental paths

(a) Front image

(b) Rear image

Figure 10. Captured images at the point C in Fig. 9

TABLE I. POSITIONAL AND ROTATIONAL ERRORS WITH THE CONVENTIONAL METHOD

Distance to the    Positional error (cm)    Rotational error (deg.)
obstacle (cm)      Average   Std. dev.      Average   Std. dev.
220                5.2       11.0           2.4       4.2
160                4.6       5.2            2.3       2.5
100                28.2      33.5           7.4       11.2
80                 95.3      97.7           7.8       17.0

TABLE II. POSITIONAL AND ROTATIONAL ERRORS WITH THE PROPOSED METHOD

Distance to the    Positional error (cm)    Rotational error (deg.)
obstacle (cm)      Average   Std. dev.      Average   Std. dev.
220                5.0       6.2            1.1       1.2
160                4.6       5.7            1.8       1.8
100                3.7       4.2            2.6       2.6
80                 48.7      76.1           3.2       3.3


the middle of the corridor, the robot runs on avoidance paths from various distances to the obstacle. The same measurement system as in Fig. 3 is used. Fig. 9 shows the experimental paths. The obstacle is located at the origin of the coordinate system. The maximum distance from the recording path during avoidance is 60cm. Avoidance starts from 220cm, 160cm, 100cm and 80cm to the obstacle, and the angles between each avoidance path and the surface of the lateral wall are approximately 15.3deg., 20.6deg., 31.0deg. and 36.9deg., respectively. In Fig. 9, a white lozenge represents the start position of the recording path. Also, hexagons, squares, triangles and circles in dotted outlines represent the start positions of the avoidance paths from 220cm, 160cm, 100cm and 80cm to the obstacle, respectively. Black lozenges represent capture positions on the recording path. Also, hexagons, squares, triangles and circles in solid lines represent capture positions on the avoidance paths from 220cm, 160cm, 100cm and 80cm to the obstacle, respectively. In the recording path, recorded images were captured at 100cm intervals. In the avoidance paths, images were captured at 20cm intervals. Fig. 10 (a)(b) show front and rear captured images at the point C, which is the end position of the avoidance path from 80cm to the obstacle. Experimental results are shown in TABLE I and TABLE II. TABLE I shows positional and rotational errors with the conventional method. In TABLE I, in the avoidance paths from 100cm and 80cm to the obstacle, it was difficult to estimate the position and rotation of the mobile robot during avoidance. TABLE II shows positional and rotational errors with the proposed method. In TABLE II, in the avoidance paths from 220cm, 160cm and 100cm to the obstacle, it was confirmed that the average positional errors and the average rotational errors are under 10cm and 3deg., respectively.
The estimation accuracy is considered to be useful for controlling a mobile robot for human navigation in an actual environment. In the avoidance path from 80cm to the obstacle, it was difficult to estimate the position and rotation of the mobile robot during avoidance. This is because recorded images from a position 100cm earlier were selected in the image matching process, owing to the similarity in the overlapping region of the current and recorded images. Far matched points in the current images seem to have had larger positional errors and to have led to wrong estimation. From the experimental result in TABLE II, we confirmed that the acceptable range of the distance to the obstacle was improved to 100cm by the proposed method, which means a mobile robot can avoid a dynamic obstacle at up to approximately 31.0 deg. to the surface of a lateral wall. Therefore, it is confirmed that the proposed method is able to estimate the position and rotation of a mobile robot more robustly against a featureless scene of a lateral wall which appears in dynamic obstacle avoidance.

IV. CONCLUSIONS

In this paper, we proposed an enhanced view-based navigation robust against featureless scenes for human navigation by mobile robots, using front and rear vision sensors. In experiments at a corner and in a path heading for a lateral wall, we evaluated the accuracy of the estimated position and rotation. From the experimental results, it was confirmed that the proposed method was able to estimate the position and rotation of a mobile robot more robustly against featureless scenes. Moreover, the average processing time of the proposed method was 630ms. Therefore, by using the proposed method, it is expected that a mobile robot can operate over a wide range of an actual environment and enable human navigation.

REFERENCES



[1] Y. Matsumoto, M. Inaba, and H. Inoue, "View-Based Approach to Robot Navigation," JRSJ, vol. 20, no. 5, pp. 506-514, 2002.
[2] Y. Hagiwara, T. Shoji, and H. Imamura, "Position and Rotation Estimation for Mobile Robots Straying from a Recording Path Using Ego-motion," IEEJ-C, vol. 133, no. 2, pp. 356-364, 2013.
[3] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," in ECCV, 2006, pp. 404-417.
[4] K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-Squares Fitting of Two 3-D Point Sets," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 9, no. 5, pp. 698-700, 1987.

- chapter 8 -

Geoscience


Generation of reference data for indoor navigation by INS and laser scanner Friedrich Keller, Thomas Willemsen and Harald Sternberg Dept. Geomatics HafenCity University, Hamburg, Germany [email protected], [email protected], [email protected]

Abstract—Many indoor applications use maps or building models to improve position determination and navigation. Here, the question arises how such maps can be generated as efficiently as possible. This article shows how kinematic laser scanning may be used to provide a point cloud. It addresses in particular the determination of the trajectory and the applied filter technology, plus the external support provided by a total station, but not the actual modeling of the data.

Keywords—indoor navigation, mobile mapping, total station, inertial measurement unit, laser scanning, Kalman filtering

I. INTRODUCTION

Navigation has never been as easy as it is today: outside of buildings, it is possible for anyone to navigate with navigation systems or smartphones. Inside buildings, the use of map data improves position determination. At the same time, the use of maps helps to verify the present position. The prerequisite, logically, is that map data or whole building models are available. A brief look at the proceedings of IPIN 2012 shows that maps play a role in many areas. [5] deals with the issue of acquiring the map data as a basis: it is shown how this data can be obtained from a photo of an evacuation plan and how the data helps to improve the position. This article discusses the possibility of using an indoor mobile mapping system to measure a point cloud as the basis for a building plan or model. Other examples are [4], [1] and [2].

II. MEASURING SYSTEM

A quick way to capture point clouds for the creation of building data is kinematic laser scanning. For kinematic laser scanning, mobile mapping systems (MMS) are used. An MMS usually consists of different components. The main component is normally an IMU which, in conjunction with GNSS, determines the trajectory; both the position and the orientation in space are determined. GNSS is an essential element: it provides an absolute position and prevents the drift of the IMU. Depending on the configuration, one or several laser scanners are used. Since no GNSS is available indoors, the drift of the system and the absolute position determination must be compensated by other systems. The HCU Hamburg developed a modular measurement platform. This allows an analysis of different measurement configurations and sensors. The main module

consists of a high-end IMU and a laser scanner. For outdoor use the system can be expanded by a GNSS module and an odometer. For indoor use, a larger number of modules exist. This is due to the fact that various sensors to support the system are being tested. At this point the total station module is presented. It consists of a 360-degree prism and a Leica 1201+ total station. To determine the suitability of the total station, the configuration has been adjusted accordingly. In order to merge the data and to get the optimal trajectory, the approach of Kalman filtering [3] has been selected, in this case for nonlinear systems, also called the extended Kalman filter (EKF). This allows an optimal estimation of the trajectory. The Kalman filter is a set of equations that provides a method to determine the estimated state of a process. This set of equations consists of two steps: prediction (time-update equations) and correction (measurement-update equations). For further explanation, reference is made to the relevant literature [7]. To improve the results of the filter, a Rauch-Tung-Striebel (RTS) smoother is applied. The RTS smoother is an efficient two-pass algorithm for fixed-interval smoothing. The first pass is a normal EKF. Then the respective state vectors and covariances are retrieved backwards and smoothed.
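For reference, the two steps can be sketched for the linear case; the EKF substitutes the Jacobians of the nonlinear motion and measurement models for F and H. This is a generic illustration, not the authors' implementation:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction (time update): propagate state estimate and covariance."""
    return F @ x, F @ P @ F.T + Q

def kf_correct(x, P, z, H, R):
    """Correction (measurement update): blend prediction with measurement z."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

The RTS smoother then runs one backward sweep over the state vectors and covariances stored during this forward pass.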

III. TIME SYNCHRONIZATION

For the use of the Kalman filter in kinematic measuring applications, it is essential to provide a uniform time base. Normally, the approximation of the time systems is provided by GNSS. For the indoor system of the HCU, a different solution had to be found. The IMU itself has a time system to reference the connected systems. This is synchronized with GPS time by the built-in GNSS receiver; this has to be done, otherwise the time system will drift. But as the time stamps for the laser scanner and the odometer are generated by the IMU, this is not critical. The total station, however, has its own time system, which cannot be synchronized from the outside via GNSS or other inputs; explicit explanations of the total station time system can be found in [6]. To reference the time systems without much effort, such as an additional laptop or a specific measurement configuration, the following solution is analyzed. The deviations of the clocks can be described by the linear function



k_TPS = m · k_IMU + b

(1)


k_IMU and k_TPS denote the same point in time in the two systems, m is a scale factor generated by the different time drifts of the systems, and b is the offset between the systems. Firstly, it is assumed that both systems only have an offset and that the scale is stable with m = 1; only the offset b is to be determined. From the total station times and measured coordinates, a second velocity profile can be calculated. With the cross-correlation, the offset of the two time systems is determined.
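With both velocity profiles resampled to a common interval dt, the offset estimation can be sketched as follows (the function name and test signals are ours, not from the authors' software):

```python
import numpy as np

def clock_offset(v_imu, v_tps, dt):
    """Estimate the clock offset b (assuming scale m = 1) by cross-correlating
    the IMU-derived and total-station-derived velocity profiles, both sampled
    at interval dt. A positive result means the IMU profile lags the total
    station profile by that many seconds."""
    a = v_imu - v_imu.mean()
    b = v_tps - v_tps.mean()
    lag = np.argmax(np.correlate(a, b, mode="full")) - (len(b) - 1)
    return lag * dt
```

The estimated b is then applied to the total station time stamps before they enter the filter.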

Fig. 1. Differences between total station and GPS with GPS covariance (95%)

IV. INDOOR TEST

In a building of the HCU, several loops were measured with the measuring platform while the platform was tracked by a total station. Once the time systems are synchronized, every second point of the total station measurement is marked as a control point and ignored in the Kalman filter. To come to a conclusion about the obtained accuracy, the estimated positions of the filter are compared with the control points. Table I shows an overview of the achieved accuracy. The control points are correlated with the measurement system over time. Several kinds of supporting data were tested for the filter: sequentially, the odometer and the total station were added to the IMU. All other parameters, such as the first initialization of the filter (measured in idle mode by the total station), remained unchanged. The deviations shown in Table I are investigated more closely for the second and third measurements. Noticeable is that the mean of the second measurement is still above the standard deviation. This suggests that systematic errors are involved in the measurement. A detailed analysis of the trajectory explains this with the drift of the system: identically driven loops become larger or smaller over the measurement. In the third measurement the mean is less than the standard deviation; it seems that the systematic error components fall away by adding the total station. It is assumed that the character of the residual error is random, but this is subject to further investigation.

TABLE I. QUADRATIC DEVIATION; MEASUREMENT 1: IMU ONLY, MEASUREMENT 2: IMU AND ODOMETRY, MEASUREMENT 3: IMU, ODOMETRY AND TOTAL STATION

          IMU only [m]   with Odometry [m]   with Totalstation [m]
Max.      1.8049         1.0395              0.1105
Median    0.5820         0.3783              0.0101
Mean      0.7218         0.4140              0.0159
Std.      0.4872         0.3268              0.0197
Min.      0.0760         0.001               9.5e-08

VI. CONCLUSION

In summary it can be stated that rapid registration indoors with indoor mobile mapping systems is possible. The loss of GNSS can be compensated by a total station. Cross-correlation is a practical method to estimate the offset of the time systems. However, the full potential has not yet been exhausted: in future studies an independent control has to be found for quality assessment, and it should also be further investigated whether better results can be reached with special commands via the GeoCom interface.

REFERENCES

V. OUTDOOR TEST

To obtain an independent control of the system, measurements outdoors were recorded. To determine the trajectory, measurements from DGPS, the IMU and the odometer were used and merged with commercial software (Novatel Waypoint Inertial Explorer). The trajectory from total station, IMU and odometer was merged as well. Figure 1 shows the comparison of the position coordinates; illustrated is the difference between reference (GPS) and actual (total station). Striking here is the large variation at the beginning of the measurement at standstill; this must result from the GPS measurement. As a result, the GPS control must be considered with care. If the variation in idle mode is taken as the accuracy of the measurement (2 cm, 95%), only a few significant deviations were found. This implies that the total station can achieve the same accuracy class in this configuration as GPS. It shows that the total station is suitable to improve kinematic measurements with the help of an IMU.

Total station vs. GPS [m]: Max. 0.1274, Median 0.0164, Mean 0.0202, Std. 0.0149, Min. 0.0024


[1] C. Ascher, C. Kessler, R. Weis, and G. F. Tromme, "Multi-Floor Map Matching in Indoor Environments for Mobile Platforms," in Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 13-15 Nov. 2012.
[2] D. Gotlib, M. Gnat, and J. Marciniak, "The Research on Cartographical Indoor Presentation and Indoor Route Modeling for Navigation Applications," in Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 13-15 Nov. 2012.
[3] R. E. Kalman, "A New Approach to Linear Filtering and Prediction Problems," ASME Journal of Basic Engineering, vol. 82 (Series D), pp. 35-45, 1960.
[4] S. Khalifa and M. Hassan, "Evaluating mismatch probability of activity-based map matching in indoor positioning," in Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 13-15 Nov. 2012.
[5] M. Peter, D. Fritsch, B. Schaefer, A. Kleusberg, L. Bitsch, A. J, and K. Wehrle, "Versatile Geo-referenced Maps for Indoor Navigation of Pedestrians," in Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 13-15 Nov. 2012.
[6] W. Stempfhuber, K. Schnaedelbach, and W. Maurer, "Genaue Positionierung von bewegten Objekten mit zielverfolgenden Tachymetern" [Accurate positioning of moving objects with target-tracking total stations], in Proceedings of Ingenieurvermessung 2000.
[7] G. Welch and G. Bishop, "An Introduction to the Kalman Filter," Technical Report 95-041, UNC-CH Computer Science, 1995.

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

Implementation of OGC WFS floor plan data for enhancing accuracy and reliability of Wi-Fi fingerprinting positioning methods

Daniel Zinkiewicz

Bartosz Buszke

Wasat Sp. z o.o. Warsaw, Poland [email protected]

Wasat Sp. z o.o. Warsaw, Poland [email protected]

Abstract—The paper presents a method of enhancing the accuracy and reliability of Wi-Fi fingerprinting by implementing vector data provided by OGC WFS services. The main concept is based on dynamic zoning of the mapped indoor environment by dividing the fingerprint model data. The working prototype is built with the Fraunhofer IIS awiloc® SDK for indoor positioning and an open-source GeoServer platform for hosting WFS data. In the proposed method, WFS is adopted as a service for provisioning building information in GeoJSON format, while WMS images provide only a background map. The WFS vector data carry information on walls, paths, entrances, doors, open spaces, excluded areas and the overall geometry of buildings. Requested GML data are loaded in the form of layers with the described layer data type. WFS content is accessed asynchronously from the GeoServer and retrieved by a built-in Android-based mobile application. The client-server structure of the presented solution introduces flexibility to an otherwise static presentation of indoor floor plans. The implementation of WFS information makes it possible to obtain reliable Wi-Fi fingerprinting results by excluding areas where a position is not reachable. Bordering the Wi-Fi fingerprint model data with vector data also makes it possible to suppress position variation close to walls in an indoor environment. As a result, the position obtained by the presented solution does not jump between the two sides of a wall. The applied method of merging Wi-Fi fingerprinting model data with vector data from a WFS service enhances the accuracy of indoor positioning and eliminates the influence of large navigation and routing errors.

Keywords: Wi-Fi, fingerprinting, WFS, Web Feature Service, floor plans

I. INTRODUCTION

Large-scale deployment of indoor location services is difficult due to technical challenges. For Wi-Fi fingerprinting, data fusion with additional information is normally required to achieve high accuracy and resolution. A number of researchers have been working on combining Wi-Fi fingerprinting with different technologies to enhance accuracy and reliability.

The article is organized as follows: in Section "Technology and system parts architecture" we describe the baseline components of the system prototype. The details of the proposed methodology of data integration are explained and discussed in Section "Implementation". The modeled results are tested and evaluated against the baseline method in Section "Tests & evaluation". An overview of the related work is provided in Section "Related work". The article concludes in Section "Conclusions and future work" with a summary of the primary contributions of this work and an overview of future work.

A. Problem
Location and indoor tracking technologies provide a position in the form of two-dimensional coordinates. Single sensor sources (e.g., Wi-Fi fingerprinting) are not precise and accurate enough, but the combination of various technologies and different levels of data processing allows a more exact and reliable indoor positioning.

B. Motivation
The presented work is motivated by the need for spatial support for a user who wants more accurate methods of indoor positioning and more reliable systems providing continuous location information. In this article, we propose a Wi-Fi fingerprinting-based indoor positioning system combined with vector data obtained from a Web Feature Service (WFS). In the proposed prototype, we define an efficient and robust model of utilization of floor plan data, where the initial position distribution calculated by the positioning system is corrected before being presented to the user.

II. TECHNOLOGY AND SYSTEM PARTS ARCHITECTURE

The work presented in this paper is based on a decentralized system architecture. The prototype system consists of three different parts: the server, in the form of an open-source implementation of GeoServer; the mobile engine for determining a fingerprinting position, in the form of an Android application; and the engine for WFS retrieval and manipulation of vector data.

A. Use cases
Our approach, which combines WFS-distributed data and a Wi-Fi fingerprinting position, has high potential in large open spaces where no routing graphs are implemented. Vector data in the form of multilines or polygons define boundaries of different areas which in most cases are not accessible within a short time. Based on these objectives we try to resolve the problem of enhancing positioning accuracy in the close neighborhood of walls and in open spaces in a building.

B. GeoServer
For the provision of floor plans in the form of WFS data, GeoServer (particularly GeoServer for Windows) is used. GeoServer is an open-source project running on different platforms including Microsoft Windows, Linux and Mac OS. It supports a rich set of raster and vector data formats, geographic data sources and OGC standards (among them WMS, WFS, GML) as well as the open GeoJSON format. GeoServer for Windows runs with the Apache HTTP Server and Apache Tomcat. Shapefiles can be used directly as data sources for GeoServer.

C. Mobile client
A prototype client side of the system was developed on the Android platform. We decided to build a fat client with many features because reducing communication time was the most important factor for the effectiveness of the system's work. The Android application was optimized for running on a Samsung Galaxy S2 device, which allowed us to run the algorithms and obtain a position in a short time. The decentralized architecture of the positioning system enables natural load balancing, high availability of floor plans and robustness of the system.

D. WFS data
WFS is the service of choice for accessing building data from GeoServer. In addition, WFS can use various data sources as a backend. Available implementations usually support various data formats and databases.
WFS provides a simple protocol and an interface for accessing geographical features over HTTP. Its main operations used in our implementation were getCapabilities(), describeFeatureType() and getFeature(). To access complete layer data, a layer can be requested using the getFeature() operation. A download of layer data can thus be accomplished stepwise, starting with the layer containing the building outline and the layer representing the current user location (Fig. 1). In addition, the layers providing information about the positioning technologies supported by a client device can be loaded. Hence, WFS allows the exploration and download of the provided features, i.e. building model layers, in a dynamic and selective way, ensuring extensibility of and flexible access to the provided data based on a standard format and protocol.
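A getFeature() request of this kind is an ordinary HTTP GET with a handful of query parameters. A minimal sketch of composing such a request URL follows; the endpoint, layer name and class name are made up for illustration, not taken from the paper:

```java
// Sketch of building a WFS 1.1.0 GetFeature request URL, as used to fetch
// a single floor-plan layer from a GeoServer instance. Hypothetical names.
final class WfsClient {

    // Composes a GetFeature URL asking for GeoJSON output.
    // typeName is the qualified layer name, e.g. "building:walls".
    static String getFeatureUrl(String baseUrl, String typeName) {
        return baseUrl
            + "?service=WFS"
            + "&version=1.1.0"
            + "&request=GetFeature"
            + "&typeName=" + typeName
            + "&outputFormat=application/json";
    }

    public static void main(String[] args) {
        // Endpoint and layer are invented for the example.
        System.out.println(getFeatureUrl(
            "http://example.com/geoserver/wfs", "building:walls"));
    }
}
```

The resulting URL can then be fetched with HttpURLConnection or any HTTP client; with a GeoJSON-capable server such as GeoServer, the outputFormat parameter selects the encoding of the returned features.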

Figure 1. Example of WFS vector data with a floor plan displayed in an Android canvas object.

E. Fraunhofer SDK
The Fraunhofer IIS awiloc® solution makes it possible for mobile devices to independently determine their position in indoor and urban environments based on signal strength measurements. Networks from the IEEE 802.11 family of wireless LAN standards have emerged as a prevalent technology; hence, they are predominantly used as a basis for indoor positioning. Indoor positions are determined with an accuracy of a few meters. Positioning based on Wi-Fi fingerprinting in communication networks perfectly complements other location data or different approaches developed with the distributed awiloc® SDKs.

III. IMPLEMENTATION

Most of the implementation work was done on the client side of the system, using a Java-based Android environment. The development was divided into parts including algorithms for obtaining the Wi-Fi fingerprinting position from awiloc® models, algorithms for downloading and parsing, methods for manipulating data structures, and methods for coordinate transformation. An important element of the client side was the set of algorithms for recalculating the position with the use of vector data. Each part of the application, and the most important parts of each algorithm, run as separate Android services or as asynchronous tasks (e.g., downloading WFS data from GeoServer).

A. GeoJSON
We decided to use GeoJSON for flexible and highly interoperable access to floor plan models on Android devices.

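The client-side data structures discussed in this section, wall geometries split into individual line sections held in ArrayList objects and recent positions kept with a timestamp, might be sketched as follows; all class and field names are ours, not from the paper:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the client data structures: polygon rings split into single
// line segments (easy to test one by one), plus a timestamped history of
// position fixes so consecutive fixes can be compared. Names are invented.
final class GeoStore {

    static final class Segment {
        final double ax, ay, bx, by;
        Segment(double ax, double ay, double bx, double by) {
            this.ax = ax; this.ay = ay; this.bx = bx; this.by = by;
        }
    }

    // Splits one polygon ring, given as [[x,y], [x,y], ...], into segments.
    static List<Segment> ringToSegments(double[][] ring) {
        List<Segment> out = new ArrayList<>();
        for (int i = 0; i + 1 < ring.length; i++) {
            out.add(new Segment(ring[i][0], ring[i][1],
                                ring[i + 1][0], ring[i + 1][1]));
        }
        return out;
    }

    // Recent fixes keyed by timestamp (ms); LinkedHashMap keeps arrival order.
    private final Map<Long, double[]> history = new LinkedHashMap<>();

    void record(long timestampMs, double x, double y) {
        history.put(timestampMs, new double[] { x, y });
    }

    int size() { return history.size(); }
}
```

With such a store, the current fix can be compared against the previous timestamped entry, which is what the side-of-wall test later in the paper relies on.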

In combination with WFS as a service for access, it is possible to support various internal representations of building data, which can be transformed on the fly into GeoJSON format. In our approach we adopt WFS as a service for provisioning building information in GeoJSON format. Using the OGC WFS and GeoJSON standards we ensure high interoperability with the Android SDK. In this way, building data providers are not restricted to a particular format but can choose any format as long as it can be transformed into GeoJSON or GML. Using WFS with GeoJSON as a standard technology requires JSON processing, which is challenging for mobile clients. For this reason we implemented a set of solutions to combine smaller vector parts obtained through WFS, which allows our approach to optimize the working system.

B. GeoData structure
Downloaded GeoJSON raw data are processed to obtain simpler objects which can be used in further calculations. For this purpose we implemented our own GeoJSON parser compatible with Android data structures. The parser looks only for lines, multilines and polygons which can represent the walls of a building in a floor plan. In the next step it divides all objects into separate line sections and builds ArrayList objects which allow easy manipulation of the data. In position processing we also store location data temporarily. For this we use HashMap objects which capture each broadcast position with a timestamp. In later parts of the algorithms this allows us to compare the actual position with its past values.

Another type of geodata structure used in our algorithms is the representation of an extended point object. From WFS data we obtain coordinates in the WGS 84 system which cannot be used directly in mathematical formulas. Each coordinate has to be transformed before calculation and stored separately. For this reason we use a point object implementation where we store the coordinates of each point in different formats (longitude, latitude and their UTM representation).

C. Data filtering
Each floor plan in the form of vector data can contain a large amount of data and linear objects. To improve the performance of the algorithms we process only GeoJSON data from a well-defined bounding box in the close neighborhood of the actual position. This allows us to reduce the vector data.

As a part of data filtering we use algorithms to read each parameter of the GeoJSON exactly once. At this stage we filter only nodes with defined parameter descriptions which point to objects like walls. The next step in data filtering is sorting all structures and choosing only the objects closest to the actual position. This yields a small set of objects around the current position. For this purpose we used an optimized algorithm for calculating the shortest distance from a point M to the line through A and B (1):

d = |(Bx-Ax)*(My-Ay) - (By-Ay)*(Mx-Ax)| / sqrt((Bx-Ax)^2 + (By-Ay)^2)    (1)

Equation (1) plays the role of a geofencing method in our approach. By using this method we compose the most effective set of objects which are examined in the next steps of the running algorithm.

D. Algorithms
First of all, after parsing the GeoJSON raw data we transform the coordinates of all line sections and points to a coordinate system which simplifies all mathematical and numerical calculations. The consequent adoption of WGS 84 creates some difficulties with the integration of the floor plan data, because a transformation to mathematical coordinates was necessary for further calculations. For this we use extended point objects where we store geographical coordinates together with their corresponding UTM representation. We also implemented our own algorithms for coordinate transformation, which are executed only once to increase the performance of the running algorithm.

As the main algorithmic part of our solution we implemented decision methods. They allow us to decide whether two corresponding positions received within a minimal time interval are on the same side of a line or not. This represents the behavior of a user location close to a wall. For this reason we implemented a sign calculation from the math library. We use the sign of the determinant of the vectors (AB, AM), where M(X, Y) is the query point:

P = sign((Bx-Ax)*(Y-Ay) - (By-Ay)*(X-Ax))    (2)

P in (2) is 0 on the line, +1 on one side and -1 on the other side. These values are tested for the whole set of line sections filtered in the previous steps. A changing value of P within a short interval shows that the position jumps to the other side of the line. In a real situation this means that the location of a user changes quickly to the other side of a wall, which in real life is impossible in most cases.

In the case of a negative value of the test it is necessary to calculate a new position. For that we implemented an algorithm which calculates an average of the last received position and a virtual value, which is the nearest position to the old one but on the correct side of the line. This approach allows us to determine the most accurate approximation of a position in these cases.
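The two geometric tests used here, the shortest distance from a point to a wall segment for filtering (1) and the determinant sign test for detecting a side change (2), can be sketched as follows (class and method names are ours, not from the paper):

```java
// Sketch of the geometric tests: point-to-segment distance used to keep
// only nearby wall segments, and the determinant sign test used to detect
// a fix jumping to the other side of a wall. Names are illustrative.
final class WallGeometry {

    // Sign of the determinant of (AB, AM): +1 / -1 for the two half planes,
    // 0 when M lies on the line through A and B.
    static int sideOfLine(double ax, double ay, double bx, double by,
                          double mx, double my) {
        double det = (bx - ax) * (my - ay) - (by - ay) * (mx - ax);
        return det > 0 ? 1 : (det < 0 ? -1 : 0);
    }

    // Shortest distance from point M to the segment AB.
    static double distanceToSegment(double ax, double ay, double bx, double by,
                                    double mx, double my) {
        double dx = bx - ax, dy = by - ay;
        double len2 = dx * dx + dy * dy;
        if (len2 == 0) return Math.hypot(mx - ax, my - ay); // degenerate segment
        // Projection parameter of M onto AB, clamped to the segment.
        double t = ((mx - ax) * dx + (my - ay) * dy) / len2;
        t = Math.max(0, Math.min(1, t));
        return Math.hypot(mx - (ax + t * dx), my - (ay + t * dy));
    }

    // True when two consecutive fixes fall on opposite sides of a wall,
    // i.e. the implausible "jump through the wall" case.
    static boolean crossesWall(double ax, double ay, double bx, double by,
                               double x1, double y1, double x2, double y2) {
        return sideOfLine(ax, ay, bx, by, x1, y1)
             * sideOfLine(ax, ay, bx, by, x2, y2) < 0;
    }
}
```

A fix pair for which crossesWall(...) is true would then be treated as implausible and the new position recomputed on the original side of the wall, as the text describes.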

IV. TESTS & EVALUATION

The experience and practical tests carried out during the implementation of the prototype system have demonstrated the feasibility of the major algorithmic solution. As a result of our previous experiments with Wi-Fi positioning, we found that the accuracy of a Wi-Fi fingerprinting-based approach alone is often too low for precise indoor applications. The conducted tests allowed us to verify our approach of joining fingerprinting and vector data.




For this reason we prepared a couple of test sets in one real building, but in different places. We chose locations in rooms close to a wall and in corners where two or more walls intersect. We also chose one location in an open space where the nearest wall is more than 5 meters from the real location. For test purposes we implemented a data capture module which stores the location in each epoch together with the corresponding vector data description for evaluation. All results are presented in Table I.

TABLE I. RESULTS OF TESTS

Set no.  Set description          Right position,     Right position,
                                  Wi-Fi only [%]      Wi-Fi + WFS [%]
1        Open space (378 pos.)    98                  98
2        Long wall (548 pos.)     82                  86
3        Short wall (231 pos.)    79                  85
4        Corner (432 pos.)        68                  69

The decision to follow the combined WFS and Wi-Fi fingerprinting positioning approach requires a higher effort for creating fat clients with a full set of processing algorithms. In addition, processing of floor plan data is performed on the client side. Using WFS in the form of GeoJSON as a standard technology requires JSON processing, which is challenging for mobile clients. The advantage of the fat client approach is the higher flexibility of processing floor plans offered as vector data instead of raster images. In addition, positioning has to be performed on the client side anyway. With a fully implemented client, positioning and visualization can be implemented in a flexible way.

All tests were made over long periods of time (more than 10 minutes) in static positions at each point. We compared the calculated position coming from the Wi-Fi fingerprinting module with the position corrected by our algorithms. In each case the application of our methods gave positive results (except for the open space, where no changes occurred), but the changes are not large. This may be a result of the chosen building, where the rooms are relatively small. Further tests and algorithm improvements should therefore be made.

V. RELATED WORK

A. Crumbs
The indoor positioning system used for obtaining positioning data was also employed in the frame of the EUREKA project "CRUMBS: Crumbs, Places and Augmented Reality in Social Network". For the purpose of location module development in the project, the core function of the fingerprinting model was implemented and tested in real environments. Floor plans were used to visualize the user position on the map. Using only WMS data, we observed a need to implement vector data to provide more reliable location information.

B. HortiGIS mobile
For the purposes of the European Space Agency project "HortiSat: Integrated Satellite Application for High-Value Horticultural Production", a mobile GIS application was developed that retrieved WMS and WFS data. In that case we used only GPS location data, and mostly WMS services were utilized to present and distribute different geospatial information for horticulture users.

VI. CONCLUSIONS AND FUTURE WORK

In this paper a novel approach is presented to integrate technologies for indoor location-based services with vector geodata obtained from GeoServer WFS streams. We introduced a decentralized system infrastructure providing explicitly modeled data about the building geometry and positioning. Floor plan data are offered via open standards, namely WFS and GeoJSON, to achieve high interoperability of the system with Android devices. Floor plan data on the client side are combined with map data for visualizing indoor locations in a highly precise, integrated manner. In addition, information for positioning is exploited at the client side for indoor positioning with different technologies.

Approaches based on Wi-Fi need further improvements and should be combined with alternative approaches like 2D graph data, inertial positioning or other positioning methods. Crowd-sourcing approaches could also help in solving the formulated problem but need deeper exploration. The evaluation has shown that the major objectives are feasible and that WFS data can be adapted to the Wi-Fi fingerprinting model and combined into one precise localization system. In summary, the presented work is a first step towards the envisioned goal. Our future work will address the challenges of improving the approach.

ACKNOWLEDGMENT

This paper is based upon research made in the framework of the Celtic-Plus project "CRUMBS: Crumbs, Places and Augmented Reality in Social Network" supported by the Polish National Centre for Research and Development (Grant No. E! CP7-004/35/NCBiR/11).

REFERENCES
[1] G. Dedes and A. Dempster, "Indoor GPS positioning - challenges and opportunities," in Vehicular Technology Conference, 2005. VTC-2005-Fall. 2005 IEEE 62nd, vol. 1, Sept. 2005, pp. 412-415.
[2] C. Nagel, T. Becker, R. Kaden, K.-J. Li, J. Lee, T. H. Kolbe, "Requirements and Space-Event Modeling for Indoor Navigation," OGC 10-191r1, OpenGIS® Discussion Paper.
[3] J. Clerk Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., vol. 2. Oxford: Clarendon, 1892, pp. 68-73.
[4] H. Lepprakoski, S. Tikkinen, A. Perttula, and J. Takala, "Comparison of indoor positioning algorithms using WLAN fingerprints," in Proceedings of the European Navigation Conference on Global Navigation Satellite Systems (ENC-GNSS 2009).
[5] M. Mabrouk, "OpenGIS Location Services (OpenLS): Core Services," Open Geospatial Consortium Inc., Tech. Rep. OGC 07-074, version 1.2, 2008.
[6] T. Gallagher, B. Li, A. G. Dempster, C. Rizos, "Database updating through user feedback in fingerprint-based Wi-Fi location systems," in Proceedings of Positioning Indoor Location Based Service, 2010, pp. 1-8.

- chapter 9 -

Computing & Processing (Hardware/Software)


On-board navigation system for smartphones

Mauricio César Togneri

Michel Deriaz

Institute of Services Science University of Geneva Switzerland [email protected]

Institute of Services Science University of Geneva Switzerland [email protected]

Abstract—Several mobile solutions offer the possibility to download maps and use them offline at any moment. However, most of the time a connection to an external server is still needed in order to calculate routes and navigate. This represents an issue when traveling abroad due to roaming costs. In this paper, we propose a solution to this problem through an engine that stores and manages OpenStreetMap's data to consult points of interest, calculate routes and navigate without requiring any connection. The software manages indoor and outdoor information to provide a full navigation service that works in both environments. Therefore, the same system allows navigating on a highway by car and provides indoor navigation for museums, hospitals and airports, among others. The result is an on-board engine for smartphones that provides indoor and outdoor navigation services without requiring an Internet connection.

Keywords: on-board, navigation, indoor, outdoor, smartphones

I. INTRODUCTION

Nowadays we can find several web mapping services, among which we can highlight Google Maps1, Bing Maps2 and Nokia Here3. All these solutions also provide online navigation services and represent a very important tool in several situations. However, we may encounter situations with specific constraints where an Internet connection is not allowed or guaranteed. In this case we cannot rely on online services and we are forced to use a solution where the navigation services work offline. Another important aspect of the web mapping services is their large coverage, mostly at a worldwide level. This characteristic allows us to take a look at almost every corner of the world and calculate routes between two points that are thousands of kilometers away from each other. Nonetheless, most of the biggest providers have a closed source of information that we cannot change or access freely. There are some exceptions like MapShare4 from TomTom or MapMaker5 from Google that allow users to modify certain parts of the map. However, users do not have the rights over the edited maps and all contributions become the property of the companies (map information remains proprietary and not free).

One exception to this problem is OpenStreetMap6, a collaborative project to create a free editable map of the world that provides geographical data to anyone. Thanks to this project, users can freely access a world map, modify it and create their own maps. Another aspect of navigation systems is the availability of indoor maps. This topic is relatively new and most solutions do not provide a large coverage for indoor navigation. Although it is possible to find indoor services in important places (e.g., airports, big shopping malls, etc.), users still rely on providers to have access to indoor navigation. Therefore, our goal is to create a navigation system that uses a source of information that is free, continuously growing and easy to modify, a system that works offline and allows navigating both indoors and outdoors. In this paper we present a solution that takes advantage of OpenStreetMap's data and format to create a navigation system for smartphones. This approach solves the problem of service unavailability due to a lack of Internet connection and allows indoor and outdoor navigation. The result is a generic navigation system that can be integrated in different situations, for example:

- For touristic purposes, in a guide application to explore a new city (offline feature).
- As a customized navigation system for museums, hospitals, airports, etc. (indoor feature).
- As customized car navigation for companies to control their fleet (personalization feature).
- As a route planner for emergency situations (high-availability feature).

The remaining sections are structured as follows. Section II provides an overview of the related work in the area of mobile navigation systems. Section III describes the main architecture and implementation of the module. In Section IV we present an example application that shows the services provided by the system. Finally we present our conclusions and future work in Section V.

1 Google Maps, http://www.google.com/maps
2 Bing Maps, http://www.bing.com/maps
3 Nokia Here, http://www.here.com
4 TomTom MapShare, www.tomtom.com/mapshare
5 Google MapMaker, http://www.google.com/mapmaker
6 OpenStreetMap, http://www.openstreetmap.org


II. RELATED WORK

From a market perspective, most mapping solutions offer the possibility to download maps and use them offline at any moment. The best known example is Google Maps and its mobile application7, which allows users to download maps to the phone. However, while this helps us locate ourselves on the map, it is not able to provide offline navigation services because a request to the server still needs to be made. There are some mobile solutions, such as Sygic: GPS Navigation & Maps8 or ROUTE 66 Maps + Navigation9, which allow downloading maps and calculating routes offline. Even so, those kinds of applications are closed and not free, and users have no control over the map they are using. They are allowed to report bugs or problems on the roads, but it is not guaranteed that the changes will be applied. Some other mobile applications, such as Navfree: Free GPS Navigation10 and OsmAnd Maps & Navigation11, solve the previous problem. Both are good examples of free applications that use OpenStreetMap as a source of maps. However, these applications do not offer indoor navigation.

From a research perspective, Jiang, Fang, Yao and Wang [1] present a full infrastructure to deploy an indoor and outdoor navigation system. However, the model relies on a network architecture that uses servers to provide the navigation services. This solution also uses a specific handheld device, which makes it difficult to implement the system in a real situation. Moreover, in the work of Li and Gong [2] we find another attempt to create a system that integrates indoor and outdoor navigation. Nevertheless, we encounter the same problem of a server connection dependency. In this case the system uses the Google Maps API to acquire outdoor routes and a local server to provide the product querying services and the indoor route calculation.

The novelty of our work is a software module called NaviMod (Navigation Module) that solves all the previous problems. In other words, it is a system that works offline, uses an open, free and customizable source of maps, and allows both indoor and outdoor navigation.

III. ARCHITECTURE AND IMPLEMENTATION

The entire system consists of an Android library, which means that it can be integrated into a variety of devices, such as smartphones, tablets, smart watches and the future Google Glass, among others.

A. Requirements
The system assumes that the final application has access to a position provider, because the navigation system requires the user's location in order to perform certain navigation services. The final application acquires the user's location from the position provider and sends it to the navigation module, as shown in Figure 1. This external module provides the device's position using the World Geodetic System, revision WGS 84. Inside the navigation module, all points on the map as well as the user's position are represented using latitude, longitude and altitude following the specified coordinate system. Therefore, every provider should be compatible with this system in order to be used as a valid position provider for the navigation module.

Figure 1. System architecture

B. Map management
The system provides the necessary tools to convert the source maps from OpenStreetMap (i.e., files with the osm extension) into a database. Hence, users can customize their application by adding personalized maps. There are two types of scenarios:

- Outdoor maps: OpenStreetMap provides outdoor maps for the whole world. Users can download specific regions and add them to their applications as exchangeable maps; for example, a user can download the map of a specific city, a country, a continent, etc. The module is able to manage several maps at the same time. After downloading a specific map, users are allowed to perform local modifications that they do not want to upload to OpenStreetMap's servers. For example, users can locally modify a map to adapt it to a certain kind of social event (e.g., marathons, conferences, expositions, etc.) in order to improve the navigation services. These kinds of modifications are not meant to be uploaded to the servers since they take place only over a short period of time.

7 Google Maps Mobile, http://www.google.com/mobile/maps
8 Sygic: GPS Navigation & Maps, http://www.sygic.com
9 ROUTE 66 Maps + Navigation, http://www.66.com
10 Navfree, http://www.navmii.com/gpsnavigation
11 OsmAnd Maps & Navigation, http://www.osmand.net



- Indoor maps: Due to OpenStreetMap not providing indoor maps, the user has to create his own maps for specific buildings. Unlike the research made by Gotlib, Gnat, and Marciniak [3], we decided not to use a complex format but to adapt the proposal for indoor mapping for OpenStreetMap [4] for future compatibility. These maps can be shared between users in order to collaborate and improve the navigation service. Once the map is finished, the tool will convert the source map into a database containing the indoor map and the information related to it (i.e., points of interest).

Figure 2. An indoor map design using JOSM

Figure 2 shows an example of an indoor map design for the second floor of the University Computing Center at the University of Geneva. The network map was designed using the JOSM12 tool, which can be used to edit outdoor maps as well. The map contains all the corridors and their connections. Each point of interest (e.g., offices, classrooms, etc.) is represented by a node in the map. The result of the conversion of the source map is a directed graph that contains all the points and their connections inside the map. This network is stored in a database, called the "map network", which will be used to reconstruct the graph and calculate routes. The reason why the module does not use the osm source files directly is that the database approach offers better performance in accessing the data, eliminating all the unnecessary information that is not used by the navigation system.

C. Indoor maps
As previously mentioned, the indoor maps are created using the official OpenStreetMap source format. Therefore, the geographical coordinates related to a node or a point of interest are represented using the same format as in the outdoor maps, keeping a strong compatibility between both environments. OpenStreetMap's format offers a free tagging system that allows the map to contain unlimited data about its elements. A tag consists of a key and a value that are used to describe elements. The community has agreed on a set of standard tags to represent the most common points of interest in a map (e.g., offices, toilets, cafeterias, rooms, elevators, stairs, etc.). Hence, indoor maps can use the same set of tags to represent indoor elements as well. Figure 3 represents a point of interest using standard tags in OpenStreetMap.

Figure 3. A point of interest using standard tags in OpenStreetMap
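Such a node is plain OSM XML. A hypothetical example follows; the id, coordinates and values are invented for illustration, and the indoor and level keys follow indoor-tagging proposals rather than the core standard set:

```xml
<node id="-101" lat="46.19553" lon="6.14052">
  <tag k="indoor" v="room"/>
  <tag k="name" v="Classroom 2.07"/>
  <tag k="level" v="2"/>
</node>
```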

This set of standard tags does not take into account several elements that can be found in specific indoor environments such as hospitals, museums or airports. However, thanks to the free tagging system, users can define their own tags and then create any kind of point of interest needed. For example, Figure 4 represents a printer as a point of interest. This is a common element that is often found in offices and is not covered by the standard set of OpenStreetMap tags. However, the rules to describe indoor maps are flexible enough to allow users to define and create their own elements for new scenarios.

Figure 4. A custom point of interest using OpenStreetMap's format
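For illustration, a custom printer point of interest like the one in Figure 4 could be expressed in OpenStreetMap's XML source format roughly as follows; the node id, coordinates and tag values are our assumptions, not an agreed OSM convention:

```xml
<node id="-101" lat="46.19534" lon="6.14028">
  <!-- free tagging: any key/value pair may be attached to a node -->
  <tag k="amenity" v="printer"/>
  <tag k="name" v="Printer, 2nd floor"/>
</node>
```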

D. Navigation instructions

The navigation module is able to provide turn-by-turn navigation instructions to follow the calculated route. To accomplish this task, the module needs the current user's position to perform a technique called map matching. This technique merges the data from the position provider and the map network to estimate the user's location that best matches the calculated route. It is necessary because the location acquired from the position provider is subject to errors. It also offers a smoother transition between the successive positions acquired by the position provider and avoids unexpected jumps in the position. Once the user's position is matched with the map, the navigation module is able to calculate the next turn that the user needs to perform, as well as the distance to it (e.g., turn to the left in 15 meters). The navigation module reports this event to the main application, which is in charge of displaying it on the screen.

E. Route algorithm

The module is able to calculate routes between two points that can be separated by a few meters or hundreds of

JOSM, http://josm.openstreetmap.de


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

kilometers. The route algorithm allows setting the mode of travel (i.e., by car, by foot, by bike or by wheelchair). The algorithm chosen to calculate routes is A*, a generalization of the Dijkstra algorithm, as explained in [5], [6] and [7]. The only difference is that A* uses a heuristic function (also called h(x)) in order to speed the algorithm up. The A* algorithm offers better performance than the Dijkstra algorithm thanks to the heuristic function, which "guides" the search towards the target node inside the network. The current implementation of the A* algorithm can use two heuristics:

- Distance (Euclidean): for traveling by foot, by bike or by wheelchair, to calculate the shortest route
- Time: for traveling by car, to calculate the fastest route
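The route search can be sketched as a textbook A* with a Euclidean heuristic. The adjacency-list graph and coordinate table below are illustrative assumptions, not the module's actual data model:

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* over an adjacency-list graph; edge costs and the heuristic h(x)
    are Euclidean distances between node coordinates (illustrative sketch)."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)

    best_g = {start: 0.0}                      # cheapest known cost per node
    open_set = [(h(start), 0.0, start, [start])]
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nbr in graph[node]:
            (x1, y1), (x2, y2) = coords[node], coords[nbr]
            ng = g + math.hypot(x1 - x2, y1 - y2)
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                # f = g + h guides the expansion towards the goal
                heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None
```

Swapping the Euclidean heuristic for an estimated travel time would give the "fastest route" variant used for car travel.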

Regarding performance, we encountered a few problems, not in the execution of the algorithm itself but in loading the information from the database. We faced latency problems when managing outdoor maps, since the module needs to handle a database of around 500,000 records for a single city. Table I shows different measurements of the performance of the algorithm. Each row contains the average over several executions of the calculation of a route. We chose four scenarios that calculate a route between two points separated by 580 m, 790 m, 2 km and 5 km respectively. The results show that most of the time is consumed loading the information from the database, and only a small part running the algorithm to process the loaded nodes and links.

TABLE I. ALGORITHM PERFORMANCE

Nodes loaded/processed | Total time | Database time | A* time
640 / 110              | 165 ms     | 110 ms (67%)  | 55 ms (33%)
900 / 140              | 260 ms     | 190 ms (73%)  | 70 ms (27%)
4800 / 2300            | 1240 ms    | 1070 ms (86%) | 170 ms (14%)
18300 / 13300          | 5730 ms    | 5200 ms (91%) | 530 ms (9%)

We can also see that as we increase the distance between the start and end points, the ratio between processed and loaded nodes changes. In the last scenario, a total of 13,300 nodes were processed out of the 18,300 nodes loaded into memory, meaning that 72.6% of the loaded nodes were actually used in the computation of the algorithm. Expressed as surface area, 13,300 nodes correspond to 10.9 km² and 18,300 nodes to 15 km².

IV. RESULTS

Since the system consists only of an Android library, we created an example application to demonstrate all the services that the module can provide. Specifically, it is an Android application for smartphones and tablets. As a position provider we used an internal module called GPM (Global Positioning Module) [8], a hybrid positioning framework for mobile devices that provides the user's location to the final application, both indoors and outdoors. However, if the final application is meant to be used only in outdoor environments, the position provider could rely on the GPS signal alone.

The example application shows the user's current location and allows calculating routes between two points. The user can select as a start or end point: his current position, a point on the map (touching the screen) or a point of interest from the catalog. In this case we used the Google Maps viewer to show that the navigation module is independent of the map viewer of the final application. This means that the navigation module only provides the services; it is the responsibility of the final application to display the results (on a 2D or 3D map, using augmented reality, voice instructions, etc.).

A. Outdoor

In this case the application works as a standard navigation system (e.g., TomTom) that allows navigating in the city. The example application contains the network map of the city of Geneva. However, if the application is meant to be used in another city, the user just needs to generate the map of the correct region and add it to the application. Figure 5 shows:



A route by foot from the current user’s position to a point of interest in the city. In this case a static route (with the total distance and the estimated time) is displayed to the user, who can accept it and start the navigation or cancel it.



The user following another route by car. In green the path behind (already done) and in red the path ahead. In this case, the navigation module will monitor the user’s position, perform the map matching and provide the correct turn-by-turn directions in real time to guide the user to his destination.

B. Indoor The example application also contains an indoor map of a building so the user is able to navigate within it. The map network of the building also contains information about the points of interest inside of it (e.g., offices, classrooms, cafeterias, toilets, etc.). Therefore, a user who enters the building for the first time and needs to reach a specific room can use the application to find it in the points of interest catalog and navigate to it.


On the other hand, OpenStreetMap's data does not contain enough information to perform geocoding in an outdoor environment. Therefore, it is not possible to find the associated geographic coordinates (often expressed as latitude and longitude) from other geographic data, such as street addresses or postal codes. Due to this limitation, for the moment the user can only select a starting point or destination by choosing a point of interest from the catalog, selecting a point on the map (touching the screen) or using his current position. Another limitation is that in the current version the user can only travel indoor-to-indoor or outdoor-to-outdoor: it is not possible to travel from an outdoor position to an indoor place, or vice versa, because both environments are kept in two separate maps. From the performance perspective, the route algorithm used needs to be improved in order to optimize the node processing and reduce the response time. Currently, solutions such as Navfree or OsmAnd implement algorithms that compute the route in half the time. This is an important point to take into account for future improvements of the module.

Figure 5. Example of outdoor navigation

The final application is able to show the correct floor plan at each moment using the user's altitude and the map network. Figure 6 shows an indoor path between two rooms and the user navigating by foot through the same route. In this case the navigation system provides turn-by-turn directions specific to an indoor environment (e.g., no street names or maximum speed indications are shown).

V. CONCLUSIONS AND FUTURE WORK

Thanks to the module implementing all the navigation services, the final application remains small: it is limited to showing the graphical map, receiving the input parameters from the user and displaying the results provided by the navigation system.

Additionally, we are looking into creating connections between maps in order to calculate routes between indoor and outdoor environments. Furthermore, we plan to work on the taxonomy of indoor maps to offer a better indoor navigation service.

ACKNOWLEDGMENT

This work is supported by the AAL Virgilius Project (aal2011-4-046).

Figure 6. Example of indoor navigation

REFERENCES
[1] Yali Jiang, Yuan Fang, Chunlong Yao, and Zhisen Wang, "A design of indoor & outdoor navigation system," in Proceedings of ICCTA 2011.
[2] Hui Li and Xiangyang Gong, "An approach to integrate outdoor and indoor maps for books navigation on the intelligent mobile device," in IEEE 3rd International Conference on Communication Software and Networks (ICCSN), 2011.
[3] Dariusz Gotlib, Milosz Gnat, and Jacek Marciniak, "The Research on Cartographical Indoor Presentation and Indoor Route Modeling for Navigation Applications," in International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2012.
[4] Indoor mapping proposal for OpenStreetMap, as of October 2013, http://wiki.openstreetmap.org/wiki/Indoor_Mapping
[5] Ioannis Kaparias and Michael G. H. Bell, "A Reliability-Based Dynamic Re-Routing Algorithm for In-Vehicle Navigation," in Annual Conference on Intelligent Transportation Systems, 2010.
[6] Peter E. Hart, Nils J. Nilsson, and Bertram Raphael, "A Formal Basis for the Heuristic Determination of Minimum Cost Paths," IEEE Transactions on Systems Science and Cybernetics, 1968.
[7] T. M. Rao, Sandeep Mitra, and James Zollweg, "Snow-Plow Route Planning using AI Search," 2011.
[8] Anja Bekkelien and Michel Deriaz, "Hybrid Positioning Framework for Mobile Devices," in 2nd International Conference on Ubiquitous Positioning, Indoor Navigation, and Location Based Service (UPINLBS), 2012.


A Gyroscope Based Accurate Pedometer Algorithm Sampath Jayalath Department of Electrical and Computer Engineering Sri Lanka Institute of Information Technology Colombo, Sri Lanka [email protected]

Nimsiri Abhayasinghe Department of Electrical and Computer Engineering Curtin University Perth, Western Australia [email protected]

Abstract—Accurate step counting is important in pedometer based indoor localization. Existing step detection techniques are not sufficiently accurate, especially at the low walking speeds commonly observed when navigating unfamiliar environments. This is more critical when vision impaired indoor navigation is considered, since vision impaired users walk at relatively low speeds. Almost all existing pedometer techniques use accelerometer data to identify steps, which is not very accurate at low walking speeds. This paper describes a gyroscope based pedometer algorithm implemented in a smartphone. The smartphone is placed in the trouser pocket, a usual carrying position of a mobile phone. The gyroscope sensor data is used for the identification of steps. The algorithm was designed to demand minimal computational resources so that it can be easily implemented on an embedded platform. Raw data from the sensor are filtered using a 6th order Butterworth filter for noise reduction. The filtered signal is then sent through a zero crossing detector which identifies the steps. A minimum delay between two consecutive zero crossings is enforced to avoid fluctuations being counted, and peak detection is used to validate steps. The algorithm has a calibration mode, in which the absolute minimum swing of the data is learnt to set the threshold. This approach demonstrated accuracies above 96% even at slow walking speeds on flat land, above 95% when walking up/down hills and above 90% when going up/down stairs. This supports the concept that the gyroscope can be used efficiently in step identification for indoor positioning and navigation systems.

Index Terms—pedometer algorithms; gyroscopic data; single-point sensors; step detection; localization and navigation; vision impaired navigation

I. INTRODUCTION

Accurate step counting is a critical parameter in pedometer based indoor localization systems, improving their accuracy and reliability. Existing step detection techniques, both hardware and software, do not satisfactorily meet the accuracies demanded by localization systems, especially at the low walking speeds observed in natural walking [1]-[3]. The situation may be worse when vision impaired indoor navigation is considered, especially in an unfamiliar environment. Most existing pedometers use accelerometer data to detect steps and are based on threshold detection [4], [5]. The pedometer algorithm discussed in this paper builds on the proposal of using gyroscopes for human gait identification in indoor localization made by Abhayasinghe and Murray [6]. This research is part of an indoor navigation system for vision impaired people.

Iain Murray Department of Electrical and Computer Engineering Curtin University Perth, Western Australia [email protected]

The performance of some existing pedometers is discussed in the "Background" section, whereas the novel gyroscope based pedometer algorithm and its performance are discussed in the "Step Detection Algorithm" and "Experimental Results" sections of this paper.

II. BACKGROUND

Jerome and Albright [1] compared the performance of five commercially available talking pedometers with the involvement of 13 vision impaired adults and 10 senior adults, and observed that the step detection accuracy of all of them was poor (41-67%) while walking on flat land, and the situation was worse when ascending stairs (9-28%) or descending stairs (11-41%). Crouter et al. [2] compared 10 commercially available electronic pedometers and confirmed that they underestimate steps in slow walking. Garcia et al. [3] compared the performance of software pedometers and hardware pedometers and observed that the two types are comparable at all walking speeds, and that both demonstrated poor accuracy at slow (58 to 98 steps·min−1) walking speeds: 20.5% ± 30% for the hardware pedometer and 10% ± 30% for the software pedometer. Waqar et al. [4] used an accelerometer based pedometer algorithm with a fixed threshold in their indoor positioning system. They reported a mean accuracy of 86.67% over 6 trials of 40 steps each, with a minimum accuracy of 82.5% and a maximum of 95%. The median accuracy was 85%. A smartphone pedometer algorithm based on the accelerometer is discussed by Oner et al. [5]; their algorithm demonstrated sufficient accuracies at walking speeds higher than 90 beats per minute (bpm), but its performance degrades as speeds fall below 90 bpm. Their algorithm over counted steps, and the error was approximately 20% at 80 bpm, 60% at 70 bpm and 90% at 60 bpm. Lim et al. [7] proposed a foot mounted gyroscope based pedometer, but the authors have not mentioned the accuracy of their system.
Further, they use force sensitive resistors (FSRs) to detect the toe and heel contacts, and hence the accuracy of step detection should be higher, as the Initial Contact can easily be detected using the FSRs. Ayabe et al. [8] have examined the performance of some commercially available pedometers in stair climbing and bench



stepping exercises, and recorded that the pedometers could count steps with an error of ±5% at speeds of 80 to 120 steps·min−1. However, the accuracy was poor for small step sizes and lower stepping rates (> ±40% at 40 steps·min−1). Most of the examples discussed here used accelerometer data to detect steps, and they perform poorly at slow walking speeds. The main reasons for this poor performance at low speeds are the static value (gravitational acceleration) present in the accelerometer reading, the slow response of the accelerometer, and the fact that most of these algorithms cannot adapt their threshold levels to the pace of walking. This raises the need for an accurate step detection technique at slow walking speeds.

III. STEP DETECTION ALGORITHM

A. Introduction

The work presented in this paper is based on the proposal made in [6] that gyroscopic data can be used exclusively for gait recognition in indoor navigation applications. The authors proposed that the output of a single point gyroscope sensor located in the pants pocket gives sufficient information to track the movement of the thigh and hence detect the steps.

B. Relationship Between Gyroscopic Data and Movement of the Thigh


A stride cycle is measured from the Initial Contact of one heel to the next Initial Contact of the same heel [9]. At the Initial Contact, the deflection of the thigh in the forward direction is at a maximum. Fig. 1 shows the orientation of the thigh computed using gyroscopic data, together with the low-pass filtered (6th order Butterworth low-pass filter with a cutoff frequency of 5 Hz) gyroscopic X axis reading. Initial Contact points and the stride cycle identified based on the orientation are marked on the graph. The initial orientation when the leg is at rest was calculated by fusing accelerometer and compass data. For this computation, the static offset of the gyroscopic data was removed by subtracting the average. It can be clearly seen that the filtered gyroscopic data is close to zero at the Initial Contact point of the particular leg and has a negative gradient. Hence, the period from one negative gradient zero crossing point of the filtered gyroscope reading to the next is a stride cycle, as shown in the figure.
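For concreteness, this kind of low-pass filtering can be reproduced with SciPy's Butterworth design routines. This is a stand-in sketch, not the authors' implementation; the 100 Hz sampling rate and the synthetic two-tone signal are our assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
# synthetic gyro-X trace: a 1 Hz gait component plus 20 Hz noise
gyro_x = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 20.0 * t)

# 6th order Butterworth low-pass, 5 Hz cutoff, applied zero-phase
b, a = butter(6, 5.0 / (fs / 2), btype="low")
smooth = filtfilt(b, a, gyro_x)
```

After filtering, `smooth` retains the 1 Hz gait component while the 20 Hz noise is suppressed, which is what makes the later zero-crossing detection reliable.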

It was also observed that the negative gradient zero-crossing corresponds to the Initial Contact of that leg when walking on stairs and on an inclined plane too. Therefore it is clear that zero crossing detection of the filtered gyroscopic data may be used to detect the stride cycle, and hence the steps, even if the person is walking on stairs or on an inclined surface. In line with these observations, the device is assumed to be in a vertical placement where forward and backward rotation of the thigh is read as the gyroscopic X reading. Hence the real time processing is limited to gyro-X only.

C. Pre Processing of Data

Before attempting to identify zero crossings, the gyroscopic X axis data is filtered with a 6th order discrete Butterworth low-pass filter with a cutoff frequency of 3 Hz. 3 Hz was selected as the cutoff frequency because the mean rate of fast gait is in the range of 2.5 steps per second [10]. The cutoff frequency was lowered as much as possible for better smoothness of the waveform, so that the unwanted oscillations around zero are minimal but the stride cycle is still visible in the waveform.

D. Zero-Crossing Detector

A simple 2-point zero-crossing detection was used to simplify the algorithm. Both positive and negative zero-crossings were detected by alternating the polarity of the zero-crossing detector, because the positive zero-crossing corresponds to the starting point of Pre Swing of the particular leg, or the Initial Contact of the other leg. Hence, the total count of zero-crossings is the number of steps the person has walked.

E. Avoiding False Detections

As indicated by the circle in Fig. 1, the filtered gyroscopic signal may cross zero with a negative gradient more than once during the period from Initial Contact to Loading Response. However, because this period is within 0-10% of the gait cycle [9], a timeout mechanism was used to avoid these unwanted zero-crossings being detected.
Once a zero-crossing is detected, the zero-crossing detector remains disabled for 100 ms to avoid detecting these multiple zero crossings. 100 ms was selected as 15% of the stride cycle, assuming a step frequency of 1.5 steps per second for slow gait [10]. This time delay is 30% of the stride cycle of an average fast gait of 3 steps per second, and hence it will not disturb the detection of the next zero-crossing in fast gait.
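As an illustration only, the alternating-polarity zero-crossing detection with a refractory period and a simple peak-validation check can be sketched as follows. The function name, parameter values and threshold are our assumptions, not the paper's code, and the input is assumed to be the already low-pass filtered gyro-X signal:

```python
import math

def count_steps(samples, fs=100, refractory_s=0.1, threshold=0.5):
    """Count steps as alternating-polarity zero-crossings of the filtered
    gyro-X signal (illustrative sketch of the detector described above)."""
    steps = 0
    polarity = -1                        # first expected crossing: negative gradient
    refractory = int(refractory_s * fs)  # detector stays disabled this many samples
    last_cross = -refractory
    n = len(samples)
    for i in range(1, n):
        prev, cur = samples[i - 1], samples[i]
        crossed = (polarity < 0 and prev >= 0 > cur) or \
                  (polarity > 0 and prev <= 0 < cur)
        if crossed and (i - last_cross) >= refractory:
            # walk forward to the peak that follows the crossing
            j = i
            while j + 1 < n and abs(samples[j + 1]) >= abs(samples[j]):
                j += 1
            if abs(samples[j]) >= threshold:  # validate against the learnt swing
                steps += 1
                last_cross = i
                polarity = -polarity          # alternate the detector polarity
    return steps
```

On a clean periodic signal, each half-cycle produces exactly one validated crossing, so small oscillations around zero inside the refractory window are ignored rather than counted as extra steps.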

Figure 1. Orientation of the thigh with filtered gyroscope-X axis reading when walking on flat land

F. Validating the Detected Zero Crossings

A threshold detection mechanism was used in the algorithm to validate each detected zero-crossing. As shown in Fig. 1, the gyroscopic reading reaches the corresponding peak after the zero-crossing point. However, in the area marked by the circle, the relative maximum is well below the peak of the signal, and that relative maximum does not correspond to the middle of the swing of a leg; hence it needs to be eliminated. The algorithm includes a calibration mode in which the user has to walk at the slowest possible speed, so that the smallest deflection of the gyroscope signal is learnt by the algorithm. After detecting a


zero-crossing, the algorithm checks for the peak that follows the zero-crossing, and checks whether it is larger than the threshold. The counter is incremented only if the peak is larger than the threshold.

G. The Step Detection Algorithm

A flow chart illustrating the step detection algorithm is depicted in Fig. 2. It should be noted that both positive and negative zero-crossings are detected by the algorithm, and the polarity to be checked is toggled after each detection. However, the polarity toggling is not indicated in the figure, to reduce graphical complexity.

H. Implementation of the Algorithm

The algorithm was implemented in Matlab® for simulation purposes and, after confirming its outcomes using prerecorded data, it was implemented on an Apple iPhone 4S. During the implementation it was noticed that the algorithm could count the movements of the phone while in the hand, when placing the phone in the pocket before a trial and when taking it out of the pocket after the trial. Because the Apple license does not allow the use of some phone features [11], such as the ambient light sensor to detect placement in the pocket, a timeout mechanism and a manual correction were used at the beginning and at the end of each trial respectively. After pressing the start button, the application allows a timeout for the user to place the phone in the pocket. The algorithm starts detecting steps only after the timer has timed out. A manual decrement of the total count by one was done to compensate for the false count at the end, when the phone is taken out of the pocket.

IV. EXPERIMENTAL RESULTS

The simulations indicated that the accuracy of step counting of the algorithm on prerecorded data was 100%. The algorithm

was tested in the real world for five different activities: walking on flat land, upstairs, downstairs, uphill and downhill, with the involvement of 5 male and 5 female volunteers. They were asked to place the phone vertically in the pants pocket and perform the relevant activity. The tests were conducted in two stages: first with a normal walking speed and then with five different stepping rates (50, 75, 100, 125 and 150 steps·min−1). The actual number of steps that the subject traveled was counted for each trial by a note taker. Table I shows sample results of a single subject performing different activities at a normal stepping rate. In that set of trials, the algorithm showed above 95% accuracy in every activity. Table II shows statistics of the actual number of steps, the number of steps counted by the algorithm, and the accuracy over all trials. It can be seen that the algorithm showed a minimum mean accuracy of 94.55%, for going downstairs, and a minimum reported accuracy over all trials of 90.91%, for stair climbing (both up and down). However, the minimum accuracy reported for walking on flat land is 96.00%, with a maximum of 100%. The algorithm reported accuracies greater than 95% for walking on an inclined surface, with a mean accuracy of 97.17% for going down and 98.18% for going up. The second set of experiments was conducted for walking on flat land and on stairs only, where the subjects were asked to walk at five stepping rates: 50, 75, 100, 125 and 150 steps·min−1. For walking on flat land, the minimum accuracy of 94.59% was reported at 75 steps·min−1, whereas the mean accuracy for that speed was 97.89%. The statistics are shown in Table III. However, the minimum accuracy reported at 50 steps·min−1 was 96%, and the accuracy was greater than 96% at all other stepping speeds. The minimum accuracy reported in going up stairs and down stairs was 90.91%, where the total number of steps considered in each case was 11. Although this is the absolute minimum, the lowest mean accuracy reported when walking up stairs was 96.36%, at 75 and 125 steps·min−1. For walking down stairs, the lowest mean accuracy reported was 95.45%, at stepping speeds of 50 and 125 steps·min−1.

V. DISCUSSION AND FUTURE WORK

Trials of walking on stairs had to be limited to 11 steps per trial due to the unavailability of long stairways. Due to this reason,

Figure 2. Flow Chart of the Step Detection Algorithm

Table I. SAMPLE RESULTS OF ONE SUBJECT

Activity                    | Actual No. of Steps | Steps Counted by Algorithm | Accuracy (%)
Walking slowly on flat land | 27                  | 26                         | 96.30
Walking faster on flat land | 49                  | 49                         | 100.00
Walking up stairs           | 11                  | 11                         | 100.00
Walking down stairs         | 11                  | 11                         | 100.00
Walking up hills            | 40                  | 40                         | 100.00
Walking down hills          | 43                  | 41                         | 95.35

Table II. STATISTICS OF THE PERFORMANCE OF THE ALGORITHM FOR DIFFERENT ACTIVITIES

                                              | Actual Steps  | Counted Steps | Accuracy (%)
Activity                                      | Mean  | Var   | Mean  | Var   | Mean  | Var   | Min   | Max
Walking slowly on flat lands                  | 28.50 | 2.45  | 27.60 | 2.64  | 96.82 | 1.16  | 96.00 | 100.00
Walking fast on flat lands (100 steps·min−1)  | 49.10 | 1.29  | 48.50 | 0.65  | 98.80 | 1.73  | 96.08 | 100.00
Climbing up stairs                            | 11.00 | 0.00  | 10.70 | 0.21  | 97.27 | 17.36 | 90.91 | 100.00
Climbing down stairs                          | 11.00 | 0.00  | 10.40 | 0.24  | 94.55 | 19.83 | 90.91 | 100.00
Walking on inclined plane (up)                | 43.30 | 2.01  | 42.50 | 1.45  | 98.18 | 1.87  | 95.45 | 100.00
Walking on inclined plane (down)              | 42.20 | 1.36  | 41.00 | 1.20  | 97.17 | 2.02  | 95.24 | 100.00

Table III. STATISTICS OF THE PERFORMANCE OF THE ALGORITHM FOR WALKING ON FLAT LAND WITH DIFFERENT STEPPING RATES

                 | Actual Steps  | Counted Steps | Accuracy (%)
Stepping rate    | Mean  | Var   | Mean  | Var   | Mean  | Var   | Min   | Max
50 steps·min−1   | 25.90 | 1.09  | 25.50 | 0.85  | 98.49 | 3.43  | 96.00 | 100.00
75 steps·min−1   | 37.80 | 0.96  | 37.00 | 1.20  | 97.89 | 2.58  | 94.59 | 100.00
100 steps·min−1  | 51.00 | 1.00  | 49.90 | 1.29  | 97.85 | 1.89  | 96.00 | 100.00
125 steps·min−1  | 62.50 | 0.65  | 62.00 | 0.40  | 99.21 | 0.63  | 98.39 | 100.00
150 steps·min−1  | 74.50 | 0.65  | 73.90 | 1.69  | 98.92 | 0.66  | 97.26 | 100.00

the false count at the end of the trial is large as a percentage of the total number of steps. This is the main reason for the low accuracy. Although the number of steps will be small in a real application too, the phone will not be taken out of the pocket at the end of the staircase, and hence the aforementioned error count will not occur. In addition, vendor restrictions prevented us from using some facilities of the phone to detect whether the phone is in the pocket. This has also caused the accuracy of the algorithm for the other activities to drop below 100%. Implementing the algorithm on other platforms will be the next step, to see the real performance of the algorithm with all features. The algorithm discussed in this paper assumes a defined and fixed orientation of the phone in the pants pocket. The authors are currently working on improving the algorithm so that it can be used with different orientations in the pocket. The focus is to include an orientation correction into the algorithm such that the correct gyroscopic axis, or combination of axes, is used. However, the placement is still limited to the pants pocket, as the authors have identified the pants pocket as the most suitable place for device placement for step detection [6].

VI. CONCLUSIONS

This paper presented a single-point gyroscope based pedometer implemented in a smartphone as a component in the development of an indoor wayfinding system for people with vision impairment. From the testing conducted for different activities and different stepping speeds, the algorithm gave promising results and high step detection accuracy even at low walking speeds. Gyroscope based step detection can be easily used as an accurate step counting technique for indoor localization and navigation systems, not only on level terrain but also on tilted terrain and on stairs.

REFERENCES
[1] G. J. Jerome and C. Albright, "Accuracy of five talking pedometers under controlled conditions," The Journal of Blindness Innovation and Research [Online], vol. 1(2), June 2011. Available: www.nfbjbir.org/index.php/JBIR/article/view/17/38 [Oct. 27, 2011].
[2] S. E. Crouter, P. L. Schneider, M. Karabulut and D. R. Bassett, "Validity of 10 electronic pedometers for measuring steps, distance, and energy cost," Medicine & Science in Sports & Exercise, vol. 35, no. 8, pp. 1455-1460, Aug. 2003.
[3] E. Garcia, Hang Ding, A. Sarela and M. Karunanithi, "Can a mobile phone be used as a pedometer in an outpatient cardiac rehabilitation program?," in IEEE/ICME International Conference on Complex Medical Engineering (CME) 2010, Gold Coast, QLD, 2010, pp. 250-253.
[4] W. Waqar, A. Vardy and Y. Chen, "Motion modelling using smartphones for indoor mobile phone positioning," in 20th Newfoundland Electrical and Computer Engineering Conference [Online], Newfoundland, Canada, 2011. Available: http://necec.engr.mun.ca/ocs2011/viewpaper.php?id=55&print=1
[5] M. Oner, J. A. Pulcifer-Stump, P. Seeling and T. Kaya, "Towards the run and walk activity classification through step detection - an Android application," in 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, 2012, pp. 1980-1983.
[6] K. Abhayasinghe and I. Murray, "A novel approach for indoor localization using human gait analysis with gyroscopic data," in Third International Conference on Indoor Positioning and Indoor Navigation (IPIN 2012) [Online], Sydney, Australia, Nov. 2012. Available: http://www.surveying.unsw.edu.au/ipin2012/proceedings/submissions/22_Paper.pdf [Mar. 5, 2013].
[7] Y. P. Lim, I. T. Brown and J. C. T. Khoo, "An accurate and robust gyroscope-based pedometer," in 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), Vancouver, BC, 2008, pp. 4587-4590.
[8] M. Ayabe, J. Aoki, K. Ishii, K. Takayama and H. Tanaka, "Pedometer accuracy during stair climbing and bench stepping exercises," Journal of Sports Science and Medicine, vol. 7, pp. 249-254, June 2008.
[9] J. Perry, Gait Analysis: Normal Gait and Pathological Function. Thorofare, NJ: Slack, 1999, ch. 1-2.
[10] T. Oberg, A. Karsznia and K. Oberg, "Basic gait parameters: Reference data for normal subjects, 10-79 years of age," J. Rehabil. Res. Dev., vol. 30, no. 2, pp. 210-223, 1993.
[11] Apple Inc. (2010, Aug. 10), "Ambient Light Sensor" [Weblog entry], Apple Developer Forums. Available: https://devforums.apple.com/message/274229 [July 8, 2013].


2013 International Conference on Indoor Positioning and Indoor Navigation, 28th–31st October 2013

Bluetooth Embedded Inertial Measurement Unit for Real-Time Data Collection Ravi Chandrasiri Department of Electrical and Computer Engineering Sri Lanka Institute of Information Technology Colombo, Sri Lanka [email protected]

Nimsiri Abhayasinghe Department of Electrical and Computer Engineering Curtin University Perth, Western Australia [email protected]

Abstract—Inertial Measurement Units (IMUs) are often used to measure motion parameters of the human body in indoor/outdoor localization applications. Most commercially available low-cost IMUs have a limited number of sensors and are often connected to a computer by a wired connection (usually USB). The disadvantage of using wired IMUs in human gait measurement is that the wires disturb the natural gait patterns. Existing IMUs with wireless connectivity solve that problem but are relatively expensive. This paper describes the development and testing of a miniature IMU that can be connected to a Windows-based computer or an Android-based mobile device through Bluetooth. The IMU consists of a 3-axis accelerometer, a 3-axis gyroscope, a 3-axis magnetometer, a temperature sensor, a pressure sensor and an ambient light sensor. The sensors are sampled at a user-configurable frequency with a maximum of 100 Hz. Raw sensor data are streamed through the integrated Bluetooth module to the host device for further processing. The IMU is also equipped with a microSD card slot that enables on-board data logging. The power usage of the Bluetooth transmitter is optimized because only the sampled sensor data are transmitted. The Windows application can be used to view sensor data, plot them and store them into a file for further processing. The Android application can be used to view data as well as to record data into a file. The small size of the device enables it to be attached to any part of the lower or upper human body for the purpose of gait analysis. Comparison of the performance of the device with a smartphone indicated that the output of the IMU is comparable to that of the smartphone. Index Terms—indoor localization; IMU; 3-axis inertial sensors; human gait analysis

I. INTRODUCTION
Inertial Measurement Units (IMUs) are often used in indoor/outdoor localization applications and robotic applications to measure inertial parameters of the human body or the robot. Most commercially available low-cost IMUs are wired to a computer, usually using USB [1]. IMUs with wireless connectivity to a computer are costlier [1], [2]. Commercially available IMUs are usually equipped with a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer, but they do not include an ambient light sensor or a barometer [1], [2], which may also be important in indoor localization applications: the ambient light sensor may be used to detect different light levels in different areas, and the barometer to identify different floor levels. A temperature sensor may also be important in

Iain Murray Department of Electrical and Computer Engineering Curtin University Perth, Western Australia [email protected]

indoor localization applications to detect different temperature levels in different areas (e.g. higher temperature around a fireplace), and temperature sensors are sometimes included in commercially available IMUs for the purpose of temperature compensation of the inertial sensors [1]. The IMU discussed in this paper consists of an accelerometer, a gyroscope, a magnetometer, a temperature sensor, an ambient light sensor and a barometer. This IMU was developed as part of an indoor navigation system for vision-impaired people. The features and prices of two commercially available IMUs with wireless connectivity are compared in the "Related Work" section, and the development of the IMU, its hardware/software features and its performance are discussed in the "Construction of the IMU" and "Performance of the IMU" sections.

II. RELATED WORK
A series of IMUs with different features has been developed by YEI Technologies [1]. All these IMUs are equipped with a 14-bit 3-axis accelerometer, a 16-bit 3-axis gyroscope and a 12-bit 3-axis magnetometer. Key technical details are shown in Table I. They also include a temperature sensor that is used for temperature compensation of the inertial sensors. The cheapest of them (US$ 163), which comes as a standalone IMU, has USB and RS232 connectivity only, whereas the others, which have wireless connectivity, are costlier (US$ 304 for the Bluetooth version and US$ 247 for the wireless 2.4 GHz DSSS version). The version with on-board data logging (US$ 202) has USB connectivity only. The processor in all these devices is a 32-bit RISC processor running at 60 MHz. Two data modes, IMU mode and Orientation mode, are available in all these versions. Kalman filtering, alternating Kalman filtering, complementary filtering or quaternion gradient descent filtering can be selected as the orientation filter when not in IMU mode, in which case processed orientation is made available as the output. A maximum sampling rate of 800 Hz is available in IMU mode.
The IMU from x-io Technologies [2] is equipped with a 12-bit 3-axis accelerometer, a 16-bit 3-axis gyroscope and a 12-bit 3-axis magnetometer, of which the key technical details are given in Table I. This IMU, too, has a temperature sensor for

978-1-4673-1954-6/12/$31.00 ©2012


Table I: TECHNICAL DETAILS OF IMUS OF YEI TECHNOLOGIES AND X-IO TECHNOLOGIES

Sensor         Parameter   YEI Technologies   x-io Technologies
Accelerometer  Bit size    14 bits            12 bits
               Max. range  ±8 g               ±8 g
Gyroscope      Bit size    16 bits            16 bits
               Max. range  ±2000 °/sec        ±2000 °/sec
Magnetometer   Bit size    12 bits            12 bits
               Max. range  ±8.1 G             ±8.1 G
temperature compensation of the inertial sensors. It has a maximum sampling rate of 512 Hz and offers USB, Bluetooth and UART connectivity, as well as an SD card slot for on-board data logging. The device has an IMU algorithm and an Attitude and Heading Reference System (AHRS) algorithm running on-board for real-time orientation calculations. The price of this device is £309 (~US$ 460).

III. CONSTRUCTION OF THE IMU

Figure 1. Architecture of the IMU

A. Architecture
The IMU discussed in this paper consists of a 3-axis accelerometer, a 3-axis gyroscope, a 3-axis magnetometer, a temperature sensor, a barometric pressure sensor and an ambient light sensor. It is equipped with a USB port and a Bluetooth module to communicate with a computer, and an SD card slot for on-board data logging. The processor used in the IMU is an 8-bit AVR microcontroller running at 8 MHz on a 3.3 V supply. This microcontroller is a cheap, low-power (~10 mW at 8 MHz on 3.3 V [5]) processor, yet powerful enough to cater to the requirements of the IMU. Observations in [3] and [4] indicate that normal gait frequency is approximately 2 steps·s−1 and fast gait frequency approximately 2.5 steps·s−1; hence a 100 Hz sampling rate is sufficient to extract features of human gait using inertial sensors, and the maximum sampling rate of the IMU was selected as 100 Hz.
Fig. 1 shows the architecture of the IMU. Raw sensor data are first scaled appropriately (as different scales are available for most of the sensors) and then organized into frames as discussed in the "Data Acquisition and Transmission" section. These frames are streamed out through the USB 2.0 interface and the Bluetooth interface without any further processing. The scope was to present sensor data to the user without performing complex computations, so that the user has the flexibility to perform any computations/analysis on these data in an external processing device: either a personal computer or a smartphone. This avoids any "unknown" on-board data processing and keeps full control with the user.

B. Sensors
All the sensors have digital data output with an I2C interface. Vendor details, bit size, resolution and maximum range of each sensor are shown in Table II. All these sensors have technical specifications comparable with the sensors used in [1] and [2]. Because all sensors have an output data width of 16 bits or less (except for the barometer, which has both 16- and 19-bit modes), all sensor data were converted to 16-bit format for uniformity, so that manipulation of the data is easier.
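A sketch of such a normalisation, assuming two's-complement readings; the helper names are ours, and truncating the 19-bit barometer mode by dropping its lowest bits is our illustrative choice, not necessarily the authors':

```python
# Sketch (not the authors' firmware): normalising heterogeneous sensor word
# sizes to a uniform signed 16-bit representation, as described above.

def sign_extend(value: int, bits: int) -> int:
    """Interpret `value` as a signed two's-complement number of width `bits`."""
    sign_bit = 1 << (bits - 1)
    return (value & (sign_bit - 1)) - (value & sign_bit)

def to_int16(raw: int, bits: int) -> int:
    """Map a raw reading of width `bits` onto the int16 range.
    Readings wider than 16 bits (e.g. the 19-bit barometer mode)
    are right-shifted, losing the lowest bits."""
    v = sign_extend(raw, bits)
    if bits > 16:
        v >>= bits - 16
    return v

# A 13-bit accelerometer reading of -1 in two's complement:
assert to_int16(0x1FFF, 13) == -1
```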

C. Data Acquisition and Transmission
Although the accelerometer, gyroscope and magnetometer are sampled at 100 Hz, the barometer, light sensor and temperature sensor need not be sampled at such a high rate; hence they are sampled at 20 Hz. With these sampling rates, two types of data frames are created according to the availability of sensor data. The first type of data frame consists of all sensor data, while the second consists of accelerometer, gyroscope and magnetometer data only. Fig. 2 depicts the two frame patterns in the actual sequence of data. The baud rate was set to 115.2 kbit·s−1 to achieve reliable communication with the Bluetooth module. It should also be noted that there are data losses in practice, and the time to recover/retransmit such packets has to be accommodated; hence a faster baud rate was selected. As only the data of the sensors sampled in a given cycle are transmitted, the Bluetooth transmitter can stay idle for longer when only the inertial sensors are sampled, allowing the module to consume less power. Data acquired from the sensors are scaled and biased appropriately before being accumulated into frames. No other modification of the data is done in the controller, in order to give the user the flexibility to perform the required processing once the data are collected.

D. Windows Application
The Windows application is the main interface that allows the user to view the sensor data transmitted from the IMU. Once the COM port and the baud rate are selected, the user can connect the IMU to the computer through the Bluetooth link. The main graphical user interface (GUI) of the application has two parts: one shows the raw data of each sensor, while the second part shows processed values. Accelerometer values (in g), gyroscope values (in rad·s−1), compass heading (in degrees), pressure (in Pa), altitude (in meters w.r.t. sea level),


Table II: TECHNICAL DETAILS OF SENSORS [6]–[11]

Sensor         Part Number  Vendor             Bit resolution  Output bits  Resolution         Range
Accelerometer  ADXL345      Analog Devices     10/11/13 bits   16 bits      4 mg/LSb           ±2, ±4, ±8, ±16 g
Gyroscope      ITG3200      InvenSense         16 bits         16 bits      0.0696 °/sec/LSb   ±2000 °/sec
Magnetometer   HMC5883L     Honeywell          12 bits         12 bits      4.35 mG/LSb        ±8 G
Barometer      BMP085       Bosch Sensortec    16/19 bits      16/19 bits   0.01 hPa           300 – 1100 hPa
Temperature    TMP102       Texas Instruments  12 bits         12 bits      0.0625 °C          −40 – +125 °C
Ambient Light  TSL2561      TAOS               16 bits         16 bits      4 counts/lx        0.1 – 40,000 lx
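For illustration, converting raw sensor counts into physical units with the per-LSb resolutions listed in Table II might look like the following sketch (the function and constant names are ours, not the authors'):

```python
# Illustrative only: raw counts to physical units using the per-LSb
# resolutions from Table II.

ACCEL_G_PER_LSB  = 0.004    # ADXL345: 4 mg/LSb
GYRO_DPS_PER_LSB = 0.0696   # ITG3200: 0.0696 (deg/s)/LSb
MAG_G_PER_LSB    = 0.00435  # HMC5883L: 4.35 mG/LSb
TEMP_C_PER_LSB   = 0.0625   # TMP102: 0.0625 degC/LSb

def accel_g(raw):   return raw * ACCEL_G_PER_LSB
def gyro_dps(raw):  return raw * GYRO_DPS_PER_LSB
def mag_gauss(raw): return raw * MAG_G_PER_LSB
def temp_c(raw):    return raw * TEMP_C_PER_LSB

# e.g. a raw accelerometer count of 250 corresponds to about 1 g:
assert abs(accel_g(250) - 1.0) < 1e-9
```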

Figure 2. Data Frame Patterns
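The framing scheme can be sketched as follows; the header bytes, field order and little-endian int16 packing are our illustrative assumptions (the actual on-wire layout is the one shown in Fig. 2), together with a rough check that the resulting data rate fits the 115.2 kbit·s−1 link:

```python
# Sketch of the two frame types: 100 Hz inertial-only frames, with a full
# frame (all sensors) on every fifth cycle (20 Hz). Header bytes 0xA1/0xA2
# and the field order are our assumptions, not the paper's actual layout.
import struct

def inertial_frame(acc, gyr, mag):
    """Frame type 2: accelerometer, gyroscope, magnetometer only (100 Hz)."""
    return b'\xA2' + struct.pack('<9h', *acc, *gyr, *mag)

def full_frame(acc, gyr, mag, pressure, temp, light):
    """Frame type 1: all sensor data (every fifth cycle, i.e. 20 Hz)."""
    return b'\xA1' + struct.pack('<12h', *acc, *gyr, *mag, pressure, temp, light)

# Rough bandwidth check: per second, 80 inertial frames plus 20 full frames.
zero3 = (0, 0, 0)
payload = 80 * len(inertial_frame(zero3, zero3, zero3)) \
        + 20 * len(full_frame(zero3, zero3, zero3, 0, 0, 0))
bits_per_second = payload * 10           # ~10 wire bits per UART byte
assert bits_per_second < 115200          # fits the 115.2 kbit/s link
```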

Figure 3. The main GUI of the Windows Application

standard atmosphere, temperature (in °C), and light level (in lux) are shown in the second part of the GUI (Fig. 3). The Windows application can also be used to view sensor data in graphical form and to log data. Viewing data in graphical form helps in understanding the fluctuation of the sensor values, and data logging allows the data to be used for further offline analysis. The Windows application can also be used to select among 1 Hz, 5 Hz, 25 Hz, 50 Hz and 100 Hz as the sensor sampling rate of the IMU, which then changes the sampling frequency of the IMU.

E. Android Application
The Android application was designed for mobile platforms, tablet or phone, with fewer features; it can be used to view sensor data and log them. However, one can extend the application to do more advanced processing if necessary, as raw data are streamed from the IMU.

F. The IMU Board
The final board of the IMU is a 50 mm × 37 mm double-sided printed circuit board (PCB) with all sensors and other resources on-board, as shown in Fig. 4. The target was to build it as small as possible so that it is highly portable and convenient to carry. The accelerometer and the gyroscope are placed close to each other with their X-axes falling on the same line, so that

the relative error in the readings is minimal. The magnetometer is also placed close to these so that the three sensors form a 9-axis IMU. The board is equipped with an on-board battery charging circuit that allows the battery to be charged through the USB port. The full charging time is about 100 minutes. The IMU consumes approximately 42 mW with data logging only and 138 mW with both data logging and Bluetooth streaming; hence, with a 3.7 V, 800 mAh Li-Po battery, it can operate for about 85 hours with data logging only and about 35 hours with both data logging and Bluetooth streaming. The complete IMU with the battery and the enclosure weighs approximately 50 g and measures 55 mm × 45 mm × 20 mm.

IV. PERFORMANCE OF THE IMU
The output of the IMU discussed in this paper was compared against data recorded on a smartphone. A graph of the IMU accelerometer output together with the smartphone data for a walking trial, with both devices kept in the same pants pocket, is shown in Fig. 5. This comparison indicated that the output of the IMU closely follows the data of the smartphone, which indicates that the performance of the IMU is satisfactory. It should be noted that no additional computations are applied to the output of the IMU other than the conversion from raw values to the actual units.
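One such raw-to-unit conversion is deriving the displayed altitude from barometric pressure. A sketch using the international-standard-atmosphere relation given in the BMP085 datasheet [9] follows; the paper does not state which exact formula the host applications use:

```python
# Altitude above sea level from barometric pressure, via the international
# standard atmosphere (as in the BMP085 datasheet); p0 is sea-level
# pressure in Pa. Whether the host apps use exactly this is an assumption.

def altitude_m(pressure_pa: float, p0: float = 101325.0) -> float:
    return 44330.0 * (1.0 - (pressure_pa / p0) ** (1.0 / 5.255))

assert altitude_m(101325.0) == 0.0   # sea-level pressure gives zero altitude
```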



Figure 4. The PCB of IMU

V. DISCUSSION AND FUTURE WORK
At the beginning of the development of this IMU, the goal was set to achieve a sampling rate of 100 Hz, as this is sufficient to track natural walking paces of 2 – 3 steps·s−1 for fast gait [4]. However, with the data rates supported by Bluetooth version 2.0, higher sampling rates can also be achieved. The authors are currently working on improving the sampling rate of the IMU. However, given the limitations of the Bluetooth bandwidth, the number of IMUs that can be connected to a single Bluetooth receiver will be limited as the sampling rate increases. Another factor that limits the sampling rate is the I2C speed supported by the microcontroller and the sensors. Although the microcontroller supports I2C baud rates up to 400 kHz, some sensors do not support rates higher than 100 kHz. The authors are working on finding alternative sensors that support higher I2C baud rates. Further, the data transfer rates supported by the SD card also impose a bottleneck for achieving higher sampling rates.
The Windows application was written to receive data from a single IMU only. However, it is possible to receive data from multiple IMUs so that they can be used to track the motion of a part of the body or the full body. The authors are also looking at improving the Windows application to accommodate data from multiple IMUs in a time-synchronized manner.
The cost of the IMU with the battery and the enclosure in single units comes to below US$ 100. However, if it is produced at mass scale, the cost will be lower. It should be noted that the IMU discussed in this paper consists of most

[Figure 5 plot: "Vertical Acceleration (y-Axis) for a Walking Trial"; IMU and Phone traces, acceleration in m/s² against time in s.]
of the sensors necessary for indoor navigation and localization, as well as USB and Bluetooth connectivity and data-logging features. Separate accelerometer, gyroscope and compass sensors were used in the IMU discussed in this paper to keep the cost as low as possible. However, this introduces a small error in the relative sensor readings due to the fact that they have slightly offset coordinate systems (X-axis and/or Y-axis offsets). As future work, the authors are looking at using a 9-axis motion sensor developed by InvenSense [12], which includes a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer in a single chip. This will minimize the error due to the offset of the sensor axes to a great extent.

VI. CONCLUSIONS
This paper presented the design and development of a low-cost IMU that consists of most of the sensors needed for indoor localization, with Bluetooth and USB data streaming and data logging to an SD card for real-time data collection for human gait analysis. The IMU gave satisfactory output that is sufficient for real-time data capturing for human gait analysis.

REFERENCES
[1] YEI Technology. "YEI 3-Space Sensor" [Online]. Available: http://www.yeitechnology.com/yei-3-space-sensor [July 17, 2013].
[2] x-io Technologies. "x-IMU" [Online]. Available: http://www.xio.co.uk/products/x-imu/ [July 17, 2013].
[3] K. Abhayasinghe and I. Murray, "A novel approach for indoor localization using human gait analysis with gyroscopic data," in Third International Conference on Indoor Positioning and Indoor Navigation (IPIN 2012) [Online], Sydney, Australia, Nov. 2012. Available: http://www.surveying.unsw.edu.au/ipin2012/proceedings/submissions/22_Paper.pdf [Mar. 5, 2013].
[4] T. Oberg, A. Karsznia and K. Oberg, "Basic gait parameters: Reference data for normal subjects, 10–79 years of age," J. Rehabil. Res. Dev., vol. 30, no. 2, pp. 210–223, 1993.
[5] Atmel Corporation. (2009, Oct.). "8-bit AVR Microcontroller with 4/8/16/32 K Bytes In-System Programmable Flash" [Online]. Available: http://www.atmel.com/Images/doc8161.pdf [July 17, 2013].
[6] Analog Devices. (2013, Feb.). "3-Axis, ±2 g/±4 g/±8 g/±16 g Digital Accelerometer" [Online]. Available: http://www.analog.com/static/imported-files/data_sheets/ADXL345.pdf [July 17, 2013].
[7] InvenSense Inc. (2011, Feb. 8). "ITG-3200 Product Specification Revision 1.7" [Online]. Available: http://invensense.com/mems/gyro/documents/PS-ITG-3200A.pdf [July 17, 2013].
[8] Honeywell. (2013, Feb.). "3-Axis Digital Compass IC HMC5883L" [Online]. Available: http://www51.honeywell.com/aero/common/documents/myaerospacecatalog-documents/Defense_Brochuresdocuments/HMC5883L_3-Axis_Digital_Compass_IC.pdf [July 17, 2013].
[9] Bosch Sensortec. (2009, Oct. 15). "BMP085 Digital Pressure Sensor" [Online]. Available: https://www.sparkfun.com/datasheets/Components/General/BSTBMP085-DS000-05.pdf [July 17, 2013].
[10] Texas Instruments. (2012, Oct.). "Low Power Digital Temperature Sensor with SMBus/Two-Wire Serial Interface in SOT563" [Online]. Available: http://www.ti.com/lit/ds/symlink/tmp102.pdf [July 17, 2013].
[11] Texas Advanced Optoelectronic Solutions. (2005, Dec.). "TSL2560, TSL2561 Light-to-Digital Converter" [Online]. Available: http://www.adafruit.com/datasheets/TSL2561.pdf [July 17, 2013].
[12] InvenSense. "Nine-Axis (Gyro + Accelerometer + Compass) MEMS MotionTracking Devices" [Online]. Available: http://www.invensense.com/mems/gyro/nineaxis.html [July 17, 2013].

Figure 5. Accelerometer Outputs of IMU with Smartphone Data



WiFi localisation of non-cooperative devices Christian Beder and Martin Klepal Nimbus Centre for Embedded Systems Research Cork Institute of Technology Cork, Ireland Email: {christian.beder,martin.klepal}@cit.ie

Abstract—Some WiFi-enabled devices, such as certain smartphones and tablets, do not allow reading RSSI measurements directly on the device due to API restrictions and are therefore excluded from most currently available WiFi localisation systems. As these restrictions are arbitrary and not due to technological limitations, mainstream research has so far not focused on the issue outside the application space of intrusion detection, where very accurate localisation is usually not essential. However, other applications might require such localisation services for a wider variety of devices, too, especially if the service or application provider has limited control over the user's device preferences. We will present a WiFi localisation system able to handle such non-cooperative devices, using dedicated sniffers made from off-the-shelf components, distributed around the environment, to measure signal strengths on their behalf. This approach makes it possible to provide WiFi-localisation-based applications not only to Android devices but equally to devices running the iOS or Windows Phone operating systems. Assuming the signal strength measured on the sniffers instead of the devices is symmetric, normal RSSI-based localisation algorithms can be applied. However, some challenges arise: monitoring all channels simultaneously can be impractical, so that only a subset of the channel spectrum is visible at any given time, and communication packets are therefore likely to be missed by some sniffers, in particular in the presence of very irregular communication patterns. We will show how to address these issues and compare the localisation performance for these non-cooperative devices with the performance achievable by classical approaches based on active scanning on the device itself. Index Terms—WiFi localisation

I. INTRODUCTION
Fingerprinting-based WiFi localisation has been around for quite some time [1] and is by now a well-established technique for indoor localisation [2]. One very common assumption, though, is that the localised devices themselves actively scan for visible access points in order to measure the mutual signal strengths. While it seems that from an algorithmic point of view this assumption does not make any difference, popular smartphone operating systems like iOS and Windows Phone do not allow taking these measurements due to API restrictions, and such devices must therefore be excluded from most currently available WiFi localisation systems, which limits their commercial applicability in certain scenarios. One possible way of making indoor location-based applications available to such non-cooperative devices is to take the measurements not on the device itself, but to create a system architecture where the measurement is taken by a number of sniffer devices that are part of the infrastructure instead. This

Fig. 1. Two possible inexpensive off-the-shelf WiFi sniffer devices. Left: three OpenWRT access points bundled together in order to be able to monitor 3 channels simultaneously in one location. Right: USB Hub with WiFi dongles for monitoring 7 channels simultaneously.

approach has been proposed by [3]; however, the challenges arising out of it have not been the focus of mainstream localisation research so far. For example, the well-known overview of the state of the art in indoor localisation presented in [2] explicitly distinguishes between the two categories of WiFi-based systems and infrastructure systems. Instead, determining the location of non-cooperative WiFi devices by the infrastructure has been looked at in the context of security applications [4], and there are commercial systems available today, for instance Cisco's Wireless Location Appliance [5] or Airtight Networks's Wireless Intrusion Prevention System [6], which is based on the patents [7] and [8], to name two. However, these systems usually depend on expensive dedicated hardware and focus on security applications, i.e. the detection and handling of intrusions, rather than on location accuracy. In this paper we will present a system providing accurate, continuous, fingerprinting-based WiFi localisation based on very simple sniffer devices, like for instance the ones depicted in Figure 1, comprising inexpensive off-the-shelf WiFi components. Several challenges need to be addressed when considering WiFi sniffing [9]; however, the most problematic restriction encountered in this infrastructure-based approach, compared to actively scanning on the devices themselves, is the fact that a single WiFi chip can only monitor one of the thirteen licensed WiFi channels at a time. This issue has been identified by [3] and addressed there by trying to estimate the client's communication channel. However, in a large-scale deployment all of the channels will be in use; therefore the system cannot be restricted to a single channel if many devices are to be tracked at once. In case the sniffer device does not


contain a WiFi chip for all thirteen channels, it therefore needs to cycle through the channels, which makes each individual sniffer device sensitive only to a subset of the tracked devices at any point in time. Further to that, lacking control over the tracked device itself, and also assuming a lack of control over the communication infrastructure, means that the channels used are usually unknown to the localisation system as well. The implication of this architecture for the localisation algorithm is that, in case the sniffer devices are not synchronised, i.e. do not always scan exactly the same channels at any point in time, it has to cope with partially missing RSSI measurements from sniffers not currently listening to the device's current communication channel. We addressed the issue of dealing with missing RSSI measurements in [10]; however, that work was based on the assumption that the probability of missing an RSSI reading is related to the signal strength itself, i.e. weak signals are more likely to be missed than strong signals. This assumption obviously does not apply here, as even a strong signal can be missed if the sniffer device is listening to different channels at that time. In the following section we will therefore show how the likelihood observation function presented in [10] can be augmented in order to extend the localisation algorithm to accommodate the additional requirement arising from a system architecture based on monitoring different subsets of channels in each sniffer location.
We will then show what effect sniffer devices with different numbers of simultaneously monitored channels have on the localisation accuracy, benchmarking this against the case where all thirteen channels are monitored, which is equivalent to a likelihood function designed for systems based on cooperative devices that are able to actively scan the whole channel spectrum themselves.
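The unsynchronised channel cycling described above can be sketched as follows; the helper name and the per-sniffer chip count are illustrative assumptions, not from the paper:

```python
# Sketch of unsynchronised channel cycling: each sniffer independently hops
# over random subsets of the 13 WiFi channels, so the monitoring indicator
# kappa[i][c] differs between sniffers at any instant.
import random

def channel_subsets(num_sniffers: int, chips_per_sniffer: int,
                    num_channels: int = 13):
    """One cycling step: for each sniffer, pick which channels its
    `chips_per_sniffer` WiFi chips currently monitor."""
    kappa = []
    for _ in range(num_sniffers):
        monitored = set(random.sample(range(1, num_channels + 1),
                                      chips_per_sniffer))
        kappa.append([1 if c in monitored else 0
                      for c in range(1, num_channels + 1)])
    return kappa

kappa = channel_subsets(num_sniffers=5, chips_per_sniffer=3)
assert all(sum(row) == 3 for row in kappa)  # each sniffer monitors 3 channels
```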

II. FORMAL PROBLEM DEFINITION

We will now show how to design a likelihood observation function able to deal with missing RSSI measurements due to monitoring only a subset of channels on each sniffer at a time. The notation used here follows the rigorous Bayesian approach to modelling the likelihood function for fingerprinting based localisation presented in [10]. Let us assume we have K sniffers distributed across the environment picking up signal strength measurements

    s = (s_1 \cdots s_K)^T    (1)

having (slightly abusing notation) the inverse covariance

    C_{ss}^{-1} = \sigma^{-2} \operatorname{diag}[\tau]    (2)

together with a boolean pickup indicator function

    \tau : \{1, \ldots, K\} \to \{0, 1\}    (3)

which tells us whether a particular sniffer picked up some signal or not. Note that the values s_i are only defined if the corresponding \tau_i = 1 and will not have any impact on any of the following in case \tau_i = 0.

Unlike systems that actively scan the whole channel spectrum on the device itself, we assume that each sniffer is restricted to only a subset of channels it monitors. We will therefore augment the approach taken in [10] by introducing the channel monitoring indicator function

    \kappa : \{1, \ldots, K\} \times \{1, \ldots, M\} \to \{0, 1\}    (4)

which tells us for each sniffer and each channel whether it is currently listening there or not. The major contribution of this paper is to demonstrate how this additional information can be used in the likelihood function, enabling standard localisation algorithms to cope with an only partially monitored channel spectrum on unsynchronised sniffer devices.

Similar to the measurements, we will denote the previously recorded known fingerprint by a vector of location-dependent signal strength functions for each sniffer

    F[x] = (F_1[x] \cdots F_K[x])^T    (5)

together with a coverage indicator function

    \phi[x] : \{1, \ldots, K\} \to \{0, 1\}    (6)

which tells us for each sniffer whether an area is covered by it or not. The problem of localising a non-cooperative device can now be stated as finding the most likely position of a single signal picked up by a subset of sniffers listening to a subset of channels given the previously known fingerprint

    \hat{x} = \arg\max_x p\{x \mid s, \tau, \kappa, F, \phi\}    (7)

As usual, applying Bayes' theorem, this posterior can be rewritten into a product of a likelihood factor and a prior-to-evidence ratio factor as follows

    p\{x \mid s, \tau, \kappa, F, \phi\} = p\{s, \tau, \kappa \mid x, F, \phi\} \, \frac{p\{x \mid F, \phi\}}{p\{s, \tau, \kappa \mid F, \phi\}}    (8)

Although all parts on the right-hand side of this equation can be considered by an appropriate localisation algorithm (and will be, to a certain extent, by the motion model of the particle filter used in the evaluation section), we will focus in this section on the likelihood factor only and show how it can be designed to model all aspects of the problem. First we note that, by applying basic rules of probability, the likelihood can be decomposed as follows into three factors

    p\{s, \tau, \kappa \mid x, F, \phi\} = p\{s \mid x, F, \phi\} \, p\{\kappa \mid s, x, F, \phi\} \, p\{\tau \mid s, \kappa, x, F, \phi\}    (9)

Each of these factors models an aspect of the problem, so we will discuss them in turn. We start with the most commonly modelled first factor. Assuming a Gaussian distribution of the received signal strength around the fingerprint value in log-energy space and compensating for the bias introduced by differing antenna attenuation as discussed in [10], it is given by

    p\{s \mid x, F, \phi\} = \frac{\exp\left[-\frac{1}{2} \sum_{i=1}^{K} \tau_i \phi_i[x] \, \omega_i^2 / \sigma_\omega^2 \right]}{\sqrt{(2\pi\sigma_\omega^2)^{\sum_{i=1}^{K} \tau_i \phi_i[x]}}}    (10)

using the bias-compensated signal strength residual

    \omega_i = s_i - F_i[x] - \hat{\lambda}    (11)

having the variance

    \sigma_\omega^2 = \sigma^2 \, \frac{1 + \sum_{i=1}^{K} \tau_i \phi_i[x]}{\sum_{i=1}^{K} \tau_i \phi_i[x]}    (12)

and containing the estimated antenna attenuation bias

    \hat{\lambda} = \frac{\sum_{i=1}^{K} \tau_i \phi_i[x] \, (s_i - F_i[x])}{\sum_{i=1}^{K} \tau_i \phi_i[x]}    (13)

The second and third factors of the likelihood function are not commonly considered but rather assumed to be uniformly distributed. For the second factor, which is the probability of the sniffers being in a given configuration at a point in time, we do not want to introduce any further assumptions either; we therefore assume it to be independent of any measurements and each channel configuration to be equally likely, hence

    p\{\kappa \mid s, x, F, \phi\} = p\{\kappa\} = \frac{1}{2^{KM}}    (14)

Finally, the third and in this context most interesting factor is the pickup probability, which allows introducing assumptions on the sniffers' ability to make a measurement at all. The particular contribution of this paper is to also take the known monitored channel spectrum \kappa into account there. In order to do this, we first expand the equation by explicitly marginalising over the unknown transmission channel \mu as follows

    p\{\tau \mid s, \kappa, x, F, \phi\} = \sum_{\mu=1}^{M} p\{\tau \mid \mu, s, \kappa, x, F, \phi\} \, p\{\mu \mid s, \kappa, x, F, \phi\}    (15)

making the transmission channel available explicitly to the algorithm and thereby enabling us to take the channel spectrum monitored by the sniffers into account. We will now show how to augment the approach presented in [10], where we proposed to use the Gibbs distribution

    g_i = \frac{\exp[-\beta c_i[\tau_i]]}{\exp[-\beta c_i[0]] + \exp[-\beta c_i[1]]}    (16)

as pickup probabilities penalising the missed energy

    c_i[t] = \phi_i[x] (1 - t) \alpha F_i[x] + t (1 - \phi_i[x]) \alpha s_i    (17)

with \alpha denoting the energy unit and \beta the inverse temperature parameter. In order to introduce the additional knowledge about the monitored channels \kappa, we now propose to apply this lost-energy Gibbs distribution g_i only in case the transmission channel \mu was actually monitored by a particular sniffer, i.e. \kappa_{i\mu} = 1. In the other case, that the sniffer was not monitoring the transmission channel, i.e. \kappa_{i\mu} = 0, we use a relaxed zero-one pickup probability to reflect the fact that in this case nothing should have been picked up at all. This means that, depending on an inverse temperature control parameter \gamma, the signal pickup probability is close to one in case the signal is not picked up, but close to zero in the impossible case that some transmission was picked up on the channel despite not monitoring it. Putting all this together yields the following augmented conditional pickup probability

    p\{\tau \mid \mu, s, \kappa, x, F, \phi\} = \prod_{i=1}^{K} \left( \kappa_{i\mu} g_i + (1 - \kappa_{i\mu}) \, \frac{e^{-\gamma(1-\tau_i)}}{1 + e^{-\gamma}} \right)    (18)

If we now also make the assumption of a uniform transmission channel distribution

    p\{\mu \mid s, \kappa, x, F, \phi\} = p\{\mu\} = \frac{1}{M}    (19)

we finally end up with the new pickup probability function

    p\{\tau \mid s, \kappa, x, F, \phi\} = \frac{1}{M} \sum_{\mu=1}^{M} \prod_{i=1}^{K} \left( \kappa_{i\mu} g_i + (1 - \kappa_{i\mu}) \, \frac{e^{-\gamma(1-\tau_i)}}{1 + e^{-\gamma}} \right)    (20)

which is able to also take the monitored channel spectrum into account. Note that this is a direct generalisation of the approach presented in [10], being identical to it in case all channels are monitored, i.e. \kappa_{i\mu} = 1 for all possible transmission channels \mu. Also note that this is the case, too, if all the sniffers are synchronised, i.e. \kappa_{i\mu} = \kappa_{j\mu}, which means that in that situation the presented extension does not yield any benefit over the classical approach. We will show in the following what influence the restriction to only a subset of the channel spectrum has on the achievable localisation performance.

III. EVALUATION

In order to evaluate the presented approach, we built a system able to monitor all thirteen WiFi channels simultaneously in each sniffer location all the time. By simply considering only a subset of these measurements, this allows us to easily study the effect that limiting the number of simultaneously monitored channels has on localisation performance, as would occur with more realistic sniffer devices, like for instance the ones depicted in Figure 1, relying on random cycling through subsets of channels instead. As mentioned in the previous section, we also assume that this channel selection is unsynchronised, meaning each sniffer chooses its channels independently, which makes the architecture much more flexible as it does not rely on a central synchronisation entity. Figure 2 shows the sniffer placement on the floor plan as well as the test path we walked with a smartphone independently connected to our Cisco enterprise WiFi system, allowing for seamless handovers, while pinging a server from the device in order to generate regular network traffic.
We chose this connected approach because it is more realistic than having the device continuously scanning for access points: in that case it would cycle through all channels itself and be visible to every sniffer all the time, and therefore not showcase the properties of the algorithm we want to evaluate.
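The pickup model of Eqs. (16)-(20) can be sketched directly. The following is a minimal illustration, not the authors' implementation; the missed-energy values, Gibbs factors g_i, monitoring matrix κ and pickup vector τ used below are hypothetical:

```python
import math

def gibbs_pickup(c0, c1, tau, beta):
    """Gibbs pickup probability of Eq. (16):
    exp(-beta*c[tau]) / (exp(-beta*c[0]) + exp(-beta*c[1]))."""
    c = (c0, c1)
    return math.exp(-beta * c[tau]) / (math.exp(-beta * c[0]) + math.exp(-beta * c[1]))

def pickup_probability(kappa, g, tau, gamma):
    """Augmented pickup probability of Eq. (20).

    kappa[i][mu] -- 1 if sniffer i monitored channel mu, else 0
    g[i]         -- Gibbs factor g_i of Eq. (16) for sniffer i
    tau[i]       -- 1 if sniffer i picked the transmission up, else 0
    gamma        -- inverse temperature of the relaxed zero-one factor
    """
    K, M = len(kappa), len(kappa[0])
    total = 0.0
    for mu in range(M):                      # marginalise over the channels
        prod = 1.0
        for i in range(K):
            relaxed = math.exp(-gamma * (1 - tau[i])) / (1 + math.exp(-gamma))
            prod *= kappa[i][mu] * g[i] + (1 - kappa[i][mu]) * relaxed
        total += prod
    return total / M                         # uniform p{mu} = 1/M, Eq. (19)

# If every sniffer monitors every channel, Eq. (20) reduces to the
# classical product of Gibbs factors from [10]:
g = [0.9, 0.8]
p = pickup_probability([[1, 1], [1, 1]], g, [1, 1], gamma=10.0)
```

With all κ_iµ = 1 the result equals g_0 · g_1 = 0.72, illustrating that the extension is a direct generalisation of the classical approach.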


Fig. 2. Top: Test-building floor plan with the black dots indicating sniffer placement. Note that the coverage in the open plan area on the right hand side of the building is limited and therefore quite challenging for any WiFi localisation system. The green line indicates the test path walked from one end of the building to the other. Bottom: Area of the floor plan covered by the fingerprint for the experiment.

Fig. 3. Cumulative histogram of residual localisation errors [%] over error distance [m] while walking along the path depicted in figure 2 for different sniffer configurations (1, 3, 7, 10 and 13 simultaneously monitored channels). As expected localisation accuracy increases with increasing number of simultaneously monitored channels.

The extended likelihood function proposed in the previous section was implemented in the particle filter based localisation system presented in [11] and [10], and the residual errors between reference locations from a controlled walk along the test path shown in figure 2 and the resulting estimated positions based on the sniffer measurements were recorded for different sniffer configurations monitoring increasing numbers of channels simultaneously while cycling through these at random. Figure 3 shows the cumulative histogram of the results. As expected, localisation accuracy increases with the number of channels simultaneously monitored in each location. It can be seen that relying on a single WiFi monitor per location and cycling through channels yields far inferior performance compared to sniffer devices able to monitor more than one channel at once. However, it can also be observed that monitoring all 13 channels simultaneously is not necessary to achieve reasonable results, and sniffer devices like the ones depicted in figure 1 can be sufficient. Nevertheless, any additional sniffer device will potentially help to improve the accuracy, and the presented approach does allow mixing multiple different types of sniffer devices and accounting for this in the likelihood observation function.

IV. CONCLUSION

A system for accurate, continuous, fingerprinting based WiFi localisation has been presented that does not rely on the ability of the tracked devices to actively participate by scanning and reporting signal strengths themselves. The proposed approach therefore enables the provision of location based services to non-cooperative or API restricted devices, such as smartphones based on the iOS or Windows Phone operating systems, by taking the necessary RSSI measurements on an infrastructure level instead. The major contribution of this paper has been the derivation of a rigorous likelihood observation function modeling restrictions in the monitored channel spectrum as they occur when taking such an infrastructure centric approach. It was shown that our approach allows localising non-cooperative devices by placing simple WiFi sniffers built from inexpensive off-the-shelf components into the environment, achieving comparable accuracy to systems designed for actively scanning cooperative devices.

ACKNOWLEDGMENT

This work has been supported by Enterprise Ireland through grant IR/2011/0003.

REFERENCES

[1] P. Bahl and V. Padmanabhan, "Radar: an in-building RF-based user location and tracking system," in INFOCOM 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 2, 2000, pp. 775-784.
[2] R. Mautz, "Indoor positioning technologies," 2012, habilitation thesis, ETH Zürich.
[3] S. Ganu, A. S. Krishnakumar, and P. Krishnan, "Infrastructure-based location estimation in WLAN," in Wireless Communications and Networking Conference, 2004. WCNC 2004 IEEE, vol. 1, 2004, pp. 465-470.
[4] A. Hatami and K. Pahlavan, "In-building intruder detection for WLAN access," in Position Location and Navigation Symposium, 2004. PLANS 2004, 2004, pp. 592-597.
[5] Cisco. Wireless location appliance. [Online]. Available: http://www.cisco.com/en/US/prod/collateral/wireless/ps5755/ps6301/ps6386/product_data_sheet0900aecd80293728.html
[6] AirTight Networks. Wireless intrusion prevention system. [Online]. Available: http://www.airtightnetworks.com/home/products/AirTightWIPS.html
[7] M. Kumar and P. Bhagwat, "Method and system for location estimation in wireless networks," Patent US 7 406 320 B1, 2008.
[8] R. Rawat, "Method and system for location estimation in wireless networks," Patent US 7 856 209 B1, 2010.
[9] J. Yeo, M. Youssef, and A. Agrawala, "A framework for wireless LAN monitoring and its applications," in Proceedings of the 3rd ACM Workshop on Wireless Security, ser. WiSe '04. New York, NY, USA: ACM, 2004, pp. 70-79. [Online]. Available: http://doi.acm.org/10.1145/1023646.1023660
[10] C. Beder and M. Klepal, "Fingerprinting based localisation revisited - a rigorous approach for comparing RSSI measurements coping with missed access points and differing antenna attenuations," in 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2012.
[11] A. McGibney, C. Beder, and M. Klepal, "Mapume smartphone localisation as a service - a cloud based architecture for providing indoor localisation services," in 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2012.



Creation of Image Database with Synchronized IMU Data for the Purpose of Way Finding for Vision Impaired People

Chamila Rathnayake

Iain Murray

Department of Electrical and Computer Engineering Curtin University Perth, Western Australia [email protected]

Department of Electrical and Computer Engineering Curtin University Perth, Western Australia [email protected]

Abstract—This paper describes an image database which includes synchronized inertial measurement unit (IMU) data with the meta data of the captured images [1]. Images are taken under a range of conditions including low light, shadow conditions and controlled blurring. Physical locations are fixed and repeatable, and include accurate GPS positioning. The standardized images are synchronized over the exposure time with multiple sensor data (accelerometer, gyroscope and ambient light). This database will be used for a research project currently being undertaken at Curtin University, which proposes a form of "crowd sourcing" to construct maps for use in mobility and navigation for people with vision impairments.

Keywords-GPS, image processing, way finding, vision impaired, IMU, Meta data

INTRODUCTION

A "standardized" image database is an important tool in the comparative assessment of image processing techniques in the field of indoor and outdoor navigation. Databases of this type have been used in a wide variety of applications such as geographical information systems, computer-aided design and manufacturing systems, multimedia libraries, and medical image management systems.

The inertial measurement unit (IMU) data and other sensor data are captured synchronized with the time the image is taken, and the image meta data is recorded as additional information. The captured images and data are stored in a conventional database as described in section V. In a typical digital image processing pipeline a sequence of steps is carried out to obtain the final result [2].

[Figure: data recorded with each image. Sensor data: gyroscopic, accelerometer, proximity, magnetometer, ambient light, orientation, GPS and barometer data. Meta data: shutter speed, ISO, aperture, flash fired, exposure program, resolution, date and time, compression type.]

A modern smart phone is used as the IMU for the initial experiments, and a subset of the above data types has been captured using an Android application while taking the images.

RELATED WORK

Little research has been carried out in this area. Reference [4] describes a general image database model which mainly focuses on image attributes and query optimizations on the image database; it does not address any synchronized sensor data captured during the image capturing process.
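The per-image record described above can be sketched as a simple data structure. Field names here are illustrative, loosely following the sensor and meta data types listed, and are not the authors' actual schema:

```python
from dataclasses import dataclass

@dataclass
class CaptureRecord:
    """One image together with sensor data sampled over its exposure time."""
    image_path: str
    taken_at: str          # ISO timestamp of the exposure
    # IMU and environment samples, synchronized with the exposure
    accelerometer: tuple   # (x, y, z) in m/s^2
    gyroscope: tuple       # (x, y, z) in rad/s
    ambient_light: float   # lux
    # GPS fix for the fixed, repeatable physical location
    latitude: float
    longitude: float
    altitude: float
    # camera meta data recorded with the image
    shutter_speed: str
    iso: int
    aperture: str
    flash_fired: bool = False

rec = CaptureRecord("img_0001.jpg", "2013-06-01T09:30:00",
                    (0.1, 9.8, 0.2), (0.01, 0.0, 0.02), 320.0,
                    -31.95, 115.86, 20.0, "1/60", 200, "f/2.4")
```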

IMPORTANCE OF IMAGE DATABASE WITH SYNCHRONIZED IMU DATA

The acquisition and preprocessing steps are among the most important in this process, and some of the IMU data and meta data can be used during these steps.

A. Image Stabilization
IMU data can be used to identify the most stable position of the capturing device while it is moving or otherwise unstable. Real-time feedback from the IMU, obtained by analyzing gyroscopic and accelerometer data, can tell the capturing device when it is at its most stable position.

B. Image De-blurring
The most common problem in image capturing is blurring, which is caused by movement of the capturing device, movement of the object, or a long exposure time. If the blurring occurs due to movement of the capturing device, then de-blurring techniques can be applied to the image using gyroscopic and accelerometer data [3].

978-1-4673-1954-6/12/$31.00 ©2012 IEEE
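The stabilization idea above can be sketched as picking the sample with the smallest angular velocity from a short window of gyroscope readings. This is a toy illustration under that assumption, not the authors' implementation:

```python
import math

def most_stable_index(gyro_samples):
    """Return the index of the gyroscope sample with the smallest
    angular-velocity magnitude, a proxy for the most stable moment
    at which to trigger (or keep) a capture."""
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in gyro_samples]
    return min(range(len(mags)), key=mags.__getitem__)

# (x, y, z) angular velocities in rad/s over a short window
samples = [(0.30, 0.10, 0.05), (0.02, 0.01, 0.00), (0.25, 0.20, 0.10)]
best = most_stable_index(samples)   # the middle sample is the quietest
```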

C. Algorithm training, testing and performance analysis
These IMU data and meta data can be used in any image processing research project to test algorithms and applications and to identify optimal results and solutions.

IMAGE TYPES

Each image is captured under different environmental conditions, in different formats and at different locations. Images are also saved in different modes such as RGB and grayscale. The following table lists all the image types considered.

Environmental Conditions: Morning, Noon, Evening, Night, Cloudy
Locations: Side Walks, Straight Path, Curvy Paths, Corridor, Stairs
Image Conditions: Normal, Shadowy, Blurred
Modes: RGB, Grayscale, Black & White

To capture the images for experimental purposes, the capturing device is mounted on a tripod at chest height, which is the most suitable placement in human gait, while the subject performs different activities such as walking on sidewalks and stairways. This simulates the real walking patterns of vision impaired people. The stability of the capturing device and its position relative to ground level are recorded throughout each activity.

DATABASE MODEL

The captured images, IMU data and meta information are stored in a normalized relational database model as shown in the figure below.

[Figure: database schema. Images (IMG_IDPkey; IMG_Name; foreign keys IMD_IDFkey, ENC_IDFkey, PHC_IDFkey, LCN_IDFkey, FTR_IDFkey) is linked to IMUData (SND_IDPkey, IMG_IDFkey, SND_DateTime, acceleration, gyroscope, orientation and magnetic field X/Y/Z components), MetaData (ATR_IDPkey, IMG_IDFkey, ATR_DateTime, ATR_Resolution, ATR_Longitude, ATR_Latitude, ATR_Altitude), EnvironmentalConditions (ENC_IDPkey, ENC_Name, ENC_Description), PhysicalConditions (PHC_IDPkey, PHC_Name, PHC_Description), ImageModes (IMD_IDPkey, IMD_Name), Locations (LCN_IDPkey, LCN_Name) and Features (FTR_IDPkey, FTR_Name).]

A sample set of captured images is shown below, covering different environmental conditions, different locations, different image conditions and different modes.

CONCLUSION

The main advantages of the proposed database can be summarized as follows: over 600 images can be used as training, testing and performance analysis sets for image processing techniques, and the synchronized IMU data and meta data can be used to enhance the images in image processing and computer vision applications.

ACKNOWLEDGMENT

The authors would like to thank Mr. Nimsiri Amarasinghe and Mrs. Nimali Rajakaruna from the Department of Electrical and Computer Engineering, Curtin University, for providing assistance in capturing images and testing the image processing applications.

REFERENCES

[1] Douglas Hackney, "Digital Photography Meta Data Overview", 2008, unpublished.
[2] S. Annadurai and R. Shanmugalakshmi, "Fundamentals of Digital Image Processing", Pearson Education India, Chapter 1.
[3] Ondrej Sindelar and Filip Sroubek, "Image deblurring in smartphone devices using built-in inertial measurement sensors".
[4] Peter L. Stanchev, "General Image Database Model", Institute of Mathematics and Computer Science, Bulgarian Academy of Sciences.
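Part of the database model above can be sketched in SQLite using the key names from the schema figure; only three of the tables are shown, and the column types are assumptions, not taken from the paper:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Images (
    IMG_IDPkey INTEGER PRIMARY KEY,
    IMD_IDFkey INTEGER,   -- ImageModes
    ENC_IDFkey INTEGER,   -- EnvironmentalConditions
    PHC_IDFkey INTEGER,   -- PhysicalConditions
    LCN_IDFkey INTEGER,   -- Locations
    FTR_IDFkey INTEGER,   -- Features
    IMG_Name   TEXT
);
CREATE TABLE IMUData (
    SND_IDPkey INTEGER PRIMARY KEY,
    IMG_IDFkey INTEGER REFERENCES Images(IMG_IDPkey),
    SND_DateTime TEXT,
    SND_AccellerationX REAL, SND_AccellerationY REAL, SND_AccellerationZ REAL,
    SND_GyroscopeX REAL, SND_GyroscopeY REAL, SND_GyroscopeZ REAL
);
CREATE TABLE MetaData (
    ATR_IDPkey INTEGER PRIMARY KEY,
    IMG_IDFkey INTEGER REFERENCES Images(IMG_IDPkey),
    ATR_DateTime TEXT, ATR_Resolution TEXT,
    ATR_Longitude REAL, ATR_Latitude REAL, ATR_Altitude REAL
);
""")
conn.execute("INSERT INTO Images (IMG_Name) VALUES ('img_0001.jpg')")
row = conn.execute("SELECT IMG_IDPkey, IMG_Name FROM Images").fetchone()
```

Each sensor sample and meta data record points back to its image via `IMG_IDFkey`, which is what makes the per-image synchronization queryable.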


Relevance and Interpretation of the Cramér-Rao Lower Bound for Localisation Algorithms

Marcel Kyas, Yubin Zhao, Heiko Will
Freie Universität Berlin, AG Computer Systems & Telematics
Takustr. 9, 14195 Berlin, Germany
e-mail: {marcel.kyas,yubin.zhao,heiko.will}@fu-berlin.de

Abstract—We show that using the Cramér-Rao Lower Bound (CRLB) is inadequate for indoor localisation. The mathematical assumptions necessary to formulate the Fisher information of the indoor localisation problem and for calculating the CRLB do not generally hold. This is caused by non-Gaussian distributions of measurement errors. These distributions also give rise to involved calculations if a CRLB is to be computed. Finally, the CRLB gives a lower bound of the mean squared error (MSE) of any potential estimator for a problem, but makes no statement about the existence of such an optimal estimator or whether a position estimation algorithm can be improved by using the CRLB. The mathematical results justify the use of simulation, as done with our FU Berlin Parallel Lateration-Algorithm Simulation and Visualization Engine (LS²). Using simulation, we can analyse algorithms without relying on the CRLB, especially if its calculation proves infeasible or even impossible.
Index Terms—Indoor localisation, Cramér-Rao Lower Bound

I. INTRODUCTION

The Cramér-Rao Lower Bound (CRLB) is used as a measure for the quality of an estimation method in localisation algorithms [1], but its use has become controversial [2], [3]. Our contribution is a rigorous calculation and discussion of the CRLB for indoor localisation assuming Gamma distributed errors, a careful interpretation of this bound and its application to a maximum likelihood estimator (MLE) using this error distribution, and a comparison of simulation results of real algorithms with this bound. As we show, the CRLB is of limited use, because most algorithms are actually much better than the published CRLB in scenarios of interest. We aim to explain this phenomenon.

II. RELATED WORK

The earliest use of the CRLB for localisation problems was by Torrieri [1]. Applications of the CRLB are to decide whether an algorithm is optimal [4], to place anchors optimally [5], or to select presumably optimal anchors and range measurements [6]. There have also been differing attempts at calculating the CRLB, depending on the method of measurement [7]-[9]. There is uncertainty about the correct formulation of the CRLB, as different solutions are proposed for the same

measurement error and measurement error distributions [10]-[12]. The geometric dilution of precision (GDOP) has been shown to be equivalent to the CRLB for normally distributed errors [2].

III. CRAMÉR-RAO LOWER BOUND

We summarise the important definitions and theorems. From now on, let ∇_θ = (∂/∂θ_0, ..., ∂/∂θ_{k−1})^T. Fisher information (FI) and the CRLB can be used to establish optimality of an unbiased estimator. In particular, the CRLB is reached by the MLE [13]. We use the MLE to establish a counterexample in Section VI.

A. Fisher Information

The CRLB is derived from the FI matrix.
Definition 1: Suppose X⃗ = (X_0, ..., X_{n−1}) form a random sample from a distribution for which the probability density function (p.d.f.) is f(x⃗; θ), where the value of the parameter θ = (θ_0, ..., θ_{k−1}) must lie in an open subset of a k-dimensional real space. Let f_n(x⃗; θ) denote the joint p.d.f. of x⃗. Assume that {x⃗ | f_n(x⃗; θ) > 0} is the same for all θ and that log f_n(x⃗; θ) is twice differentiable with respect to θ. The FI matrix I_n(θ) of the random sample x⃗ is defined as the k × k matrix with (i, j) element equal to

  I_{n,i,j}(θ) = Cov( (∂/∂θ_i) log f_n(x⃗; θ), (∂/∂θ_j) log f_n(x⃗; θ) )

Recall Cov_θ(X, Y) = E(XY; θ) − E(X; θ) E(Y; θ) and E(X; θ) = ∫ X(ω) f(dω; θ) with f the p.d.f. that admits X.

B. The Cramér-Rao Lower Bound

The CRLB expresses a lower bound on the variance of estimators of deterministic parameters [14], [15]. The next theorem is proved as Theorem 6.6 in Lehmann and Casella [13, p. 127].
Theorem 1 (Cramér-Rao Information Inequality): Suppose X⃗ = (X_0, ..., X_{n−1}) form a random sample from a distribution for which the p.d.f. is f(x; θ), where the value of the parameter θ = (θ_0, ..., θ_{k−1}) must lie in an open subset of a k-dimensional real space. Let T = δ(X⃗) be a statistic with



finite variance. Let m(θ) = E_θ(T). Assume that m(θ) is a differentiable function of θ. Then:

  Var_θ(T) ≥ ∇_θ m(θ)^T · I_n(θ)^{−1} · ∇_θ m(θ)    (1)
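Definition 1 can be sanity-checked numerically. For a single N(θ, σ²) observation the score is (x − θ)/σ² and its variance, the FI, is 1/σ². The following Monte-Carlo check is an illustration added here, not part of the paper:

```python
import random
import statistics

def score(x, theta, sigma):
    """d/dtheta of log N(x; theta, sigma^2) = (x - theta) / sigma^2."""
    return (x - theta) / sigma ** 2

random.seed(0)
theta, sigma = 2.0, 3.0
scores = [score(random.gauss(theta, sigma), theta, sigma)
          for _ in range(200000)]
# The FI is the variance of the score; it should be close to 1/sigma^2.
fi_estimate = statistics.pvariance(scores)
```

By Theorem 1 an unbiased estimator of θ from n such samples then has variance at least σ²/n.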

IV. ERRORS IN DISTANCE MEASUREMENT

The position will be estimated from measurements of distances to anchors, e.g. by time of arrival (TOA). This roughly corresponds to an indoor localisation scenario with non-line-of-sight (NLOS) errors and abstracts from radio ranges, i.e., we assume the radio reaches every place in a building. Indoors, the major contributors to measurement errors are NLOS effects and inaccurate clocks. Fading and multipath effects affect the accuracy to a lesser degree, because the measurements are independent of the received signal strength and distances are usually short. Only the fact that a message was received is relevant; if it is lost, no value can be measured. Our experiments [16] show that the error of a measurement can be approximated by a Gamma distribution. The p.d.f. of the Gamma distribution is

  Γ(α, β)(x) = (β^α / Γ(α)) x^{α−1} e^{−βx}  if x > 0,  and 0 if x ≤ 0    (2)

where Γ(α) = ∫_0^∞ t^{α−1} e^{−t} dt. We call the parameter α the shape of the distribution and the parameter β its rate. The Gamma distribution is defined for non-negative α, β, and x. Given shapes α_1, ..., α_n and rates β_1, ..., β_n, a range measurement X_i to an anchor at position A_i from the position θ is modelled by X_i = d(A_i, θ) + ε_i − b_i with ε_i ∼ Γ(α_i, β_i), where b_i is an offset of the measurement error, ε_i is the i-th measurement error sampled from the p.d.f. of Γ(α_i, β_i), and d(A_i, θ) is the Euclidean distance between A_i and θ, i.e. d(a, b) = √((a_x − b_x)² + (a_y − b_y)²).
Definition 2 (Measurement p.d.f.): The probability of measuring x at position θ from anchor A is

  f(x; θ) = Γ(α, β)(x − d(A, θ) + b) = (β^α / Γ(α)) (x − d(A, θ) + b)^{α−1} e^{−β(x − d(A, θ) + b)}

where α, β > 0, b ∈ ℝ is an offset, and x > d(A, θ) − b.
Referring to Definition 1, we can establish whether the Fisher information is defined for such a distribution.
Proposition 2: {x⃗ | p_i(x⃗; θ) > 0} = ∩_{i=1..n} {x⃗ | d(A_i, θ) − b_i < x_i}.
From this proposition, we immediately conclude that b_i > 0 for all 1 ≤ i ≤ n is a sufficient condition for a non-empty set {x⃗ | p_i(x⃗; θ) > 0}. Otherwise, this set might be empty and no CRLB can be derived. Geometrically speaking, Proposition 2 describes the intersection of discs around anchors. For using the MLE with this p.d.f., we must find a starting point inside this set, otherwise the target function is already undefined at the start.

V. APPLICATION TO INDOOR LOCALISATION

We establish the non-existence of a CRLB for the indoor localisation problem using a Gamma-distributed distance measurement error.
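The measurement p.d.f. of Definition 2 and its domain restriction can be written out directly; `math.gamma` is Γ, and the parameter values below are illustrative only:

```python
import math

def measurement_pdf(x, dist, alpha, beta, b):
    """p.d.f. of Definition 2: Gamma(alpha, beta) shifted so that its
    support is x > d(A, theta) - b; dist stands for d(A, theta)."""
    y = x - dist + b
    if y <= 0:
        return 0.0   # outside the support: such a range cannot be measured
    return beta ** alpha / math.gamma(alpha) * y ** (alpha - 1) * math.exp(-beta * y)

# A measured range shorter than d(A, theta) - b has zero density:
assert measurement_pdf(4.0, dist=10.0, alpha=4.0, beta=0.08, b=5.0) == 0.0
# while a plausible range has positive density:
p = measurement_pdf(30.0, dist=10.0, alpha=4.0, beta=0.08, b=5.0)
```

The hard zero outside the support is exactly what makes the set in Proposition 2 an intersection of discs around the anchors.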

A. Location Estimation Problem

We analyse the use of the Cramér-Rao Lower Bound in the theory of localisation. It is important to define all used terms rigorously. The problem of localisation is defined as follows:
Definition 3: Let n ∈ ℕ be the number of anchors (participants with known locations) with positions {A_i ∈ ℝ² | 0 ≤ i < n}, let θ ∈ ℝ² be the true location, and let the p.d.f. p_i(x_i; θ) be as given in Definition 2. Because the measurements are independent, the joint p.d.f. is f_n(x⃗; θ) = ∏_{i=0..n−1} p_i(x_i; θ). Consequently,

  log f_n(x⃗; θ) = Σ_{i=0..n−1} log( (β_i^{α_i} / Γ(α_i)) (x_i − b̃_i)^{α_i − 1} e^{−β_i (x_i − b̃_i)} )    (3)

where b̃_i = d(A_i, θ) − b_i. The gradient of log f_n(x⃗; θ) is shown in Eq. (4). It is defined for all θ satisfying Proposition 2.

  ∇_θ log f_n(x⃗; θ) = Σ_{i=0..n−1} ( (θ_x − A_{i,x}) (β(d(θ, A_i) − (X_i + b_i)) + α − 1) / [d(θ, A_i) (d(θ, A_i) − (X_i + b_i))] ,
                                     (θ_y − A_{i,y}) (β(d(θ, A_i) − (X_i + b_i)) + α − 1) / [d(θ, A_i) (d(θ, A_i) − (X_i + b_i))] )^T    (4)

For θ → A_i it is easy to show that Eq. (4) does not have a unique limit, thus the gradient is not defined for θ = A_i. For d(θ, A_i) = X_i + b_i, i.e. at the border of the domain, and b_i > 0 the summand diverges (except for α = 1, where the term converges to β/d(θ, A_i) · (θ_x − A_{i,x}, θ_y − A_{i,y})^T). Next, the entries of the FI matrix are:

  Cov( ∂ log f_n(x⃗; θ)/∂θ_i , ∂ log f_n(x⃗; θ)/∂θ_j )
    = E( (∂ log f_n(x⃗; θ)/∂θ_i)(∂ log f_n(x⃗; θ)/∂θ_j) ; θ )
      − E( ∂ log f_n(x⃗; θ)/∂θ_i ; θ ) E( ∂ log f_n(x⃗; θ)/∂θ_j ; θ ).

Since ∂ log f_n(x⃗; θ)/∂θ_i is not integrable for α ≠ 1, the FI matrix is not defined. Thus, we cannot give the CRLB for this problem.
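For contrast with this non-existence result, when range errors are Gaussian the FI matrix does exist: the standard TOA Fisher information is I(θ) = (1/σ²) Σ u_i u_iᵀ with u_i the unit vector from anchor A_i to θ, and the bound on the position MSE is trace(I⁻¹). The sketch below uses this standard formulation (not necessarily identical to So's Eq. (2.157)-(2.158) used in the next section), and the anchor layout is made up:

```python
import math

def toa_crlb(anchors, theta, sigma):
    """Lower bound on the position MSE for Gaussian TOA ranging:
    trace of the inverse 2x2 Fisher information matrix."""
    a = b = c = 0.0                       # FI entries [[a, b], [b, c]]
    for (ax, ay) in anchors:
        d = math.hypot(theta[0] - ax, theta[1] - ay)
        ux, uy = (theta[0] - ax) / d, (theta[1] - ay) / d
        a += ux * ux
        b += ux * uy
        c += uy * uy
    a /= sigma ** 2
    b /= sigma ** 2
    c /= sigma ** 2
    det = a * c - b * b
    return (a + c) / det                  # trace of the 2x2 inverse

anchors = [(0, 0), (500, 0), (0, 500), (500, 500), (250, 0)]
bound = toa_crlb(anchors, (250.0, 250.0), sigma=50.0)  # squared distance units
```

At the grid centre with these five anchors the bound evaluates to 6250/3 ≈ 2083.3 squared distance units, i.e. an RMSE floor of about 45.6 units for σ = 50.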

VI. EVALUATION OF THE CRLB

We calculated the CRLB for a variety of anchor set-ups and compared the results of the FU Berlin Parallel Lateration-Algorithm Simulation and Visualization Engine (LS²) to the CRLB. The CRLB was computed according to Eq. (2.157) and Eq. (2.158) in H. C. So [12], which gives a CRLB for TOA based range measurements with Gaussian range measurement



Fig. 1. The RMSE of the CRLB and three algorithms: (a) CRLB, (b) NLLS, (c) MLE-Gauss, (d) MLE-Gamma.

Fig. 2. Difference between CRLB and a simulated algorithm: (a) NLLS, (b) MLE-Gauss, (c) MLE-Gamma.

error. We use a standard deviation of 50 distance units. We simulated 5000 measurements for each discrete location on a grid of 500 × 500 distance units. Five anchors are placed in a quite hard configuration, where 4 anchors are almost on a line. The algorithms we simulated are non-linear least squares (NLLS) [17], [18], an MLE assuming a normally distributed error with standard deviation 25, and an MLE assuming a Gamma distributed error with shape 4, rate 0.08, and offset 25. The MLEs use the estimate of NLLS as the starting point and proceed by the Broyden-Fletcher-Goldfarb-Shanno algorithm to maximize the likelihood.

A. Normal Distribution

In Fig. 1, we display the distribution of the root mean squared error (RMSE) to compare the variance of the estimation error to the CRLB. We perform this comparison based on a normally distributed error with a standard deviation of 25 units. This setting is best understood. We visualise the differences between the algorithms and the CRLB in Figure 2. We display the difference of the CRLB and the RMSE of the simulated algorithms. Areas coloured green indicate that the RMSE is at least 2 distance units worse than the CRLB, areas coloured yellow indicate that the RMSE is within 2 distance units of the CRLB (we use this interval to account for numerical inaccuracies in the implementation), and areas coloured red indicate areas in which the simulated algorithm is at least 2 distance units better than the CRLB. We should not expect to see any red areas. The red areas in Fig. 2 correspond to significant improvements on So's CRLB. Especially the fact that NLLS and MLE-Gauss manage to improve on the CRLB close to the anchors and along the diagonal axis indicates that the CRLB refers to a different

metric and cannot be applied to bound the RMSE of the position estimate. A correction is not relevant to us, since we are interested in error distributions relevant to indoor scenarios.

B. Gamma distribution

A normally distributed measurement error occurs seldom in indoor localisation. The measurements are also affected by NLOS effects, and we can never measure a distance that is too short; a too-short measurement indicates a systematic error. Figure 3 displays the RMSE of each algorithm for a Gamma distributed error. In comparison to Fig. 1, we notice that the larger average error decreases the performance, especially inside the convex hull of the anchor nodes. The performance of MLE-Gamma is better than the performance of NLLS and MLE-Gauss. MLE-Gauss still performs slightly better than NLLS, especially in the red areas in Fig. 3. Figure 4 shows the distribution of localisations for a node at the location marked in yellow. This position was chosen where the CRLB suggests a large variance. Darker shades indicate that a position was estimated more often; white positions are never estimated. The size of the area corresponds to the variance. The magenta circle marks the centre of all estimates. The green circle has a 50 unit radius. For NLLS we see the largest shaded area, i.e. the highest variance. For MLE-Gamma, the area is more concentrated around the centre of the estimates. However, this considers only successful estimates. MLE-Gamma fails when the search starts outside of the domain of log f_n (see Proposition 2). The success rate is shown in Figure 4d: we observe failure rates between 10 % (lighter, white areas) and 45 % (darker, green areas), especially in areas where the CRLB suggests poor performance. Most other localisation algorithms can always estimate a position. The initial position is estimated using NLLS, so in case of failure, this estimate can be used in practice with good results.


Fig. 3. Comparing RMSE for gamma distributed errors: (a) NLLS, (b) MLE-Gauss, (c) MLE-Gamma, (d) comparing NLLS to MLE-Gauss.

Fig. 4. Distribution of locations: (a) NLLS, (b) MLE-Gauss, (c) MLE-Gamma, (d) failure distribution of MLE-Gamma.

VII. CONCLUSION

The lessons are: (1) Experiments seem to refute published CRLBs, especially in areas close to anchors. (2) The indoor localisation problem is hard to model and analyse using statistical methods, since the calculations are involved or not possible. Simulation with LS² avoids these problems and provides results that are more useful in practice. (3) The results of a statistical analysis don't convey the information of interest. We are usually interested in minimising the expected absolute localisation error, but the CRLB provides a lower bound on the variance of the estimated position. Both values are related by the mean squared error (MSE), but the relation is seldom exploitable. (4) An MLE fitted to the measured error distribution gives excellent results in simulation. Future work includes testing the MLE on real data and exploring improvements on fitting actual error distributions. In practice, finding and modelling the error distribution seems to be the main obstacle.

REFERENCES

[1] D. Torrieri, "Statistical theory of passive location systems," IEEE Transactions on Aerospace and Electronic Systems, vol. 20, no. 2, pp. 183-198, Mar. 1984.
[2] J. Chaffee and J. Abel, "GDOP and the Cramer-Rao bound," in PLANS. IEEE, 1994, pp. 663-668.
[3] R. M. Vaghefi and R. M. Buehrer, "Cooperative sensor localization with NLOS mitigation using semidefinite programming," in WPNC. IEEE, 2012, pp. 13-18.
[4] I. Ziskind and M. Wax, "Maximum likelihood localization of multiple sources by alternating projection," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 36, no. 10, pp. 1553-1560, Oct. 1988.
[5] S. O. Dulman, A. Baggio, P. J. Havinga, and K. G. Langendoen, "A geometrical perspective on localization," in MELT'08. ACM Press, 2008, pp. 85-90.
[6] B. Yang and J. Scheuing, "Cramer-Rao bound and optimum sensor array for source localization from time differences of arrival," in ICASSP '05, vol. 4. IEEE, 2005, pp. iv/961-iv/964.
[7] R. Malaney, "A location enabled wireless security system," in GLOBECOM '04, vol. 4, 2004, pp. 2196-2200.
[8] H. Shi, X. Li, Y. Shang, and D. Ma, "Cramer-Rao bound analysis of quantized RSSI based localization in wireless sensor networks," in Parallel and Distributed Systems, 2005. Proceedings. 11th International Conference on. IEEE, 2005.
[9] T. Jia and R. M. Buehrer, "A new Cramer-Rao lower bound for TOA-based localization," in MILCOM 2008. IEEE, 2008, pp. 1-5.
[10] M. L. McGuire and K. N. Plataniotis, Accuracy Bounds for Wireless Localization Methods. Hershey, NY: Information Science Reference, 2009, ch. 15, pp. 380-405.
[11] L. Cheng, S. Ali-Löytty, R. Piché, and L. Wu, Mobile Tracking in Mixed Line-of-Sight/Non-Line-of-Sight Conditions: Algorithms and Theoretical Lower Bound. Hoboken, NJ, USA: John Wiley & Sons, 2012, ch. 21, pp. 685-708.
[12] H. C. So, Source Localization: Algorithms and Analysis. Hoboken, NJ, USA: John Wiley & Sons, 2012, ch. 2, pp. 25-66.
[13] E. L. Lehmann and G. Casella, Theory of Point Estimation, 2nd ed. New York: Springer, 1998.
[14] C. R. Rao, "Information and the accuracy attainable in the estimation of statistical parameters," Bulletin of the Calcutta Mathematical Society, vol. 37, pp. 81-89, 1945.
[15] H. Cramér, Mathematical Methods of Statistics. Princeton, NJ, USA: Princeton University Press, 1946.
[16] T. Hillebrandt, H. Will, and M. Kyas, "The membership degree Min-Max localisation algorithm," 2013, accepted for publication in Journal of Global Positioning Systems.
[17] S. Venkatesh and R. M. Buehrer, "A linear programming approach to NLOS error mitigation in sensor networks," in IPSN, J. A. Stankovic, P. B. Gibbons, S. B. Wicker, and J. A. Paradiso, Eds. ACM Press, 2006, pp. 301-308.
[18] I. Güvenç, C.-C. Chong, and F. Watanabe, "Analysis of a linear least-squares localization technique in LOS and NLOS environments," in VTC. IEEE, 2007, pp. 1886-1890.
[19] S. A. Zekavat and R. M. Buehrer, Eds., Handbook of Position Location. Hoboken, NJ, USA: John Wiley & Sons, 2012.


2013 International Conference on Indoor Positioning and Indoor Navigation

Efficient and Adaptive Generic Object Detection Method for Indoor Navigation

Nimali Rajakaruna
Department of Electrical and Computer Engineering, Curtin University, Perth, Western Australia
[email protected]

Iain Murray
Department of Electrical and Computer Engineering, Curtin University, Perth, Western Australia
[email protected]

Abstract—Real-time object detection and avoidance is an important part of indoor and outdoor wayfinding and navigation for people with vision impairment in unfamiliar environments. The objects and their arrangement in both indoor and outdoor settings occasionally change; even stationary objects, such as furniture, may move from time to time. Additionally, providing detailed geometric models for all objects in a single room can be a very difficult and computationally intensive task, and when an object is replaced by another of similar function, completely new models may have to be developed. Hence, there is a need for a highly efficient method of detecting generic objects, which will help in detecting objects in a changing environment. This paper presents an image-based object detection algorithm based on stable features such as edges and corners instead of appearance features (color, texture, etc.). A Probabilistic Graphical Model (PGM) is used for feature extraction, and a generic geometric model is built to detect objects by combining edges and corners. Furthermore, additional geometric information is employed to distinguish doors from other objects of similar size and shape (e.g. bookshelf, cabinet, etc.). Current research shows that generic object recognition is one of the most difficult and least understood tasks in computer vision.

Keywords—Generic Objects; Hidden Markov Models; Probabilistic Graphical Models

I. INTRODUCTION

The problem of generic object recognition (object categorization) has traditionally been difficult for computer vision systems. The main reason for this difficulty is the variability of shape within a class: different objects vary widely in appearance, and it is difficult to capture the essential shape features that characterize the members of one category and distinguish them from another. Early vision systems [1-3] could perform specific object recognition reasonably well but did not fare as well on identifying the natural class of an object. Recent research work [4] has led to systems that can learn a representation for different object classes and achieve good generic object class recognition. Most research work on object detection has been dominated by the use of appearance-based methods for object recognition. Among the most popular of these was the eigenface method [1]

which forms the basis of numerous appearance-based object recognition schemes. Pentland et al. [5] approach the problem of face recognition under general viewing conditions with a view-based multiple-individual eigenspace technique. A maximum-likelihood estimation framework was introduced by Moghaddam et al. [5], who use probability densities to formulate visual search and target detection. Black and Jepson address general affine transformations [6] by defining a subspace constancy assumption for eigenspaces. They formulate a continuous optimization problem to obtain reconstructions having the same brightness as the corresponding image pixels. In addition, the authors proposed a multi-scale eigenspace representation and a coarse-to-fine matching strategy in order to account for large affine transformations between eigenspace and image. This work was later extended with a robust principal component analysis method that can be used to automatically learn linear models from data that may be contaminated by outliers. The current literature on object recognition has two main approaches: 1) feature-based methods, which involve the use of spatial arrangements of extracted features such as edge elements or junctions, and 2) brightness-based methods, which make direct use of pixel brightness. Early work on feature-based methods used Fourier descriptors. Huttenlocher et al. developed methods for shape matching based on edge detection and the Hausdorff distance. This paper focuses on a feature-based implementation, which uses the wavelet transform for feature extraction. A standard approach for multi-class object detection uses one detector per object class being detected. In such an approach, the statistics of both object appearance and “non-object” appearance are represented using a product of histograms. Each histogram represents the joint statistics of a subset of 2D wavelet coefficients and their position on the object.
The detection is performed by exhaustive search in scale and position.


The proposed method employs a wavelet transformation for feature extraction in conjunction with a probabilistic (hidden Markov) model to estimate contour position, deformation, color and other hidden aspects. It generates a maximum a posteriori estimate given observations in the current frame and prior contour information from previous frames. The HMM provides globally optimal solutions for contour adjustment via the Viterbi algorithm.

II. BACKGROUND

A. Wavelet Transformation

Using wavelets for feature-based extraction and representation of images provides an efficient solution to the given problem. A number of features are extracted from raw images based on the wavelet of [7]. The features of an image, such as the edges of an object, are captured by the wavelet coefficients in the low-pass and high-pass sub-bands [7]. Even though there are multiple approaches to object classification in images, this representation provides an efficient, globally optimal solution. Features, and the spatial relationships among them, play an important role in characterizing image contents, because they convey more semantic meaning [8].
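As an illustration of the sub-band decomposition described here, a minimal single-level 2D Haar transform (the same basis the proposed method later uses for feature extraction) can be sketched in a few lines of NumPy; the input image is synthetic and the function is a simplified, illustrative implementation, not the paper's actual code:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition of a grayscale image
    (even dimensions assumed) into LL, LH, HL, HH sub-bands."""
    img = img.astype(float)
    # 1D Haar along rows: average (low-pass) and difference (high-pass)
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # 1D Haar along columns of each intermediate result
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar2d(img)
print(LL.shape)  # each sub-band is half the size in each dimension: (4, 4)
```

Because the Haar basis is orthonormal, the total energy of the four sub-bands equals that of the original image, which is why the low-pass coefficients can serve as a compact yet faithful feature representation.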

Hidden Markov Models are a widespread approach to probabilistic sequence modeling: they can be viewed as stochastic generalizations of finite-state automata, where both transitions between states and generation of output symbols are governed by probability distributions [9]. Originally, these models were almost exclusively applied in the speech recognition context; only in the last decade have they been widely used for other applications, such as handwritten character recognition, DNA and protein modeling, gesture recognition, and behavior analysis and synthesis. Although HMMs have been widely applied to classifying planar objects, their use in generic object recognition has received little attention, and only a few papers exploring this research direction have appeared in the literature.

In this paper, a method based on wavelet coefficients in low-pass bands is proposed for image classification. This decision was taken because the system is intended for indoor navigation with IMUs. After an image is decomposed by the wavelet transform, its features can be characterized by the distribution of histograms of wavelet coefficients.

In this paper an HMM based approach is proposed, which explicitly considers all the information contained in the object. The image is scanned in a raster fashion with a square window of fixed size, obtaining a sequence of overlapping sub-images. For each sub-image, wavelet coefficients are computed, discarding the less significant ones. The collected wavelet features associated with each sub-image are then modeled using an HMM. Here the observation and hidden layers are designed according to the difficulty of generic object identification, and weak classifiers are used to model the hidden layers. In the modeling, particular care is devoted to the initialization of the training procedure, which is a crucial factor because of the locality of the optimization procedure, and to the model selection issue, i.e. the problem of choosing the topology and the number of states of the HMM.
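The raster scanning just described can be sketched as follows; the window size and step (overlap) are illustrative values, not the paper's actual settings:

```python
import numpy as np

def raster_subimages(img, win=16, step=8):
    """Scan the image in raster fashion with a square window of
    fixed size `win` and overlap (win - step), yielding the
    sequence of overlapping sub-images used to build the HMM
    observation sequence."""
    h, w = img.shape
    seq = []
    for y in range(0, h - win + 1, step):       # top-to-bottom
        for x in range(0, w - win + 1, step):   # left-to-right
            seq.append(img[y:y + win, x:x + win])
    return seq

img = np.zeros((64, 64))
seq = raster_subimages(img)
print(len(seq))  # 7 window positions per axis -> 49 overlapping sub-images
```

Each sub-image in `seq` would then be passed through the wavelet transform, with only the most significant coefficients retained as its local descriptor.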

The coefficients are projected onto the x and y directions respectively. For different images, the distribution of histograms of wavelet coefficients in the low-pass bands is substantially different, whereas the distribution in the high-pass bands differs much less, which makes classification based on it unreliable. This paper therefore presents a method for image classification based on wavelet coefficients in low-pass bands only.

A strategy similar to that proposed in this paper has recently been applied by the authors in the context of face recognition [10], showing promising results. Assuming a priori equiprobable classes, an unknown sequence is classified into the class whose model shows the highest probability (likelihood) of having generated the sequence (the well-known maximum likelihood (ML) classification rule).
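The ML classification rule can be sketched as follows: each class is represented by a Gaussian-emission HMM λ = (A, μ, σ², π), and an unknown sequence is assigned to the class whose model attains the highest log-likelihood, computed with the forward procedure. All parameter values below are illustrative, not trained values:

```python
import numpy as np

def log_forward(obs, A, means, variances, pi):
    """Log-likelihood log P(O | lambda) of a 1-D observation sequence
    under a Gaussian-emission HMM, via the forward procedure
    (log-sum-exp for numerical stability)."""
    def log_gauss(o, mu, var):
        return -0.5 * (np.log(2 * np.pi * var) + (o - mu) ** 2 / var)

    log_alpha = np.log(pi) + log_gauss(obs[0], means, variances)
    for o in obs[1:]:
        trans = log_alpha[:, None] + np.log(A)          # N x N predecessor scores
        log_alpha = (np.logaddexp.reduce(trans, axis=0)
                     + log_gauss(o, means, variances))
    return np.logaddexp.reduce(log_alpha)

# Two hypothetical class models, "low" and "high", sharing A and pi
A = np.array([[0.9, 0.1], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
models = {"low":  (A, np.array([0.0, 1.0]), np.array([1.0, 1.0]), pi),
          "high": (A, np.array([4.0, 5.0]), np.array([1.0, 1.0]), pi)}

obs = np.array([4.2, 4.8, 5.1, 4.5])
# ML rule: pick the class whose model best explains the sequence
best = max(models, key=lambda c: log_forward(obs, *models[c]))
print(best)  # -> high
```

With equiprobable priors, comparing likelihoods is equivalent to comparing posteriors, which is exactly the rule stated above.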

The nodes can then be represented by the distribution of histograms of these wavelet coefficients. Most applications represent images using low-level visual features, such as color, texture, shape and spatial layout, in a very high-dimensional feature space, either globally or locally. However, the most popular distance metrics, for example the Euclidean distance, cannot guarantee that the contents are similar even if their visual features are very close in the high-dimensional feature space.

A discrete-time Hidden Markov Model can be viewed as a Markov model whose states cannot be explicitly observed: each state has an associated probability distribution function, modeling the probability of emitting symbols from that state. More formally, an HMM is defined by the following entities [11]:

B. Hidden Markov Model based Contour Detection

The proposed method is based on a Hidden Markov Model (HMM). This model has two advantages: it is no longer necessary to select training data, and a new method for generic object recognition is obtained.




• S = {S_1, S_2, ..., S_N}, the finite set of possible hidden states;
• the transition matrix A = {a_ij, 1 ≤ i, j ≤ N}, representing the probability of going from state S_i to state S_j,
  a_ij = P(q_{t+1} = S_j | q_t = S_i), 1 ≤ i, j ≤ N, (1)
  with a_ij ≥ 0 and Σ_{j=1..N} a_ij = 1;
• the emission matrix B = {b(o|S_j)}, indicating the probability of emitting symbol o when the system state is S_j; in this paper continuous HMMs are employed, and b(o|S_j) is represented by a Gaussian distribution, i.e.
  b(o|S_j) = N(o; μ_j, Σ_j), (2)
  where N(o; μ, Σ) denotes a Gaussian density with mean μ and covariance Σ, evaluated at o;
• π = {π_i}, the initial state probability distribution, representing the probabilities of the initial states, i.e.
  π_i = P(q_1 = S_i), 1 ≤ i ≤ N, (3)
  with π_i ≥ 0 and Σ_{i=1..N} π_i = 1.
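The entities above can be instantiated and the stochastic constraints of Eqs. (1)-(3) checked numerically; the 3-state parameter values below are purely illustrative:

```python
import numpy as np

# Illustrative 3-state HMM lambda = (A, B, pi) with Gaussian emissions
A = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])      # a_ij = P(q_{t+1} = S_j | q_t = S_i)
pi = np.array([0.6, 0.3, 0.1])          # pi_i = P(q_1 = S_i)
means = np.array([0.0, 2.0, 5.0])       # mu_j of b(o|S_j) = N(o; mu_j, var_j)
variances = np.array([1.0, 0.5, 2.0])

# Constraints from Eqs. (1) and (3): non-negativity, rows summing to 1
assert (A >= 0).all() and np.allclose(A.sum(axis=1), 1.0)
assert (pi >= 0).all() and np.isclose(pi.sum(), 1.0)

def b(o, j):
    """Gaussian emission density of Eq. (2), evaluated at o for state S_j."""
    return np.exp(-0.5 * (o - means[j]) ** 2 / variances[j]) \
           / np.sqrt(2 * np.pi * variances[j])

print(round(b(0.0, 0), 3))  # density of state S_1's emission at o = 0 -> 0.399
```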

For convenience, we denote an HMM by the triplet λ = (A, B, π).

The training of the model, given a set of sequences [12], is usually performed using the standard Baum-Welch re-estimation, which determines the parameters A, B, π that maximize the probability P(O|λ). In this paper the training procedure is stopped after convergence of the likelihood. The evaluation step, i.e. the computation of the probability P(O|λ) given a model λ and a sequence O to be evaluated, is performed using the forward-backward procedure. The value of the system is that it performs satisfactorily even if the number of views per object used for training is drastically reduced, which benefits the intended indoor navigation application. HMMs have been widely applied to several computer vision and pattern recognition problems, whereas a systematic analysis of their behavior in this context is missing from the literature.

III. PROPOSED METHOD

The strategy used to obtain the data sequence from an object image consists of three steps. First, the image is converted from the color format to the grey-level format. This is important to assess the capability of the proposed approach in capturing the geometry of the object, rather than the color. In the second step, a sequence of sub-images of fixed dimension is obtained by sliding a square window of fixed size, with a predefined overlap, over the object image in a raster-scan fashion. In this way we capture relevant information about the local geometry of the object to be encoded: the sequence of subsequent windows summarizes the local object structure. Finally, the third step consists in applying the wavelet transform to each gathered sub-image. The proposed algorithm computes the coefficients representing the image with a normalized two-dimensional Haar basis, sorting these coefficients in order of decreasing magnitude. Subsequently, the first M coefficients (i.e., the coefficients with the highest magnitude) are retained, performing a lossy image (sub-image) compression. As in image compression, the retained coefficients represent the most significant information.

[Figure 1: Algorithmic Components for the proposed architecture. The pipeline runs from the Video Stream through an Image Level (color conversion to gray format, sub-image acquisition, wavelet transform for each sub-image, image feature extraction using the wavelet transformation) to an Object Level (HMM for contour detection, weak classifier).]

Hence, we use them to recognize the objects. In particular, the number of retained coefficients determines the dimensionality of the observation vector (i.e. the local descriptor), while its length is determined by the number of sub-images gathered. By applying this step to all the sub-images of the sequence, we finally obtain the actual observation sequence. Its dimensionality will be M · T, where M is the number of wavelet coefficients retained and T is the number of sub-images gathered in the scanning operation. The recognition problem is then to identify an object given an aspect.

The basic idea is to perform a ‘‘decreasing’’ learning, starting each training session from an informative situation derived from the previous training phase. More specifically, the procedure consists in starting the model training with a large number of states, running the estimation algorithm and, after convergence, evaluating the chosen model selection criterion for that model; in this case the BIC criterion was used. Then, the importance of each model state is determined, using the stationary distribution of the Markov chain associated with the HMM. Finally, the ‘‘least probable’’ state is pruned, and this


configuration is taken as the initial situation from which the training procedure is started again.


In this way, each training session is started from a ‘‘nearly good’’ estimate, and the use of weak classifiers helps to distinguish generic objects from specific objects. The key component of the object representation is the weak classifier. A weak classifier can be regarded as a conjunction of a set of single-feature classifiers, where a single-feature classifier is defined by an edge feature (a location and orientation) along with a tolerance threshold and its parity. A single-feature classifier returns true if the distance from the specified location to the closest edge with the specified orientation satisfies the tolerance (i.e. the distance should be sufficiently small if the parity is positive and sufficiently large if the parity is negative). A weak classifier returns true if all its constituent single-feature classifiers return true. This approach yields better estimates for the model, increasing the efficacy of the proposed approach. Moreover, by starting from a good situation, the number of iterations required by the training algorithm to converge is reduced, resulting in a less computationally demanding procedure. Learning is finally performed using the standard Baum-Welch procedure, stopping after likelihood convergence.
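A minimal sketch of the weak classifier just described; the edge list, locations, tolerances, and the angular matching threshold are all hypothetical illustration values:

```python
import math

def single_feature(edges, loc, orientation, tol, parity, ang_tol=0.3):
    """Single-feature classifier: tests the distance from `loc` to the
    closest edge element of similar orientation. edges is a list of
    (x, y, theta). Returns True when the distance is within tolerance
    (parity +1) or beyond it (parity -1)."""
    dists = [math.hypot(x - loc[0], y - loc[1])
             for x, y, theta in edges
             if abs(theta - orientation) <= ang_tol]
    d = min(dists) if dists else float("inf")
    return d <= tol if parity > 0 else d > tol

def weak_classifier(edges, features):
    """Conjunction: fires only if every single-feature classifier fires."""
    return all(single_feature(edges, *f) for f in features)

edges = [(10, 10, 0.0), (30, 12, 1.5)]   # toy edge map: (x, y, orientation)
features = [((9, 9), 0.0, 3.0, +1),      # expect a nearby horizontal edge
            ((9, 9), 1.5, 5.0, -1)]      # expect NO vertical edge nearby
print(weak_classifier(edges, features))  # -> True
```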




IV. CONCLUSION

The proposed method delivers a novel approach to generic object detection. The next step is to validate and test the algorithm using real images.

REFERENCES

1. Manjunath, B.S., R. Chellappa, and C. von der Malsburg, A feature based approach to face recognition, in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1992.
2. Mel, B.W., SEEMORE: combining color, shape, and texture histogramming in a neurally inspired approach to visual object recognition. Neural Computation, 1997. 9(4): pp. 777-804.
3. Olvera-López, J.A., J.A. Carrasco-Ochoa, and J.F. Martínez-Trinidad, A new fast prototype selection method based on clustering. Pattern Analysis and Applications. 13(2): pp. 131-141.
4. Fei-Fei, L., R. Fergus, and P. Perona, A Bayesian approach to unsupervised one-shot learning of object categories, in Proc. Ninth IEEE International Conference on Computer Vision (ICCV), 2003.


5. Moghaddam, B., T. Jebara, and A. Pentland, Bayesian face recognition. Pattern Recognition, 2000. 33(11): pp. 1771-1782.
6. Black, M.J. and A.D. Jepson, Apparatus and method for identifying and tracking objects with view-based representations. Google Patents, 2003.
7. Antonini, M., et al., Image coding using wavelet transform. IEEE Transactions on Image Processing, 1992. 1(2): pp. 205-220.
8. Li, H., B. Manjunath, and S.K. Mitra, Multisensor image fusion using the wavelet transform. Graphical Models and Image Processing, 1995. 57(3): pp. 235-245.
9. Juang, B.H. and L.R. Rabiner, Hidden Markov models for speech recognition. Technometrics, 1991. 33(3): pp. 251-272.
10. Bicego, M., U. Castellani, and V. Murino, Using Hidden Markov Models and wavelets for face recognition, in Proc. 12th International Conference on Image Analysis and Processing, 2003.
11. Rabiner, L.R., A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 1989. 77(2): pp. 257-286.
12. Arandiga, F., et al., Edge detection insensitive to changes of illumination in the image. Image and Vision Computing, 2010. 28(4): pp. 553-562.


Hidden Markov Based Hand Gesture Classification and Recognition Using an Adaptive Threshold Model

Jeroen Mechanicus∗, Vincent Spruyt∗†, Marc Ceulemans∗, Alessandro Ledda∗ and Wilfried Philips†

∗ Faculty of Applied Engineering, Electronics-ICT, University of Antwerp, Paardenmarkt 92, 2000 Antwerpen, Belgium; web: http://www.cosys-lab.be/

Abstract—Traditional approaches to gesture recognition often experience an inherent time delay, as temporal gestures, such as a waving hand, are only recognized once the gesture has been completed. Furthermore, most systems use specialized hardware or depth cameras in order to detect and segment hands. We propose a robust and complete, real-time gesture recognition system that can be used in unconstrained situations without any time delay, and that only uses a simple, monocular webcam. Gestures are recognized during their execution, allowing for real-time interaction. Furthermore, our system can cope with changing illumination and moving backgrounds, and is able to automatically recover from tracking errors. Gestures can vary in shape, duration and velocity, and are recognized with low computational cost. In this paper, we show the robustness of our algorithm by comparing our results with traditional gesture recognition approaches, and illustrate its effectiveness in real-life situations by using it to control the volume of an Arduino based car radio.

† TELIN-IPI-iMinds, Ghent University, St. Pietersnieuwstraat 41, 9000 Gent, Belgium; web: http://telin.UGent.be/ipi/

(Research funded by a PhD grant of the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen).)

I. INTRODUCTION

Gesture recognition and hand pose detection are amongst the most challenging tasks in current human-computer interaction (HCI) research. With the advent of low-cost depth-sensing devices, research has mostly shifted from monocular object detection to interpretation of depth maps. However, due to their dependency on infrared signals, these devices tend to fail in direct sunlight, and cannot be used in critical environments where other infrared hardware co-exists. Therefore, a robust and real-time gesture recognition system that is able to recognize gestures using a simple monocular camera would be able to overcome these problems, and could still be combined with depth-sensing information when available. Gestures can either be static or dynamic. Static gestures occur when the user assumes a certain pose or hand configuration. Recognizing the exact configuration reduces to a classification problem, and can be solved by means of spatial classification and traditional pattern recognition approaches. Dynamic gestures, on the other hand, represent a temporal motion pattern, such as a waving hand. While the problem of static hand pose recognition has received a lot of attention in the research community [1], dynamic gesture recognition is still considered a challenging task [2] due to the difficulties of isolating meaningful gestures from continuous hand motion. Gesture spotting is a difficult challenge, mainly due to the spatial variability of hand gestures, since the same gesture can have a very distinct appearance when executed by different users, or even when executed by the same user at different instances in time. Most gesture recognition techniques therefore employ a list of empirically defined constraints and rules to aid in the task. Yang and Ahuja [3] proposed the well-known motion template technique, in which a time-delayed neural network is used for gesture classification. The main disadvantage of their approach, however, is the introduced time delay, which is undesirable in real-time solutions. Yoon et al. [4] proposed the use of a Hidden Markov Model (HMM) to model the spatio-temporal variance that is inherent to human gestures. They employ simple color and motion detection to extract the hand locations, which are then clustered to obtain hand trajectories. The resulting trajectories are classified by an HMM. While their results illustrate the potential of an HMM based approach, the proposed method requires the user to intentionally stop moving his hands for several seconds, right before and after the gesture. Furthermore, due to their dependency on a simple color based blob detector, the proposed solution tends to fail in uncontrolled lighting situations. In speech recognition, word spotting is usually accomplished by classifying the sequence of words with a separate HMM, called a garbage model, that is trained with acoustic non-keyword patterns [5]. If the likelihood of this garbage model is higher than the likelihood of the normally trained models, the keyword is discarded. Lee and Kim [6] introduced a technique to automatically detect the beginning and end of a gesture in continuous hand motion sequences.
Such temporal segmentation is often referred to as “gesture spotting”, and is one of the most important tasks of a robust gesture recognition system. However, their proposal can be classified as a backward spotting technique, which inherently introduces a delay in detection. Backward spotting methods start classifying a gesture by means of the Viterbi algorithm [7], as soon as a gesture endpoint has been detected, while forward spotting algorithms continuously try to classify the current gesture, until an endpoint has been detected.
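Backward spotting's reliance on Viterbi decoding can be illustrated with a minimal discrete-emission implementation; the two-state model below is a toy example, not one of the paper's trained gesture models:

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden state path for a discrete observation
    sequence, computed in the log domain. A: N x N transition matrix,
    B: N x K emission matrix, pi: initial state probabilities."""
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    N, T = A.shape[0], len(obs)
    delta = logpi + logB[:, obs[0]]
    psi = np.zeros((T, N), dtype=int)            # backpointers
    for t in range(1, T):
        scores = delta[:, None] + logA           # predecessor scores, N x N
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy model: state 0 mostly emits symbol 0, state 1 mostly emits symbol 1
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.8, 0.2], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
print(viterbi([0, 0, 1, 1, 1], A, B, pi))  # -> [0, 0, 1, 1, 1]
```

In backward spotting, this decode runs only once the endpoint is known; forward spotting instead re-evaluates candidate models as each new observation arrives, which is what removes the delay.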

978-1-4673-1954-6/12/$31.00 © 2012


Since an almost infinite set of non-gesture patterns could be obtained, it is difficult to train a garbage model for gesture recognition purposes. Instead, Lee and Kim propose to rely on the internal segmentation property of Hidden Markov Models, which says that the states and transitions in a trained HMM represent sub-patterns of a larger gesture. Therefore, the garbage model is an HMM that is trained on all states copied from the individual gesture models. This garbage model then yields an increased likelihood for any kind of gesture or non-gesture that is a combination of any of the states of any of the trained Hidden Markov Models, in any order. To achieve this, the threshold model is defined as an ergodic HMM, in which all states are fully connected to each other. However, the main disadvantage of this approach is the inherent time delay between the start of a gesture and its recognition. Only after the gesture endpoint has been spotted can the Viterbi algorithm be used to find the model that best explains the sequence of observations between the detected start point and endpoint. Elmezain and Al-Hamadi [8] proposed an HMM based gesture recognition system that is able to automatically determine the start points and endpoints of a temporal gesture. However, their method assumes that each gesture ends with a straight line that can be used as a zero-codeword. Furthermore, as many gestures contain straight lines, a constant velocity assumption is made, to avoid breaking up single gestures into their parts. Finally, they use a depth map that is obtained by a stereo camera setup, in order to aid hand detection and segmentation. Recently, Elmezain et al.
[9] proposed an improvement upon the threshold model idea, by introducing a forward gesture spotting method that is able to execute hand gesture segmentation and recognition simultaneously, therefore eliminating any time delay. Once a start point of a gesture is detected, the segmented part of the gesture, up till the current hand location, is recognized accumulatively. Once the endpoint of the gesture has been detected, the complete segment is classified again by the Viterbi algorithm. However, they use a depth sensor to aid in hand detection and segmentation. This greatly increases tracking stability and thus simplifies the gesture recognition problem. Kurakin et al. [10] proposed an action graph based technique for gesture recognition. Action graphs share similar robust properties with the standard HMM, but require less training data. On the other hand, inference in action graphs can be slower than inference in their HMM counterparts. In their work, Kurakin et al. use a depth sensor to obtain a depth map, which is thresholded to obtain an accurate hand segmentation. In this paper, we propose a complete, real-time gesture recognition system, that is able to detect and track hands in unconstrained environments, using a simple monocular camera. Inspired by the work of Elmezain et al. [9], we propose several enhancements to current gesture spotting and classification methods, resulting in a robust and real-time gesture recognition system that can be used in unconstrained situations without any time delay. Furthermore, our system can cope with changing illumination, moving backgrounds, and is able to automatically recover from tracking errors. Gestures can vary in shape, duration and velocity, and are recognized with low computational cost.
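The ergodic threshold (garbage) model described earlier can be sketched by copying the states of all trained gesture models and fully connecting them. The uniform redistribution of the non-self-transition mass is a simplifying assumption of this sketch, and the toy gesture models are illustrative only:

```python
import numpy as np

def threshold_model(gesture_models):
    """Build an ergodic threshold model from the states of all trained
    gesture HMMs: each copied state keeps its self-transition
    probability and spreads the remainder uniformly over all other
    states, making every state reachable from every other."""
    self_probs = np.concatenate([np.diag(A) for A, _ in gesture_models])
    emissions = np.vstack([B for _, B in gesture_models])  # states keep emissions
    n = len(self_probs)
    A_thr = np.empty((n, n))
    for i, p in enumerate(self_probs):
        A_thr[i, :] = (1.0 - p) / (n - 1)   # fully connected (ergodic)
        A_thr[i, i] = p                     # original self-transition kept
    return A_thr, emissions

# Two toy 2-state gesture models (A, B) with discrete emissions
g1 = (np.array([[0.7, 0.3], [0.0, 1.0]]), np.array([[0.9, 0.1], [0.1, 0.9]]))
g2 = (np.array([[0.6, 0.4], [0.0, 1.0]]), np.array([[0.5, 0.5], [0.2, 0.8]]))
A_thr, B_thr = threshold_model([g1, g2])
print(A_thr.shape)  # (4, 4): all copied states, fully connected
```

Because the threshold model can reach any sub-pattern from any other, it scores well on arbitrary state combinations, which is exactly what lets it act as a likelihood threshold against the individual gesture models.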

In order to test our proposed solution in a real-life situation, we implemented a hardware module that allows a vehicle driver to control his radio using natural gestures. Zobl et al. [11] suggested that such a system could reduce driver distraction, resulting in fewer vehicle crashes. Our radio module combines an Si4703 FM tuner module with an embedded Arduino platform. The remainder of this paper is organized as follows: Section II describes the hand detection and tracking framework used to construct motion paths. Section III explains our HMM based approach to classifying the motion trajectories into distinct gestures. In Section IV, the hardware setup and integration with an FM radio module is described. Finally, Section V discusses the evaluation and results of our approach.

II. HAND DETECTION AND TRACKING

A. Hand detection

Hand detection in monocular video poses a challenging problem because of the high number of degrees of freedom in a human hand [12]. Due to its articulated nature, a hand can take on almost any shape, preventing traditional object detection methods, such as Haar classifiers, from accurately learning a general hand shape. In this paper, we build upon our earlier work as described in [13], where we proposed a random forest based hand detector, capable of detecting human hands in real-time video sequences. The detector is scale and rotation invariant, and can be used to generate hand hypotheses to be tracked by a particle filter. A scale invariant feature detector [14] is used to obtain a vector of image patches. These image patches represent areas of maximum entropy within the image, and therefore contain the most information. Image patches are then divided by a 3 × 3 grid, and six feature descriptors are calculated for each cell in this grid. Three of these descriptors are color based, while the other three are texture descriptors. The color based descriptors represent histograms in a non-linear color space that is suited for skin detection [15], while the texture based descriptors consist of a Local Binary Pattern (LBP) descriptor [16], a normalized orientation histogram, and a simplified FREAK descriptor [17]. All feature descriptors are normalized by the scale of the image patch that resulted from the feature detector, and are rotated by the dominant gradient orientation within this patch. Similarly, the 3 × 3 grid is rotated by this dominant orientation, in order to obtain a scale and rotation invariant descriptor. During training of the random forest classifier, fifteen decision trees are learned to classify these image patches. Each node within each decision tree splits the dataset based on a randomly selected descriptor. By accumulating the classification results of all trees, a low-bias, low-variance classifier is obtained.
For each image patch in the training set, the offset vector to the centroid of the hand is stored. During classification of a new image patch, all decision trees then cast a probabilistic vote on the hand’s centroid location. The resulting Hough voting map, as illustrated in Figure 1, can then be used to obtain hand location hypotheses.
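The Hough voting step can be sketched as follows: each classified patch casts a weighted vote at its position plus the stored centroid offset, and the peak of the accumulated map yields a hand hypothesis. All patch data below are hypothetical:

```python
import numpy as np

def hough_votes(patches, shape):
    """Accumulate centroid votes into a Hough map. Each patch is
    ((x, y), (dx, dy), weight): its image position, the stored offset
    to the hand centroid, and the probabilistic vote weight."""
    H = np.zeros(shape)
    for (x, y), (dx, dy), w in patches:
        vx, vy = x + dx, y + dy
        if 0 <= vx < shape[1] and 0 <= vy < shape[0]:
            H[vy, vx] += w                 # weighted vote at predicted centroid
    return H

# Three hypothetical patches all voting near the same centroid (40, 25)
patches = [((30, 20), (10, 5), 0.8),
           ((50, 30), (-10, -5), 0.6),
           ((42, 22), (-2, 3), 0.9)]
H = hough_votes(patches, (60, 80))
peak = np.unravel_index(H.argmax(), H.shape)
print(peak)  # (row, col) of the strongest hand hypothesis -> (25, 40)
```

In practice the votes would be smoothed (e.g. with a Gaussian kernel) before peak extraction, so that nearby but non-identical predictions reinforce each other.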


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Fig. 1. Illustration of the Hough voting process [13]: (a) original image; (b) Hough voting map

B. Hand tracking

Each hand hypothesis generated by the hand detector is tracked using a particle filter framework. During tracking, a simple linear classifier continuously rejects false positive detections, based on temporal information such as the particle filter's variance and average velocity.

Hand tracking is based on our earlier work [18], where the Hough probability map is directly incorporated into the observation model of the particle filter. A Bayesian skin classifier is used to generate a skin likelihood map [19]. This skin likelihood is combined with simple motion detection by means of frame differencing. Furthermore, color distributions for skin and non-skin regions are updated online, allowing the particle filter to adapt to changing lighting conditions. While the offline-trained Bayesian skin classifier operates in RGB space, online color statistics are calculated in HSV space; combining both color spaces increases robustness to non-uniform illumination [18]. Finally, optical flow [20] is incorporated into the motion model of the particle filter to increase robustness in case of rapid, non-linear motion. A partitioned sampling method is used to efficiently sample the search space, defined by the state partitions S1 = {x, y} and S2 = {width, height}. By solving two two-dimensional problems instead of a single four-dimensional problem, our tracking solution needs only about fifty particles to robustly track multiple hands.

III. GESTURE CLASSIFICATION

A. Hidden Markov Models

If a stochastic process has the property that the conditional probability distribution of its future state, conditioned on the current state, is independent of previous states, it can be modeled with a Markov model, such as a discrete Markov chain. In these traditional Markov models, the previous and current states are readily observable, and the model is simply defined by its state transition probabilities. However, in many situations, such as speech recognition or gesture recognition, the states themselves are not readily observable. Instead, the output, which depends on the state, is visible, while the state itself remains hidden. These situations can be modeled by a Hidden Markov Model (HMM), in which each state contains a conditional probability distribution over all output values.

Thus, a Hidden Markov Model models the joint distribution of states and observations. Relying on Bayesian theory, this corresponds to modeling the prior distribution of the hidden states and the conditional distribution of the observations given the states. The former represents the transition probabilities, while the latter represents the emission probabilities. Therefore, an HMM can be defined as follows [21]:

- A set of N states {s1, ..., sN};
- A set of T observations O = {o1, ..., oT};
- An N × N state transition matrix A = {aij}, where aij is the probability of transitioning from state si to state sj;
- An observation probability matrix B = {bjk}, where bjk is the emission probability of generating symbol ok from state sj;
- Initial state probabilities Π = {πj}, j = 1, ..., N.

Training the HMM corresponds to estimating the HMM parameters, namely the transition and emission probabilities, from training data consisting of nothing but observation sequences. Two widely known methods to train an HMM are the Baum-Welch algorithm and Viterbi training. The former is an implementation of a generalized Expectation-Maximization method that yields a maximum likelihood estimate of the parameters, while the latter updates the HMM parameters such that the probability of the best HMM state sequence for each training sample is maximized. While both methods have their merits, the literature shows that the Baum-Welch algorithm often outperforms Viterbi training [22].

In order to classify a gesture, given a sequence of observations, the Forward-Backward algorithm [8] is used. The Forward-Backward algorithm is a dynamic-programming-based inference algorithm for HMMs that calculates the posterior probability of each state, given a sequence of observations. This method can therefore be used to evaluate the probability that a particular sequence of symbols was produced by a particular model.

Finally, several HMM topologies exist. A fully connected structure, in which any state can be reached from any other state, is called an ergodic HMM. A second widely used topology is the left-right model, where each state can transition to itself or to any following state. A third topology is the left-right banded model, in which the current state can only reach itself or the next state. In this paper, only the ergodic and the left-right banded topologies are used.

B. Feature extraction

Based on the tracked hand location, several features could be used to train the HMM for gesture classification. Three widely used features are the hand location itself, the motion's orientation, and the hand velocity.
Earlier research showed that orientation yields the best results in terms of accuracy and performance [21], [4].
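The model evaluation used for classification in Section III-A (the forward pass of the Forward-Backward algorithm) can be sketched for a discrete HMM as follows. The toy matrices are purely illustrative and are not the paper's trained models.

```python
import numpy as np

def forward_likelihood(A, B, pi, obs):
    """P(obs | model) via the forward recursion of a discrete HMM.

    A  : (N, N) state transition matrix, A[i, j] = P(s_j at t+1 | s_i at t)
    B  : (N, K) observation matrix,      B[j, k] = P(symbol k | s_j)
    pi : (N,)   initial state probabilities
    obs: observation symbols as ints in 0..K-1
    """
    alpha = pi * B[:, obs[0]]          # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # induction: propagate, then weight
    return alpha.sum()                 # termination: sum over final states

# Toy two-state left-right banded model (illustrative numbers only).
A = np.array([[0.8, 0.2],
              [0.0, 1.0]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([1.0, 0.0])
p = forward_likelihood(A, B, pi, [0, 0, 1])  # likelihood of the sequence
```

For long sequences this is computed in log space or with per-step scaling to avoid numerical underflow.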


A hand trajectory can be described as a sequence of centroid locations C = {c0, c1, ..., ct}, each defined as a position ct = (xt, yt) at time t. The orientation between consecutive centroid locations can then easily be calculated as

    θt = arctan((yt − yt−1) / (xt − xt−1))    (1)

Due to the discrete nature of the HMM, the orientations are discretized into bins. A large number of bins allows for very fine-grained gestures, but increases the dimensionality of the problem and thus the risk of overfitting. A small number of bins, on the other hand, allows for better generalization, which decreases the risk of overfitting, but also decreases the discriminative power of the model.

In the literature, the number of bins used varies from five [6] to eighteen [9]. Theoretically, the maximum number of bins is limited only by the amount of training data available, as an infinite amount of training data would completely overcome the overfitting problem.

In our work, eight bins were empirically determined to be a robust choice; small changes to this parameter do not have any significant impact on the performance of the system.

Due to image noise, changing illumination, or cluttered backgrounds, the centroid location given by the hand tracker tends to fluctuate between frames. To increase the robustness of the gesture recognition system, we apply a simple averaging filter to the tracking result before calculating the orientation feature. In our work, a window size of three was found to sufficiently smooth the resulting centroid coordinates.

Furthermore, in order to decrease computational complexity, and to ensure invariance to hand velocity and distance from the camera, a centroid location is only recorded if its offset from the previous centroid location is large enough. To decide when this offset is large enough, an adaptive threshold is calculated, based on the dimensions of the bounding box of the hand. When a hand is close to the camera, the amount of motion is significantly larger than when the hand is further from the camera while making the same gesture. Therefore, by adapting the threshold, our gesture recognition method becomes invariant to z-translations.

C. Hidden Markov Model training

To train the model, we use a multi-sequence, variable-length Baum-Welch algorithm. An important aspect of training is parameter initialization. If the transition, emission, and initial matrices are not correctly initialized, the Baum-Welch algorithm will get stuck in a local maximum instead of finding the global optimum, resulting in incorrect parameters. To get the best result, we initialize the self-transition values based on the left-right banded model. This model assumes that a state can only be reached from the previous state or from the current state itself, and can never be reached from a future state. The left-right banded model therefore represents a sequential flow from the initial state towards the final state.

Given the self-transition probabilities aii, the expected state duration di, i.e. the time spent in a state given that the previous state equals the current state, can be defined as

    di = 1 / (1 − aii)    (2)

where i is the index in the state transition matrix A. Furthermore, we assume that each state is equally represented by the observation sequence, such that di can also be calculated as

    di = T / N    (3)

where T is the average length of the gesture path (training sequences) and N is the number of states in the model. Combining equations (2) and (3) then allows us to calculate the initial self-transition probabilities as

    aii = 1 − 1 / (T / N) = 1 − N / T    (4)

Finally, since each row of the state transition matrix represents a probability mass function and should therefore sum to one, the transition probability to the next state, according to the left-right banded model, follows as

    ai,i+1 = 1 − aii    (5)

The state transition matrix of a left-right banded model with four states is then

        | a11   1−a11   0       0     |
    A = | 0     a22     1−a22   0     |    (6)
        | 0     0       a33     1−a33 |
        | 0     0       0       1     |

When we use an average training sequence length of T = 20 and an HMM with N = 4 states, the self-transition probabilities are aii = 0.8.

D. Gesture spotting

A gesture consists of a sequence of hand locations or motion orientations, and can be classified by the HMM. However, before classification, the start point of the gesture has to be detected. Temporally segmenting a meaningful gesture from a sequence of hand locations is called 'gesture spotting' [9], [23]. Gesture spotting is a difficult challenge, mainly due to the spatial variability of hand gestures: the same gesture can have a very distinct appearance when executed by different users, or even when executed by the same user at different instances in time. Most gesture recognition techniques therefore employ a list of empirically defined constraints and rules to aid in the task. Yoon et al. [4] ask the user to intentionally stop moving his hands for several seconds, right before and after the gesture, in order to aid their gesture spotting algorithm. In [8], Elmezain et al. assume that each gesture ends with a straight line that can be used as a zero-codeword. Furthermore, as many gestures contain straight lines, a constant velocity assumption is made to avoid breaking up single gestures into their parts.
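The feature quantization of equation (1) and the initialization of equations (2)-(6) can be sketched as follows. This is a hedged sketch: the use of `arctan2` (a full-circle variant of the arctangent in equation (1)) and all function names are our own choices, not the paper's code.

```python
import numpy as np

N_BINS = 8  # number of quantization bins, as chosen empirically in the text

def orientation_symbol(p_prev, p_cur, n_bins=N_BINS):
    """Quantize the motion direction between two centroids into one of
    `n_bins` discrete observation symbols (cf. equation (1)).
    Mathematical axes are assumed (y up); image coordinates would need
    the sign of dy flipped."""
    dx = p_cur[0] - p_prev[0]
    dy = p_cur[1] - p_prev[1]
    theta = np.arctan2(dy, dx) % (2.0 * np.pi)   # full-circle angle
    return int(theta / (2.0 * np.pi / n_bins)) % n_bins

def init_banded_transitions(T, N):
    """Left-right banded transition matrix initialized via equations
    (2)-(6): self-transitions a_ii = 1 - N/T, final state absorbing."""
    a = 1.0 - N / T
    A = np.zeros((N, N))
    for i in range(N - 1):
        A[i, i] = a          # self-transition, equation (4)
        A[i, i + 1] = 1.0 - a  # next-state transition, equation (5)
    A[N - 1, N - 1] = 1.0    # final state only reaches itself
    return A

A = init_banded_transitions(T=20, N=4)  # a_ii = 0.8, as in the text
```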


For gesture spotting without empirically defined rules or specific constraints, the likelihood of a gesture model for a sequence of hand locations should be sufficiently distinct. While each HMM reports the likelihood that a given gesture explains the observed sequence, simply applying a fixed threshold to this likelihood often does not yield reliable results when trying to distinguish gestures from non-gestures. Our technique is based on Lee and Kim's method [6] and is similar to techniques used in speech recognition. In speech recognition, word spotting is usually accomplished by classifying the word sequence with a separate HMM, called a garbage model, that is trained with acoustic non-keyword patterns [5]. If the likelihood of this garbage model is higher than the likelihood of the normally trained models, the keyword is discarded.

Similarly, we propose the use of a so-called threshold model, based on the garbage model concept. The threshold model is a separately trained HMM that yields a likelihood value to be used as an adaptive threshold: a gesture is only recognized if the likelihood of the gesture model is higher than the likelihood of the threshold model. The threshold model is an ergodic model, obtained by copying all states from all gesture models and fully connecting them. The observation probability matrix of the threshold model is obtained by adding together the observation matrices of all HMM models, while its transition probability matrix takes over the self-transition values aii from the individual HMM transition matrices. The remaining transition probabilities are then calculated as

    aij = (1 − aii) / (N − 1),  with i ≠ j    (7)

However, this model only works well as long as the number of states is limited; otherwise, the model becomes unreliable in real-time applications. To alleviate this problem, we use a simplified ergodic model by defining two dummy states (a start and an end state). The threshold model's transition matrix contains the self-transition values from the gesture models, while all other values are zero, except for the transitions from the dummy start state and to the dummy end state. The transition from the start state S to state j is given by

    aSj = 1 / N    (8)

and the transition from state j to the end state E is given by

    ajE = 1 − ajj    (9)

It is important to note that the dummy states observe no symbol, so they are passed without time delay, which is an important factor in real-time applications.

Because of our proposed smoothing system, the spotting becomes more reliable, resulting in fewer incorrect start points. Without smoothing, start points would be recognized even when the hand is not moving, because of noise and other influences.

Furthermore, our adaptive smoothing method allows for gesture spotting that is invariant to the motion velocity. Slowly moving gestures result in many observed symbols, close to each other, while fast gestures result in fewer symbols, spaced farther from each other. By applying adaptive smoothing, this difference is minimized, resulting in a better spotting system, invariant to different speeds of movement.

In Figure 2, the concept of the spotting system is visualized. If the probability of the gesture model becomes larger than the probability of the threshold model, a start point has been found. If the probability of the gesture model then becomes lower than the probability of the threshold model, an end point has been detected.

Fig. 2. Schematic of the spotting system

The spotting algorithm uses a sliding window, the size of which was empirically set to three in our system.

E. Gesture recognition

After the start point has been detected, we accumulate the observations and evaluate the accumulated observation sequence at every step to find the end point; this process is shown in Figure 3. After the end point is detected, the probability of the full sequence under the gesture model is compared to the probability of the full sequence under the threshold model.

Fig. 3. Gesture recognition: observations are accumulated and evaluated at each step

Some gestures can be split into several shorter gestures; e.g., the letter-W gesture consists of two consecutive letter-V gestures. In order to be able to detect the longer gesture, we check whether any other model has a higher probability than the threshold model after the short model has ended. If not, the short model is chosen; otherwise, we discard the first end point and continue the end point detection process.

To better separate non-gestures from gestures, a prior is used after the end point detection, since a short gesture could otherwise be wrongly classified as correct. We base this prior on the gesture sequence length.
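The spotting scheme, comparing the gesture model against the threshold model over a sliding window to find start and end points, can be sketched as follows. The likelihood callables and the toy scoring functions are illustrative assumptions, not the paper's models.

```python
def spot_gesture(obs, gesture_loglik, threshold_loglik, window=3):
    """Sketch of the spotting scheme: a start point is declared when the
    gesture model outscores the threshold model over a small sliding
    window; the end point when, on the accumulated sequence, it drops
    below the threshold model again.

    `gesture_loglik(seq)` and `threshold_loglik(seq)` are assumed callables
    returning log-likelihoods of an observation sequence (hypothetical API).
    Returns (start_index, end_index); end_index is None if no end is found.
    """
    start = None
    for t in range(window, len(obs) + 1):
        recent = obs[t - window:t]
        if start is None:
            if gesture_loglik(recent) > threshold_loglik(recent):
                start = t - window              # start point found
        else:
            acc = obs[start:t]                  # accumulate since start point
            if gesture_loglik(acc) < threshold_loglik(acc):
                return start, t                 # end point found
    return start, None

# Toy scores: the "gesture" model prefers symbol 1; the threshold is flat.
g = lambda s: sum(0.0 if x == 1 else -2.0 for x in s)
h = lambda s: -0.5 * len(s)
res = spot_gesture([0, 0, 1, 1, 1, 1, 0, 0, 0], g, h)
```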


The prior is the current observation length divided by the average length of the training sequences. Finally, a minimum sequence length is defined in order to filter out short sequences that accidentally have a high probability.

IV. GESTURE CONTROLLED RADIO

The radio is built around an Si4703 FM tuner chip, which is capable of carrier detection and filtering, and of processing Radio Data Service (RDS) and Radio Broadcast Data Service (RBDS) information. Data such as the station name and song name can be retrieved and displayed to the user. The device has a 100 mW stereo amplifier, so it can only be used with earphones if no additional amplifier is available. Using this board we are able to pick up multiple radio stations, and the board is easy to control using I2C (Inter-IC).

To control the FM receiver we use an Arduino Mega 2560. The Arduino can communicate using I2C, but the voltage level of the Arduino is 5 V while that of the FM receiver chip is 3.3 V, so communication between the two requires a bidirectional voltage converter. The voltage converter circuit is built using an N-channel, enhancement-mode MOSFET for low-power switching applications, and three resistors. This circuit can be seen in Figure 4.

Fig. 4. Logic of the voltage level converter

The circuit works as follows. When the low side (3.3 V) transmits a logic one, the MOSFET is off and the high side sees 5 V through the pull-up resistor. When the low side transmits a logic zero, the MOSFET source pin is grounded, the MOSFET switches on, and the high side is pulled down to 0 V. When the high side (5 V) transmits a logic zero, the MOSFET substrate diode conducts, pulling the low side down to about 0.7 V, which turns the MOSFET on and pulls the low side fully down.

We need three level-converting circuits: one for the Serial Data Line (SDA), one for the Serial Clock (SCL), and a third for a reset pin on the FM receiver board. After the FM receiver picks up a signal, we need to amplify it before sending it to the speaker. The connection cable between the amplifier and the FM receiver also serves as the antenna. The radio receives its commands through a Bluetooth connection with the computer on which the gesture recognition algorithm is running. The simplified schematic concept can be seen in Figure 5.

Fig. 5. Schematic of the radio

V. RESULTS

To evaluate the algorithms we use five gestures, visualized in Figure 6. The meanings of the gestures are, respectively: volume up, volume down, previous station, next station, and close.

Fig. 6. The five gestures used for evaluation (left to right: volume up, volume down, previous station, next station, close)

The videos used for testing can be obtained freely for research purposes by sending a request to [email protected]. For the evaluation, we used 10 training samples per gesture and 86 test gestures in total. In the following paragraphs, we compare our algorithm with an implementation of the state-of-the-art method proposed by Elmezain et al. [9]. This method is an HMM gesture recognition system based on the orientation feature, similar to our technique. Furthermore, they use a depth sensor to facilitate the hand detection and segmentation process.

A. Observation training sequence

As an example, Table I lists a few observation sequences generated by our training algorithm. The first column shows the observation sequence, written as a series of consecutive observations (quantized orientations): the number 0 refers to a movement to the right, 1 to a movement right-up, 2 to a movement up, and so on. The sequences have different lengths and contain few errors in the observation orientations, which improves the generality of the trained classifier while reducing overfitting. The number of states used to represent the gesture is given in the second column; the last column gives the corresponding gesture action.

TABLE I. OBSERVATION TRAINING SEQUENCES FROM OUR PROPOSED ALGORITHM

Observation sequence                                  | # states | Gesture
5, 5, 5, 5, 5, 5, 7, 7, 7, 7, 7, 7, 7, 1              | 4        | Previous Station
4, 5, 5, 5, 5, 7, 7, 7, 7, 0                          | 4        | Previous Station
0, 7, 7, 7, 7, 0, 7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1  | 4        | Volume Down
7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 1, 2, 1, 1, 1, 1, 1  | 4        | Volume Down
4, 1, 1, 1, 1, 7, 7, 7, 7, 7, 7, 7, 7, 7              | 4        | Volume Up
5, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 7, 7, 7, 7, 7, 7, 7  | 4        | Volume Up

Table II lists a few observation sequences generated by the training algorithm proposed by Elmezain et al. These sequences also have different lengths, but contain many errors in the observation orientations, as can be seen in the first column. This method is therefore more prone to overfitting and needs more training data to achieve similar recognition results.

TABLE II. OBSERVATION TRAINING SEQUENCES FROM THE IMPLEMENTATION FROM THE LITERATURE

Observation sequence | # states | Gesture
1, 4, 5, 2, 5, 5, 5, 4, 6, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 6, 5, 7, 7, 7, 7, 6, 6, 7, 6, 7, 7, 7, 7, 7, 7, 0, 0, 6, 7, 0, 4, 7, 5, 7 | 4 | Previous Station
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 4, 4, 5, 4, 5, 7, 7, 7, 6, 0, 6, 7, 0, 6, 7, 7, 7, 7, 0, 0, 0, 4, 6, 6, 6, 7, 5, 6, 7, 1, 6, 0, 5, 4 | 4 | Previous Station
7, 4, 2, 3, 5, 2, 2, 1, 6, 3, 2, 2, 2, 2, 2, 2, 1, 2, 3, 3, 4, 5, 5, 4, 4, 4, 4, 2, 4, 4, 4, 3, 2, 4, 4, 4, 4, 3, 4, 4, 4, 2, 4, 5, 4, 7, 7, 2, 1, 0 | 4 | Volume Down
1, 7, 5, 1, 6, 0, 1, 0, 7, 4, 7, 7, 7, 6, 7, 6, 6, 1, 2, 7, 7, 2, 7, 6, 0, 1, 6, 1, 7, 7, 6, 7, 0, 1, 6, 7, 0, 6, 0, 3, 0, 5, 1, 5, 0, 2, 6, 2, 3, 0, 1, 1, 6, 1, 0, 2, 1, 0, 4, 1, 2, 0, 1, 4, 0, 6, 2, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 2, 0, 1, 2, 0, 1, 7, 1, 1, 1, 2, 1, 2, 1, 0, 5, 1 | 4 | Volume Down
3, 7, 4, 4, 4, 4, 3, 3, 5, 3, 4, 4, 4, 3, 0, 2, 2, 4, 2, 0, 2, 4, 6, 0, 3, 4, 3, 5, 1, 5, 3, 5, 2, 3, 0, 2, 3, 3, 2, 2, 3, 2, 2, 3, 1, 0, 1, 7 | 4 | Volume Up
5, 1, 1, 3, 0, 0, 0, 1, 1, 1, 0, 2, 1, 1, 1, 2, 1, 1, 1, 2, 2, 7, 2, 2, 3, 1, 0, 0, 1, 1, 2, 6, 5, 0, 5, 2, 1, 7, 6, 0, 7, 7, 0, 7, 0, 7, 7, 7, 7, 7, 6, 7, 7, 7, 7, 7, 7, 7, 7, 6, 7, 0, 7, 7, 5, 7 | 4 | Volume Up

B. Evaluation of sequences

To decide which algorithm to use for evaluating the observation sequences, we tested the recognition rate and the processing time of the Viterbi and the Forward algorithms. Both algorithms have the same recognition rate, but the Viterbi algorithm takes longer to process. The difference in processing time is caused by the Viterbi path calculation, which is only needed for decoding. This path calculation is not necessary for our purpose, so by reducing the Viterbi algorithm we effectively reduce the needed processing time. Table III lists the processing times of the Forward, the Viterbi, and our proposed reduced Viterbi algorithm. The processing time is the average time needed to evaluate a sequence of 40 observations with a four-state HMM.

TABLE III. PROCESSING TIME FOR OBSERVATIONS

Forward  | Viterbi  | Reduced Viterbi
0.044 ms | 0.142 ms | 0.042 ms

In most systems, Viterbi is used. The Viterbi algorithm only computes the maximum likelihood over all state sequences, instead of the sum over all possibilities like the Forward algorithm, and is therefore an approximation; for most applications, however, this is sufficient. The number of multiplications needed for the Forward algorithm is N²(T − 1) + NT. The Viterbi algorithm also needs N²(T − 1) + NT multiplications, but by moving the computation to log space, the multiplications in the Viterbi algorithm can be turned into N²T additions, whereas the Forward algorithm still needs scaling and therefore still needs multiplications. Taking this into account, we opted for the Viterbi algorithm. Furthermore, in the method of Elmezain et al., the Viterbi algorithm is used again to evaluate the sequence after the end point of the gesture is detected, whereas our system classifies at the detected end point directly.

C. Movement threshold

Figures 7 and 8 show the results for different movement thresholds, for the same hand size and distance from the camera. The graphs range from no movement threshold to a movement threshold of five pixels, i.e., the next point needs to lie at a Euclidean distance of more than five pixels from the current point in order to be taken into account. Figure 7 shows the sensitivity of the system for every movement threshold, i.e., the proportion of actual gestures that are correctly identified as such. A movement threshold of three pixels gives the best results.

Fig. 7. Effect of the movement threshold on the recognition rate

Fig. 8. Effect of the movement threshold on the false positive detections
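The reduced Viterbi evaluation described in Section V-B, log-space scoring without back-pointer bookkeeping, can be sketched as follows (the toy model values are illustrative only):

```python
import numpy as np

def reduced_viterbi_loglik(logA, logB, logpi, obs):
    """Maximum log-likelihood over all state paths, without the
    back-pointer bookkeeping used for decoding: in log space the
    per-step update consists of additions and maxima only.
    """
    delta = logpi + logB[:, obs[0]]
    for o in obs[1:]:
        # delta[j] = max_i (delta[i] + logA[i, j]) + logB[j, o]
        delta = np.max(delta[:, None] + logA, axis=0) + logB[:, o]
    return delta.max()

# Toy two-state model in log space (illustrative numbers only).
with np.errstate(divide="ignore"):  # log(0) = -inf is intended here
    logA = np.log(np.array([[0.8, 0.2],
                            [0.0, 1.0]]))
    logB = np.log(np.array([[0.9, 0.1],
                            [0.2, 0.8]]))
    logpi = np.log(np.array([1.0, 0.0]))
ll = reduced_viterbi_loglik(logA, logB, logpi, [0, 0, 1])
```

Because no backtracking path is stored, only the score of the best path is returned, which is all the spotting stage needs.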

D. Confusion matrix

The results for the tested video sequences, using a movement threshold of three pixels, are listed in the confusion matrix in Table IV. For the No Gesture class we note an 'x' in the table, because our test sequences contain random movements which are indeed recognized as No Gesture, but we cannot place a value on this. Roughly one third of the frames contain random movements. Our implementation has an overall recognition rate of 92%.

TABLE IV. CONFUSION MATRIX FOR THE FIVE SELECTED GESTURES

Actual class \ Predicted class | Previous Station | Volume Up | Volume Down | Next Station | Close | No Gesture
Previous Station               | 21               | 0         | 0           | 0            | 0     | 1
Volume Up                      | 0                | 14        | 0           | 0            | 0     | 2
Volume Down                    | 0                | 0         | 14          | 0            | 0     | 2
Next Station                   | 0                | 0         | 0           | 22           | 0     | 0
Close                          | 0                | 0         | 0           | 0            | 9     | 1
No Gesture                     | 0                | 0         | 0           | 0            | 0     | x

To evaluate the effect of gesture speed on the recognition rate, we decimated the sampling points by factors of two and three, obtaining gestures with a simulated speed increase of two and three times, respectively. The recognition rate did not change with the increase in speed.

The tests of the algorithm from the literature were performed using our hand detection algorithm in combination with the hand gesture detection algorithm of Elmezain et al. We found that their implementation only works with their own tracker: used with a different tracker and different test sets, the recognition rate is 0%. The following paragraphs explain why.

First, Elmezain et al. use a depth sensor in order to obtain accurate hand detection and segmentation. This greatly increases tracking stability, but consequently the gesture recognition system is not adapted to other trackers. We perform tracking and segmentation using only a 2D video frame, resulting in more noise in the obtained observation sequence. While our method is able to cope with inaccurate tracking results, the method described by Elmezain et al. is not.

Second, the more time the hand detection algorithm needs to detect a hand, the fewer data points can be processed in real time. This results in an indirect smoothing of the gesture, as can be seen in Figure 9, where the dots are hand coordinates received from the hand detection stage and the line is the orientation between two succeeding points. The same effect can be obtained by decimating the number of points received from the hand tracker. Our hand detection system takes an average of 250 ms to detect a hand on an i7 with 4 GB of RAM. Our proposed algorithm uses a movement threshold, which results in a stable system independent of the processing time needed for hand detection, whereas the system proposed by Elmezain et al. is not invariant to the processing speed and the number of observed data points.

Fig. 9. Indirect smoothing occurs when fewer data points are used

Third, slow-moving gestures lead to consecutive hand coordinates with a small Euclidean distance between them. If the hand detection is distorted in some way, the obtained coordinates, and consequently the orientation of the hand movement, will have a relatively large error for slow-moving gestures, as can be seen in Figure 10(a). For fast-moving gestures, an incorrect hand detection leads to less error in orientation, because of the larger distance between the data points, as can be seen in Figure 10(b).

Fig. 10. The impact of gesture speed on orientation accuracy: (a) slow gesture; (b) fast gesture

Our proposed algorithm uses a movement threshold, which results in a stable system independent of the movement speed of the gestures.

Fourth, the system proposed by Elmezain et al. assumes that the gesture moves continuously at a constant speed. Our test gestures have different speeds and sometimes contain a small pause during the gesture. Figure 11 shows the result of a gesture with a small pause during the movement; the dots are hand coordinates received from the hand detection stage, and the lines are the orientations between two succeeding points. While our algorithm ignores similar locations, the method proposed by Elmezain et al. cannot cope with such changes in velocity.

Fig. 11. Continuity of the gesture

The algorithm of Elmezain et al. works well for fluent, continuous gestures in combination with a depth sensor for hand detection and tracking. Our proposed algorithm uses a movement threshold, resulting in a stable system independent of the movement speed of the gestures or pauses within them.

VI. CONCLUSION

This paper proposes a complete, real-time gesture recognition system that is able to detect, track, and recognize gestures in unconstrained environments. For hand detection and tracking we use a previously designed algorithm that works with a cheap monocular camera. We suggest several enhancements to a Hidden Markov Model system to increase the robustness of the gesture recognition.

The newly developed system is able to recognize, in real time and with low computational cost, gestures that vary in shape and duration. Our proposed hand gesture recognition system has several enhancements that make it more stable and better usable with different trackers. For spotting and recognition, an adaptive threshold model is used in combination with an accumulative sliding window. The system is able to recognize fast gestures, slow gestures, gestures of different sizes, and gestures that are not continuous movements. During evaluation, 92% of the tested gestures were recognized.

As a proof of concept, we operate a radio using hand gestures. A PowerPoint presentation, or any other device or application, could likewise be controlled using gestures. A demonstration of the working system can be seen at http://www.youtube.com/watch?v=ErwUSSdnc4E. The system can thus be used as a human-computer interface to control hardware devices or software applications.

REFERENCES

[1] A. Erol, G. Bebis, M. Nicolescu, R. D. Boyle, and X. Twombly, "Vision-based hand pose estimation: A review," Computer Vision and Image Understanding, vol. 108, no. 1-2, pp. 52-73, 2007.
[2] S. Mitra and T. Acharya, "Gesture recognition: A survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 3, pp. 311-324, 2007.
[3] M.-H. Yang and N. Ahuja, "Recognizing hand gesture using motion trajectories," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 1999.
[4] H.-S. Yoon, J. Soh, Y. J. Bae, and H. S. Yang, "Hand gesture recognition using combined features of location, angle and velocity," Pattern Recognition, vol. 34, no. 7, pp. 1491-1501, 2001.
[5] L. Wilcox and M. Bush, "Training and search algorithms for an interactive wordspotting system," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 2, 1992, pp. 97-100.
[6] H.-K. Lee and J. Kim, "An HMM-based threshold model approach for gesture recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 10, pp. 961-973, 1999.
[7] L. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[8] M. Elmezain and A. Al-Hamadi, "A Hidden Markov Model-based isolated and meaningful hand gesture recognition," World Academy of Science, Engineering and Technology, vol. 31, 2008, pp. 394-401.
[9] M. Elmezain, A. Al-Hamadi, and B. Michaelis, "Hand trajectory-based gesture spotting and recognition using HMM," in Proc. 16th IEEE International Conference on Image Processing (ICIP), 2009, pp. 3577-3580.
[10] A. Kurakin, Z. Zhang, and Z. Liu, "A real time system for dynamic hand gesture recognition with a depth sensor," in Proc. 20th European Signal Processing Conference (EUSIPCO), 2012, pp. 1975-1979.
[11] M. Zobl, M. Geiger, K. Bengler, and M. Lang, "A usability study on hand gesture controlled operation of in-car devices," in Abridged Proceedings, HCI International, New Orleans, LA, USA: Lawrence Erlbaum Associates, pp. 166-168.
[12] G. ElKoura and K. Singh, "Handrix: animating the human hand," in Proc. 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '03), 2003, pp. 110-119.
[13] V. Spruyt, A. Ledda, and W. Philips, "Real-time, long-term hand tracking with unsupervised initialization," in Proc. 20th IEEE International Conference on Image Processing (ICIP), 2013, in press.
[14] T. Kadir and M. Brady, "Saliency, scale and image description," International Journal of Computer Vision, vol. 45, no. 2, pp. 83-105, Nov. 2001.
[15] G. Gomez, "On selecting colour components for skin detection," in Proc. 16th International Conference on Pattern Recognition (ICPR), vol. 2, 2002, pp. 961-964.
[16] M. Heikkilä, M. Pietikäinen, and C. Schmid, "Description of interest regions with local binary patterns," Pattern Recognition, vol. 42, no. 3, pp. 425-436, Mar. 2009.
[17] R. Ortiz, "FREAK: Fast Retina Keypoint," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 510-517.
[18] V. Spruyt, A. Ledda, and W. Philips, "Real-time hand tracking by invariant hough forest detection," in Proc. 19th IEEE International Conference on Image Processing (ICIP), 2012, pp. 149-152.
[19] V. Spruyt, A. Ledda, and S. Geerts, "Real-time multi-colourspace hand segmentation," in Proc. 17th IEEE International Conference on Image Processing (ICIP), 2010, pp. 3117-3120.
[20] V. Spruyt, A. Ledda, and W. Philips, "Sparse optical flow regularization for real-time visual tracking," in Proc. IEEE International Conference on Multimedia and Expo (ICME), 2013, in press.
[21] N. Liu, B. Lovell, P. Kootsookos, and R. Davis, "Model structure selection and training algorithms for an HMM gesture recognition system," in Proc. 9th International Workshop on Frontiers in Handwriting Recognition (IWFHR-9), 2004, pp. 100-105.
[22] L. J. Rodriguez and I. Torres, "Comparative study of the Baum-Welch and Viterbi training algorithms applied to read and spontaneous speech recognition," in Pattern Recognition and Image Analysis, ser. Lecture Notes in Computer Science, vol. 2652, Springer Berlin Heidelberg, 2003, pp. 847-857.
[23] M. Elmezain, A. Al-Hamadi, and B. Michaelis, "Real-time capable system for hand gesture recognition using hidden Markov models in stereo color image sequences," Journal of WSCG, vol. 16, pp. 65-72, 2008.

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Pedestrian Detection and Localization System by a New Multi-Beam Passive Infrared Sensor

Raphaël Canals, Peng Ying
PRISME Laboratory, University of Orleans
Orleans, France
[email protected]

Thierry Deschamps, Joseph Zisa
Technext Society
Cannes, France
[email protected]

Abstract—To reduce the considerable computing power required by an image-processing solution in the framework of indoor pedestrian positioning, a new sensor based on passive infrared technology, the SPIRIT (Smart Passive InfraRed Intruder sensor, locator and Tracker), was designed. This technology, combined with specific low-power electronics and innovative optics, provides the angular position of the person relative to the sensor attitude. Detection and angular positioning are thus optical and require only a tiny embedded computing power. In this study, we propose a 3D geometric model of the sensor which, by projection, yields a 2D cartography of the beam boundaries. By considering the successive numbers of the beams activated by the person during her displacement, the corresponding timing, the distances between beam boundaries, and assumed minimal and maximal walking speeds, it is possible to define the probable directions of origin and hence the plausible pathways as the person moves.

Keywords—pyroelectric infrared sensor (PIR); multi-beam; multi-boundary; angular positioning; positioning refinement.

I. INTRODUCTION

Techniques of human tracking aim to detect a person's presence and then determine his position in space as he moves. They must be able to manage complex interactions and dynamics in sequences, such as occlusions, relative movement of the person with respect to the sensor, and changes in lighting. The versatile range of applications of tracking extends from human-machine interaction via video communication with compression, to computer vision, robotics, surveillance, industrial automation and other specific applications [1]-[6]. Most vision-based approaches to moving-object detection and tracking require intensive real-time computation and expensive hardware [2], [4], [5], [7], [8]. RFID technology can support indoor and outdoor positioning, but its proximity and absolute-positioning requirements imply a heavy infrastructure [9]-[11]. RF technology also permits localization by combining the received signal strength and link quality [16], [17]; because of its imprecision when used alone, it is combined with another technology such as a pyroelectric one [12]. Pyroelectric infrared sensors (PIR) detect human motion thanks to their sensitivity to changes in heat flux. They are commonly used because of their low cost, non-invasive character, low power consumption and low detectability. But their electronics and optics make them binary, with a large field of view (FOV) and low resolution [13]. This is why all research works using this technology implement a wireless network of PIR sensors in order to criss-cross the coverage area fairly accurately, associating a data supervision and localization processing algorithm with it [14], [15], [18], [19], [21], [22], or even a sensor-fusion algorithm when combining it with another technology [12], [20].

In this context, a new PIR sensor has been developed to counteract this added complexity. The SPIRIT sensor (Smart Passive InfraRed Intruder sensor, locator and Tracker) is thus introduced: thanks to its coded Fresnel lens array, it constitutes a multi-boundary sensor in 3D+time that provides the angular coordinates of the person with a resolution of 4° and allows her temporal tracking in the coverage zone.

This article presents the first stage of the study, with the SPIRIT used as an isolated sensor to track a single human target. The detector has, however, been designed to be network-capable, so, with a view to determining the exact position of the person, a solution would merely employ several networked SPIRITs installed in such a way that their beams intersect, furnishing many precise positioning points. As for the issue of detecting, locating and tracking multiple persons, a chronological data-management algorithm should make it possible to determine the presence of multiple humans and their separate coherent pathways.

The remainder of this paper is organized as follows. Section II describes the SPIRIT sensor, its 3D modeling and its 2D projection on the ground. In Section III, we present our simulation and experimental positioning results, and discuss the strengths and weaknesses of our system. A conclusion with outlook finalizes this article in Section IV.

II. THE SPIRIT SENSOR

A. General presentation
The SPIRIT sensor uses the reliable and inexpensive technology of passive infrared detectors (PIR), but its specific internal signal processing, coupled with innovative and patented optics [23], [24], identifies the beam having seen the person in its FOV (Fig. 1). It is passive and therefore undetectable and harmless to people, animals and the environment.


Each beam has its own identification number. This makes it possible to track the target displacement and to position it continuously in the sensor FOV, even in total darkness. The FOV of the SPIRIT module, with an aperture of 60°, is segmented into 15 viewing beams, i.e. the solid angles within which any moving temperature source is detected. Beam detection is optical: the embedded computing power is therefore minimal, which allows a low-cost product with a total average power consumption of about 150 µA in its low-power version, guaranteeing a 5-year autonomy on a 3V lithium battery. The SPIRIT is wired via RS485/RS232 but can be equipped with a Wavenis radio transmitter by Coronis to facilitate its integration.

Pyroelectric sensor signals are proportional to the change in temperature of the crystal rather than to the ambient temperature. To aid motion sensing, a specific Fresnel lens array has been designed so that the visible space is divided into zones; detection is greatly improved by creating separate visibility regions. Several lenses of the Fresnel lens array contribute to the creation of a single cone of visibility on a pyroelectric sensor, so one SPIRIT beam consists of two sub-beams collecting information from several lenses of the array. The positive and negative sub-beams correspond to the two sensitive elements of a dual-element detector, and the gap between them is due to the insensitive region of the sensor. Consequently, in order to be detected, a person must pass through the two sub-beams, in one direction or the other, and her detection is established after the rise and the fall (or vice versa) of the detector response. Fig. 2 shows the visibility pattern of the Fresnel lens array. The array is made of a light-weight, low-cost plastic material with good transmission characteristics in the 8 to 14 µm wavelength range.

The lens array has a horizontal FOV of 60° (top view) and a lateral FOV of 1.8°. The width of one beam is 2° and the angular separation between beams is 2°, which places the axes of symmetry of adjacent beams 4° apart. A person walking along the path indicated and crossing the SPIRIT FOV is thus detected and her successive angular positions are determined. The SPIRIT sensor is therefore a multi-boundary detector that should allow us to locate the person who crosses its beams.

B. SPIRIT Modelling
To simplify the calculations and explanations, we represent the horizontal and vertical FOV of the beams by their axes of symmetry, which we call SPIRIT boundaries (Fig. 3). We consider a single person to be located, whose height is denoted h. The horizontal planes have the equations z = 0 and z = h respectively. The SPIRIT sensor is positioned on the z-axis at a height H and its optical axis makes an angle α0 with the z-axis. Each boundary F is marked with two subscripts that are two angles: the first, αi, indicates the SPIRIT plane, and the second gives its orientation in this same plane. The boundaries are symmetrical with respect to the bisector of the total FOV. The points I are the intersections of the boundaries with a plane parallel to xOy. These points carry the same subscripts as the boundary to which they belong, plus a superscript corresponding to the height of the plane of intersection.

Figure 1. Sensor module called SPIRIT, its optics and beams.

Figure 2. Characterization of the FOV of the Fresnel lens array.
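From the beam layout described above (15 beams, axes 4° apart within a 60° FOV), the nominal axis directions can be enumerated and a detection azimuth mapped back to a beam number. The centring of the fan on the FOV bisector is our assumption for this sketch:

```python
# Nominal axis azimuths (deg) of the 15 SPIRIT beams: axes 4 deg apart,
# centred on the FOV bisector (the centring convention is our assumption).
axes_deg = [-28 + 4 * i for i in range(15)]

def beam_of(azimuth_deg):
    # Identify which beam axis a detection azimuth is closest to (beams 1..15);
    # return None outside the 60 deg field of view.
    if abs(azimuth_deg) > 30:
        return None
    return min(range(15), key=lambda i: abs(axes_deg[i] - azimuth_deg)) + 1
```

With this convention, a detection on the bisector maps to the central beam (beam 8), and the two extreme axes sit at ±28°, inside the 60° aperture.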

The basis is to determine the coordinates of the intersection points I of the boundaries with the horizontal plane z = h passing through the top of the person of height h moving in the plane xOy. The coordinates of I^h_{α0, π/4+nφ0} are:

I^h_{α0, π/4+nφ0} = ( (√2/2)(H − h) tan α0 (1 − tan nφ0), (√2/2)(H − h) tan α0 (1 + tan nφ0), h )    (1)

where n is the signed number of the beam relative to the bisector. So, taking the example n = 1, the distance between the two points I^h_{α0, π/4} and I^h_{α0, π/4+φ0} is equal to:

| I^h_{α0, π/4} I^h_{α0, π/4+φ0} | = (H − h) tan α0 tan φ0    (2)

If the person is detected at the point I^h_{α0, π/4} at time t = t0 and moves along a straight line to the point I^h_{α0, π/4+φ0}, which she reaches at time t = t0 + Δt, her speed v will be:

v = (H − h) tan α0 tan φ0 / Δt    (3)
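As a numeric sketch of Eqs. (1)-(3), the spacing between two adjacent boundary points and the resulting speed estimate can be computed as follows. The values of H, h, α0, φ0 and Δt below are illustrative assumptions, not measurements from the paper:

```python
import math

def boundary_point(H, h, alpha0, phi0, n):
    # Intersection I^h of boundary n with the plane z = h (Eq. 1).
    r = (H - h) * math.tan(alpha0)
    x = (math.sqrt(2) / 2) * r * (1 - math.tan(n * phi0))
    y = (math.sqrt(2) / 2) * r * (1 + math.tan(n * phi0))
    return (x, y, h)

H, h = 2.5, 1.8                                   # sensor / person heights (m), assumed
alpha0, phi0 = math.radians(80), math.radians(4)  # tilt and beam-axis separation

p0 = boundary_point(H, h, alpha0, phi0, 0)
p1 = boundary_point(H, h, alpha0, phi0, 1)
spacing = math.dist(p0, p1)                       # numeric distance between crossings
spacing_closed = (H - h) * math.tan(alpha0) * math.tan(phi0)   # Eq. (2)

dt = 0.25                                         # assumed time between crossings (s)
v = spacing_closed / dt                           # walking speed, Eq. (3)
```

The numeric spacing obtained from Eq. (1) agrees with the closed form of Eq. (2), which is how the two equations were cross-checked here.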

Considering the vertical plane that contains I^0_{α0, π/4+φ0}, O, H and P(I^h_{α0, π/4+φ0}), the projection of I^h_{α0, π/4+φ0} on the plane xOy, a person of height h, wherever she comes from, will be detected when she crosses the boundary F_{α0, π/4+φ0} at the point I^0_{α0, π/4+φ0} and will no longer be detected beyond the point P(I^h_{α0, π/4+φ0}) (Fig. 4). These two points are defined by the two distances below; the person is detected as long as she moves between them:

OP(I^h_{α0, π/4+φ0}) = ωI^h_{α0, π/4+φ0} = (H − h) tan α0 / cos φ0    (4)


Figure 5. 2D projection of the SPIRIT boundaries on the ground.

OR is the distance that separates the SPIRIT sensor from the person on the ground plane. As her angular position is known, the person can be located. But since we do not know her speed, we propose to define a minimum walking speed vmin = 3.5 km/h and a maximum one vmax = 5.5 km/h [25], which allows us to delimit, between the beam projections, an area bounded by two parallel lines in which the person is localized.
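The triangulation behind OR can be sanity-checked numerically: simulate a straight walk at constant speed across three projected boundaries, then recover the distance to the most recent crossing from the two crossing intervals alone. Everything below (symbol names, helper names and the closed form itself) is our own reconstruction under the straight-path, constant-speed assumption, not code from the paper:

```python
import math

def sensor_range(t1, t2, v, phi_n, phi_n1):
    # Distance from the sensor to the third (most recent) boundary crossing,
    # given crossing intervals t1, t2, walking speed v, and the projected
    # angles phi_n, phi_n1 between the three crossed boundaries.
    s, q = v * t1, v * t2
    num = (s + q) * q * math.sin(phi_n)
    den = math.sqrt(q**2 * math.sin(phi_n)**2 + s**2 * math.sin(phi_n1)**2
                    - 2 * s * q * math.sin(phi_n) * math.sin(phi_n1)
                        * math.cos(phi_n + phi_n1))
    return num / den

# Forward simulation: boundaries are rays from the origin at azimuths
# 1.2, 1.0 and 0.8 rad; the person walks the line y = 2 in +x at v = 1.4 m/s.
v, y = 1.4, 2.0
xs = [y / math.tan(a) for a in (1.2, 1.0, 0.8)]   # crossing abscissae, in order
t1 = (xs[1] - xs[0]) / v
t2 = (xs[2] - xs[1]) / v
r = sensor_range(t1, t2, v, 0.2, 0.2)
```

In this simulated geometry the third crossing lies at 2/sin(0.8) m from the sensor, and the closed form recovers that distance from the timing alone.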

Figure 3. Simplified geometrical characterization of the SPIRIT.

III. RESULTS

The detection and positioning system was implemented in a 20 x 15 m hall. We also ran numerical simulations to investigate the sensor deployment and the positioning precision, and to evaluate the model.

Figure 4. SPIRIT boundaries.

OI^0_{α0, π/4+φ0} = H tan α0 / cos φ0    (5)

C. 2D SPIRIT boundaries projection
The previous equations are defined with respect to the 3D model of the SPIRIT. We must now project this model onto the ground plane z = 0. The axes of symmetry of the beams, which were all separated by the same angle φ0 = 4°, will now be separated by varying angles; the projected axis n lies at an angle βn from the central axis, with the relation:

βn = tan⁻¹( tan(nφ0) / sin α0 )    (6)
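Eq. (6) can be evaluated directly; the sketch below (the choice of beam indices n = 1..7 on one side of the bisector is our assumption drawn from the 15-beam layout) shows how the projection spreads the axes unevenly on the ground:

```python
import math

def projected_angle(n, phi0, alpha0):
    # Eq. (6): angle beta_n of projected boundary axis n on the ground plane.
    return math.atan(math.tan(n * phi0) / math.sin(alpha0))

phi0 = math.radians(4)      # 4 deg between adjacent beam axes
alpha0 = math.radians(80)   # sensor tilt from the vertical, assumed
betas = [projected_angle(n, phi0, alpha0) for n in range(1, 8)]
# Consecutive separations on the ground are no longer all equal:
seps = [b - a for a, b in zip([0.0] + betas[:-1], betas)]
```

Because sin α0 < 1, each projected angle βn is slightly larger than nφ0, and the separations grow away from the central axis.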

Based on Figs. 3 and 4, we obtain the projection of the SPIRIT boundaries on the plane z = 0 (Fig. 5). The path of the person is assumed to be piecewise straight at steady speed, and to cross three successive boundaries. Let us suppose the maximal person height h = hmax; the thick black segments represent the 2D projection of the portions of the beam boundaries which have detected the person. Knowing the distances d1 = v·t1 and d2 = v·t2, and applying the geometric rules, the distance OR is determined by:

OR = v (t1 + t2) t2 sin ∅n / √[ t2² sin²∅n + t1² sin²∅n+1 − 2 t1 t2 sin ∅n sin ∅n+1 cos(∅n + ∅n+1) ]    (7)

with t1 and t2 the time intervals between the boundary crossings.

A. Simulation results
The suitability of our SPIRIT boundary model and the accuracy of projecting this model onto the ground plane were simulated. The sensor can be installed at any location in the room, but its installation height is usually just underneath the ceiling (2.5 m), with an angle α0 of about 70-80°. For now, the path of the person is straight and at constant speed, between vmin and vmax; its starting and final points are configurable. The person is basically represented by a cylinder and we suppose that the maximum height hmax is 1.8 m. Therefore the person crossing the SPIRIT beams can only be 2D-located within the green ground boundaries (room size: 20 x 15 x 2.5 m) (Fig. 6). In Fig. 7, the person crosses all the SPIRIT beams. For easy viewing, the SPIRIT beams are not represented. The ground boundaries are all activated in the area determined using the speed limits. This area is nearly 3.85 m wide and its limits are parallel to the person's path: her direction is therefore known. And if the path were not straight and/or the speed not constant, she would still be located within this area.

When some beams have not detected the person, it is possible to reduce the width of the area and thus enhance the positioning. In Fig. 8, the last beam has not been activated because its second sub-beam has not been crossed. The area is 3.757 m wide between the two limits and, in the projected beam, 3.887 m wide at departure and 4.693 m wide at arrival. We find that the maximum area limit falls outside the 2D boundaries: here the blue line then becomes the maximum limit, and the area width is reduced as the person moves (width of 3.752 m at departure and 1.834 m at arrival in the projected beam). Moreover, if we consider the path straight and the speed constant, this area width can be further reduced:



Figure 6. Simulation of the 3D and 2D SPIRIT models (SPIRIT(10,0); α0=80°).

Figure 8. Simulation example with some beams not crossed by the person (SPIRIT(10,0); α0=75°; departure (0,5.5); arrival (20,10.5); v=4.2 km/h).

Figure 7. Simulation example with all beams crossed by the person (SPIRIT(10,0); α0=80°; departure (0,5.5); arrival (20,10.5); v=4.0 km/h).

Figure 9. Simulation example with some beams not crossed by the person (SPIRIT(10,0); α0=80°; departure (5,14.5); arrival (20,4.5); v=4.2 km/h).

- with the orange maximum limit, at an early stage of the detection, since the two limits would be parallel and the two sub-beams of the last boundary crossed by the person would be activated;

- with the orange minimum limit, after the last detection, since the latter beam has not been activated. In this case, the limit can cross part of the beam boundaries but must not cross the projection of the second sub-beam.

In this case, the area width would only be 0.153 m.

A similar example is given in Fig. 9. The area is 4.441 m wide between the two limits. Similarly, it is possible to reduce the area in which the person is located and thus to obtain a better localization: the width, initially 6.964 m at departure and 4.673 m at arrival, is equal to 2.606 m at departure and 4.673 m at arrival after modification. If the path is straight and the speed constant, the area is bounded by the two orange limits (width = 0.769 m); if no condition is defined, the person may be located anywhere in the area, but with a restriction at departure since the third boundary has not been activated. In addition, the person can of course not be located near one limit at one instant and close to the other limit at the next: her speed would exceed the maximum allowed speed.

B. Experimental Results
The SPIRIT sensor was equipped with a camera in order to overlay on the acquired images the position of the person derived from the detector data. The SPIRIT integrates a microcontroller performing acquisition, signal processing and communication tasks; pyroelectric data acquisition is done every 10 ms.

Many experiments were needed to determine the boundaries on the ground: a 1.82 m-tall person walked at a short distance

and at a long distance from the SPIRIT, following a path orthogonal to the symmetry axis of the sensor, in one direction and the other, with a view to determining some points of all the boundaries. Similarly, the person crossed the sensor FOV diagonally to obtain the detection limits (Fig. 10). Despite some obstacles and implementation problems, we tried to approximate the configuration of Fig. 7: a line was plotted on the ground and the person trained to walk along it at a constant speed with a pedometer. Data obtained by simulation and by experiment are given in Table I. It may first be noted that the distance for the two last boundaries is not defined, because our method needs three detection times to calculate the distance between the SPIRIT and the person. Secondly, there are some differences between the simulated and the experimental data. This can be due to the fact that it is not easy to walk at the correct speed and to keep this speed constant, but also to the fact that the sensor attitude settings must be precise. Moreover, each measured time introduces an additional error into the distance formula. Finally, some SPIRIT manufacturing shortcomings may exist in the electronics, the optics and the geometry, a point we observed during the experiments since some ground boundaries were slightly offset from the others.

IV. CONCLUSION

In this paper, a new PIR sensor has been presented for human detection and positioning. The numerical infrared technology implemented is reliable and makes the SPIRIT an inexpensive position-analysis element, even in total darkness. We modelled the sensor characteristics in order to simulate its behaviour and to compare simulation data with real ones. The data obtained from the SPIRIT sensor allow us to extract position information as well as the direction of motion. Some small differences appear between simulations and experiments, but they have no large incidence on the results. Our modelling might be completed by taking into account the minimum and maximum distances between two boundaries in order to restrict the possible motion directions and so obtain a better positioning.

A refined position can also be obtained either by using several networked SPIRITs installed in such a way that their beams intersect, yielding precise locations, or by performing complex geometric processing under hypotheses; this second solution, however, appears to require internal analogue SPIRIT information other than the beam number to reach positioning quality. Our future work includes multiple-person detection and positioning, first with a single SPIRIT and then with networked ones.

Figure 10. Image with projected-beam symmetry axis overlay.

TABLE I. SIMULATED AND EXPERIMENTAL DETECTION DISTANCES
(distances in metres between the person and the sensor)

Projected     Simulation            Experiment (computed)
beam no.      Min.      Max.        Min.      Max.
1             6.894     10.833      6.333     9.951
2             6.791     10.673      6.125     9.625
3             6.726     10.569      6.128     9.629
4             6.695     10.521      6.003     9.433
5             6.699     10.527      6.066     9.532
6             6.738     10.588      6.215     9.767
7             6.812     10.705      7.264     11.416
8             6.925     10.882      7.414     11.65
9             7.078     11.123      7.21      11.332
10            7.277     11.435      7.693     12.089
11            7.527     11.828      8.102     12.731
12            7.835     12.313      8.27      12.995
13            8.214     12.908      8.977     14.106
14 & 15       Not defined           Not defined

REFERENCES

[1] P. Turaga, R. Chellappa, V. S. Subrahmanian, and O. Udrea, "Machine recognition of human activities: a survey", IEEE Trans. Circuits Syst. Video Technol., 18(11), pp. 1473-1488, 2008.
[2] S. Saeedi, A. Moussa, and N. El-Sheimy, "Vision-aided context-aware framework for personal navigation services", Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXIX-B4, pp. 231-236, 2012.
[3] X. Ji, and H. Liu, "Advances in view-invariant human motion analysis: a review", IEEE Trans. Syst. Man Cybern. Appl., 40(1), pp. 13-24, 2010.
[4] R. Poppe, "A survey on vision-based human action recognition", Image Vis. Comput., 28(6), pp. 976-990, 2010.
[5] C. Hide, T. Moore, and T. Botterill, "Low cost vision-aided IMU for pedestrian navigation", Journal of Global Positioning Systems, vol. 10, no. 1, pp. 3-10, 2011.
[6] H. Koyuncu, and S. H. Yan, "A survey of indoor positioning and object locating systems", IJCSNS Int. J. Comput. Sci. Netw. Security, vol. 10, no. 5, pp. 121-128, May 2010.
[7] W. Elloumi, S. Treuillet, and R. Leconge, "Real-time estimation of camera orientation by tracking orthogonal vanishing points in videos", 8th Int. Conf. on Computer Vision Theory and Applications (VISAPP 2013), Barcelona, Spain, February 2013.
[8] L. Ruotsalainen, H. Kuusniemi, and R. Chen, "Heading change detection for indoor navigation with a smartphone camera", International Conference on Indoor Positioning and Indoor Navigation (IPIN), 21-23 September 2011.
[9] Zebra Technology, available online: http://www.wherenet.com/, 2008.
[10] M. Bouet, and A. L. dos Santos, "RFID tags: positioning principles and localization techniques", Wireless Days, WD '08, 1st IFIP, pp. 1-5, 2008.
[11] C. Hekimian-Williams, B. Grant, L. Xiuwen, Z. Zhenghao, and P. Kumar, "Accurate localization of RFID tags using phase difference", RFID, IEEE International Conference on, pp. 89-96, 2010.
[12] R. C. Luo, and O. Chen, "Wireless and pyroelectric sensory fusion system for indoor human/robot localization and monitoring", IEEE/ASME Transactions on Mechatronics, vol. 18, no. 3, pp. 845-853, June 2013.
[13] S. B. Lang, "Pyroelectricity: from ancient curiosity to modern imaging tool", Phys. Today, 58(8), pp. 31-36, 2005.
[14] Q. Hao, D. J. Brady, B. D. Guenther, J. B. Burchett, M. Shankar, and S. Feller, "Human tracking with wireless distributed pyroelectric sensors", IEEE Sensors Journal, vol. 6, no. 6, pp. 1683-1695, 2006.
[15] B. Shen, and G. Wang, "Object localization with wireless binary pyroelectric infrared sensors", Proceedings of 2013 Chinese Intelligent Automation Conference, Lecture Notes in Electrical Engineering, vol. 255, pp. 631-638, 2013.
[16] A. Catovic, and Z. Sahinoglu, "The Cramer-Rao bounds of hybrid TOA/RSS and TDOA/RSS location estimation schemes", IEEE Commun. Letters, 8(10), pp. 626-628, 2004.
[17] K. Pahlavan, and X. Li, "Indoor geolocation science and technology", IEEE Commun. Mag., 40(2), pp. 112-118, 2002.
[18] R. Hsiao, D. Lin, H. Lin, S. Cheng, and C. Chung, "Indoor target detection and localization in pyroelectric infrared sensor networks", Proc. 8th IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS 2011), Singapore, Aug. 2011.
[19] Q. Hao, F. Hu, and Y. Xiao, "Multiple human tracking and identification with wireless distributed pyroelectric sensor systems", IEEE Systems Journal, vol. 3, no. 4, pp. 428-439, December 2009.
[20] M. Magno, F. Tombari, D. Brunelli, L. Di Stefano, and L. Benini, "Multi-modal video surveillance aided by pyroelectric infrared sensors", Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications (M2SFA2 2008), Marseille, France, 2008.
[21] N. Li, and Q. Hao, "Multiple human tracking with wireless distributed pyro-electric sensors", Proc. SPIE 6940, Infrared Technology and Applications XXXIV, 694033, May 1, 2008; doi:10.1117/12.777250.
[22] H. Kim, K. Ha, K. Lee, and K. Lee, "Resident location-recognition algorithm using a Bayesian classifier in the PIR sensor-based indoor location-aware system", IEEE Trans. Syst. Man Cybern. Part C, 39(2), pp. 240-245, 2009.
[23] J. Zisa, and Hymatom, "Method and system for detecting an individual by means of passive infrared sensors", European patent no. EP06831165, Sep. 2008.
[24] J. Zisa, and B. Taillade, "Method and system for detecting an individual by means of passive infrared sensors", US patent no. US 2009/0219388 A1, Sep. 2009.
[25] C. Willen, K. Lehmann, and K. Sunnerhagen, "Walking speed indoors and outdoors in healthy persons and in persons with late effects of polio", Journ. Neurol. Res., 3(2), pp. 62-67, 2013.

- chapter 10 -

Components, Circuits, Devices & Systems

2013 International Conference on Indoor Positioning and Indoor Navigation, 28th-31st October 2013

Study of rotary-laser transmitter shafting vibration for workspace measurement positioning system

Zhexu Liu, Jigui Zhu, Yongjie Ren, Jiarui Lin
State Key Laboratory of Precision Measuring Technology and Instruments
School of Precision Instrument and Opto-Electronics Engineering, Tianjin University
Tianjin, China
[email protected]

Abstract—The wMPS (workspace Measurement Positioning System) is a novel measurement system for indoor large-scale metrology, composed of a network of rotary-laser transmitters. The stability of the transmitter's rotating head is a key factor in the measurement accuracy. This article studies the shafting vibration of the transmitter by dividing it into three independent vibration forms: axial vibration, radial vibration and yaw vibration. The transfer functions between these vibrations and the measurement accuracy are also presented.

Keywords—indoor large-scale metrology; wMPS; shafting vibration; rotor dynamics

I. INTRODUCTION

As science and technology develop and the needs of large-scale manufacturing and assembly increase, coordinate measurement systems combining multiple angle measurements have been established to achieve large-scale precision measurement, such as theodolite networks, digital photogrammetry, iGPS and wMPS [1]. The wMPS (workspace Measurement Positioning System) [2] is a novel measurement system for indoor large-scale metrology and has been successfully demonstrated in industry thanks to its high accuracy, automation and multitasking capability. The wMPS consists of rotary-laser transmitters and optical receivers. Measurement is achieved by the receivers capturing the scanning angles of the rotary-laser planes emitted from the transmitters. In recent years, the performance and applications of the wMPS have been discussed in a fair amount of detail [3-4]. Moreover, its angular survey performance has also been discussed in [5].

Considering the working principle of the wMPS, it is clear that the stability of the transmitter's rotating head is a key factor in the measurement accuracy, and that this stability is primarily determined by the shafting vibration of the transmitter. However, there has been very little work on this subject. In order to improve the accuracy, this article analyses the measurement error of the wMPS as influenced by the shafting vibration. Based on the shafting structure of the transmitter, this paper studies the shafting vibration and divides it into three independent vibration forms: axial vibration, radial vibration and yaw vibration. The character of each form is discussed to establish the relationship between the shafting vibration and the accuracy of the wMPS. Following the analysis of the three forms, the transfer function between each vibration form and the measurement error of the wMPS is constructed. We can then improve the shafting structure of the transmitter, guided by the analysis of the transfer functions, for higher measurement precision.

II. WMPS TECHNOLOGY

The wMPS is a laser-based measurement device for large-scale metrology applications, currently under development at Tianjin University, China. As shown in Fig. 1, the wMPS is composed of transmitters, receivers, signal processors and a terminal computer.

Figure 1. wMPS configuration

The transmitter consists of a rotating head and a stationary base. With two line-laser modules fixed on the rotating head and several pulsed lasers mounted around the stationary base, the transmitter generates three optical signals: two fan-shaped planar laser beams rotating with the head, and an omnidirectional laser strobe emitted synchronously by the pulsed lasers when the head rotates past a predefined position in every cycle. The receiver captures the three signals and converts them into electrical signals through a photoelectrical

978-1-4673-1954-6/12/$31.00 ©2012 IEEE


sensor. The signal processor distinguishes between the electrical signals obtained from different transmitters and extracts the information of the laser planes from them. Subsequently, this information is sent to the terminal computer to calculate the coordinates of the receiver.

A. Axial vibration
Considering one of the rotating laser planes and assuming that the tilt angle between the plane and the rotating shaft is γ, the influence of the axial vibration is shown in Fig. 3.

During measurement, the transmitters are distributed around the working volume and the relative position relationship between them is pre-calibrated through bundle adjustment [6]. They rotate at different speeds so that the signals they emit can be differentiated. When the laser planes emitted from at least two transmitters intersect at a receiver, the scanning angles of the laser planes are known exactly from the information captured by the receiver. The spatial angles of the receiver can then be obtained and its coordinates calculated through the triangulation algorithm.

Figure 3. Influence of the axial vibration

Figure 2. Schematic of the scanning angle measurement

As mentioned previously, the wMPS is based on the scanning angle measurement, whose schematic is illustrated in Fig. 2. As shown in Fig. 2(a), the initial position is defined as the position at which the head of the transmitter reaches a predefined orientation and the pulsed lasers emit the synchronous laser strobe. At the initial position, the receiver captures the synchronous laser strobe and records the initial time. Rotating with the head, the two laser planes scan the measurement space around the transmitter. As shown in Fig. 2(b), when a laser plane sweeps past the receiver, this time is also recorded. Assuming that the angular velocity of the rotating head is ω, the scanning angle of the laser plane from the initial position to the position where it passes through the receiver can then be obtained [7].
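A minimal sketch of this measurement chain follows. The positions, angular velocity and helper names below are illustrative assumptions, not wMPS parameters: each transmitter converts the time between the synchronization strobe and the laser-plane passage into a scanning angle θ = ω·Δt, and two such bearings from pre-calibrated transmitter positions are intersected to locate the receiver in the plane:

```python
import math

def scanning_angle(omega, t_strobe, t_sweep):
    # Scanning angle swept between the synchronization strobe and the
    # moment the laser plane passes the receiver.
    return omega * (t_sweep - t_strobe)

def intersect_bearings(p1, a1, p2, a2):
    # Intersection of two bearing rays p_i + t * (cos a_i, sin a_i).
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]          # Cramer's rule
    t = ((x2 - x1) * (-d2[1]) - (-d2[0]) * (y2 - y1)) / det
    return (x1 + t * d1[0], y1 + t * d1[1])

# Two transmitters at known, pre-calibrated positions (assumed values):
tx1, tx2 = (0.0, 0.0), (10.0, 0.0)
receiver = (4.0, 3.0)
# Bearings each transmitter would report for this receiver:
a1 = math.atan2(receiver[1] - tx1[1], receiver[0] - tx1[0])
a2 = math.atan2(receiver[1] - tx2[1], receiver[0] - tx2[0])
estimate = intersect_bearings(tx1, a1, tx2, a2)
```

The forward step (compute the bearings a known receiver would produce) doubles as a self-test: intersecting them must return the original receiver position.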

As shown in figure 3, the coordinate frame of the transmitter is defined as O-XYZ. The rotation shaft of the two laser planes is defined as axis Z. The origin O is the intersection of the laser plane and axis Z. Axis X lies in the laser plane at the initial time (the time when the pulsed lasers emit the omnidirectional laser strobe) and is perpendicular to axis Z, and axis Y is determined according to the right-hand rule. If there were no axial vibration, the laser plane would pass through the receiver R after sweeping through an angle α. At this time, the intersection line of the laser plane and OXY is OP, and the nominal scanning angle α is ∠XOP. When axial vibration happens, the transmitter coordinate frame changes to O'-X'Y'Z' and the laser plane passing through the receiver changes as well. As shown in figure 3, the intersection line of the laser plane and O'X'Y' is O'P'. Projecting O'P' onto OXY, we obtain OP0'. The laser plane O'RP' can then be treated equivalently as the plane that intersects OXY at line OP0'.

III. SHAFTING VIBRATION ANALYSIS

As described previously, the wMPS is essentially based on the scanning angle measurement. In practice, the accuracy of the scanning angle is impacted by many factors, such as the rotating stability of the rotating head, the uniformity of the transmitter's rotating speed, and the timing accuracy of the signal processing circuit. Among these factors, the stability of the transmitter's rotating head is a key determinant of the measurement accuracy, but it is too complex to analyze directly. Therefore, in order to simplify the analysis, we divide the shafting vibration of the transmitter into three independent vibration forms: axial vibration, radial vibration and yaw vibration. The detailed analyses are expounded as follows.

The actual scanning angle α' is ∠XOP0'. As shown in figure 3, point D and point D' are the projections of receiver R on OXY and O'X'Y' respectively. We define RD = h, RD' = h', ∠XOD = β, OD = OD' = d and OO' = DD' = Δz. Noting that tan(α−β) = h·tanθ/d and tan(α'−β) = h'·tanθ/d, the geometrical relationship gives the error of the scanning angle:

\Delta\alpha \approx \sin(\alpha'-\alpha) = \sin(\alpha'-\beta)\cos(\alpha-\beta) - \cos(\alpha'-\beta)\sin(\alpha-\beta)
= \frac{h'\tan\theta/d}{\sqrt{1+(h'\tan\theta/d)^2}}\cdot\frac{1}{\sqrt{1+(h\tan\theta/d)^2}} - \frac{1}{\sqrt{1+(h'\tan\theta/d)^2}}\cdot\frac{h\tan\theta/d}{\sqrt{1+(h\tan\theta/d)^2}}    (1)
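As a quick numerical sanity check of the axial-vibration geometry (all values below are illustrative, not from the paper), the exact angle difference atan(h'·tanθ/d) − atan(h·tanθ/d) should agree with the sine expansion on the right-hand side of equation (1):

```python
import math

# Illustrative check of the axial-vibration error model: theta is the laser-plane
# tilt, h the receiver height, d the horizontal distance, dz the axial amplitude.
theta = math.radians(30.0)
h, d, dz = 1.0, 5.0, 1e-4
hp = h - dz                         # h' = h - dz

u = hp * math.tan(theta) / d        # tan(alpha' - beta)
v = h * math.tan(theta) / d         # tan(alpha - beta)

exact = math.atan(u) - math.atan(v)                      # alpha' - alpha
approx = (u - v) / math.sqrt((1 + u * u) * (1 + v * v))  # right-hand side of (1)

print(abs(approx - math.sin(exact)) < 1e-12)  # the expansion equals sin(alpha'-alpha)
print(abs(approx - exact) < 1e-8)             # and is close to the angle itself
```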

Considering that h' = h − Δz, equation (1) simplifies to:

\Delta\alpha \approx \frac{-\Delta z\,d\tan\theta}{\sqrt{d^2+(h-\Delta z)^2\tan^2\theta}\,\sqrt{d^2+h^2\tan^2\theta}}    (2)

As we can see from equation (2), the error of the scanning angle is impacted by the amplitude of the axial vibration, the height of the receiver and the horizontal distance between the receiver and the transmitter.

B. Radial vibration

Like the axial vibration, the influence of the radial vibration is shown in figure 4.

Figure 4. Influence of the radial vibration

As shown in figure 4, if there were no radial vibration, the laser plane would pass through the receiver R after sweeping through an angle α. At this time, the intersection line of the laser plane and OXY is OP, and the nominal scanning angle α is ∠XOP. When radial vibration of amplitude Δr happens, the transmitter coordinate frame changes to O'-X'Y'Z'. As shown in figure 4, when the laser plane sweeps past the receiver, its intersection line with O'X'Y' is O'P'. We construct a plane passing through point O that is parallel to the plane O'P'R; its intersection line with OXY is OP0'. It is clear that the actual scanning angle α' is ∠XOP0'. As shown in figure 4, point D is the projection of receiver R on OXY. We define RD = h, OD = d, O'D = d', ∠XOD = β, ∠XOO' = φ and OO' = Δr. According to the geometrical relationship, we have the error of the scanning angle:

\Delta\alpha \approx \frac{h\tan\theta/d'}{\sqrt{1+(h\tan\theta/d')^2}}\cdot\frac{1}{\sqrt{1+(h\tan\theta/d)^2}} - \frac{1}{\sqrt{1+(h\tan\theta/d')^2}}\cdot\frac{h\tan\theta/d}{\sqrt{1+(h\tan\theta/d)^2}}    (3)

From figure 4, it is easy to know that:

d' = d - \Delta r\cos(\beta-\varphi)    (4)

Then equation (3) can be rewritten as:

\Delta\alpha \approx \frac{h\tan\theta\,\Delta r\cos(\beta-\varphi)}{\sqrt{[d-\Delta r\cos(\beta-\varphi)]^2+h^2\tan^2\theta}\,\sqrt{d^2+h^2\tan^2\theta}}    (5)

As we can see from equation (5), the error of the scanning angle is impacted by the relative position of the receiver and by the amplitude and direction of the radial vibration.

C. Yaw vibration

The influence of the yaw vibration is shown in figure 5.

Figure 5. Influence of the yaw vibration

As shown in figure 5, if there were no yaw vibration, the laser plane would pass through the receiver R after sweeping through an angle α. At this time, the intersection line of the laser plane and OXY is OP, and the nominal scanning angle α is ∠XOP. When yaw vibration happens, the transmitter coordinate frame rotates around OL with amplitude Δφ, changing to O'-X'Y'Z'. When the laser plane sweeps past the receiver, its intersection line with O'X'Y' is O'P', as shown in figure 5. Point D and point D' are the projections of receiver R on OXY and O'X'Y' respectively, D0' is the intersection of RD' and OXY, and φ is the angle between DD0' and axis X. We define RD = h, RD' = h', OD = d, ∠XOD = β and ∠XOL = γ. In order to simplify the analysis, we approximately decompose the yaw vibration into a radial vibration and an axial vibration. Therefore:

\Delta\alpha = \Delta\alpha_r + \Delta\alpha_a    (6)

For the radial vibration, the amplitude Δr is DD0' and the direction is φ. According to the geometrical relationship, it is easy to know that:

\Delta r = DD_0' = h\tan\angle D_0'RD = h\tan\Delta\varphi    (7)

\varphi = \gamma + \pi/2    (8)

Substituting equations (7) and (8) into equation (5), and noting that cos(β−φ) = sin(β−γ), we have:

\Delta\alpha_r \approx \frac{h^2\tan\theta\tan\Delta\varphi\,\sin(\beta-\gamma)}{\sqrt{[d-h\tan\Delta\varphi\sin(\beta-\gamma)]^2+h^2\tan^2\theta}\,\sqrt{d^2+h^2\tan^2\theta}}    (9)

For the axial vibration, the amplitude Δz is h' − h. As shown in figure 5, we extend DD0' until it intersects OL at point L0. It is clear that DL0 is perpendicular to OL, with foot point L0. Then we have:

L_0D_0' = L_0D - DD_0' = d\sin(\beta-\gamma) - h\tan\Delta\varphi    (10)

Also, L0D' = L0D0' and ∠D'L0D0' = Δφ, therefore:

D'D_0' = 2\sin(\Delta\varphi/2)\,L_0D_0' = 2\sin(\Delta\varphi/2)[d\sin(\beta-\gamma) - h\tan\Delta\varphi]    (11)

Moreover, we know that ∠D0'RD = Δφ, so:

RD_0' = h/\cos(\angle D_0'RD) = h/\cos(\Delta\varphi)    (12)

Therefore, we have:

\Delta z = h' - h = RD_0' + D_0'D' - h = h/\cos(\Delta\varphi) + 2\sin(\Delta\varphi/2)[d\sin(\beta-\gamma) - h\tan\Delta\varphi] - h    (13)

Substituting equation (13) into equation (2), we get:

\Delta\alpha_a \approx \frac{-\Delta z\,d\tan\theta}{\sqrt{d^2+(h-\Delta z)^2\tan^2\theta}\,\sqrt{d^2+h^2\tan^2\theta}}    (14)

where Δz is given by equation (13). According to equation (6), we have:

\Delta\alpha = \Delta\alpha_r + \Delta\alpha_a = \frac{h^2\tan\theta\tan\Delta\varphi\,\sin(\beta-\gamma)}{\sqrt{[d-h\tan\Delta\varphi\sin(\beta-\gamma)]^2+h^2\tan^2\theta}\,\sqrt{d^2+h^2\tan^2\theta}} - \frac{\Delta z\,d\tan\theta}{\sqrt{d^2+(h-\Delta z)^2\tan^2\theta}\,\sqrt{d^2+h^2\tan^2\theta}}    (15)

As we can see from equation (15), the error of the scanning angle is impacted by the amplitude and direction of the yaw vibration and by the relative position of the receiver.

IV. CONCLUSIONS

This paper analyses the shafting vibration of the wMPS transmitter by dividing it into three independent vibration forms: axial vibration, radial vibration and yaw vibration. The transfer functions between these vibrations and the measurement accuracy are presented. The analysis reveals the complex relationship between the shafting vibration and the measurement accuracy in a simple way, and is helpful for deeper research on, and future improvement of, the shafting structure.

ACKNOWLEDGMENT

This research was supported by Key Projects in the National Science & Technology Pillar Program of China (2011BAF13B04) and the National High-technology Research & Development Program of China (863 Program, 2012AA041205). The authors would like to express their sincere thanks, and comments from the reviewers and the editor are very much appreciated.

REFERENCES

[1] W. Cuypers, N. Van Gestel, A. Voet, J.-P. Kruth, J. Mingneau and P. Bleys, "Optical measurement techniques for mobile and large-scale dimensional metrology," Opt. Laser Eng., vol. 47, pp. 292-300, May 2008.
[2] Z. Xiong, J. G. Zhu, Z. Y. Zhao, X. Y. Yang and S. H. Ye, "Workspace measuring and positioning system based on rotating laser planes," Mechanika, vol. 18(1), pp. 94-98, January 2012.
[3] L. H. Yang, X. Y. Yang, J. G. Zhu and S. H. Ye, "Error analysis of workspace measurement positioning system based on optical scanning," Journal of Optoelectronics.Laser, vol. 21, pp. 1829-1833, December 2010.
[4] Z. Xiong, L. H. Yang, X. H. Wang and K. Zhang, "Application of Workspace Measurement and Positioning System in Aircraft Manufacturing Assembly," Aeronautical Manufacturing Technology, vol. 21, pp. 60-62, 2011.
[5] Z. Xiong, J. G. Zhu, L. Geng, X. Y. Yang and S. H. Ye, "Verification of horizontal angular survey performance for workspace measuring and positioning system," Journal of Optoelectronics.Laser, vol. 23, pp. 291-296, February 2012.
[6] B. Triggs, P. McLauchlan, R. Hartley and A. Fitzgibbon, "Bundle adjustment - a modern synthesis," Vision Algorithms: Theory and Practice, Springer Berlin Heidelberg, pp. 298-372, 2000.
[7] L. H. Yang, X. Y. Yang, J. G. Zhu, Q. Duanmu and D. B. Lao, "Novel Method for Spatial Angle Measurement Based on Rotating Planar Laser Beams," Chin. J. Mech. Eng.-En., vol. 23, pp. 758-764, October 2010.

Efficient Architecture for Ultrasonic Array Processing based on Encoding Techniques

Rodrigo García, M. Carmen Pérez, Álvaro Hernández, F. Manuel Sánchez, José M. Castilla, Cristina Diego
Electronics Department, University of Alcalá, Alcalá de Henares (Madrid), Spain
{rodrigo.garcia, carmen, alvaro}@depeca.uah.es

Abstract— Airborne ultrasonic systems based on phased arrays provide images of the explored area, at the expense of a low scanning speed. Encoding based on complementary sets of sequences allows the simultaneous emission of the beam in several directions, increasing the scanning speed as well as the corresponding computational load. This paper presents an efficient FPGA-based architecture for real-time processing of signals coming from an airborne ultrasonic phased array.

Keywords— Ultrasonic Phased Array; B-scan; Complementary set of sequences; Field-Programmable Gate Array.

I. INTRODUCTION

Ultrasonic Phased Arrays (PA) consist of a set of elements that are activated with different time delays, thereby forming an ultrasonic beam that can be oriented in the desired direction [1]. This kind of system has several applications, from ultrasonic image generation in medicine [2] and non-destructive testing [3] to the construction of environment maps in mobile robotics [4]. To increase the imaging rate, some works have recently proposed encoding techniques applied to the signals emitted by the array, so information can be overlapped for each image line to be represented [5]. In these cases, the performance of the final system greatly depends on the correlation properties of the codes. Furthermore, the use of these techniques requires complex processing algorithms and sequences with high lengths. This implies a high computational load, which may exceed the limits imposed by the need to work in real time, or demand high-cost platforms. One approach consists in the use of new encoding schemes based on sequences with zero correlation zones [6] [7] [8]. These codes, which are mostly derived from Complementary Sets of Sequences (CSS) [9], provide an Interference Free Window (IFW) in their aperiodic correlation functions. Thus, it is possible to mitigate the Inter-Symbol Interference (ISI) and the Multiple Access Interference (MAI), as long as the relative delays among the different receptions are within the IFW. Furthermore, these sequences also allow efficient architectures for the detection stage, typically implemented in FPGA (Field-Programmable Gate Array) devices. This paper presents the real-time implementation of an efficient FPGA-based architecture for ultrasonic signal processing in a phased array based on encoding with sequences derived from CSS, in order to achieve simultaneous scanning

in all directions. The manuscript is organized as follows: Section II briefly describes the proposed architecture and the encoding used in the ultrasonic emission; Section III explains the hardware implementation; some experimental results are shown in Section IV; and, finally, conclusions are discussed in Section V.

II. SYSTEM OVERVIEW

The goal of the proposed system is to generate B-scan images of the explored area, based on the simultaneous emission in several directions of ultrasonic signals encoded for each scan sector (A-scan). The architecture can be observed in Fig. 1. The emitter performs the storage and modulation of the sequences to be sent. Moreover, it generates the delays for every array element, in order to carry out the desired beam deflection for each sector. The delayed codes are added and then transmitted by the transducer with the purpose of achieving a simultaneous emission in several directions. The receiver block performs the demodulation and correlation of the received signals for each scanned sector (A-scan). It also performs post-processing and the B-scan image generation. CDMA (Code Division Multiple Access) techniques have been applied in order to achieve simultaneous emissions [10]. They provide a different and uncorrelated sequence to each user in the same channel, thus allowing independent access. To improve the signal-to-noise ratio (SNR) in the image generation and to reduce MAI, codes with good auto- and cross-correlation properties in the scanned region are required. Among the different alternatives, those codes with an Interference Free Window (IFW) around the origin of their correlation functions allow ISI and MAI to be significantly reduced. It is also possible to adjust the size of the IFW to the explored area, so a better contrast in B-scan images can be obtained, compared to that from other codes. Finally, most of these codes present efficient correlation structures that reduce the computational load of the detection process and make real-time operation more feasible [6] [7] [8] [11]. In this approach, the sequences in every CSS have been concatenated in natural order with a set of wo zeros among them, thus obtaining a larger sequence, called a macro-sequence [8].
Firstly, N uncorrelated CSS {Sn; 0≤n≤N-1}, each one with N sequences {sn,m[l]; 0≤n,m≤N-1; 0≤l≤L-1} of length L, are generated. Then, the sequences sn,m from each set Sn are linked with a separation of wo zeros between them. As a result, N macro-sequences Msn {Msn = [sn,0 wo sn,1 wo … sn,N-1]; 0≤n≤N-1} are obtained, with an IFW in their correlation functions with


Fig. 1. General scheme of the proposed architecture.

size 2·wo+1. Fig. 2 shows the generation process for the N macro-sequences Msn.
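The concatenation just described can be sketched as follows; the 2-sequence complementary set and the small wo are toy values for illustration, not the set used by the actual system:

```python
# Build a macro-sequence Msn = [sn,0  wo zeros  sn,1  ...  sn,N-1] from one CSS set.
def macro_sequence(css_set, wo):
    ms = []
    for i, seq in enumerate(css_set):
        if i:
            ms.extend([0] * wo)  # interpolated zeros between consecutive sequences
        ms.extend(seq)
    return ms

S0 = [[1, 1], [1, -1]]           # toy set: N=2 sequences of length L=2
ms = macro_sequence(S0, wo=3)
print(ms)        # [1, 1, 0, 0, 0, 1, -1]
print(len(ms))   # N*L + (N-1)*wo = 7
```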

- Emission through an ultrasonic array with E=8 elements.
- Exploration depth dmax=1.5m, with beam conformation from dmin=0.30m. This implies a half IFW size wo=354 bits.
- BPSK (Binary Phase Shift Keying) modulation with a carrier frequency fc=80kHz, to transmit according to the frequency behavior of the EMFi-based array used [12].

Fig. 2. Generation process for macro-sequences Msn.

The main advantage of macro-sequences, compared to others with the same IFW property such as LS [6] or GPC [7], is the reduction in the number of operations required in correlation. Nevertheless, their process gain is lower due to the larger number of zeros in the sequence. Using a CSS with length L=N, the final length LMsn for a macro-sequence Msn is (1):

L_{Ms_n} = N\cdot L + (N-1)\cdot w_o    (1)

where N is the number of sequences sn,m in every set Sn and wo is the number of interpolated zeros. The process gain GP, defined as the ratio between the auto-correlation lobe and the sequence length LMsn, is (2):

G_P = \frac{N\cdot L}{N\cdot L + (N-1)\cdot w_o}    (2)

(2)

Fig. F 3 shows an auto- annd cross-correelation for maacrosequ uences with N N=L=4 and wo=32. Note the IFW aroundd the main n correlation lobe, whose size s is 2·wo+1=65 sampless. As has been b mentioneed before, it is desirable to suitably confi figure the IFW I in order tto ensure that all possible echoes are receeived insid de it. Thus, thhe IFW shouldd be adjusted to the dimenssions of th he scanned areea. A reducedd IFW size deegrades the quuality of th he generated iimages, whereeas too large IFW implies long load. sequ uences, thus inncreasing the computational c III.

HARDWARE E IMPLEMENTA ATION

The T followingg considerationns have been taken into acccount in th he design of thhe phased arrayy and its archiitecture [5]:  Simultaneous S scan up to N= =32 different sectors, from m -64º to 64º, using 32 sequences with w length LMs =11998 bits . Mn

Fig. 3. Auto- and cross-correlation functions for macro-sequences Msn, obtained from N=4 CSS with length L=4 and w0=32.


TABLE I. RESOURCE CONSUMPTION FOR THE EMITTER MODULE IN A XILINX VIRTEX5 LX50T FPGA.

Block               | slices        | BRAMs
Emission controller | 62 (0.86%)    | 0 (0.00%)
Cn x32              | 0 (0.00%)     | 64 (53.33%)
BPSK modulator x32  | 576 (8.00%)   | 32 (26.67%)
Delays8 x32         | 1984 (27.56%) | 0 (0.00%)
SUM32 x8            | 928 (12.89%)  | 0 (0.00%)
DAC Ctrl. x8        | 260 (3.61%)   | 0 (0.00%)
Total               | 3810 (52.92%) | 96 (80.00%)

Fig. 4. Emitter module implemented in a Xilinx Virtex5 LX50T FPGA.

The binary macro-sequences Msn are stored inside the internal memory blocks BRAM (cn). A control block (Emission Controller) is responsible for accessing the position where the sequence bits are, as well as for managing the emission frequency during every whole frame. At the memory output there is a BPSK modulator (BPSK Modulator), which drives another block (Delays8) in charge of inserting the delays into the modulated sequences mn[k], according to each transducer and scan angle. Finally, an adder (SUM32) carries out the sum of all the delayed transmissions mn,e[k], and the DAC controller sends the result se[k] to the digital-analog converter (DAC). Note that the diagram in Fig. 4 only represents the resources required in the emission of one single Msn through the array. The general scheme, for the simultaneous scan of N=32 sectors, implies replicating modulators, memories and delay blocks N=32 times. The number of adders and DAC controllers is equal to the number of elements E=8 in the array. Table I shows the resource consumption of the emitter module in the Virtex5 LX50T FPGA. The main design constraints are determined by the sequence length LMsn, due to the high memory requirements, and by the temporal resolution tr configured in the delay generation, which also influences the delay block size as well as the maximum sampling frequency fS. As explained in Section IV, an EMFi-based array prototype is used for the experimental tests, where the array elements require a peak-to-peak voltage level of 150V to transmit the sequences [5]. To reach that bipolar voltage level from the 3V unipolar voltage supplied by the DAC, a filtering and amplification circuit has been included.

B. Receiver Block

The reception process has been divided into three stages: acquisition and correlation; data sending; and post-processing. An FPGA-based Genesys platform has also been selected for the low-level signal processing. This processing includes BPSK demodulation and correlation with the emitted macro-sequences Msn. The correlation results tn[k] are sent to a computer for high-level processing (envelope detection and image composition). To manage the delivery of the correlation results tn[k], a Microblaze microprocessor embedded in the FPGA has been proposed. Fig. 5 depicts the architecture implemented in the FPGA for the low-level processing.
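The correlation principle the receiver relies on can be illustrated with the simplest complementary pair (a toy example, not the authors' N=32 correlator): summing the aperiodic autocorrelations of the sequences of one set cancels every sidelobe, which is what produces the interference-free window around the main lobe.

```python
# Aperiodic autocorrelation, and the complementary-set cancellation property.
def acorr(s):
    n = len(s)
    return [sum(s[i] * s[i + k] for i in range(n - k)) for k in range(n)]

a = [1, 1, 1, -1]   # Golay complementary pair of length L=4
b = [1, 1, -1, 1]
total = [x + y for x, y in zip(acorr(a), acorr(b))]
print(total)  # [8, 0, 0, 0]: main lobe 2L = 8, all sidelobes cancel
```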


Fig. 5. Block diagram of the low-level processing at the receiver.

The ultrasonic signal r[k] is captured by a microphone and acquired by an analog-to-digital converter (ADC). The acquired signal r[k] is asynchronously demodulated (BPSK Demod.) to obtain d[k]. Then, a correlation block (Correlator CSS) searches for any of the N=32 emitted macro-sequences Msn. It uses the efficient scheme in [7] to obtain the correlations tn[k] with the N=32 macro-sequences simultaneously. The area of interest in the correlation is the IFW, which is stored for later processing. The memory necessary to store all the desired data is w0·D·Of·N = 2 Mbit, where wo is the size of the IFW, D is the output data width, Of is the oversampling factor, and N is the number of macro-sequences. This is 104% of the FPGA internal memory, so data are stored in external DDR2 memory. These data must be accessible by Microblaze, so a multiport memory controller (MPMC) has been used, with the Xilinx DMA protocol based on an NPI (Native Peripheral Interface). After all the correlation results tn[k] have been stored, a Microblaze interrupt is set to indicate that data are ready. To transmit data to a PC, a TCP/IP communication has been established. Finally, envelope detection is applied to every correlation to obtain an A-scan line; these A-scan lines provide a dB-intensity B-scan image. Table II shows the resource consumption for the low-level processing and Microblaze.

IV. EXPERIMENTAL TESTS

First experimental tests have been carried out with an EMFi-based array prototype [5] composed of E=8 elements, with dimensions 0.26x4 cm2 (chosen to keep the pitch ratio in order to avoid grating lobes), an element gap of 1mm and a maximum operation frequency of 47kHz. This allows eight angular sectors from -52º to 60º to be covered. Therefore, N=8 macro-sequences are emitted instead of 32 (as


was originally designed in the system). Due to the geometrical configuration of the array, the carrier frequency fc has been reduced to 40kHz. The IFW has been kept from 0m to 1.5m.

TABLE II. RESOURCE CONSUMPTION FOR THE RECEIVER MODULE IN A XILINX VIRTEX5 LX50T FPGA.

Block            | slices        | DSPs        | BRAMs
ADC Ctrl.        | 15 (0.21%)    | 0 (0.00%)   | 0 (0.00%)
BPSK demodulator | 68 (0.94%)    | 1 (0.83%)   | 1 (2.08%)
Delays           | 475 (6.60%)   | 62 (51.67%) | 0 (0.00%)
CSS correlator   | 4945 (68.81%) | 0 (0.00%)   | 0 (0.00%)
Ctrl. DMA (NPI)  | 102 (1.67%)   | 0 (0.00%)   | 0 (0.00%)
Microblaze       | 1295 (17.99%) | 22 (18.33%) | 3 (6.25%)
Total            | 6953 (96.57%) | 85 (70.83%) | 4 (8.33%)

A test has been conducted for the scenario shown in Fig. 6, with two metal poles of 6x6cm: one (object 1) placed at 30cm and 20° from the axial axis of the array, and another (object 2) at 45cm and -40°. Fig. 7 depicts the B-scan image with -10dB contrast. In the detection of object 1, secondary lobes appear after the main lobe, caused by the multipath effect and by the proximity of the reflector. In Fig. 8, the B-scan image with -5dB contrast is shown, where the secondary lobes caused by the multipath effect are no longer observable.

Fig. 6. Experimental set-up.

Fig. 7. B-scan image for the scenario in Fig. 6 with -10dB contrast.

Fig. 8. B-scan image for the scenario in Fig. 6 with -5dB contrast.

V. CONCLUSIONS

An efficient processing architecture for an airborne ultrasonic phased array has been presented, allowing simultaneous scanning in several directions by emitting macro-sequences derived from CSS. The use of these codes increases the image generation rate. The proposed FPGA implementation can achieve real-time processing of the ultrasonic signals from a phased array. First experimental tests with an EMFi ultrasonic array have validated the design. Future work will deal with a further comparison with other existing approaches, as well as with encoding improvements.

ACKNOWLEDGEMENTS

This work has been supported by the University of Alcalá (SIMULTANEOUS project, ref. UAH2011/EXP-003) and the Spanish Ministry of Economy and Competitiveness (LORIS project, ref. TIN2012-38080-C04-01, and DISSECT-SOC project, ref. TEC2012-38058-C03-03).

REFERENCES

[1] O. T. Von Ramm, S. W. Smith, "Beam Steering with Linear Arrays", IEEE Tr. on Biomedical Engineering, BME-30, no. 8, pp. 438-452, 1983.
[2] S. W. Smith, H. G. Pavy, O. Von Ramm, "High-speed ultrasound volumetric imaging system. Part I: Transducer design and beam steering", IEEE Tr. on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 38, no. 2, pp. 100-108, 1991.
[3] M. Parrilla, P. Nevado, A. Ibañez, J. Camacho, J. Brizuela, C. Fritsch, "Ultrasonic imaging of solid railway wheels", Proc. of the IEEE Ultrasonics Symposium, China, pp. 414-417, 2008.
[4] D. T. Pham, Z. Ji, A. Soroka, "Ultrasonic distance scanning techniques for mobile robots", Int. J. of Computer Aided Engineering and Technology, vol. 1, no. 2, pp. 209-224, 2009.
[5] C. Diego, A. Hernández, A. Jiménez, F. J. Álvarez, R. Sanz, "Ultrasonic array for obstacle detection based on CDMA with Kasami codes", Sensors, vol. 11, pp. 11464-11475, 2011.
[6] C. Zhang, S. Yamada, M. Hatori, "General method to construct LS codes by complete complementary sequences", IEICE Trans. on Wireless Communications Tech., E-88-B, vol. 8, pp. 3484-3487, 2005.
[7] H.-H. Chen, Y.-C. Yeh et al., "Generalized pairwise complementary codes with set-wise uniform interference-free windows," IEEE Journal on Selected Areas in Communications, vol. 24, no. 1, pp. 65-74, 2006.
[8] M. C. Pérez, R. Sanz, J. Ureña, A. Hernández, C. De Marziani, F. J. Álvarez, "Correlator implementation for Orthogonal CSS used in an ultrasonic LPS", IEEE Sensors J., vol. 12, no. 9, pp. 2807-2816, 2012.
[9] C. C. Tseng, C. L. Liu, "Complementary sets of sequences", IEEE Tr. on Information Theory, IT-18(5), pp. 644-652, 1972.
[10] H. Chen, "Next Generation CDMA Technologies", John Wiley & Sons, Ltd, West Sussex, England, 2007.
[11] M. C. Pérez, J. Ureña, A. Hernández, C. De Marziani, A. Jiménez, "Optimized Correlator for LS Codes-Based CDMA Systems", IEEE Communications Letters, vol. 15, no. 2, pp. 223-225, 2011.
[12] M. Paajanen, J. Lekkala, K. Kirjavainen, "ElectroMechanical Film EMFi. A New Multipurpose Electret Material", Sensors and Actuators A, vol. 84, pp. 95-102, 2000.


- chapter 11 -

Communication, Networking & Broadcasting


Using Double-peak Gaussian Model to Generate WiFi Fingerprinting Database for Indoor Positioning

Lina Chen

Binghao Li

College of Information Science and Technology, ECNU, Shanghai, China; College of Mathematics, Physics and Information Engineering, Zhejiang Normal University, Jinhua, China; School of Surveying and Geospatial Engineering, UNSW, Sydney, Australia [email protected]

School of Surveying and Geospatial Engineering, UNSW, Sydney, Australia [email protected]

Zhengqi Zheng College of Information Science and Technology, ECNU Shanghai, China [email protected]

Chunyu Miao

Jianmin Zhao

College of Xingzhi, Zhejiang Normal University Jinhua, China [email protected]

College of Mathematics, Physics and Information Engineering, Zhejiang Normal University Jinhua, China [email protected]

ABSTRACT

Location fingerprinting using WiFi signals has been very popular and is a well accepted indoor positioning method. The key issue of the fingerprinting approach is generating the fingerprint radio map. Limited by the practical workload, only a few samples of the received signal strength are collected at each reference point. Unfortunately, so few samples cannot accurately represent the actual distribution of the signal strength from each access point. This study finds that most WiFi signals have two peaks. According to this new finding, a double-peak Gaussian arithmetic is proposed to generate the fingerprint radio map. This approach requires little time to receive WiFi signals, and it is easy to estimate the parameters of the double-peak Gaussian function. Compared to the Gaussian function and the histogram method for generating a fingerprint radio map, this method better approximates the observed signal distribution. This paper also compares the positioning accuracy using K-Nearest Neighbor theory for the three radio maps; the test results show that the positioning distance error utilizing the double-peak Gaussian function is smaller than with the other two methods.

KEYWORDS: Indoor positioning; Double-peak Gaussian Arithmetic (DGA); Wi-Fi fingerprinting


1. INTRODUCTION

Location Based Services (LBS) are mobile applications that depend on mobile devices and the mobile network to calculate the actual geographical location of a mobile user, and furthermore to provide the service information that users need, related to their current real-space position [1] [2]. One of the key issues for LBS is positioning accuracy, particularly since the requirement for positioning accuracy indoors is usually higher than that outdoors. In outdoor applications, using a Global Navigation Satellite System (GNSS) such as the Global Positioning System (GPS) is sufficient because it provides location accuracy within several meters. However, GPS is still not suitable for indoor positioning, as the GPS signal cannot penetrate the walls of buildings [3] [4]. Indoor positioning technology has attracted huge interest from the research community. There are many techniques that can be used in indoor positioning, such as the angle and time difference of arrival of a signal. However, significant multipath effects and non-line-of-sight environments can lead to inaccurate angle and time estimations. The fingerprinting technique has been accepted as a simple and effective approach that can provide location-aware capability for devices equipped with WLAN (such as Wi-Fi) in indoor environments [5]. Wi-Fi is now generally adopted for indoor positioning due to widely deployed access points (APs). Mobile devices are equipped with a Wi-Fi chipset as standard, and Wi-Fi signals are typically available in most buildings. Additionally, Wi-Fi as an existing infrastructure can reduce the cost of implementing location-dependent services, and by only utilizing signal strengths (SS) it is easy to obtain the measurements required to determine a user's position. These advantages have made using Wi-Fi for indoor positioning very popular.

Because of non-line-of-sight (NLOS) propagation and multipath effects, it is very difficult to convert SS measurements into accurate range measurements; fingerprinting is usually used to overcome this problem [6]. The fingerprinting approach is considered a better method for ubiquitous indoor positioning because it exploits NLOS propagation and multipath by mapping location to the received signal strength indicator (RSSI) [5],[7]. Although the RSSI can be chosen as the characteristic value representing indoor location in fingerprinting positioning systems, the actual distribution of RSSI for IEEE 802.11a/b/g itself is rarely known. The location fingerprints can be as simple as patterns of averaged RSSI or distributions of RSSI from a number of APs. In the literature, systems that maintain or estimate distributions of RSSI for each location usually have better positioning performance [8]. A lognormal distribution was assumed to model the RSSI [9]. Shape filtering on the empirical distribution was used to estimate the RSSI distribution [10]. Kamol et al. compared measured data to a Gaussian model to see how well a Gaussian model can fit the data [11]. Another solution used the Weibull function to approximate the Bluetooth signal strength distribution in the data training phase of location fingerprinting [12]. All this shows that improved understanding of RSSI and approximation of actual RSSI distributions are key issues for improving the performance of WLAN indoor positioning.

The average SS of each Wi-Fi AP measured at each reference point (RP) is used to generate the radio map. Since the variation of the SS measured at each point is large, the RSSI distribution is usually not close to Gaussian or Weibull. The distribution typically varies across locations, and at the same location when the orientation of the antenna changes [13][14]. In our recent research, some new characteristics of Wi-Fi signals were found, listed below and shown in Fig. 1. 1) The vast majority of distributions of received signal strength (RSS) from APs consist of two peaks and a long tail, as the red line in Fig. 1 shows. The double peaks are especially obvious. This has not been mentioned in previous literature. 2) A Gaussian function does not approximate the distribution of the RSS well, as the black line in Fig. 1 shows. The Gaussian function is fit to the same data as the occurrence distribution in the red line; unfortunately, the shapes of the two lines are not very similar. 3) The poorest approximation lies in the double-peak region of the data. About 90 percent of signals are in the double-peak region and nearly 50 percent are in peak 1, as illustrated in Fig. 2. The large difference between the two lines may lead to larger errors in location fingerprinting for indoor positioning.


Fig. 1. A new distribution characteristic of signals, with two peaks (Peak 1, Peak 2) and a long tail, compared with the non-matching Gaussian fit

Fig. 2. Distribution proportion of signals in Fig. 1 (peak 1: 51%, peak 2: 42%, tail: 7%)

Generating the radio map is an essential prerequisite for a location fingerprint. The more measurements obtained at each RP, the better the positioning performance. However, more measurements mean more time and a more intensive computational task. In practice, only a few samples of the RSSI are typically collected at each RP, and these limited samples cannot represent the actual signal distribution well. This paper presents a new approach using the double-peak Gaussian arithmetic (DGA) to approximate the Wi-Fi SS distribution according to the observed characteristics. The experimental results show that location fingerprint indoor positioning using DGA is better than the Gaussian approach. The study also improves the efficiency of generating a fingerprinting database.

The last several decades have seen revolutionary development of Global Navigation Satellite Systems (GNSS), and positioning and navigation outdoors is almost a solved problem. However, GNSS cannot receive enough good-quality satellite signals inside buildings or underground mines, so it fails indoors.

Indoor positioning technologies can be based on signals of opportunity. Such signals, like WiFi signals, are not intended for positioning and navigation; they are designed for other purposes, and given the harsh reality of signal propagation in the indoor environment, achieving a high degree of accuracy is a very difficult, if not impossible, task [15][16]. Fingerprinting is widely used where direct line-of-sight propagation is not typical; low cost and wide coverage are its main advantages. Many positioning technologies require the deployment of infrastructure, such as systems using infrared, ultrasound [17] [18] and ultra-wideband [19]. Developing new infrastructure is not only costly, but its coverage is usually very limited, as with hot-spot modes. Such technologies typically have to be used when a reliable and accurate positioning result is required. An obvious advantage of using WiFi signals for indoor positioning is that no infrastructure needs to be pre-deployed, which makes such a system cost effective, even though only signal strengths (SS) are available.

As a standard networking technology, WiFi access points (APs) are widely deployed. Modern mobile devices are equipped with WiFi chips, and WiFi signals are available in almost every building, which makes WiFi a very practical means of indoor positioning.

2. EXPERIMENT DESIGN

2.1 Location Fingerprinting

Positioning based on WiFi location fingerprinting consists of two phases, the off-line data training phase and the on-line location phase, as shown in Fig. 3. The off-line phase builds a radio map for the targeted area based on RSSI, and the on-line phase calculates the user's location based on the fingerprints stored in the radio map. For the off-line training phase, the targeted area is divided into cells which are considered as reference points, and the coordinates of the reference points are determined in advance. Then the RSSI at each reference point from all access points is collected, processed and stored as fingerprints in the radio map. During the on-line location phase, the unknown position of a mobile user is estimated by comparing the current RSSI measurements with the data in the radio map [20] [21].

Fig. 3. Location fingerprinting arithmetic

2.2 Experiment Condition and Device

This study performed an experiment in a six-storey office building. A total of 46 access points (APs) are placed to serve most areas, and all access points in this building support IEEE 802.11a/b/g wireless local area network (WLAN) cards. Measurements were made in one 45-square-meter room on the fourth floor, in which four reference points (RPs) and five test points (Ti, i=1, 2, 3, 4, 5) were arranged, as illustrated in Fig. 4.
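The two-phase scheme of Section 2.1 can be sketched in code. The on-line matching below uses the K weighted nearest neighbor (KWNN) algorithm with inverse signal-distance weights, which this study applies later; the radio-map values are hypothetical illustrations, not the paper's measurements:

```python
import math

# Hypothetical off-line radio map: reference-point coords (m) -> mean RSSI per AP (dBm).
radio_map = {
    (0.0, 0.0): [-50, -62, -71],
    (3.0, 0.0): [-55, -58, -74],
    (0.0, 3.0): [-60, -66, -65],
    (3.0, 3.0): [-63, -60, -68],
}

def kwnn_locate(observed, k=4):
    """On-line phase: weighted average of the K reference points whose
    stored fingerprints are closest (Euclidean distance in signal space)
    to the observed RSSI vector; weights are inverse signal distances."""
    dists = []
    for coords, fp in radio_map.items():
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(observed, fp)))
        dists.append((d, coords))
    dists.sort()
    neighbors = dists[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in neighbors]
    wsum = sum(weights)
    x = sum(w * c[0] for w, (_, c) in zip(weights, neighbors)) / wsum
    y = sum(w * c[1] for w, (_, c) in zip(weights, neighbors)) / wsum
    return x, y

# A measurement resembling the fingerprint at (0, 0) pulls the estimate that way.
x, y = kwnn_locate([-52, -61, -70])
```

The estimate stays inside the convex hull of the reference points, biased toward the best-matching fingerprint.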

Fig. 4. Condition of the experiment (five test points Ti (i=1,2,3,4,5) and four RPs)

A standard laptop computer was used to collect Wi-Fi signals at all reference points and test points throughout the experiment. Table 1 lists the device and chipset information and the wireless local area network standards. Note that the results of this paper relate only to the wireless device used in the experiment. They may be the same with other computer hardware, but confirming that requires additional study beyond the scope of the current work.

Table 1. Experimental equipment and communication standards used


Vendor      Intel Corporation
Model       Advanced-N 6205
Chipset     Intel
Interface   PCI-E
Standards   IEEE 802.11a/b/g

3. DISCUSSION OF EXPERIMENT RESULTS

3.1 The Double-peak Gaussian Arithmetic (DGA) and Gaussian Function

As is commonly known, Gaussian fitting is a traditional method. Its probability density function can be expressed as:

F(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-u)^2}{2\sigma^2}}  (\sigma > 0)   (1)

where x is the variable of the function, u is the mean of x, and σ is the standard deviation of x. In this paper, the RSSI of each AP was divided into two parts according to the minimum value between the two peaks. Each part was then regarded as a Gaussian function, and finally the two functions were added. Judging from the distribution proportion of the signals of the two peaks shown in Fig. 2, the weight given to each Gaussian was 1/2. The DGA probability density function was defined as function (2), where u1 and σ1 are the mean and the standard deviation of the signals of part 1, and u2 and σ2 those of part 2:

F(x) = \frac{1}{2}\left[ \frac{1}{\sqrt{2\pi}\,\sigma_1} e^{-\frac{(x-u_1)^2}{2\sigma_1^2}} + \frac{1}{\sqrt{2\pi}\,\sigma_2} e^{-\frac{(x-u_2)^2}{2\sigma_2^2}} \right]  (\sigma_1, \sigma_2 > 0)   (2)

3.2 The Comparison of DGA and Gaussian Function

Using the Gaussian function to generate radio maps of location fingerprinting for indoor positioning is not a new method [20]. Since the variation of the measured SS is large at each point, the probabilistic approach based on the Gaussian distribution has also been developed to achieve more accurate results. Unfortunately, the distribution of the SS is non-Gaussian, as observed in section one. We attempt to characterize the properties of indoor received signal strength, and the results in Fig. 5 provide preliminary guidelines for better understanding the nature of RSSI from an indoor positioning perspective. In Fig. 5, the red dashed line is the occurrence distribution of RSSI, the blue solid line is the probability distribution derived from the double-peak Gaussian fit to the occurrence RSSI, and the green solid line is the probability distribution derived from the Gaussian function fit to the occurrence RSSI. It can be seen from Fig. 5 that the probability distributions estimated with the double-peak Gaussian solution are significantly better than those obtained from the conventional Gaussian function approach.

Fig. 5. Comparison of the Gaussian function and the double-peak Gaussian with the occurrence distribution of RSSI

3.3 The Location Fingerprinting Using DGA

The data collected in this experiment were further used to produce location fingerprints. Three groups of radio maps were generated: databases using the Gaussian function, the traditional histogram, and the double-peak Gaussian arithmetic proposed in this study. To compare the radio maps generated here, the K weighted nearest neighbor (KWNN, K=4) algorithm was selected, using the inverse of the signal distance as the weight. Fig. 6 shows the location error of the three algorithms at the five test points. The blue, green and red solid lines stand for the Gaussian, histogram and double-peak Gaussian arithmetic respectively. Clearly, the positioning accuracy using the double-peak Gaussian method is better than the other two approaches at all test points.

Fig. 6. Distance errors at each test point using the three radio maps

The average distance errors for the whole, horizontal and vertical directions are listed in Table 2. The test results show that the positioning accuracy using the double-peak Gaussian approach
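A minimal sketch of how the parameters of function (2) might be estimated from RSSI samples, splitting at the least frequent value between the two peaks and taking each part's mean and standard deviation; the sample values below are illustrative, not the paper's data:

```python
import math
from statistics import mean, pstdev

def fit_dga(samples):
    """Split RSSI samples at the least frequent value between the two
    histogram peaks, then fit a Gaussian to each part (weight 1/2 each).
    Assumes the samples actually show two peaks, as observed in Fig. 1."""
    counts = {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    values = sorted(counts)
    # The two most frequent values are taken as the peaks.
    p1, p2 = sorted(sorted(counts, key=counts.get, reverse=True)[:2])
    # Split point: minimum occurrence between the peaks.
    between = [v for v in values if p1 < v < p2]
    split = min(between, key=counts.get) if between else p1
    part1 = [s for s in samples if s <= split]
    part2 = [s for s in samples if s > split]
    return (mean(part1), pstdev(part1)), (mean(part2), pstdev(part2))

def dga_pdf(x, params):
    """Double-peak Gaussian density per function (2)."""
    (u1, s1), (u2, s2) = params
    g = lambda t, u, s: math.exp(-(t - u) ** 2 / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)
    return 0.5 * g(x, u1, s1) + 0.5 * g(x, u2, s2)
```

The fitted means then land near the two histogram peaks, and the density is highest in the double-peak region, which is exactly where the single Gaussian fits worst.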


is improved in every direction. The largest improvement in mean distance error is about 28%, in the horizontal direction 56%, and in the vertical direction 38%. Even the smallest improvement over the other methods, across all directions, was about 26%.
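These percentages follow directly from Table 2; a quick check with the values transcribed from the table:

```python
# Distance errors (m) from Table 2: (Gaussian, Histogram, Double-peak Gaussian)
errors = {
    "mean":       (1.50, 1.45, 1.08),
    "horizontal": (0.89, 0.96, 0.42),
    "vertical":   (1.00, 0.92, 0.62),
}

def improvement(baseline, dga):
    """Relative reduction of the DGA error versus a baseline method."""
    return 1.0 - dga / baseline

# Best improvement of DGA over either baseline, per direction.
best = {d: max(improvement(g, dga), improvement(h, dga))
        for d, (g, h, dga) in errors.items()}
# best["mean"] ~ 0.28, best["horizontal"] ~ 0.56, best["vertical"] ~ 0.38

# Smallest improvement over any baseline in any direction.
worst = min(min(improvement(g, dga), improvement(h, dga))
            for g, h, dga in errors.values())
# worst ~ 0.26 (mean error versus the histogram map)
```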

Table 2. List of distance errors (unit: m)

                  Gaussian   Histogram   Double-peak Gaussian
Mean (D1)         1.50       1.45        1.08
Horizontal (D2)   0.89       0.96        0.42
Vertical (D3)     1.00       0.92        0.62

To further characterize the double-peak Gaussian arithmetic, the maximum and minimum distance errors of each approach in every direction were compared; Table 3 lists the results. Two conclusions can be drawn from Table 3: both the maximum and the minimum errors are improved by the double-peak Gaussian technique, and the biggest absolute distance error decreased by more than 0.8 meters.

Table 3. List of maximum and minimum distance errors (unit: m)

         Gaussian   Histogram   Double-peak Gaussian
Max D1   2.18       1.94        1.38
Max D2   1.70       1.68        1.07
Max D3   1.98       1.71        1.35
Min D1   0.32       0.43        0.24
Min D2   0.23       0.15        0.12
Min D3   0.23       0.22        0.04

4. CONCLUSION

Because of the newly observed characteristic (two peaks), this paper presents a new solution using the double-peak Gaussian arithmetic to approximate the WiFi signal strength distribution in the off-line training phase of location fingerprinting. The approach makes it easy to estimate the parameters of the double-peak Gaussian function. Compared to the Gaussian function, the double-peak Gaussian uses two peaks to describe the distribution over the entire RSSI domain. This research indicates that the reliability and accuracy of the fingerprint radio map improve with the double-peak Gaussian function. Position estimation based on K-Nearest Neighbor theory, with histogram and Gaussian function radio maps for comparison, is used in the positioning phase; the test results show that the double-peak Gaussian arithmetic performs better than the other two fingerprint methods. Although this test shows better results, some concerns remain. First, the experimental test bed is only a small office; the distance between the reference points and test points is relatively small, which makes the distance error less pronounced. Second, the double-peak Gaussian fingerprinting database has not been applied in other WLANs or open areas to verify its general functionality. Furthermore, the measured SS at each point

has a large variation; it even varies at the same point at different times. Addressing these issues is the subject of future work.

ACKNOWLEDGEMENTS

This work was partially supported by the Pre-Research project of the key technology research of container intelligent logistics based on BeiDou satellite, funded by the Science and Technology Commission of Shanghai Municipality (12511501102). It was also supported by the project Research on Authentication Platform of Cloud Computing based on the Internet of Things, funded by the National Natural Science Foundation of China (61272468), and by the project of high gain, low cost, miniaturized multimode substrate integrated satellite navigation antenna, funded by the Shanghai Municipal Commission of Economy and Informatization.

REFERENCES:

[1] Richard Ferraro, L. Li (translation), "Location-Aware Applications", POSTS & TELECOM press, Beijing, 2012.


[2] Market Survey Report: "Location Based Services - Market and Technology Outlook - 2013-2020", Market Info Group LLC (MIG), Inc, 2013.
[3] B. Li, J. Zhang, A.G. Dempster, C. Rizos, "Open Source GNSS Reference Server for Assisted-Global Navigation Satellite Systems", The Journal of Navigation, Vol. 64, No. 1, 2011, pp. 127-139.
[4] B. Li, J. Salter, A.G. Dempster, C. Rizos, "Indoor Positioning Techniques Based on Wireless LAN", in Proceedings of 1st IEEE Int. Conf. on Wireless Broadband & Ultra Wideband Communications, Sydney (Australia), 13-16 March, 2006.
[5] P. Bahl, V.N. Padmanabhan, "RADAR: an in-building RF-based user location and tracking system", Proceedings of IEEE INFOCOM 2000, Tel Aviv (Israel), March 26-30, 2000, pp. 775-784.
[6] H. Hashemi, "The indoor radio propagation channel", Proceedings of the IEEE, Vol. 81, No. 7, 1993, pp. 943-968.
[7] B. Li, Y. Wang, H.K. Lee, A.G. Dempster, C. Rizos, "Database updating through user feedback in fingerprinting-based Wi-Fi location systems", Proceedings of International Conference on Ubiquitous Positioning Indoor Navigation & Location Based Service, Kirkkonummi (Finland), October 14-15, 2010, paper 1, session 3.
[8] K. Kaemarungsi, "Distribution of WLAN Received Signal Strength Indication for Indoor Location Determination", 2006 1st International Symposium on Wireless Pervasive Computing, Phuket (Thailand), 16-18 Jan, 2006.
[9] M.A. Youssef, "HORUS: A WLAN-based indoor location determination system", Ph.D. dissertation, University of Maryland, College Park, MD, 2004.
[10] Z. Xiang, S. Song, J. Chen, H. Wang, J. Huang, X. Gao, "A wireless LAN-based indoor positioning technology", IBM Journal of Research and Development, Vol. 48, No. 5/6, 2004, pp. 617-626.
[11] K. Kaemarungsi, P. Krishnamurthy, "Properties of Indoor Received Signal Strength for WLAN Location Fingerprinting", Proceedings of IEEE First Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services, Boston (USA), August 22-26, 2004, pp. 14-23.
[12] L. Pei, R. Chen, J. Liu, H. Kuusniemi, T. Tenhunen, Y. Chen, "Using Inquiry-based Bluetooth RSSI Probability Distributions for Indoor Positioning", Journal of Global Positioning Systems, Vol. 9, No. 2, 2010, pp. 122-130.
[13] A.M. Ladd, K.E. Bekris, A. Rudys, G. Marceau, L.E. Kavraki, S. Dan, "Robotics-based location sensing using wireless Ethernet", Eighth ACM Int. Conf. on Mobile Computing & Networking (MOBICOM), Atlanta, Georgia (USA), 23-28 September 2002, pp. 227-238.
[14] Y. Wang, X. Jia, H.K. Lee, G.Y. Li, "An indoor wireless positioning system based on WLAN infrastructure", 6th Int. Symp. on Satellite Navigation Technology Including Mobile Positioning & Location Services, Melbourne (Australia), July 22-25, 2003, CD-ROM proc., paper 54.
[15] B. Li, A.G. Dempster, J. Barnes, C. Rizos, D. Li, "Probabilistic algorithm to support the fingerprinting method for CDMA location", in Proc. Int. Symp. on GPS/GNSS, 2005.
[16] B. Li, Y. Wang, H.K. Lee, A.G. Dempster, C. Rizos, "Method for yielding a database of location fingerprints in WLAN", IEE Proceedings - Communications, Vol. 152, 2005, pp. 580-586.
[17] R. Want, A. Hopper, V. Falcao, J. Gibbons, "The active badge location system", ACM Transactions on Information Systems, Vol. 10, 1992, pp. 91-102.
[18] N.B. Priyantha, A. Chakraborty, H. Balakrishnan, "The cricket location-support system", 6th ACM International Conference on Mobile Computing and Networking, Boston (USA), Aug 6-11, 2000, pp. 32-43.
[19] S. Gezici, Z. Tian, G. Giannakis, H. Kobayashi, A. Molisch, H. Poor, Z. Sahinoglu, "Localization via ultra-wideband radios: a look at positioning aspects for future sensor networks", IEEE Signal Processing Magazine, Vol. 22, 2005, pp. 70-84.
[20] M.A. Youssef, A. Agrawala, A.U. Shankar, "WLAN Location Determination Via Clustering and Probability Distributions", Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, Texas (USA), March 23-26, 2003, pp. 143-150.
[21] T. Roos, P. Myllymaki, H. Tirri, P. Misikangas, J. Sievanen, "A Probabilistic Approach to WLAN User Location Estimation", International Journal of Wireless Information Networks, Vol. 9, No. 3, 2002, pp. 155-164.



Indoor Positioning using Ultrasonic Waves with CSS and FSK Modulation for Narrow Band Channel

Alexander Ens, Fabian Hoeflinger and Leonhard Reindl

Johannes Wendeberg and Christian Schindelhauer

Laboratory for Electrical Instrumentation (EMP) Department of Microsystems Engineering (IMTEK) University of Freiburg, Germany Email: [email protected]

Chair for Computer Networks and Telematic (CONE) Department of Computer Science (IIF) University of Freiburg, Germany Email: [email protected]

Abstract—We propose a transmission scheme for localization based on the exchange of data between transmitter and receiver. The ultrasonic signal is used twice: first for indoor localization, by synchronization of the transmitter with the receiver, and second to transmit additional information that improves the localization. Our approach codes the information as a combination of chirp spread spectrum (CSS) signals and frequency shift keying (FSK). This method avoids fast phase changes and frequency shifts of the ultrasonic wave, which results in a narrow-band characteristic. Index Terms—FSK; CSS; Ultrasonic; Communication; Localization

I. INTRODUCTION

In our everyday life it is important to know the actual position of things. Interest in localization services is growing, and there is a huge number of possible applications, e.g. navigation of shopping carts in supermarkets. Localization systems based on ultrasound are very cheap, have low complexity compared to radio frequency systems, and good position accuracy is possible with simple hardware. Because the speed of sound is about 10^6 times slower than the speed of light, the position can be determined by time difference of arrival (TDOA) methods with low sampling rates of the received signal and without an additional intermediate-frequency mixer. The disadvantage of ultrasound is the absorption, and therefore attenuation, of the transmitted signal by the air; this attenuation also depends on the temperature and humidity of the air. Sound noise from industry and traffic disturbs the ultrasonic signal as well. Another point to keep in mind are the strong reflections at walls and plane surfaces, which cause additional echoes that disturb the signal and reduce the signal-to-noise ratio (SNR) at the receiver. To overcome the absorption of the air, low frequencies can be used for the transmission [1]. To avoid distortion of the signal by echoes, a guard interval provides a silent pause before the next signal is transmitted. A simple localization system has one transmitter and at least three receivers to determine the position of the transmitter by TDOA in 2D. To distinguish between more than one transmitter, the transmitted signals need additional information about the signal origin, i.e. the identification of the transmitter.
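The TDOA principle mentioned above can be illustrated with a small brute-force sketch; the receiver layout, grid and noise-free delays are made up for illustration and are not the established algorithms the paper refers to:

```python
import math

C = 343.0  # assumed speed of sound in m/s (temperature dependent)

def tdoa_locate(receivers, tdoas, area=(0.0, 10.0, 0.0, 10.0), step=0.05):
    """Brute-force 2-D TDOA fit: pick the grid point whose predicted
    range differences best match the measured time differences (each
    measured relative to the first receiver)."""
    x0, x1, y0, y1 = area
    best, best_err = (x0, y0), float("inf")
    nx = int(round((x1 - x0) / step))
    ny = int(round((y1 - y0) / step))
    for i in range(nx + 1):
        for j in range(ny + 1):
            x, y = x0 + i * step, y0 + j * step
            r = [math.hypot(x - rx, y - ry) for rx, ry in receivers]
            err = sum(((r[k] - r[0]) - C * tdoas[k - 1]) ** 2
                      for k in range(1, len(receivers)))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

With receivers at (0,0), (10,0) and (0,10) m and a transmitter at (3,4) m, the two measured time differences recover the position to grid resolution.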

Then the receiver can determine the origin of the signal and map the time of arrival to the transmitter. The calculation of the position is thus augmented from a pure TDOA problem to data transmission plus TDOA. A possible solution is to give each transmitter a different frequency band. Yet this is very expensive, because it needs a broadband receiver, and the free frequency bands are limited. Another modulation scheme is chirp spread spectrum (CSS) [2]. Chirp modulation avoids destructive interference of the echoes at the receiver by linear frequency modulation, so the signal cannot vanish at the receiver. Further advantages of CSS are robustness against the Doppler shift and good detection of the center of the chirp sequence by correlation. However, CSS modulation needs fast phase changes and therefore a higher bandwidth. Gaussian Minimum Shift Keying (GMSK) overcomes the problem of fast phase switching by rounding the phase transitions [3].

II. SYSTEM DESCRIPTION

In our measurement setup we place the receivers at the top and the transmitters are mobile robots. The positions of the receivers are known, and the position of the signal origin can be calculated by established TDOA algorithms. The narrow-band transmitter device used has its resonance frequency at about 39 kHz and the receiver at about 41 kHz; therefore, to get the maximum out of the transmission devices, a band of 2 kHz is used. The symbol set consists of two continuous sine waves with constant frequencies f0 and f1, an "up" chirp and a "down" chirp. The symbol length is the inverse of the used frequency bandwidth: T = 1/Δf = 0.5 ms.

The first symbol in the frame is for precise synchronization and is only used at the beginning of the frame. The synchronization symbol is an "up" and "down" chirp within the duration of one symbol. The next symbol is an "up" chirp for logic 1 or a constant sine with frequency f0 for logic 0. Each following symbol depends on the previous symbol; Table I shows the mapping of symbols depending on the previous bit. So the data is coded in the frequency, either as constant sines or as chirps. The modulation of the frequency over time for the bit sequence 0011000 is shown in Figure 1. Instead of using two frequency sources for the FSK, we use only one sine source and change the phase slope smoothly.

Previous data bit   Current data bit   Symbol
0                   0                  Constant sine with frequency f0
0                   1                  "Up" chirp from frequency f0 to f1
1                   0                  "Down" chirp from frequency f1 to f0
1                   1                  Constant sine with frequency f1

Table I. Symbol mapping for combined FSK and CSS modulation
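Table I can be sketched as a small encoder; the function name and string labels are ours, not from the paper:

```python
def encode_frame(bits):
    """Map a bit sequence to symbols per Table I: the chosen symbol
    depends on the previous and current bit, so the frequency is always
    continuous at symbol boundaries (no frequency jumps)."""
    symbols = ["sync"]  # up+down chirp at frame start
    prev = 0  # first data symbol: "up" chirp for 1, constant f0 for 0
    for b in bits:
        if prev == 0 and b == 0:
            symbols.append("sine f0")
        elif prev == 0 and b == 1:
            symbols.append("up chirp f0->f1")
        elif prev == 1 and b == 0:
            symbols.append("down chirp f1->f0")
        else:
            symbols.append("sine f1")
        prev = b
    return symbols

frame = encode_frame([0, 0, 1, 1, 0, 0, 0])
# Matches Figure 1: f0, f0, up, f1, down, f0, f0
```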

Figure 1. Combined CSS and FSK modulation for bit sequence 0011000

III. SIMULATION RESULTS

The needed bandwidth of the transmission scheme was simulated with 10,000 frames of 20 symbols each, for the combined FSK and CSS modulation and for pure CSS modulation, in Figure 2. The black line is the ultrasonic channel (including transmitter and receiver) measured with a vector analyzer. The power spectrum of the combined modulation (blue dashed line) fits the channel characteristic (black line) very well.

Figure 2. Bandwidth comparison of CSS modulation and the proposed combined FSK and CSS modulation

The spectrum of the received signal is the multiplication of the channel spectrum and the modulated signal spectrum. The result is a Gaussian-like shape of the spectrum, because the spectrum of the combined modulation, unlike the CSS spectrum, does not have the periodic minima of the sinc pulse. An important property of the Gaussian shape in the frequency domain is the minimized time-bandwidth product [4]. A further simulation, shown in Figure 3, gives the bit error rate (BER) over the energy per bit to noise power ratio (Eb/N0). The BER of the combined modulation lies exactly between that of bipolar modulation, such as binary orthogonal keying (BOK) CSS [5], and that of unipolar modulation, such as on-off keying (OOK) [4]; the gain over unipolar modulation is about 1.5 dB. The reason is that the chirp signal is not orthogonal to the constant sine signal.

Figure 3. Bit error rate over Eb/N0

IV. CONCLUSION AND DISCUSSION

In this paper we presented a combined FSK and CSS transmission scheme for the available bandwidth, without frequency shifts and with smooth phase changes. The bit error rate can be further decreased by applying the Viterbi algorithm to the estimated data; the correlation coefficient between the signal and the symbol set can then be used as a metric for the path calculation in the trellis diagram. Furthermore, the synchronization can be extended to synchronize over all sweeps in the signal, which can further improve the synchronization and the localization accuracy.

ACKNOWLEDGEMENT

We gratefully acknowledge financial support from "Spitzencluster MicroTec Suedwest" and BMBF.

REFERENCES

[1] ISO 9613-1:1993, Acoustics – Attenuation of sound during propagation outdoors – Part 1.
[2] A.J. Berni and W. Gregg, "On the utility of chirp modulation for digital signaling," IEEE Transactions on Communications, vol. 21, no. 6, pp. 748-751, 1973.
[3] K. Murota and K. Hirade, "GMSK modulation for digital mobile radio telephony," IEEE Transactions on Communications, vol. 29, no. 7, pp. 1044-1050, 1981.
[4] J.-R. Ohm, Signalübertragung: Grundlagen der digitalen und analogen Nachrichtenübertragungssysteme. Berlin: Springer, 2005.
[5] A. Springer, W. Gugler, M. Huemer, L. Reindl, C. Ruppel, and R. Weigel, "Spread spectrum communications using chirp signals," in EUROCOMM 2000, Information Systems for Enhanced Public Safety and Security, IEEE/AFCEA, 2000, pp. 166-170.


Improving Heading Accuracy in Smartphone-based PDR Systems using Multi-Pedestrian Sensor Fusion

Marzieh Jalal Abadi, Yexuan Gu, Xinlong Guan, Yang Wang, Mahbub Hassan and Chun Tung Chou

School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
Email: abadim, ygux197, xgua341, ywan195, mahbub, [email protected]
National ICT Australia, Locked Bag 9013, Alexandria, NSW 1435, Australia
Email: Marzieh.Abadi, [email protected]

Abstract—Accurately estimating the heading of each step is critical for pedestrian dead reckoning (PDR) systems, which use step length and step heading to continuously update the current location from a previously known location. While the magnetometer is a key source of heading information, the poor accuracy of consumer-grade hardware, coupled with the frequent presence of man-made magnetic disturbances, makes accurate heading estimation a challenging problem in smartphone-based PDR systems. In this paper we propose the concept of multi-pedestrian sensor fusion, where sensor data from multiple pedestrians walking in the same direction are fused to improve heading accuracy. We conducted experiments with 3 subjects walking together in the corridors of 4 different buildings. Based on the magnetometer data collected from these subjects, we find that multi-pedestrian fusion has the potential to reduce magnetometer-based heading error by 42% compared to the case when no fusion is used. We further show that a very basic fusion algorithm that simply takes the average of the 3 individual heading estimations can yield a 27.77% error reduction. Index Terms—Heading Estimation, Pedestrian Dead Reckoning, Multi-Sensor Data Fusion, Indoor Localization.

I. INTRODUCTION

PDR, which uses step length and heading estimation to compute the current location relative to a previously known location, is a viable positioning alternative to GPS in indoor environments. While the magnetometer is considered a key source of heading information for PDR, it is known to exhibit large errors when used indoors due to the presence of significant magnetic disturbances caused by metallic infrastructure. Because these perturbations are likely to be highly localised, in this paper we propose the concept of multi-pedestrian sensor fusion, where sensor data from multiple pedestrians walking in the same direction are fused to improve the heading accuracy. The key hypothesis is that pedestrians experiencing high perturbation will benefit from those experiencing no or minor perturbations if their devices can share their sensor data in real time. Emerging device-to-device communication standards, such as WiFi-Direct (http://www.wi-fi.org/discover-and-learn/wi-fi-direct), are definitely opening up such data-sharing possibilities. To test this hypothesis, we collected magnetometer readings from three pedestrians walking in the same direction in the corridors of 4 different buildings. Our study reveals the following interesting results:

• When pedestrians use their individual heading estimations, i.e., when no fusion is used, the average heading error from the true heading is 12.45 degrees.
• A simple averaging of all three individual estimations, called Naïve fusion in this paper, reduces the error to 8.99 degrees, an improvement of 27.77%.
• If, however, we were able to filter out the highly perturbed data, called Oracle fusion in this paper, we could potentially reduce the error to 7.21 degrees, i.e., achieve up to 42% error reduction.

The rest of our paper is organized as follows. In the next section we describe the data collection methodology, followed by the multi-pedestrian fusion analysis in Section III. Related work is reviewed in Section IV before concluding the paper in Section V.

II. DATA COLLECTION

We performed multiple experiments to collect the data for our study. In order to ensure diversity in environmental conditions (especially magnetic perturbation), experiments were conducted in 4 different buildings on our university (UNSW) campus. In each building, we chose different corridors to provide different heading directions. Each experiment consisted of three subjects, each carrying an Android smartphone. The subjects held the smartphone horizontally in their hand and walked along the corridor of the building. They ensured that they walked parallel to the corridor, thus having the same heading, by following the line between the floor tiles. The smartphones recorded the magnetometer readings at 16 Hz. Table I shows the building name and true heading used in each experiment. The true headings are estimated by assuming that the corridor is parallel to the face of the building. The three subjects walked in a line parallel to the corridor, one after another, with a gap of 5 meters between them. This means that, at a given time, the three subjects were always at different locations. The motivation for this is to test whether the magnetic perturbations at different places are independent. We identify the three subjects as "Back", "Middle" and "Front". After obtaining the magnetometer readings, we use the two horizontal components mx and my to compute the estimated


TABLE I
INDOOR LOCATIONS FOR DATA COLLECTION, UNSW, SYDNEY

Buildings                                   | True headings
Library, 3rd Floor                          | 188.99, 9.35
Electrical Engineering Building, 2nd Floor  | 278.99, 98.9
Robert Webster Building, LG Floor           | 99.27, 279.18
Old Main Building, Ground Floor             | 279.24, 99.46

heading with respect to the magnetic North from

    h = tan⁻¹(mx / my)    (1)

Fig. 1. Library, 3rd Floor, True heading=188.99
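Equation (1) can be sketched in code as follows. This is our own illustrative helper, not the authors' implementation; using `atan2` instead of a plain arctangent resolves the quadrant ambiguity that tan⁻¹ alone leaves open, and the result is wrapped into [0, 360) degrees.

```python
import math

def heading_deg(mx, my):
    """Heading with respect to magnetic North, in degrees in [0, 360).

    Implements h = atan2(mx, my), which agrees with Eq. (1) up to
    quadrant handling: with the device held flat, my points North
    when the heading is zero.
    """
    return math.degrees(math.atan2(mx, my)) % 360.0
```

For example, `heading_deg(0.0, 25.0)` gives 0.0 and `heading_deg(25.0, 0.0)` gives 90.0, whereas a naive `atan(mx/my)` would also need a quadrant correction and a division-by-zero guard.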

TABLE II
LIBRARY, 3RD FLOOR, TRUE HEADING=188.99, NAÏVE

We will use these heading estimations for multi-pedestrian data fusion in the next section.

III. MULTI-PEDESTRIAN DATA FUSION

In order to motivate multi-pedestrian data fusion, we plot the estimated headings from the three subjects in Figure 1 for an experiment conducted in the Library building. The figure also shows the true heading, which is 188.99 degrees. The estimated headings deviate from the true heading due to man-made magnetic perturbation. Note also that, at a given time, each magnetometer experienced a different amount of perturbation. Consider the time interval between 11.44 and 13.64 seconds, bounded by the two vertical bars in Figure 1. In this interval, the Front subject experienced a large perturbation in heading estimation while the Middle and Back subjects did not. If there were a method to tell that the Front heading estimate was erroneous, we could discard it and replace it by the average of the other two heading estimates to obtain a better heading estimate. This is the key idea behind Oracle fusion. In this section, we compare the performance of two different fusion strategies, which we first describe.

A. Fusion methods

We define two different fusion methods, Naïve and Oracle. We assume that all the subjects exchange their estimated headings using wireless communication such as WiFi. We assume there are n subjects. At a given sampling time, subject i calculates its heading estimate hi. After the exchange of heading estimates, each subject has the data h1, h2, ..., hn. The method is applied at each sampling time. For Naïve fusion, each subject computes the simple average (1/n) Σ_{i=1}^{n} h_i of all estimated headings. Note that Naïve fusion works well if the estimated headings are perturbed by random zero-mean noise, but its performance under other types of perturbation can be poor. The Oracle method is used here to quantify the best possible improvement provided by data fusion.
The method assumes that each subject knows the true heading hT and uses a given

Participant | Average error (No Fusion) | Naïve Fusion error
Front       | 23.34                     | 8.91
Middle      |  9.35                     | 8.91
Back        | 10.48                     | 8.91
Average     | 14.42                     | 8.91

threshold γ. It also assumes each subject has the estimated headings from all subjects: H = {h1, ..., hn}. Each subject eliminates all the estimated headings in H that exceed the error threshold γ from the true heading hT; in other words, each subject determines the set H̃ = {hi ∈ H : hi ∈ [hT − γ, hT + γ]}. If the set H̃ is non-empty, the Oracle method returns the simple average of the heading estimates in H̃. Otherwise, if H̃ is empty, the Oracle method uses the subject's own heading estimate, i.e., subject i uses hi.

B. Results and discussions

For each building, we have collected multiple data sets at different times of the day, where each data set contains approximately 900 magnetometer samples per subject. For a given data set, we applied the two fusion methods to each sample to obtain the fused headings. The heading error is calculated as the absolute difference between the true and the estimated headings. For each data set, we obtain one heading error value by averaging the 900 errors computed for the 900 samples. Table II shows the results of applying Naïve fusion to one of the experiments conducted on the third floor of the Library building. It compares Naïve fusion against the average heading error of each subject when no data fusion is used. The last row of the table shows the result of averaging over all subjects. Note that the result of Naïve fusion is independent of the subject. It can be seen that Naïve fusion reduces the average error from 14.42 degrees to 8.91 degrees. Table III shows the results of applying Oracle fusion to the same data set. The different γ values used are shown in the first column. In columns 2–4, we show the average heading


TABLE III
LIBRARY, 3RD FLOOR, TRUE HEADING=188.99, ORACLE (PERFECT FUSION)

  γ  | Avg. error (Back) | Avg. error (Middle) | Avg. error (Front) | 1 above γ | 2 above γ | 3 above γ
   1 |  9.55             |  8.71               | 19.97              |   4       | 133       | 843
  10 |  5.67             |  5.69               |  5.17              | 499       | 385       |  91
  12 |  5.39             |  5.42               |  5.07              | 625       | 287       |  53
  15 |  4.80             |  4.78               |  4.74              | 699       | 189       |   7
  20 |  4.56             |  4.56               |  4.56              | 675       |  82       |   0
  25 |  4.72             |  4.72               |  4.72              | 562       |  42       |   0
  30 |  5.75             |  5.75               |  5.75              | 408       |   4       |   0
  35 |  6.78             |  6.78               |  6.78              | 221       |   0       |   0
  40 |  8.24             |  8.24               |  8.24              |  62       |   0       |   0
  45 |  8.70             |  8.70               |  8.70              |  19       |   0       |   0
  50 |  8.85             |  8.85               |  8.85              |   5       |   0       |   0
  60 |  8.90             |  8.90               |  8.90              |   0       |   0       |   0
  70 |  8.90             |  8.90               |  8.90              |   0       |   0       |   0
  80 |  8.90             |  8.90               |  8.90              |   0       |   0       |   0
  90 |  8.90             |  8.90               |  8.90              |   0       |   0       |   0
 100 |  8.90             |  8.90               |  8.90              |   0       |   0       |   0
 120 |  8.90             |  8.90               |  8.90              |   0       |   0       |   0
 140 |  8.90             |  8.90               |  8.90              |   0       |   0       |   0
 150 |  8.90             |  8.90               |  8.90              |   0       |   0       |   0

Fig. 2. Heading error for Oracle fusion in Robert Webster Building, LG Floor, True heading=99.27.
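The two fusion rules compared in this section can be sketched as follows. This is our own illustrative code, not the authors' implementation; note that, as in the paper, the simple average ignores the 0/360-degree wrap-around, which is acceptable away from North but would need circular averaging in general.

```python
def angular_error(h, h_true):
    """Absolute heading error in degrees, wrapped into [0, 180]."""
    d = abs(h - h_true) % 360.0
    return min(d, 360.0 - d)

def naive_fusion(headings):
    """Naive fusion: simple average of all shared heading estimates."""
    return sum(headings) / len(headings)

def oracle_fusion(headings, own_heading, h_true, gamma):
    """Oracle fusion: keep only estimates within gamma degrees of the
    true heading; if none survive (the empty-set case described above),
    fall back to the subject's own estimate."""
    kept = [h for h in headings if angular_error(h, h_true) <= gamma]
    return sum(kept) / len(kept) if kept else own_heading
```

With the Figure 1 scenario, a Front estimate perturbed far from 188.99 degrees would be excluded for a sufficiently small γ, and the Middle and Back estimates would be averaged instead, which is exactly the behaviour Oracle fusion is meant to quantify.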

error for each subject for different values of γ. Note that each subject can have a different average error because, if all the heading estimates at a given time exceed the threshold γ, each subject uses its own heading estimate as the output of the Oracle method. In column 5, we show, for each value of γ, the number of sampling times where exactly 1 of the estimated headings is above the threshold γ, or equivalently, the number of sampling times at which the set H̃ has exactly 2 elements. Columns 6 and 7 are defined similarly. For γ = 1, we find that, for many sampling times, all three heading estimates have an error greater than γ. This is due to the low value of the error threshold γ. As the threshold γ increases, the number of sampling times at which all three heading estimates are above the threshold becomes lower. An interesting observation from columns 2–4 in Table III is that, as γ increases, the average heading error for each subject first decreases and then increases again. This means that there is an optimal threshold γ that gives the minimum estimation error. This observation is also found in the data from the other experiments. In Figure 2, we plot the average heading error for each subject against γ for an experiment conducted in the Robert Webster Building. In Table IV, we compare the fusion methods over all the 10 data sets from the four buildings. Four different methods are used: no fusion, Naïve fusion, Oracle fusion with a fixed threshold of 10, and Oracle fusion with the optimum threshold that gives the minimum heading error. Percentage improvements, compared to the case when no data fusion is used, are shown in brackets. The last row shows the average error and percentage improvements over all the 10 experiments. It can be seen from Table IV that Naïve fusion is useful and can deliver an improvement of 27.77% on average. For Oracle fusion with a fixed threshold γ, the improvement is −24.77%, which means a fixed γ does not deliver good results. Finally, Oracle fusion with the optimum threshold delivers the best improvement of 42.04%.

IV. RELATED WORK

Several approaches are currently used to improve heading estimation, such as sensor fusion with a Kalman filter [1]–[3], magnetometer fingerprinting [4]–[7], and magnetometer filtering [8]–[10]. A Kalman filter fuses magnetometer, accelerometer and gyroscope measurements to estimate the pedestrian's heading. In magnetometer fingerprinting, different algorithms are used to match the observed magnetometer reading against a pre-surveyed database. In magnetometer filtering, the perturbed data are filtered to improve their accuracy. Our proposed fusion algorithms rely only on the smartphone's magnetometer, without using any infrastructure.

V. CONCLUSION

While the magnetometer is considered a key source of heading information for PDR, it is known to exhibit large errors when used indoors due to the presence of significant magnetic disturbances caused by metallic infrastructure. Since these perturbations are highly localised, it is possible that not all pedestrians are affected (equally) at the same time, opening up the possibility of reducing error by fusing sensor data among multiple pedestrians walking in the same direction. In this paper, we have experimentally quantified the error-reduction potential of such multi-pedestrian sensor fusion. Our study reveals that there is an opportunity for significant error reduction (42.04%), but only 27.77% is achievable with Naïve averaging. This calls for research into more advanced fusion models to achieve the full potential of multi-pedestrian sensor fusion.


TABLE IV
COMPARISON OF THE FUSION ALGORITHMS OVER 10 DATA SETS FROM FOUR BUILDINGS

Building, Day, (True Heading)                    | No-fusion | Naïve fusion (%) | Oracle fusion, γ=10 (%) | Oracle fusion, optimum γ (%)
Library, Day 1, (188.99)                         | 14.42     | 8.91 (38.21)     | 5.51 (61.77)            | 4.64 (67.82)
Library, Day 2, (188.99)                         | 21.20     | 12.01 (43.37)    | 14.56 (31.35)           | 10 (52.84)
Library, Day 1, (9.34)                           | 17.94     | 15.92 (11.28)    | 55.85 (−211)            | 14.56 (18.86)
Library, Day 2, (9.34)                           | 12.18     | 4.92 (59.60)     | 39.53 (−224)            | 5.97 (50.98)
Electrical Engineering Building, Day 1, (278.99) | 11.61     | 11.08 (4.52)     | 8.80 (24.16)            | 8.32 (28.31)
Electrical Engineering Building, Day 2, (98.9)   | 9.75      | 5.77 (40.82)     | 5.26 (46.01)            | 5 (48.71)
Robert Webster Building, Day 1, (99.27)          | 9.61      | 8.72 (9.25)      | 7.20 (25.07)            | 6.7 (30.31)
Robert Webster Building, Day 2, (279.18)         | 11.75     | 10.70 (8.96)     | 6.35 (45.96)            | 6.24 (46.88)
Old Main Building, Day 1, (279.24)               | 6.60      | 5.69 (13.75)     | 4.54 (31.29)            | 4.53 (31.39)
Old Main Building, Day 2, (99.26)                | 9.36      | 6.18 (34.24)     | 7.68 (18.24)            | 6.18 (34.24)
Average                                          | 12.45     | 8.99 (27.77)     | 15.53 (−24.77)          | 7.21 (42.04)

REFERENCES

[1] W. Li and J. Wang, "Effective adaptive Kalman filter for MEMS-IMU/magnetometers integrated attitude and heading reference systems," Journal of Navigation, vol. 1, no. 1, pp. 1–15, 2013.
[2] K. Abdulrahim, C. Hide, T. Moore, and C. Hill, "Integrating low cost IMU with building heading in indoor pedestrian navigation," Journal of Global Positioning Systems, vol. 10, no. 1, pp. 30–38, 2011.
[3] S. Kwanmuang, L. Ojeda, and J. Borenstein, "Magnetometer-enhanced personal locator for tunnels and GPS-denied outdoor environments," in SPIE Defense, Security, and Sensing. International Society for Optics and Photonics, 2011, pp. 80190O–80190O.
[4] F. Li, C. Zhao, G. Ding, J. Gong, C. Liu, and F. Zhao, "A reliable and accurate indoor localization method using phone inertial sensors," in Proceedings of the 2012 ACM Conference on Ubiquitous Computing. ACM, 2012, pp. 421–430.
[5] Y. Kim, Y. Chon, and H. Cha, "Smartphone-based collaborative and autonomous radio fingerprinting," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 42, no. 1, pp. 112–122, 2012.
[6] C. Sapumohotti, M. Y. Alias, and S. W. Tan, "WiLocSim: Simulation testbed for WLAN location fingerprinting systems," Progress In Electromagnetics Research B, vol. 46, pp. 1–22, 2013.
[7] C. Laoudias, C. G. Panayiotou, and P. Kemppi, "On the RBF-based positioning using WLAN signal strength fingerprints," in Positioning Navigation and Communication (WPNC), 2010 7th Workshop on. IEEE, 2010, pp. 93–98.
[8] J. Bird and D. Arden, "Indoor navigation with foot-mounted strapdown inertial navigation and magnetic sensors [emerging opportunities for localization and tracking]," IEEE Wireless Communications, vol. 18, no. 2, pp. 28–35, 2011.
[9] W. T. Faulkner, R. Alwood, D. W. Taylor, and J. Bohlin, "GPS-denied pedestrian tracking in indoor environments using an IMU and magnetic compass," in Proceedings of the 2010 International Technical Meeting of the Institute of Navigation (ITM 2010), 2010, pp. 198–204.
[10] M. H. Afzal, V. Renaudin, and G. Lachapelle, "Assessment of indoor magnetic field anomalies using multiple magnetometers," in Proceedings of ION GNSS 2010, 2010, pp. 1–9.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28th-31st October 2013

A New Indoor Robot Navigation System Using RFID Technology Manato Fujimoto, Emi Nakamori, Daiki Tsukuda, Tomotaka Wada, Hiromi Okada, Yukio Iida The Faculty of Engineering Science Kansai University Suita, Japan {manato, nakamori, tsukuda2011, wada, okada}@jnet.densi.kansai-u.ac.jp [email protected]

Kouichi Mutsuura The Faculty of Economics Shinshu University Matsumoto, Japan [email protected]

Abstract— In this paper, we propose a new indoor robot navigation system using only passive RFID technology. RFID has a simple composition and is inexpensive. The proposed system does not need many kinds of devices, because it controls the movement of a mobile robot using only information stored in RFID tags. To show the validity and effectiveness of the proposed system, we evaluate by computer simulations whether a mobile robot can reach the final destination correctly and smoothly. The results show that the proposed system can control the mobile robot's movement correctly and let it move to the final destination smoothly using only the RFID tags' information.

Keywords - indoor robot navigation system; passive RFID technology; moving control; mobile robot; short term destination

I. INTRODUCTION

Recently, research on support technologies for assisting aged and physically handicapped people has been increasing rapidly, driven by the worldwide trend of an aging population. In particular, indoor robot navigation systems have been enthusiastically researched around the world. In the development of these systems, the most important issue is to control the harmonized movement of a mobile robot such as an electric wheelchair. Existing systems need several kinds of sensors (e.g. infrared sensors, ultrasound sensors, etc.) and wireless communication devices to control a mobile robot's movement smoothly [1]-[4], so these systems are very complex and expensive. For this reason, an indoor robot navigation system with low cost and simple composition is strongly required. In this paper, we propose a new indoor robot navigation system using passive RFID technology, featuring simple composition and low cost, to solve the above problems. The proposed system lets a mobile robot move to the final destination smoothly while the robot communicates with RFID tags attached to the wall of a passage at regular intervals. The system controls the mobile robot's movement using only information stored in the RFID tags, so it does not need sensors or wireless communication devices other than RFID. To show the effectiveness of the proposed system, we carry out performance evaluations by computer simulations. This paper is organized as follows. Section II discusses the outline of the indoor robot navigation system. Section III proposes a new indoor robot navigation system using RFID technology. Section IV presents the performance evaluations by computer simulations. Finally, Section V concludes this paper.

II. OUTLINE OF INDOOR ROBOT NAVIGATION SYSTEM

The indoor robot navigation system assists a mobile robot in reaching its destination by providing a pathway between the robot's current position and the destination. To realize this system, the control of the mobile robot's movement is very important. The moving control consists of three essential functions: 1) position estimation, 2) routing, and 3) moving correction and tuning. The main purposes of the moving control are to select a safe pathway to the destination and to move the robot smoothly without colliding with a wall or obstacles. Researchers have proposed many methods for each function to control the movement of a mobile robot to achieve these purposes [1]-[4]. Each function is very effective when combined according to purpose or environment. However, because the existing indoor robot navigation systems control a mobile robot's movement by combining many kinds of sensors and devices, these systems become very complex and expensive. Hence, an indoor robot navigation system with low cost and simple composition is strongly required.

III. PROPOSED SYSTEM

To solve the above problems, we propose a new indoor robot navigation system using only passive RFID technology. RFID is popular, simple in composition, inexpensive, and very easy to use for storing information. The proposed system controls a mobile robot's movement using only information stored in RFID tags, so it does not need many sensors and devices. In the proposed system, a mobile robot moves to the final destination while communicating with RFID tags attached to the wall of a passage at regular intervals. The mobile robot holds the ID of the RFID tag attached at the final destination as the final destination information. The mobile robot can obtain the short-term destination for approaching the final destination by collating the information read from each RFID tag with the final


destination information. The mobile robot can move toward the final destination accurately and smoothly by following all the short-term destinations needed to reach the final destination, so the proposed system does not need a global map. Here, we explain the mobile robot's moving control in the proposed system. Firstly, when the mobile robot detects an RFID tag, the system obtains the mobile robot's position and the short-term destination by estimating the RFID tag's position using our previous CM-CRR method [2] and reading the information in the RFID tag. Secondly, the system calculates the pathway connecting the mobile robot's position and the short-term destination. Finally, the mobile robot moves toward the short-term destination by tracking this pathway while controlling its moving speed and direction. By repeating this operation for each short-term destination, the mobile robot can reach the final destination.
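The per-tag control loop described above can be sketched as follows. The data layout (a per-tag routing table mapping each final-destination ID to the next tag to head for) and the tag IDs other than 124 are our own hypothetical illustration of "information stored in RFID tags", not the actual tag format used by the authors.

```python
# Each detected tag stores, for every possible final destination, the ID
# of the next tag (the short-term destination) to head toward.
# Tag 124 and its position come from the paper's simulation; tags 1 and 7
# are hypothetical.
TAGS = {
    124: {"pos": (-14.0, -14.5), "routes": {}},          # final destination (Tag 1)
    7:   {"pos": (0.0, -14.5),   "routes": {124: 124}},  # hypothetical intermediate tag
    1:   {"pos": (34.5, -13.7),  "routes": {124: 7}},    # hypothetical tag near start S1
}

def navigate(start_tag, final_tag):
    """Hop from short-term destination to short-term destination until
    the final-destination tag is reached; returns the visited tag IDs."""
    path, tag = [start_tag], start_tag
    while tag != final_tag:
        # Collate the information read from the current tag with the
        # final-destination ID to obtain the next short-term destination.
        tag = TAGS[tag]["routes"][final_tag]
        path.append(tag)
    return path
```

This captures the key property claimed in the text: the robot never needs a global map, only the locally stored routing information of whichever tag it currently reads.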

IV. PERFORMANCE EVALUATIONS

To show the effectiveness of the proposed system, we carry out performance evaluations by computer simulations. The purpose of these simulations is to evaluate the pathway error characteristics of the proposed system in an environment modelled on the fourth experiment building of Kansai University. Fig. 1 shows the simulation environments and Table 1 shows the simulation parameters. We set the mobile robot's starting points to S1 (34.5, -13.7) and S2 (11.2, 3.0), and the final destination tags to Tag 1 (-14.0, -14.5; ID: 124) and Tag 2 (-10.0, -4.5; ID: 53). We define the pathway connecting S1 and Tag 1 as pathway 1, and the pathway connecting S2 and Tag 2 as pathway 2. The mobile robot starts moving in the direction of the pathway from the starting point while reading the RFID tags attached to the wall on its left side. The assumed pathway runs at a distance of 80 cm from the wall on the left side. Fig. 2(a) shows the pathway error characteristic for pathway 1. In the straight pathway, we find that the mobile robot can move to the final destination while maintaining a very small error. In addition, we find that the mobile robot does not collide with a wall, since the maximum pathway error is 10.58 cm. Fig. 2(b) shows the pathway error characteristic for pathway 2. We find that the pathway error increases when the mobile robot is moving on the curved pathway; however, this error decreases again on the straight pathway. From these results, we find that the proposed system can control the mobile robot's movement without colliding with a wall and can let the mobile robot move to the final destination smoothly using only the information stored in the RFID tags.

Fig. 1 Simulation environments (starting points S1 (34.5, -13.7) and S2 (11.2, 3.0); final destination tags Tag 1 (-14.0, -14.5; ID: 124) and Tag 2 (-10.0, -4.5; ID: 53)).

Table 1 Simulation parameters.

Parameter                          | Value
Passage width                      | 2 m
Attachment interval of RFID tags   | 1 m
Interval of control points         | 1.5 m
Attachment number of RFID tags     | 225
Size of the mobile robot           | 60 cm × 120 cm × 86 cm
Moving speed of the mobile robot   | 25 cm/s
Long range: major axis             | 121.28 cm
Long range: minor axis             | 48.16 cm
Short range: major axis            | 93.44 cm
Short range: minor axis            | 35.36 cm

Fig. 2 Pathway error characteristics in pathways 1 and 2 (pathway error [cm] vs. elapsed moving time [sec]; (a) pathway 1, (b) pathway 2).

V. CONCLUSION

We have proposed a new indoor robot navigation system using only RFID technology. This system controls a mobile robot's movement using only information stored in RFID tags. Hence, it does not need several kinds of sensors, devices, or a global map. To show the effectiveness of the proposed system, we evaluated its pathway error characteristics by computer simulations. The results show that the proposed system is able to control the mobile robot's movement correctly and smoothly using only the information stored in RFID tags.

ACKNOWLEDGMENTS

This research was partially supported by the Grants-in-Aid for Scientific Research (C) (No. 23500103) and the Kansai University Research Grants: Grant-in-Aid for Promotion of Advanced Research in Graduate Course, 2013.

REFERENCES

[1] X. Xiong, et al., "Positioning estimation algorithm based on natural landmark and fish-eye lens for indoor robot," IEEE 3rd International Conference on Communication Software and Networks (ICCSN 2011), pp. 596-600, Xi'an, China, May 2011.
[2] E. Nakamori, et al., "A New Indoor Position Estimation Method of RFID Tags for Continuous Moving Navigation Systems," The 3rd International Conference on Indoor Positioning and Indoor Navigation (IPIN 2012), pp. 1-6, Sydney, Australia, Nov. 2012.
[3] B. Hartmann, et al., "Indoor 3D position estimation using low-cost inertial sensors and marker-based video-tracking," IEEE/ION Position Location and Navigation Symposium (PLANS 2010), pp. 319-326, CA, USA, May 2010.
[4] M. Suruz Miah, et al., "Keeping track of position and orientation of moving indoor systems by correlation of range-finder scans," IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (IROS 1994), vol. 1, pp. 595-601, Munich, Germany, Sept. 1994.

978-1-4673-1954-6/12/$31.00 ©2013 IEEE



Accurate positioning in underground tunnels using Software-Defined-Radio Fernando Pereira, Christian Theis

Sergio Reis Cunha

Radiation Protection European Organization for Nuclear Research Geneva, Switzerland [email protected]

University of Porto Porto, Portugal [email protected]

Manuel Ricardo UTM, INESC Porto University of Porto Porto, Portugal

Abstract— Localization in tunnels and other underground environments is regarded as extremely important, notably for personal-safety reasons. Nevertheless, due to the inherent difficulties these scenarios present, achieving high accuracy with low-cost generic solutions has proven quite challenging. In the specific case of long but narrow tunnels, like those of the Large Hadron Collider [1] at CERN, localization based on fingerprinting techniques with Received Signal Strength (RSS) performs well but yields accuracy in the order of 20 m, which is still not sufficient for the most demanding applications envisaged for the system. In this context, a new technology based on Time-of-Flight (ToF) is being developed and prototyped using programmable Software Defined Radio (SDR) devices. By measuring the carrier phase delay, the system aims at achieving meter-level accuracy. This paper describes the localization technique under research, whose design takes the SDR specificities into account, in contrast to dedicated hardware.

Keywords- underground tunnel, phase-delay, SDR, leaky-feeder

I. INTRODUCTION

In recent years, much attention has been paid to localization in tunnels and other challenging environments, mostly due to its extreme importance with respect to personal safety. Among the many existing techniques for indoor localization, not all are interesting or applicable to these special cases, in which one of the dimensions is generally very large while the others are often of little interest. Furthermore, it is common that, due to the adverse conditions – rough surfaces, humidity, magnetic fields, radiation, etc. – special precautions must be taken regarding the installation of hardware devices, and in some cases installation might not even be possible.

Localization based on RSS fingerprinting has therefore been regarded as a very attractive solution for these cases, as it requires neither the installation of dedicated infrastructure hardware nor the allocation of extra radio-frequency (RF) spectrum [2]. Thus, it is potentially very cost-effective as well. For the purpose of localization in the CERN accelerator tunnels, techniques based on RSS fingerprinting have been explored previously, taking advantage of the dense network coverage available via a set of leaky-feeder cables. Besides the benefit with respect to increased personal safety, a good level of accuracy, in the order of one to two meters, would enable much faster processes carried out by various technical departments at CERN, including radiation surveys with automatic position tagging. Even though these methods have been shown to be effective in estimating the location based on the RSS of both the GSM and WLAN networks, their accuracy was limited to 20 m at a confidence level of 88% [3], which is not sufficient for providing an accurate position tag for some applications. In order to increase the accuracy up to the envisaged levels, techniques based on Time-of-Flight (RF wave propagation delay) that can meet the tunnels' restrictions and specificities are currently being investigated. By using frequencies in the VHF band (2 m wavelength), the technology aims at achieving meter-level accuracy and, by propagating the signal over the leaky-feeder cable, full tunnel coverage is expected to be achieved with a small number of units. In order to allow for fast prototyping and custom deployment of the methods at a relatively low cost, programmable Software Defined Radio devices are considered for the implementation. The following chapters provide an overview of the methods being investigated, the first results of their performance using SDRs, and a discussion of the limitations of these devices for such specialized applications.

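The carrier phase-delay principle mentioned above can be illustrated with a small calculation of our own (a sketch, not the system's implementation): a measured phase shift maps to a distance modulo one wavelength, so a 2 m VHF carrier read to within 5% of a cycle yields roughly 10 cm of range resolution, with additional signals needed to resolve the whole-wavelength ambiguity.

```python
import math

WAVELENGTH_M = 2.0  # VHF carrier wavelength assumed in the text

def phase_delay_distance(phase_rad, wavelength_m=WAVELENGTH_M):
    """Distance implied by a measured carrier phase delay, modulo one
    wavelength; absolute position requires disambiguation, e.g. via
    pairs of signals as discussed in the paper."""
    return (phase_rad % (2.0 * math.pi)) / (2.0 * math.pi) * wavelength_m
```

For example, a half-cycle phase delay (`math.pi`) on the 2 m carrier corresponds to 1 m of range ambiguity interval, and a 5% phase-reading precision corresponds to 0.1 m.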

II. BACKGROUND

A. Indoor Positioning

Localization techniques build on three main distance-measurement principles: angle-of-arrival (AOA), received-signal strength (RSS), and propagation-time based measurements, which can be further divided into time-of-arrival (TOA), roundtrip-time-of-flight (RTOF) and time-difference-of-arrival (TDOA) methods [4]. By using several distance measurements, possibly of different types, it is possible to calculate a position in a 2D or 3D coordinate system. Angle-of-Arrival techniques calculate a position by determining and intersecting the directions a signal comes from, making use of directional antennas. Due to their relative simplicity and wide coverage, broadcast networks have mostly used them. However, the position accuracy is always subject to growing uncertainty as the distance between the devices increases. RSS systems are based upon the principles of path loss of electromagnetic waves, and are therefore virtually applicable to any existing wireless network. Nevertheless, although simple models exist for free-space propagation, multipath fading and shadowing effects have a dominant impact in indoor environments, which strongly limits the measurement accuracy. One of the most widely used approaches is RSS fingerprinting, in which a given RSS sample is compared against a map of RSS values previously collected and filtered. RSS fingerprinting can be quite effective but has the drawback of requiring a calibration phase to build the map and subsequent effort to keep it updated. Methods based on RSS fingerprinting have been studied in previous stages of the current investigation; for more in-depth information on their application to the current scenario, refer to [3]. Propagation-time based localization (ToA) is arguably the technique delivering the highest accuracy levels.
Although the principle of measuring the propagation time of a wave is relatively simple, due to the very high propagation speed and multi-path effects these techniques require careful design of efficient algorithms and their implementation in hardware. For instance, in a radar-like setup (RTOF) with RF waves, a positioning accuracy in the range of 10 cm requires a clock frequency in the order of 1.5 GHz. In a direct configuration (without round-trip), precise clock synchronization is also required between the receiver and transmitter units. Methods using Ultra-Wideband (UWB) pulses typically reach resolutions better than 30 cm, and are typically found to handle multipath effects best, as long as the pulses are short enough. UWB methods frequently employ pseudorandom-noise (PRN) codes, so that a receiver applies autocorrelation techniques to the received signal, yielding accuracy levels proportional to their bandwidth. Among the best-known cases is GPS C/A-based positioning [5], which employs codes of 1023 chips at 1 Mchip/s, whose receivers are currently able to detect shifts in the order of 1% of chip time.
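The figures quoted above can be checked with a quick back-of-the-envelope calculation (our own sketch):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def rtof_resolution_m(clock_hz):
    """One-way range spanned by one clock tick of round-trip time;
    halved because the wave travels out and back."""
    return C / clock_hz / 2.0

def chip_length_m(chip_rate_hz):
    """One-way distance the signal covers during a single PRN chip."""
    return C / chip_rate_hz
```

`rtof_resolution_m(1.5e9)` gives about 0.10 m, matching the 10 cm / 1.5 GHz figure above, and `chip_length_m(1e6)` gives about 300 m per chip at the quoted 1 Mchip/s rate, so detecting shifts of 1% of chip time corresponds to roughly 3 m of range resolution.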

In systems using narrower frequency bands, although multiple propagation paths can be more difficult to distinguish, the carrier phase can be recovered at the receiver, which enables cm-level resolution. In these methods, the accuracy is proportional to the wavelength (typically in the order of 5%), and bandwidth is mostly required for target disambiguation. Due to the limited bandwidth Software-Defined Radios can handle, the approach being investigated falls into the narrowband category and uses pairs of signals for position disambiguation.

B. Software Defined Radio platforms

Radio systems have traditionally consisted of transceiver chains with several stages, where the signal is converted to an intermediate frequency, filtered, then converted down to baseband and finally demodulated. With the advent of fast and cheap digital signal processors (DSPs), radio systems now employ digital transceivers composed of a radio Front-End followed by an analogue-to-digital converter (ADC) and finally a Back-End responsible for further signal processing, such as filtering and demodulation. The need for fast-paced development and prototyping has motivated research into ways of changing the behaviour of some digital blocks with minimum time and cost, i.e. making them software-programmable. This class of transceivers is known as Software-Defined Radios and uses either Field-Programmable Gate Arrays (FPGA) or even General Purpose Processors (GPP) to perform digital operations equivalent to a traditional analogue transceiver [6]. Despite the increased degree of flexibility achieved in such a configuration, FPGAs and, more critically, GPPs are intrinsically slower than Application-Specific Integrated Circuits (ASIC). Therefore, the computational requirements of the application must be carefully assessed to be sure they can be implemented in SDR.
In the case of using GPPs, an intermediate FPGA is commonly used to perform the most demanding operations and downsample the signal to lower rates before sending it to the GPP. This configuration is the one evaluated in the current study.

III. CASE STUDY AND METHODOLOGY

The LHC tunnel at CERN is located 100 m below the surface; it is divided into 8 sections and measures nearly 27 km in circumference. GSM network coverage is available along the tunnel's entire length via a set of leaky-feeder cables installed nearly 2 m above the ground – see Figure 1. They propagate electromagnetic waves of up to 1950 MHz and exhibit a longitudinal loss of 3.16 dB/100 m at 900 MHz [7]. Although RSS fingerprinting methods could not provide the desired accuracy levels, they succeeded in clearly distinguishing between the tunnel's regions. This fact motivated the study of a complementary localization technology with higher resolution on a small scale, one that could cover long ranges without itself needing to disambiguate between the tunnel's regions. Under these circumstances, a narrowband phase-delay system was considered. Furthermore, given that it could potentially be implemented in SDR, optimal conditions for experimental development were met while creating a solution that, to the best of the authors' knowledge, has not been investigated to date.

Figure 1. The LHC tunnel. In the top right corner one can see the leaky-feeder cable (black)

Tests and development have been carried out with USRP B100 SDR units from Ettus Research [8]. The units implement a Xilinx® Spartan® 3A 1400 FPGA, a 64 MS/s dual ADC, a 128 MS/s dual DAC and USB 2.0 connectivity to provide data to host processors, and are equipped with the WBX daughterboard, which provides 40 MHz of bandwidth within the 50 MHz to 2.2 GHz spectrum range. For an overview of the architecture, refer to Figure 2. Despite the high sample rate at which the unit operates internally, it is limited to streaming up to 8 MS/s over the USB link. Therefore, processing at higher rates must be implemented in the FPGA. After passing through the unit's FPGA, the signal flows to the host computer, where signal processing can be performed in pure software. For this purpose, the GNU Radio framework [9][10] was adopted, given the support from Ettus Research and the large active community. In GNU Radio the several DSP blocks are available as modules which can be programmatically linked to implement the desired functionality. Although this task can be fully carried out in Python, the GNU Radio Companion (GRC) GUI extension allows for the full specification of the system by graphically creating a flow-graph of DSP blocks. The localization techniques presented in the next chapters were implemented using the mentioned GRC software and tested in regular office conditions. The host machine features a 3-year-old dual-core CPU, which could process up to 2 MS/s. During the implementation, several new DSP blocks had to be created for the GNU Radio library using its C++ API; they will be identified as they are mentioned in the text.

IV. PHASE-DELAY POSITIONING WITH SDR

A. General architecture

While the current scenario presents many challenges, the presence of a leaky feeder comes as an opportunity to propagate the signals much further and therefore avoid the installation of additional receiver units. The envisaged system, as shown in Figure 3, has the following components:

- Rover Unit (Rover) – a movable SDR unit whose location is to be determined, which shall be simple enough to allow for future implementation in portable devices.

- Fixed Receiver Units (Fixed) – units directly coupled to the leaky feeder, which receive the signals from the Rover Unit and, eventually, from the Reference Unit. In principle there should only be one per region delimited by the signal reach.

- Reference Unit (Reference) – units that might be required, depending on the design, to act as online calibration points of the system.

B. Design considerations

SDR technology allows for fast-paced development by "turning hardware problems into software problems" [10]. Despite being an incredible advantage for research, this facility comes at a cost: in general, bandwidth and CPU power are the main constraints when implementing a communication system on an SDR platform. Moreover, since SDR platforms have not been specifically designed for localization purposes, other parameters like delay, jitter and clock stability might restrict the feasibility of such a system. Since these platforms are more complex than pure hardware, there are also more stages where unexpected effects might occur.

Figure 3. Localization system overview

Figure 2. USRP B100 architecture

In a configuration using the USRP B100 (see Figure 2), a signal, after being received and down-converted in the front-end, is sampled in the ADC and then filtered. Subsequently it is decimated in the FPGA and finally streamed to a host computer via USB. During this initial phase, the internal clocks for down-conversion pose the most serious challenges, as the front-end PLL must precisely tune to the signal centre frequency, while the DAC/ADC must operate at a multiple of the base sampling frequency. In both cases they rely on a clock generator (TCXO) with a 2.5 ppm rated accuracy. Although this might seem sufficiently accurate (indeed it is for most applications), it also means that at 100 MHz there can be a frequency shift of up to 250 Hz, which is enormous for phase-sensitive applications. In the next stage, after being transferred over the USB link, the signal has to pass through several communication layers of the computer system, implemented in kernel as well as user space. When the samples finally arrive at the user DSP program, they have been delayed by a significant, and not necessarily constant, amount of time, or have even been dropped. These side effects – jitter and frame dropping – heavily affect precise systems and can be very hard to compensate for. With these considerations in mind, two approaches have been explored. On the one hand, if the clocks are stable enough, an emitter-receiver approach with a synchronization signal should perform well. On the other hand, if jitter and frame dropping are within acceptable limits, a radar-like approach could be effective while relaxing the need for clock synchronization.
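The clock-accuracy figure above is simple to verify; a one-line sketch of the worst-case offset implied by an oscillator's ppm rating (plain arithmetic, one-sided drift assumed):

```python
# Worst-case frequency offset implied by an oscillator's ppm rating.
def worst_case_offset_hz(carrier_hz, ppm=2.5):
    return carrier_hz * ppm * 1e-6

print(worst_case_offset_hz(100e6))  # ~250 Hz at 100 MHz, as noted above
```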
C. Direct phase detection with reference unit

The original design reflects very closely the architecture presented in Figure 3. In this scenario, both the Rover and the Reference unit only emit reference waves. In turn, the Fixed receiver recovers each signal's carrier, performs the phase measurements taking the reference signal into account, and communicates the results to the Rover unit over any existing data network. The communication parameters were established as follows:

- Wave 1: 150 MHz (1.33 m wavelength in cable)

- Wave 2: 151 MHz (a 1 MHz difference from carrier 1, giving a 200 m beat wavelength in cable)

- Channel separation Δf (between rovers and reference station): 10 kHz

Figure 4. Frequency plan of the direct phase detection system with reference unit
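The intent of this frequency plan (fine phase from wave 1, coarse range from the 1 MHz beat) can be illustrated with a noise-free numerical sketch; the wavelengths below are the in-cable values listed above, and perfectly measured phases are assumed:

```python
import math

L_FINE = 1.33   # wavelength of wave 1 in the cable, m
L_BEAT = 200.0  # beat wavelength of the 1 MHz difference, m

def phases(d):
    """Wrapped phases (rad) of the fine wave and the beat at distance d."""
    return (2 * math.pi * d / L_FINE) % (2 * math.pi), \
           (2 * math.pi * d / L_BEAT) % (2 * math.pi)

def position(phi_fine, phi_beat):
    """Coarse range from the beat phase, refined by the fine-wave phase
    after resolving the integer-cycle ambiguity."""
    coarse = phi_beat / (2 * math.pi) * L_BEAT
    fine = phi_fine / (2 * math.pi) * L_FINE
    n = round((coarse - fine) / L_FINE)  # whole fine wavelengths
    return n * L_FINE + fine

d = 123.456  # metres along the cable (must be below L_BEAT)
print(abs(position(*phases(d)) - d))  # tiny residual, rounding only
```

In practice the coarse estimate only needs to be accurate to within half a fine wavelength for the ambiguity to resolve correctly, which is why the 200 m beat can be fairly noisy.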

For testing, this scenario was simplified to a single Rover and a combined Fixed Receiver-Reference unit. Both units were configured from the same GNU Radio program, where all the DSP was performed as well. In order to correct clock drifts between the Rover and the Fixed receiver, the Reference unit generates a wave that compensates for the frequency and phase offset. As illustrated in Figure 4, six waves are to be transmitted in total around a carrier frequency (fc): four transmitted by the Rover (f1, f1-Δf, f2 and f2+Δf) and two corrective ones transmitted by the Reference unit (f1-2Δf, f2+2Δf). Frequencies close to f1 relate to wave 1, which is directly employed in the calculation of the fine-grained position within a short range (one wavelength). In turn, frequencies close to f2 relate to wave 2, which, demodulated by f1, creates a low-frequency wave used to localize within a long range, in the order of 200 m. Let f1-Δf be denoted f12 and the corrective wave (f1-2Δf) be f1r. The Reference unit calculates f1r so that its frequency and phase offset relative to f12 is the same as that between f1 and f12. The calculation that yields the correction factor can be illustrated, in simplified form, by the block diagram of Figure 5. In an initial step, the phase difference between the two original waves (f1 and f12) is compared with the phase difference between f12 and f1r. The result is itself a wave with the corrective frequency and phase to be applied. Ideally it would be sufficient to use this wave to correct a reference wave but, since the clock drift is expected to change more quickly than the response time of the system, the correction is done in a two-step process. First, the peak of the FFT determines the frequency shift to be applied, which, accumulated over time, converges to the real frequency correction. When this frequency converges, the phase information from the comparator blocks is also used in a complex VCO to produce the final correcting wave. At this point, the wave comparison result (after the last multiply-conjugate block) should have 0-frequency and 0-phase.
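The two-step correction can be sketched numerically. This is an illustration, not the GRC block implementation: the parameters are assumed, the signal is noise-free, and the offset is placed exactly on an FFT bin so that a single pass converges:

```python
import numpy as np

fs, n = 102_400.0, 4096          # sample rate (Hz) and FFT size (assumed)
t = np.arange(n) / fs
true_df, true_phi = 75.0, 0.7    # frequency/phase offset to be corrected
cmp_wave = np.exp(1j * (2 * np.pi * true_df * t + true_phi))

# Step 1: coarse frequency correction from the FFT peak.
spectrum = np.fft.fft(cmp_wave)
df_est = np.fft.fftfreq(n, 1 / fs)[np.argmax(np.abs(spectrum))]

# Step 2: once the frequency is removed, the mean residual yields the
# phase, which drives the complex oscillator producing the correction.
residual = cmp_wave * np.exp(-2j * np.pi * df_est * t)
phi_est = np.angle(residual.mean())
correction = np.exp(-1j * (2 * np.pi * df_est * t + phi_est))

# After applying the correction, the comparison result has 0-frequency
# and 0-phase, as described above.
print(df_est, round(phi_est, 3))
```

With an off-bin offset the FFT peak is only accurate to one bin width, which is why the real system accumulates the estimate over time before the phase step engages.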

Figure 5. Conceptual implementation of the direct phase detection system with reference unit


For the implementation of the current system, the blocks marked with an asterisk were implemented as add-on blocks for GNU Radio. With a stable system at the Reference unit, the Fixed receiver unit will only see phase shifts that depend on the position of the Rover. In this design the Reference unit is a critical part, since frequency and phase shifts are expected to occur continuously because various factors, like temperature, affect each unit's clock independently. Nevertheless, it is required that these shifts are smooth enough to allow the system to follow the change. In the case of abrupt changes in the clocks, the system must wait until it stabilizes again.

D. Round-trip phase detection

An approach which tries to minimize the need for full clock synchronization was also considered. In this setup, since the signal is emitted and received by both units, the goal is to evaluate whether phenomena affecting the signal in each transmit/receive chain (e.g., frequency shift due to different clock rates) are counter-compensated when the signal goes through the reverse chain of the same device. As a matter of fact, when receiving a signal emitted from the same device, despite all the SDR complexity, it appears in optimal conditions, i.e., without frequency shifts. Even though this architecture might be conceptually more complicated, due to the round trip, the principle can easily be verified with two units. On the one hand, the Fixed unit will emit a simple wave and listen for "reflections" in N channels separated by a given frequency Δf, where N is the potential number of Rover units. On the other hand, the Rover unit will listen for the original wave, shift it to its own frequency channel and retransmit it. In this stage, signal processing should be kept to a minimum to avoid introducing jitter.
A simplified view of the implementation with a single channel in GNU Radio is presented in Figure 6. For this test, USRP box 1 (acting as Fixed unit) simply transmits a sine wave of frequency f1. Then, as seen in the flow graph in the middle of Figure 6, USRP box 2 recovers the transmitted wave, filters it and retransmits the wave shifted in frequency by -Δf and +Δf. In the last step, again in USRP box 1, these two waves are received, then individually shifted by their nominal frequency (to 0 Hz), filtered, and their phases averaged. The reason for having two reflected waves (at +Δf and -Δf) is that any shift in frequency will incur a continuous phase delay in one direction. Doing so for two symmetric frequencies and averaging their phases at reception cancels out this delay.

Figure 6. Conceptual implementation of the round-trip phase detection system

V. RESULTS

In order to assess the performance of these methods, in a first phase the signal phase stability, which is the most crucial parameter of the system, was analyzed graphically. Using the plot tools of GRC, the phase of the signal could be checked for variation over time. In the direct phase detection method, which uses a reference unit, the system would ideally converge to 0-frequency and 0-phase within a very few seconds, which would be acceptable if changes in frequency were progressive and rarely abrupt. Unfortunately, neither of these conditions was met. Fast and significant changes in frequency, like those observed in Figure 7 (usually by 30 Hz or more), happened quite frequently, around every 1 to 5 seconds. Given that the system needed a few seconds to stabilize, this method proved to be of little help in the current scenario.

Figure 7. Method 1 phase stability

From the plot one can also notice that, despite these frequency changes, phase continuity was kept, which indicates that frame dropping was not the issue. In order to isolate the problem, a dedicated RF wave generator was used to emit the original wave instead of an SDR unit. Differences in signal stability were immediate and very noticeable, eliminating this frequency-hopping phenomenon almost completely. This is a strong indicator that, at some point in the transmit chain, the signal suffers side effects, possibly from components highly sensitive to clock changes or from de-synchronization among processing layers, introducing small frequency hops in the resulting wave.

The second method, based on the round-trip principle to minimize clock synchronization issues, was evaluated in a very similar way. After implementing the model, step-by-step tuning of frequencies and filter parameters was performed and quite interesting results could be observed. In the current setup, after being "reflected", the signal could be recovered in the original unit and, after being shifted to 0-frequency, it behaved well, without presenting any kind of frequency hops. Although sensitive to interference from nearby bodies, when basic smoothing was applied using a moving-average filter, the phase remained impressively constant while staying perfectly responsive to distance changes. In Figure 8 it is possible to observe the evolution of the signal phase over a period of 16 s. After second 5 one unit was moved by nearly 40 cm, kept there for 2 s and then rapidly moved back to the original position. Indeed, in a round-trip configuration at 150 MHz over air (2 m wavelength), a complete period (2π) occurs for a 1 m displacement. A 40 cm change should therefore incur a phase delay of about 2.5 rad, which is approximately the observed value.

The results obtained in this test provide a strong argument that, although the signal fluctuates slightly while transmitting, those effects are not independent between the transmit and receive chains of the SDR. Indeed, it is quite remarkable that, at sample rates higher than 1 MS/s, there was no frame dropping and the receive and transmit chains remained sufficiently aligned for those fluctuation effects in the signal to cancel out.
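Both the symmetric ±Δf cancellation and the 40 cm ≈ 2.5 rad figure can be checked with a small numerical sketch. The model is assumed, not taken from the implementation: an uncompensated frequency error adds phase ramps of opposite sign to the two reflected channels, while the round trip adds a common phase; a small displacement is used so the wrapped phase stays unambiguous:

```python
import numpy as np

fs, n = 100_000.0, 2048
t = np.arange(n) / fs
eps = 3.0   # residual frequency error after de-shifting, Hz (assumed)
lam = 2.0   # wavelength at 150 MHz over air, m
d = 0.1     # displacement, m (kept small to avoid phase wrapping)

phi_d = 2 * np.pi * (2 * d) / lam                    # round trip: 2*pi per 1 m
up = np.exp(1j * (2 * np.pi * eps * t + phi_d))      # channel shifted by +df
down = np.exp(1j * (-2 * np.pi * eps * t + phi_d))   # channel shifted by -df

# Averaging the two phases: the opposite ramps cancel and only the
# common displacement phase survives.
avg_phase = np.angle(up * down) / 2.0
print(avg_phase.max() - avg_phase.min())  # ~0: the ramp is gone
print(avg_phase[0])                       # ~0.628 rad for 10 cm

# The displacement observed in Figure 8: 40 cm round trip at 150 MHz.
print(2 * np.pi * (2 * 0.4) / lam)        # ~2.51 rad
```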

VI. CONCLUSIONS AND FUTURE WORK

This paper presents a study evaluating localization techniques implemented on Software-Defined Radio (SDR) platforms, intended to enable accurate positioning over the total length of the CERN tunnels. Two narrowband approaches, based on the principle of Time-of-Flight, were investigated. On the one hand, one approach uses direct phase detection with a synchronization signal, requiring very simple Rover units. On the other hand, a radar-like approach relaxes the need for clock synchronization but requires jitter and frame dropping to be within acceptable limits. Tests using the USRP B100 and GNU Radio showed that the first approach did not perform well, since the signal was recurrently affected by fast frequency hops which could not be compensated within its stability time frame. In turn, the second approach performed quite well, as the effects introduced in the signal along the transmit chain of the SDR were cancelled out when it also passed through the receive chain of the same device. Furthermore, relative movement between the units was clearly observable, closely matching the displacement. Next steps foresee the development of the second method for long-term phase stability and comprehensive tests in the tunnels taking advantage of the leaky-feeder infrastructure.

ACKNOWLEDGMENTS

The authors would like to express their gratitude for the support of F. Chapron, A. Pascal and A. Molero from the IT/CS group at CERN, without whom this study wouldn't have been possible.

REFERENCES

[1] "CERN - The Large Hadron Collider," [Online]. Available: http://public.web.cern.ch/public/en/LHC/LHC-en.html.
[2] A. Bensky, "Received Signal Strength," in Wireless Positioning Technologies and Applications, Artech House, 2008.
[3] F. Pereira, C. Theis, A. Moreira and M. Ricardo, "Multi-technology RF fingerprinting with leaky-feeder in underground tunnels," in Indoor Positioning and Indoor Navigation (IPIN), 2012 International Conference on, Sydney, 2012.
[4] M. Vossiek, L. Wiebking, P. Gulden, J. Wieghardt, C. Hoffmann and P. Heide, "Wireless Local Positioning," IEEE Microwave Magazine, vol. 4, no. 4, pp. 77-86, December 2003.
[5] GPS.gov, "GPS Standard Positioning Service (SPS) Performance Standard," 2008.
[6] D. Valerio, "Open Source Software-Defined Radio: A survey on GNU Radio and its applications," Vienna, 2008.
[7] "1-1/4" RADIAFLEX® RLKW Cable, A-series," RFS, [Online]. Available: http://www.rfsworld.com/dataxpress/Datasheets/?q=RLKW11450JFLA.
[8] "Ettus Research LLC," [Online]. Available: http://www.ettus.com/.
[9] "GNU Radio official website," [Online]. Available: http://gnuradio.org/redmine/projects/gnuradio/wiki.
[10] E. Blossom, "GNU Radio: Tools for Exploring the Radio Frequency Spectrum," 2004. [Online]. Available: http://www.linuxjournal.com/article/7319.

Figure 8. Method 2 phase stability and reaction to position change

Positioning in GPS Challenged Locations: NextNav's Metropolitan Beacon System

Subbu Meiyappan, Arun Raghupathy, Ganesh Pattabiraman
NextNav, LLC, Sunnyvale, CA 94085, USA
[email protected], [email protected], [email protected]

Abstract—In this paper we explore the limits of GNSS-based positioning solutions, with specific emphasis on the challenges presented to GNSS systems in indoor locations and urban canyons. In this context we introduce NextNav's Metropolitan Beacon System as a reliable, ubiquitous, low-power, fast (order of 6 s cold-start TTFF) positioning service. NextNav's technology enables consistent, wide-area indoor and outdoor location accuracy using a network deployed with metropolitan-area coverage, in contrast with other positioning systems that are limited to specific venues. The NextNav system provides high horizontal accuracy (currently ~20 m) and precise vertical accuracy (~1-3 m), with yields of 98% where deployed. NextNav's technology has been operational in the San Francisco area for well over three years, and has been subjected to numerous third-party trials to verify system performance. Most recently, the FCC-sanctioned CSRIC Working Group 3, tasked with exploring indoor location accuracy standards for wireless emergency E911 location, conducted a side-by-side trial of various location technologies at its national test bed. This test program examined horizontal and vertical indoor location performance across rural, suburban, urban and dense urban morphologies. NextNav technology had 2-D errors of 28 m (67th percentile) and 44 m (90th percentile) in rural morphologies, 28 m/52 m in suburban, 62 m/141 m in urban, and 57 m/102 m in dense urban morphologies, which was significantly better accuracy than competing technologies. The results from the CSRIC trial are presented and discussed in this paper.

Keywords—indoor positioning, terrestrial signals, precise altitude, E911, fast TTFF

I. INTRODUCTION

GNSS-based location systems provide very reliable and accurate location information in urban and suburban outdoor environments. While A-GNSS helps in providing some level of indoor location solution, it seldom works in deep indoor and dense urban environments. With the growing usage of smartphones, a reliable, accurate yet scalable solution is needed to provide similar levels of performance to GNSS, in GNSS-challenged environments, for both public safety applications (E911) and commercial Location-Based Services (LBS). There are several Signals of Opportunity (SoP) that are currently being used to solve indoor location problems in localized environments for commercial applications, such as WiFi, RFID, Bluetooth Low Energy (BLE), etc. However, to provide reliable, accurate, scalable location and timing on a wide-area basis, a dedicated network designed to provide location signals (like GPS) with terrestrial transmissions is essential. NextNav is building such a network, called the Metropolitan Beacon System (MBS), currently deployed in some markets in the United States.

II. NEXTNAV MBS NETWORK

Unlike location systems designed to offer a building-specific indoor location capability, NextNav's service is built as a wide-area network with similar coverage scale to a metropolitan cellular network. Consistent, accurate indoor location performance is designed to be available across an entire market area, and is not limited to a specific venue or set of venues. Figure 1 illustrates the basic architecture of the NextNav network. Where the satellite signals are blocked, for example in an urban canyon or deep indoors, NextNav beacons provide terrestrial ranging signals to enable receivers to compute their location.


Figure 1: NextNav Network Architecture

This location, computed on the device, is then utilized by applications on the mobile, or measurements are relayed over the network to a server, where the location is computed.

A. NextNav MBS Network Characteristics

Some characteristics of the MBS network are:

- The network consists of high-power (30 W peak ERP) broadcast transmitters (beacons).

- The network is designed and deployed for both coverage and geometry such that at every location the Geometric Dilution of Precision (GDOP) is ≤ 1.5.

- The beacons in a network are synchronized autonomously to GPS time and to each other within a few nanoseconds of each other.

- The transmit antenna location, GPS antenna location, cable lengths, etc. are precisely surveyed/measured and normalized across the network for delay computation.

- Several beacons in a given network have weather stations installed to help with altitude determination at the receiver.

- The beacons occupy a very small footprint and are co-located on cell towers or rooftops with, typically, an omni-directional antenna.

- The network functions as an "overlay" network to the existing cellular infrastructure.

- The ideal location for the beacons is the highest available point on existing broadcast, paging or cellular tower facilities.

- The beacons do not need a backhaul – some telemetry services are used for remote monitoring and control from the Network Operations Center (NOC).

- Since the broadcast signals from the beacons are used to compute position, there are no limits on capacity.

B. NextNav Elevation System

The NextNav elevation system is based on the principle that atmospheric pressure decreases with increasing elevation. The challenge comes from the fact that even normal weather phenomena cause changes in pressure that are an order of magnitude larger than the pressure change resulting from moving from one floor to another in a building. In the San Francisco Bay Area, for example, NextNav has observed ambient pressure changes equivalent to ascending or descending more than 200 feet within several hours. The solution to this challenge is to simultaneously measure the weather-induced changes in pressure at multiple fixed locations and to use that information as a real-time reference for the pressure readings of a mobile device. By offsetting the weather-induced changes (including temperature differentials and other factors), the remaining changes in pressure correspond to elevation changes.

NextNav has determined that, in order to deliver a high-performance altitude measurement system, the device must be capable of measuring barometric pressure with a high degree of accuracy. For such a device to be practical, it must be small enough, low enough cost, and low enough power to be embedded in a portable consumer product. Due to demand for pressure measurements in a variety of consumer electronics devices, most recently in mobile phones (e.g., the Samsung Galaxy series of handsets) and various tablet computers, there are a number of MEMS pressure sensors on the market that meet the requirements for size, power and cost. Note that the reference pressure readily available from airports, the National Oceanic and Atmospheric Administration (NOAA), etc. does not have the precision to provide floor-level accuracy.

III. NEXTNAV SIGNAL STRUCTURE

MBS signals are transmitted between 919.75 and 927.25 MHz in the United States. MBS signals are designed such that an existing GNSS baseband can be reused in its entirety to process the MBS signal. The current signal structure for the MBS network has characteristics very similar to GPS signals in that the chipping rate is 1.023 Mcps and it uses the family of Gold codes defined in the GNSS specifications.

The spectral characteristics at baseband are shown in Figure 2.
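The ranging principle behind such PRN signals can be sketched as follows (a random ±1 sequence stands in for the actual Gold codes, the delay is a whole number of chips, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)  # one 1023-chip code period
true_delay = 217                           # delay in whole chips

received = np.roll(code, true_delay)       # circularly delayed replica

# Circular cross-correlation via FFT; the peak index recovers the delay.
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
delay_est = int(np.argmax(corr))
print(delay_est)  # recovers 217

# At 1.023 Mcps one chip spans c / 1.023e6 (about 293 m) of range, so
# resolving a small fraction of a chip gives metre-level ranging.
print(3e8 / 1.023e6)
```

A real receiver also searches over carrier frequency and tracks sub-chip offsets, but the correlation peak above is the core of the measurement.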

The signal from each transmitter is transmitted in a time slot with a specific PRN and frequency offset, as determined during network planning. The payload data (transmitter latitude, longitude, timing correction, pressure, temperature, etc.) modulates the PRN sequence at the rate of 1 bit/msec.

Figure 2: NextNav Signal's Spectral Characteristics

Further details about the NextNav signal structure are available in the MBS ICD [1] and can be obtained by making an official request to NextNav. MBS is a terrestrial GNSS "constellation" of beacons in which the ranging measurements are similar to a GNSS system, and the MBS receiver can utilize the same call flows as for A-GNSS.

IV. CSRIC TESTING

A. CSRIC Committee and Test Objectives

CSRIC III Working Group 3 (WG3) was tasked by the FCC to investigate wireless location in the context of indoor wireless E911. In June 2012, WG3 submitted its initial report to the FCC regarding Indoor Location Accuracy for E9-1-1. As one of its primary findings, the report identified the lack of objective and validated information regarding the performance of available location technologies in various representative indoor environments. The Working Group identified obtaining this critical information as its highest priority and established a set of cooperative actions, including the creation of an independent test bed, to accomplish this task. WG3 created the framework for the test bed, whose objectives have been to:

- Enable an "apples to apples" comparison of various location technologies in real-world conditions.

- Provide unbiased, objective data on the performance of various location technologies in indoor environments to the FCC. This will establish the framework for its longer-term objectives for E911.

- Establish a benchmark against which emerging technologies can be compared to determine their relative promise in improving the capabilities that are currently available (both in terms of accuracy and/or consistency).

An independent test house, Technocom, was selected to perform the test in various morphologies using all available location technologies in the San Francisco Bay Area during Nov-Dec 2012. The detailed test report submitted by Technocom is available at the FCC website [1] or on the NextNav website [4]. A summary of the results is discussed below. The morphologies (or wireless use environments) are those that were defined in ATIS-0500011, namely dense urban, urban, suburban and rural. These morphologies have subsequently been adopted in ATIS-0500013 [2], defining the recommended indoor location test methodology.

B. Test Configuration

Various technologies participated in the test bed: NextNav's MBS, Polaris Wireless RF Fingerprinting, and Qualcomm's AGPS/AFLT. All receivers (2 per technology) were assembled in a cart and tested simultaneously, as shown in Figure 3. The NextNav receiver is shown on the left.

Figure 3: Receiver Assembly for Test

The end-to-end test configuration for NextNav technology used in this test is illustrated in Figure 4.


Figure 4: Test Setup

C. Test Execution

Indoor ground truths were carefully surveyed to within +/- 2 cm in both vertical and horizontal accuracy. Each technology was tested with at least two handsets at any given location. Over 13,400 test calls were placed from the devices of each of the 3 technologies at 74 valid indoor test points, averaging over 180 calls per test point. The test point distribution is shown in Table 1.

Figure 5: Dense Urban Morphology Polygon

Table 1: Test Point Distribution Summary

Morphology        | Number of Test Points
Dense Urban (DU)  | 29
Urban (U)         | 23
Suburban (SU)     | 19
Rural (R)         | 4
Total             | 75

Buildings were selected for testing in the dense urban morphology from the polygon shown in Figure 5. Similar polygons were set up and agreed upon for the other morphologies. The test points were selected by Technocom to meet the general requirements of the test plan with adequate diversity in their RF environment (including adequate cellular signal coverage), placement of the point in the building, and non-intrusive test performance. Several of the test points were deep indoor locations (5-6 walls inside). An example of one such building, the Hearst Office Building (699 Market St., San Francisco), is shown in Figure 6. There were typically 4 test locations indoors at each building.

Figure 6: A Building in Downtown SF – example test point

The tests were conducted over a period of 21 days and the results were processed and analyzed for the following performance criteria:

- Location Accuracy
- Latency (TTFF)
- Yield
- Reported Uncertainty
- Location Scatter

2013 International Conference on Indoor Positioning and Indoor Navigation, 28th-31th October 2013 Note that The Time to First Fix (TTFF) or the time to obtain the first computed caller location is reported for each technology at each test point and is calculated by establishing the precise time for call initiation (or an equivalent initiation event if the vendor‟s test configuration did not support the placement of an emergency like call, e.g., 922). A 30 second timeout was set for latency as the maximum time allowed to get a location fix, while the call was in progress. Further, each of the fixes is performed and measured under cold start conditions. Hence, the notion of TTFF is different in this test compared to conventional GNNS concepts. V.

V. RESULTS

Summary results for accuracy per morphology are shown in Table 2 and Table 3 for the different technologies. Figure 7 shows a pictorial view of the results, with the 67th percentile at the bottom and the 90th percentile at the top of each bar for the different morphologies. The 67th and 90th percentile numbers can be compared to the FCC E911 mandated Phase 2 requirements of 50 m 67% of the time and 150 m 90% of the time (shown in solid grid lines). In Table 3, only NextNav's technology is shown for vertical results because it was the only technology that tested a vertical system. Note that typical floor height in a multi-story building is around 3 m. In Table 2 the numbers have been rounded to the nearest integer, where appropriate, for typographical space. A copy of the full report with the CDF curves for each morphology and technology can be found in [3].

Figure 7: Comparative Performance

From the results, it is evident that using a terrestrial constellation designed for delivering position location signals performs better than any available wide area positioning system. Further, elevation information with floor-level accuracy is a unique capability that has never been achieved before in a wide area context.

Table 2: Summary 2D results of key parameters from CSRIC testing

Technology and Morphology   67th (m)   90th (m)   Avg. TTFF (s)   Yield (%)   Conf. (m)
NextNav DU                  57         102        27.36           93.9        93
Qualcomm DU                 156        268        28.24           85.8        93
Polaris DU                  116        400        24.37           99.4        69
NextNav U                   62         141        27.40           95.4        87
Qualcomm U                  227        450        27.83           90.8        79
Polaris U                   198        447        24.11           99.9        61
NextNav SU                  28         52         27.39           100.0       97
Qualcomm SU                 75         205        23.53           91.4        85
Polaris SU                  232        420        24.68           99.8        71
NextNav R                   28         44         27.56           97.3        95
Qualcomm R                  48         210        24.88           99.3        82
Polaris R                   575        3005       23.38           96.9        49

Table 3: Summary vertical error results from CSRIC testing

Morphology   67th (m)   90th (m)
NextNav DU   2.9        4.0
NextNav U    1.9        2.8
NextNav SU   4.6        5.5
NextNav R    0.7        1.1
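The comparison of percentile accuracy against the FCC Phase 2 limits can be sketched as follows; this is an illustrative nearest-rank computation, not the report's exact statistical procedure.

```python
# Illustrative sketch: check a set of horizontal position errors against the
# FCC E911 Phase 2 limits of 50 m at the 67th percentile and 150 m at the 90th.

def percentile(errors, p):
    """Nearest-rank percentile of a list of error magnitudes (metres)."""
    s = sorted(errors)
    k = max(0, min(len(s) - 1, int(round(p / 100.0 * len(s))) - 1))
    return s[k]

def meets_phase2(errors):
    return percentile(errors, 67) <= 50.0 and percentile(errors, 90) <= 150.0
```

For example, the NextNav SU row of Table 2 (28 m at 67%, 52 m at 90%) satisfies both limits, while the Polaris R row (575 m, 3005 m) satisfies neither.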

ACKNOWLEDGMENT

NextNav would like to thank Technocom for providing the results of the CSRIC III testing.

REFERENCES

[1] CSRIC WG3 Indoor Location Test Report, Mar 2013. http://transition.fcc.gov/bureaus/pshs/advisory/csric3/WG3_Indoor_Test_Report_Bay_Area_Stage_1_Test_Bed_Jan_31_2013.pdf
[2] https://partner.nextnav.com/
[3] "Approaches to Wireless E9-1-1 Indoor Location Performance Testing," ATIS-0500013, Feb 2010. https://www.atis.org/docstore/product.aspx?id=25009
[4] "Indoor Location Test Bed Report," CSRIC III, Mar 2013. http://www.nextnav.com/sites/default/files/CSRIC_III_WG3_Final_Test_Bed_Rpt_3_14_2013.pdf

Indoor Positioning using Wi-Fi – How Well Is the Problem Understood?

Mikkel Baun Kjærgaard, Mads Vering Krarup, Allan Stisen, Thor Siiger Prentow, Henrik Blunck, Kaj Grønbæk, Christian S. Jensen
Department of Computer Science, Aarhus University, Denmark
Email: mikkelbk,mvk,allans,prentow,blunck,kgronbak,[email protected]

Abstract—The past decade has witnessed substantial research on methods for indoor Wi-Fi positioning. While much effort has gone into achieving high positioning accuracy and easing fingerprint collection, it is our contention that the general problem is not sufficiently well understood, which prevents deployments and their usage by applications from becoming more widespread. Based on our own and published experiences with indoor Wi-Fi positioning deployments, we hypothesize the following: current indoor Wi-Fi positioning systems and their utilization in applications are hampered by a lack of understanding of the requirements present in real-world deployments. In this paper, we report findings from qualitatively studying organisational requirements for indoor Wi-Fi positioning. The studied cases and deployments cover both company and public-sector settings and the deployment and evaluation of several types of indoor Wi-Fi positioning systems over durations of up to several years. The findings suggest, among others, a need for supporting all case-specific user groups, software platform independence, low maintenance, and positioning of all user devices regardless of platform and form factor. Furthermore, the findings vary significantly across organisations, for instance in terms of the need for coverage, which motivates the design of orthogonal solutions.

I. INTRODUCTION

Motivated by the challenge of indoor positioning, a substantial amount of research has focused on methods for indoor Wi-Fi positioning. For instance, a search for Wi-Fi and positioning on Google Scholar [1] returns over ten thousand papers. Already in 2007, a survey covered over fifty papers presenting different methods for Wi-Fi positioning [2]. Since then, research on the topic has increased its output and is by now accompanied by articles that study the links between Wi-Fi positioning and other positioning technologies. Research articles on indoor Wi-Fi positioning are foremost method oriented, i.e., most of them propose a new technique to address one general goal, e.g., positioning accuracy as evaluated on collected datasets. General arguments are given to promote addressing the specific topic of the presented contribution. However, these claims are often not backed up with statements grounded in insights from positioning system stakeholders (e.g., future owners or users) or real-world use experiences with deployed systems. Therefore, it is largely unknown whether research is addressing the most pressing issues; e.g., it is unclear whether further accuracy gains are more pressing than, say, improved methods that allow for positioning of devices across a broader variety of operating systems and form factors. Therefore, an understanding of the organisational requirements for indoor Wi-Fi positioning as deployed in real-world use, e.g., at companies or public-

sector institutions such as hospitals, is needed to justify further research both on known issues and on yet mostly unaddressed ones. To the best of our knowledge, no studies have been published so far which focus on reporting the organisational requirements for indoor Wi-Fi positioning. Research on Wi-Fi positioning has inspired commercial businesses to provide, on common smartphones and within urban areas, positioning systems that, e.g., allow users to pinpoint the building they are in or enable a points-of-interest application to show that it is two kilometres to the nearest shopping mall. For such positioning systems, earlier studies claim accuracy levels of 30-70 meters, depending on calibration level and algorithm used [3]. Furthermore, quite a number of papers discuss applications of urban positioning, e.g., location-based games, life logs from place visits, or location-based reminders [4]. At the indoor level, the research has also inspired businesses to provide site-specific indoor Wi-Fi positioning systems to individual organisations, targeting an accuracy below 3 meters [5], [6]; however, such technology is not massively deployed yet. Recently, new players are entering the scene, providing site-independent indoor Wi-Fi positioning for public spaces together with indoor maps, e.g., Google Maps 6.0 [7], targeting an accuracy of 5-10 meters [8]. However, given the lack of knowledge of organisational requirements, it is hard to judge the application potential of these systems. Given the substantial amount of research into methods for indoor Wi-Fi positioning in the last decade, one would expect that there would by now exist a multitude of papers reporting on the deployment experience of indoor location-based applications which utilize indoor Wi-Fi positioning. It is therefore a paradox that when the authors surveyed the literature, only seven articles on the topic could be identified.
In the light of the published as well as of our own experiences with indoor Wi-Fi positioning deployments, we hypothesize the following: current indoor Wi-Fi positioning systems and their utilization in applications are hampered by the lack of understanding of the organisational requirements present in real-world deployments. This paper thus addresses the lack of knowledge of organisational requirements for indoor Wi-Fi positioning. Our case studies and deployments cover both company and public-sector settings and the deployment and evaluation of several types of indoor Wi-Fi positioning systems deployed for several years. The paper's contributions are as follows: We present findings of important requirements in different organisations based on case studies of deployed indoor Wi-Fi positioning systems in both company and public-sector settings. The findings suggest, among others, a need


e.g., public institution versus private company; secondly, the size of the organisation in terms of the number of potential users and the total coverage area of its buildings. We selected cases among organisations which we knew either to already have experience with positioning or to have an interest in trying out positioning. Consequently, we chose to study cases at a Small Private Company (SPC), a Medium-sized University Department (MUD), a Large Shopping Mall (LSM) and a Large Public Hospital (LPH). In the following, we will use these abbreviations to denote either the respective deployment scenario or the respective stakeholder parties. Table I lists the studied organisations together with the number of potential system users, the size of the total coverage area and the type of Wi-Fi positioning that was deployed. Within the organisations we contacted persons with an interest in or with experience of using indoor Wi-Fi positioning. The contacted persons varied in their knowledge specifically about positioning and also in their level of technical knowledge in general.

for supporting all user groups, providing software platform independence, low maintenance, and enabling positioning of user devices regardless of form factor. Additionally, there is a need to establish application requirements for not only accuracy but also latency. Furthermore, the findings vary significantly across organisations, for instance in terms of the need for coverage, thereby motivating the design of orthogonal solutions.

II. RECAP OF INDOOR WI-FI POSITIONING

For use in the remainder of the paper we establish some terminology for Wi-Fi positioning systems, building on the most extensive attempt yet to structure this field [2]. Indoor Wi-Fi positioning has been studied for more than a decade, and research has proposed a variety of methods and algorithms building on the notion of location fingerprinting [9]. At the core of any location fingerprinting system is a radio map, which is a model of network characteristics in a deployment area. A positioning method uses this radio map to compute a likely position given an observation of the current network characteristics. Additionally, positioning methods might fuse Wi-Fi-derived positions with other sensor observations [10]. Wi-Fi positioning systems are classified according to the division of roles as device-based if both taking measurements and positioning are performed by the device to be positioned, device-assisted if measurements are taken by the device and positioning is performed remotely, and network-based if the network carries out both the measuring and the positioning remotely.

Our procedure for gathering information for the case studies is primarily based on semi-structured interviews with stakeholders about their organisations' requirements for indoor Wi-Fi positioning. Additionally, for two of the organisations we deployed a positioning system at their site: for SPC, since the organisation did not have prior experience with positioning, and for LPH, to enable them to experiment with a different type of indoor Wi-Fi positioning. For these two cases we also did follow-up interviews after the deployments. To guide the case study we reviewed the existing literature. However, we found no research studying in depth the organisational requirements for Wi-Fi positioning. Instead, as stated in the introduction, research in the field is motivated by and focuses on general goals regarding the improvement of accuracy or the reduction of deployment cost, and, accordingly, general arguments for these goals are given to promote the specific topic of the presented contribution to the field. With regard to organisational requirements, though, the literature has so far focused foremost on capturing the technical differences among positioning systems by defining evaluation criteria covering different technical aspects of these systems, e.g., resolution, accuracy, coverage and infrastructure requirements, among others. Furthermore, the claims regarding the systems' performance are often purely technical, and not backed up with statements grounded in insights from users and stakeholders. So far only two aspects of Wi-Fi positioning deployments have been linked with and discussed in regard to organisational requirements: firstly, privacy, e.g., in an academic setting [14], and secondly, and only in the form of general comments, social barriers for fingerprint collection [11].

Radio maps can be constructed by methods which can be classified as either empirical or model-based. Empirical methods use collected fingerprints to construct radio maps. Model-based methods instead use a model parameterised for the covered area to construct radio maps [9]. Furthermore, given recent trends, we will in this paper subdivide the empirical methods into administrative, participatory and opportunistic fingerprinting. Administrative fingerprinting is carried out by the administrator of the system or by an expert hired on behalf of the administrator [9]. Participatory fingerprinting refers to users of the positioning system collecting fingerprints when and where they want to [11]. Opportunistic fingerprinting refers to the collection of fingerprints during normal system use without any user intervention or explicit ground truth provision, e.g., with the assistance of inertial sensors [12] or using unsupervised techniques to recover the mapping to the physical space [13].
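To make the fingerprinting terminology concrete, here is a minimal sketch of an empirical radio map and a k-nearest-neighbour position estimate in signal space; the data layout and access point names are invented for illustration and are not taken from any of the cited systems.

```python
# Minimal empirical-fingerprinting sketch: the radio map is a list of
# (position, RSSI fingerprint) pairs; a position is estimated by averaging
# the k fingerprints closest to the observation in signal space.

def distance(fp, obs, missing=-100.0):
    """Euclidean distance in signal space; unseen APs default to a weak RSSI."""
    aps = set(fp) | set(obs)
    return sum((fp.get(ap, missing) - obs.get(ap, missing)) ** 2 for ap in aps) ** 0.5

def knn_position(radio_map, obs, k=3):
    """radio_map: list of ((x, y), {ap_id: rssi}) fingerprints."""
    nearest = sorted(radio_map, key=lambda entry: distance(entry[1], obs))[:k]
    xs = [p[0] for p, _ in nearest]
    ys = [p[1] for p, _ in nearest]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

In an administrative deployment the radio map entries would be surveyed by an expert; in a participatory or opportunistic deployment the same structure would be filled by users or inferred during normal use.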

III. ORGANISATIONAL REQUIREMENTS

In this section we address the knowledge gap regarding organisational requirements for indoor Wi-Fi positioning. Here, organisation denotes an organized entity involving several people with a particular purpose, such as a business or a public-sector institution or department. To gather knowledge about the organisational requirements, we chose the analysis of case studies as our guiding research method. This enabled us to study different cases of organisations and distill from the collected information the case-specific requirements and analyse how these differ among organisations.

In the following we present the findings from the four case studies, structured according to nine requirement types we identified as important. These findings are also summarized in Table II, which reveals that the identified requirements differ among organisations, suggesting that there may not be a "one size fits all" indoor positioning system. User Groups (O1) Several organisations, most clearly MUD and LPH, stated that they had several user groups that were candidates for using a positioning system. These were for LPH: clinical staff, service staff, patients and guests; and for MUD: academic staff, technical staff, administrative

The cases we consider have been chosen in order to cover the following two dimensions. Firstly, the type of organisation,


TABLE I. DETAILS FOR CASE ORGANISATIONS

Case  Description                         Potential Users               Total Coverage                                     Wi-Fi Positioning Deployment                               Interviewed
SPC   Small private company               70                            A three story building                             Empirical participatory fingerprinting                     Two software engineers
MUD   Medium-sized university department  140 + students                Five buildings with three to five stories each     Model-based and empirical administrative fingerprinting    Network Administrator
LSM   Large shopping mall                 100 + customers               Large building complex with 85 shops               Empirical administrative fingerprinting                    Site Manager
LPH   Large public hospital               5000 + patients and visitors  Large building complex with 6000 individual rooms  Empirical administrative and participatory fingerprinting  Two Network Administrators

TABLE II. SUMMARY OF FINDINGS FOR ORGANISATIONAL REQUIREMENTS

Case  O1        O2           O3       O4  O5        O6   O7  O8   O9
SPC   Single    Incremental  Few      +   External  +/-  +   -    +
MUD   Multiple  Complete     Several  +   External  +/-  +   -    +/-
LSM   Single    Complete     Few      +   External  +/-  +   -    +/-
LPH   Multiple  Complete     Many     +   Local     +    +   +/-  +

system of the devices to be supported. In particular, LPH stated that they would like their applications to work regardless of the users' device operating systems. When visiting the organisations we also noticed how the organisations used and supported laptops, phones and tablets with different operating systems. LSM would like all visitors to have access to position-based services within the premises. Such platform-independent positioning is not trivial and may induce restrictions in other regards: for instance, in the SPC and LPH deployments potential system users were limited by the fact that the tested device prototype was implemented on the Android platform, as device-side positioning is not possible to implement on current iOS devices due to the restrictions of the currently available APIs.

staff, students and guests. From the studies we noticed that these groups differ in all of: i) how they utilize the space, ii) their mobility patterns, and iii) how long they are within the premises of the organisation. For instance, within LPH the staff come and leave at regular (work-shift) times, and depending on their function they are either largely stationary at one department or move around the whole hospital. In contrast, guests who visit a hospitalized patient often go directly to a specific department, occasionally with a detour to some of the common facilities. Patients, on the other hand, either stay mainly in a department if they are hospitalized, or otherwise walk from an entrance to the department where they receive ambulant treatment and exit the building afterwards. The two organisations LSM and SPC had foremost a single specific group in mind to provide positioning to: for LSM customers, and for SPC staff.

Infrastructure (O5) For LPH it was paramount that the positioning system came with a high level of reliability and availability as soon as their work processes integrated positioning. To achieve this, they viewed it as crucial that the positioning service was hosted within their own infrastructure. SPC and LSM did not have such concerns and were willing to accept a cloud-hosted service, such as the remotely hosted system that was eventually provided to them in the deployment.

Coverage (O2) The organisations differ in how they view the requirements for coverage. When introduced to Wi-Fi positioning, SPC favored the concept of slowly growing the coverage to incrementally include places according to how much they were frequented, as this would ease the initial deployment. After trying this approach, they provided positive statements regarding growing the coverage. However, they also did not have much experience with potential applications, which could potentially make them reconsider these statements. MUD, LSM and LPH stated that they would like complete coverage of their premises, so that the provided applications would work without outages and "dark spots" within the targeted areas, for the intended user groups and for tracked assets.

Data Privacy (O6) For LPH it was important to protect location traces whenever these originate from a device carried by an identifiable person; the reason being that they considered such traces as personal and privacy-sensitive data since, e.g., in the case of patients, medical conditions and their severity may be deduced from in-hospital position traces. LSM, on the other hand, did not view privacy as an organisational issue, because the positioning was used by their customers without LSM having knowledge of the resulting position data.

Form Factors (O3) Most of the organisations stated that they would like the positioning to work for several form factors of devices. SPC and LSM were focused on the positioning of smartphones, MUD on smartphones and tags, and LPH considered all of laptops, smartphones, tablets, Wi-Fi-enabled badges and watches, where smartphones would foremost be used for people tracking, and tags for asset tracking. Furthermore, given the rapid evolution of different form factors, a wish was stated to be able to adopt new device types and form factors as these enter the market.

Maintenance (O7) All organisations explicitly required that their positioning solution should involve only a low degree of maintenance. LPH had already tried an empirical administrative fingerprinting-based solution and gave up on fingerprinting their premises exhaustively once they had concluded that this task would take more than three months for a single person. Furthermore, because they invested in the positioning system as an add-on when replacing their wireless infrastructure, no major resources were assigned to running or configuring this add-on. A second maintenance issue that LPH encountered

Software Platform (O4) Some of the organisations pointed out that positioning should work regardless of the operating


MUD stated that for asset tracking they had received positive reactions when providing an (empirically determined) 6 meter median accuracy. In general, the organisations' stakeholders were unsure about the link between positive application experiences and specific requirements for accuracy and latency.

was that there was no updating procedure in place to inform the installed system that an access point had been replaced, e.g., by an access point with a different identifier but of similar type and location, suggesting that old fingerprints could be reused. After the deployment, LPH saw some potential in empirical participatory fingerprinting as a low-cost solution to improve accuracy in specific areas of the hospital. This fingerprinting approach was attractive also to SPC. Initially, they were concerned whether such a solution was really cheaper, given that highly paid staff may end up spending work time on this task. However, after deployment SPC even suggested that the system should propose new places to users where they should go to take a fingerprint. A problem encountered with this system was that people often selected the wrong floor, since the floors' layouts were very similar. The issue was partly solved by increasing users' awareness of which floor they selected when fingerprinting by presenting a different coloring for each floor. MUD had run a model-based fingerprinting solution for three years with extremely low maintenance: e.g., when all access points were replaced after two years with newer models, only the MAC addresses, residing in a single file, had to be reconfigured to get the system up and running again. They also tested an empirical administrative system in some parts of their buildings to improve accuracy but had given up fingerprinting it after all access points had been replaced. LSM partnered with an external party, so maintenance was limited to providing information about their Wi-Fi infrastructure.

IV. CONCLUSIONS

In this paper we have hypothesized about the reasons why, after more than ten years of research on indoor Wi-Fi positioning, it has not yet achieved a widespread breakthrough in terms of real-world deployments. We argue that this is due to an overly narrow research focus on algorithmic optimization of positioning accuracy in insufficiently realistic settings, at the expense of a broader understanding of the organisational side of indoor Wi-Fi positioning. The findings suggest, among others, a need to consider how to support all user groups, provide software platform independence and low maintenance, and allow positioning of all user devices regardless of platform and form factor. We hope that the research community will address such challenges in future work.

ACKNOWLEDGMENT

The authors acknowledge the support granted by the Danish Advanced Technology Foundation under J.nr. 076-2011-3.

REFERENCES

Fingerprinting Limitations due to Social Barriers (O8) When confronted with the social barriers that might affect the decisions whether an administrator or expert could or should collect fingerprints, SPC did not view this as a major problem. They compared it to the duties of cleaning personnel or of a person watering the flowers, which also require access to and temporary presence in most of the premises. However, for participatory fingerprinting there may be restrictions: e.g., given the case that the system suggests that a participating user should fingerprint his boss's office, him entering that office unnoticed may not be considered an acceptable action within the organisation. MUD had allowed such collection earlier and therefore also did not view this as a major problem. LPH and LSM did not raise issues with the topic either. However, in a hospital setting there are a number of locations which are difficult to get access to, e.g., doctors' offices, resting rooms, and operation rooms, as these are either seldom unoccupied or considered private areas.






Accuracy and Latency (O9) When asked upfront, the stakeholders at LPH and SPC generally wished for room-level accuracy with high confidence and low latency. Wi-Fi positioning systems generally struggle to provide room-level accuracy (assuming rooms < 20 square meters) with high confidence, at least if not assisted by other sensor modalities or massive deployments of short-range Wi-Fi access points. Therefore, the technology may not be able to fulfill these wishes fully. LPH linked these wishes to a number of clinical applications, but also recognized that for other applications more relaxed requirements were sufficient, e.g., sub-department level for providing an overview of assets. LSM stated that for way-finding via in-app indoor maps they had received positive reactions from customers, even though the current positioning accuracy was coarser than room-level.






[1] scholar.google.com, Dec. 2012.
[2] M. B. Kjærgaard, "A Taxonomy for Radio Location Fingerprinting," in LoCA, 2007, pp. 139–156.
[3] I. Constandache, R. R. Choudhury, and I. Rhee, "Towards mobile phone localization without war-driving," in INFOCOM, 2010, pp. 2321–2329.
[4] T. Sohn, K. A. Li, G. Lee, I. E. Smith, J. Scott, and W. G. Griswold, "Place-its: A study of location-based reminders on mobile phones," in UbiComp, 2005, pp. 232–250.
[5] www.ekahau.com, Dec. 2012.
[6] www.aeroscout.com, Dec. 2012.
[7] maps.google.com, Dec. 2012.
[8] www.theage.com.au/digital-life/smartphone-apps/indoor-gps-every-step-you-take-every-move-you-make-googles-got-maps-for-you20121115-29e1b.html, Dec. 2012.
[9] P. Bahl and V. N. Padmanabhan, "RADAR: An In-Building RF-Based User Location and Tracking System," in INFOCOM, 2000, pp. 775–784.
[10] R. Nandakumar, K. K. Chintalapudi, and V. N. Padmanabhan, "Centaur: locating devices in an office environment," in MobiCom, 2012, pp. 281–292.
[11] J.-G. Park, B. Charrow, D. Curtis, J. Battat, E. Minkov, J. Hicks, S. J. Teller, and J. Ledlie, "Growing an organic indoor location system," in MobiSys, 2010, pp. 271–284.
[12] H. Wang, S. Sen, A. Elgohary, M. Farid, M. Youssef, and R. R. Choudhury, "No need to war-drive: unsupervised indoor localization," in MobiSys, 2012, pp. 197–210.
[13] K. Chintalapudi, A. P. Iyer, and V. N. Padmanabhan, "Indoor localization without the pain," in MobiCom, 2010, pp. 173–184.
[14] W. Griswold, P. Shanahan, S. Brown, R. Boyer, M. Ratto, R. Shapiro, and T. Truong, "ActiveCampus: experiments in community-oriented ubiquitous computing," Computer, vol. 37, no. 10, pp. 73–81, 2004.

- chapter 12 -

Aerospace


The workspace Measuring and Positioning System (wMPS) — an alternative to iGPS

Bin Xue, Jigui Zhu*, Yongjie Ren, Jiarui Lin
State Key Laboratory of Precision Measuring Technology and Instruments
School of Precision Instrument and Opto-Electronics Engineering, Tianjin University
Tianjin, China
[email protected]

I. INTRODUCTION

The workspace Measuring and Positioning System (wMPS) is a modular, large-volume tracking system enabling factory-wide localization of multiple objects with metrological accuracy, applicable in manufacturing and assembly [1,2]. Like iGPS [3], the wMPS consists of a network of transmitters, a control center and a number of receivers. Moreover, the wMPS possesses all the distributed characteristics that iGPS features, including sharing the measurement task across a network of transmitters and tracking an unlimited number of targets simultaneously. The localization principle of the wMPS is the multi-plane constraint: the position of a receiver can be determined from several laser planes intersecting at it. iGPS, in contrast, is based on triangulation: each transmitter presents two measurement values to each receiver, the horizontal (azimuth) and the vertical (elevation) angles, and receivers can calculate their position whenever they are within the line of sight of two or more transmitters [4]. Apart from the localization principle, the wMPS and the iGPS have the same characteristics and advantages in accuracy and functionality.

II. LOCATING PRINCIPLE

The components of the wMPS are illustrated in Fig.1.

transmitter from another, different rotating velocities are assigned to the transmitters in the workspace. The moment a laser beam sweeps over a receiver, the pre-processor attached to the receiver records the elapsed time from the start of the rotation by accumulating pulses. Several planes passing through a receiver then locate the position of the receiver, see Fig. 1.
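The multi-plane constraint amounts to a small linear solve: each laser plane that sweeps the receiver contributes one equation n · x = d on the receiver position. The sketch below intersects exactly three planes via Cramer's rule; an actual wMPS solver would fit all available planes in a least-squares sense, and the plane parameters shown are invented for illustration.

```python
# Illustrative sketch of the multi-plane constraint: three independent planes,
# each given as (normal, d) with normal . x = d, fix the receiver position.

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def intersect_planes(planes):
    """planes: three (normal, d) pairs; returns the unique intersection point."""
    a = [list(n) for n, _ in planes]
    b = [d for _, d in planes]
    det = det3(a)
    point = []
    for i in range(3):
        m = [row[:] for row in a]
        for r in range(3):
            m[r][i] = b[r]  # Cramer's rule: replace column i with b
        point.append(det3(m) / det)
    return tuple(point)
```

For instance, the planes x = 1, y = 2 and z = 3 intersect at (1, 2, 3); with more than three planes one would minimize the squared plane-to-point residuals instead.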

III. THE APPLICATION AREA OF THE WMPS

The workspace Measuring and Positioning System (wMPS) can be applied in many measuring and manufacturing areas in industry. For instance, the wMPS has been successfully applied in an airplane level measurement project. Other projects, such as providing high absolute accuracy to an industrial robot, guiding an AGV (Automatic Guided Vehicle) moving through a workshop, and an assembly project in shipbuilding, are also in progress.

IV. THE ACCURACY EVALUATION OF THE WMPS

To evaluate the accuracy of the wMPS, we set up the following experiment. First, we set up the wMPS with the scale bar. Second, we sampled at least three correspondences in both the wMPS and the laser tracker frames in order to obtain the relationship between the two. Third, we used both the wMPS and the laser tracker to measure the test points. Because the receiver was designed with the same diameter as the SMR of the laser tracker, measuring the same point in both frames is feasible. Six correspondences were sampled, and the results are listed in Table I.

Table I. Data comparison between the Laser Tracker and the wMPS

Figure 1. The components and the locating principle of the wMPS

Fig. 1 presents the typical components of the wMPS. Each transmitter has a rotating head with two lasers mounted on it, which emit plane-shaped beams. To distinguish one

         X (mm)     Y (mm)     Z (mm)   Error (mm)
Nominal  5880.18    -3213.32   127.60
Actual   5880.22    -3213.29   127.70   0.11
Nominal  7631.58    -5720.73   152.48
Actual   7631.50    -5720.86   152.47   0.15
Nominal  7840.26    -5626.45   559.75
Actual   7840.41    -5626.32   559.67   0.22
Nominal  6006.25    -3468.72   534.27
Actual   6006.17    -3468.65   534.24   0.11
Nominal  10483.85   -939.81    535.94
Actual   10483.66   -939.76    535.91   0.20
Nominal  11701.76   -3006.05   558.46
Actual   11701.85   -3005.95   558.48   0.14

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

The data under the heading Error are the distances between the nominal and the actual coordinates. The distance between the nominal coordinates provided by the laser tracker and the actual coordinates provided by the wMPS is used to measure the accuracy of the wMPS; the smaller the distance, the better the accuracy.
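The Error column can be reproduced from Table I: it is the Euclidean distance between the nominal and actual coordinates of each point. A quick check in Python (values transcribed from the table; the computed distances match the reported ones to within the rounding of the published coordinates):

```python
import numpy as np

# nominal (laser tracker) and actual (wMPS) coordinates in mm, from Table I
nominal = np.array([[5880.18, -3213.32, 127.60],
                    [7631.58, -5720.73, 152.48],
                    [7840.26, -5626.45, 559.75],
                    [6006.25, -3468.72, 534.27],
                    [10483.85, -939.81, 535.94],
                    [11701.76, -3006.05, 558.46]])
actual = np.array([[5880.22, -3213.29, 127.70],
                   [7631.50, -5720.86, 152.47],
                   [7840.41, -5626.32, 559.67],
                   [6006.17, -3468.65, 534.24],
                   [10483.66, -939.76, 535.91],
                   [11701.85, -3005.95, 558.48]])

# Euclidean distance per point; agrees with the reported 0.11...0.22 mm
# to within the 0.01 mm rounding of the published coordinates
errors = np.linalg.norm(nominal - actual, axis=1)
```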

extrinsic parameters calibrated in one area becoming inaccurate in another area, and the correspondence points not being dispersed as widely as possible. These factors should be studied one by one, both theoretically and experimentally. The objective is to achieve the best accuracy the wMPS can provide in the desired area, so as to meet the requirements of industrial measurement.

We see from the data under the heading Error that the maximum error is 0.22 mm and the minimum is 0.11 mm. That is to say, the accuracy of the wMPS is approximately 0.2 mm. We point out that these results were obtained in a reasonably well controlled environment. When conditions in the workshop are complex enough to constrain the placement of the transmitters, the accuracy may deteriorate.

V. THE DETERIORATION OF THE ACCURACY TOWARDS THE BOUNDARIES OF THE OPTIMUM MEASURING AREA OF THE WMPS

The contradiction between accuracy and measuring area is a constant topic in large-scale measurement [5]. Although the wMPS has the potential to resolve this contradiction through its distributed architecture, several problems remain unclear and unsolved. For example, adding more transmitters to the network can enlarge the measuring area, but how should they be placed to provide the optimum accuracy for specific measurands? How should the scale bar be placed during calibration to achieve the best calibration accuracy? When converting the current coordinate frame to that of a workpiece, how should the correspondence points be placed to obtain the optimum conversion accuracy? These urgent problems need to be solved both theoretically and practically. In this section, we present an experimental result that reflects these effects in a nutshell.

In the workshop, we placed three transmitters according to the environment, and a laser tracker to provide the reference. We set up the wMPS with the scale bar, and then obtained several correspondences in both the wMPS and the laser tracker frames to establish the transformation between the two. Obtaining the correspondences is made possible by the receiver and the SMR having the same diameter; this also allows coordinate comparisons that reveal the relationship between accuracy and measuring area. The results are illustrated in Fig. 2.

In Fig. 2, the area where the scale bar is located to calibrate the wMPS is marked by the yellow crosses and the scale bar icons. The pink crosses represent the correspondences used to establish the transformation between the wMPS and the laser tracker frames. The laser tracker itself is not marked in the figure. The blue crosses, together with their numbers, represent the points used to test the accuracy at each location. The area where the transmitters' beams intersect best gives relatively better accuracy; it is also the area where the scale bar is located during the calibration procedure. As shown in Fig. 2, the accuracy deteriorates significantly outside the boundaries of this area. The deterioration may be caused by several factors, such as poor intersection geometry, the accurate

Figure 2. The deterioration of the accuracy towards the boundaries of the optimum measuring area of the wMPS

VI. CONCLUSION

In this paper, we first introduced the wMPS briefly, and then offered an accuracy assessment by comparison with a laser tracker. The results show that the wMPS is able to achieve an accuracy of around 0.2 mm in a reasonably well controlled environment. Finally, we pointed out that, as a distributed large-scale measurement system, the accuracy of the wMPS differs from area to area. This characteristic is caused by several factors, and effectively controlling them so as to achieve the required accuracy in the area of interest remains an open problem.

REFERENCES

[1] Z. Xiong, J. Zhu, Z. Zhao, X. Yang, and S. Ye, "Workspace measuring and positioning system based on rotating laser planes," Mechanika, vol. 18, pp. 94-98, 2012.
[2] B. Xue, J. Zhu, Z. Zhao, J. Wu, Z. Liu, and Q. Wang, "Validation and mathematical model of workspace Measuring and Positioning System as an integrated metrology system for improving industrial robot positioning," Proc. IMechE Part B: Journal of Engineering Manufacture, in press.
[3] F. Franceschini, M. Galetto, D. Maisano, L. Mastrogiacomo, and B. Pralio, Distributed Large-Scale Dimensional Metrology: New Insights. London: Springer, 2011.
[4] S. Kang and D. Tesar, "A noble 6-DOF measurement tool with indoor GPS for metrology and calibration of modular reconfigurable robots," IEEE ICM International Conference on Mechatronics, Istanbul, Turkey, 2004.
[5] M. Younis and K. Akkaya, "Strategies and techniques for node placement in wireless sensor networks: a survey," Ad Hoc Networks, vol. 6, pp. 621-655, 2008.

- chapter 13 -

User Requirements


Key Requirements for Successful Deployment of Positioning Applications in Industrial Automation

Linus Thrybom, Mikael Gidlund, Jonas Neander, Krister Landernäs Corporate Research ABB AB Västerås, Sweden

Abstract—Positioning and navigation applications have so far mainly targeted the consumer market, but are now beginning to penetrate the industrial automation domain. The requirements in this environment are quite different from those of the consumer market, but must nevertheless be met before such systems can be used in the automation domain. A failure in the positioning system may cause substantial production losses and could even be fatal, which is the reason for the generally stricter requirements. This paper describes the industrial usage needs and defines the industrial requirements, such as environmental, availability, safety, technical, usability and cyber security requirements, that industrial positioning solutions need to support. Furthermore, the requirements are compared with the state of the art in order to identify current gaps that need to be researched and solved. The paper concludes that there are large gaps in several areas, and that these gaps need to be managed before large-scale industrial adoption of positioning systems can be achieved. The paper furthermore concludes that the research community plays an important role in supporting future industrial automation.

Keywords: industrial; automation; requirements; positioning

I. INTRODUCTION

Industrial use of positioning systems is still at a low level compared to the consumer market, which has adopted position and navigation data in many different application areas. However, positioning technologies are attracting increased interest from the industrial automation perspective as well. The automation level achieved in a process depends on what data is available, and positioning data will play an important role in further automating industrial processes. The largest interest in industrial positioning systems relates to autonomous systems and processes, including vehicles and mobile devices. Autonomous systems have been in place for some time, but only in very dedicated applications such as automatic warehouse trucks. The automated factory is usually operated from a control room, and the process is autonomous from the input of raw

material to the output of the final product. However, the transport of the raw material into the automated factory, for example, is today often not automated or integrated with the main process. One example is the mining industry, where the actual ore extraction is performed remotely from the ore processing. If the quality and amount of raw material can be measured and controlled, this can be used to improve the final product quality of the plant. Such an extended automation process would also provide a tool for better usage of the machines and other assets involved. Additional future scenarios of industrial automation include factories in environments that are remote or unfriendly to humans. In these future factories, autonomous operation will require a new set of positioning systems for industrial applications.

II. INDUSTRIAL EXAMPLES

Positioning is today used in, e.g., mining, harbor and port systems, as well as in the oil and gas industry. The ABB 800xA system plays an important role in process plants by collecting detailed status information from all connected smart instruments, including wireless sensors. Integration and analysis of such data improves operation and process quality.

A. Mining

The mining industry is probably the industry that has progressed furthest in using positioning systems. The risk to human health in underground mines has pushed for systems that are able to locate people in case of an emergency evacuation due to hazardous gases or fire; the knowledge that someone is still in the hazardous zone is critical information for the rescue team. These systems are often limited in accuracy, but provide enough detail to know in which area a person is located. Active and passive RFID solutions are the most common solutions used today. Still, the degree of automation and data integration is less utilized in underground mining applications, but this is foreseen to change. The reason is that the ores require more and more effort to extract; one necessary way to continue to increase productivity is to increase


automation and data integration [1]. It is in this change that further automated mining systems, as well as more accurate positioning systems, play an important role, both for human safety and for higher productivity.

A second use case for positioning in the mining industry is open pit mining and fleet management systems. Open pit mining requires exact positioning data for its operation, e.g. for drilling, shoveling and surveying. In fact, the drilling process can be improved significantly when the drill position is known down to centimeter level. As open pit mine areas stretch over larger and larger areas, with longer and longer distances between the ore and the process plant, ore logistics becomes an important factor in the process. It becomes an interesting and valuable optimization problem when taking all aspects of ore quality, vehicle condition, position and speed into account. To some extent this transition has started in underground mining: for many years, the ore has been extracted by remotely controlled machines, operated far from the actual mine face in a modern industrial control-room fashion.

mechanical vibrations and electrical EMC disturbances. Functional safety requirements are of increasing importance and apply to an increasing set of applications. Usability requirements include, e.g., that users wear gloves and helmets and may work in a noisy and dirty environment, which impacts the user interface and user interaction with a system. Cyber security is an extremely important area, since malicious positioning data could lead to collisions, broken equipment and production stops. Authentication and integrity are the key cyber security requirements for the industry. Other typical industrial requirement areas are listed below:

- Availability: An industrial system often has high availability requirements, typically 99.999%, which can partly be achieved using redundancy solutions and strict verification processes.
- Scalable coverage: It is important that the system is able to scale when more sensors are added or changes are made to the infrastructure.
- Standardized equipment: Customers often require that the equipment used is standardized. This allows them to use different vendors.

B. Shipping Port

Another application that benefits from a positioning system in conjunction with autonomous operation is the handling of shipping containers in a port. One example is [2], which provides good insight into the industrial requirements, e.g. regarding reliability and integrity, and resulted in a positioning system consisting of two independent loops and four sensors based on four different physical principles. The safety requirement is seen in [2] as one of the most important issues, and is addressed by basing the safety system on both "defense in depth" and independence principles. The mixed use of low-frequency absolute positioning and a high-frequency rate sensor was proposed in [3] and is one way to match the industrial requirements.

C. Oil & Gas

Remotely operated vehicles used for subsea platform inspection use GPS together with depth as their primary source of position. During the drilling process, the position of the wellbore is also monitored. Access control as well as personnel health and safety applications on off-shore platforms are also important positioning functions. Future challenges include automating pipe inspections and valve handling on the platform itself, but it should be noted that this environment is very harsh.

III. INDUSTRIAL REQUIREMENTS

A. Industrial Usage Needs & Requirements

The industrial requirements reflect the high cost of production losses and the high demands on productivity and efficiency. The exact requirements depend to a large extent on the application, and span areas such as environmental, availability, safety, cyber security and usability requirements. The environmental requirements include, e.g., dust, high and low temperatures, humidity, corrosive gases in the air,

- Robustness: The devices will operate in harsh environments, often with extreme heat and humidity. In some applications the devices are required to be ATEX and Safety Integrity Level (SIL) 3 ready.
- System latency: The application typically contains several control loops, which require data to be available in time.
- Simplicity: The equipment should be easy to deploy and maintain.
- Retrofit: Software components, e.g. maps, should be easy to integrate with existing control systems.
- Life cycle: Product life cycles are generally long, from 15-20 years up to 40 years of lifetime in some applications.
- Cost: The cost per device, per system and over the whole lifetime is in the end a major decision factor for investing in a positioning system.

The industrial requirements may be easy to achieve one by one, but in a majority of applications most of these requirements are mandatory all together. Some applications require additional third-party certification. Bringing all these requirements into an industrial positioning application means, e.g., that the positioning device should run safely in a harsh environment for 15 years and be unavailable for less than 5 minutes per year with a guaranteed accuracy. Most available positioning systems are radio based, and the harsh industrial radio environment then becomes another challenge. This is discussed further in the next section.
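The relation between the 99.999% availability target and the "unavailable < 5 minutes / year" budget quoted above is a one-line calculation:

```python
def annual_downtime_minutes(availability):
    """Downtime budget per year implied by an availability target."""
    return (1.0 - availability) * 365.25 * 24 * 60

print(round(annual_downtime_minutes(0.99999), 2))  # -> 5.26 minutes/year
```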



Electromagnetic interference in industrial facilities comes from industrial equipment and coexisting wireless networks. EMC problems originate from sources such as breakers, electric motors, welding and industrial processes [3], [4]. On some occasions these disturbances can produce delays in the communication, due to retransmission and re-synchronization, that cause a blockage in the production system and in some cases safety hazards for personnel. Figure 1 shows that the majority of the impulsive interference occurs in the low-frequency region (typically below 1.5 GHz).


B. Radio Communication in Harsh Industrial Environments

Using wireless systems in industrial automation is becoming increasingly popular, since it brings several benefits such as easy maintenance, lower installation cost and flexibility. However, the radio environment in industrial automation is usually rougher than the consumer environments for which most existing wireless systems are designed.


Figure 2: Normalized received power in the 2.4 GHz frequency band at the Garpenberg (Boliden) and Iggesund facilities.

In a multipath environment, the first impinging component might be weaker than the strongest component, and this causes problems for position determination in TOA- or TDOA-based solutions. Figure 3 shows the power delay profile (PDP) for both LOS and NLOS scenarios in a real iron mine in Sweden [6]. The PDP of an RF signal is used to highlight the characteristics of a signal received in a multipath environment. By studying the PDP in Figure 3, it becomes apparent that in the LOS case the first impinging component is also the largest, and its arrival time is resolvable. For the non-LOS (NLOS) case, however, there is no apparent relationship between the first impinging component and the largest amplitude. The NLOS signal is also much more dispersed over time, which results in a higher RMSDS. The results in Figure 3 hint that positioning in underground environments using wireless systems is a major challenge.

Figure 1: Electromagnetic interference at low frequencies in a paper mill.

Industrial environments are full of highly reflective metal surfaces and moving or static physical obstacles. The high reflectivity of the surroundings produces a large number of signal copies, whose superposition at the receiver can be constructive or destructive. In addition, the sheer size of indoor industrial facilities results in a large root mean square delay spread (RMSDS), which causes another destructive phenomenon in wireless propagation, intersymbol interference (ISI) [5]. Figure 2 shows the normalized received power over time for a tumbling mill and a paper production facility. It can easily be seen that the signal fluctuates strongly in the tumbling mill application, while it is rather static in the paper production area. However, in the paper production area another problem occurs quite often: huge trucks parked in front of the wireless equipment create shadow fading, and in these cases the received signal drops by 30-40 dB. It is common in industrial environments that both small-scale and large-scale fading occur, which means that wireless systems need to be able to guarantee high reliability.
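The RMSDS mentioned above is the second central moment of the power delay profile. A minimal sketch of the standard computation (the two-path example values are illustrative, not the measured profiles from [6]):

```python
import numpy as np

def rms_delay_spread(power, tau):
    """RMS delay spread of a power delay profile.

    power: linear power of each multipath component
    tau:   excess delay of each component (seconds)
    """
    power = np.asarray(power, dtype=float)
    tau = np.asarray(tau, dtype=float)
    mean_delay = np.sum(power * tau) / np.sum(power)
    return np.sqrt(np.sum(power * (tau - mean_delay) ** 2) / np.sum(power))

# Two equal-power paths 100 ns apart -> RMSDS of 50 ns
print(rms_delay_spread([1.0, 1.0], [0.0, 100e-9]))  # about 5e-08 s (50 ns)
```

A larger RMSDS relative to the symbol duration means more ISI, which is why the sheer size of industrial halls is itself a propagation problem.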

Figure 3: Normalized PDP for a 500 MHz pulse with 2450 MHz center frequency in LOS and NLOS at 18 m distance in the mine tunnel [6].


In conclusion, the dynamic radio environment in industrial plants is a challenge for any wireless equipment, including positioning systems. Guaranteeing consistent performance with the highest reliability under these circumstances is not an easy task.

IV. GAPS

When comparing the state of the art in positioning with the industrial requirements and usage needs, a number of areas turn out to be underrepresented; these are explained and discussed below.

A. RSS

A large share of articles addresses the different methods and principles around RSS, as well as the related fingerprinting methods. Techniques like RSS may work fine in a shopping mall, but not in an industrial environment. An industrial environment often contains large obstacles such as machines, material and containers. Some of these objects are moved around during operation as part of the process. Additionally, various types of vehicles are used to transport material and products in the process. This creates a highly dynamic RF environment, with rapidly changing signal strengths and alternating LOS/NLOS conditions. As a result, these objects make the RSS fingerprint very dynamic in an industrial environment, and in fact unusable. The same view, i.e. that there is no good attenuation model for this type of environment, is also the conclusion of [7]. Other, non-RSS methods must therefore be investigated to a higher degree, and new methods better suited to industrial environments need to be developed.

B. Security

A second area where there is a research gap is the security aspect of positioning techniques, and the related personnel integrity. Personal integrity is a soft aspect of positioning that we already face; the challenge is to use the information in the right way. Locating personnel is necessary in an emergency situation, but continuous and detailed localization of all personnel may not be allowed in other cases. Cyber security threats are valid for all communication systems, including positioning systems. The actual communication path can be protected using existing methods and principles, but for, e.g., TOA, other techniques may be required to secure authentication and integrity. The positioning system would need to authenticate that the position signal originates from the correct source, that there is no replay, and that no one has modified the data. Research on these techniques is today largely missing.

C. Functional Safety

A third area that requires more attention is functional safety. This implies that the protocols and methods used are robust, have high availability and are easy to verify. Since positioning systems are often used together with mobile vehicles and devices, a failure in the positioning system can obviously result in collisions, with the risk of both high material costs and human losses. The simple solution of just stopping a device or

vehicle may only be a temporary solution, since a process stop would impact productivity and may cause high economic losses.

D. High Availability

A fourth area that needs more interest is the high availability required of a positioning system. One common way to achieve availability of up to 99.999% is redundancy. Solutions such as redundant processing, redundant I/O and redundant communication are common in many industries today, mainly in order to fulfill the availability requirements. For positioning systems there are few, if any, research projects targeting high availability or redundant solutions. Technologies that complement each other in cooperation, but also act as backup if their partner fails, would be highly appreciated topics in the research community. The important property is that the user can trust that even if one antenna or cable breaks, the system will still operate. This is of additional importance when the application includes mobile vehicles or systems that move and can thus more easily break, e.g., cables.

V. CONCLUSION

Although many positioning systems exist, very few fulfill the requirements of industrial automation. In this paper we point out some important key requirements and gaps that need to be addressed in future research so that positioning can be used to a larger extent than today. The most important requirements are functional safety, security, high availability and high accuracy. There is a huge business opportunity for positioning in industrial automation if several of the aforementioned gaps can be closed.

REFERENCES

[1] S. L. Sjöstrom, K. G. Carlsten, K. Landernäs, and J. Neander, "Mine of information," ABB Review, no. 2, 2013, www.abb.com/abbreview.
[2] H. Durrant-Whyte, D. Pagac, B. Rogers, M. Stevens, and G. Nelmes, "Field and service applications: An autonomous straddle carrier for movement of shipping containers. From research to operational autonomous systems," IEEE Robotics & Automation Magazine, vol. 14, no. 3, pp. 14-23, Sept. 2007.
[3] J. Ferrer-Coll, J. Chilo, and P. Stenumgaard, "Outdoor APD measurements in industrial environments," in Proc. AMTA 2009, Salt Lake City, USA, Nov. 2009.
[4] P. Ängskog, C. Karlsson, J. Ferrer-Coll, J. Chilo, and P. Stenumgaard, "Sources of disturbances on wireless communication in industrial and factory environments," in Proc. Asia-Pacific Symposium on Electromagnetic Compatibility, Beijing, China, April 2010.
[5] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005.
[6] J. Ferrer-Coll, P. Ängskog, J. Chilo, and P. Stenumgaard, "Characterization of electromagnetic properties in iron-mine production tunnels," Electronics Letters, vol. 48, no. 2, pp. 62-63, 2011.
[7] M. Cypriani, F. Lassabe, P. Canalda, and F. Spies, "Open Wireless Positioning System: a Wi-Fi-based indoor positioning system," in Proc. IEEE 70th Vehicular Technology Conference (VTC 2009-Fall), Sept. 2009.

- chapter 14 -

Security and Privacy

- chapter 15 -

Ultra Wide Band

Texture-Based Algorithm to Separate UWB-Radar Echoes from People in Arbitrary Motion

Takuya Sakamoto, Toru Sato

Pascal J. Aubry, and Alexander G. Yarovoy

Graduate School of Informatics Kyoto University, Kyoto, Japan Email: [email protected]

Microwave Sensing, Signals and Systems Delft University of Technology, Delft, the Netherlands

Abstract—This study proposes a novel algorithm for separating multiple echoes using texture information of radar images. The algorithm is applied to measurement data and shown to be effective even in scenarios with targets whose motion varies over time. Its performance is investigated through application to ultra-wideband radar measurement data for two walking persons.

I. INTRODUCTION

An ultra-wideband (UWB) radar system is a promising sensing tool for indoor navigation because it provides high-resolution range and Doppler information. The range information enables tracking of people, whereas micro-Doppler information has proven effective in estimating the action of each person [1]-[9]. However, these conventional studies all assume a single person in the image data; an effective algorithm is needed for separating multiple targets in the scene. One such technology is multiple hypothesis tracking (MHT) [10], which employs a Kalman filter and a multiple-hypothesis technique redesigned for human tracking. Although this technique can estimate multiple trajectories of people, each trajectory is represented as a curve that does not define the actual region corresponding to the target in the radar image. Thus, this method does not actually separate the received signals into multiple components to which single-target algorithms can be applied. In this paper, we propose a new algorithm for separating echoes from multiple persons. The method analyzes the texture of the radar image in the slow time-range domain, using a texture angle that corresponds to a target's line-of-sight speed. Next, we calculate a pixel-connection map in which each pixel is connected to another pixel that has the closest texture angle. Finally, randomly distributed complex values are numerically propagated to the adjacent connected pixels. This algorithm works autonomously even for targets with time-varying motion. Specifically, we demonstrate that our algorithm can successfully separate echoes from two people walking at different and time-changing speeds.

II. PROPOSED SEPARATION ALGORITHM OF ECHOES

The proposed method consists of three steps. First, we calculate the texture angle of the signal. Second, we obtain a pixel-connection map between pixels of the texture angle image. Third, we apply the connection propagation algorithm to the pixel-connection map to separate multiple echoes.

A. Texture Angle for Radar Echoes

We propose the texture angle of radar images for estimating the approximate line-of-sight velocities of targets. Unlike a spectrogram, the texture angle can estimate the Doppler velocity for each pixel of the image. In general, the echoes of different targets have different texture angles, unless those targets are in exactly the same motion. We define the texture angle of a slow time-range radar image as

  θ(t, r) = tan^{-1}( v_0 (∂s(t, r)/∂r) / (∂s(t, r)/∂t) ),   (1)

where s(t, r) is the signal received at slow time t from range r. Note that v_0 is introduced to make the argument of tan^{-1} dimensionless.

B. Pixel Connection Map Based on Texture Angle

Next, we explain the procedure for obtaining the pixel-connection map, which corresponds to the second step of our proposed algorithm. In this map, each pixel is connected to another pixel that has the closest texture angle. For this calculation we use the texture angle of each pixel. Note that the texture angle is defined only if the intensity of the pixel is above a threshold; the following procedure applies only to pixels whose texture angle is defined. For the i-th pixel, the right-connected pixel is chosen as

  R_i = arg min_j |θ_j − θ_i|,   (2)

subject to

  t_i + T_s > t_j > t_i   (3)

and

  | tan^{-1}( (r_j − r_i) / (v_0 (t_j − t_i)) ) − θ_i | < δ.   (4)

Here, T_s is the window size for the search, and δ is a small angle. These conditions imply that the pixel connected to the i-th pixel is located on its right-hand side, and that the inclination of the line connecting the pair of pixels does not contradict the texture angle. Under these conditions, we choose the pixel whose texture angle is closest to that of the pixel of interest. Similarly, we calculate the left-connected pixel L_i, located on the left-hand side of the pixel of interest, using the same process as Eq. (2) but with a different time condition, t_i − T_s < t_j < t_i, instead of Eq. (3).
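A minimal numerical sketch of the texture angle of Eq. (1), using finite differences. The grid, the moving-Gaussian test signal, and the value of v_0 are illustrative choices, not the paper's data.

```python
import numpy as np

def texture_angle(s, dt, dr, v0):
    """Texture angle per Eq. (1): theta = atan(v0 * (ds/dr) / (ds/dt))."""
    ds_dt = np.gradient(s, dt, axis=0)   # slow-time derivative
    ds_dr = np.gradient(s, dr, axis=1)   # range derivative
    return np.arctan(v0 * ds_dr / (ds_dt + 1e-30))  # tiny term avoids 0/0

# Test signal: a Gaussian ridge moving away from the radar at v = 2 m/s
dt, dr, v, v0 = 0.005, 0.01, 2.0, 2.0
t = np.arange(0.0, 1.0, dt)
r = np.arange(0.0, 5.0, dr)
T, R = np.meshgrid(t, r, indexing="ij")
s = np.exp(-((R - 1.0 - v * T) ** 2) / 0.05)

theta = texture_angle(s, dt, dr, v0)
# For s = f(r - v t): ds/dr = f', ds/dt = -v f', so
# theta = atan(-v0 / v) = -pi/4 wherever the gradient is nonzero.
print(theta[100, 210])  # pixel at t = 0.5 s, r = 2.1 m (off the ridge crest)
```

The angle is constant over the whole echo of a constant-velocity target, which is what makes it usable as a per-pixel velocity label in the connection map of Eqs. (2)-(4).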


C. Complex Number Propagation Algorithm

Next, we introduce a method that automatically separates multiple echoes using the pixel-connection maps R_i and L_i calculated in the second step. The pixel-connection maps are not entirely accurate; pixels belonging to different targets can be erroneously connected. The algorithm proposed below benefits from statistical averaging effects to suppress such erroneous connections. It forms a new image by repetitively updating a few pixels at a time; we hereafter call this image the "connection propagation image", denoted I_n, where n = 0, 1, ... is the iteration number. First, we initialize the connection propagation image I_0: a uniformly distributed random variable 0 ≤ ψ < 2π is chosen independently for each pixel to generate a unit complex number e^{jψ}; if the corresponding amplitude of the pixel is below the threshold, a zero value is assigned instead.


Fig. 1. Schematic of measurement scenario with antennas and two people walking.

In each iteration, we randomly pick a pixel index i ∈ {1, 2, ..., M_p} from the connection propagation image, where M_p is the number of pixels in the connection propagation image. The pixels are then updated, if t_i ≤ (1 + α) T_max / 2, as

  I_n(t_i, r_i) = (I_{n−1}(t_i, r_i) + I_{n−1}(t_{R_i}, r_{R_i})) / 2,   (5)
  I_n(t_{L_i}, r_{L_i}) = (I_{n−1}(t_i, r_i) + I_{n−1}(t_{L_i}, r_{L_i})) / 2,   (6)

Fig. 2. Photo of measurement scenario.

and, if t_i > (1 − α) T_max / 2, as

  I_n(t_i, r_i) = (I_{n−1}(t_i, r_i) + I_{n−1}(t_{L_i}, r_{L_i})) / 2,   (7)
  I_n(t_{R_i}, r_{R_i}) = (I_{n−1}(t_i, r_i) + I_{n−1}(t_{R_i}, r_{R_i})) / 2,   (8)

where Tmax is the maximum slow time of the image. Eqs. (5) and (6) mean that the complex numbers propagate to the left if the chosen pixel is on the left half of the connection propagation image. In contrast, the complex numbers propagate to the right through Eqs. (7) and (8) for pixels on the right half. For i satisfying (1 − α)Tmax/2 < ti ≤ (1 + α)Tmax/2, all operations Eqs. (5)–(8) are applied, which means that complex numbers propagate in both directions. In this way, the initialized pixels around the center of the connection propagation image propagate to both sides along the connections established in the previous subsection. Echoes corresponding to different targets have relatively few connections between them, if any. This prevents the complex numbers from being mixed across adjacent pixels that belong to different targets. After n = Nmax iterations, we obtain the final connection propagation image. We use the phase of the final connection propagation image INmax(ti, ri) to separate the echoes.

III. RADAR MEASUREMENT SETUP AND DATA

We measured two walking persons using a PulsOn 400 radar system manufactured by Time Domain Corporation. The frequency band spans 3.1 to 5.3 GHz, and the signal is modulated by an m-sequence. The received data are compressed with the same sequence. The transmitted power is −14.5 dBm. The transmitting and receiving antennas are dual-polarized horn antennas (model DP240, manufactured by Flann Microwave Ltd.) with 2 to 18 GHz bandwidth. The antennas are separated by 50.0 cm.

The diagram of the scenario is shown in the lower part of Fig. 1. In this measurement, two persons walked back and forth along the same line. Target A walks from a point 1.0 m away from the antennas to a point 5.0 m away, then back to the original point. Target B walks from a point 3.0 m away from the antennas to a point 1.0 m away, then back to a point 5.0 m away, and then walks toward the antennas again. The range measurement repetition frequency is 200 Hz, and the sampling frequency is 16.39 GHz. The received signals are stored and processed offline. A photo of the measurement scenario is shown in Fig. 2.

IV. APPLICATION OF THE PROPOSED METHOD TO MEASUREMENT DATA

In this section, we apply the set of proposed algorithms to the measurement data: the texture angle, the pixel connection map, and the complex number propagation algorithm. For calculating the texture angle, v0 is set to 1.84 m/s. A 5 × 5 median filter is applied to the texture angle image to eliminate artifacts before calculating the pixel connection map. For the pixel connection map, we set Ts = 1 s and δ = 0.1 rad. For the complex number propagation algorithm, we set Th = 0.03 max |s(t, r)|, α = 0.1, and Tθ = π/20. The slow time-range radar image |s(t, r)| is shown in Fig. 3. The echoes intersect at two points, at approximately 3 s and 10 s. Next, we calculate the texture angle of the slow time-range image (Fig. 4). Each of the two echoes shows a smooth gradation in the texture angle, which means that the speeds of the targets change gradually. This characteristic is exploited by the proposed method to separate the two echoes. The proposed pixel connection map and complex number propagation algorithm are then applied to the texture angle image.
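The 5 × 5 median filtering step above can be sketched in plain NumPy (an illustrative stand-in; any standard median filter implementation would do):

```python
import numpy as np

def median_filter_5x5(theta):
    """Apply a 5x5 median filter to the texture-angle image theta to
    suppress isolated artifacts. Edge pixels use the shrunken window
    that fits inside the image."""
    out = np.empty_like(theta)
    H, W = theta.shape
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - 2), min(H, y + 3)
            x0, x1 = max(0, x - 2), min(W, x + 3)
            out[y, x] = np.median(theta[y0:y1, x0:x1])
    return out
```

A single-pixel outlier surrounded by smooth texture values is replaced by the local median and thus removed.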


Fig. 3. Slow time-range radar image |s(t, r)| measured for two people walking at time-changing speeds.

Fig. 4. Texture angle θ(t, r) calculated for two people walking with time-changing speeds.

Fig. 5. Iterations in segregating the radar image using the proposed method. The image at the top left is the initialized image. The image at the top right is the image after 2000 iterations. The other images are plotted after 4000, 6000, · · · iterations (every 2000 iterations).

The images in Fig. 5 show the iterative steps of the proposed method, in which the angle of the complex value associated with each pixel is displayed. In the first image, each pixel has a value independent of the others. As the iterations progress, the dominant colors in the middle of the image propagate toward both sides along the echo trajectories. Even at the intersection points, pixels located near each other are not necessarily connected in this algorithm. As a result, the colors propagate only to the correctly associated pixels in the image. Finally, most of the pixels are correctly segregated into two dominant colors, as seen in the final connection propagation image. The final connection propagation image after Nmax = 30000 iterations is shown in Fig. 6. This image indicates that the two targets are clearly separated by our algorithm. A histogram of this image can be used to determine the threshold for separating the two targets. Fig. 7 shows the histogram of the image. We see two significant peaks that correspond to the two targets. Thus, the proposed method does not require the number of targets to be known in advance; multiple echoes are autonomously separated into different colors in this image. In the same way, even if there are more than two targets, the image can be separated into more than two segments by setting multiple threshold values. Developing a method to find the optimal threshold values for this purpose is an important direction for future work. With the proposed algorithm, the signals in the image of Fig. 3 are for the most part clearly separated, as shown in Fig. 8, although some undesired components are visible in the lower image.
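The histogram-based separation described above could be implemented along these lines (a simple valley-between-two-peaks heuristic is assumed here; the paper does not specify how the threshold between the peaks is picked):

```python
import numpy as np

def split_by_phase(I, bins=64):
    """Split the nonzero pixels of the final propagation image I into
    two groups by thresholding the phase at the least-populated
    histogram bin between the two dominant peaks."""
    phase = np.angle(I[I != 0])
    hist, edges = np.histogram(phase, bins=bins, range=(-np.pi, np.pi))
    p1 = int(np.argmax(hist))                 # strongest peak
    masked = hist.copy()
    # suppress the neighbourhood of the first peak, then find the second
    lo, hi = max(0, p1 - bins // 8), min(bins, p1 + bins // 8 + 1)
    masked[lo:hi] = 0
    p2 = int(np.argmax(masked))               # second peak
    a, b = sorted((p1, p2))
    valley = a + 1 + int(np.argmin(hist[a + 1:b])) if b - a > 1 else a
    threshold = edges[valley]
    mask1 = (np.angle(I) < threshold) & (I != 0)   # target 1 pixels
    mask2 = (np.angle(I) >= threshold) & (I != 0)  # target 2 pixels
    return threshold, mask1, mask2
```

For more than two targets, the same idea extends to multiple valleys between multiple peaks.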

Fig. 6. Connection propagation image after applying the proposed method for 30000 iterations (in rad).

V. CONCLUSION

This paper proposes a new algorithm for separating multiple targets using a UWB radar. The proposed method calculates a texture angle to estimate an approximate line-of-sight speed of the target at each pixel of a signal image; targets with different speeds produce different textures in the slow time-range image. The texture angle is combined with the other proposed techniques, the pixel connection map and the complex number propagation algorithm. The pixel connection map links pixels that have similar texture angles, where a pair of pixels is connected only if their relative position is consistent with the corresponding texture angle. Finally, randomly initialized complex values are numerically propagated to adjacent connected pixels. This algorithm does



Fig. 7. Histogram of the connection propagation image in Fig. 6 (phase [rad] vs. frequency [pixels]).

Fig. 8. Separated echoes s1(t, r) and s2(t, r) using the proposed complex number propagation algorithm.

not require prior knowledge of the number of targets; the randomly assigned complex numbers automatically propagate and merge into multiple segments. We have demonstrated that the proposed algorithm can successfully separate the echoes of two targets with time-varying motion in a measurement with two walking persons.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

Experimental Evaluation of UWB Real Time Positioning for Obstructed and NLOS Scenarios

K. M. Al-Qahtani, A. H. Muqaibel, U. M. Johar, M. A. Landolsi, A. S. Al-Ahmari
Department of Electrical Engineering, King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia
{alqahtani, muqaibel, umjohar, andalusi, asahmari}@kfupm.edu.sa

Abstract—High resolution wireless positioning is attracting increasing interest because of its numerous foreseen applications. Ultra Wideband (UWB) technology promises high positioning resolution (sub-centimeter). The positioning accuracy can, however, be hindered by obstacles and Non-Line-of-Sight (NLOS) propagation. A hardware testbed was used to perform an NLOS and obstructed measurement campaign. A large number of measurements were collected and analyzed. The effects of large-scale and small-scale fading on the system performance are studied. The impact of blocking sensors with wood or aluminum plates is examined. The effects of covering tags with different materials such as wood, glass and a steel bowl are also considered. The results are useful for applications such as channel modeling, link budget analysis, and through-wall imaging.

Keywords- Wireless positioning; Ultra Wideband (UWB); Real Time Positioning; Non-Line-of-Sight (NLOS) Propagation

I. INTRODUCTION

Ultra wideband (UWB) systems are generally defined as systems that exhibit a transient impulse response. A UWB transmitter is defined as an intentional radiator that, at any point in time, has a fractional bandwidth greater than or equal to 0.2, or occupies a bandwidth greater than 500 MHz regardless of the fractional bandwidth [1-2]. Since these signals have very large bandwidths compared to conventional narrowband/wideband signals, they have narrow time-domain pulses, which offer very high positioning accuracy. UWB systems are excellent candidates for high resolution positioning and short distance, high data rate wireless applications. They offer a number of features: (i) low complexity and cost; (ii) a noise-like signal spectrum; (iii) resistance to severe multipath and jamming; (iv) very good time-domain resolution, allowing for location and tracking applications. These features are attractive for consumer communication systems. UWB technology supports the integration of communication and radar applications such as imaging and positioning [3-5]. UWB radars have attracted increased interest since the proposal by Scholtz [6] to use impulse UWB radio for personal wireless communication applications. Short-range wireless sensor networks (WSNs) are an emerging application of UWB technology [7-8]. The applications of UWB WSNs include inventory control,

tracking of sport players, medical applications, military applications, and search and rescue. A major challenge for UWB positioning in indoor applications is performance under obstructed and Non-Line-of-Sight (NLOS) scenarios. To evaluate the degree of degradation, a hardware testbed is used. A large number of measurements were collected and analyzed. The effects of large-scale and small-scale fading on the system performance are studied. The impact of blocking sensors with wood or aluminum plates is examined. The effects of covering tags with different materials such as wood, glass and a steel bowl are also considered. The rest of the paper is organized as follows. The details of the research testbed are presented in Section II. The experiment procedure and the results are presented in Section III. The paper concludes with some remarks.

II. SYSTEM MODEL

The installed positioning system is based on Ubisense®. This system delivers 15 cm 3D positional accuracy in real time. The Ubisense® real time location system (RTLS) can be divided into two main parts: the sensor network hardware and the location engine software platform.
1) The RTLS Hardware Part
The hardware part of the system consists of tags and sensors. Tags are portable devices that transmit UWB signals to be detected by the sensors; they are the moving parts whose positions are estimated by the system. Sensors receive the signals transmitted by the tags. They are organized in cooperating sets called location engine cells, each with a single master sensor and a number of slave sensors. In our research, we used four sensors; three are assigned as slave sensors while the fourth is the master sensor. The master sensor is the reference in time difference of arrival (TDOA) calculations. Sensors are connected by standard 100BASE-TX Ethernet cables to the master and to the platform server. Using a switch that supports Power over Ethernet (PoE), the sensors receive power over the networking cable. Figure 1 below shows the connections for a four-sensor location engine cell and the platform server. Note that the sensors require a Dynamic Host Configuration Protocol (DHCP) server to assign their network configuration.
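The TDOA principle behind this setup can be illustrated with a toy example (made-up sensor coordinates and a brute-force grid search; the actual Ubisense® location engine is proprietary and far more sophisticated):

```python
import itertools
import math

# Four sensors at the corners of an 8 m x 10 m cell; the master is first.
SENSORS = [(0.0, 0.0), (8.0, 0.0), (8.0, 10.0), (0.0, 10.0)]

def tdoa_residual(p, measured_dd):
    """Sum of squared errors between measured and predicted range
    differences (slave distance minus master distance) at position p."""
    d = [math.dist(p, s) for s in SENSORS]
    return sum((dd - (d[k + 1] - d[0])) ** 2
               for k, dd in enumerate(measured_dd))

def locate(measured_dd, step=0.25):
    """Coarse grid search over the cell for the best-fitting tag
    position (a real system would refine this, e.g. with Gauss-Newton)."""
    xs = [i * step for i in range(int(8.0 / step) + 1)]
    ys = [j * step for j in range(int(10.0 / step) + 1)]
    return min(itertools.product(xs, ys),
               key=lambda p: tdoa_residual(p, measured_dd))

# Simulated noiseless tag at (3.0, 4.0): range differences to the master.
true = (3.0, 4.0)
d = [math.dist(true, s) for s in SENSORS]
dd = [d[k] - d[0] for k in (1, 2, 3)]
print(locate(dd))  # → (3.0, 4.0)
```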



letters for ease of referencing. Measurements were conducted to examine large-scale and small-scale effects. Data collection and measurement execution go through different steps. First, the cell where the system is installed is identified. This step can be considered the planning phase; it includes different stages, starting with identifying the origin of the cell to be used as the reference for all test points. The origin should be selected to have LOS to all sensors. Then, from the origin of the cell, the positive directions of the x-axis, y-axis and z-axis are determined following the right-hand rule. Finally, the system is calibrated so that it can measure tag locations accurately.
Figure 1. Connections for a four-sensor location engine cell and the platform server

2) The RTLS Location Engine Software Platform
The Location Engine Software Platform (LESP) is the software used to collect real-time data from the sensors. The LESP has its own built-in algorithms that estimate the location of the tag. Calibration of the RTLS involves calibrating the background noise, the orientation of the sensors, and the cable offset. The noise level depends heavily on the environment. Calibrating a sensor requires fixing its three angles: elevation, azimuth and tilt (rotation). Prior to calibration, it is important to choose the origin and the coordinate system of the environment. The origin is defined to be approximately at the center of the lab. The right-hand rule is followed in determining the x, y and z axes.

A. Large Scale Effects
The objective of the large-scale analysis is to study the impact of large movements on the positioning accuracy. In general, measurements should be averaged to remove the impact of multipath and other small-scale effects. UWB signals are immune to multipath, as will be illustrated when studying the small-scale effects. Based on the calibrated reference point, a grid was sketched as shown in Figure 3. Data was collected for all possible points separated by a distance of 1.2 m in both the x and y directions. This should result in 56 points over the whole cell. However, due to blocking objects, only 49 points were taken; the missing points were considered NLOS scenarios.

Figure 2 shows part of the lab environment. After successfully installing the system, the location of the tag can be determined using the software.

Figure 3. Points distribution in the cell (top view)

Figure 2. Lab Environment

III. RESULTS AND DISCUSSION

Positioning experiments were performed covering the whole laboratory room (Building 59, Room 0020, King Fahd University of Petroleum & Minerals (KFUPM)) where the system was installed. The room has a length of 10.8 m and a width of about 8.7 m. There are five metallic tables and a column in the middle of the room. The dimensions are shown in Figure 3. In the same figure, the sensors are shown; the y-axis is labeled with numbers while the x-axis is labeled with

The error distribution of the x-component is shown in Figure 4, where the x-axis corresponds to the numbers 1-7 and the y-axis to the letters a-h depicted in Figure 3. Blue areas indicate the smallest error, whereas red areas indicate the largest error. Errors are given in cm, and the color bar to the right of Figure 4 indicates the error range and the corresponding colors. Most of the points were positioned with an error of less than 15 cm. Five points have errors exceeding 30 cm. The maximum positioning error was found to be 50 cm, which could be due to an obstructed or NLOS scenario; the maximum error occurs at the border of the cell. Figure 5 shows the error distribution of the large-scale data in the y-component. Most of the values are less than 15 cm. The error in the y-component is less than that in the x-component, as can

222/278

be noticed by comparing Figure 4 and Figure 5, where the maximum reached about 30 cm. Figure 6 shows the error distribution of the radial component, where the radial component is calculated as r = √(x² + y² + z²). The error in the radial component is composed of the x, y, and z components and is greater than any of them. The areas that suffer larger errors are those at the border of the cell or those close to the metallic tables. This is expected since, at the border, the tag is not seen properly by all the sensors, while the metallic tables act as reflectors.

B. Small Scale Effects and Multipath Immunity
The small-scale effect was tested on an area of about 0.607 m² near the origin, as shown in Figure 3. This area was divided into 25 points separated by a distance of 0.152 m in both the x and y directions. To study the small-scale effect, the spacing should be related to the wavelength of the carrier frequency; in the case of UWB, the center frequency may be used instead. The height (z-direction) was kept fixed at 1.43 m, which is close to the height of mobile phones during calls. Figure 7 shows the small-scale points and their estimated locations. Every two points with the same color and shape correspond to an exact point and its estimate; the darker point represents the exact position. Some of the points are estimated with errors of less than 10 cm. A few points are estimated with larger errors because they are not seen by sensor 1 due to the concrete pillar in the middle of the room; the signal is reflected, resulting in larger estimation errors. Most of the points suffer from a fixed bias, which could reflect a calibration error. We have observed that UWB positioning is immune to multipath and small-scale effects; the points that showed larger errors are due to pillar obstruction.

Figure 4. Error distribution in x-axis

Figure 7. Exact and estimated location of the tag (same shape & color are for one point)

In the following sections, the system performance is tested for different NLOS and obstructed scenarios, which are the major problems in high resolution positioning.
Figure 5. Error distribution in y-axis

C. Obstructed Sensors
The system performance was tested when one sensor was blocked by a wooden or an aluminum plate. The tag was placed at a location where a LOS reading with no blocking had already been taken, so that the blocking effects could be isolated. This point, (0, 0, 1.43) m, was kept fixed while each of the four sensors was blocked in turn; only one sensor was blocked at a time. Blocking any sensor with the wooden plate has less effect than with aluminum. The impact of blocking sensor 2 with wood is depicted in Figure 8: UWB signals can penetrate wood with reasonable fidelity. Aluminum resulted in larger errors.

Figure 6. Error distribution in r-component


Figure 11. The effect of covering the tag with steel and glass bowls (pdf of the range error: normal vs. steel+glass blocking)

Figure 8. The effects of blocking sensor 2 with wood plate

D. NLOS and Covered Tags
The system performance was tested when the tag was covered by different types of materials: a wooden box and bowls made of steel or glass (see Figure 9). The wooden box is 16 cm in length and width and 20 cm in depth, with a wall thickness of 2 cm. The steel bowl has a top diameter of 16 cm and a depth of 7 cm, with a thickness of 2 mm. The glass bowl has a top diameter of 12.5 cm and a depth of 5 cm, with a thickness of about 3 mm.

IV. CONCLUSION

The performance of a UWB real time location system was evaluated under several NLOS and obstructed circumstances. The error distribution throughout the room was examined. Data was collected and analyzed for large-scale and small-scale effects and for different scenarios such as NLOS, sensor blocking and tag covering. The error increases as one or more sensors are blocked. The wood plate attenuated the signal, but its effect is not as severe as that of the aluminum plate, which almost eliminates the contribution of the blocked sensor. The steel bowl had the greatest effect, followed by the glass bowl; the wood box had the least effect. The examined scenarios demonstrated immunity to multipath propagation.

ACKNOWLEDGMENT

Figure 9. Different materials used to cover the tag

In the following sets of figures, the effects of covering the tag located at the center of the room are presented. The point (d4) is chosen at coordinate (0, 0, 1.43) m, as shown in Figure 3; this point has LOS to all sensors. Covering the tag with the wood box shifts the mean of the error by almost 20 cm. When the tag is covered by the glass bowl, the mean of the error is shifted higher by almost 40 cm. Covering the tag with the steel bowl affects the error even more: the mean of the error is shifted by almost 60 cm, as shown in Figure 10. With the tag covered by the steel and glass bowls together, the error is shifted up by about 80 cm (see Figure 11). This is due to errors in all the components, mainly the x and y components.

The author(s) acknowledge the support provided by King Abdulaziz City for Science and Technology (KACST) through the National Science & Technology Unit at King Fahd University of Petroleum & Minerals (KFUPM) for funding this work under project # 08-ELE44-4-1 as part of the National Science, Technology and Innovation Plan.

REFERENCES

Figure 10. The effect of covering the tag with the steel bowl (pdf of the range error: normal vs. steel blocking)

[1] Federal Communications Commission, “First Report and Order, Revision of Part 15 of the Commission’s Rules Regarding Ultra Wideband Transmission Systems,” FCC 02-48, April 2002.
[2] M. G. D. Benedetto, T. Kaiser, A. F. Molisch, I. Oppermann, C. Politano, and D. Porcino, “UWB Communication Systems: A Comprehensive Overview,” EURASIP Book Series on Signal Processing and Communications, vol. 5, 2006.
[3] R. J. Fontana and S. J. Gunderson, “Ultra-Wideband Precision Asset Location System,” Proc. IEEE Conference on Ultra Wideband Systems and Technologies, 21-23 May 2002, pp. 147-150.
[4] N. S. Correal, S. Kyperountas, Q. Shi, and M. Welborn, “An UWB Relative Location System,” Proc. IEEE Conference on Ultra Wideband Systems and Technologies, 16-19 Nov. 2003, pp. 394-397.
[5] W. Chung and D. Ha, “An Accurate Ultra Wideband (UWB) Ranging for Precision Asset Location,” Proc. IEEE Conference on Ultra Wideband Systems and Technologies, 16-19 Nov. 2003, pp. 389-393.
[6] R. A. Scholtz, “Multiple Access with Time-Hopping Impulse Modulation,” Proc. MILCOM ’93, vol. 2, pp. 447-450, 1993.
[7] Z. Sahinoglu, S. Gezici, and I. Guvenc, Ultra-Wideband Positioning Systems: Theoretical Limits, Ranging Algorithms, and Protocols. Cambridge University Press, 2008.
[8] S. Gezici, Z. Tian, G. B. Giannakis, H. Kobayashi, A. F. Molisch, H. V. Poor, and Z. Sahinoglu, “Localization via ultra-wideband radios: A look at positioning aspects for future sensor networks,” IEEE Signal Processing Mag., vol. 22, no. 4, pp. 70-84, July 2005.

- chapter 16 -

RFID

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

Device-Free 3-Dimensional User Recognition utilizing passive RFID walls

Benjamin Wagner, Dirk Timmermann
Institute of Applied Microelectronics and Computer Engineering, University of Rostock, Rostock, Germany
[email protected]

Abstract—User localization information is an important data source for ubiquitous assistance in smart environments and other location-aware systems, and a major input for superimposed intention recognition systems. In typical smart environment scenarios, such as ambient assisted living, there is a need for non-invasive, wireless, privacy-preserving technologies. Device-free localization (DFL) approaches provide these advantages with no need for user-attached hardware. A common problem of DFL technologies is the distinction and identification of users, which is important for multi-user localization and tracking. By expanding the existing approaches to the 3rd dimension, it becomes possible to estimate user heights and body shapes, depending on the system's resolution. For that purpose we place pRFID transponders on the room's walls, giving us the possibility to generate a 3-dimensional wireless communication grid within the localization area. A person moving within this area typically affects the RFID communication, allowing us to use RSS-based algorithms. In this work we show the basic approaches and define system- and model-related adaptations. We conduct first experiments in an indoor-room DFL scenario for proof of concept and validation. We show that it is possible to recognize the height of a user with reasonable precision for future estimation approaches.

Keywords- DFL, RFID, Indoor Navigation, Smart Environments, Pervasive Computing, Wireless

I. INTRODUCTION

Recognizing a user within a smart environment is a big challenge in today's ubiquitous smart technology research. Position estimation, user recognition and intention recognition are the main steps for generating intelligent assistance. The sensors gathering this information need to be invisible and privacy preserving. For that purpose, much work has been done in the field of device-free localization (DFL), which utilizes wireless radio devices installed in the room, leaving the user without any attached hardware. In our recent work we introduced an approach for radio-based DFL that replaces most of the active radio beacons used in similar approaches with completely passive Radio Frequency Identification (RFID) transponders. This combines the advantages of energy efficiency, because the transponders do not need batteries, and very easy deployment: RFID transponders can easily be placed, e.g., under the carpet, on furniture or behind the wallpaper.

Another big advantage is the cost: RFID transponders can be purchased very cheaply, for as little as €0.20 per item. Based on that, multiple localization algorithms have been proposed in the past, providing positioning results with an error as low as 0.3 m in 2D scenarios [1–3]. The available approaches only calculate 2D results. But for superimposed intention recognition systems it is also important to know a user's vertical position, i.e., whether the user is lying on the ground, sitting on a chair or standing. Furthermore, the height of a user could give information about his identity or could help separate users in multi-user scenarios. In this paper we propose the use of 3-dimensional measurement setups (RFID walls) and adaptations of existing algorithms. We introduce the related work in Section II and explain our methods in Section III. The setup and the results of a first experimental validation are shown in Section IV, followed by our conclusions.

II. RELATED WORK

A. Passive RFID Positioning
Addressing the problems of energy efficiency and deployment complexity, we introduced an approach utilizing ground-mounted passive Radio Frequency Identification (RFID) tags for device-free radio-based recognition [1], [2], [4]. This work has shown that it is possible to calculate 2D user positions with remarkable accuracy [4] and low computational complexity. Typical RFID hardware provides fewer signal processing possibilities than the typically used wireless sensor nodes. For this reason, our measurements rely on the Received Signal Strength Indicator (RSSI), which can be regarded as a linear transformation of the original signal strength value. As shown in [5], the presence of the human body strongly affects the communication between the RFID reader hardware and the passive transponders. This can be modeled as [5]:

ΔP(Δd) = A e^(−B·Δd) cos((2π/λ)·Δd + φ)   (1)

with ΔP as the estimated RSSI change, Δd as the path difference, wavelength λ and phase shift φ. The parameters A and B are subject to the multipath fading properties of the experimental environment [6].


Therefore, the model needs to be re-adjusted for every new setup. The path difference between the Line-of-Sight (LOS) and the Non-Line-of-Sight (NLOS) path determines the relative position of a scattering user with respect to a specific communication link. The influence is shown in Fig. 1.

y = Wx + n,   (2)

with y as the vector of measured RSSI changes, W as the weight matrix, n as a zero-mean Gaussian noise vector and x as the vector of pixel attenuations, generating a tomographic picture of the measurement area. The algorithm can locate humans with a mean location error as low as 0.3 m. In [3] we propose multiple improvements for performance and online operation.
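One standard way to invert such a linear model is Tikhonov-regularized least squares (a common choice in radio tomographic imaging, shown here as an illustration, not necessarily the exact solver used in [3], [4]):

```python
import numpy as np

def rti_reconstruct(y, W, alpha=0.1):
    """Tikhonov-regularized least-squares estimate of the pixel
    attenuation vector x from the RSSI change vector y:
        x_hat = argmin ||y - W x||^2 + alpha * ||x||^2
              = (W^T W + alpha * I)^-1 W^T y
    """
    n = W.shape[1]
    return np.linalg.solve(W.T @ W + alpha * np.eye(n), W.T @ y)

# Toy example: 3 links, 2 pixels; only pixel 0 is attenuated.
W = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
y = W @ np.array([2.0, 0.0])   # noiseless measurements
x_hat = rti_reconstruct(y, W, alpha=1e-6)
print(np.round(x_hat, 3))
```

The regularization weight alpha trades noise suppression against spatial resolution of the tomographic image.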

Figure 1. Theoretical model regression and experimental data points from a multiple transponder scenario (RSS difference [dB] versus path length [m])

III. METHODS

For adding the height as a 3rd result dimension, both the measurement setup and the model need to be adapted.

A. Measurements
For the measurements we built “RFID walls” with a wall-mounted RFID transponder grid. Discretizing the height coordinate, we can define 2-dimensional layers within the squared measurement area. As mentioned in [4], these systems have a sender-receiver relation of

N_senders ≫ N_receivers,   (3)

because an RFID field typically contains a high number of transponders (regarded as senders) and a relatively low number of receivers (RFID antennas).

Based on this model different methods for the localization of users were investigated in the past:  Database based localization: minimizing a loglikelihood-function from the difference between an expected change of signal strength and the measurement. The results provide a maximum RMSE of 0.75 m[2].  Geometric localization based on Linear Least Squares and Intersection Points applied on the measured signal strength differences. The results provide lower accuracy at approximately 1.61 m, while having a lower computational complexity[2].  Training based approaches, e.g. Multi-layered Perceptrons (MLP) [5], [6]. A three layered MLP getting the RSSI differences into its input layer and providing a 2D user position out of the output layer. Evaluating different training functions and layered transfer functions it is possible to achieve accuracies as low as 0.01m MSE in a ground mounted pRFID scenario. B. Passive RFID Tomography In our recent work [4] wireless sensor network based radio tomographic imaging [7], [8] and RFID DFL were combined. The setup consists of waist-high mounted passive transponders placed around the discretized measurement area. The RFID antennas are placed directly behind the transponder lines to guarantee a maximum power transfer. The imaging result is calculated by using the model of Wilson et.al.[8]:

Transponder Voxel Communication Link RFID Antenna

Figure 3. Principle link structure in sectional view

Because simply integrating more antennas would increase the systems costs and reduce the advantage of cost efficiency we just use one receiver layer with 4 antennas situated at every mid-wall. The 3D measurement area is discretized into voxels generating a measurement picture for every height layer. In contrast to the 2D approach of [4] the transponder layers and the antenna layer are spatial separated. This has a great effect on the model, especially on the voxel-communication link allocation and weight matrix calculation. Figure 3 is showing the principle problem. The link density above and under the antenna layer is declining. This has effect on the weighting matrix. B. Adapted Model

(2) with as matrix of RSS differences in dB, W as precalculated weighting matrix for every pixel-link-combination,

The experimental area is defined by an image vector consisting of N pixels. When a person is affecting specific



links in that network (see Fig. 1), the attenuation is regarded as the sum of the attenuations each pixel contributes [4]. The attenuation is measured as the received signal strength for every transponder-antenna combination. Due to the RFID protocol [9] it is difficult to set a stable power value for every transponder. Therefore a two-phase measurement was conducted: a calibration phase with no user present and a measurement phase with a scatterer in the field. The measurement vector is built by

Δy = y_measurement − y_calibration    (4)

with signal strength y and RSSI difference vector Δy. The most important part of the RTI method is the image reconstruction, since the problem is ill-posed. The authors handle this by using regularization techniques. The resulting image estimate can be written as [1]:

x̂ = (Wᵀ·W + σ_N²·C⁻¹)⁻¹ · Wᵀ · Δy    (5)

In this formula, C denotes a covariance matrix providing information about the dependence of neighboring pixels due to a zero-mean Gaussian random field [10]:

C_ij = σ_x² · e^(−d/δ)    (6)

with the voxel-voxel distance d and a correlation term δ determining the impact of the dependence of neighboring pixels. We have to use a weighting model regarding only the backward link between transponder and antenna, because due to the experimental scenario a user can only affect this path; the forward link is regarded only as the sending power supply. Following the model of [11], it can be described as

w_ij = (1/√d_i) · { 1, if d_ij(1) + d_ij(2) < d_i + λ_e ; 0, otherwise }    (7)

for the backward link, where d_i is the Euclidean distance of link i between the transponder and the receiving reader antenna, and d_ij(1), d_ij(2) are the distances from the center of voxel j to the two link endpoints. The ellipse width surrounding each link is variable via λ_e. Dealing with the problem of inter-layer variation mentioned in III.A, we define the main imaging scale over all layers with a constant step size s.

C. Error Model

Most authors dealing with user recognition in the 2D area assume a cylindrical human model with radius r [5], [11]. This is not suitable for the 3D area, because the human body has a different shape with different reflection properties at every height layer. Typically the body center has the greatest effect on a horizontal communication link, while the influence of the user's head or legs is smaller. Therefore we define an extended ellipsoid of rotation with a height-dependent radius r(z) as a 3D human model. The reference image can be described as:

x_ref,j = { x_max, if ‖p_j − p_c‖ < r(z_j) ; 0, otherwise }    (8)

with p_c as the center of the reference object and p_j as the center of voxel j. Assuming this model, we can define the image-dependent mean squared error for comparison purposes as [11]

MSE = (1/N) · Σ_j |x̂_j − x_ref,j|²    (9)

with the calculated image x̂ and the number of all voxels N.

IV. EXPERIMENTAL VALIDATION

A. Experimental Setup

For the experimental validation of our approach we used a passive bistatic UHF RFID system from Alien Technology working in the 868 MHz ISM frequency band. We connected four linearly polarized antennas (G = 6 dB) to the ALR-8800 reader. We did not use circularly polarized antennas, because they have a higher attenuation and all transponders are placed in the same orientation; hence all tags are readable with the same quality. We placed 40 transponders at each of the 4 walls at the layer heights of [1.0, 1.3, 1.6, 1.9] meters, resulting in a total of 160 transponders. Each wall has a length of 2.7 meters, and in the center of the measurement area we define 13 possible user positions. Fig. 4 shows profile and top view of the setup. The antennas are situated 1.0 m behind the walls to guarantee the best possible energy transmission, owing to the specific antenna lobes.

Figure 4. Top and sectional view of the measurement setup (wall length 2.70 m, antennas 1.00 m behind the transponder walls, transponder layers at 1.00, 1.30, 1.60 and 1.90 m)
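The imaging chain of Eqs. (5)-(7) — ellipse-based link weighting, exponential voxel covariance, and regularized least-squares reconstruction — can be sketched for a single height layer as follows. The grid, the simplified two-wall link geometry and all parameter values are illustrative assumptions, not the configuration of the deployed system.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 x 10 voxel grid over a 2.7 m x 2.7 m layer (illustrative)
n = 10
xs = (np.arange(n) + 0.5) * 2.7 / n
vox = np.array([(x, y) for x in xs for y in xs])

# Backward links: one fan of tags per wall towards a mid-wall antenna
# on the opposite side (simplified two-wall geometry)
links = [(np.array([0.0, y]), np.array([2.7, 1.35])) for y in xs] \
      + [(np.array([x, 0.0]), np.array([1.35, 2.7])) for x in xs]

# Ellipse-based weighting matrix W (Eq. 7), ellipse width lam_e
lam_e = 0.05
W = np.zeros((len(links), len(vox)))
for i, (t, a) in enumerate(links):
    d = np.linalg.norm(t - a)
    for j, v in enumerate(vox):
        # voxel j contributes to link i if it lies inside the link's ellipse
        if np.linalg.norm(v - t) + np.linalg.norm(v - a) < d + lam_e:
            W[i, j] = 1.0 / np.sqrt(d)

# Exponential voxel-voxel covariance (Eq. 6)
dist = np.linalg.norm(vox[:, None] - vox[None, :], axis=2)
C = 0.5 * np.exp(-dist / 0.3)

# Scatterer at voxel (1.215 m, 1.215 m); simulate the RSS differences
# (Eq. 4, model of Eq. 2) and reconstruct via the regularized estimate (Eq. 5)
x_true = np.zeros(len(vox))
x_true[44] = 3.0
dy = W @ x_true + 0.01 * rng.standard_normal(len(links))
sigma_n = 0.1
x_hat = np.linalg.solve(W.T @ W + sigma_n**2 * np.linalg.inv(C), W.T @ dy)

peak = int(np.argmax(x_hat))
print("estimated scatterer position:", vox[peak])
```

The crossing of the two link fans is what resolves the position within the layer; with a single fan, all voxels along one ellipse would remain indistinguishable.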

The RFID reader hardware is connected to a workstation, where a Java program fetches the data. The calculation of the described approach is done in a post-processing step with the help of Matlab®.

B. Procedure

Due to the high number of transponders we have to limit the data measurements. Therefore we define major operating antenna sequences (AS) as follows:

{[ ] [ ] [ ] [ ]}

with the following annotation:

[Powering Antenna; Receiving Antenna]

We did our measurements for every transponder-AS combination with a minimum of 80 data samples to obtain a reliable mean signal strength value. For the experimental validation of 3D user recognition we placed a test person at each of the 13 defined test positions in three ways:
1. User is sitting on a chair
2. User is standing on the ground
3. User is standing on a chair

C. Results

Fig. 5 depicts a sample result with the test person located in the middle of the room.

Figure 5. Sample results by layer (1.0 m, 1.3 m, 1.6 m, 1.9 m) - center; columns: sitting on chair, standing on ground, standing on chair

It can be stated that the user's height within the testbed could be recognized with reasonable accuracy. Fig. 6 depicts a sample result with the test person located in the upper left corner of the room. It has to be said that the technique has some problems with the positioning precision at the field edges, because the density of communication links is lower there, but the height information is still recognizable very clearly.

Figure 6. Sample results by layer (1.0 m, 1.3 m, 1.6 m, 1.9 m) - edge; columns: sitting on chair, standing on ground, standing on chair

V. CONCLUSION & FUTURE WORK

In this work we presented a proof of concept for 3D user recognition with passive RFID walls. To achieve this goal we described adaptations of the mathematical model and the measurement system. Within the model, the adaptive bistatic weighting matrix and the covariance matrix needed to be adapted to the 3D scenario. Furthermore, we defined a 3D error model whose full application would go beyond the scope of this work. In future work, estimation algorithms should be developed and applied to the results. With them, the height of a user within a room could be estimated, which could be a valuable data source for user distinction in a multi-user scenario.

REFERENCES

[1] B. Wagner and D. Timmermann, "Device-Free User Localization Utilizing Artificial Neural Networks and Passive RFID," IEEE International Conference on Ubiquitous Positioning, Indoor Navigation and Location Based Services (UPINLBS), 2012.
[2] D. Lieckfeldt, J. You, and D. Timmermann, "Exploiting RF-Scatter: Human Localization with Bistatic Passive UHF RFID-Systems," IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, 2009.
[3] B. Wagner, B. Striebing, and D. Timmermann, "A System for Live Localization in Smart Environments," IEEE International Conference on Networking, Sensing and Control, 2013.
[4] B. Wagner, N. Patwari, and D. Timmermann, "Passive RFID Tomographic Imaging for Device-Free User Localization," 9th Workshop on Positioning, Navigation and Communication (WPNC), 2012.
[5] D. Lieckfeldt, J. You, and D. Timmermann, "Characterizing the Influence of Human Presence on Bistatic Passive RFID-Systems," IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, Oct. 2009.
[6] B. Wagner and D. Timmermann, "Adaptive Clustering for Device-Free User Positioning Utilizing Passive RFID," 4th Workshop on Context Systems, Design and Evaluation (CoSDEO), 2013.
[7] J. Wilson and N. Patwari, "Through-Wall Motion Tracking Using Variance-Based Radio Tomography Networks," arXiv.org, 2009.
[8] J. Wilson and N. Patwari, "Radio Tomographic Imaging with Wireless Networks," IEEE Transactions on Mobile Computing, 2010.
[9] Y. Kawakita, "Anti-collision Performance of Gen2 Air Protocol in Random Error Communication Link," International Symposium on Applications and the Internet Workshops (SAINTW'06), 2005.
[10] R. K. Martin, C. Anderson, R. W. Thomas, and A. S. King, "Modelling and Analysis of Radio Tomography," 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Dec. 2011.
[11] J. Wilson and N. Patwari, "Radio Tomographic Imaging with Wireless Networks," IEEE Transactions on Mobile Computing, 2010.


- chapter 17 -

Wireless Sensor Network

- chapter 18 -

Indoor GNSS or Pseudolite


First Theoretical Aspects of a Cm-Accuracy GNSS-Based Indoor Positioning System

Ye Lu, Alexandre Vervisch-Picois, Nel Samama
Department of Electronics and Physics, Institut Mines-Telecom, Telecom SudParis, UMR 5157 SAMOVAR, Evry, France
[email protected]

Abstract—Many techniques developed for indoor positioning aim at providing a mass-market terminal with continuity of the location service, from outdoors, where GNSS are almost unbeatable, to indoors. The joint constraints of real-life environments, the mass market and indoor positioning have so far proven too complicated to deal with, and no widely accepted system has been deployed yet. Our current purpose is slightly different: the question we would like to answer is "what is the limit of the achievable accuracy of a GNSS-like transmitter-based system for indoor positioning?" The target is now oriented towards the professional world, where a few constraints can be accepted, especially on the cost and the deployment rules. Based on a suitable infrastructure, centimeter-level accuracy should be reached in order to present some interest. The goal of this first paper on the subject is to cope with the basics of the problem and to review possible theoretical aspects. Thus, differential techniques are detailed with the associated requirements in order to make it possible to reach this level of accuracy. The specificity of the indoor environment is highlighted and is mainly considered through increased noise levels on the various measurements, as well as the immobility of the transmitters. Simulations are carried out, and it is shown that the centimeter level is extremely difficult to reach as soon as the measurement noise increases even a little. As a conclusion, a few directions of future work are suggested in order to overcome the increased noise level that is almost inevitable indoors.

Keywords—indoor positioning; high accuracy; GNSS; pseudolite; repealite

I. INTRODUCTION

While human beings explore nature tirelessly, they also put significant effort into being aware of themselves, knowing their circumstances better, and being informed of their precise positions, velocities, trajectories, etc. with respect to the local environment. The location service is so important to the navigation of pedestrians and vehicles that it is indispensable in our daily life. It can be provided outdoors by GNSS (Global Navigation Satellite Systems), thanks to the pseudorange and carrier phase observations on different frequencies. Generally speaking, pseudorange measurements provide meter-level accuracy, while the carrier phases can theoretically improve it to the millimeter range [1]. New techniques and methods emerge continuously: after the traditional technique based on one constellation, i.e. GPS, we are already familiar with RTK (Real Time Kinematic) receivers acquiring signals from both GPS and GLONASS. Furthermore, with Galileo, BeiDou (Compass) and several regional satellite-based augmentation systems under deployment, more choices of independent or collaborative positioning will be available, and with the next generation of GPS, new signals on existing or new frequencies (L5) will certainly lead to more efficient positioning algorithms.

However, the situation is quite different indoors: no technique is dominant in indoor positioning, and no standard exists; a trade-off has to be made between the cost of a system and its accuracy, availability and robustness. A synthetic comparison shows that GNSS-based indoor positioning techniques are more precise than those built on radio signal strength indicators, cover a larger area than UWB (Ultra-Wide Band) or RFID (Radio-Frequency IDentification), and are also more robust than ultrasound- or light-based systems with respect to interference and obstacles [2, 3].

II. CARRIER PHASE DOUBLE DIFFERENCE POSITIONING

Following the notations in the summary of S. Botton [4], the formation of the double differences is explained in this section. Let the superscripts j, l characterize two transmitters, and the subscripts i, k two receivers. The pseudorange and carrier phase observables can be expressed as follows:

P_i^j = c·(t_i − t^j) = D_i^j + c·(dt_i − dt^j) + T_i^j + ε_P    (1)

Φ_i^j = f·(t_i − t^j) = (f/c)·(D_i^j + T_i^j) + f·(dt_i − dt^j) + N_i^j + ε_Φ    (2)

where c is the speed of light; f is the carrier frequency; t^j is the moment of signal emission by transmitter j, in the system time scale; t_i is the moment of signal reception by receiver i, in the system time scale; D_i^j is the distance between transmitter j and receiver i; T_i^j is the propagation delay due to the atmosphere (inexistent indoors) or the environment of the receiver antenna; dt_i is the clock bias of receiver i; dt^j is the clock bias of transmitter j; N_i^j is the integer ambiguity, remaining constant until a cycle slip occurs; ε is the measurement noise,



including the influence of multipath. The carrier phase Φ is expressed in cycles, while all the other quantities are in SI units.

Differencing the observables between the two receivers and then between the two transmitters eliminates both clock bias terms. The carrier phase double difference reads:

∇ΔΦ_ik^jl = (Φ_i^j − Φ_k^j) − (Φ_i^l − Φ_k^l)    (3)

so that, with λ = c/f,

λ·∇ΔΦ_ik^jl = ∇ΔD_ik^jl + ∇ΔT_ik^jl + λ·∇ΔN_ik^jl + ∇Δε_Φ    (4)

Suppose the coordinates of receiver i, (x_i, y_i, z_i), are known; receiver k is to be positioned; r is chosen as the reference transmitter, and 1, 2, …, n are the other transmitters. Let

X = [x_k  y_k  z_k  ∇ΔN_ik^1r  ⋯  ∇ΔN_ik^nr]ᵀ    (5)

At epochs 1, 2, …, K, by linearizing (4), we have:

ΔY = G·X + ε,   G = [G_1ᵀ  G_2ᵀ  ⋯  G_Kᵀ]ᵀ    (6)

where ΔY stacks the double-difference observations of all epochs and each block G_k contains the linearized rows of epoch k.

The most important precondition of this sort of epoch-accumulating algorithm is that the row vectors of matrix G always remain linearly independent while the data of new epochs are continuously taken in. Otherwise, (6) becomes an underdetermined system of linear equations. Thus, a geometrical change is required among the signal transmitters and receivers. In outdoor positioning, the GPS satellites move a few kilometers per second, allowing a static initialization of the user receiver so as to obtain a rather precise floating solution of the integer ambiguities and of the coordinates of the receiver. However, this is not true indoors, because the pseudolites are fixed: G_k does not change at all from epoch to epoch if the receiver is static. As a result, either the pseudolites or the user receiver must move while the observations are collected. The former case is quite hypothetical, since a "moving" indoor pseudolite is never envisaged, and in the latter case the position of the receiver is no longer a constant, so this algorithm is not applicable any more. Equations (5) and (9) should then be replaced by:
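The cancellation property behind the double differences, and the rank problem of a fully static setup, can be checked numerically. In the sketch below, all transmitter/receiver positions, clock biases and integer ambiguities are invented values; the carrier phase of Eq. (2) is simulated without delay and noise terms, the double difference of Eq. (3) is formed, and the clock-free relation of Eq. (4) is verified.

```python
import numpy as np

c = 299_792_458.0
f = 1_575_420_000.0          # GPS L1 carrier frequency
lam = c / f

# Illustrative fixed indoor transmitters j, l and receivers i, k
tx = {"j": np.array([0.0, 0.0, 3.0]), "l": np.array([10.0, 0.0, 3.0])}
rx = {"i": np.array([2.0, 5.0, 1.0]), "k": np.array([7.0, 4.0, 1.0])}
dt_rx = {"i": 3.2e-6, "k": -1.1e-6}   # receiver clock biases [s] (assumed)
dt_tx = {"j": 0.8e-6, "l": -2.4e-6}   # transmitter clock biases [s] (assumed)
N = {("i", "j"): 11, ("i", "l"): -4, ("k", "j"): 7, ("k", "l"): 23}

def phase(r, t):
    """Carrier phase in cycles (Eq. 2); delay and noise terms omitted."""
    D = np.linalg.norm(rx[r] - tx[t])
    return (f / c) * D + f * (dt_rx[r] - dt_tx[t]) + N[(r, t)]

# Double difference (Eq. 3): both receiver and transmitter clocks cancel
dd = (phase("i", "j") - phase("k", "j")) - (phase("i", "l") - phase("k", "l"))
ddD = (np.linalg.norm(rx["i"] - tx["j"]) - np.linalg.norm(rx["k"] - tx["j"])) \
    - (np.linalg.norm(rx["i"] - tx["l"]) - np.linalg.norm(rx["k"] - tx["l"]))
ddN = (N[("i", "j")] - N[("k", "j")]) - (N[("i", "l")] - N[("k", "l")])
print(np.isclose(lam * dd, ddD + lam * ddN))   # Eq. 4, no clock terms left

# With fixed transmitters and a static receiver, the linearized geometry
# row is identical at every epoch, so stacking epochs adds no information
u = lambda t: (rx["k"] - tx[t]) / np.linalg.norm(rx["k"] - tx[t])
G = np.vstack([u("j") - u("l")] * 5)           # five epochs, static receiver
print(np.linalg.matrix_rank(G))
```

The stacked matrix keeps rank 1 no matter how many epochs are accumulated, which is exactly the indoor degeneracy discussed above: without motion of either end of the links, epoch accumulation cannot make the system solvable.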