Measuring my cats' movements: accuracy of mobile phone GPS at the metre scale, and how to improve it

I wanted to get some information related to an idea I have. I have a lot of cats and a big garden.

The cats generally roam through the garden, and this location is fixed and known.

If I have a known area, in this case my garden, does this make GPS location information more accurate?

Can I track my cats accurately if I hook them up to a mobile phone? (Picture 1)

I have found the thread here about GPS accuracy: What is the maximum theoretical accuracy of GPS?

How accurate will this be? Is 2 metres realistic with mobile GPS?

If I had more than one cat with a mobile, could these devices talk to one another to give a more accurate picture? (Picture 2)

If I was able to put in some fixed mobile receiver points, could these be combined with the cats' mobiles to give a more accurate readout? (Picture 3)

The output I would like to be able to capture:

x/y position per second for each cat

How accurate would this be? Could this be done using only mobile phones? Are there better options?

If the area is known and stationary, why not skip GPS altogether and use beacons? You can get accuracy down to a few inches with that approach. As of today, there are many software+hardware SDKs that enable you to pinpoint location using fixed beacons and a local coordinate system (which you can then translate to a different coordinate system if that is your thing).

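To make the beacon idea concrete, here is a minimal sketch of how a local position can be computed from measured distances to fixed beacons at known positions (in practice a beacon SDK would do this for you; the garden coordinates and distances below are invented for illustration):

```python
import math

def trilaterate(beacons, distances):
    """Estimate (x, y) from three beacons at known positions and
    measured distances.  Subtracting the first range equation from
    the other two removes the quadratic terms, leaving a 2x2
    linear system solved here by Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = beacons
    d0, d1, d2 = distances
    # 2*(xi-x0)*x + 2*(yi-y0)*y = d0^2 - di^2 + xi^2 + yi^2 - x0^2 - y0^2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 + y1**2 - x0**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 + y2**2 - x0**2 - y0**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Three beacons at garden corners (metres, local coordinates)
beacons = [(0.0, 0.0), (20.0, 0.0), (0.0, 15.0)]
# Distances a cat at (5, 5) would measure to each beacon
dists = [math.hypot(5 - bx, 5 - by) for bx, by in beacons]
print(trilaterate(beacons, dists))  # ≈ (5.0, 5.0)
```

With more than three beacons you would solve the over-determined system by least squares instead, which also averages out ranging noise.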
If you're using just a mobile phone, I don't think you would be able to get much better than that unless it supports some form of differential correction or post-processing.

I haven't heard of that support in mobile phones yet. GPS units that collect that type of info are (in my experience) generally rather large and may be as big and heavy as your cats. We use sub-meter units that cost at least $3.5K each, so that wouldn't be feasible.

Some mobile phones may be more accurate than others. I believe most of that would depend on the type of GPS chip they're using and whether they support corrections via the cellular network/wifi.

For the devices talking to one another, you could probably do something like that. I would imagine using either Bluetooth (two devices) or wifi (two or more, with a host) connections among the phones, plus some custom programming to sort out and manually correct locations.
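One way phones could correct each other is the differential idea: keep one phone at a surveyed spot, measure its GPS error, and subtract that error from the roaming phone's fix, on the assumption that nearby receivers see roughly the same error. A hypothetical sketch (all coordinates invented):

```python
from dataclasses import dataclass

@dataclass
class Fix:
    lat: float
    lon: float

def differential_correct(rover_fix, base_fix, base_truth):
    """Crude differential correction: assume the base phone (at a
    surveyed, known spot) sees the same atmospheric/clock error as
    the nearby rover phone, and subtract that error."""
    err_lat = base_fix.lat - base_truth.lat
    err_lon = base_fix.lon - base_truth.lon
    return Fix(rover_fix.lat - err_lat, rover_fix.lon - err_lon)

# Hypothetical readings: both phones offset ~2 m to the north-east
base_truth = Fix(51.500000, -0.120000)   # surveyed base position
base_fix   = Fix(51.500018, -0.119977)   # what the base phone reports
cat_fix    = Fix(51.500118, -0.119877)   # what the cat's phone reports

corrected = differential_correct(cat_fix, base_fix, base_truth)
print(corrected)
```

How much this actually helps depends on how correlated the errors of consumer GPS chips really are; multipath near buildings, for instance, is local to each receiver and would not cancel.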

Since I am unaware of an already made mobile solution for this, I would imagine this would be a great undertaking involving lots of custom programming to accomplish your task.

You might have better luck finding something that allows you to lay out a grid (your garden) to capture information. I would think it would have some known points or multiple sensors reading information from radio trackers. That information could then be translated into lat/long. The translation process would probably be pretty tricky, since you would need known, fixed, and accurate points for your grid. Then you run into the problem of how accurate the GPS you're using to collect that grid information is.
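The grid-to-lat/long translation itself is simple at garden scale, where a flat-earth approximation is accurate to well under a centimetre; the hard part, as noted above, is surveying the origin accurately. A sketch (the origin coordinates below are made up):

```python
import math

EARTH_R = 6_378_137.0  # WGS84 equatorial radius, metres

def grid_to_latlon(x, y, origin_lat, origin_lon):
    """Translate local grid coordinates (metres east/north of a
    surveyed origin) into lat/long with a flat-earth approximation,
    which is fine at garden scale."""
    dlat = (y / EARTH_R) * (180.0 / math.pi)
    dlon = (x / (EARTH_R * math.cos(math.radians(origin_lat)))) * (180.0 / math.pi)
    return origin_lat + dlat, origin_lon + dlon

# Hypothetical garden origin and a reading 10 m east, 5 m north of it
lat, lon = grid_to_latlon(10.0, 5.0, 51.5, -0.12)
print(lat, lon)  # ≈ (51.500045, -0.119856)
```

Any error in the surveyed origin shifts every translated point by the same amount, so relative positions within the garden stay accurate even if the absolute lat/long is off.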

Completely agree, @Ragi Yaser. Beacons help a great deal in locating objects, living or non-living. The accuracy is also high, which helps with indoor positioning. For instance, the Pigeon Indoor Positioning and Navigation App facilitates indoor wayfinding for large facilities such as resorts, medical facilities, shopping malls, museums, convention centers, etc.

Check here for more info:

Social sensing from street-level imagery: A case study in learning spatio-temporal urban mobility patterns

Street-level imagery has covered the comprehensive landscape of urban areas. Compared to satellite imagery, this new source of image data has the advantage in fine-grained observations of not only physical environment but also social sensing. Prior studies using street-level imagery focus primarily on urban physical environment auditing. In this study, we demonstrate the potential usage of street-level imagery in uncovering spatio-temporal urban mobility patterns. Our method assumes that the streetscape depicted in street-level imagery reflects urban functions and that urban streets of similar functions exhibit similar temporal mobility patterns. We present how a deep convolutional neural network (DCNN) can be trained to identify high-level scene features from street view images that can explain up to 66.5% of the hourly variation of taxi trips along with the urban road network. The study shows that street-level imagery, as the counterpart of remote sensing imagery, provides an opportunity to infer fine-scale human activity information of an urban region and bridge gaps between the physical space and human space. This approach can therefore facilitate urban environment observation and smart urban planning.

Applying mobile phone data to travel behaviour research: A literature review

Travel behaviour has been studied for decades to guide transportation development and management, with the support of traditional data collected by travel surveys. Recently, with the development of information and communication technologies (ICT), we have entered an era of big data, and many sources of novel data, including mobile phone data, have emerged and been applied to travel behaviour research. Compared with traditional travel data, mobile phone data have many unique features and advantages, which attract scholars in various fields to apply them to travel behaviour research, and a certain amount of progress has been made to date. However, this is only the beginning, and mobile phone data still have great potential that needs to be exploited to further advance human mobility studies. This paper provides a review of existing travel behaviour studies that have applied mobile phone data, and presents the progress that has been achieved to date, and then discusses the potential of mobile phone data in advancing travel behaviour research and raises some challenges that need to be dealt with in this process.


Ecosystems are complex and dynamic, and the relationships among their many components are often difficult to measure (Bolliger et al. 2005, Ascough et al. 2008). Ecologists often rely on technology to quantify ecological phenomena (Keller et al. 2008). Technological advancements have often been the catalyst for enhanced understanding of ecosystem function and dynamics (Fig. 1, Table 1), which in turn aids environmental management. For example, the inception of VHF telemetry to track animals in the 1960s allowed ecologists to remotely monitor the physiology, movement, resource selection, and demographics of wild animals for the first time (Tester et al. 1964). However, advancements in GPS and satellite communications technology have largely supplanted most uses for VHF tracking. As opposed to VHF, GPS has the ability to log locations, as well as high recording frequency, greater accuracy and precision, and less researcher interference with the animals, leading to an enhanced, more detailed understanding of species habitat use and interactions (Rodgers et al. 1996). This has assisted in species management by not only highlighting important areas to protect (Pendoley et al. 2014), but also identifying key resources such as individual plants instead of general areas of vegetation.

Technology: Description
Sonar: first used to locate and record schools of fish
Automated sensors: specifically used to measure and log environmental variables
Camera traps: first implemented to record wildlife presence and behavior
Sidescan sonar: used to efficiently create an image of large areas of the sea floor
Mainframe computers: computers able to undertake ecological statistical analysis of large datasets
VHF tracking: radio tracking, allowing ecologists to remotely monitor wild animals
Landsat imagery: the first space-based land-remote-sensing data
Sanger sequencing: the first method to sequence DNA, based on the selective incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication
LiDAR: remote sensors that measure distance by illuminating a target with a laser and analyzing the reflected light
Multispectral Landsat: satellite imagery with different wavelength bands along the spectrum, allowing for measurements through water and vegetation
Thermal bio-loggers: surgically implanted devices to measure animal body temperature
GPS tracking: satellite tracking of wildlife with higher recording frequency, greater accuracy and precision, and less researcher interference than VHF
Thematic Landsat: a whisk-broom scanner operating across seven wavelengths, able to measure global warming and climate change
Infrared camera traps: able to sense animal movement in the dark and take images without a visible flash
Multibeam sonar: transmits broad fan-shaped acoustic pulses to establish a full water-column profile
Video traps: video instead of still imagery, able to determine animal behavior as well as identification
Accelerometers: measure animal movement (acceleration) irrespective of satellite reception (geographic position)
3D LiDAR: accurate measurement of 3D ecosystem structure
Autonomous vehicles: unmanned sensor platforms to collect ecological data automatically and remotely, including in terrain that is difficult and/or dangerous for humans to access
3D tracking: the use of inertial measurement unit (IMU) devices in conjunction with GPS data to create real-time animal movement tracks
ICARUS: the International Cooperation for Animal Research Using Space (ICARUS) initiative, to observe global migratory movements of small animals through a satellite system
Next-gen sequencing: millions of fragments of DNA from a single sample can be sequenced in unison
Long-range, low-power telemetry: low-voltage, low-amperage transfer of data over several kilometers
Internet of things: a network of devices that can communicate with one another, transferring information and processing data
Low-power computers: small computers with the ability to connect an array of sensors and, in some cases, run algorithms and statistical analyses
Swarm theory: the autonomous but coordinated use of multiple unmanned sensor platforms to complete ecological surveys or tasks without human intervention
3D printing: the construction of custom equipment and animal analogues for behavioral studies
Mapping molecular movement: cameras that can display images at a sub-cellular level without the need for electron microscopes
Biotic gaming: human players control a paramecium similarly to a video game, which could aid in understanding microorganism behavior
Bio-batteries: electro-biochemical devices that can run on compounds such as starch, allowing sensors and devices to be powered for extended periods in remote locations where more traditional energy sources such as solar power may be unreliable (e.g., rainforests)
Kinetic batteries: batteries charged via movement that are able to power microcomputers

Ecological advances to date are driven by technology primarily relating to enhanced data capture. Expanding technologies have focused on the collection of high spatial and temporal resolution information. For example, small, unmanned aircraft can currently map landscapes with sub-centimeter resolution (Anderson and Gaston 2013), while temperature, humidity, and light sensors can be densely deployed (hundreds per hectare) to record micro-climatic variations (Keller et al. 2008). Such advances in data acquisition technologies have delivered knowledge of the natural environment unthinkable just a decade ago. But what does the future hold?

Here, we argue that ecology could be on the precipice of a revolution in data acquisition. It will occur within three concepts: supersize (the expansion of current practice), step-change (the ability to use technology to address questions we previously could not), and radical change (exploring questions we could not previously imagine). Technologies, both current and emerging, have the capacity to spawn this “next-generation” ecological data that, if harnessed effectively, will transform our understanding of the ecological world (Snaddon et al. 2013). What we term “technoecology” is the hardware side of “big data” (Howe et al. 2008), focused on the employment of cutting edge physical technology to acquire new volumes and forms of ecological data. Such data can help address complex and pressing global issues of ecological and conservation concern (Pimm et al. 2015). However, the pace of this revolution will be determined in part by how quickly ecologists embrace these technologies. The purpose of this article is to bring to the attention of ecologists some examples of current, emerging, and conceptual technologies that will be at the forefront of this revolution, in order to hasten the uptake of these more recent developments in technoecology.

Cheetah movement data

The Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW) runs a long-term research project on cheetahs on freehold farmland in central Namibia. Within a study area of approximately 40,000 km², more than 200 cheetahs were captured in box traps and fitted with GPS collars as described by Melzheimer et al. (2018); see Fig. 1a. GPS data were downloaded every two to three weeks from a small airplane equipped with antennas to locate the cheetahs. The data have been collected over more than a decade and are constantly updated and extended with current movement data. The complete data set takes up several dozen GB. We used a subset of the available data for the presented work, and all data sets used were retrieved from the Movebank data repository (Movebank 2020; Kranstauber et al. 2011) (access restricted for wildlife protection reasons). These data sets commonly consist of GPS locations taken every 15 minutes, and acceleration data of the z-axis measured at 10 Hz in bursts of 3.6 seconds every two minutes. In general, temporal resolution may change not only between individuals but also over the course of recording for a single individual. This can result from a change of sensors or sensor parameters, but also from short high-resolution bursts of data being collected and sent. In addition, missing or incorrect values can occur due to sensor or communication issues. Analysts are usually aware of potential issues but need to spend time checking and correcting (or discarding) the data. In our preprocessing pipeline, such entries are filtered out before the visual analysis starts.
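The preprocessing pipeline itself is not detailed here, but a minimal filter of the kind described (dropping records with missing values, fixes outside the study area, and jumps implying physically impossible speed) could look like the sketch below; the `clean_fixes` helper and its thresholds are illustrative assumptions, not the authors' code:

```python
import math

def clean_fixes(fixes, study_area, max_speed_mps=30.0):
    """Drop obviously bad GPS records.  `fixes` is a list of
    (timestamp_s, lat, lon) tuples; `study_area` is
    (lat_min, lat_max, lon_min, lon_max).  30 m/s is roughly a
    cheetah's top sprint speed, so faster jumps are sensor error."""
    lat_min, lat_max, lon_min, lon_max = study_area
    kept = []
    for t, lat, lon in fixes:
        if lat is None or lon is None:
            continue  # missing coordinates
        if not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max):
            continue  # outside the study area
        if kept:
            t0, lat0, lon0 = kept[-1]
            # crude metres-per-degree conversion near the study site
            dy = (lat - lat0) * 111_320
            dx = (lon - lon0) * 111_320 * math.cos(math.radians(lat0))
            dt = t - t0
            if dt > 0 and math.hypot(dx, dy) / dt > max_speed_mps:
                continue  # implies impossible speed: discard
        kept.append((t, lat, lon))
    return kept

fixes = [
    (0,    -21.000, 17.000),
    (900,  -21.001, 17.001),
    (1000, -21.500, 17.000),   # ~55 km jump in 100 s: impossible
    (1800, None,    None),     # missing coordinates
    (2700,   0.000, 17.000),   # far outside the study area
]
print(clean_fixes(fixes, (-22.0, -20.0, 16.0, 18.0)))
# keeps only the first two fixes
```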

Note that while the cheetah is considered the fastest mammal on land, current tracking technology does not allow recording detailed information on sprints during hunting. Due to restrictions in data storage and transmission, and a trade-off with energy consumption, only samples of movement parameters such as acceleration and speed are available. The other main trade-off is between longevity and resolution: one could sample at 1 Hz, but then the battery would last only a few hours. The collars used usually record about 30,000 GPS fixes during the battery lifetime.
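The longevity/resolution trade-off follows directly from the fixed budget of roughly 30,000 fixes per battery; a quick calculation shows why 1 Hz sampling lasts only hours while 15-minute fixes last the better part of a year:

```python
# With a fixed budget of ~30,000 GPS fixes per battery,
# the sampling interval directly sets collar lifetime.
FIX_BUDGET = 30_000

for interval_s in (1, 60, 15 * 60):
    lifetime_days = FIX_BUDGET * interval_s / 86_400
    print(f"{interval_s:>5} s interval -> {lifetime_days:8.1f} days")
# 1 s    ->   ~0.3 days (a few hours)
# 900 s  -> ~312.5 days
```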

Trajectory and density map of territorial cheetah movement. The visualisation clearly shows a main center of movement, in contrast to the multi-center scenario for the floater in Fig. 3


Kiess, W., & Mauve, M. (2007). A survey on real-world implementations of mobile ad-hoc networks. Ad Hoc Networks, 5(3), 324–339.

Ivanic, N., Rivera, B., & Adamson, B. (2009). Mobile ad hoc network emulation environment. In Military communications conference, 2009. MILCOM 2009 (pp. 1–6). IEEE.

Patel, K. N., et al. (2015). A survey on emulation testbeds for mobile ad-hoc networks. Procedia Computer Science, 45, 581–591.

Nordstrom, E., Gunningberg, P., & Lundgren, H. (2005). A testbed and methodology for experimental evaluation of wireless mobile ad hoc networks. In First international conference on testbeds and research infrastructures for the development of networks and communities, 2005. Tridentcom 2005 (pp. 100–109). IEEE.

Ramanathan, R., & Hain, R. (2000). An ad hoc wireless testbed for scalable, adaptive QoS support. In Wireless communications and networking confernce, 2000. WCNC. 2000 IEEE (Vol. 3, pp. 998–1002). IEEE.

Sanghani, S., Brown, T. X., Bhandare, S., & Doshi, S. (2003) Ewant: The emulated wireless ad hoc network testbed. In Wireless communications and networking, 2003. WCNC 2003. 2003 IEEE (Vol. 3, pp. 1844–1849). IEEE.

Hui, P., & Crowcroft, J. (2007) How small labels create big improvements. In Fifth annual IEEE international conference on pervasive computing and communications workshops, 2007. PerCom Workshops’ 07 (pp. 65–70). IEEE.

Hui, P., Crowcroft, J., & Yoneki, E. (2011). Bubble rap: Social-based forwarding in delay-tolerant networks. IEEE Transactions on Mobile Computing, 10(11), 1576–1589.

Flynn, J., Tewari, H., & O’Mahony, D. (2001). Jemu: A real time emulation system for mobile ad hoc networks. In Proceedings of the first joint IEI/IEE symposium on telecommunications systems research (pp. 262–267).

He, R., Yuan, M., Hu, J., Zhang, H., Ma, J., et al. (2003). A real-time scalable and dynamical test system for manet. In 14th IEEE proceedings on personal, indoor and mobile radio communications, 2003. PIMRC 2003 (Vol. 2, pp. 1644–1648). IEEE.

Matthes, M., Biehl, H., Lauer, M., & Drobnik, O. (2005). Massive: An emulation environment for mobile ad-hoc networks. In Second annual conference on wireless on-demand network systems and services, 2005. WONS 2005 (pp. 54–59). IEEE.

De, P., Raniwala, A., Sharma, S., & Chiueh, T.-C. (2005) Mint: A miniaturized network testbed for mobile wireless research. In INFOCOM 2005. 24th annual joint conference of the IEEE computer and communications societies. Proceedings IEEE (Vol. 4, pp. 2731–2742). IEEE.

Raychaudhuri, D., Seskar, I., Ott, M., Ganu, S., Ramachandran, K., Kremo, H., et al. (2005). Overview of the orbit radio grid testbed for evaluation of next-generation wireless network protocols. In Wireless communications and networking conference, 2005 IEEE (Vol. 3, pp. 1664–1669). IEEE.

Beyer, D. A. (1990). Accomplishments of the DARPA SURAN Program. In Military communications conference, 1990. MILCOM’90, Conference Record, A New Era. 1990 IEEE (pp. 855–862). IEEE.

Little, M. (2005). Tealab: A testbed for ad hoc networking security research. In Military communications conference, 2005. MILCOM 2005. IEEE (pp. 936–942). IEEE.

Johnson, D., Stack, T., Fish, R., Flickinger, D., Ricci, R., & Lepreau, J. (2006). Truemobile: A mobile robotic wireless and sensor network testbed. In The 25th annual joint conference of the IEEE computer and communications societies. IEEE Computer Society.

Giordano, E., Tomatis, A., Ghosh, A., Pau, G., & Gerla, M. (2008). C-vet an open research platform for VANETs: Evaluation of peer to peer applications in vehicular networks. In IEEE 68th vehicular technology conference, 2008. VTC 2008-Fall (pp. 1–2) IEEE.

Gerla, M., Weng, J.-T., Giordano, E., & Pau, G. (2012). Vehicular testbeds-model validation before large scale deployment. Journal of Communication, 7(6), 451–457.

Eriksson, J., Balakrishnan, H., & Madden, S. (2008). Cabernet: vehicular content delivery using WiFi. In Proceedings of the 14th ACM international conference on Mobile computing and networking (pp. 199–210). ACM.

Ott, J., & Kutscher, D. (2005). A disconnection-tolerant transport for drive-thru internet environments. In INFOCOM 2005. 24th annual joint conference of the IEEE computer and communications societies. Proceedings IEEE (Vol. 3, pp. 1849–1862). IEEE.

El Alaoui, S., Palusa, S., & Ramamurthy, B. (2015). The interplanetary internet implemented on the geni testbed. In Global communications conference (GLOBECOM), 2015 IEEE (pp. 1–6). IEEE.

Global environment for networking innovations (geni): Establishing the geni project office (gpo) (geni/gpo) nsf06601. Accessed June 07, 2017.

Ameixieira, C., Cardote, A., Neves, F., Meireles, R., Sargento, S., Coelho, L., et al. (2014). Harbornet: A real-world testbed for vehicular networks. IEEE Communications Magazine, 52(9), 108–114.

Reich, J., Misra, V., & Rubenstein, D. (2008). Roomba madnet: A mobile ad-hoc delay tolerant network testbed. ACM SIGMOBILE Mobile Computing and Communications Review, 12(1), 68–70.

Beuran, R., Miwa, S., & Shinoda, Y. (2013). Network emulation testbed for DTN applications and protocols. In 2013 IEEE conference on computer communications workshops (INFOCOM WKSHPS) (pp. 151–156). IEEE.

How, J. P., BEHIHKE, B., Frank, A., Dale, D., & Vian, J. (2008). Real-time indoor autonomous vehicle test environment. IEEE Control Systems, 28(2), 51–64.

Patterson, T., McClean, S., Morrow, P., Parr, G., & Luo, C. (2014). Timely autonomous identification of uav safe landing zones. Image and Vision Computing, 32(9), 568–578.

Georgia tech uav research facility. Accessed June 07, 2017.

Paula, M. C., Rodrigues, J. J., Dias, J. A., Isento, J. N., & Vinel, A. (2015). Performance evaluation of a real vehicular delay-tolerant network testbed. International Journal of Distributed Sensor Networks, 11(3), 219641.

Paula, M. C., Rodrigues, J. J., Dias, J. A., Isento, J. N., & Vinel, A. (2012) Deployment of a real vehicular delay-tolerant network testbed. In 2012 12th international conference on ITS telecommunications (ITST) (pp. 103–107). IEEE.

Hossmann, T., Carta, P., Schatzmann, D., Legendre, F., Gunningberg, P., & Rohner, C. (2011) Twitter in disaster mode: Security architecture. In Proceedings of the special workshop on internet and disasters (p. 7). ACM.

Liu, M., Johnson, T., Agarwal, R., Efrat, A., Richa, A., & Coutinho, M. M. (2015). Robust data mule networks with remote healthcare applications in the amazon region: A fountain code approach. In 2015 17th international conference on E-health networking, application & services (HealthCom) (pp. 546–551). IEEE.

Coutinho, M. M., Efrat, A., Johnson, T., Richa, A., & Liu, M. (2014). Healthcare supported by data mule networks in remote communities of the amazon region. International Scholarly Research Notices, 2014, 1–8.

Coutinho, M. M., Moreira, T., Silva, E., Efrat, A., & Johnson, T. (2011). A new proposal of data mule network focused on amazon riverine population. In Proceedings of the 3rd extreme conference on communication: The amazon expedition (p. 10). ACM.

What works: First mile solutions daknet takes rural communities online. Accessed June 07, 2017.

Pentland, A., Fletcher, R., & Hasson, A. (2004). Daknet: Rethinking connectivity in developing nations. Computer, 37(1), 78–83.

Exploratorium (mid) invisible dynamics (mid) cabspotting. Accessed June 07, 2017.

Piorkowski, M., Sarafijanovic-Djukic, N., & Grossglauser, M. (2009). CRAWDAD dataset epfl/mobility (v. 2009-02-24).

Eagle, N., & Pentland, A. S. (2006). Reality mining: Sensing complex social systems. Personal and Ubiquitous Computing, 10(4), 255–268.

Lindgren, A., Doria, A., Lindblom, J., & Ek, M. (2008). Networking in the land of northern lights: Two years of experiences from DTN system deployments. In Proceedings of the 2008 ACM workshop on Wireless networks and systems for developing regions (pp. 1–8). ACM.

Farrell, S., McMahon, A., Meehan, E., Weber, S., & Hartnett, K. (2011). Report on an arctic summer DTN trial. Wireless Networks, 17(5), 1127–1156.

McDonald, P., Geraghty, D., Humphreys, I., Farrell, S., & Cahill, V. (2007). Sensor network with delay tolerance (SeNDT). In Proceedings of 16th international conference on computer communications and networks, 2007. ICCCN 2007 (pp. 1333–1338). IEEE.

Hubaux, J.-P., Gross, T., Le Boudec, J.-Y., & Vetterli, M. (2001). Toward self-organized mobile ad hoc networks: The terminodes project. IEEE Communications Magazine, 39(1), 118–124.

Hubaux, J.-P., Le Boudec, J.-Y., Giordano, S., & Hamdi, M. (1999). The terminode project: Towards mobile ad-hoc wans. In 1999 IEEE international workshop on mobile multimedia communications, 1999 (MoMuC’99) (pp. 124–128). IEEE.

About tier (mid) technology and infrastructure for emerging regions. Accessed June 07, 2017.

Burgess, J., Gallagher, B., Jensen, D., Levine, B. N. (2006). Maxprop: Routing for vehicle-based disruption-tolerant networks. In Proceedings IEEE INFOCOM 2006. 25TH IEEE international conference on computer communications (pp. 1–11).

Balasubramanian, A., Zhou, Y., Croft, W. B., Levine, B. N., & Venkataramani, A. (2007). Web search from a bus. In Proceedings of the second ACM workshop on challenged networks (pp. 59–66). ACM.

Burgess, J., et. al. (2008). CRAWDAD dataset umass/diesel (v. 2008-09-14).

Caiti, A., Husoy, T., Jesus, S., Karasalo, I., Massimelli, R., Munafò, A., et al. (2012). Underwater acoustic networks: The Fp7 uan project. IFAC Proceedings Volumes, 45(27), 220–225.

Juang, P., Oki, H., Wang, Y., Martonosi, M., Peh, L. S., & Rubenstein, D. (2002). Energy-efficient computing for wildlife tracking: Design tradeoffs and early experiences with ZebraNet. SIGARCH Computer Architecture News, 30, 96–107.

Liu, T., Sadler, C. M., Zhang, P., & Martonosi, M. (2004). Implementing software on resource-constrained mobile sensors: Experiences with impala and ZebraNet. In Proceedings of the 2nd international conference on mobile systems, applications, and services (pp. 256–269). ACM.

Bobbio, A., Ferraris, C., & Terruggia, R. (2006). New challenges in network reliability analysis. CNIP, 6, 554–564.

Fratta, L., & Montanari, U. (1973). A boolean algebra method for computing the terminal reliability in a communication network. IEEE Transactions on Circuit Theory, 20(3), 203–211.

Chaturvedi, S., & Misra, K. (2002). A hybrid method to evaluate reliability of complex networks. International Journal of Quality & Reliability Management, 19(8/9), 1098–1112.

Torrieri, D. (1994). Calculation of node-pair reliability in large networks with unreliable nodes. IEEE Transactions on Reliability, 43(3), 375–377, 382.

Meena, K., Vasanthi, T., Rajeswari, M., & UmamageswarI, P. (2016). Reliability analysis of MANET with RCFP: Reliable cluster forming protocol. International Journal of Applied Engineering Research, 11(1), 440–447.

Cook, J. L., Arsenal, P., & Ramirez-Marquez, J. E. (2007). Recent research on the reliability analysis methods for mobile ad-hoc networks. In Systems research forum (Vol. 2, No. 01, pp. 35–41). World Scientific Publishing Company.

Cook, J. L., & Ramirez-Marquez, J. E. (2007). Reliability of capacitated mobile ad hoc networks. Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability, 221(4), 307–318.

Padmavathy, N., & Chaturvedi, S. K. (2015). Reliability evaluation of capacitated mobile ad hoc network using log-normal shadowing propagation model. International Journal of Reliability and Safety, 9(1), 70–89.

Cook, J. L., & Ramirez-Marquez, J. E. (2008). Mobility and reliability modeling for a mobile ad hoc network. IIE Transactions, 41(1), 23–31.

Soh, S., Lau, W., Rai, S., & Brooks, R. R. (2007). On computing reliability and expected hop count of wireless communication networks. International Journal of Performability Engineering, 3(2), 267–279.

Cook, J. L., & Ramirez-Marquez, J. E. (2007). Two-terminal reliability analyses for a mobile ad hoc wireless network. Reliability Engineering & System Safety, 92(6), 821–829.

Meena, K. S., & Vasanthi, T. (2016). Reliability design for a manet with cluster-head gateway routing protocol. Communications in Statistics-Theory and Methods, 45(13), 3904–3918.

Meena, K. S., & Vasanthi, T. (2016). Optimum reliability analysis of mobile adhoc networks using universal generating function under limited delivery time and cost. Proceedings of International Conference on Information Engineering, Management and Security, 1, 13–17.

Choudhary, A., Roy, O., & Tuithung, T. (2015). Reliability evaluation of mobile ad-hoc networks. International Journal of Future Generation Communication and Networking, 8(5), 207–220.

Dimitar, T., Sonja, F., Bekim, C., & Aksenti, G. (2004). Link reliability analysis in ad hoc networks. In Proceedings of XII telekomunikacioni forum TELFOR.

Chowdhury, C., & Neogy, S. (2011). Reliability estimate of mobile agent system for QoS MANET applications. In 2011 Proceedings—Annual reliability and maintainability symposium (pp. 1–6).

Singh, M. M., Baruah, M., & Mandal, J. K. (2014). Reliability computation of mobile ad-hoc network using logistic regression. In 2014 Eleventh international conference on wireless and optical communications networks (WOCN) (pp. 1–5).

Kharbash, S., & Wang, W. (2007). Computing two-terminal reliability in mobile ad hoc networks. In 2007 IEEE wireless communications and networking conference (pp. 2831–2836).

Wang, T., Huang, C., Xiang, K., & Zhou, K. (2010) Survivability evaluation for MANET based on path reliability. In 2010 Second international conference on networks security, wireless communications and trusted computing (Vol. 1, pp. 378–381).

Pouyan, A., & Tabari, M. Y. (2014). Estimating reliability in mobile ad-hoc networks based on monte carlo simulation (technical note). International Journal of Engineering-Transactions B: Applications, 27(5), 739.

Dana, A., Zadeh, A. K., & Noori, S. A. S. (2008). Backup path set selection in ad hoc wireless network using link expiration time. Computers & Electrical Engineering, 34(6), 503–519.

Papadimitratos, P., Haas, Z. J., & Sirer, E. G. (2002). Path set selection in mobile ad hoc networks. In Proceedings of the 3rd ACM international symposium on mobile ad hoc networking & computing, MobiHoc ’02 (pp. 1–11).

Migov, D. A., & Shakhov, V. (2014). Reliability of ad hoc networks with imperfect nodes. In International workshop on multiple access communications (pp. 49–58). Cham: Springer.

Chaturvedi, S. K. (2016). Network reliability: Measures and evaluation. New York: Wiley.

Andel, T. R., & Yasinsac, A. (2006). On the credibility of manet simulations. Computer, 39(7), 48–54.

Manaseer, S. S. (2016). On the choice of parameter values for simulation based experiments on mobile ad hoc networks. International Journal of Communications, Network and System Sciences, 9(04), 90.

Meena, K., & Vasanthi, T. (2016). Reliability analysis of mobile ad hoc networks using universal generating function. Quality and Reliability Engineering International, 32(1), 111–122.

Rebaiaia, M.-L., & Ait-Kadi, D. (2015). Reliability evaluation of imperfect k-terminal stochastic networks using polygon-to chain and series-parallel reductions. In Proceedings of the 11th ACM symposium on QoS and security for wireless and mobile networks, Q2SWinet ’15 (pp. 115–122). ACM.

Rai, S., Kumar, A., & Prasad, E. (1986). Computing terminal reliability of computer network. Reliability Engineering, 16(2), 109–119.

Ahmad, M., & Mishra, D. K. (2012). A reliability calculations model for large-scale MANETs. International Journal of Computer Applications, 59(9), 17–21.

Egeland, G., & Engelstad, P. E. (2009). The availability and reliability of wireless multi-hop networks with stochastic link failures. IEEE Journal on Selected Areas in Communications, 27(7), 1132–1146.

Cook, J. L., & Ramirez-Marquez, J. E. (2009). Optimal design of cluster-based ad-hoc networks using probabilistic solution discovery. Reliability Engineering & System Safety, 94(2), 218–228.

Cook, J. L., & Ramirez-Marquez, J. E. (2008). Reliability analysis of cluster-based ad-hoc networks. Reliability Engineering & System Safety, 93(10), 1512–1522.

Soh, S., Rai, S., & Brooks, R. R. (2008). Performability Issues in Wireless Communication Networks. London: Springer.

Pellegrini, F. D., Miorandi, D., Carreras, I., & Chlamtac, I. (2007). A graph-based model for disconnected ad hoc networks. In IEEE INFOCOM 2007—26th IEEE international conference on computer communications (pp. 373–381).

Zhang, X., Liu, Q., Li, Z. (2014). A method to evaluate MANET connectivity based on communication demand and probability. In The proceedings of the second international conference on communications, signal processing, and systems (pp. 817–822). Springer.

Dasgupta, S., Mao, G., & Anderson, B. (2015). A new measure of wireless network connectivity. IEEE Transactions on Mobile Computing, 14(9), 1765–1779.

Brooks, R. R., Pillai, B., Racunas, S., & Rai, S. (2007). Mobile network analysis using probabilistic connectivity matrices. IEEE Transactions on Systems, Man, and Cybernetics Part C (Applications and Reviews), 37(4), 694–702.

Boukerche, A., Turgut, B., Aydin, N., Ahmad, M. Z., Blni, L., & Turgut, D. (2011). Routing protocols in ad hoc networks: A survey. Computer Networks, 55(13), 3032–3080.

Abolhasan, M., Wysocki, T., & Dutkiewicz, E. (2004). A review of routing protocols for mobile ad hoc networks. Ad Hoc Networks, 2(1), 1–22.

Giordano, S., & Stojmenovic, I. (2004). Position Based routing algorithms for ad hoc networks: A taxonomy. Boston: Springer.

Deng, J., Han, Y. S., Chen, P.-N., & Varshney, P. K. (2004). Optimum transmission range for wireless ad hoc networks. In 2004 IEEE wireless communications and networking conference (IEEE Cat. No. 04TH8733) (Vol. 2, pp. 1024–1029).

Santi, P., & Blough, D. M. (2003). The critical transmitting range for connectivity in sparse wireless ad hoc networks. IEEE Transactions on Mobile Computing, 2(1), 25–39.

Buchanan, M. (2003). Nexus: Small worlds and the groundbreaking theory of networks. New York, NY: W. W. Norton & Co., Inc.

Chaintreau, A., Mtibaa, A., Massoulie, L., & Diot, C. (2007). The diameter of opportunistic mobile networks. In Proceedings of the 2007 ACM CoNEXT conference (p. 12). ACM.

Tang, J., Scellato, S., Musolesi, M., Mascolo, C., & Latora, V. (2010). Small-world behavior in time-varying graphs. Physical Review E, 81(5), 055101.

Nishiyama, H., Ito, M., & Kato, N. (2014). Relay-by-smartphone: Realizing multihop device-to-device communications. IEEE Communications Magazine, 52(4), 56–65.

Measuring Resistance

Normal resistors have color codes on them. If you don't know what they mean, that's ok! There are plenty of online calculators that are easy to use. However, if you ever find yourself without internet access, a multimeter is very handy for measuring resistance.

Pick out a random resistor and set the multimeter to the 20kΩ setting. Then hold the probes against the resistor legs with the same amount of pressure you would use when pressing a key on a keyboard.

The meter will read one of three things: 0.00, 1 (or OL), or the actual resistor value.

In this case, the meter reads 0.97, meaning this resistor has a value of 970Ω, or about 1kΩ. Remember that you are in the 20kΩ (20,000 Ohm) mode, where the display reads in kΩ, so you move the decimal three places to the right: 0.97 becomes 970 Ohms.

If the multimeter reads 1 or displays OL, it's overloaded. You will need to try a higher mode, such as the 200kΩ or 2MΩ (megaohm) mode. There is no harm if this happens; it simply means the range knob needs to be adjusted.

If the multimeter reads 0.00 or nearly zero, then you need to lower the mode to 2kΩ or 200Ω.
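The range logic above (an overload means go up a range, a near-zero reading means go down) can be sketched as a toy model. The range names and the "display reads in kΩ" multipliers below are illustrative of a typical manual-ranging meter, not any specific device:

```python
# Toy model of a manual-ranging multimeter display (illustration only).
# Each entry is (range name, full-scale ohms, display multiplier): on the
# 20kΩ range the display reads in kΩ, so 0.97 means 0.97 * 1000 = 970 Ω.
RANGES = [
    ("200Ω",  200,       1),
    ("2kΩ",   2_000,     1_000),
    ("20kΩ",  20_000,    1_000),
    ("200kΩ", 200_000,   1_000),
    ("2MΩ",   2_000_000, 1_000_000),
]

def to_ohms(display, range_name):
    """Convert the number on the screen to ohms for the selected range."""
    _, _, mult = next(r for r in RANGES if r[0] == range_name)
    return display * mult

def advice(display, range_name):
    """display is the raw reading, or the string 'OL' / '1' when overloaded."""
    if display in ("OL", "1"):          # overload indicator on the screen
        return "overloaded: switch to a higher range"
    if display == 0.0:                  # reading too small for this range
        return "under-range: switch to a lower range"
    return f"{to_ohms(display, range_name):.0f} Ω"

print(advice(0.97, "20kΩ"))  # → 970 Ω
```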

Remember that many resistors have a 5% tolerance. This means that the color codes may indicate 10,000 Ohms (10kΩ), but because of discrepancies in the manufacturing process a 10kΩ resistor could be as low as 9.5kΩ or as high as 10.5kΩ. Don't worry, it'll work just fine as a pull-up or general resistor.

Let's drop the meter down to the next lowest setting, 2kΩ. What happens?

Not a whole lot changed. Because this resistor (a 1kΩ) is less than 2kΩ, it still shows up on the display. However, you'll notice that there is one more digit after the decimal point, giving us a slightly higher resolution in our reading. What about the next lowest setting?

Now, since 1kΩ is greater than 200Ω, we've maxed out the meter, and it is telling you that it is overloaded and that you need to try a higher value setting.

As a rule of thumb, it's rare to see a resistor less than 1 Ohm. Remember that measuring resistance is not perfect. Temperature can affect the reading a lot. Also, measuring resistance of a device while it is physically installed in a circuit can be very tricky. The surrounding components on a circuit board can greatly affect the reading.

Stages LR (Dual-Sided) Power Meter In-Depth Review

It’s been a long time coming. For Stages, the development of the Stages LR spanned more than three years before it started shipping last month, with most of that time in the public eye on the most-watched bike on earth: Chris Froome’s. Of course, Stages got its start prior to Team Sky with their left-only power meter (now rebranded Stages L), but it wasn’t till their sponsorship of Team Sky that the company and its products took off.

After all – if Chris Froome can win the Tour de France on a Stages Left-only unit, then it’s probably good enough for you too, right?

Well, perhaps not exactly, as Team Sky and others soon figured out. At Team Sky’s request, Stages was tasked with creating a dual version, so they could more accurately track riders’ progress. As they learned, aspects like fatigue and left/right leg differences really do impact overall accuracy. So over the next few seasons we saw Team Sky quietly riding various prototype dual-leg models. It’s the result of these prototypes that eventually became the Stages LR announced last summer at Eurobike. (Side note: I detail the backstory on that here in this section.)

And as of last month, the company has started shipping this model to consumers. The big question though: Is it accurate? And more importantly for many: does it address some of the connectivity issues that seem to trouble existing Stages users? For those questions, I worked through two different Stages LR cranksets over the course of two months, gathering boatloads of data.

(Oh, and as always, I’ll be sending back both loaner cranksets to Stages shortly. Especially because I’m pretty sure the airlines would be even more displeased than on my way down here with how many cranksets I’d have in my luggage coming back from Australia next month otherwise.)


In my case, the Stages LR was delivered as a single boxed product. However, you can actually buy it as an upgrade to an existing Stages Left-only unit (thus making the pair). Meaning, you’re buying the right side. The box wouldn’t likely differ very much, since the majority of the space in the box is for the drive side crankset.

Inside the box, you’ll find the drive-side crankset, as well as the left-side (non-drive side) crank arm. You’ll also find a small plug to twist into the non-left crank arm to make things look pretty. Then there’s some paper junk.

Here’s a closer look at the backside of both crank arms:

And then the non-drive side:

And finally, the little package of paper stuffs including ANT+ ID cards and a user guide that you can use to start a (very) small campfire to roast marshmallows on after you’re done reading them.

Oh, and there’s even a spare ‘o-ring’ in the package too.

What you see above is basically par for the course on cranksets, since it’s largely taking an existing Shimano Ultegra crankset and rebranding the box, plopping on the Stages power meter pieces, and then calling it macaroni. Just like Quarq, Power2Max and others do for cranksets.

Installation & Configuration:

As with most power meters, the installation will vary not so much on what you’re installing, but rather – the situation you’re coming from. By that, I mean that in the case of Stages LR, if you already have a Shimano crankset on your bike, then the swap to Stages LR could very well take you less than 5 minutes all-in. Quick and simple.

Whereas if you’re coming from a different crankset featuring a different bottom bracket standard, then you’re likely in for a longer journey. In my case, I was half-way in between. When I initially installed the Stages LR on my bike I was swapping out from my usual Quarq D-Zero. That had a very slightly different (smaller) bottom bracket standard than what the Shimano was using. So I had to swing around the corner to the bike shop to pick up a different bottom bracket, and then swap all that out.

At this point, I was already deep into leveraging the various tools of my bike toolbox – most notably the PressFit installation goods. I wouldn’t recommend buying such bottom bracket installation tools unless you plan to use them frequently (whereas I would recommend plenty of more general tools).

Once the bottom bracket swappage was done (unnecessary if you already have Shimano gear on your bike), then it’s as simple as sliding the drive side through the bottom bracket:

After that, there’s merely two bolts to tighten on the left crank arm, attaching it to the drive side.

Oh, and somewhere along the way you need to remove the small slips of plastic tape that separates the battery contacts from the coin cell batteries. On the left-crank arm, that’s easily done with your fingers.

Whereas on the right crank arm you’ll just need a small screwdriver to open up the battery compartment.

Next, anytime I do work on any bike crankset, I find it a good habit to toss the bike on a trainer and pedal for about a minute – starting easy at first and then building up intensity. Finishing with 2-4 sprints, something like 4-8 seconds each, pedaling reasonably hard.

I do this for two reasons:

A) If I’ve hosed something up on the crankset installation that causes a catastrophic, viral-video-worthy break, I don’t plant my face onto the pavement. Instead, the badness is contained to my bike secured atop a trainer.

B) The sprints help to settle the crankset and tighten things up, which is good for power meters. Most power meters require a very short settling period, which the above procedure will take care of.

At this point, you’ll do a zero-offset and you’re good to go. Don’t worry, I’ll cover that zero-offset in the next section.

General Use Overview:

The Stages LR in many ways acts and feels like an existing Stages product, except now on both sides. However, we’ll start with some of the basics and go from there. The first tidbit worth noting is that the unit has a status LED on the inside of the crank arm, allowing you to quickly validate that it’s alive:

This new status LED is also now found on all new Stages left-only units shipping as of about 1-2 weeks ago. They somewhat quietly clearanced inventory of existing units for this new generation.

Next to that status light is, of course, the battery compartment as noted earlier. This compartment, on each side, houses a single CR2032 coin cell battery. Stages says that the system should get about 200 hours of battery life per coin cell battery. You can of course just buy these in bulk super cheaply online, which is what I do (20 for $8).
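For the curious, the running cost works out to almost nothing with those figures (200 hours per CR2032, coin cells at 20 for $8, one battery per crank arm):

```python
# Back-of-the-envelope battery cost, using the figures quoted above.
hours_per_battery = 200        # Stages' claimed life per CR2032
cost_per_battery = 8 / 20      # $0.40 each when bought in bulk
batteries = 2                  # one per crank arm on the LR

cost_per_hour = batteries * cost_per_battery / hours_per_battery
print(f"${cost_per_hour:.3f} per riding hour")  # → $0.004 per riding hour
```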

One minor tip to point out (since I just learned this lesson yesterday), is that if you’re travelling with your bike, ensure your bike tool actually has a mini-screwdriver on it. Mine just had a standard one, and the AirBNB house I’m in didn’t have a mini-screwdriver. I ended up using a butter knife to get it open, but maybe something to add to your bike bag just in case you need to swap batteries mid-trip. Many other power meters have shifted to using screws as well, though some of those use hex screws instead – which usually match your bike tool. No biggie, just purely a pro tip I figured I’d share.

With everything all installed, we’ll need to get it paired up to your bike computer. Stages was in fact the very first company years ago to do dual/concurrent ANT+ & Bluetooth Smart transmitting power meters, and that continues today as well with the Stages LR. This means it transmits power over both ANT+ and Bluetooth Smart, within the respective power meter standards.

As such you can pair it to basically any device or app that supports power meters. Be it Garmin, Wahoo, or even Stage’s own Dash head unit. Same goes for apps like Zwift, Strava, TrainerRoad and more.

Now, I’ll talk at length about connectivity and drops in the next section, so for now let’s just get it paired. In my case, I’ve mostly been using a Garmin Edge 1030 to collect data from it. And in doing so, largely over ANT+, since most folks in the industry would recommend that since you’ll get more advanced data right now over ANT+ versus BLE.

As with most head units, you can change the name from the ANT+ ID to something else. In the case of Stages and newer Garmin devices that support Bluetooth Smart pairing, if you want to pair to the ANT+ variant, it’s the one listed without the name ‘Stages’ in it, within the list. Here’s a handy guide:

ANT+ side: ‘43016’
Bluetooth Smart side: ‘Stages 43016’

So in the case of the above photo, you’re seeing just the BLE channel (because I took the photo after I had paired the ANT+ channel already).

Most power meter companies follow that spec of putting the brand name in the Bluetooth Smart pairing ID, followed then by the ANT+ ID number (within the Bluetooth Smart ID). Finally, in the case of Stages, there’s no need to set a crank arm length, and thus your head unit shouldn’t ask you for one. That feature is mostly just used on pedal based power meters.
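That naming convention is easy to exploit in software if you ever need to match a BLE advertisement back to an ANT+ device ID. A small sketch follows; note the convention itself is informal (some vendors deviate from it), and `ant_id_from_ble_name` is a hypothetical helper, not part of any SDK:

```python
import re

def ant_id_from_ble_name(ble_name):
    """Extract the trailing ANT+ device ID from a BLE advertised name
    such as 'Stages 43016'. Returns None when the name doesn't follow
    the informal 'brand + ANT+ ID' convention."""
    m = re.search(r"(\d{1,5})\s*$", ble_name)  # ANT+ IDs are up to 5 digits
    return int(m.group(1)) if m else None

print(ant_id_from_ble_name("Stages 43016"))  # → 43016
```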

With that all set, you should do a zero offset. On Stages units, you need to ensure your crank arms are pointing straight up and down (vertical).

Then just tap calibrate, which triggers a zero offset:

The result is a calibration value that you can keep an eye on. I recommend doing this prior to each ride, mainly so that if something is amiss, you’ll spot it quickly. An example of something being amiss is that the unit either fails to calibrate, or the calibration value shifts massively. Typically you’ll see it shift within only a handful of digits, primarily based on temperature.

With all this completed, you’re ready to ride. Like most power meters, Stages LR transmits the following values to head units:

ANT+ Power (total)
ANT+ Power Balance (left/right)
ANT+ Cadence
ANT+ Pedal Smoothness
ANT+ Torque Effectiveness
Bluetooth Smart Power
Bluetooth Smart Power Balance
Bluetooth Smart Cadence

To see this a bit, here’s a file on Garmin Connect recorded on an Edge 1030 that shows data from a Stages LR ride via ANT+. Within it you can see the various metrics from above recorded in the file:

Note that if you record via Bluetooth Smart, you won’t get the pedal smoothness or torque effectiveness data, even if recording on a head unit that supports it.
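For the curious, here’s roughly what that Bluetooth Smart data looks like on the wire. This is a minimal sketch of decoding a Cycling Power Measurement packet (characteristic 0x2A63 of the BLE Cycling Power Service); the field layout follows the Bluetooth SIG spec as I understand it, but the sample bytes are fabricated for illustration, not captured from a Stages unit:

```python
import struct

def parse_cycling_power(data: bytes) -> dict:
    """Decode a BLE Cycling Power Measurement (0x2A63) payload.

    Only the fields relevant here are handled: instantaneous power,
    pedal power balance, and crank revolution data (used for cadence).
    """
    flags, power = struct.unpack_from("<Hh", data, 0)
    offset = 4
    result = {"power_watts": power}

    if flags & 0x01:  # Pedal Power Balance present (unit: 1/2 percent)
        (balance,) = struct.unpack_from("<B", data, offset)
        result["balance_pct"] = balance / 2.0
        offset += 1
    if flags & 0x04:  # Accumulated Torque present (skip over it)
        offset += 2
    if flags & 0x10:  # Wheel Revolution Data present (skip over it)
        offset += 6
    if flags & 0x20:  # Crank Revolution Data present
        revs, event_time = struct.unpack_from("<HH", data, offset)
        result["crank_revs"] = revs
        result["crank_event_time"] = event_time  # units of 1/1024 s
    return result

def cadence_rpm(prev: dict, curr: dict) -> float:
    """Derive cadence from two successive crank revolution samples."""
    d_revs = (curr["crank_revs"] - prev["crank_revs"]) & 0xFFFF
    d_time = (curr["crank_event_time"] - prev["crank_event_time"]) & 0xFFFF
    return d_revs / (d_time / 1024.0) * 60.0

# Fabricated example packet: flags=0x0021 (balance + crank data present),
# 250 W, balance raw 100 (= 50.0%), 1200 crank revs, event time 51200/1024 s
pkt = struct.pack("<HhBHH", 0x0021, 250, 100, 1200, 51200)
print(parse_cycling_power(pkt))
```

Note that cadence isn’t sent as a direct value over BLE; head units derive it from the crank revolution counters, as `cadence_rpm` does above.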

Also worth noting is that if you use Stages’ own Dash, you’ll get additional details recorded to the activity file around aspects like battery life, zero offsets, and firmware updates. I’ve long thought that might actually be one of the coolest features of the Stages Dash. I talk about that more in my Stages Dash review. The below is a sample screenshot from Stages, since I didn’t think to capture this data earlier.

In addition to all the standard pairing, Stages also supports connectivity via their smartphone app. This app is mostly used for updating firmware, but it also has other purposes:

For example, you can double-check torque values on it, as well as validate zero offsets:

One of those other purposes is Stages’ high-speed data capture, which allows you to record Stages data at up to 64 times per second. This was rolled out long ago on the Stages left-only cranks; however, it’s not yet enabled on Stages LR. Stages says it’s on their to-do list, but it simply fell lower down the priority totem pole. Since it’s mostly only used for track-start type scenarios, it’s not really something I’d consider a high-priority item either.

With all that set, let’s dive into data accuracy.

Power Meter Accuracy Results:

I’ve long said that if your power meter isn’t accurate, then there’s no point in spending money on one. Strava can give you estimated power that’s ‘close enough’ for free, so if you’re gonna spend money on something it shouldn’t be a random number generator. Yet there are certain scenarios/products where a power meter may be less accurate than others, or perhaps it’s got known edge cases that don’t work. Neither product type is bad – but you just need to know what those use/edge cases are and whether it fits your budget or requirements.

As always, I set out to find that out. In power meters today one of the biggest challenges is outdoor conditions. Generally speaking, indoor conditions are pretty easy to handle, but I still start there nonetheless. It allows me to dig into areas like low and high cadence, as well as just how clean numbers are at steady-state power outputs. Whereas outdoors allows me to look into water ingress concerns, temperature and humidity variations, and the all-important road surface aspects (e.g. vibrations). For reference, Stages LR has a claimed accuracy rate of +/- 1.5%. It also does not require any magnets for cadence, while also automatically correcting for any temperature drift. Both of these are pretty common though on most power meters these days.

In my testing, I generally use between 2-4 other power meters on the bike at once. I find this is the best way to validate power meters in real-world conditions. In the case of most of these tests with the Stages LR I was using these other power meters concurrently:

Elite Direto Trainer
Garmin Vector 3
JetBlack WhisperDrive Smart Trainer
PowerTap G3 hub based power meter (three different units)
Wahoo KICKR 2017/V3 Trainer

All of which were tested over the course of about two months, on two different Stages LR cranksets. I’ve ignored the previous test rides I did last August on a pre-production unit.

In general, my use of other products is most often tied to other things I’m testing. Also, when it comes to data collection, I use a blend of the NPE WASP data collection devices, and a fleet of Garmin head units (mostly Edge 520/820/1000/1030 units). For the vast majority of tests on the Stages LR I used an Edge 1030 and FR935, along with a bit of the Edge 520 as alluded to elsewhere in the connectivity section. But I also recorded on apps as well, including Zwift.

Note all of the data can be found in the links next to each review. Also, at the end is a short table with the data used in this review. I’ll likely add in other data not in this review as well.

I’m going to start this review with the most recent data set – a ride from just 75 minutes ago. Why? Because it’s the ride with the freshest firmware (just yesterday’s) that appears to resolve a bunch of little quirks I’ve seen/reported over the last two months. In effect, it’s the first time that all the stars have aligned. Which isn’t to say previous rides have produced inaccurate power, or that Stages was even at fault. Rather, various rides had various connectivity things or other power meters go crap on me – but I’ll get into that later. As a general rule, I like to have no less than 3 power meters in a test, so when I ‘only’ have two, it bugs me a bit.

In any case, this ride was a straightforward outside ride with three units – Stages LR, Vector 3, and a PowerTap G3 hub. The road conditions were mostly smooth pavement, though some sections of less clean pavement. Plus speed bumps. Here’s the overview (and here’s the files in the Analyzer if you want to look at them online):

I captured data across a slew of head units, but notably captured Stages LR data on three different units – an Edge 1030, Edge 520, and FR935. I did this concurrently across both ANT+ & Bluetooth Smart, for reasons I’ll get into later on in this review.

As you can see from the overview, the units tracked virtually identically across the entire ride. We can dig into one of the points from a stoplight where I go from 0 power up to 400w pretty quickly. Note this is smoothed at 5-seconds, merely to make it easier to see.

Note that all power meters show the uptick in power within +/-1 second of each other. As is always the case, it’s rather difficult to get multiple head units to precisely align due to transmission and recording rates, as you see here.

The one thing you do notice though is that on the BLE connection to the FR935, you see those two little blips.

Those are technically drops. It’s just that when smoothed you don’t see them. But what’s weird is that they aren’t full drops – rather, they drop by 50w or so (versus normally a drop is considered 0w). Not entirely sure what to think there. From my standpoint that technically falls under connectivity issues rather than accuracy (though the net result impacts accuracy). But I’ll cover that separately in the next section of this review.
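For what it’s worth, partial dips like that are easy to flag programmatically once you’ve got concurrent recordings from multiple head units. Here’s a rough sketch of the kind of comparison I mean, using made-up per-second data rather than my actual ride files:

```python
from statistics import median

def flag_suspect_seconds(streams, rel_tol=0.25):
    """Flag (second, unit) pairs where one unit's power reading is
    missing or dips more than rel_tol below the median of the others.

    streams maps unit name -> {second: watts}.
    """
    suspects = []
    all_seconds = set().union(*(s.keys() for s in streams.values()))
    for t in sorted(all_seconds):
        readings = {u: s.get(t) for u, s in streams.items()}
        for unit, w in readings.items():
            # Compare against the other units that have a non-zero reading
            others = [v for u, v in readings.items() if u != unit and v]
            if not others:
                continue
            ref = median(others)
            if w is None or (ref > 0 and w < (1 - rel_tol) * ref):
                suspects.append((t, unit))
    return suspects

# Hypothetical 5-second window: the BLE stream shows a ~50w dip at t=2
streams = {
    "edge1030_ant": {0: 300, 1: 305, 2: 302, 3: 298, 4: 301},
    "edge520_ant":  {0: 298, 1: 303, 2: 300, 3: 297, 4: 300},
    "fr935_ble":    {0: 299, 1: 304, 2: 250, 3: 296, 4: 302},
}
print(flag_suspect_seconds(streams, rel_tol=0.10))  # → [(2, 'fr935_ble')]
```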

Here’s a mild sprint up to about 600w or so, and you can see that all of the units follow each other perfectly here (a slight variance on the BLE peak power):

Equally important, there are no droppage issues from an accuracy standpoint after I conclude the sprint, which can sometimes be a sticking point for power meters.

If we look at cadence for this ride, that too looks identical across all the data sources. There are some little blips on the Vector 3 cadence while I’m semi-stopped. It’s almost as if it picked up the slight crank movements I did when moving forward on the bike a bit at the 33:25 marker (I was on the side of the road texting The Girl that I had hit my turnaround point, I moved halfway through that again to get further away from the road).

Next, let’s look at a ride I did immediately after the above outdoor ride – a trainer ride. Why on earth would I jump on the trainer after an outdoor ride? To poke at cadence and power values across a broad range. Specifically, I wanted to see what happened when I went really low on cadence (20RPM), and really high on cadence (160+ RPM). Plus of course everything in between. Here’s that quick 8-minute step-test:

To walk you through what I did, it’s pretty straightforward:

A) I started at about 80-90RPM, and then went up in 10RPM increments to 130RPM, holding 30 seconds each
B) Then I significantly increased the RPM to about 170RPM, briefly.

As you can see – the units tracked very nicely all the way up to about 160RPM, at which point it appears I lost the Stages unit briefly.

Most power meters have some sort of top-out point, usually in the 160-190RPM range. Stages lists 220RPM as their top cadence. In my case, it’s also plausible that since I only spent a few seconds at that high level of cadence, it could just be recording nuances causing the drops.

Continuing with the test, I then did:

C) Dropped back down to 70RPM, and continued slowing my RPM’s down gradually
D) Eventually, I got down to 20RPM (that’s three seconds per revolution!)
E) For fun, I threw down 180RPM (and Stages tracks just fine this time)

The cutoff point here appears to be 20RPM. Below that and the cadence drops out, above that and it’s just fine.

This too is specified on Stages’ site:

These limits are perfectly acceptable/reasonable/logical in my mind. Also, note that power values stayed constant with the Garmin Vector 3 throughout. I was also atop a trainer that transmits power, but the power firmware is beta there and wasn’t quite as stable as I wanted – so I removed that from the graphs to minimize confusion.

Now I’ve also done some more generic Zwift workouts and such, like this ride here with a pile of sprints throughout:

As you can see, throughout the sprints things aligned quite nicely against the Garmin Vector 3 – virtually identical save one drop at about the 2-minute marker (but this was prior to the firmware update to address that).

But let’s head back outside, it’s more interesting there as always.

Part of my challenge recently is that previous to the firmware of two days ago, Stages was experiencing drops depending on how you connected to it. Meaning, if I connected via ANT+ on certain head units, it’d drop the connection (but not other head units). Using BLE in theory made it better, but in reality I found it made everything worse (both ANT+ & BLE). So I ended up with some rides whereby the data when transmitting was perfectly accurate – but would be blemished by the occasional dropout.

That aside, this ride has virtually no dropouts. There’s some settling of power meters in the first portion of the ride after being installed on the bike, so here’s a look at the middle portion:

You can dig into the full file above, but it’s basically the same for the remainder of the ride, the three power meters are almost indistinguishable, despite boatloads of ups and downs on power (this was a river loop where there’s a lot of changes in power).

The workaround to the dropout issues (again, prior to yesterday’s firmware) was basically using the Edge 1030 – for which dropouts didn’t occur as long as I didn’t also connect Bluetooth Smart devices concurrently. And for rides where I did that, the tracks looked beautiful. They matched Garmin Vector 3 and a PowerTap G3 hub quite well. Such as this snippet from a couple hour ride:

There are some slight offsets between the units, which makes sense: in theory, the PowerTap G3 should read the lowest and the Garmin Vector 3 the highest. In this case, it’s about +/- 3.7% from the Stages centerline, which would account for drivetrain efficiencies as well as any accuracy differences.
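That expected ordering is just drivetrain loss at work: a hub measures downstream of the chain, while crank and pedal units measure upstream. A back-of-the-envelope sketch, assuming a hypothetical 2.5% drivetrain loss (a commonly cited ballpark, not a measured value):

```python
def hub_power(crank_watts, drivetrain_loss=0.025):
    """Power seen at the rear hub after chain/drivetrain losses.

    The 2.5% default is an illustrative ballpark, not a measured figure.
    """
    return crank_watts * (1 - drivetrain_loss)

def pct_diff(a, b):
    """Percent difference of a relative to b."""
    return (a - b) / b * 100.0

crank = 250.0            # what a crank unit like the Stages LR measures
hub = hub_power(crank)   # what a hub unit like the PowerTap G3 sees
print(f"crank: {crank:.1f} W, hub: {hub:.1f} W, diff: {pct_diff(hub, crank):+.1f}%")
```

So even with every unit reading perfectly, a couple of percent of spread between crank and hub is exactly what you’d want to see.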

And then this hour or so long lunch ride here where again, all three units aligned very nicely. Here’s a closer look at an 800w+ sprint:

And then there’s this sub-hour long ride (which lacks a functional G3 hub as it had to be replaced), but you can at least see how it compares against Vector 3 on a mean-max graph. What you notice is very slight differences/offsets when you get to the sub-10 second power, which is pretty common on these graphs.

Prior to these rides, there were other firmware issues whereby the right side of the unit would output slightly lower power values (1-3%) than it should have. So while the left side matched perfectly to Vector 3’s left side, the right side dragged down the picture. On these rides (if using an Edge 1030), I didn’t experience any drops. This issue was fixed back on Jan 26th. Given it’s been fixed, I’m not going to re-analyze those rides, since we already know the story there.

Now, I’m going to talk about droppage in a second – but on the accuracy front I’ve been seeing good things since late January – so I think we’re definitely good there for both power and cadence. The multitude of data sets above shows that pretty easily as well.

Update – January 2020: It’s worthwhile reading GPLama’s thoughts on Shimano R8000 & R9000-series based cranksets (for which the Stages LR is based upon). There’s some worthwhile issues there that are very real. What’s more challenging is determining how universal they impact every crankset (which, is part of the problem). On some cranksets off the manufacturing line, the impact can be zero to negligible. While on other cranksets it can be substantial, especially coupled with how different people pedal from a force standpoint. In the data from this review, you’ll see that things are largely quite good – without much issue. And in fact, I went on to buy another Stages LR for my own usage longer term. And the vast majority of that data from the last two years mirrors that of this review. Yet, at the same time – there are also rare days where things don’t match and I see the right-side low. That’s non-awesome. Yet, at the same time, I’ve also had non-awesome days on Favero Assioma pedals too. Sometimes you just can’t win. As of January 2020, I’m currently using a non-Shimano based Quarq DZero unit as my main baseline power meter.

(Note: All of the charts in these accuracy sections were created using the DCR Analyzer tool. It allows you to compare power meters/trainers, heart rate, cadence, speed/pace, GPS tracks and plenty more. You can use it as well for your own gadget comparisons, more details here.)

Does it drop?

Back a year or so ago, there were media reports that Chris Froome had made an interesting off-hand comment when responding to a reporters question on why he used the older Edge 810 versus anything newer (the team was otherwise outfitted with Edge 820’s last season, at Team Sky’s – not a sponsors – expense). He noted that he had found that the ‘newer Garmin’s had data drops’. At the time I found this a peculiar comment because it just wasn’t something people were seeing in ‘real life’ on road bikes. But now in looking back at things, I get it: The real wording should have been “I was seeing drops with Stages LR”. But of course, he couldn’t say that – Stages was a sponsor. Garmin wasn’t.

When I first started this testing process in December – I quickly saw those same drops as well…also on newer Garmin devices. Specifically in my case the Edge 520 and FR935. Both highly popular devices. And neither are devices that have ever dropped on any other power meter for me (I use 2-3 Edge 520’s per ride, connected to 2-3 different power meters per ride) nor are drops even remotely common for either unit on other power meters. So in effect…if it quacks like a duck, it’s probably a duck.

I went back to Stages on this and they did some more digging. In fact, the topic of Stages and drops is as old as power meter time itself. What I was surprised about was that this was somehow still a thing. Still, I let them dig.

In doing so they showed they could reproduce Edge 520 drops like I saw relatively easily, and that it was stable on the Edge 1030 (which I saw too). But to me that’s not really an acceptable answer. Again, regardless of whether Garmin is at fault – nobody else has this problem (sidestepping the mess that is the Fenix 5/5S connectivity debacle, which I don’t use). On this old dataset, I highlighted each of the drops in this 41-minute ride (15 total drops on the Edge 520).

So they continued to dig a bit – and the outcome of that was a firmware change that tweaked the way the communications stack delivered power to both ANT+ & BLE signals. Specifically, two changes occurred. In Stages’ own words, they were (geek detail ahead):

“The firmware change was directly related to the timing of when the radio was transmitting and receiving both ANT and BLE messages. The easiest way I can explain it being the non-programmer that I am, is that at times we were trying to use the radio at the same time to get ANT and BLE messages out and in (with BLE) via the radio. The change was really just a refinement of the timing and length of when each message was sent and when and how long the radio was on listening for a BLE return message. As I described previously, there was always messages going out but not all of the 4hz for both BLE an ANT were getting out, so it worked but was not perfect. This issue was greatly amplified when there were other interferences such as multiple head units, wifi, trainers etc. Now that we have made this change all the messages are properly being sent at 4hz. This makes it much more likely that the head unit will receive and record at least on message a second and with a Dash that we receive all 4hz.” [DCR Note: 4hz means 4 times a second]

“The other change was to deal with how some BLE head units deal with coasting, on most ANT devices they recognize if you coast and drive your power to zero. For some reason on some BLE devices were holding onto your last power number when you coasted. So we made the power meter smart and it will drive your power to zero if you coast.”

Note that the Stages LR already was broadcasting at a higher rate than existing Stages left-only units (ignoring the new left-only units they just started shipping a week or two ago, which I’ll post about separately shortly). That update for the LR units was issued as firmware 1.1.8 and was released two days ago and incorporates the changes noted in the above two paragraphs.
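To see why a solid 4hz matters, consider that a head unit recording once per second only needs one of those four messages to land. A toy reliability model (my own simplification, with a made-up per-message loss rate – not Stages’ numbers):

```python
def p_recorded_second(msgs_per_sec, p_drop):
    """Probability at least one message lands in a one-second recording
    window, assuming independent per-message drop probability p_drop."""
    return 1 - p_drop ** msgs_per_sec

# With a harsh (hypothetical) 20% per-message loss rate:
for rate in (1, 4):
    print(f"{rate} Hz -> {p_recorded_second(rate, 0.20):.4f} chance of data that second")
```

In this toy model, going from 1 message per second to 4 takes you from a 20% chance of a blank second to well under 1% – which lines up with why getting all 4hz out reliably fixes visible dropouts.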

So where does this leave things?

Well, I think we’re (finally) good. Basically, two specific firmware updates got me into a good state:

Jan 26th: Fixed low-right side issue in interim beta update (thus, power is accurate after this on all my rides)
Feb 13th: Fixed dropouts for Bluetooth Smart, and ANT+ on certain devices (for me anyway – minus one little blip)

Now you may be saying – how do you know the dropout issues are fixed? Well, previously the dropouts manifested themselves pretty darn quickly. Perhaps every 5-8 minutes on ANT+, and almost instantly/wonky on Bluetooth Smart (I have some crazy ugly charts from that). Since December I could repro those situations on every ride if I wanted to.

Now on both of today’s rides, it’s almost perfectly clean. Stages said they found two specific issues, which they said was related to power handling on the communications stack. The proof appears to be in the pudding – no full drops (just one brief jitter that may or may not be Stages’ fault).

I will note though that the broadcasting power on Stages’ units (including the LR) continues to be substantially lower than other ANT+ sensors. For example, take a look at the RSSI values on Vector 3 (ID 324103) and Stages LR (ID 34770) side by side. A higher value is better (the closer to zero, the better: –21 is better than –35). On the left is what happens if I place the test WASP device on my out-front mount, and then to the right is what happens when I place it on my stem. Even when placed directly atop the bottom bracket (in between the crank arms), Stages is still significantly lower than Garmin Vector 3.

While in my tests the signal strength now seems strong enough for my head units, where this matters is if your specific bike/body configuration puts it on the edge of reception, then having that bit extra means the difference between good and bad. That’s historically where Stages got itself into trouble, primarily on triathlon/TT bikes, mostly for people with wrist-worn watches, due to body/bike interference. Since I don’t have a triathlon/TT bike on this trip – I can’t test that at this time.

Of course, I’d love to hear anyone’s results on that in the comments (just be sure to be on 1.1.8 first!).

Power Meter Recommendations:

With so many power meters on the market, your choices have expanded greatly in the last few years. So much so, in fact, that I’ve written an entire post dedicated to power meter selection: The Annual Power Meters Guide.

I refresh that annual guide each fall, and in this case that was November – which is inclusive of all the power meter players on the market.

The above-noted guide covers every model of power meter on the market (and upcoming) and gives you recommendations for whether a given unit is appropriate for you. There is no ‘best’ power meter. There’s simply the most appropriate power meter for your situation. If you have only one type of bike I’d recommend one power meter versus another. Or if you have different needs for swapping bikes I’d recommend one unit versus another. Or if you have a specific budget or crankset compatibility, it’d influence the answers.

I’ll be publishing a pricing update in March, covering where pricing stands for the year, though I don’t expect too many shifts between now and then. Nor do I expect much in the way of additional new entrants not already known/released.


Wrap-Up:
After a two month journey on Stages LR, I’m finally at the point where I’m happy with the results it’s giving me across all fronts – both accuracy as well as connectivity. Of course, for many consumers, those are kinda considered baseline starting points (or, I hope they are anyway). The next question is pricing.

In that realm, Stages sits at $999 for the Ultegra edition I tested (inclusive of the full crankset). That’s identical to Pioneer’s offering at $999 as well, and the same as Garmin and PowerTap with their pedals at $999 (and Dura-Ace at $1,299 also matches Pioneer). All of which give you distinct left/right power. There are nuances to each implementation though from a tech standpoint. Pioneer has high speed and detail data metrics, but only on their platform (and lacks Bluetooth Smart connectivity). Garmin gives you less detailed metrics than Pioneer, but on a more widely adopted file standard (for example, WKO4 can see the data). Stages gives you a full crankset, so if you don’t want to deal with changing pedal types or just prefer crankset power meters – then that’s a pro for them. Like I said in the previous section – there’s no right answer here, just solutions for your specific requirements.

The end of which is that I’d have no problem riding a Stages LR unit on my road bike at this point with the latest firmware. I can’t speak to a triathlon/TT setup at this moment, but hopefully others can chime in. Historically I never had issues with my Stages left-only units and connectivity on my tri bike, but as noted, connectivity issues when seen seemed highly dependent on your specific bike and body.

In any case – thanks for reading – and if you’ve got specific questions feel free to drop them down in the comments below – happy to try and track them down.

Found this review useful? Support the site!

Hopefully you found this review useful. At the end of the day, I’m an athlete just like you looking for the most detail possible on a new purchase – so my review is written from the standpoint of how I used the device. The reviews generally take a lot of hours to put together, so it’s a fair bit of work (and labor of love). As you probably noticed by looking below, I also take time to answer all the questions posted in the comments – and there’s quite a bit of detail in there as well.




This is really the same as a couple of the other answers, but I note that in the comments to those answers you are insistent that your experiment is a test of general relativity. However this is not the case. As long as spacetime is flat the experiment can be analysed using special relativity, and in this answer I shall explain why.

It's commonly believed that special relativity cannot be used for accelerating frames, but this is wholly false. Special relativity only fails when spacetime is not flat, i.e. when the metric that describes the spacetime is not the Minkowski metric.

The analysis I'll give here originally formed part of my answer to Is gravitational time dilation different from other forms of time dilation?, but I'll repeat it here since it is the core issue in your question.

In the centrifuge the observer is rotating about the pivot with some velocity $v$ at some radius $r$. We are watching the observer from the laboratory frame, and we measure the position of the observer using polar coordinates $(t, r, \theta, \phi)$. Since spacetime is flat the line interval is given by the Minkowski metric, and in polar coordinates the Minkowski metric is:

$ ds^2 = -c^2dt^2 + dr^2 + r^2(d\theta^2 + \sin^2\theta \, d\phi^2) $

We can choose our axes so the rotating observer is rotating in the plane $\theta = \pi/2$, and since it is moving at constant radius both $dr$ and $d\theta$ are zero. The metric simplifies to:

$ ds^2 = -c^2dt^2 + r^2d\phi^2 $

We can simplify this further because in the laboratory frame the rotating observer is moving at velocity $v$, so $d\phi$ is given by:

$ d\phi = \frac{v}{r}dt $

and therefore our equation for the line element becomes:

$ ds^2 = -c^2dt^2 + v^2dt^2 = (v^2 - c^2)dt^2 \tag{1} $

Now we switch to the frame of the rotating observer. In their frame they are at rest, so the value of the line element they measure is simply:

$ ds^2 = -c^2dt'^2 \tag{2} $

where I'm using the primed coordinate $t'$ to distinguish the time measured by the rotating observer from the time we measure, $t$.

The fundamental symmetry of special relativity is that all observers agree on the value of the line element $ds$, so our value given by equation (1) and the rotating observer's value given by equation (2) must be the same. If we equate equations (1) and (2) we get:

$ -c^2dt'^2 = (v^2 - c^2)dt^2 $

and rearranging this gives:

$ dt' = dt\sqrt{1 - \frac{v^2}{c^2}} $

which you should immediately recognise as the usual expression for time dilation in SR.

So the time dilation for the rotating observer is given by the same function as for an observer moving in a straight line at constant speed. This is why it's perfectly valid for the other answers to calculate time dilation using the normal special relativity formula. The centripetal force/acceleration does not appear in this expression and general relativity is not required.
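If you want to convince yourself numerically, plugging the derived $dt'$ back into both line elements gives the same $ds^2$. A quick sketch:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dt_prime(dt, v, c=C):
    """Proper time interval for the rotating observer, from equating
    the lab-frame line element (v^2 - c^2) dt^2 with -c^2 dt'^2."""
    return dt * math.sqrt(1 - (v / c) ** 2)

# Check ds^2(lab) == ds^2(rotating frame) for an arbitrary v and dt:
v, dt = 1000.0, 1.0
dtp = dt_prime(dt, v)
ds2_lab = (v**2 - C**2) * dt**2
ds2_rot = -C**2 * dtp**2
print(math.isclose(ds2_lab, ds2_rot, rel_tol=1e-12))  # prints True
```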

Short answer: I'm afraid this is not a test of general relativity. I'll tell you why. I'll try to keep simple.

You may use special relativity when your frame of reference is inertial. Let's say you see an ultracentrifuge spinning. You are experiencing no gravity at all (Earth's gravity is negligible for time dilation effects). You are experiencing no non-inertial forces (Earth's spin and the Coriolis force are on a very small scale for time dilation). Therefore, you are (approximately) a valid inertial frame of reference, and therefore you can use special relativity for the spinning radioactive sample.

But, let's move to the reference frame of the sample. There, the sample is experiencing 2 million Gs of non-inertial forces. It is clearly a non-inertial frame of reference. Thus, you cannot use special relativity here; you must use general relativity.

However, both observers, you and the sample, must agree and come up with the same results. Since it is far easier to treat the problem using special relativity, we can do so, using your inertial frame of reference. Let's calculate the $\gamma$-factor: $ \gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}} = \frac{1}{\sqrt{1-\frac{gr}{c^2}}}, \qquad g = \frac{v^2}{r} $

I'm keeping the calculations very simple so you can understand. This is not exact, but it likely holds as a good approximation. The non-inertial acceleration, as you said, has the value $g = 10^6\,\mathrm{m/s^2}$. I'll exaggerate the radius: $r = 1\,\mathrm{m}$. Therefore, the gamma factor: $ \gamma \approx \frac{1}{\sqrt{1-\frac{10^6 \cdot 1}{(3\times 10^8)^2}}} \approx 1 $

Therefore, the time dilation $\Delta t' = \gamma\Delta t$ is negligible in your small experiment.
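To put a number on just how negligible (using the stated $g = 10^6\,\mathrm{m/s^2}$ and the exaggerated $r = 1\,\mathrm{m}$), a quick sketch:

```python
import math

# Size of the effect for the centrifuge example: g = 10^6 m/s^2, r = 1 m
g, r, c = 1.0e6, 1.0, 299_792_458.0
v_squared = g * r  # since g = v^2 / r
gamma = 1 / math.sqrt(1 - v_squared / c**2)
print(f"gamma - 1 = {gamma - 1:.2e}")  # of order 10^-12: utterly negligible
```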

It was a nice idea. It would work with atomic clocks; for instance, take a look at the Hafele–Keating experiment.

